\section*{Acknowledgements} This research was conducted under the project “EXaSCale smArt pLatform Against paThogEns for Corona Virus—Exscalate4CoV” funded by the EU’s H2020-SC1-PHE-CORONAVIRUS-2020 call, grant no. 101003551 \bibliographystyle{bibliography_style} \section{Exscalate platform} \label{sec:platform} This section describes the Exscalate platform from a software engineering point of view. \prettyref{sec:bigrun} provides more detail on how we tailored the platform for the experiment. \subsection{The dock and score algorithm} \label{sec:dock_and_score} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{exec_time_cpu} \caption{\textit{C\texttt{+}\texttt{+}} implementation on CPU} \label{fig:dock_time_cpu} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{exec_time_gpu} \caption{CUDA implementation on GPU} \label{fig:dock_time_gpu} \end{subfigure} \caption{Time required to dock and score a ligand by varying the number of atoms and torsional bonds. The C\texttt{+}\texttt{+} implementation uses a single core of an IBM $8335$-GTG $2.6$ GHz. The CUDA implementation uses a single NVIDIA V$100$.} \label{fig:dock_time} \end{figure*} The final output of the algorithm is an estimate of the binding strength between a given ligand and the binding site of the target protein. In the virtual screening context, it is common to reduce the problem's complexity by using heuristics and empirical rules instead of performing a molecular dynamics simulation \cite{cheng2012structure}. One implication of this choice is that the numeric score of a ligand depends strongly on the given 3D displacement of its atoms, which is not trivial to compute due to the high number of degrees of freedom involved in the operation. Besides the six degrees of freedom derived from rotating and translating a rigid object in 3D space, we must consider the ligand's flexibility. A subset of the ligand's bonds, named \textit{torsional bonds} \cite{veber2002molecular}, partition the ligand's atoms into two disjoint sets that can rotate along the bond's axis, changing the ligand's shape. A small molecule can have tens of torsional bonds. The algorithm that we use in Exscalate to score a ligand is composed of four steps. The first step is a ligand pre-processing that flattens the ligand by rotating the torsional bonds to maximize the sum of the internal distances between all the molecule's atoms. This computation is protein independent. The second step docks the ligand inside the binding site of the target protein by using a greedy optimization algorithm with multiple restarts. The scoring function that we use to drive the docking considers only geometrical steric effects. We take into account the ligand's flexibility, but we consider the pocket as a rigid body \cite{10.1145/3235830.3235835}. In the experiment, we evaluated $256$ different initial poses for each ligand. The third step sorts the generated poses to select only a few to re-score using the LiGen chemical scoring function \cite{beato2013use} in the fourth step. In particular, we cluster the generated poses using a root mean square deviation (RMSD) of $3$\,\AA{} as the threshold to deem two poses similar. Then we sort them so that the top-scoring pose of each cluster comes first, followed by all the other poses, sorted according to the geometrical scoring function. In the experiment, we scored only the top $30$ poses for each ligand.
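To make the third step concrete, the following Python fragment sketches the greedy clustering and pose selection just described. It is an illustration rather than the LiGen implementation: the representation of a pose as an $N \times 3$ coordinate array, the greedy assignment to the nearest cluster representative, and all function names are our assumptions.

\begin{verbatim}
import numpy as np

RMSD_THRESHOLD = 3.0   # Angstrom, the similarity threshold used above
POSES_TO_RESCORE = 30  # poses re-scored with the chemical function

def rmsd(a, b):
    """Root mean square deviation of two conformations (N x 3)."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def select_poses(poses, scores):
    """Walk the poses from the best geometric score down: a pose
    founds a new cluster unless it lies within RMSD_THRESHOLD of an
    existing representative. Return the indices to re-score, with
    the cluster representatives first and the remaining poses
    after, both in score order."""
    order = sorted(range(len(poses)), key=lambda i: -scores[i])
    representatives, others = [], []
    for i in order:
        near = any(rmsd(poses[i], poses[r]) < RMSD_THRESHOLD
                   for r in representatives)
        (others if near else representatives).append(i)
    return (representatives + others)[:POSES_TO_RESCORE]
\end{verbatim}

Listing the representatives first ensures that the $30$ re-scored poses sample distinct binding modes before any budget is spent on near-duplicates.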
The score of the ligand is the score of the best pose that we found. We implemented the algorithm in \textit{C\texttt{+}\texttt{+}17} to target CPUs, and we developed OpenACC and CUDA implementations to target accelerators \cite{vitali2019exploiting}. The CUDA implementation is the fastest when we target NVIDIA GPUs. The algorithm's asymptotic complexity is $O(n \cdot m)$, where $n$ is the number of atoms and $m$ is the number of torsional bonds. We omit the features of the target docking site since they are constant during the docking application's lifetime. \prettyref{fig:dock_time} reports the time required to dock and score a ligand on a Marconi100 node \cite{m100}, by varying the implementation, the ligand's number of atoms, and the number of torsional bonds. While the C\texttt{+}\texttt{+} implementation's performance (\prettyref{fig:dock_time_cpu}) is expected, the CUDA one (\prettyref{fig:dock_time_gpu}) depends less on the number of atoms. This is because we can use hardware parallelism to process the atoms, while we need to process the torsional bonds serially to preserve the molecule's geometry. Moreover, since we organized the elaboration in bundles of $32$ atoms, we have a steep increase in the docking time whenever the atom count crosses a bundle boundary, for example after $64$ and $96$ atoms. Both implementations show that the docking time is heavily input dependent: the difference between the fastest and slowest classes of ligands is more than one order of magnitude. \subsection{Exscalate high-throughput docking application} \label{sec:docker} The only information required to dock and score a ligand in the target binding site is the description of the two. Thus, the virtual screening process is an embarrassingly parallel problem. However, it is of paramount importance to design how the data are read from the storage, transferred to the accelerators, and written back to the storage. \prettyref{fig:docker_ht} shows an overview of the application abstraction and software stack of the Exscalate high-throughput docking application. We have chosen to write an MPI application that implements an asynchronous pipeline. In particular, we execute a single process on each available node. Then, each process spawns a pipeline to carry out the elaboration using all the computation resources of its node. We use a dedicated thread for each stage of the pipeline. Moreover, each stage may have a thread-safe queue that stores its input data. The first stage is the \textit{reader}, which reads a chunk of data from the file that represents the chemical library and enqueues it in the \textit{splitter}'s queue. The splitter stage inspects each chunk of data to separate all the ligand descriptions that it contains. Then it enqueues each ligand description in the \textit{docker}'s queue. In the experiment, we describe a ligand using a custom binary format derived from the TRIPOS Mol2 format, described in more detail in \prettyref{sec:bigrun}. The docker stage dequeues a ligand description, constructs the related data structures, performs the dock and score steps described in \prettyref{sec:dock_and_score}, and enqueues the ligand's score in the \textit{writer}'s queue. The writer stage dequeues the ligand's score and accumulates the related output, i.e. the ligand's SMILES representation and its score value in a CSV-like fashion, in an internal buffer. When the accumulation buffer is full, the writer stage initiates the writing procedure.
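To illustrate the stage/queue organization, the following minimal Python sketch mimics the splitter--docker--writer chain with one thread per stage and thread-safe queues; the actual implementation is the \textit{C\texttt{+}\texttt{+}}/MPI pipeline described above, and the stage bodies here are placeholders.

\begin{verbatim}
import queue
import threading

def run_stage(fn, in_q, out_q):
    """Generic stage: dequeue an item, process it, enqueue the
    result(s). A None item is the shutdown marker, propagated
    downstream once the stage drains its queue."""
    while (item := in_q.get()) is not None:
        for result in fn(item):
            out_q.put(result)
    out_q.put(None)

def splitter(chunk):              # real one: extract binary ligand
    yield from chunk.split()      # descriptions from a raw chunk

def docker(ligand):               # real one: dock-and-score steps
    yield (ligand, float(len(ligand)))

def pipeline(chunks):
    q_split, q_dock, q_write = (queue.Queue(maxsize=64)
                                for _ in range(3))
    stages = [threading.Thread(target=run_stage, args=a) for a in
              ((splitter, q_split, q_dock),
               (docker, q_dock, q_write))]
    for t in stages:
        t.start()
    for chunk in chunks:          # the reader stage, reduced to a loop
        q_split.put(chunk)
    q_split.put(None)
    while (record := q_write.get()) is not None:
        print(record[0], record[1], sep=",")  # writer: CSV-like rows
    for t in stages:
        t.join()

pipeline(["lig1 lig2", "lig3"])
\end{verbatim}

Bounded queues give natural back-pressure: a fast reader blocks when the docker stage falls behind, keeping memory usage flat.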
The docker stage is the only one that can be composed of several threads that operate on the same queues to enable work-stealing. Moreover, it is possible to use different algorithm implementations, such as CUDA and \textit{C\texttt{+}\texttt{+}}, to leverage the node's heterogeneity. We refer to any docker thread as a \textit{worker}. All the workers that use the CUDA implementation are named \textit{CUDA workers}, while the ones that use the \textit{C\texttt{+}\texttt{+}} implementation are named \textit{CPP workers}. Even if a single CUDA worker is tied to a single GPU, it is possible to have multiple CUDA workers tied to the same GPU. We consider the target binding site constant during the elaboration. Therefore, each process fetches the related information once at the beginning of the execution. Each algorithm implementation is free to store the pocket data structures in the most appropriate memory location during its initialization. In particular, the \textit{C\texttt{+}\texttt{+}} implementation uses constant static memory, while the CUDA implementation uses texture memory. The Exscalate Docking Pipeline library contains the implementations of all the application's stages. To parallelize the computation, it uses the high-level interfaces for MPI and \textit{C\texttt{+}\texttt{+}} threads provided by libdpipe. The LiGenDock/LiGenScore libraries implement the domain-specific functional concerns. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{docker-ht-pipeline} \caption{Overview of the Exscalate docking application by varying the abstraction level.} \label{fig:docker_ht} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{docker-ht-io} \caption{Example of I/O synchronization with three application instances, represented by different colors, that read the input ligands and store the results.} \label{fig:docker_io} \end{figure} Even if the problem is embarrassingly parallel, we need to synchronize all the application's instances using MPI when we perform I/O operations. \prettyref{fig:docker_io} depicts an example of I/O coordination with three MPI processes, represented by different colors. Since the computation pipeline is the same for all the processes, we depict only one MPI process pipeline. To distribute the computation workload among the MPI processes, we split the input file into even slabs according to the file size and the number of MPI processes. Since the size of a molecule description depends on the ligand's properties, such as the number of atoms, a slab seldom starts exactly at the beginning of a ligand description and ends exactly at the end of one. We use the convention that each process elaborates all the ligands whose description begins between the slab's start and stop; the last ligand description may end after the slab's stop (see the sketch below). On the one hand, we are using a very I/O-friendly access pattern because we read the file sequentially. On the other hand, the static data partition prevents work-stealing among MPI processes. Therefore, the application throughput is bound by the throughput of the slowest process. The frequency at which each process reads from the input file depends on the pipeline throughput. The writer stage uses collective I/O operations to coalesce writing requests together before writing to the storage. Moreover, the user can configure the number of processes that issue I/O operations to reduce the pressure on the file system.
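The slab convention lends itself to a compact sketch. The helpers below are a hedged illustration, assuming the byte offsets at which ligand descriptions start can be discovered by scanning a slab for record boundaries; they are not taken from the Exscalate code base.

\begin{verbatim}
def slab_bounds(file_size, n_ranks, rank):
    """Even byte-range partition of the input file across MPI ranks;
    the last rank absorbs the remainder of the integer division."""
    slab = file_size // n_ranks
    begin = rank * slab
    end = file_size if rank == n_ranks - 1 else begin + slab
    return begin, end

def ligands_for_rank(start_offsets, begin, end):
    """A rank owns every ligand whose description *begins* inside
    its slab: the last owned description may extend past `end`,
    while a description straddling `begin` belongs to the
    previous rank."""
    return [o for o in start_offsets if begin <= o < end]
\end{verbatim}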
The writing access pattern is likewise I/O-friendly because all the write operations are parallel and sequential. \subsection{Exscalate workflow} \label{sec:workflow} In principle, it is possible to store all the ligands that we would like to dock in a single file and to deploy the docking application on the whole machine. However, this approach has several drawbacks. The main concern is fault resiliency. The default action to respond to a fault in an MPI communicator, for example after a node failure, is to terminate all the processes \cite{message2015mpi}, which can lead to losing a significant amount of computation effort. This is a well-known problem in the literature \cite{snir2014addressing,bland2013post,Rocco2021}. Another concern lies in the application performance. \prettyref{fig:dock_time} shows how docking and scoring a large and complex ligand requires much more time than a small and simple one. Therefore, we would have a significant imbalance between the MPI processes if all the ligands with many atoms and torsional bonds were stored close together. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{diagram_workflow} \caption{Exscalate workflow, from the input (ligand's chemical library and the protein models) on the left, to the final outcome (most promising set of molecules) on the right.} \label{fig:workflow} \end{figure*} The Exscalate workflow addresses these issues with a pre-processing phase on the chemical library, producing a relatively small number of jobs that can run in parallel using a plain job array to coordinate the execution, such as the one provided by SLURM \cite{10.1007/10968987_3} or PBS \cite{pbs}. \prettyref{fig:workflow} depicts the Exscalate workflow, which requires two different kinds of input from the domain knowledge. On one hand, we require the binding sites of the target proteins. Obtaining them is a complex procedure that we consider outside the scope of this article \cite{ijms21145152}. On the other hand, we require the chemical library of molecules that we want to evaluate. It is possible to represent a molecule in a wide range of formats according to the amount of information that we want to store. In our case, we assume that the chemical library is stored using the SMILES format \cite{doi:10.1021/ci00057a005}, since it is the most compact: it encodes a molecule in a single string that contains only the structure of the molecule, ignoring the hydrogen atoms (for example, aspirin is \texttt{CC(=O)Oc1ccccc1C(=O)O}). The next step in the ligand pre-processing is to broadly classify the ligands into buckets according to their expected execution time, to reduce as much as possible the imbalance during the computation. As shown in \prettyref{fig:dock_time}, the number of torsional bonds and atoms seem to be good predictors. However, it is not trivial to extract these properties from the SMILES representation. For this reason, we trained a model that predicts the execution time given properties that are more accessible at this point of the workflow: the number of heavy atoms, the number of rings, and the number of chains. We also consider interactions between them. We use a decision tree model with a maximum depth of $16$ to predict the ligand's execution time (sketched below). After classifying the ligands according to their complexity, we can perform the pre-processing. In particular, for each ligand we add the hydrogen atoms, we generate the initial displacement of its atoms in the 3D space, and we unfold the molecule (\prettyref{sec:dock_and_score}). This elaboration is required only once and its result can be reused in all the virtual screening campaigns.
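As an illustration of this classification step, the sketch below trains a decision tree regressor on the three counts and their pairwise interactions, mirroring the model described above; the synthetic data, their coefficients, and the helper names are placeholders standing in for the measured campaign reported in \prettyref{sec:bigrun}.

\begin{verbatim}
from itertools import combinations

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def featurize(counts):
    """counts: (n, 3) array of heavy-atom, ring, and chain counts;
    return the counts plus their pairwise interaction terms."""
    inter = np.column_stack([counts[:, i] * counts[:, j]
                             for i, j in combinations(range(3), 2)])
    return np.hstack([counts, inter])

# Synthetic stand-in for the training set: the actual model is fit
# on measured docking times of real ligands.
rng = np.random.default_rng(0)
counts = rng.integers(1, 60, size=(10_000, 3))
time_ms = (0.5 * counts[:, 0] + 2.0 * counts[:, 2]
           + rng.normal(0.0, 3.0, 10_000))

model = DecisionTreeRegressor(max_depth=16)  # depth used in the text
model.fit(featurize(counts), time_ms)

# Ligands are finally grouped into buckets of expected docking time.
bucket = (model.predict(featurize(counts)) // 10).astype(int)
\end{verbatim}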
In addition to the ligand pre-processing, we also need to set the size of the ligand binary files that we will use in the docking phase. Finally, once we have the target binding sites and the ligand binaries, we can perform the virtual screening campaign. The idea is to launch the docking application on all the ligand files, one pocket at a time. The output of the virtual screening is the ranking of the chemical library against each docking site. Once domain experts have selected the ligands that have a strong interaction with multiple docking sites or proteins, it is possible to re-create the 3D displacement of the ligand's atoms on demand. For this reason, we can store only the structure of the molecule, using the SMILES notation. \section{Conclusion} \label{sec:conclusion} In the context of urgent computing, where we want to reduce the social and economic impact of a pandemic as much as possible, we re-designed the Exscalate molecular docking platform targeting HPC systems. We used this platform to perform the largest virtual screening campaign to date against $15$ binding sites of $12$ viral proteins of SARS-CoV-2. The results account for $64$TB of data representing the score of each of the $70$+ billion ligands on the target pockets. The set of most promising compounds filtered for each target has been made available on the MEDIATE portal\footnote{https://mediate.exscalate4cov.eu/data.html} to permit researchers around the world to start more detailed de-novo campaigns from a reduced set of compounds. This document describes the Exscalate platform design from a software engineering point of view and reports its validation in the experiment. In particular, the experiment shows how the docking application can hinge on a node's accelerators to carry out the computation while using the CPU mainly to support the computation, i.e. by synchronizing the I/O toward the file system and feeding the GPUs with data. Moreover, we were able to scale over two full HPC systems, CINECA-Marconi100 and ENI-HPC5, which at the time of the experiment were the two most powerful European supercomputers. \section{Experimental results} \label{sec:bigrun} We validate the Exscalate platform by performing a virtual screening campaign over $70$ billion ligands against $15$ binding sites of $12$ viral proteins of SARS-CoV-2. We deployed the Exscalate platform on HPC5 at ENI \cite{hpc5} and Marconi100 at CINECA \cite{m100}. A Marconi100 node is equipped with $32$ IBM POWER9 AC922 cores ($128$ hardware threads) and $4$ NVIDIA V100 GPUs, with NVLink $2.0$. The computation node of HPC5 is very similar since it also uses $4$ NVIDIA V100 GPUs, but it relies on an Intel Xeon Gold 6252 24C CPU ($24$ cores and $48$ hardware threads) and it uses NVLink only for the GPU interconnection. This section reports the extra-functional concerns of the experiment. \subsection{Evaluating the storage requirements} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{validation_time} \caption{Measured docking time} \label{fig:validation_time} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{validation_error} \caption{Prediction error} \label{fig:validation_error} \end{subfigure} \caption{Frequency distribution of the measured docking time, using the CUDA implementation, and its prediction error.
We discarded values with a frequency lower than $0.001$ for conciseness.} \label{fig:validation} \end{figure*} One of the main concerns in HPC systems is the storage. Scaling an experiment to a trillion docking operations requires us to evaluate in detail what we want to read and write, paying attention to the format. To perform the virtual screening, we need information about the binding sites of the target proteins and the chemical library of ligands that we want to analyze. The former is not an issue since it requires a total storage of $29$MB and the information needed is read once when the application starts. The latter needs more careful consideration. Domain experts usually work with the SMILES format to represent a ligand since it is the most compact. In fact, the chemical library evaluated in the experiment, encoded in the SMILES format, requires a total of $3.3$TB. However, the docking application requires a richer description of the molecule, as described in \prettyref{sec:workflow}. The most widely used format to store the required information is the TRIPOS Mol2, which is encoded in ASCII characters and focuses on readability rather than efficiency. For this reason, we use a custom binary format that stores only the information required by the docking application, such as the atoms' positions, types, and bonds. Comparing the size of the same molecules, the Mol2 format requires $5$--$6\times$ more space than the binary format. Nonetheless, the whole binary chemical library for the experiment requires $59$TB of storage. Storing all the docked poses is unfeasible: since we target $15$ binding sites and re-score $30$ alternative poses for each input ligand, every ligand would produce $15 \times 30 = 450$ poses, each comparable in size to its ${\sim}0.84$kB binary description, i.e. roughly $450$ times the $59$TB input library, or ${\sim}26$PB of storage. For this reason, we store only the SMILES of the molecule and its best score in a CSV-like file. Then, we can re-generate the docked pose on demand since the docking algorithm is deterministic. The size of the final output is $7.3$TB of data. \subsection{Predicting the ligand complexity} \label{sec:validation} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{throughput} \caption{Throughput of the docking application in terms of ligands per second, by varying the number of CPP and CUDA workers.} \label{fig:throughput} \end{figure} \begin{table*} \centering \begin{tabular}{lccr} \toprule Binding site & Thr (ligands/sec/node) & Thr (ligands/sec) & HPC machine \\ \midrule PLPRO &2496 &1996800 &M100 \\ SPIKEACE &2498 &1998400 &M100 \\ NS12thumb &2499 &1999200 &M100 \\ NS13palm &2486 &1988800 &M100 \\ 3CL &2427 &1941600 &M100 \\ NSP13allo &2498 &1998400 &M100 \\ Nprot &2010 &3015000 &HPC5 \\ NSP16 &1980 &2970000 &HPC5 \\ NSP3 &1969 &2953500 &HPC5 \\ NSP6 &1985 &2977500 &HPC5 \\ NSP12ortho &2001 &3001500 &HPC5 \\ NSP14 &1965 &2947500 &HPC5 \\ NSP9 &1996 &2994000 &HPC5 \\ NSP15 &1990 &2985000 &HPC5 \\ NSP13ortho &2454/1987 &1963200/2980500 &M100/HPC5 \\ \bottomrule \end{tabular} \caption{\label{tab:throughput}The throughput reached per node and per machine for each binding site evaluated in the experiment. The NSP13ortho binding site has been partially computed on both machines.} \end{table*} The main shortcoming of the docking application is its inability to perform work-stealing among different nodes, potentially leading to an imbalance in the execution.
For this reason, the workflow clusters the input chemical library according to the expected time required to elaborate a ligand, using features that are trivially accessible from the SMILES format. \prettyref{fig:validation} shows the experimental campaign that we used to train a decision tree regressor \cite{scikit-learn} to predict the docking time using the number of heavy atoms, the number of rings, the number of chains, and the interactions between them. \prettyref{fig:validation_time} shows the measured execution time of a dataset of $21$ million ligands with different numbers of atoms and torsional bonds. We use $80\%$ of the data to train the model, while the remaining data are used to compute the prediction error reported in \prettyref{fig:validation_error}. The model has a negligible mean error ($-0.00088$ ms), with a standard deviation of $3.81$ ms. Even if the average error is close to zero, the standard deviation shows that individual predictions do carry an error. In the experiment, we cluster the ligands in buckets of $10$ ms to account for this variability. Since the high-throughput docking aims at avoiding imbalance in the computation, we are interested in the average behavior. \subsection{Exploiting a node's heterogeneity} The availability of multiple implementations of the dock and score algorithm grants access to heterogeneous resources. However, the relationship between the number of CUDA and CPP workers (\prettyref{sec:docker}) and the application throughput is not trivial. \prettyref{fig:throughput} shows the application throughput in terms of docked ligands per second, by varying the number of CUDA and CPP workers, when we deploy the application on a Marconi100 node, which has $32$ IBM POWER9 AC922 cores ($128$ hardware threads) and $4$ NVIDIA V100 GPUs. The application binds each CUDA worker to a single GPU in a round-robin fashion. For example, when we use $24$ CUDA workers, we have $6$ threads that feed data to and retrieve results from each GPU in the node. The throughput shows that the application reaches its peak performance with a high number of CUDA workers. Moreover, increasing the number of CPP workers to match the number of hardware threads harms the application performance. This behavior implies that, in our case study, it is better to use CPUs to support the accelerators and the I/O operations rather than to contribute to the elaboration itself. Furthermore, a single CUDA worker is not enough to benefit the most from a GPU. We expected this result because a CUDA worker needs to parse the ligand description and initialize the related data structures before launching any CUDA kernel. Thus, by using more CUDA workers we can hide these overheads and fully utilize the GPU. To perform this analysis we used the dataset ``Commercial Compound MW$<$330'' from the MEDIATE website. It is composed of $5$ million small molecules and it is publicly available. \subsection{Scaling on the target HPC machine} To overcome the limitations of using a single MPI application that runs on the whole supercomputer, we run several instances on different portions of the input data. We divided the input set into $\sim 3400$ jobs, where each job is composed of $32$ MPI processes, lasts for about $5$ minutes, and targets a single binding site. For this experiment, we decided to evaluate the binding sites sequentially.
\prettyref{tab:throughput} reports, for each binding site, the average throughput of a node and of the whole machine. In particular, to compute the throughput of a node, we sampled the application log files to get the average throughput per MPI process, which equals the node performance since we run one process per node, and then we computed the average across processes. To compute the machine throughput, we divided the number of ligands in the chemical library by the time to solution of the experiment, i.e. the wall time required to dock the whole chemical library against the target docking site. Thus, it takes into consideration all the overheads of the execution. On average, the throughput of a single node is $2.2$k ligands per second, while the combined throughput of both supercomputers is $5$M ligands per second (the per-machine figures in \prettyref{tab:throughput} correspond to $800$ Marconi100 nodes and $1500$ HPC5 nodes). The throughput per node that we measured while scaling to the whole machine is comparable to the one that we measured while tuning the number of CUDA and CPP workers. Therefore, the Exscalate platform was able to exploit all the available resources. \section{Introduction} Drug discovery is a long process that usually involves \textit{in-silico}, \textit{in-vitro}, and \textit{in-vivo} stages. The outcome of this process is a molecule, named \textit{ligand}, that has the strongest interaction with at least one binding site of the target protein, also known as a \textit{pocket}, that represents the disease. Domain experts expect this interaction to lead to a beneficial effect. Virtual screening is one of the early stages and aims to select a set of promising \textit{ligands} from a vast chemical library. The complexity of this operation is due to the ligand and pocket flexibility: both of them can change shape when they interact. Therefore, to estimate the interaction strength using a \textit{scoring function}, we also need to predict the displacement of their atoms using a \textit{docking} algorithm. This problem is computationally heavy, and it is well known in the literature \cite{Pagadala2017}. Moreover, to increase the probability of finding promising candidates, we would like to increase the size of the chemical library as much as possible, exacerbating the complexity of the virtual screening. Since the evaluation is \textit{in-silico}, we can design new molecules by simulating known chemical reactions. Therefore, the chemical library size is limited only by the system's computational power. In the context of urgent computing, where the time required to find a therapeutic cure should be as short as possible, we re-designed the Exscalate platform with the goal of virtually screening as many ligands as possible in a given time budget. To maximize the throughput of the docking platform, we target \textit{High-Performance Computing (HPC)} supercomputers since their design maximizes the number of arithmetic operations per second (\textit{FLOP/s}), using double-precision floating-point numbers. Indeed, the TOP500 list \cite{top500} ranks all the HPC supercomputers worldwide according to their throughput. When we focus on the top five supercomputers, we can notice how four of them have heterogeneous nodes that heavily rely on accelerators, typically \textit{GPU}s. Thus, we need to hinge on the node's accelerators and efficiently scale up to all the available nodes to use the full computation power of the target machine. Even if we focus on the software level, there are multiple well-known issues \cite{ashby2010opportunities,thakur2010mpi}.
How to efficiently use accelerators, how to transfer data within a node to feed the accelerators, how to move data from the storage devices to the machine's nodes and vice versa, how to minimize communications between nodes and synchronizations between processes, and how to improve resilience to reduce the impact of faults on the time-to-solution are the most representative issues. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{pictures/1TD_CUT.pdf} \caption{Schematic representation of the dataset used for the EXSCALATE4CoV virtual screening experiment.} \label{fig:1TD} \end{figure*} \begin{table} \centering \begin{tabular}{lr} \toprule \textbf{Protein} & \textbf{PDB code}\\ \midrule 3CL protease (NSP5) & 6LU7 \\ N-protein & 6VYO \\ NSP3 & 6W02 \\ NSP6 & De novo model \\ NSP9 & 6W4B \\ NSP12 & 7BV2 \\ NSP13 & 6XEZ \\ NSP14 & Homology model \\ NSP15 & 6W01 \\ NSP16 & 6W4H \\ PL protease & 6W9C \\ Spike-ACE2 & 6M0J \\ \bottomrule \end{tabular} \caption{\label{tab:targetPDB} The 3D targets used in the molecular docking experiments. A target might have different pockets.} \end{table} In the context of the EXSCALATE4CoV European project\footnote{https://www.exscalate4cov.eu/}, which aims at finding new potential drugs against the COVID-19 pandemic \cite{COVID19}, we deployed the Exscalate platform on two HPC machines with a combined throughput of $81$ PFLOPS, to rank a chemical library of more than $70$ billion ligands against $15$ binding sites of $12$ viral proteins of SARS-CoV-2 (Figure \ref{fig:1TD}). The crystal structures of the main functional units of the SARS-CoV-2 proteome were obtained from the Protein Data Bank; \prettyref{tab:targetPDB} reports the list of the proteins analysed, with the corresponding PDB codes. Homology models were generated and used for the proteins whose crystal structure is not available. Overall, the experiment lasted $60$ hours and performed a trillion docking operations, making it the largest virtual screening campaign to date. The knowledge generated by this experiment is publicly released through the MEDIATE website\footnote{https://mediate.exscalate4cov.eu}. The remainder of the paper is organized as follows: \prettyref{sec:related} briefly describes the applications most closely related to ours that can perform a virtual screening. \prettyref{sec:platform} describes the Exscalate platform, highlighting the design choices that led to the performance reported in \prettyref{sec:bigrun}, where we deployed the application on two HPC machines. Finally, \prettyref{sec:conclusion} concludes the paper. \section{Related works} \label{sec:related} A molecular docking application can serve different purposes, from virtual screening to accurate simulations. For this reason, we have a wide spectrum of algorithms and approaches \cite{Biesiada2011,Yuriev2015,Pagadala2017} that cover the performance--accuracy trade-off. Since we use molecular docking to select the most promising candidates, we are interested in fast approaches such as ICM \cite{neves2012docking}, PSOVina \cite{Ng2015}, or EUDOCK \cite{Pang2010}. However, fewer approaches can use accelerators \cite{Thomsen2006}, which account for the majority of the computation power in the target supercomputers. AutoDock \cite{Morris2009} is the most closely related work, since it has been ported to CUDA (AutoDock-GPU \cite{legrand2020gpu}) and deployed on the Summit supercomputer \cite{summit}, where over one billion molecules were docked on two SARS-CoV-2 proteins in less than two days \cite{glaser2021high}.
They hinge on Summit's NVMe local storage to dock batches of ligands in the target pocket and to store the intermediate results. In particular, AutoDock-GPU uses OpenMP to implement a thread-based pipeline, where each thread reads ligands from the file, launches the CUDA kernels, waits for their completion, and writes back the results. Since most docking algorithms use a fast but approximate scoring function to drive the estimation of the 3D pose of a ligand, it is common to re-score the most promising ones with a more accurate scoring function. They use a custom CUDA version of RFScore-VS \cite{wojcikowski2017performance} to perform this task and BlazingSQL \cite{blazingSQL} to compute statistics and select the top-scoring ligands. To orchestrate the workflow, they use FireWorks \cite{jain2015fireworks} from an external cluster to ensure a consistent state in the presence of faults in the compute nodes. The approach that we followed to design the Exscalate platform differs in several ways. We use a monolithic application to dock and re-score the ligands, using MPI \cite{message2015mpi} to scale out and \textit{C\texttt{+}\texttt{+}11} threads to scale up. The proposed solution can reach a high throughput without relying on the node's local storage, which is not available in the target HPC systems. Moreover, we envelop the application in a more complex workflow that compensates for its weak points. \prettyref{sec:platform} describes the platform in more detail. Furthermore, since our docking algorithm is deterministic, we can store only the molecule's structure, using the SMILES format \cite{weininger1988smiles}, and the best score that we found. Then, we can re-generate the 3D displacement of the atoms on demand.
\section{Introduction}\label{s:intro} Magnetic flux ropes are specific magnetic configurations in the solar atmosphere where helical field lines wrap around a common axial field. They are fundamentally associated with solar eruptions, particularly coronal mass ejections (CMEs), due to their magnetic free energy content and susceptibility to a loss of equilibrium or instability \citep[see][for a review]{green18}. Although the magnetic field of flux ropes cannot readily be observed directly in imaging data, sigmoids are a well-known indirect signature that indicates the presence of helical field lines, of around one turn, in a flux rope configuration \citep[][]{rust96,green11}. Sigmoids are hot, $S$-shaped (or double $J$-shaped) coronal loops that emit in EUV and soft X-ray, covering a temperature range of log $T_{K}$ = [6.0, 7.2] \citep[e.g.][]{gibson02,tripathi09,james18,mulay21}. When observed on the Sun, sigmoids are highly likely to erupt as a CME \citep{rust96,canfield99,canfield07}. Flux ropes in sigmoidal active regions can form during an active region's emergence and/or decay phase. Regardless of the phase, though, flux rope formation can be a consequence of photospheric flows that drive reconnection at some height in the atmosphere. For example, during an active region's emergence phase, strong orbiting motions of the photospheric field bring together sheared loop systems and drive reconnection between them, resulting in flux rope formation in the corona \citep[][]{james20}. Similarly, during the decay phase, reconnection in the photosphere, which manifests itself as flux cancellation \citep{martin85}, readily occurs and transforms an active region's sheared arcade into a low-lying flux rope \citep{vanballegooijen89}. This process takes place over several days and, once the sigmoid forms as a continuous $S$-shape (from double $J$-shaped loops), a CME follows within a period of time measured in hours \citep{green14}. As decaying active regions disperse their fragmented flux over an ever larger area, flux cancellation and flux rope formation readily occur along the internal or main polarity inversion line (PIL) of the region \citep[e.g.][]{green11,green18,yardley18}. These transformations of the magnetic configuration are realized through magnetic reconnection occurring from photospheric up to low coronal heights. Different reconnection heights ultimately influence the specific details of the flux rope and the plasma it contains, and hence its likelihood to erupt as a CME. Therefore, due to the very nature of the formation process of flux ropes, plasma composition is a potentially powerful diagnostic to constrain flux rope formation models, with the measured elemental composition of sigmoid plasma providing information as to its origin, whether photospheric or coronal. Plasma composition can be determined by considering coronal emission lines from elements with different first ionization potentials (FIP). In general, elements with a FIP $\lesssim$ 10 eV have enhanced abundances compared to those with a FIP $\gtrsim$ 10 eV when the plasma is observed in the corona relative to the photosphere \citep[see the review of][]{laming15}. The degree of enhancement is highly correlated with the Sun's magnetic activity on all spatial and temporal scales \citep[e.g.][]{brooks15,baker18,brooks17}.
Studies of erupting prominences show that their cool, dense plasma has photospheric composition, suggesting that unfractionated plasma from the photosphere/chromosphere was brought upwards into the prominence body rather than fractionated plasma from the corona condensing \citep[e.g.][]{feldman92,spicer98,ciaravella00}. \cite{parenti19} confirmed that quiescent prominences also have photospheric composition. These studies focused on the properties of the filament/prominence material suspended in a flux rope configuration. To date, there are few examples of plasma composition being used either on its own or with other observational evidence to investigate the formation and evolution of flux ropes in active regions (ARs). \cite{baker13} found unfractionated plasma, i.e. of photospheric composition, along a sigmoid channel in an eruptive active region. The authors concluded that the observed photospheric composition plasma combined with significant flux cancellation was evidence of a flux rope that had formed via reconnection low down in the solar atmosphere as proposed by \cite{vanballegooijen89}. Coronal plasma composition was a key observable used by \citet{james17,james18} to verify that a sigmoidal flux rope formed via reconnection high up in the corona. \cite{fletcher01} established a link between elemental abundances and the type of magnetic topology associated with transition region brightenings within a sigmoidal active region. More precisely, they found that the brightenings related to bald patch separatrix and quasi-separatrix layers are associated with plasma composition close to typical photospheric and coronal composition, respectively. In the investigation presented here, coordinated \emph{Hinode}/XRT, SOT, and EIS observations are used to show how plasma composition evolves in a sigmoidal active region as a flux rope forms and then subsequently erupts. In Section \ref{s:evolution}, we provide observations of the photosphere, chromosphere, and corona during the active region's decay phase when the flux rope formed. An account of the \emph{Hinode}/EIS diagnostics used in this study follows in Section \ref{s:eis}. The spatially resolved composition ratio and temperature maps are presented in Section \ref{s:maps}. We discuss our findings in the context of flux rope formation models based on flux cancellation \cite[e.g.][]{vanballegooijen89,aulanier10} in Section \ref{s:discussion} before concluding in Section \ref{s:conclusion}. \section{Evolution of AR 10977}\label{s:evolution} \begin{figure}[bt!] \epsscale{1.2} \plotone{Fig1.png} \caption{Global evolution of AR 10977. (a) Half of total unsigned flux (black), mean temperature (red), and mean \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ / \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio (blue) vs time. \emph{Hinode}/EIS raster start times are shown with dashed gray lines. The global evolution of magnetic flux, temperature, and composition ratio is compared in Section \ref{s:Global_Evolution}. (b) \emph{Hinode}/XRT C Poly light curve for December 5--7. \label{fig:global}} \end{figure} \begin{figure*}[hbt!] \epsscale{1.2} \plotone{Fig2.png} \caption{SOHO/MDI LOS magnetograms of AR 10977 at 00:00 UT on 2007 December 4--7. \label{fig:mdi_series}} \end{figure*} \begin{figure*}[hbt!] \epsscale{1.2} \plotone{Fig3.png} \caption{\emph{Hinode}/XRT C Poly images of the coronal field evolution during the period of December 5--7.
A sense of increasing shear in the northern section of the active region is indicated by the black and red dashed lines showing the approximate angle between the main PIL and loops crossing it. A movie of the \emph{Hinode}/XRT images is included as `XRT$\_$movie.mp4'. The movie covers the time period from 04:39 UT on 2007 December 5 to 12:18 UT on 2007 December 7. The field of view centers on the sigmoidal active region as it evolves and eventually erupts as a CME. \label{fig:xrt_series}} \end{figure*} AR 10977 was a simple bipolar active region (Zurich classification $\beta$) visible on the solar disk from 2007 December 2--12. The active region was in its emergence phase for approximately two days before a peak flux value of $\sim$2.4$\times$10$^{21}$ Mx was reached at $\sim$ 12:00 UT on December 4. The active region then entered its decay phase, which was characterized by the formation of a sigmoid indicating that a flux rope had been built \citep{green11,savcheva12,gibb14}. The flux rope erupted early on December 7 in two stages -- a failed eruption followed a few hours later by a CME that was detected in STEREO-B coronagraph data \citep{ma09b}. A B1.4-class GOES flare and a global wave were associated with the CME \citep[][]{ma09,green11,long11,attrill14}. Figure \ref{fig:global}(a) shows half of the total unsigned magnetic flux (black curve) at the end of the emergence phase and during the decay phase, with the times of \emph{Hinode}/EIS observations plotted as vertical gray dashed lines. During the early decay phase, significant flux cancellation occurred along the northern section of the main PIL for 2.5 days prior to the CME that occurred at 04:20 UT on the 7th \citep{green11}. At the southern-most end of the main PIL, flux cancellation was also observed in the more fragmented magnetic field, but this was minor compared to that of the primary site in the north, and cancellation began later there \citep{green11}. The locations of the main PIL and the sites of flux cancellation are identified in the SOHO/MDI magnetograms of Figure \ref{fig:mdi_series}. Figure \ref{fig:global}(b) shows a light curve of the soft X-ray emission from the entire active region. The soft X-ray emission associated with the failed eruption and the CME peaked at 01:08 UT and 05:45 UT, respectively, on 2007 December 7. The evolution of the coronal loops during the region's decay phase can be seen in the \emph{Hinode}/XRT images in Figure \ref{fig:xrt_series}. The image series shows that the coronal field evolves in three key stages during the decay phase: flux rope formation, failed/CME eruptions, and post-CME eruption, briefly described below and discussed in more detail in Sections \ref{stage1}, \ref{stage2}, and \ref{stage3}, respectively. Early on December 5, before the start of significant flux cancellation in the north, the arcade loops are aligned approximately orthogonal to the main PIL, i.e. they are potential (see image at 00:34 UT on December 5). The arcade field is more sheared 12 hours later as flux cancellation is accelerating (see image at 12:57 UT). The approximate shear angles are shown by the crossing of the dashed red/black lines representing the main PIL/loop axes in both images. By 15:51 UT on December 6, the active region loops have formed a continuous forward S-shaped sigmoid \citep{green11}. The sigmoid/flux rope expands and rises during the failed eruption and CME (see images at 01:47 and 04:20 UT on December 7, respectively).
Highly sheared post-eruption loops are present in the northern region immediately following the CME from 05:00 to 12:00 UT. The sigmoid was destroyed during the CME \citep{green11} but reformed after \emph{Hinode}/EIS composition observations ended at 11:26 UT. A filament is present in AR 10977 on December 5, as shown in the Improved Solar Optical Observing Network (ISOON) and \emph{Hinode}/SOT H$\alpha$ images in Figure \ref{fig:filament}. It lies along the main PIL and extends to the northwest of the active region. By the start of the \emph{Hinode}/SOT observing window at 15:01 UT on December 6, a newly formed branch of the filament is observed in the north. The full extent of the filament then has an S-shape similar to that of the sigmoid observed in the soft X-ray images in Figure \ref{fig:xrt_series}. The distinctive S-shaped filament remains essentially intact during the failed eruption and CME and throughout the 7th (not shown in Figure \ref{fig:filament}). This is sometimes observed in other events \citep[e.g.][]{dudik14} and it implies that the low-lying magnetic configuration supporting the filament is not participating in the eruptions. \begin{figure}[hbt!] \epsscale{0.70} \plotone{Fig4a.png} \plotone{Fig4b.png} \plotone{Fig4c.png} \plotone{Fig4d.png} \caption{ISOON and \emph{Hinode}/SOT H$\alpha$ images at 14:39 UT (a), 17:30 UT (b), and 19:20 UT (c) on December 5 and at 15:01 UT on December 6 without/with contours (left/right) of the SOHO/MDI line-of-sight magnetic field component of $\pm$100 G (white/green for positive/negative values). Filaments are indicated by the white arrows. $X$ and $Y$ coordinates are in arcsec. \label{fig:filament}} \end{figure} \section{\emph{Hinode}/EIS Observations}\label{s:eis} \emph{Hinode}/EIS observed AR 10977 from 2007 December 5--7, during which time Study \#180 was run 16 times, 13 of which are included here. A field of view of 180$\arcsec\times$512$\arcsec$ was constructed by stepping the 1$\arcsec$ slit in 3$\arcsec$ increments over 60 pointing positions, taking 50-second exposures at each position. All spectra were corrected for dark current, warm/hot/dusty pixels, and slit tilt using standard EIS routines in the SolarSoft Library \citep{freeland98}. Single Gaussian functions were fitted to the unblended emission lines \citep{brown08} used for the composition ratio and temperature measurements. \begin{figure*}[hbt!] \epsscale{1.15} \plotone{Fig5.png} \caption{\ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ / \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio. Left to right: Emissivity as a function of temperature and density for \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ (a), \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ (b), ratio of the Fe/S lines (c), and ratio as a function of temperature for densities of log N$_{e}$ = [8--10] (d). \label{fig:theory}} \end{figure*} In order to investigate the evolution of coronal plasma composition, a suitable low-FIP and high-FIP spectral line pair must be identified amongst the available lines. Previous EIS composition studies have employed the \ion{Si}{10} 258.38 $\mbox{\normalfont\AA}$/\ion{S}{10} 264.22 $\mbox{\normalfont\AA}$ line pair for 1--2 MK plasma \citep[e.g.][]{brooks11,baker13,brooks15} and the \ion{Ca}{14} 193.87 $\mbox{\normalfont\AA}$/\ion{Ar}{14} 194.40 $\mbox{\normalfont\AA}$ line pair for 3--4 MK plasma \citep[e.g.][]{doschek15,baker20,to21}. Neither of these well-known composition diagnostics is available in Study \#180.
Instead, we use the low-FIP \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ / high-FIP \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ line pair recommended by \cite{feldman09} for measuring the FIP effect at temperatures of 2--3 MK, a suitable range for sigmoids \citep[e.g.][]{tripathi09}. The ratio was computed using the CHIANTI Atomic Database, Version 10 \citep{dere97,gdz21}. Figure \ref{fig:theory} shows the emissivity of \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ (panel a), \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ (panel b), and the Fe/S ratio (panel c) as a function of temperature and density, and the ratio as a function of temperature for specific densities of log N$_{e}$ = [8, 9, 10] (panel d). The emissivities were determined using photospheric abundances where log(H) = 12, log(Fe) = 7.45 and log(S) = 7.14 \citep{grevesse07}. The \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ and \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ lines have a similar temperature dependence in ionization equilibrium \citep{feldman09} and a negligible electron density dependence; therefore, their intensity ratio depends primarily on the relative abundances of Fe and S. In the temperature range log $T_{K}$ = [6.3, 6.5], the relationship is well constrained, as shown by the yellow shaded region in panel (d) of Figure \ref{fig:theory}. In this range, the ratio curves in panel (d) are relatively flat and independent of electron density, with a variation span of 23$\%$. Therefore, in this study, we use the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ and \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ intensity ratio to determine whether the plasma composition is of photospheric or coronal origin, and we refer to the spatial distribution of this ratio as composition ratio maps. A ratio of $\sim$0.20--0.25 indicates photospheric plasma composition (= unfractionated plasma), while a ratio $>$0.80, more than a factor of 3 above the photospheric value, indicates coronal plasma composition (= fractionated plasma). Because of the stronger temperature dependence outside this range (see again Figure \ref{fig:theory}(d)), it becomes more difficult to disentangle temperature effects from those of changes in relative abundances for plasma temperatures outside of log $T_{K}$ = [6.3, 6.5]. \begin{figure}[hbt!] \epsscale{1.1} \plotone{Fig6.png} \caption{Theoretical curve of the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/ \ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ intensity ratio vs. log $T_{K}$. The temperature range highlighted in yellow is the same as in Figure \ref{fig:theory}(d). \label{fig:temp}} \end{figure} Temperature maps of the sigmoidal active region were made using the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ diagnostic line ratio. The temperature range covered by this line ratio is compatible with that of the composition diagnostic used in this study. Figure \ref{fig:temp} shows the theoretical temperature curve derived for the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ ratio with the range of log $T_{K}$ = [6.3, 6.5] highlighted in yellow. (See the Appendix for \emph{Hinode}/XRT filter ratio temperature maps in support of the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ temperature ratio maps.)
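Purely as an illustration of how the fitted line intensities translate into the composition ratio maps discussed below, the following Python sketch applies the thresholds just described; the array names, the masking threshold for weak lines (anticipating the low-intensity caveat in the next paragraph), and the three-way labelling are our assumptions, not the analysis pipeline used for this paper.

\begin{verbatim}
import numpy as np

PHOTOSPHERIC_MAX = 0.25  # Fe XVI / S XIII: unfractionated plasma
CORONAL_MIN = 0.80       # ratio indicating fractionated plasma

def composition_map(i_fe16, i_s13, min_intensity=1.0):
    """Composition ratio map from fitted Fe XVI 262.98 A and
    S XIII 256.69 A intensities (2D arrays in the same units).
    Pixels where either line is too weak for a reliable Gaussian
    fit are masked."""
    weak = (i_fe16 < min_intensity) | (i_s13 < min_intensity)
    ratio = np.where(weak, np.nan,
                     i_fe16 / np.maximum(i_s13, 1e-30))
    label = np.full(ratio.shape, "mixed", dtype=object)
    label[ratio <= PHOTOSPHERIC_MAX] = "photospheric"
    label[ratio >= CORONAL_MIN] = "coronal"
    label[weak] = "masked"
    return ratio, label
\end{verbatim}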
The estimated uncertainties in the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition and \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ ratios are $\sim$30\%, based on the intensity calibration uncertainty of $\sim$23\% \citep{Lang2006}; this is consistent with combining the calibration uncertainty of the two lines in quadrature ($\sqrt{2} \times 23\% \approx 33\%$). It should be noted that some low intensity pixels may have unreliable composition ratio measurements. Sufficiently low intensities mean that the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$ and/or \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ lines may not have well-defined spectral profiles over and above the background level. This is more likely to affect pixels in regions of photospheric composition plasma. \emph{Hinode}/EIS \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ intensity, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio, and \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ temperature maps are provided in Figures \ref{fig:comp1}--\ref{fig:comp3}. Each map has been overplotted with \emph{SOHO}/MDI $\pm$100 G contours of the line-of-sight magnetic field component closest in time and differentially rotated to the start time of the EIS raster. The right column of the figures shows the corresponding \emph{Hinode}/XRT C Poly intensity maps. The color scheme for the composition ratio maps has been chosen so that photospheric composition with a ratio of $\sim$0.20--0.25 is dark (blue) and coronal composition with a ratio of $>$0.8 is light (tan/yellow/white). Orange/red indicates mixed or partially fractionated plasma in between photospheric and coronal composition. The color of the arrows in the composition ratio maps corresponds to this color scheme. \begin{figure*}[hbt!] \epsscale{1.1} \plotone{Fig7.png} \caption{\emph{Hinode}/EIS \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ intensity, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ temperature and XRT intensity maps on 2007 December 5. MDI contours of $\pm$100 G are overplotted on the EIS maps (positive - turquoise or white, negative - green or red). Blue/red/yellow arrows designate unfractionated/partially fractionated/highly fractionated coronal plasmas as discussed in the text. Stages of evolution are explained at the beginning of Section \ref{s:maps}. $X$ and $Y$ coordinates are in arcsec. \label{fig:comp1}} \end{figure*} \begin{figure*}[hbt!] \epsscale{1.2} \plotone{Fig8.png} \caption{\emph{Hinode}/EIS \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ intensity, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ temperature and XRT intensity maps on 2007 December 6. MDI contours of $\pm$100 G are overplotted on the EIS maps (positive - turquoise or white, negative - green or red). Blue/red/yellow arrows designate unfractionated/partially fractionated/highly fractionated coronal plasmas as discussed in the text. Stages of evolution are explained at the beginning of Section \ref{s:maps}. $X$ and $Y$ coordinates are in arcsec. \label{fig:comp2}} \end{figure*} \begin{figure*}[hbt!]
\epsscale{1.1} \plotone{Fig9.png} \caption{\emph{Hinode}/EIS \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ intensity, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio, \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ temperature and XRT intensity maps on 2007 December 7. MDI contours of $\pm$100 G are overplotted on the EIS maps (positive - turquoise or white, negative - green or red). Blue/red/yellow arrows designate unfractionated/partially fractionated/highly fractionated coronal plasmas as discussed in the text. Stages of evolution are explained at the beginning of Section \ref{s:maps}. $X$ and $Y$ coordinates are in arcsec. \label{fig:comp3}} \end{figure*} \section{Composition and temperature evolution in the sigmoidal active region}\label{s:maps} \emph{Hinode}/EIS observations span the three main stages of the coronal evolution in the sigmoidal active region: Stage 1 -- sigmoid/flux rope formation (observations at 00:14, 06:30, 11:39, 14:48, and 23:33 UT on December 5 and 02:14, 05:28, and 12:03 UT on December 6 in Figures \ref{fig:comp1} and \ref{fig:comp2}), Stage 2 -- failed eruption and CME (observations at 00:18 and 03:27 UT on December 7 in Figure \ref{fig:comp3}), and Stage 3 -- post-eruptive period (observations at 06:37, 10:34, and 11:26 UT on December 7 in Figure \ref{fig:comp3}). \subsection{Flux Rope Formation (Stage 1)}\label{stage1} The northern and southern sections of AR 10977 evolved separately throughout the active region's decay phase. Bright sheared loops crossed the main PIL, connecting the main magnetic polarities. These loops in the \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ and XRT intensity images correspond to highly fractionated plasma in the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition ratio maps at 00:14, 06:30, and 11:39 UT on December 5, as indicated by the yellow arrows in Figure \ref{fig:comp1}. The degree of plasma fractionation in these loops notably decreases in the EIS composition ratio maps from 14:48 UT on December 5 and this trend continues on the 6th. In the region north of the sheared arcade, plasma composition is predominantly partially fractionated (red arrows) at 00:14 UT. However, at the localized primary site of flux cancellation (see Figure \ref{fig:mdi_series}), the plasma composition is approaching photospheric levels (blue arrow). The spatial distribution of the photospheric-like composition extends away from the primary site of flux cancellation along loops in the northern-most region (called the elbow) in the composition ratio maps at 06:30, 11:38, and 14:48 UT on December 5 (Figure \ref{fig:comp1}). As the sigmoid develops, the northern elbow reverts to partially fractionated (red) plasma on December 6. The loops in the southern section of the active region remained essentially perpendicular to the PIL, i.e. potential-like, throughout the sigmoid formation. Plasma composition in this region evolved from partially fractionated (red) at 00:14 UT on December 5 to photospheric-like hours later. The blue patch is located at the secondary site of flux cancellation.
In general, the southern region has mixed plasma composition until a distinct curved feature of photospheric (blue) plasma develops in the magnetic void between the fragmented polarities at $\sim$--125$\arcsec$ in $Y$ (see the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ ratio map at 12:03 UT on December 6 in Figure \ref{fig:comp2}). The \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{Fe}{15} 284.16 $\mbox{\normalfont\AA}$ temperature maps in Figures \ref{fig:comp1} and \ref{fig:comp2} show very little temperature evolution, within the narrow range considered, in the loops to the north and south of the sheared arcade. Temperatures remain in the range $\log T_K$ = [6.32, 6.38] on December 5 and 6, well inside the reliable temperature window of our diagnostics as marked by the yellow band in Figure \ref{fig:temp}. Within the sheared arcade, the zone along the main PIL is hotter than its surroundings, with temperatures in excess of $\log T_K$ = 6.4, approaching $\log T_K$ = 6.5. The extent of the hottest region evolves from a broad patch of approximately 30$\arcsec$ $\times$ 40$\arcsec$ to a narrow bar-like feature along the axis of the S-shaped sigmoid as it forms. \subsection{Eruptions (Stage 2)}\label{stage2} \emph{Hinode}/EIS observed the expanded and extended sigmoid during the rise phase of the light curve peak associated with the failed eruption at 00:18 UT (top panel of Figure \ref{fig:comp3}). Overall, plasma composition has evolved from partially fractionated to more photospheric-like at the extreme ends of the S-shaped loops, most notably in the northern elbow. After the failed eruption but before the CME, the sigmoid is dominated by photospheric composition along its full extent (at 03:27 UT). The temperature of the flux rope structure is approximately $\log T_K$ = 6.35 at this time, and the hot bar-like feature along the sigmoid's axis is not prominent in the composition ratio or temperature maps, although it is still clear in the \ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ intensity map. \subsection{Post CME Eruption (Stage 3)}\label{stage3} The sigmoid structure was destroyed during the CME and replaced by highly sheared loops \citep[see Figure \ref{fig:comp3} at 06:37 UT; ][]{green11}, while the potential-like arcade in the southern part of the active region remained intact. The structure of the active region and distribution of plasma composition are similar to those of the first EIS observation at 00:14 UT on December 5. The post-eruption arcade loops contain coronal composition plasma. Patches of photospheric composition plasma persist at the secondary site of flux cancellation in the southern half of the active region. The largest spatial extent of hot plasma ($\log T_K$ = [6.45, 6.5]) is observed in the post-eruption arcade at 06:37 UT, after which the temperature returns to the characteristic pre-eruption range of $\log T_K$ = [6.32, 6.38] within a few hours. \section{Discussion}\label{s:discussion} In Section \ref{s:maps} we show how plasma composition evolved in an active region that became sigmoidal as its sheared arcade field was transformed into a flux rope that eventually erupted as a CME. Photospheric composition plasma was found in coronal structures connected to sites of flux cancellation along the main PIL. Highly fractionated plasma was observed in the sheared arcade field at the beginning of significant flux cancellation on December 5 and then again shortly after the CME on December 7.
Within the temperature range over which the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ composition diagnostic is effective ($\log T_K$ = [6.3, 6.5]), photospheric composition was observed where the plasma temperature was $\log T_K$ $\lesssim$ 6.35, and coronal composition where the plasma temperature was $\log T_K$ $\gtrsim$ 6.4, suggesting that the level of fractionation, i.e. coronal plasma composition, is linked to the level of coronal heating. More precisely, it is linked to the height at which reconnection is occurring (see Sections \ref{s:Photospheric_Cancellation} and \ref{s:Local_Evolution_Sites}). When reconnection occurs in the corona, the released energy is transported, e.g. by MHD waves, along the newly formed flux tubes. This is associated with an increase in the composition ratio (i.e. level of coronal composition plasma). If reconnection occurs at the photospheric/chromospheric level, in particular at bald patches, plasma with photospheric composition is injected into the corona. This is in agreement with the results of \citet{fletcher01}. \subsection{Photospheric Flux Cancellation Mechanism of Flux Rope Formation} \label{s:Photospheric_Cancellation} The flux rope formation mechanism based on flux cancellation reported in \cite{green11} and modelled in \cite{aulanier10}, \cite{savcheva12} and \cite{gibb14} provides the basis for our understanding of the evolution of plasma composition and temperature in AR 10977. \cite{martin85} define flux cancellation as the apparent loss of magnetic flux in closely spaced magnetic fields of opposite polarities. Converging photospheric motions towards the PIL are a natural process occurring as a consequence of the active region's magnetic field dispersion driven by convection \citep[][and references therein]{lvdg15}. Flux cancellation in a sheared arcade is the observational manifestation of reconnection along the PIL, which in turn alters the coronal magnetic field structure. A description of the process from \cite{vanballegooijen89}, illustrated in their Figure 1, is summarized below. When the converging motions drive the opposite-polarity footpoints of two sheared loops crossing the PIL towards each other, the loops are forced to reconnect. This process creates two new loops: a short loop that submerges at the PIL and a long loop connecting the distant footpoints. The submerged field reduces the flux content within the system, which is detected as flux cancellation. The converging motions also increase the magnetic shear of the loops before reconnection, as well as the shear of the long loops formed by reconnection. The result is an increasingly sheared magnetic arcade with a more sheared core, and later on a flux rope is formed. This later stage is due to less sheared loops reconnecting with more sheared loops below them, so that the reconnected loops are forced to wrap, helix-like, around the more sheared loops (which become the core of the flux rope). This process, first envisioned by \cite{vanballegooijen89}, has since been confirmed by numerical simulations \citep[e.g.][]{amari03, aulanier10, amari11, zuccarello15}. The overlying arcade loops keep the flux rope line-tied at the photosphere, forming a bald-patch separatrix surface \citep[BPSS;][]{titov93}. Where the helical field grazes the photosphere at the BPSS, the field lines are concave up, forming magnetic dips in which dense filament material can be supported along the PIL.
Dips occur at the center of the S-shaped field located at sites of flux cancellation \citep{titov99, savcheva12}. The flux cancellation process adds flux to the flux rope and, as the flux rope forms, it moves upward to maintain force balance. As a consequence of the partial detachment of the flux rope from the photosphere, the bald patch (BP) first splits into two BPs which progressively separate with time. The two associated BPSSs intersect at a coronal separator where magnetic reconnection is also forced to occur. Later on, as the flux rope rises, the BPs disappear and magnetic dips are then present only at the coronal level. The separator / intersecting BPSS is transformed into a hyperbolic flux tube (HFT) / quasi-separatrix layers (QSLs) \citep[e.g. Figure 4 of][]{aulanier10}. Therefore, during this entire process, converging photospheric motions with flux cancellation first impose reconnection of field lines at the photospheric level, then at the coronal level. This creates the envelope of the flux rope, further building it. Eventually, the overlying arcade can no longer hold down the flux rope due to the built-up magnetic pressure, and the flux rope erupts. \subsection{Local Evolution - Sites of Flux Cancellation} \label{s:Local_Evolution_Sites} AR 10977 exhibited significant photospheric flux cancellation along the internal or main PIL beginning early on 2007 December 5 (see Figure \ref{fig:global}(a)). The primary site of flux cancellation is in the northern region where the flux rope formed \citep{green11,savcheva12}. Photospheric plasma composition is first observed at the precise location of cancellation at the time when the flux curve in Figure \ref{fig:global}(a) is steepest, suggesting a fast rate of flux cancellation along the PIL. Composition ratio maps taken at 00:14, 06:30, and 11:39 UT coincide with the sharp fall in flux from approximately 00:00 to 15:00 UT on December 5. In these maps, the area in the corona containing plasma with photospheric composition extends further north-northwest, where the loops of the northern elbow are located. Over the same time period, a number of flaring events are observed in the enclosed XRT movie called `XRT$\_$movie.mp4'. These events are likely to be related to reconnection induced by the ongoing flux cancellation and the subsequent reorganization of the coronal field in the northern region. Loops containing photospheric composition rooted in and around the primary site of flux cancellation are able to reconnect with nearby loops, thereby transferring the plasma over a larger area, as observed by \emph{Hinode}/EIS. By the same mechanism, the plasma with photospheric composition contained in the flux rope is heated by small-scale reconnection events so that it appears in the temperature window observed by EIS. Flux cancellation at the secondary site (see Figure \ref{fig:mdi_series}) is weaker and begins somewhat later than at the primary site \citep{green11}. However, a similar evolution of plasma composition is observed in the far south of the active region. Figure \ref{fig:comp1} at 23:33 UT shows a region of photospheric plasma composition at the secondary flux cancellation site. Eruptive activity is observed in the southern region in the XRT movie during December 5, leading up to the EIS observation. The scenario is comparable to that in the northern section of the active region. Unfractionated plasma composition at the sites of flux cancellation is consistent with a BPSS topology where the field lines are tangential to the photosphere.
The simulations of AR 10977 carried out by \cite{gibb14} support the presence of a BPSS topology, as the flux rope forms very low down, at a height of 2 Mm. BPSSs are locations where current sheets form and reconnection takes place, albeit very low in the solar atmosphere. During reconnection at the BPSS, the energy released along the field lines causes heating and evaporation of photospheric plasma. After reconnection at the BPSS, photospheric plasma can be lifted into the corona as the concave-up BPSS field lines rise \citep{titov93,fletcher01,aulanier10}. This scenario for AR 10977 is similar to that reported in \cite{baker13}, where \emph{Hinode}/EIS observed photospheric plasma composition along the sigmoidal channel above the main PIL hours before a CME. The results of \cite{fletcher01} confirm the link between photospheric plasma composition, locations of flux cancellation, and a BPSS topology. They found that the elemental abundances measured in the transition region brightenings within a sigmoidal active region depend on the type of topological structure of the regions where the brightenings occur: coronal plasma with photospheric composition was associated with BPSSs, and coronal plasma with coronal composition with QSLs. \cite{savcheva12} (Figure 11) modelled the sigmoid/flux rope just prior to the composition ratio map at 12:03 UT and identified the locations of flux rope associated field line dips within AR 10977. One dip is located at the primary site of flux cancellation and the other is in the magnetic void, i.e. the region of low radiance in the band of photospheric composition that is indicated by the blue arrow at 12:03 UT on December 6 (Figure \ref{fig:comp2}). It is tempting to claim that the photospheric plasma composition is directly related to the magnetic dip at the BPSS identified by \cite{savcheva12}. However, it is equally plausible that the unfractionated plasma is related to the low level of coronal heating, as traced by the low temperature and density present there, implying a weaker flux of MHD waves and a low level of the fractionation process in the model developed by \citet{laming15}. Finally, early studies \cite[e.g.][]{spicer98} found that plasma composition in prominences (and therefore filaments) is photospheric. On December 5, photospheric composition was observed intermittently along the northern pathway of what would become the S-shaped filament observed on December 6 (Figures \ref{fig:comp1} and \ref{fig:comp2}), suggesting the possibility that \emph{Hinode}/EIS was observing the plasma of the filament cavity before filament formation in the northern section of the active region. \subsection{Local Evolution - Arcade Field} \label{s:Local_Evolution_Arcade} Within the central arcade field of the active region, highly fractionated plasma with coronal composition is observed during the period of accelerated flux cancellation early on December 5. The \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ ratio evolves to lower levels, though remaining coronal in composition, as the flux cancellation curve flattens at $\sim$15:00 UT. As the composition ratio decreases, the temperature of the plasma within the arcade field region also decreases. The concurrent evolution in composition and temperature suggests that reconnection induced by flux cancellation is also decreasing.
Enhanced levels of the low-FIP \ion{Fe}{16} relative to the high-FIP \ion{S}{13} in the arcade field connecting opposite polarities are predicted by the ponderomotive fractionation model of \cite{laming15} and supported by the simulations of \cite{dahlburg16}. A high Alfv\'en wave flux is expected while coronal reconnection induced by magnetic flux cancellation is ongoing. This is in contrast with the scenario described in Section \ref{s:Local_Evolution_Sites}, where reconnection occurs at the photospheric level. The Alfv\'en wave flux is then expected to fall off with the lower levels of induced reconnection, starting later on December 5 and continuing until after the eruptions. Plasma mixing is likely to occur as arcade loops reconnect with loops rooted in the vicinity of flux cancellation regions, thereby contributing to the decrease in \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ ratio values. In a similar fashion, coronal composition is observed in the bright, hot post-eruption loops at 06:37 UT on December 7; thereafter, the flare arcade fades, the temperature decreases, and the plasma composition becomes mixed, i.e. partially fractionated (red). \subsection{Local Evolution - Flux Rope} \label{s:Local_Evolution_Sigmoidal} \cite{green11} reported that the flux rope, as traced by coronal plasma, had formed approximately eight hours before the start of the failed eruption. The footpoints of the flux rope are identified by the white circles overplotted on the \emph{Hinode}/EIS map at 00:18 UT on December 7 (Figure \ref{fig:comp3}). At this stage, the plasma composition at each footpoint is predominantly photospheric, while the central portion of the flux rope contains partially fractionated plasma (red). Three hours later, after the failed eruption but before the CME, photospheric composition has spread along the entire length of the flux rope (at 03:27 UT in Figure \ref{fig:comp3}). Prior to and during the eruptive period, the sigmoid/flux rope expands and rises \citep[see Figure 9(a) of][]{gibb14}, allowing heated plasma with photospheric composition at the footpoints to expand into the increasing volume. Well before the eruption, reconnection at bald patches is also expected to bring new plasma with photospheric composition to the periphery of the forming flux rope. As the erupting system enters a phase of fast expansion \citep{aulanier10}, the photospheric plasma is accelerated into the flux rope so that more of the volume is filled, similar to what is observed at 03:27 UT. The dominantly photospheric plasma composition of the erupting sigmoid/flux rope is in agreement with the photospheric-origin material in erupting \citep[e.g.][]{widing86,feldman92,spicer98} and quiescent prominences/filaments \citep{parenti19}. \subsection{Global Evolution} \label{s:Global_Evolution} Locally, the spatial distribution and temporal evolution of plasma composition observed in AR 10977 support the scenario proposed by \cite{vanballegooijen89} that flux cancellation at the main PIL of a sheared arcade field leads to the formation of a flux rope. The global evolution of plasma composition within the active region also appears to be dominated by the processes of flux rope formation in flux cancellation models.
In Figure \ref{fig:global}(a), the mean active region \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ ratios within the temperature range $\log T_K$ = [6.3, 6.5] are compared to the flux and temperature evolution. The mean value of the composition ratio decreases by $\sim$30\% during the flux rope formation and eruptions, until a few hours after the CME when the active region is dominated by hot, post-eruption loops. Over the same time period, the magnetic flux decreased by a similar amount, $\sim$29\%. The parallel evolution of plasma composition and magnetic flux is consistent with the conclusions of \cite{baker18}. They found that the relative abundance of the low-FIP \ion{Si}{10} 258.38 $\mbox{\normalfont\AA}$ compared to the high-FIP \ion{S}{10} 264.22 $\mbox{\normalfont\AA}$ increases during the magnetic emergence phase and decreases during the magnetic decay phase for active regions ranging from ephemeral flux regions to the largest active regions. The strong correlation of composition with magnetic activity extends to solar-cycle time scales \citep{brooks17}. The mean temperature of the active region remained stable during the flux rope formation followed by the eruptive period, in agreement with the results of \cite{uu12}, who demonstrated that active region cores become fainter and less variable during the decay phase. \section{Conclusion}\label{s:conclusion} Locally and globally, the \emph{Hinode}/EIS plasma composition and temperature observations of AR 10977 strongly support the \cite{vanballegooijen89} model of flux rope formation by photospheric flux cancellation. We employed a new composition diagnostic, the \ion{Fe}{16} 262.98 $\mbox{\normalfont\AA}$/\ion{S}{13} 256.69 $\mbox{\normalfont\AA}$ ratio proposed by \cite{feldman09}, to investigate the formation and evolution of a flux rope in a sigmoidal active region. Our results demonstrate that plasma composition provides independent observational evidence to distinguish between the mechanisms of flux rope formation: those that form via reconnection in the corona \citep[e.g.][]{james17,james18} and those that form lower in the atmosphere via photospheric flux cancellation \citep[e.g.][]{vanballegooijen89,aulanier10}.
\section{Introduction} \label{sec:intro} Five-dimensional supersymmetric gauge theories have been the subject of long-standing interest, in light of their intertwined roles both as low-energy descriptions of SCFTs in 5d \cite{Seiberg:1996bd}, and as non-trivial quantum field theories engineered by M-theory compactification on a Calabi--Yau threefold \cite{Intriligator:1997pq,Lawrence:1997jr}. In particular, their stringy origin situates their Kaluza--Klein (KK) reduction on $\mathbb{R}^{4}\times S^1$ at the centre of a web of correspondences relating instanton counting to, {\it inter alia}, the topological A-model on resolutions of local CY3 singularities \cite{Katz:1996fh,Lawrence:1997jr}, a class of relativistic integrable models \cite{Nekrasov:1996cz,MR1090424,Fock:2014ifa}, a $q$-deformed version of the AGT correspondence \cite{Awata:2009ur,Nieri:2013yra,Nieri:2013vba}, and in special instances, the large~$N$ limit of Chern--Simons theory on non-trivial 3-manifolds and related matrix models \cite{Aganagic:2002wv,Borot:2014kda,Borot:2015fxa,Marino:2002fk}. \\ As for any four-dimensional theory with eight supercharges, the infrared physics on the Coulomb branch $\mathcal M_{\mathsf{C}}$ of the $\mathcal{N}=2$ KK theory on $\mathbb{R}^4 \times S^1$ is encoded in its prepotential, which governs the exact Wilsonian effective action of the gauge theory. At weak coupling and for classical gauge groups, one approach to compute this microscopically in the full $\Omega$-background is by using localisation, asymptotically in the Coulomb moduli \cite{Nekrasov:2002qd, Nekrasov:2003rj, Nakajima:2003pg, Nakajima:2005fg}. Alternatively, and equivalently, the $\Omega$-background prepotential coincides with the free energy of the refined A-model on the associated engineering CY3 geometry \cite{Iqbal:2007ii,Awata:2005fa,Awata:2008ed}. The spectacular results arising from direct instanton calculations come at a price, however: explicit instanton partition functions become unwieldy in general, the treatment of exceptional gauge groups requires some degree of guesswork and/or extrapolation from the classical cases, and one is {\it a fortiori} stuck in the S-duality frame corresponding to the instanton expansion/large volume limit of the engineering geometry. By the work of Seiberg and Witten, one way around this is to consider the realisation of the gauge theory prepotential from the special geometry of a family of spectral curves fibred over the Coulomb branch $\mathcal M_\mathsf{C}$ \cite{Seiberg:1994rs}. Restricting for simplicity to the setting of the pure gauge theory with gauge group $\mathcal{G}$, the affine part $\mathcal{C}_{\mathcal{G};u}=\{\mathsf{P}_{\mathcal{G};u}(\mu,\lambda)=0\}$ of the fibres of the family over a Coulomb moduli point $u\in\mathcal M_\mathsf{C}$ is given by the vanishing of a certain characteristic polynomial $\mathsf{P}_{\mathcal{G};u}(\mu,\lambda) \in \mathbb{C}[\mu,\lambda]$, and the gauge theory effective action is recovered from the special geometry relations \begin{equation} \frac{{\partial} \mathcal{F}_\mathcal{G}}{{\partial} a_i} = \frac{1}{2} \oint_{B_i} \log \mu ~\mathrm{d} \log \lambda, \qquad a_i = \frac{1}{2\pi \mathrm{i} }\oint_{A_i} \log \mu ~\mathrm{d} \log \lambda, \label{eq:specgeom} \end{equation} where $\{A_i, B_i\}_{i=1}^r$ is a duality-frame-dependent choice of a symplectic basis of integral homology 1-cycles on $\mathcal{C}_{\mathcal{G};u}$.
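For orientation, the rank-one specialisation of \eqref{eq:specgeom} (a standard piece of rigid special geometry, spelled out here purely for the reader's convenience rather than taken from any of the proposals below) reads
\begin{equation}
a = \frac{1}{2\pi \mathrm{i}}\oint_{A} \log \mu ~\mathrm{d} \log \lambda, \qquad a_D \coloneqq \frac{{\partial} \mathcal{F}_\mathcal{G}}{{\partial} a} = \frac{1}{2} \oint_{B} \log \mu ~\mathrm{d} \log \lambda, \qquad \tau = \frac{{\partial} a_D}{{\partial} a},
\end{equation}
so that the effective gauge coupling $\tau$ is obtained by differentiating the $B$-period with respect to the $A$-period as $u$ varies along the Coulomb branch.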
\medskip In principle, knowing $\mathsf{P}_{\mathcal{G};u}$ determines the prepotential of the gauge theory via the period integrals in \eqref{eq:specgeom}, but translating this into practice, especially when $\mathcal{G}\neq\mathrm{SU}(N)$, is a much different story. Even being able to write down the integrals (let alone compute them) in \eqref{eq:specgeom} requires a delicate analysis in the choice of a symmetric combination of homology 1-cycles $\{A_i, B_i\}_{i=1}^r$, which realises a projection to a distinguished Prym--Tyurin subvariety\footnote{We denote by $\overline{\mathcal{C}_{\mathcal{G};u}}$ the smooth completion (normalisation of the Zariski closure in $\mathbb{P}^2$) of the affine curve $\mathcal{C}_{\mathcal{G}; u}$. In particular $\overline{\mathcal{C}_{\mathcal{G};u}}$ is a compact Riemann surface.} of $\mathrm{Jac}(\overline{\mathcal{C}_{\mathcal{G};u}})$, see \cite{Martinec:1995by, Brini:2017gfi}. On top of this, the curves $\overline{\mathcal{C}_{\mathcal{G};u}}$ are usually non-hyperelliptic and of high genus on a dense open subset of $\mathcal M_\mathsf{C}$, {\it de facto} hampering a direct calculation of the period integrals. One possible perspective to overcome this impasse is offered by string theory engineering, wherein for $\mathcal{G}=\mathrm{SU}(N)$ the SW geometry gets identified with the Hori--Iqbal--Vafa mirror of a toric CY3 \cite{Kol:1997fv,Brandhuber:1997ua}. The periods \eqref{eq:specgeom} are then solutions of a generalised hypergeometric system of coupled PDEs given by the Gelfand--Kapranov--Zelevinsky (GKZ) system for the corresponding toric variety. These solutions can be effectively computed asymptotically around the large complex structure/weak gauge coupling point using Frobenius' method: for $N=2$ and fundamental matter, this method was brought to fruition in \cite{Eguchi:2000fv} (see also \cite{Closset:2021lhd}). The toric condition on the engineering geometry, however, confines this approach to unitary gauge groups, and the case $\mathcal{G}\neq \mathrm{SU}(N)$ requires fundamentally different ideas to be treated.\medskip In the present paper we address this problem by providing a systematic construction of Picard--Fuchs operators annihilating the periods \eqref{eq:specgeom} for a general simple gauge group $\mathcal{G}$. We will offer two constructions of such global D-modules over the Coulomb branch, and use them to test proposals for 5d SW curves arising from geometric engineering, brane constructions, and/or the theory of relativistic integrable systems. \subsubsection*{Picard--Fuchs ideals from Frobenius manifolds} Suppose first that $\mathcal{G}$ is of type ADE. We propose that SW periods are, in the terminology of \cite{MR2070050}, the {\it odd} periods of the canonical Frobenius manifold structure on the orbits of the Dubrovin--Zhang extension of the affine Weyl group of type $\mathcal{G}$ in the reflection representation \cite{MR1606165}, which was recently explicitly constructed in \cite{Brini:2017gfi,Brini:2021pix}. This is a natural generalisation of an idea of Dubrovin \cite{MR2070050} (see also \cite{Eguchi:1996nh,Ito:1997ur} for previous work on this), where the polynomial Frobenius manifold of type ADE played a similar role in the reconstruction of the SW periods for four-dimensional $\mathcal{N}=2$ super Yang--Mills with simply-laced gauge symmetry.
In particular we propose that the Picard--Fuchs ideal annihilating the physical periods of the SW curve can be read off from a suitable specialisation of the quantum differential equations for the associated Dubrovin--Zhang Frobenius manifold. Schematically, and in close analogy to \cite{Eguchi:1996nh,Ito:1997bd,Ito:1997ur,Ito:1998zr}, we propose that there exists a distinguished holomorphic chart $\{t_i(u)\}_{0\leq i \leq r}$ on\footnote{The second factor accounts for the complexified radius of the fifth-dimensional circle, or rather its product with the holomorphic energy scale in the gauge theory.} $\mathcal M_\mathsf{C}\times \mathbb{P}^1$ such that the periods \eqref{eq:specgeom} satisfy a holonomic system of Fuchsian PDEs of the form \begin{eqnarray} \left({\partial}_{t_0}+\sum_i \mathfrak{q}_i t_i {\partial}_{t_i}\right)^2 \Pi &=& 4 h_{\mathfrak{g}}^2 {\partial}_{t_r}^2 \Pi, \nn \\ {\partial}^2_{t_i t_j} \Pi &=& \mathsf{C}_{ij}^k {\partial}^2_{t_k t_r} \Pi. \label{eq:PFintro} \end{eqnarray} In \eqref{eq:PFintro}, $\{\mathfrak{q}_i\}_{i=1}^r$ are the coefficients of the highest root of the root system of type $\mathcal{G}$ in the basis of simple roots, $h_{\mathfrak{g}}$ is the Coxeter number, the coordinates $t_i$ are a homogeneous choice of flat coordinates for the associated affine Weyl Frobenius manifold \cite{MR1606165,Brini:2021pix}, and $\mathsf{C}_{ij}^k$ are the structure constants of the Frobenius product in those coordinates. From the engineering point of view, and as an application to local mirror symmetry/(orbifold) Gromov--Witten theory, the Picard--Fuchs ideals \eqref{eq:PFintro} specialise to the GKZ system for the associated $Y^{N,0}$ toric singularities considered in \cite{Eguchi:2000fv,Brini:2008rh} for $\mathcal{G}=\mathrm{SU}(N)$. When $\mathcal{G}\neq\mathrm{SU}(N)$, they generalise these to the non-toric singularities given by orbifolds of the singular conifold by a finite group action $\Gamma \subset \mathrm{SL}(2,\mathbb{C})$ which is McKay-dual to $\mathcal{G}$. \subsubsection*{Picard--Fuchs ideals from Jacobi rings} When $\mathcal{G}$ is non-simply-laced, a naive application of the above approach fails. The reasons are already well-known from the study of the 4d setup, where the auxiliary Frobenius manifold would arise from the spectral curves of the non-relativistic periodic Toda lattice. It was found shortly after the work of Seiberg--Witten \cite{Martinec:1995by} that the integrable system relevant for $\mathcal{N}=2$ super Yang--Mills theory with gauge group $\mathcal{G}$ is the Toda lattice associated to the {\it twisted} Kac--Moody algebra $(\mathcal{G}^{(1)})^\vee$, whereas the construction in \cite{MR1606165,Brini:2021pix} of Frobenius manifolds on orbits of extended affine Weyl groups pertains to its untwisted counterpart. Unfortunately, there turns out to be no strict analogue of the Frobenius manifolds of \cite{MR1606165} in the twisted Kac--Moody world, as the would-be Frobenius metric constructed from the spectral curve becomes either degenerate, or curved, in that setting. \\ Motivated by work on associativity equations for 4d prepotentials in \cite{Bonelli:1996qh,Marshakov:1996ae,Marshakov:1997ny}, we propose an alternative method to construct Picard--Fuchs systems in the form \eqref{eq:PFintro} for the pure five-dimensional gauge theory on $\mathbb{R}^{4}\times S^1$ in a completely algebraic fashion.
Our procedure takes as its input datum the polynomial $\mathsf{P}_{\mathcal{G}}$ defining the five-dimensional spectral curve: the tensor $\mathsf{C}$ is identified in this context with the structure constants of a canonical subring of the algebra of regular functions on the zero-dimensional scheme corresponding to the ramification locus of the curve. Its (non-trivial) existence and explicit construction is reduced to a problem in commutative algebra, which we solve for all $\mathcal{G}$ and for all different realisations of Seiberg--Witten geometries when more than one is available (such as for non-simply-laced classical groups). By the mirror theorem of \cite{Brini:2021pix}, for simply-laced $\mathcal{G}$ this restricts to the structure constants of the affine-Weyl Frobenius algebras mentioned previously. Owing to the absence of an underlying Frobenius metric for non-simply-laced $\mathcal{G}$, there is now no natural notion of ``flat'' coordinates $t_i(u)$, and it is a priori unclear how to fix a privileged chart for writing down something like \eqref{eq:PFintro}. We claim that the constraints of rigid special K\"ahler geometry, and in particular the existence of a prepotential, are in fact sufficient to uniquely pin down an analogous coordinate frame $t_i(u)$ for general $\mathcal{G}$, without reference to an underlying flat metric. \subsubsection*{Matching with the gauge theory, and an integrable systems surprise} Using Frobenius' method, the above constructions provide an efficient way of computing the prepotential given the SW/B-model mirror curve, and therefore the putative five-dimensional prepotentials for the corresponding pure gauge theory, around any boundary point in moduli space, and in particular at infinity in the Coulomb branch. \\ We put this to the test in a wide array of cases. When the SW geometry arises from geometric engineering \cite{Katz:1996fh,Borot:2015fxa} or brane constructions in M-theory \cite{Kol:1997fv,Hayashi:2017btw,Li:2021rqr}, we show that our constructions recover the microscopic gauge theory results with flying colours. We match the perturbative and instanton parts of the prepotential with the expressions arising from the K-theoretic version of the Nakajima--Yoshioka blow-up equations \cite{Nakajima:2005fg}, as well as with their extrapolation to general gauge groups \cite{Keller:2012da}. We also apply our proposal to the SW curves for classical non-simply-laced groups obtained from string theory constructions with orientifold planes \cite{Brandhuber:1997ua,Hayashi:2017btw,Li:2021rqr}. As a byproduct of our construction we show, in type $B_n$, that the curves proposed by \cite{Brandhuber:1997ua} from an M-theory uplift of the Hanany--Witten construction with orientifold planes correctly reproduce the instanton prepotential for $\mathrm{SO}(2n+1)$ gauge groups; and in type $C_n$, we confirm a tension between the results of \cite{Brandhuber:1997ua} for $n=1$ and more recent proposals of $\mathrm{Sp}(1)$ curves with discrete $\theta$-angle at $\theta=\pi$ in \cite{Hayashi:2017btw,Li:2021rqr}. 
Although the two curves are closely related, and their periods are shown to be annihilated by the differential ideal generated by the second line of \eqref{eq:PFintro} for the same choice of structure constants $\mathsf{C}_{ij}^k$, in this rank-1 case determining the full expression for the periods requires fixing a finite-dimensional ambiguity akin to imposing a quasi-homogeneity condition as in the first line of \eqref{eq:PFintro}: our calculations show that this is done differently for the curves of \cite{Brandhuber:1997ua} on the one hand and those of \cite{Hayashi:2017btw,Li:2021rqr} on the other, ruling out the former and confirming the correctness of the latter with a direct prepotential calculation. We also match our B-model construction applied to the $\theta=0$ version of the brane web geometries with an O5-plane of \cite{Hayashi:2017btw,Li:2021rqr} against the corresponding instanton calculation of the prepotential -- again finding perfect agreement. \medskip We furthermore apply our construction to the spectral curves of the periodic relativistic Toda chain on the Langlands dual group of the affine Poisson--Lie group $\mathcal{G}^{(1)}$ \cite{MR1993935}, which are natural relativistic deformations of the well-known four-dimensional SW curves for the $\mathcal{N}=2$ theory with gauge group $\mathcal{G}$ \cite{Martinec:1995by}, and which have already implicitly appeared, via the usual procedure of Dynkin folding, in the context of geometric engineering on local CY singularities \cite{Borot:2015fxa}. For non-simply-laced cases these are formally different from the curves obtained from string engineering -- and for exceptional $\mathcal{G}$, they are to our knowledge the only candidates available so far for a B-model/SW description of the low energy theory. We show that our construction is (non-trivially) well-defined in this context as well: there is an $(r+1)$-dimensional vector subspace of the ring of regular functions of the branch locus of the SW curve, non-trivially closing under multiplication to a subring with structure constants $\mathsf{C}_{ij}^k$, $i,j,k=0,\dots, r$, and once again a distinguished chart in the sense of \eqref{eq:PFintro} is shown to exist. We then proceed to solve the resulting Picard--Fuchs system at large complex structure: for non-simply-laced groups the comparison with the gauge theory result surprisingly {\it fails} in this case, away from the divisor in the Coulomb branch corresponding to the non-relativistic/4d limit, already for the perturbative prepotential. We corroborate our findings with an explicit residue calculation of the triple derivatives of the 1-loop prepotential from the SW curve, which we find in agreement with the calculation from our proposed Picard--Fuchs system. It thus remains an open problem to identify the correct integrable system counterpart of non-simply-laced gauge theories on $\mathbb{R}^4 \times S^1$, and, for exceptional non-simply-laced groups, to determine their SW geometry. \subsubsection*{Organisation of the paper} The paper is structured as follows. In \cref{sec:intro5d} we give a review of instanton counting and blow-up equations in four and five dimensions on one hand, and five-dimensional Seiberg--Witten curves from string theory engineering and integrable systems on the other.
Then, in \cref{sec:ade}, we formulate our B-model approach to compute the gauge theory prepotential for simply-laced Lie algebras from the odd periods of the corresponding extended affine Frobenius manifold, and present detailed tests of our proposal in low rank. In \cref{sec:bcfg} we generalise our construction to non-simply-laced examples, and provide an extended set of examples supporting its validity. A summary and prospects for future work are discussed in \cref{sec:conclusion}. We collect the notation used throughout the text in \cref{tab:notation} for the reader's convenience. \subsubsection*{Acknowledgements} We acknowledge discussions and correspondence with G.~Bonelli, K.~Ito, H.~Kanno, A.~Klemm, P.~Su\l kowski, A.~Tanzini, and F.~Yagi. We are especially grateful to F.~Yagi for his very helpful comments about the content of \cref{sec:c1pi}. This project has been supported by the Engineering and Physical Sciences Research Council under grant agreement ref.~EP/S003657/2. The work of KO is also in part supported by the TEAM programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund (POIR.04.04.00-00-5C55/17-00). \newpage \begin{table}[!h] \centering \begin{tabular}{|c|p{10cm}|p{2.5cm}|} \hline Symbol & Meaning & First instance \\ \hline \hline $\mathcal{G}$; $\mathcal{G}^{(1)}$ & The compact form of a simple, simply-connected Lie group; the corresponding affine Lie group. & \cref{sec:intro} \\ \hline $\mathfrak{g}$/$\mathfrak{g}^{(n)}$ & The simple complex Lie algebra $\mathrm{Lie}_\mathbb{C}(\mathcal{G})$; the corresponding affine version in Kac notation with twisting order $n$. & \cref{sec:intro}; \cref{sec:SWtwToda}. \\ \hline $\mathcal{K}(\mathfrak{g})/\mathcal{K}(\mathfrak{g}^{(n)})$ & The Cartan matrix of the simple/(twisted)-affine Lie algebra $\mathfrak{g}$/$\mathfrak{g}^{(n)}$ & \eqref{eq:pbg} \\ \hline $\mathcal{A}(\mathfrak{g})$; $\mathcal{A}(\mathfrak{g}^{(1)})$ & A simply-laced simple (resp. affine) Lie algebra, from which $\mathfrak{g}$ (resp. $(\mathfrak{g}^{(1)})^\vee$) is obtained as a quotient by outer automorphisms. & \cref{sec:SWtwToda} \\ \hline $\Delta$/$\Delta^+$/$\Delta_\ell$/$\Delta_\ell^+$ & The set of all/positive/long/long positive roots of $\mathfrak{g}$ & \cref{sec:intro5d} \\ \hline $\rho_{[i_1\dots i_r]} $ & The irreducible representation of $\mathcal{G}$ with highest weight $\omega=[i_1\dots i_r]$ in Dynkin notation & \cref{sec:SWtwToda}\\ \hline $\omega_i$ (resp. $\rho_i$) & The $i^{\rm th}$ fundamental weight (resp. representation) of $\mathcal{G}$ & \cref{sec:intro5d} \\ \hline $\mathfrak{q}_i$, $\mathfrak{q}^\vee_i$ & The Coxeter (resp. dual Coxeter) coefficients of $\mathfrak{g}$ & \eqref{eq:PFintro} \\ \hline $\mathcal{F}_{\mathcal{G}}$, $\mathcal{F}_{\mathcal{G}}^{[n]}$ & The prepotential (resp. the $n^{\rm th}$ instanton contribution to it) of the $\mathcal{N}=2$ pure theory with gauge group $\mathcal{G}$ on $\mathbb{R}^{1,3} \times S^1_{R_5}$. & \eqref{eq:prep5d}--\eqref{eq:prepnp} \\ \hline $\mathsf{R}_{\mathcal{G},\rho; u}$, $\widetilde{\mathsf{R}}_{\mathcal{G};u}$ & The characteristic (resp. reduced) polynomial of the Lax operator for the untwisted affine relativistic Toda chain in the representation $\rho$ & \eqref{eq:laxaff2} \\ \hline $\mathsf{T}_{\mathcal{G},\rho; u}$, $\widetilde{\mathsf{T}}_{\mathcal{G},\rho;u}$ & The characteristic (resp.
reduced) polynomial of the Lax operator for the twisted affine relativistic Toda chain in the representation $\rho$ & \eqref{eq:laxtw} \\ \hline $\mathsf{Q}_{\mathcal{G};u}$, $\widetilde{\mathsf{Q}}_{\mathcal{G};u}$ & The defining (resp. reduced) polynomial of the Hanany--Witten M-theory curve for type $\mathcal{G}$ & \eqref{eq:QAn}--\eqref{eq:QCnb} \\ \hline $\mathsf{P}_{\mathcal{G};u}$; $\widetilde{\mathsf{P}}_{\mathcal{G};u}$ & One of $\mathsf{Q}$, $\mathsf{R}$, $\mathsf{T}$; resp. $\widetilde{\mathsf{Q}}$, $\widetilde{\mathsf{R}}$, $\widetilde{\mathsf{T}}$ above & \cref{sec:intro} \\ \hline $(u_0, \dots, u_r)$ & Ad-invariant classical coordinates on the Coulomb branch & \cref{sec:intro5d} \\ \hline $(q_0, \dots, q_r)$ & Exponentiated A-model Coulomb moduli & \cref{sec:intro5d} \\ \hline $(z_0, \dots, z_r)$ & B-model coordinates around the large complex structure point & \eqref{eq:utoz} \\ \hline $(t_0, \dots, t_r)$ & Flat coordinates of $M^{\rm trig}_\mathfrak{g}$; the distinguished coordinates of \eqref{eq:PF5prodgen} & \eqref{eq:PFintro} \\ \hline $\mathcal{V}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$ & The canonical subring of the space of regular functions on the ramification locus spanned by $\{{\partial}_{t_i} \widetilde{\mathsf{P}}_{\mathcal{G};u}/{\partial}_{t_r} \widetilde{\mathsf{P}}_{\mathcal{G};u}\}$ & Section~\ref{sec:Jacobi alg} \\ \hline $\mathrm{GB}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$ & A reduced Gr\"obner basis for $\mathcal{D}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$ & Section~\ref{sec:Jacobi alg} \\ \hline $\mathsf{C}_{ij}^k$ & Structure constants of $\mathcal{V}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$ in the $t$-chart & \eqref{eq:PFintro} \\ \hline $M^{\rm trig}_\mathfrak{g}$; $F^{\rm trig}_\mathfrak{g}$ & The extended affine Frobenius manifold of type $\mathfrak{g}$; its prepotential & \cref{sec:Frobenius} \\ \hline \end{tabular} \caption{Notation employed throughout the text.} \label{tab:notation} \end{table} \newpage \section{Gauge theories and instanton counting in five dimensions} \label{sec:intro5d} The field content of the minimally supersymmetric Yang--Mills theory with gauge group $\mathcal{G}$ in five dimensions is given by a gauge field $A_\mu$ together with a Dirac spinor $\lambda$ and a real scalar $\phi$, both in the adjoint representation of $\mathcal{G}$. Upon compactification on a circle of radius $R_5$ to $\mathbb{R}^4 \times S_{R_5}^1$, the (classical/quantum) moduli space of the theory is parametrised by the (classical/quantum) vev of the complexified scalar $\varphi \coloneqq \phi+\mathrm{i} A_5$. Fixing a set of linear generators $\{h_i\}_{i=1}^r$ for the Lie algebra of the maximal torus of $\mathcal{G}$ and in a diagonal gauge for $\varphi$, we shall write these respectively as \begin{equation} \left\langle \varphi \right\rangle_{\rm cl} = \sum_{i=1}^r a_i^{\rm cl} h_i, \quad \left\langle \varphi \right\rangle = \sum_{i=1}^r a_i h_i, \label{eq:avar} \end{equation} and write $q_i^{\rm cl} \coloneqq {\rm e}^{2 \pi \mathrm{i} R_5 a_i^{\rm cl}}$, $q_i \coloneqq {\rm e}^{2 \pi \mathrm{i} R_5 a_i}$ for the corresponding exponentiated linear coordinates on the Cartan torus. An alternative set of coordinates which arises naturally in B-model approaches to $\mathcal{N}=2$ theories is given by the classical vev of (the conjugacy class of) the complexified Wilson loop $g=P\exp\int_{S^1_{R_5}} (-\mathrm{i} \varphi)$.
A choice of $r$ independent ${\rm Ad}$-invariant holomorphic functions on $\mathcal{G}$ then fixes a chart on the classical Coulomb branch, a natural one being given by the fundamental traces \begin{equation} u_i \coloneqq \left\langle \mathrm{Tr}\,_{\rho_i} P\exp\int_{S^1_{R_5}} \left(A_5-\mathrm{i} \phi\right)\right\rangle_{\rm cl}, \quad i=1, \dots, r, \label{eq:uvar} \end{equation} where $\rho_i$ is the irreducible representation having the $i^{\rm th}$ fundamental weight $\omega_i$ of $\mathcal{G}$ as its highest weight. By \eqref{eq:avar} and their definition in \eqref{eq:uvar}, the $u$-coordinates are Weyl-invariant integral Laurent polynomials in $(q_1^{\rm cl}, \dots, q_r^{\rm cl})$: \begin{equation} u_i \in \mathbb{Z}\left[(q_1^{\rm cl})^{\pm 1}, \dots, (q_r^{\rm cl})^{\pm 1}\right]. \end{equation} Treating further the compactification radius $R_5$ as a free parameter, there is an additional dimensionless modulus compared to the usual four-dimensional theory, given by \begin{equation} u_0 \coloneqq \left(\frac{1}{R_5 \Lambda_{\rm QCD}}\right)^{2 h_\mathfrak{g}}, \label{eq:u0} \end{equation} where $\Lambda_{\rm QCD} = \Lambda_{\rm UV}{\rm e}^{-\frac{1}{4 g^2_{\rm YM}(\Lambda_{\rm UV})}}$ is the dynamical gauge theory scale, and $h_\mathfrak{g}$ is the dual Coxeter number. We will often write $q_0 \coloneqq u_0^{-1}$ for its inverse, so that $q_0 \to 0$ corresponds to the perturbative limit of the gauge theory. \medskip When $R_5=\infty$, the prepotential of the theory was shown by Intriligator--Morrison--Seiberg (IMS) to be exact at one loop \cite{Intriligator:1997pq}, and it takes the form of a cubic polynomial in the real scalars $\varphi$, \begin{equation} \mathcal{F}_\mathcal{G}\Big|_{R_5=\infty} = \frac{1}{2 g_{\rm YM}^2} \mathsf{g}_{ij} \varphi^i \varphi^j + \frac{k_{\rm CS}}{6} \mathsf{d}_{ijk} \varphi^i \varphi^j \varphi^k+ \frac{1}{6}\sum_{\alpha \in \Delta} \theta(\alpha(\varphi)) \alpha(\varphi)^3 \label{eq:prep5d} \end{equation} where $\mathsf{g}_{ij}$ and $\mathsf{d}_{ijk}$ are respectively the Killing pairing and the cubic Casimir form for $\mathfrak{g}$, $k_{\rm CS} \in \mathbb{Z}$ is the 5-dimensional Chern--Simons level, and $\theta(x)$ denotes Heaviside's step function. Upon compactification on $S^1_{R_5}$, the prepotential receives, at finite $u_0$, perturbative corrections from an infinite tower of excited Kaluza--Klein states, as well as non-perturbative instanton corrections. The former are resummed as \cite{Nekrasov:1996cz} \begin{equation} \mathcal{F}_\mathcal{G}^{[0]} = -\frac{\mathrm{i}}{8\pi^3} \sum_{\alpha \in \Delta} \left[ \left(2\pi \mathrm{i} R_5 \alpha(a)\right)^2 \frac{\log q_0}{2}+ \mathrm{Li}_3\left({\rm e}^{2\pi \mathrm{i} R_5 \alpha(a)}\right)\right] \label{eq:preppert} \end{equation} while the latter give rise to an infinite sum of the form \begin{equation} \mathcal{F}_\mathcal{G}^{\rm np}= \sum_{n>0} q_0^n \mathcal{F}_\mathcal{G}^{[n]}(q_1, \dots, q_r), \label{eq:prepnp} \end{equation} with $\mathcal{F}_\mathcal{G}^{[n]} \in (\mathrm{i}/2\pi)^3 \mathbb{Q}[[q_1, \dots, q_r]]$. \subsection{A-model: K-theoretic instanton counting and blow-up equations} \label{sec:blowup} We start off by giving a quick summary of instanton partition functions in four and five dimensions, and of the generalised blow-up equations \cite{Nakajima:2005fg,Keller:2012da} that recursively determine them. \subsubsection{Instanton counting \`a la Nekrasov: the $\mathrm{SU}(r+1)$ case} Let $R_5 = 0$.
By $\mathcal{N}=2$ supersymmetry, the path integral calculation of non-perturbative contributions to the prepotential localises on instantons -- anti-self-dual connections on a principal $\mathcal{G}$-bundle on $S^4 \simeq \mathbb{R}^4 \cup \{ \mathrm{pt} \}$ with fixed second Chern character, modulo gauge equivalence. For $\mathcal{G}=\mathrm{SU}(r+1)$ \cite{Nekrasov:2002qd}, this space has an algebraic compactification to the framed moduli space $\mathcal{M}(r,n)$ of torsion-free sheaves on $\mathbb{P}^2$ of rank $r+1$ and $\langle c_2(E),[\mathbb{P}^2]\rangle=n$: given a pair of integers $(r,n)$, $\mathcal{M}(r,n)$ parametrises isomorphism classes of pairs $(E,\Phi)$ such that \cite{Nakajima:2003pg}: \begin{enumerate} \item $E$ is a torsion-free sheaf with $\mathrm{rank}\, E=r+1$ and $\langle c_2(E),[\mathbb{P}^2]\rangle=n$, which is locally free in a neighbourhood of $\ell_{\infty}=\mathbb{P}^2\backslash \mathbb{C}^2$, \item $\Phi:E|_{\ell_{\infty}}\xrightarrow{\sim}\mathcal{O}^{\oplus r+1}$ is an isomorphism, which implies $c_1(E)=0$. \end{enumerate} $\mathcal M(r,n)$ is a $2n(r+1)$-dimensional smooth quasi-projective complex variety, with the open subset $\mathcal M^{\text{reg}}_0(r,n)$ of locally free sheaves coinciding with the moduli space of instantons on $S^4$ of rank $r+1$ and $c_2=n$ \cite{Donaldson:1984tm}. It also carries a $\tilde T\coloneqq (\mathbb{C}^\star)^2\times (\mathbb{C}^\star)^{r}$ action coming from the scaling actions on $\mathbb{P}^2$ and the maximal torus of $\mathrm{SU}(r+1)$, with zero-dimensional fixed loci $\mathcal M^{\tilde T}(r,n)$. Let $\mathbb{Q}(\epsilon_1, \epsilon_2; a_1, \dots, a_r)$ be the field of fractions of $H_{\tilde T}({\rm pt}) = H(B\tilde T, \mathbb{Q})\simeq \mathbb{Q}[\epsilon_1, \epsilon_2; a_1, \dots, a_r]$. The instanton partition function is defined by the Atiyah--Bott formula as the equivariant volume \begin{equation} \mathcal{Z}_{\mathcal{G}}^{\text{np,4}d}\coloneqq \sum_{n\geq0}q_0^n\int_{\mathcal M^{\tilde T}(r,n)} \frac{1}{{\rm e}_{\tilde T}\big(N_{\mathcal M^{\tilde T}(r,n)/\mathcal M(r,n)}\big)} \in \mathbb{Q}(\epsilon_1, \epsilon_2, a_1, \dots, a_r). \label{Z4d1} \end{equation} The localisation formula reduces the computation of $\mathcal{Z}_{\mathcal{G}}^{\text{np,4}d}$ to a combinatorial question involving sums over 2D partitions; see \cite{Nakajima:2003pg,Nakajima:2003uh} for explicit formulas. It was conjectured in \cite{Nekrasov:2002qd} and proved in \cite{Nakajima:2003pg,Nekrasov:2003rj} that the 4d prepotential computed from the Seiberg--Witten curve coincides with the logarithm of the instanton partition function in the non-equivariant/flat $\Omega$-background limit, $ \mathcal{F}_\mathcal{G}^{\text{4d}}=\lim_{\epsilon_1,\epsilon_2\to0}{\epsilon_1\epsilon_2}\log \mathcal{Z}_{\mathcal{G}}^{\text{np,4d}}$. A natural generalisation to five compactified dimensions was also addressed by Nekrasov \cite{Nekrasov:2002qd} and later proven by Nakajima--Yoshioka \cite{Nakajima:2005fg}, by uplifting \eqref{Z4d1} to equivariant $K$-theory: \begin{align} \mathcal{Z}_{\mathcal{G}}^{\text{np,5}d}=&\sum_{n\geq0}\left(q_0 {\rm e}^{-rR_5(\epsilon_1+\epsilon_2)/2}\right)^nZ^{\text{np,5}d}_{\mathcal{G},n},\\ Z^{\text{np,5}d}_{\mathcal{G},n}=&\,\sum_i (-1)^i \text{ch}\, H^i(\mathcal M(r,n),\mathcal{O}),\label{Z5d1} \end{align} where ch denotes the Hilbert series \cite[Section 4.1]{Nakajima:2003pg}.
Accordingly, the non-perturbative part of the prepotential \eqref{eq:prepnp} is given by \begin{equation} \mathcal{F}_\mathcal{G}^{\text{np}}=\lim_{\epsilon_1,\epsilon_2\to0}\epsilon_1\epsilon_2\log \mathcal{Z}_{\mathcal{G}}^{\text{np,5d}}.\label{FandZ5d} \end{equation} \subsubsection{Blow-up equations} An efficient tool to evaluate \eqref{Z5d1} was put forward by \cite{Nakajima:2003pg}, in the form of a comparison formula between the instanton partition function on $\mathbb{P}^2$ and the one on its successive blow-ups at points. The upshot is a recursive relation for $ \mathcal{Z}_{\mathcal{G}}^{\text{np,5}d}$ in terms of the instanton number $n$. Let $\mathbb{F}_1$ be the first Hirzebruch surface, \begin{equation} \mathbb{F}_1=\{([z_0,z_1,z_2],[x,y])\in\mathbb{P}^2\times\mathbb{P}^1 \,|\, z_1y=z_2x\}. \end{equation} We denote by $C$ the exceptional divisor defined by $z_0=0$, by $\mathcal{O}(C)$ the line bundle associated with the divisor $C$, and by $\mathcal{O}(m C)$ its $m$-th tensor power for some $m\in\mathbb{Z}_{\geq0}$, and consider the framed moduli space $\widehat \mathcal M(r,n,k)$ of torsion-free sheaves $(E,\Phi)$ on $\mathbb{F}_1$, where \begin{equation} \left\langle c_1(E),[C]\right\rangle=-k,\;\;\;\;\left\langle c_2(E)-\frac{r}{2(r+1)}c_1(E)^2,[\mathbb{F}_1]\right\rangle=n. \end{equation} Although $n$ is not an integer in general, $\widehat \mathcal M(r,n,k)$ is nonsingular of dimension $2n(r+1)$, and it carries a $(\mathbb{C}^\star)^2\times (\mathbb{C}^\star)^r$ torus action as in the previous section. The corresponding $K$-theoretic instanton partition function for $\mathbb{F}_1$, defined for each tensor power $\mathcal{O}(m C)$, is \begin{equation} \hat Z_{\mathrm{SU}(r+1),m,k}^{\text{np},5d}=\sum_{n\geq0}\left(q_0 {\rm e}^{-rR_5(\epsilon_1+\epsilon_2)/2}\right)^n \text{ch}\, \sum_i (-1)^i H^i(\widehat \mathcal M(r,n,k),\mathcal{O}(mC)).\label{Z5d2} \end{equation} In terms of $\hat Z_{\mathrm{SU}(r+1),m,k}$, Nakajima--Yoshioka \cite{Nakajima:2005fg} establish a finite difference equation in the equivariant parameters and Coulomb moduli, as follows. Let $\mathcal{G}=\mathrm{SU}(r+1)$ and let $\vec{a}\in \mathbb{R}^{r+1} $ with $\sum_{i}a_i=0$ be an element of the Cartan subalgebra of $\mathfrak{g}=\mathfrak{sl}_{r+1}$, and further write $K^{\vee}$ for the special linear coroot lattice, $K^{\vee}=\{\vec{k}\in\mathbb{Z}^{r+1}|\sum_ik_i=0\}$.
Then for $k=0$, $\hat Z_{\mathcal{G},m,0}^{\text{np},5d}$ satisfies \begin{align} \hat Z_{\mathcal{G},m,0}^{{\rm np},5d}=&\sum_{\vec{k}\in K^{\vee}}\frac{\left(q_0 {\rm e}^{R_5(\epsilon_1+\epsilon_2)(m-h_{\mathfrak{g}}^{\vee}/2)}\right)^{(\vec{k},\vec{k})/2}{\rm e}^{m(\vec{k},\vec{a})}}{\prod_{\alpha\in\Delta}l_{\alpha}^{\vec{k}}(\epsilon_1,\epsilon_2,\vec{a})}\nonumber\\ &\times Z_{\mathcal{G}}^{{\rm np},5d}(\epsilon_1,\epsilon_2-\epsilon_1,\vec{a}+\epsilon_1 \vec{k};{\rm e}^{R_5\epsilon_1(m-h_{\mathfrak{g}}^{\vee}/2)}q_0,R_5)\nonumber\\ &\times Z_{\mathcal{G}}^{{\rm np},5d}(\epsilon_1-\epsilon_2,\epsilon_2,\vec{a}+\epsilon_2 \vec{k};{\rm e}^{R_5\epsilon_2(m-h_{\mathfrak{g}}^{\vee}/2)}q_0,R_5),\label{blowup1} \end{align} where \begin{equation} l_{\alpha}^{\vec{k}}(\epsilon_1,\epsilon_2,\vec{a})= \begin{dcases} \prod_{\substack{i,j\geq 0\\i+j\leq-(\vec{k},\alpha)-1}}\left(1-{\rm e}^{R_5(i\epsilon_1+j\epsilon_2-(\vec{a},\alpha))}\right) & {\rm if}\;\; (\vec{k},\alpha)<0,\\ \prod_{\substack{i,j\geq 0\\i+j\leq(\vec{k},\alpha)-2}}\left(1-{\rm e}^{R_5(-(i+1)\epsilon_1-(j+1)\epsilon_2-(\vec{a},\alpha))}\right) & {\rm if}\;\; (\vec{k},\alpha)>1,\\ \hspace{10mm}1 & {\rm otherwise.} \end{dcases} \end{equation} Furthermore, for all $m\in\{0,1,\dots,r+1\}$, we have that \begin{equation} \hat Z_{\mathcal{G},m,0}^{{\rm np},5d} = Z_{\mathcal{G}}^{{\rm np},5d}.\label{blowup2} \end{equation} Note that $h_{\mathfrak{sl}_{r+1}}^{\vee}=r+1$ for $\mathrm{SU}(r+1)$. From \eqref{blowup1}, it is possible to obtain a recursion for $Z_{\mathcal{G},n}^{\text{np,5}d}$, i.e. the expansion coefficients in $q_0$ in \eqref{Z5d1}; see \cite{Nakajima:2005fg}. \subsubsection{General gauge groups} The discussion above is restricted to the case of $\mathcal{G}=\mathrm{SU}(r+1)$, and except for the classical gauge groups, where an ADHM construction is known \cite{Nekrasov_2004,Marino_2004,Fucito_2004}, it is a priori unclear how to deduce an immediate generalisation of \eqref{blowup1} for other Lie types. However, it is not hard to extrapolate \eqref{blowup1} to general simple, simply-connected $\mathcal{G}$, as the r.h.s. is entirely written in terms of purely root-theoretic data \cite{Keller:2012da}: indeed, replacing the type~A root system of \eqref{blowup1} with the one of an arbitrary simple Lie algebra $\mathfrak{g}$ leads to formulas for 1-instanton partition functions which are consistent with supersymmetric index calculations \cite{Benvenuti_2010,Gaiotto:2009jjh,Keller_2012}.
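As a concrete illustration of how such root-theoretic expressions are evaluated in practice, the following minimal \texttt{sympy} sketch (our own illustration, not taken from the references: the root realisation, the normalisation $(\alpha,\alpha)=2$ for long roots, and all variable names are ours) evaluates the 1-instanton prepotential contribution \eqref{1instanton}, derived below, for $\mathcal{G}=\mathrm{SU}(3)$:
\begin{verbatim}
# Illustrative sketch (ours): evaluate the 1-instanton term of
# eq. (1instanton) for G = SU(3), roots in the standard hyperplane of R^3.
import itertools
import sympy as sp

a1, a2 = sp.symbols('a1 a2')
a = (a1, a2, -a1 - a2)                    # traceless Cartan element of sl(3)

# Roots e_i - e_j (i != j); in type A all roots are long, (alpha,alpha) = 2
E = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
roots = [tuple(u - v for u, v in zip(E[i], E[j]))
         for i, j in itertools.permutations(range(3), 2)]

dot = lambda u, v: sum(x*y for x, y in zip(u, v))        # Euclidean pairing
pair = lambda alpha: sum(c*x for c, x in zip(alpha, a))  # alpha . a

F1 = 0
for g in roots:                           # sum over long roots gamma
    den = (1 - sp.exp(pair(g))) * (1 - sp.exp(-pair(g)))
    for al in roots:
        if dot(al, g) == 1:               # roots with alpha . gamma = 1
            den *= 1 - sp.exp(-pair(al))
    F1 += 1 / den
print(sp.simplify(F1))
\end{verbatim}
Adapting the root data to $\mathrm{SU}(2)$, where the product over $\alpha\cdot\gamma=1$ is empty, the same loop returns $2/\big((1-{\rm e}^{a_{12}})(1-{\rm e}^{-a_{12}})\big)$ with $a_{12}\coloneqq a_1-a_2$.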
By taking these results into consideration, the authors of \cite{Keller:2012da} derived another recursion for $Z_{\mathcal{G},n}^{\text{np,5}d}$ from \eqref{blowup1}, in the following form: \begin{equation} Z_{\mathcal{G},n}^{{\rm np},5d}=\frac{{\rm e}^{R_5n(\epsilon_1+\epsilon_2)}I_n^{(0)}-\left({\rm e}^{R_5n\epsilon_1}+{\rm e}^{R_5n\epsilon_2}\right)I_n^{(1)}+I_n^{(2)}}{(1-{\rm e}^{R_5n\epsilon_1})(1-{\rm e}^{R_5n\epsilon_2})},\label{KS1} \end{equation} where \begin{align} I^{(m)}_n=&\sum_{\substack{\vec{k}\in K^{\vee},\; 0\leq i,j<n\\ \frac12(\vec{k},\vec{k})+i+j=n}}\frac{\exp\left(R_5m\left(i\epsilon_1+j\epsilon_2+(\vec{k},\vec{a})+(\vec{k},\vec{k})\frac{\epsilon_1+\epsilon_2}{2}\right)\right)}{\prod_{\alpha\in\Delta}l_{\alpha}^{\vec{k}}(\epsilon_1,\epsilon_2,\vec{a})}\nonumber\\ &\times Z_{\mathcal{G},i}^{{\rm np},5d}(\epsilon_1,\epsilon_2-\epsilon_1,\vec{a}+\epsilon_1 \vec{k},R_5) \,Z_{\mathcal{G},j}^{{\rm np},5d}(\epsilon_1-\epsilon_2,\epsilon_2,\vec{a}+\epsilon_2 \vec{k},R_5).\label{KS2} \end{align} Note that $I^{(m)}_n$ is determined by $ Z_{\mathcal{G},n'}^{{\rm np},5d}$ with $n'<n$, and $ Z_{\mathcal{G},0}^{{\rm np},5d}=1$. In particular, denoting by $\Delta_{\ell}$ the set of long roots, we get \begin{align} Z_{\mathcal{G},1}^{{\rm np},5d}(\epsilon_1,\epsilon_2,\vec{a})=&\frac{1}{(1-{\rm e}^{-\epsilon_1})(1-{\rm e}^{-\epsilon_2})}\nonumber\\ &\times\sum_{\gamma\in\Delta_{\ell}}\frac{1}{(1-{\rm e}^{-\epsilon_1-\epsilon_2+\gamma\cdot a})(1-{\rm e}^{-\gamma\cdot a})\prod_{\alpha\cdot\gamma=1}(1-{\rm e}^{-\alpha\cdot a})}, \end{align} from which we find \begin{align} \mathcal{F}_{\mathcal{G}}^{[1]}=\sum_{\gamma\in\Delta_{\ell}}\frac{1}{(1-{\rm e}^{\gamma\cdot a})(1-{\rm e}^{-\gamma\cdot a})\prod_{\alpha\cdot\gamma=1}(1-{\rm e}^{-\alpha\cdot a})},\label{1instanton} \end{align} where we used \eqref{FandZ5d}. \subsection{B-model: Seiberg--Witten geometries, old and new} We move on to review the B-model counterpart of instanton counting, given by special geometry on families of spectral curves fibred over the Coulomb branch. We discuss in turn the two main sources of these for five-dimensional gauge theories, namely M-theory engineering and the spectral curves of relativistic integrable models. The two approaches lead to a priori inequivalent B-model geometries for non-simply-laced groups, as we now review. \label{sec:SWcurves} \subsubsection{Spectral curves from M-theory engineering} \label{sec:SWMth} String theory embeddings are an extremely helpful tool to analyse the infrared behaviour of supersymmetric quantum field theories. For the case of eight supercharges, it is well known that one can realise SUSY gauge theories with gauge symmetry given by products of classical Lie groups and bifundamental matter as the low energy theory on systems of D-branes suspended between NS5-branes \cite{Hanany:1996ie,Witten:1997sc,Kol:1997fv}. In the strong type II string coupling limit, the resulting M-theory picture is given by a single smooth fivebrane setup, containing the Seiberg--Witten curve as a factor \cite{Witten:1997sc,Brandhuber:1997ua}.
For 5d theories with a single $\mathcal{G}=\mathrm{SU}(N)$ factor and no hypermultiplets, one considers a type IIB configuration on $\mathbb{R}^{1,3} \times S^1_{R_5} \times \mathbb{R} \times \mathbb{R}^4$ with $N$ light D5-branes located at $x^i=0$, $i=5,7,8,9,10$, sweeping a full $\mathbb{R}^{1,3} \times S^1_{R_5}$ in the $x^0, \dots, x^4$ directions and suspended between two non-dynamical solitonic fivebranes classically situated at finite distance in the $x^6$ coordinate; the latter are extended in the $x^0, \dots, x^5$ coordinates. The theory on the D5-branes is macroscopically five-dimensional, owing to the finite stretch in the $x^6$ direction. This brane setup is T-dual to the constructions considered for theories with eight supercharges in 3d and 4d respectively by \cite{Hanany:1996ie} and \cite{Witten:1997sc}, and it admits an M-theory description on $\mathbb{R}^{1,3} \times S^1_{R_5} \times \mathbb{R} \times \mathbb{R}^4 \times S^1_{R_{11}}$ in terms of an M5-brane wrapping a supersymmetric cycle $\mathbb{R}^{1,3} \times \mathcal{C}$, for a non-compact Riemann surface $\mathcal{C}$. In terms of the $\mathbb{C}^\star$-coordinates $\lambda={\rm e}^{-\mathrm{i} (x^4 + \mathrm{i} x^5)/R_5}$ and $\mu= {\rm e}^{-(x^6 + \mathrm{i} x^{10})/R_{11}}$, $\mathcal{C}$ satisfies an algebraic equation of the form \begin{equation} \mathsf{Q}_{\mathrm{SU}(N); u}(\mu,\lambda) = \lambda^2 + Q_1(\mu) \lambda + Q_2(\mu) = 0 \end{equation} where roots of $Q_1(\mu)$ (resp. $Q_2(\mu)$) correspond, in the type IIB limit, to the location of D5-branes between NS5s (resp. at $x^6=\infty$, corresponding to fundamental hypermultiplets in the gauge theory on the D5 worldvolume). For the case of no hypermultiplets and Chern--Simons level $k_{\rm CS}\in \{0,\dots, N\}$, one finds \cite{Brandhuber:1997ua} \begin{equation} \label{eq:QAn} \mathsf{Q}_{\mathrm{SU}(N); u}(\mu,\lambda) = \lambda^2+ \lambda\left[1+\sum_{k=1}^{N-1} u_k \mu^{k} + \mu^{N}\right] + q_0 \mu^{N-k_{\rm CS}}. \end{equation} Other classical gauge groups can be analysed in the same manner through the addition of orientifold O5-planes in the type IIB picture. This again can be subsumed into an M-theory description in terms of a fivebrane with worldvolume containing the affine part $\{(\mu,\lambda) \in \mathbb{C}^\star \times \mathbb{C}^\star | \mathsf{Q}_{\mathcal{G};u}(\mu,\lambda)=0\}$ of a hyperelliptic Riemann surface. In the special orthogonal case, this yields \cite{Brandhuber:1997ua} \begin{eqnarray} \label{eq:QBn} \mathsf{Q}_{\mathrm{Spin}(2N+1); u}(\mu,\lambda) &=& \mu^{2N-3}\left(\mu^2-1\right) \left(\mu^2+1\right)^2 (\lambda^2+q_0) + \lambda\left[1+\sum_{k=1}^{2N-1} u_k \mu^{2k} + \mu^{4N}\right] , \\ \label{eq:QDn} \mathsf{Q}_{\mathrm{Spin}(2N); u}(\mu, \lambda) &=& \mu^{N-2} (\mu^2-1)^2\left(\lambda^2+q_0\right) + \lambda \left[1+\sum_{k=1}^{2N-1} u_k \mu^k+\mu^{2N}\right], \end{eqnarray} where $u_{k}=u_{2N-k}$. For symplectic gauge groups, since $\pi_4(\mathrm{Sp}(N))=\mathbb{Z}/2$, the gauge theory path integral depends on an additional discrete ambiguity, which we can regard as a 5d analogue of a theta angle ${\rm e}^{\mathrm{i} \theta}$ where $\theta \in \{0,\pi\}$. The difference between the two choices can be reabsorbed in a rescaling of the masses of the fundamental hypermultiplets when present, but it is physical for the pure gauge theory, leading to inequivalent prepotentials.
Accordingly, two sets of curves $\{\mathsf{Q}_{\mathrm{Sp}(N); u,\theta}=0\}$ were put forward in \cite{Hayashi:2017btw,Li:2021rqr} to account for this, \begin{eqnarray} \label{eq:QCny0} \mathsf{Q}_{\mathrm{Sp}(N); u,0}(\mu,\lambda) &=& \mu^{N+2} \left(\lambda^2+ q_0\right)+ \lambda \Bigg[ \sum_{k=2}^{N+2} u_k \left(\mu^{N+2+k}+\mu^{N+2-k}\right) \nn \\ &+& \left(q_0-1-\sum_{k=1}^{\frac{N-1}{2}}u_{2k+1}\right) \left(\mu^{N+3}+\mu^{N+1}\right) -2 \sum_{k=1}^{\frac{N+1}{2}} u_{2k}\Bigg], \\ \label{eq:QCnypi} \mathsf{Q}_{\mathrm{Sp}(N); u,\pi}(\mu,\lambda) &=& \mu^{N+2} \left(\lambda^2+ q_0\right)+ \lambda \Bigg[ \sum_{k=2}^{N+2} u_k \left(\mu^{N+2+k}+\mu^{N+2-k}\right) \nn \\ &-& \left(1+\sum_{k=1}^{\frac{N-1}{2}}u_{2k+1}\right) \left(\mu^{N+3}+\mu^{N+1}\right) -2 \left(q_0+ \sum_{k=1}^{\frac{N+1}{2}} u_{2k}\right)\Bigg]. \end{eqnarray} with $u_{N+2}=1$. A pre-existing set of curves for the case with $N_f\leq N+2$ fundamental flavours was also proposed in \cite{Brandhuber:1997ua}, although the discussion of the $\theta$-angle dependence was not carried out. Naively setting $N_f=0$ leads to an alternative M-theory curve whose defining polynomial we denote by $\mathsf{Q}_{\mathrm{Sp}(N); u}^\flat$, where \begin{equation} \mathsf{Q}_{\mathrm{Sp}(N); u}^\flat (\mu,\lambda) = \mu^{N+2}\left(\lambda^2+ q_0\right)+ \lambda(\mu^2-1)^2\left[1+\sum_{k=1}^{2N-1} u_k \mu^{k} + \mu^{2N}\right] \label{eq:QCnb} \end{equation} and $u_{k}=u_{2N-k}$. We shall in fact see in \cref{sec:c1pi}, in the example $N=1$, that the prepotential computed from \eqref{eq:QCnb} disagrees with the $\mathrm{Sp}(1)$ gauge theory prepotential for either choice of discrete $\theta$-angle. \subsubsection{Spectral curves from integrable systems} \label{sec:SWtwToda} Another, and historically the first \cite{Nekrasov:1996cz}, main source of spectral curves for gauge theories in five compactified dimensions comes from the theory of relativistic integrable systems; and in particular, for the pure gauge theory, from the study of the affine relativistic Toda chain \cite{Nekrasov:1996cz,MR979202, MR1090424}. We refer to \cite{Nekrasov:1996cz, Borot:2015fxa, Brini:2017gfi, Fock:2014ifa, Williams:2012fz, MR1993935} for background and further details, condensing here the information relevant for the constructions of \cref{sec:bcfg}. The phase space of the relativistic Toda chain associated to the (untwisted) affine Lie group $\mathcal{G}^{(1)}$ is the $(2r+2)$-dimensional Poisson algebraic torus $ \left( (\mathbb{C}_x^\star)^{r+1} \times (\mathbb{C}_y^\star)^{r+1}, \{,\}_\mathcal{G}\right)$ with log-constant Poisson bracket \begin{equation} \{x_i, y_j\}_\mathcal{G} \coloneqq \mathcal{K}(\mathfrak{g}^{(1)})_{ij} x_i y_j. \label{eq:pbg} \end{equation} Since $\mathcal{K}(\mathfrak{g}^{(1)})$ has a 1-dimensional kernel, \eqref{eq:pbg} has a single Casimir function, given by \begin{equation} u_0 \coloneqq \prod_{i=0}^r x_i^{-2\mathfrak{q}_i}, \label{eq:aleph} \end{equation} with $\mathfrak{q}_i$ the Coxeter exponents of $\mathfrak{g}^{(1)}$. Dynamical commuting flows are specified by the spectral-parameter-dependent Lax operator \cite{Fock:2014ifa,Kruglinskaya:2014pza, MR1993935}, \begin{eqnarray} L_\mathcal{G}(x,y;\lambda) &=& E_0(\lambda/y_0) E_{\bar 0}(\lambda) \prod_{i=1}^r H_i(x_i) E_i(1) H_i(y_i) E_{-i}(1) \label{eq:laxaff2} \end{eqnarray} where $E_i(x)=\exp(x e_i)$, $H_i(x)=\exp(x h_i)$, and $\{e_{\pm i},h_i\}_{i=0,\dots, r}$ is a Cartan--Weyl basis for $\mathfrak{g}^{(1)}$.
\medskip Let $\rho_i$, $i=1,\dots, r$, be the $i^{\rm th}$ fundamental representation of $\mathcal{G}$. Then, proceeding as in \cite[Lem.~2.2]{Brini:2017gfi}, one finds that the $\lambda$-dependence of the fundamental traces of the Lax operator \eqref{eq:laxaff2} is given by \begin{equation} \mathrm{Tr}\,_{\rho_i} L_\mathcal{G}(x,y;\lambda) = u_i(x,y) + \left\{ \bary{lcr} \left(\lambda \delta_{iN}+\frac{\delta_{i,N+1}}{u_0\lambda}\right), & \qquad & \mathcal{G}= \mathrm{SU}(2N+1) \\ \left(\lambda + \frac{1}{u_0\lambda}\right) \delta_{i\bar k}, & \qquad & \mathrm{else,} \eary \right. \label{eq:lambdadep} \end{equation} where $\{u_i\}_{i=1}^r$ is a complete set of commuting first integrals w.r.t. the Poisson structure \eqref{eq:pbg}, and $\bar k$ labels the fundamental weight $\omega_{\bar k}$ corresponding to the largest irreducible Weyl orbit. Fixing a non-trivial representation $\rho'\in \mathrm{Rep}(\mathcal{G})$, these integrals determine, and can be retrieved from, the characteristic Laurent polynomial \begin{eqnarray} \mathsf{R}_{\mathcal{G},\rho';u}(\mu,\lambda) & \coloneqq & \det_{\rho'} \left[ L_\mathcal{G}(x,y;\lambda)-\mu\right] \nn \\ &=&\sum_{i=0}^{\dim \rho'} (-\mu)^i \mathrm{Tr}\,_{\wedge^{\dim\rho'-i}\rho'} L_\mathcal{G}(x,y;\lambda) \nn \\ &\in & \mathbb{Z}[\lambda^\pm, \mu; u_0, \dots, u_r] \label{eq:untwcharpol} \end{eqnarray} where we have used \eqref{eq:lambdadep} and the fact that $\mathrm{Rep}(\mathcal{G})$ is an integral polynomial ring in the fundamental characters. Note that, for $\mathcal{G}\neq \mathrm{SU}(2N+1)$, \eqref{eq:lambdadep} further ensures that there exists $\widetilde{\mathsf{R}}_{\mathcal{G},\rho';u}(\mu, \nu)\in \mathbb{Z}[\mu, \nu; u_0, \dots, u_r]$ such that \begin{equation} \widetilde{\mathsf{R}}_{\mathcal{G},\rho';u}(\mu,\nu)\bigg|_{\nu=\lambda+1/(u_0\lambda)}=\mathsf{R}_{\mathcal{G},\rho';u}(\mu,\lambda). \label{eq:redpolR} \end{equation} As $(u_1, \dots, u_r)$, from \eqref{eq:lambdadep}, are Weyl-invariant coordinates on the maximal torus of $\mathcal{G}$, the vanishing locus of $\mathsf{R}_{\mathcal{G};u}$ gives (after normalisation of the fibres) a family of spectral curves over the Coulomb branch. For classical simply-laced groups\footnote{For the special unitary gauge group $\mathcal{G}=\mathrm{SU}(N)$, the equivalence is realised at Chern--Simons level $k_{\rm CS}=0$. The effect of a shift in the Chern--Simons level on the integrable chain is discussed in \cite{Eager:2011dp,Marshakov:2019vnz}.} $\mathcal{G}=A_r$ or $D_r$ one recovers \cite{Borot:2015fxa,Kruglinskaya:2014pza}, in particular, the M-theory curves \eqref{eq:QAn} and \eqref{eq:QDn}: \begin{align} \mathsf{R}_{\mathrm{SU}(r+1),\rho_1;u}&=\mathsf{Q}_{\mathrm{SU}(r+1);u}\Bigr|_{k_{{\rm CS}}=0},\;\;\;\; \mathsf{R}_{\mathrm{Spin}(2r),\rho_1;u}=\mathsf{Q}_{\mathrm{Spin}(2r);u}. \end{align} The spectral polynomials $\mathsf{R}_{E_n,\rho_{n-1}}$ for $\mathcal{G}=E_n$ were computed\footnote{Although the curves depend on the choice of the representation $\rho'$ in \eqref{eq:laxaff2}, the prepotentials computed for different choices of $\rho'$ only differ by an overall normalisation. The representation-independence of the special geometry prepotential is non-trivial, and follows from an isomorphism of the flows of the underlying integrable system when projected to a canonical Prym--Tyurin subvariety of the Jacobian: see \cite{Brini:2017gfi,MR1668594} for an extended discussion.} in \cite{Borot:2015fxa,Brini:2017gfi}.
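The reduction \eqref{eq:redpolR} is itself algorithmic: writing $s_n \coloneqq \lambda^n + (u_0\lambda)^{-n}$, one has $s_0=2$, $s_1=\nu$, and the three-term recursion $s_{n+1}=\nu s_n - s_{n-1}/u_0$, so that any Laurent polynomial invariant under $\lambda \mapsto 1/(u_0\lambda)$ can be rewritten as a polynomial in $\nu$. A minimal \texttt{sympy} sketch (ours, not the authors' code):
\begin{verbatim}
# s_n := lam^n + (u0*lam)^(-n) obeys s_{n+1} = nu*s_n - s_{n-1}/u0,
# with s_0 = 2, s_1 = nu; this rewrites a lam <-> 1/(u0*lam) symmetric
# Laurent polynomial as a polynomial in nu = lam + 1/(u0*lam).
import sympy as sp

lam, nu, u0, mu = sp.symbols('lam nu u0 mu')

def s_poly(n):
    s_prev, s_cur = sp.Integer(2), nu
    for _ in range(n - 1):
        s_prev, s_cur = s_cur, sp.expand(nu*s_cur - s_prev/u0)
    return s_prev if n == 0 else s_cur

def reduce_to_nu(R, N):
    # R is symmetric under lam -> 1/(u0*lam), with top lam-degree N;
    # by symmetry R = coeff(lam^0) + sum_n coeff(lam^n) * s_n.
    R = sp.expand(R)
    out = R.coeff(lam, 0)
    for n in range(1, N + 1):
        out += R.coeff(lam, n) * s_poly(n)
    return sp.expand(out)

print(reduce_to_nu(lam**2 + 1/(u0*lam)**2 + mu*(lam + 1/(u0*lam)), 2))
# -> mu*nu + nu**2 - 2/u0
\end{verbatim}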
\medskip \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline Type & Folding & Dynkin diagram\\ \hline \hline $B_l^\vee=A^{(2)}_{2l-1}$ & \dynkin[edge length=.75cm, involutions={ 1{8}; 27; 36; 45;} ]A[1]{****.****} & \dynkin[% edge length=.75cm, ]A[2]{odd} \\ \hline $C_l^\vee=D^{(2)}_{l+1}$ & \dynkin[edge length=.75cm, involutions={ [relative]0{9}; 1{10}; 28; 37;} ]D[1]{*****.*****} & \dynkin[% edge length=1cm, ]D[2]{} \\ \hline $F_4^\vee=E^{(2)}_{6}$ & \dynkin[% edge length=.75cm, involutions={07;16;35}]E[1]7 & \dynkin[% edge length=1cm, ]E[2]6\\ \hline $G_2^\vee=D^{(3)}_{4}$ & \dynkin[% edge length=.75cm, involutions={01;16;60;35;52;23}]E[1]6 & \dynkin[% edge length=1cm, ]D[3]4\\ \hline \end{tabular} \caption{Dynkin diagrams and foldings for twisted affine Lie algebras.} \label{tab:twistedla} \end{table} When $\mathcal{G}$ is of BCFG type, however, the prepotentials of the spectral curves $\{\mathsf{R}_{\mathcal{G},\rho';u}=0\}$ are not expected to reproduce the low energy effective action of the gauge theory. The reason for this was highlighted in \cite{Martinec:1995by}: in the $R_5 \to 0$ limit, corresponding to the limit of infinite speed of light of the relativistic Toda chain, the relevant dynamical system for $\mathcal{N}=2$ super Yang--Mills is the affine non-relativistic Toda system associated to the {\it twisted} Kac--Moody algebras $(\mathfrak{g}^{(1)})^\vee$ \cite{MR1104219,Olive:1982ye}, rather than to $\mathfrak{g}^{(1)}$ itself. These are given by quotients of a ``parent'' untwisted Kac--Moody algebra $\mathcal{A}(\mathfrak{g}^{(1)})$ by its outer automorphism group $\mathrm{Out}(\mathcal{A}(\mathfrak{g}^{(1)}))$, associated to the folding of the Dynkin diagram of $\mathfrak{g}^{(1)}$: see \cref{tab:twistedla}. Accordingly, it is natural to cast the twisting constructions relevant for the affine Lie algebras/non-relativistic Toda chains/$\mathcal{N}=2$ $d=4$ theories at the level of the corresponding affine Lie groups/relativistic Toda chains/$\mathcal{N}=2$ KK theories, by considering the {\it twisted} Lax operator% \begin{equation} L_\mathcal{G}^{\vee}(x,y;\lambda) \coloneqq E^\vee_0(\lambda/y_0) E^\vee_{\bar 0}(\lambda)\prod_{i=1}^r H^\vee_i(x_i) E^\vee_i(1) H^\vee_i(y_i) E^\vee_{-i}(1) \label{eq:laxtw} \end{equation} obtained from \eqref{eq:laxaff2} by replacing all Chevalley operators with their twisted counterparts. Let $\mathcal{A}(\mathcal{G}) = \exp(\mathcal{A}(\mathfrak{g}))$: fixing $\mathbf{1} \neq \rho' \in \mathrm{Rep}(\mathcal{A}(\mathcal{G}))$, the associated characteristic polynomial is defined as in \eqref{eq:untwcharpol}: \begin{eqnarray} \mathsf{T}_{\mathcal{G},\rho';u}(\mu,\lambda) & \coloneqq & \det_{\rho'} \left[ L^\vee_\mathcal{G}(x,y;\lambda)-\mu\right] \nn \\ &=&\sum_{i=0}^{\dim \rho'} (-\mu)^i \mathrm{Tr}\,_{\wedge^{\dim\rho'-i}\rho'} L^\vee_\mathcal{G}(x,y;\lambda). \label{eq:polT} \end{eqnarray} To determine the spectral dependence of $\mathsf{T}_{\mathcal{G},\rho';u}$ on the $\lambda$ parameter, let $\mathrm{Eff}(\mathcal{A}(\mathfrak{g}))$ be the set of nodes in the Dynkin diagram of the ``parent'' simple Lie algebra $\mathcal{A}(\mathfrak{g})$ acted upon effectively by diagram automorphisms. This set of vertices splits into orbits $\mathrm{Eff}(\mathcal{A}(\mathfrak{g}))=\sqcup_j \mathsf{o}^{(2)}_j \sqcup_k \mathsf{o}^{(3)}_k$, where $\mathsf{o}^{(n)}_i$ is a cardinality-$n$ set of Dynkin nodes on which the outer automorphism group acts as the permutation group $S_n$ on $n$ elements.
Let $\sigma : \mathsf{o}^{(n)}_j \to \{\omega_n^l\}_{l=0}^{n-1}$ be a bijection onto the set of $n^{\rm th}$ roots of unity, such that if $p$ is the full cyclic permutation of order $n$ in $S_n$, then $\sigma(p(\mathsf{v}))={\rm e}^{2\pi\mathrm{i}/n}\sigma(\mathsf{v})$. We claim that, up to a rescaling and a shift of $\log\lambda$, the analogue of \eqref{eq:lambdadep} in the twisted case is \begin{equation} \mathrm{Tr}\,_{\rho_i} L^\vee_\mathcal{G}(x,y;\lambda) = u_{\pi(i)}(x,y) + \left\{ \bary{lr} \sigma(\mathsf{v}_i) \lambda + \frac{1}{u_0\lambda \sigma(\mathsf{v}_i)}, & \quad \mathsf{v}_i \in \mathrm{Eff}(\mathcal{A}(\mathfrak{g})), \\ 0 & \mathrm{else}, \eary \right. \label{eq:lambdatw} \end{equation} where now the Casimir function is expressed in terms of the {\it dual} Coxeter exponents \begin{equation} u_0 \coloneqq \prod_{i=0}^r x_i^{-2\mathfrak{q}^\vee_i}, \label{eq:alephtw} \end{equation} and $\pi : \{\mathsf{v}_i\} \to \{1,\dots, r\}$ is a choice of label on the orbits, i.e. it is defined up to permutation by $\pi(\mathsf{v}_i)=\pi(\mathsf{v}_j)$ iff $\{\mathsf{v}_i,\mathsf{v}_j\} \subset \mathsf{o}^{(n)}_l$ for some $n$ and $l$. From \eqref{eq:lambdatw}, we can define, as we did in \eqref{eq:redpolR}, the reduced characteristic polynomial \begin{equation} \widetilde{\mathsf{T}}_{\mathcal{G},\rho';u}(\mu,\nu)\bigg|_{\nu=\lambda+1/(u_0\lambda)}=\mathsf{T}_{\mathcal{G},\rho';u}(\mu,\lambda). \label{eq:redpolT} \end{equation} \begin{example} Let us consider the example of $\mathcal{G}=B_2$, so that $\big(B_2^{(1)}\big)^\vee = \big(C_2^{(1)}\big)^\vee = D_3^{(2)}= A_3^{(2)}$. In this case the folding of the Dynkin diagram of the simple Lie algebra $\mathcal{A}(\mathfrak{b}_2)=\mathfrak{d}_3\simeq \mathfrak{a}_3$ identifies the vertices $v_1 \leftrightarrow v_3$, corresponding to the highest weights $[100]$ and $[001]$ of the two Weyl spinor representations of $D_3$ (equivalently, the two complex-conjugate fundamental representations of $A_3$), leaving fixed $v_2$, which labels the 6-dimensional representation $\rho_{[010]}$ of $\mathrm{SU}(4)$ (equivalently, the defining vector representation of $\mathrm{SO}(6)$), so that $\mathrm{Eff}(\mathfrak{b}_2)=\mathsf{o}^{(2)}_1=\{v_1,v_3\}$: see \cref{fig:dynkB2}. \medskip \begin{figure}[h] \centering \dynkin[% edge length=1.25cm, labels*={v_1,v_2,v_3}, involution/.style={blue!50,stealth-stealth,thick}, involutions={13}]A3 $\qquad \longrightarrow \qquad$ \dynkin[% edge length=1.25cm, labels*={1,2}, ]B2 \caption{The Dynkin diagram of $B_2\simeq C_2$ via $A_3\simeq D_3$-folding.} \label{fig:dynkB2} \end{figure} For this example, let $\rho'=\rho_{[010]}=\mathbf{6}_\mathsf{v}$ be the 6-dimensional fundamental representation of $A_3$ (i.e. the vector representation of $D_3=\mathrm{Spin}(6)$) and let $\varepsilon_{kl}\in \mathrm{Mat}(6,\mathbb{C})$ with $(\varepsilon_{kl})_{ij}=\delta_{ik}\delta_{lj}$. Then the twisted Cartan--Weyl generators are given by \cite[Sec.~21.6]{MR1993935} \begin{equation} e^\vee_1=\varepsilon_{21}-\varepsilon_{65}, \quad e^\vee_2=\varepsilon_{32}-\varepsilon_{53}, \quad e^\vee_0=\varepsilon_{14}-\varepsilon_{46}, \end{equation} with $e_{\bar i}^\vee=(e_i^\vee)^T$ and $h_i^\vee = [e_i^\vee, e_{\bar i}^\vee]$, $i=0,1,2$.
Then, from \eqref{eq:laxtw}, we find up to an affine transformation $\log \lambda \to a \log \lambda+b$ that \begin{equation} {\partial}_{\lambda} \mathrm{Tr}\,_{\wedge^i \rho'} L^\vee_{B_2}(x,y;\lambda) = \{0,1,-2\}_i \left(2\lambda -\frac{2}{u_0 \lambda^3}\right), \qquad i=1,2,3, \end{equation} and $\mathrm{Tr}\,_{\wedge^i \rho'} L^\vee_{B_2}=\mathrm{Tr}\,_{\wedge^{6-i} \rho'} L^\vee_{B_2}$ by the reality of $\rho'$. Using the character relations in the exterior algebra \begin{equation} \wedge^2\rho' \oplus \rho_{[000]}= \rho_{[100]} \otimes \rho_{[001]}, \quad \wedge^3\rho' = \rho_{[200]} \oplus \rho_{[002]}, \label{eq:wedgeB2} \end{equation} plus the fact that $\rho_{[200]}\oplus \rho_{[010]}=\rho_{[100]}\otimes \rho_{[100]}$, $\rho_{[002]}\oplus\rho_{[010]}=\rho_{[001]}\otimes \rho_{[001]}$, and restricting to the invariant locus under the folding action, we retrieve the spectral parameter dependence claimed in \eqref{eq:lambdatw}: \begin{eqnarray} \mathrm{Tr}\,_{\rho_1} L^\vee_{B_2}(x,y;\lambda) &=& u_1(x,y) + \left(\lambda+\frac{1}{u_0\lambda}\right), \nn \\ \mathrm{Tr}\,_{\rho_3} L^\vee_{B_2}(x,y;\lambda) &=& u_1(x,y) - \left(\lambda+\frac{1}{u_0\lambda}\right), \nn \\ \mathrm{Tr}\,_{\rho_2} L^\vee_{B_2}(x,y;\lambda) &=& u_2(x,y). \label{eq:lambdaB2} \end{eqnarray} From \eqref{eq:wedgeB2}--\eqref{eq:lambdaB2}, the reduced spectral polynomial in the 6-dimensional vector representation $\rho'=\mathbf{6}_\mathsf{v} \in \mathrm{Rep}(D_3)$ reads \begin{eqnarray} \widetilde{\mathsf{T}}_{B_2,\mathbf{6}_\mathsf{v};u}(\mu,\nu) &=& 1-u_2 \mu+\left(u_1^2-1\right) \mu^2-2 \left(u_1^2-u_2\right) \mu^3 \nn \\ &+& \left(u_1^2-1\right) \mu^4-u_2 \mu^5+\mu^6-\nu^2 (\mu+1)^2 \mu^2. \end{eqnarray} Alternatively, using \eqref{eq:lambdaB2}, we can get a more compact expression by taking $\rho'=\mathbf{4}_{\pm}$ to be either of the two $4$-dimensional complex conjugate irreducible representations of $\mathrm{Spin}(6)=\mathrm{SU}(4)$. Since $\wedge^i\rho'=\rho_{i}$ in this case, we obtain immediately from \eqref{eq:lambdaB2} that \begin{equation} \widetilde{\mathsf{T}}_{B_2,\mathbf{4}_{\pm};u}(\mu,\nu) = 1-(u_1+\nu) \mu+ u_2 \mu^2-(u_1-\nu) \mu^3+ \mu^4. \label{eq:twistedB2curve} \end{equation} \end{example} \begin{example} As a slightly more complicated example, consider $\big(G_2^{(1)}\big)^\vee = D_4^{(3)}$. In this case the triality symmetry of the simple Lie algebra $\mathcal{A}(\mathfrak{g}_2)=\mathfrak{d}_4$ identifies the vertices $v_1 \leftrightarrow v_3 \leftrightarrow v_4$, corresponding to the highest weights $[1000]$, $[0010]$ and $[0001]$ of the three eight-dimensional irreducible representations of $\mathrm{Spin}(8)$, whereas $v_2$, corresponding to the adjoint representation, is left fixed; see \cref{fig:dynkG2}. In particular, we have that $\mathrm{Eff}(\mathfrak{g}_2)=\mathsf{o}^{(3)}_1=\{v_1,v_3, v_4\}$. \medskip \begin{figure}[h] \centering \dynkin[% edge length=2cm, labels*={~~v_1,v_2,v_3,v_4}, involution/.style={blue!50,stealth-stealth,thick}, involutions={[in=120,out=80,relative]13;[in=120,out=80,relative]34;[in=120,out=80,relative]41}]D4 $\qquad \longrightarrow \qquad$ \dynkin[% edge length=1.25cm, labels*={1,2}, ]G2 \caption{The Dynkin diagram of $G_2$ via $D_4$-folding.} \label{fig:dynkG2} \end{figure} Fix $\rho'=\rho_{[1000]}$.
Then the twisted Cartan--Weyl generators are given in this representation by \cite[Sec.~21.13]{MR1993935} \begin{eqnarray} e_1^\vee &=& \varepsilon_{32}-\varepsilon_{76} \nn \\ e_2^\vee &=& \varepsilon_{21}+\varepsilon_{43}+\varepsilon_{53}-\varepsilon_{64}-\varepsilon_{65}-\varepsilon_{87} \nn \\ e_0^\vee &=& \varepsilon_{14}-\varepsilon_{58} +\omega_3 \left(\varepsilon_{15}-\varepsilon_{48}\right)+ \omega_3^2\left(\varepsilon_{26}-\varepsilon_{37}\right) \end{eqnarray} with again $e_{\bar i}^\vee=(e_i^\vee)^T$, $h_i^\vee = [e_i^\vee, e_{\bar i}^\vee]$, $i=0,1,2$, and where as before $\omega_3={\rm e}^{2\pi\mathrm{i}/3}$. Plugging this into \eqref{eq:laxtw} it is straightforward to compute the spectral dependence of the exterior traces $\mathrm{Tr}\,_{\wedge^i \rho'} L^\vee_{G_2}$, whose explicit expression we omit here. Using the following relations in the representation ring $\mathrm{Rep}(D_4)$, \begin{equation} \wedge^2\rho' = \rho_{[0100]}, \quad \wedge^3\rho' \oplus \rho_{[1000]} = \rho_{[0010]} \otimes \rho_{[0001]}, \quad \wedge^4\rho' = \rho_{[0020]} \oplus \rho_{[0002]}, \end{equation} and \begin{eqnarray} \rho_{[0020]} \oplus \rho_{[0100]} \oplus \rho_{[0000]} &=& \rho_{[0010]}\otimes \rho_{[0010]},\nn \\ \rho_{[0002]} \oplus \rho_{[0100]} \oplus \rho_{[0000]} &=& \rho_{[0001]}\otimes \rho_{[0001]}, \end{eqnarray} it is immediate to verify that the $\lambda$-dependence of the exterior traces $\mathrm{Tr}\,_{\wedge^i \rho'} L^\vee_{G_2}$ is induced by a shift of the fundamental traces as in \eqref{eq:lambdatw}. The reduced spectral polynomial then reads \begin{eqnarray} \widetilde{\mathsf{T}}_{G_2,\rho_1;u}(\mu,\nu) &=& (\mu-1)^2 \left[u_2 (\mu+1)^2 \mu^2-u_1^2 \mu^3-u_1 \left(\mu^5+\mu\right)+\sum_{i=0}^6 \mu^i\right]\nn \\ &+& \nu(\mu-1)^2 \mu \left(\mu^4+2 \mu^3-u_1 \mu^2+\mu^2+2 \mu+1\right)-\nu^2 \mu^3 \left(\mu^2+\mu+1\right) \nn \\ &+& \frac{3 (\mu+1)^2 \mu^3}{u_0^2}. \end{eqnarray} \end{example} \section{Picard--Fuchs equations: the simply-laced case} \label{sec:ade} \subsection{Extended affine Weyl groups and Frobenius manifolds} \label{sec:Frobenius} We start by providing a very brief account of the construction of \cite{MR1606165} of a semi-simple Frobenius manifold on the space of regular orbits of an extended affine Weyl group, referring the reader to \cite{MR1606165,Brini:2017gfi,Brini:2021pix} for a more extended treatment. Let $\mathfrak{g}$ be a complex simple Lie algebra of rank $r$, and write $\mathfrak{h}$ and $\mathcal{W}_\mathfrak{g}$ for, respectively, its Cartan subalgebra and its Weyl group. Let $\bar k \in \{1, \dots, r\}$ be a label marking the highest weight $\omega_{\bar k}$ corresponding to any of the highest-dimensional fundamental representations of $\mathfrak{g}$. The action of $\mathcal{W}_\mathfrak{g}$ on $\mathfrak{h}$ admits an affine extension $\widetilde{\mathcal{W}}_{\mathfrak{g}, \omega_{\bar k}} \simeq \mathcal{W}_\mathfrak{g} \rtimes \Lambda_\mathfrak{g}^\vee \rtimes \mathbb{Z}$ on $\mathfrak{h} \times \mathbb{C}$, \begin{align} \widetilde{\mathcal{W}}_{\mathfrak{g}, \omega_{\bar k}} \times \mathfrak{h} \times \mathbb{C} & \rightarrow \mathfrak{h} \times \mathbb{C}\\ ((w, \alpha^\vee, n), (h, \xi)) & \mapsto (w(h) + \alpha^\vee + n \omega_{\bar k}, \xi - n).
\end{align} The GIT quotient \begin{equation} M^{\rm trig}_{\mathfrak{g}} \coloneqq (\mathfrak{h}^{\text{reg}} \times \mathbb{C})// \widetilde{\mathcal{W}}_{\mathfrak{g}, \omega_{\bar k}} = \text{Spec}(\mathcal{O}_{\mathfrak{h} \times \mathbb{C}}(\mathfrak{h}^{\text{reg}} \times \mathbb{C}))^{\widetilde{\mathcal{W}}_{\mathfrak{g}, \omega_{\bar k}}} \cong \mathcal{T}^{\text{reg}}/\mathcal{W}_{\mathfrak{g}} \times \mathbb{C}^*, \label{Def:DZmaniLG} \end{equation} is isomorphic as a complex manifold to the trivial one-dimensional family $\mathcal M_\mathsf{C}(\mathcal{G}) \times \mathbb{C}^\star$ of classical vacua with maximally broken gauge symmetry of the pure $\mathcal{N}=1$ gauge theory on $\mathbb{R}^4 \times S_{R_5}^1$ with gauge group the real compact form $\mathcal{G}$ of $\exp(\mathfrak{g})$, parametrised by $\xi\coloneqq\log u_0 =-2h_{\mathfrak{g}}\log (R_5 \Lambda_{\rm QCD}) \in \mathbb{P}^1$. In \eqref{Def:DZmaniLG}, $\mathfrak{h}^{\text{reg}}$ is the set of regular orbits, and $\mathcal{T}^{\text{reg}} = \text{exp}(2\pi \mathrm{i} \mathfrak{h}^{\text{reg}})$ is its image under the exponential map to the maximal torus $\mathcal{T}$. In \cite{MR1606165}, the authors construct a canonical, semisimple Frobenius manifold structure on $M_\mathfrak{g}^{\rm trig}$. Their construction asserts that there exists a chart $\{t_i(h,\xi)\}_{i=0}^r$ on $M_\mathfrak{g}^{\rm trig}$ and a solution $F_{\mathfrak{g}}(t_0, \dots, t_r)$ of the WDVV equation such that: % \ben \item $F_{\mathfrak{g}} \in \mathbb{Q}[t_0, \dots, t_r][{\rm e}^{t_0}]$; \item $\eta_{ij}:={\partial}^3_{t_r t_i t_j} F_{\mathfrak{g}}$ is a constant, non-degenerate matrix; \item $E(F_{\mathfrak{g}}) = 2 F_{\mathfrak{g}}$, with \begin{equation} E\coloneqq \sum_{j=1}^{r} \frac{\mathfrak{q}_j}{\mathfrak{q}_{\bar k}}t_j \partial_{t_j} + \frac{1}{\mathfrak{q}_{\bar k}} \partial_{t_0}, \label{eq:Evec} \end{equation} up to quadratic terms. \end{enumerate} Here, $\mathfrak{q}_i \coloneqq \left\langle \omega_i, \omega_{\bar k}\right\rangle$ are the fundamental Coxeter degrees of $\mathfrak{g}$. \subsection{Seiberg--Witten periods as odd periods} The Frobenius manifolds $M^{\rm trig}_\mathfrak{g}$ are a trigonometric version of the polynomial Frobenius manifolds $M^{\rm pol}_\mathfrak{g}$ \cite{Dubrovin:1993nt,Dubrovin:1994hc} on quotients of the reflection representation of ordinary Weyl groups. For $\mathfrak{g}=\mathfrak{ade}$ these are isomorphic to the chiral rings\footnote{And for $\mathfrak{g}=\mathfrak{bcfg}$, the invariant subrings obtained by Dynkin folding \cite{Zuber:1993vm}.} of the twisted, massively perturbed two-dimensional minimal $\mathcal{N}=2$ SCFTs with central charge $d=c/3<1$ \cite{Dijkgraaf:1990dj} of type ADE. It was proposed in \cite{Eguchi:1996nh,Ito:1997ur,Ito:1997zq,MR2070050} that the SW periods of four-dimensional $\mathcal{N}=2$ super Yang--Mills theories with simply-laced compact gauge group $\mathcal{G}$ are solutions of a Picard--Fuchs system determined by the Frobenius structure on the tensor product $M^{\rm pol}_\mathfrak{g} \otimes QH(\mathbb{P}^1)$ ($\mathfrak{g}= \mathrm{Lie}(\mathcal{G})_\mathbb{C}$) of the polynomial Frobenius manifold of type $\mathfrak{g}$ with the quantum cohomology of the projective line (the chiral ring of the topologically A-twisted $\sigma$-model).
The claim is that there exists a change-of-variables $(u_1, \dots, u_r, \Lambda_{\rm QCD}) \to (t_1, \dots, t_r,Q)$ such that the periods satisfy the holonomic system of PDEs \cite[Prop.~5.19]{MR2070050} \begin{eqnarray} \label{eq:PF4dhom} \left(\sum_{i=1}^r \mathfrak{q}_i t_i {\partial}_{t_i}-1\right)^2 \Pi &=& 4 h_{\mathfrak{g}}^2 Q {\partial}^2_{t_r} \Pi, \\ \label{eq:PF4prod} {\partial}^2_{t_i t_j} \Pi &=& c_{ij}^k {\partial}^2_{t_r t_k} \Pi. \end{eqnarray} In \eqref{eq:PF4dhom}--\eqref{eq:PF4prod}, the independent variables $\{t_i\}_{i=1}^r$ are flat coordinates for the Saito metric on $M^{\rm pol}_{\mathfrak{g}}$, and are related polynomially to the classical Coulomb order parameters $u_i$ \cite{Dubrovin:1993nt}; the variable~$Q$ keeps track of the dependence on the holomorphic scale, and is identified with the coordinate parametrising primary insertions of the K\"ahler class in the topological A-model on $\mathbb{P}^1$ \cite{MR2070050}; and finally, in \eqref{eq:PF4prod}, $c_{ij}^k(t)$ are the structure constants of the $\mathcal{N}=2$ ADE chiral ring. In the language of \cite{MR2070050}, the gauge theory periods are identified with the {\it odd periods} of the Frobenius manifold $M_{\mathfrak{g}}^{\rm pol} \otimes QH(\mathbb{P}^1)$ on the ``coordinate cross'' where all dependence on the primary insertions of the $\mathbb{P}^1$ theory is discarded. \medskip Viewing the pure $\mathcal{N}=2$ KK theory on $\mathbb{R}^4 \times S_{R_5}^1$ with ADE gauge group as a 4d theory with $\widetilde{\rm ADE}$ loop group gauge symmetry, as in \cite{Nekrasov:1996cz}, we propose that the same tensor product construction of \cite{Ito:1997ur,MR2070050} applies to the 5d setting upon replacing the type $\mathfrak{g}=\mathfrak{ade}$ polynomial Frobenius manifolds $M_\mathfrak{g}^{\rm pol}$ with their trigonometric, extended affine version $M_\mathfrak{g}^{\rm trig}$. This leads us to postulate that for ADE gauge groups the periods \eqref{eq:specgeom} satisfy the system of PDEs \begin{subequations} \begin{empheq}[box=\widefbox]{align} \label{eq:PF5Eul} L_E^2 \Pi =& 4 {\partial}^2_{t_r} \Pi \\ \label{eq:PF5prod} {\partial}^2_{t_i t_j} \Pi =& \mathsf{C}_{ij}^k {\partial}^2_{t_r t_k} \Pi \end{empheq} \end{subequations} where now $i,j=0,\dots, r$, $L_E$ denotes the Lie derivative along the Euler vector field $E$ of \eqref{eq:Evec}, and $\mathsf{C}_{ij}^k(t)$ are now the structure constants $\mathsf{C}_{ij}^k(t)\coloneqq \eta^{kl} {\partial}^3_{t_l t_i t_j} F^{\rm trig}_\mathfrak{g}$ of $M^{\rm trig}_\mathfrak{g}$ in the flat chart $\{t^i\}_{i=0}^r$.\footnote{A remark is in order regarding the type A series, since in that case we have an additional parametric dependence on the five-dimensional Chern--Simons term. By \eqref{eq:lambdadep}, the tensor product with the quantum cohomology of $\mathbb{P}^1$, whose LG superpotential has the form $\lambda+1/(u_0\lambda)$, picks out a CS level such that the symmetry $\lambda \leftrightarrow (u_0 \lambda)^{-1}$ is realised at the spectral curve level. This occurs at $k_{\rm CS}=0$ for $\mathcal{G}=\mathrm{SU}(2N)$ and $k_{\rm CS}=1$ for $\mathcal{G}=\mathrm{SU}(2N+1)$.} \medskip It will be helpful to express \eqref{eq:PF5Eul}--\eqref{eq:PF5prod} in a natural set of coordinates adapted to the weak coupling expansion in the gauge theory.
These are the analogue of the locally monodromy invariant coordinates around the large complex structure/maximally unipotent point in Hori--Vafa mirror symmetry, and they are related to the usual Coulomb branch $u$-coordinates as \begin{equation} z_0 = 1/u_0, \qquad z_i = \prod_{j=1}^r u_j^{-\mathcal{K}_{ij}(\mathfrak{g})} \label{eq:utoz} \end{equation} where $\mathcal{K}(\mathfrak{g})$ is the Cartan matrix of $\mathfrak{g}$. In these coordinates, \eqref{eq:PF5Eul}--\eqref{eq:PF5prod} read \begin{subequations} \begin{empheq}[box=\widefbox]{align} \label{eq:PF5Eulz} \left(z_0 {\partial}_{z_0}\right)^2 \Pi =& 4 h_\mathfrak{g}^2 \left( X^{ij}(z) {\partial}^2_{z_i z_j} + Y^i(z) {\partial}_{z_i}\right) \Pi, \\ \label{eq:PF5prodz} {\partial}^2_{z_i z_j} \Pi =& \left(A_{ij}^{kl}(z) {\partial}^2_{z_k z_l} + B^{k}_{ij} {\partial}_{z_k}\right) \Pi, \end{empheq} \end{subequations} where \begin{equation} \bary{rclrcl} X^{ij} &=& \frac{{\partial} z_i}{{\partial} t_r} \frac{{\partial} z_j}{{\partial} t_r},& Y^{i} &=& \frac{{\partial}^2 z_i}{{\partial} t_r {\partial} t_k} \frac{{\partial} z_k}{{\partial} t_r},\\ A^{kl}_{ij} &=& \frac{{\partial} t_m}{{\partial} z_i}\frac{{\partial} t_n}{{\partial} z_j}\mathsf{C}^p_{mn} \frac{{\partial} z_k}{{\partial} t_p}\frac{{\partial} z_l}{{\partial} t_r},& B^{k}_{ij} &=& \frac{{\partial} t_m}{{\partial} z_i}\frac{{\partial} t_n}{{\partial} z_j}\mathsf{C}^p_{mn} \frac{{\partial}^2 z_k}{{\partial} t_p {\partial} t_r} + \frac{\partial^2 t_m}{\partial z_i\partial z_j}\frac{\partial z_k}{\partial t_m}. \label{eq:XYAB} \eary \end{equation} By design, the large complex structure/semi-classical expansion point corresponds in these coordinates to $z=0$: from \eqref{eq:u0} and \eqref{eq:utoz}, sending $\Lambda_{\rm QCD}$ to zero sets $z_0=0$, and from \eqref{eq:uvar} sending $a_i$ to infinity leads to a damping behaviour of the form % \begin{equation} z_i \sim {\rm e}^{- \sum_{j}\mathcal{K}_{ij}(\mathfrak{g}) a_j} \sim {\rm e}^{-\sum_{j}\mathcal{K}_{ij}(\mathfrak{g}) a^{\rm cl}_j} \end{equation} so that $z_i \to 0$, $i=1,\dots, r$, in that limit. The determination of the gauge theory prepotential from \eqref{eq:PF5Eul}--\eqref{eq:PF5prod} then proceeds along the following steps. \ben \item We write down the $t-$chart Picard--Fuchs system \eqref{eq:PF5Eul}--\eqref{eq:PF5prod} from the Frobenius manifold prepotential $F_\mathfrak{g}^{\rm trig}(t)$, recently found for all $\mathfrak{g}$ in \cite{Brini:2017gfi,Brini:2021pix}. \item We then derive from this the Picard--Fuchs system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} in the $z$-coordinates, by using the expression of the flat coordinates $t^i(u)$ in terms of the basic invariants/classical gauge theory Casimirs found in \cite{Brini:2017gfi,Brini:2021pix}, and using \eqref{eq:utoz}. \item We finally look for solutions to \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} in the form \begin{eqnarray} \Pi &=& \sum_{j,k} a_{jk} \log z_j \log z_k + \sum_l \sum_{J \in \frac{1}{|Z(\mathcal{G})|}\mathbb{Z}^{r+1}} b_{l,J} \log z_l \prod_k z_k^{J_k} \nn \\ & & + \sum_{J \in \frac{1}{|Z(\mathcal{G})|} \mathbb{Z}^{r+1}} c_{J} \prod_k z_k^{J_k} \label{eq:ansatz} \end{eqnarray} with at worst double-logarithmic singularities around $z=0$. In \eqref{eq:ansatz} we also allow fractional exponents with denominators that divide the order of the center of the group, as the latter coincides with the determinant of the Cartan matrix, and may arise as a consequence of the change-of-variables in \eqref{eq:utoz}.
\medskip We shall find that \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} admit a $(2r+2)$-dimensional solution space of the form \eqref{eq:ansatz}: two of the solutions are always $\log z_0$ and the constant solution, while the remaining $2r$ satisfy the special geometry relations \eqref{eq:specgeom}, from which the gauge theory prepotential can be computed. \end{enumerate} We will put this strategy to the test in some of the lowest-rank ADE examples below. \subsection{Examples} \subsubsection{$A_1$} We start off by illustrating in detail the simplest example of $\mathcal{G}=A_1$ at vanishing $\theta$-angle. In this case, the Frobenius manifold $M_\mathfrak{g}^{\rm trig}$ coincides with the quantum cohomology ring of $\mathbb{P}^1$, and its prepotential is simply \begin{equation} F_{A_1}^{\rm trig} (t_0,t_1)=t_0t_1^2+{\rm e}^{2t_0}, \end{equation} with Euler vector field $E={\partial}_{t_0}+t_1 {\partial}_{t_1}$ from \eqref{eq:Evec}, while the flat coordinates $(t_0,t_1)$ are related to the classical Coulomb moduli as \cite{Brini:2021pix} \begin{align} t_0=\log u_0,\;\;\;\;t_1=u_0 u_1. \end{align} From this, it is immediate to verify that the flat metric is anti-diagonal, $\mathsf{C}_{1i}^j=\delta_i^j$, and \begin{equation} \mathsf{C}_{0i}^j= \left( \begin{array}{cc} 0 & {\rm e}^{t_0} \\ 1 & 0 \\ \end{array} \right).\label{A1Jacobi} \end{equation} Finally, the semiclassical expansion coordinates \eqref{eq:utoz} are \begin{equation} z_0=\frac{1}{u_0},\;\;\;\;z_1=\frac{1}{u_1^2}.\label{zA1} \end{equation} We now have all the ingredients to write the Picard--Fuchs system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz}, which reads \begin{eqnarray} \left(z_0 {\partial}_{z_0}\right)^2 \Pi &=& 8 z_0^2 z_1^2 \left(3 {\partial}_{z_1} + 2 z_1 {\partial}^2_{z_1}\right) \Pi, \nn \\ \left(z_0{\partial}_{z_0}\right)^2 \Pi &=& 4 \left[z_1^2(4 z_1-1) {\partial}_{z_1}^2 + z_1(6 z_1-1) {\partial}_{z_1} + z_0 z_1 {\partial}_{z_0} {\partial}_{z_1}\right] \Pi . \label{eq:PFA1} \end{eqnarray} Inserting the ansatz \eqref{eq:ansatz} into \eqref{eq:PFA1}, we find that the solution space of \eqref{eq:PFA1} is a 4-dimensional complex vector space with coordinates $(b_{0,0},c_{0,0},c_{0,1}, c_{0,2})$: \begin{eqnarray} & & \Pi = \frac{2c_{0,2} -3 c_{0,1}}{14} \left[ \log ^2\left(z_1\right)-2\log\left(z_0\right) \log \left(z_1\right)\right] \nn \\ &+& \frac{\log z_0}{7} \bigg[\left(7 b_{0,0}+z_1 \left(4 c_{0,2}-6 c_{0,1}\right)+z_1^2 \left(6 c_{0,2}-9 c_{0,1}\right)\right)+z_0^2 \left(4 c_{0,2}-6 c_{0,1}\right) z_1+\dots \bigg] \nn \\ &+& \frac{\log z_1}{14} \bigg[\left(13 c_{0,1}-4 c_{0,2}\right)+z_1 \left(8 c_{0,2}-12 c_{0,1}\right)+z_1^2 \left(12 c_{0,2}-18 c_{0,1}\right)+z_0^2 \left(8 c_{0,2}-12 c_{0,1}\right) z_1+\dots \bigg] \nn \\ &+& \left(c_{0,0}+z_1 c_{0,1}+z_1^2 c_{0,2}\right)+z_0^2 \left(c_{0,1} z_1+\left(\frac{18 c_{0,1}}{7}+\frac{16 c_{0,2}}{7}\right) z_1^2\right)+\dots \end{eqnarray} Setting $c_{0,1}=\tfrac{2}{3} c_{0,2}$, we find a constant holomorphic solution at $c_{0,2}=b_{0,0}=0$, and two logarithmic solutions \begin{eqnarray} \Pi_{A_0} &=& \log z_0 \nn \\ \Pi_{A_1} &=& \log z_1+2 z_1+3 z_1^2+2 z_1 z_0^2+ 12 z_1^2 z_0^2+\dots \end{eqnarray} for $(b_{0,0},c_{0,2})=(1,0)$ and $(0,3)$ respectively.
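The expansion coefficients above are straightforwardly generated by machine. A minimal \texttt{sympy} sketch (ours, not the authors' code): we insert a truncated ansatz for the single-logarithmic solution $\Pi_{A_1}$ into \eqref{eq:PFA1}, collect monomials, and solve the resulting linear system for the coefficients.
\begin{verbatim}
# Sketch (ours): series solution of (eq:PFA1) for the single-log period
# Pi_{A_1} = log z1 + sum_{m,n} c[m,n] z0^m z1^n, truncated at order N.
import sympy as sp

z0, z1 = sp.symbols('z0 z1', positive=True)
N = 3
c = {(m, n): sp.Symbol(f'c{m}{n}') for m in range(N + 1)
     for n in range(N + 1) if (m, n) != (0, 0)}
Pi = sp.log(z1) + sum(s * z0**m * z1**n for (m, n), s in c.items())

E2 = lambda f: z0*sp.diff(z0*sp.diff(f, z0), z0)        # (z0 d_{z0})^2
lhs = E2(Pi)
rhs1 = 8*z0**2*z1**2*(3*sp.diff(Pi, z1) + 2*z1*sp.diff(Pi, z1, 2))
rhs2 = 4*(z1**2*(4*z1 - 1)*sp.diff(Pi, z1, 2)
          + z1*(6*z1 - 1)*sp.diff(Pi, z1) + z0*z1*sp.diff(Pi, z0, z1))

eqs = []
for expr in (lhs - rhs1, lhs - rhs2):
    for (m, n), coeff in sp.Poly(sp.expand(expr), z0, z1).terms():
        if m <= N and n <= N:          # discard truncation artefacts
            eqs.append(coeff)
sol = sp.solve(eqs, list(c.values()))
print(sp.expand(Pi.subs(sol)))
# -> log(z1) + 2*z1 + 3*z1**2 + 2*z0**2*z1 + 12*z0**2*z1**2 + ...
\end{verbatim}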
From \eqref{eq:u0} and \eqref{eq:utoz}, we identify ${\rm e}^{\Pi_{A_0}}=z_0=q_0$, while $\Pi_{A_1}$ is identified with minus the scalar vev $a=\left\langle \varphi \right\rangle$ in \eqref{eq:specgeom}, in terms of which the inverse mirror map reads \begin{equation} z_1(q_0, q_1) =q_1-2 \left(q_0^2+1\right) q_1^2+3 \left(q_0^4+1\right) q_1^3-4 \left(q_0^6+q_0^4+q_0^2+1\right) q_1^4+5 \left(q_0^8-5 q_0^4+1\right) q_1^5+\dots \end{equation} Setting instead $c_{0,1}=1$ and $c_{0,2}=1/4$ gives the dual period \begin{eqnarray} \Pi_{B_1} &=& \frac{1}{4} \left(\log ^2z_1+2 \log z_0 \log z_1-4 \log z_1 \right)+\log \left(z_0z_1\right) \left(z_1+\frac{3}{2} z_1^2+z_0^2 z_1+ 6 z_0^2z_1^2+\dots\right) \nn \\ &-& z_1 z_0^2+z_1^2 \left(\frac{1}{4} +2 z_0^2\right)+\dots \nn \\ &=& \frac{1}{4} \left(\log^2q_1+2 \log q_0\log q_1-4 \log q_1\right)\nn \\ &+& \left(1+q_0^2\right) q_1+\frac{1}{4} \left(1+16 q_0^2+q_0^4\right) q_1^2+\frac{1}{9} \left(1+81 q_0^2+81 q_0^4+q_0^6\right) q_1^3 +\nn \\ &+&\frac{1}{16} \left(1+256 q_0^2+1040 q_0^4+256 q_0^6+q_0^8\right) q_1^4+O\left(q_1^5\right) \end{eqnarray} Identifying $\Pi_{B_1}$ with the gradient ${\partial}_{a} \mathcal{F}$ of the gauge theory prepotential, the above calculation retrieves the expression of the latter for the five-dimensional pure $\mathrm{SU}(2)$ theory at vanishing $\theta$-angle \cite{Nekrasov:1996cz}. \subsubsection{$A_2$} As a slightly more involved special unitary example, let us consider the $\mathrm{SU}(3)$ gauge theory with $k_{\rm CS}=1$. The prepotential of $M_{\mathfrak{a}_2}^{\rm trig}$ is given by \begin{equation} F(t_0,t_1,t_2)={\rm e}^{\frac32 t_0}+\frac34 t_0t_2^2-\frac{1}{96}t_1^4+\frac14 t_1^2t_2, \end{equation} with flat coordinates related to the classical Coulomb moduli as \begin{align} (t_0,t_1,t_2)=(\log u_0,u_0^{\frac12} u_1,u_0 u_2). \end{align} As before, the flat metric is anti-diagonal, with non-trivial structure constants \begin{equation} \mathsf{C}_{0i}^j =\left( \begin{array}{ccc} 0 & 0 & 1 \\ \frac{9}{2} {\rm e}^{\frac{3 t_0}{2}} & 0 & 0 \\ \frac{9}{4} {\rm e}^{\frac{3 t_0}{2}} t_1 & \frac{3}{2} {\rm e}^{\frac{3 t_0}{2}} & 0 \\ \end{array} \right)_{ij}, \quad \mathsf{C}_{1i}^j = \left( \begin{array}{ccc} 0 & \frac{1}{3} & 0 \\ 0 & -\frac{t_1}{2} & 1 \\ \frac{3}{2} {\rm e}^{\frac{3 t_0}{2}} & 0 & 0 \\ \end{array} \right), \end{equation} and the Euler vector field is $E={\partial}_{t_0}+t_1/2 {\partial}_{t_1}+t_2 {\partial}_{t_2}$. Finally, the semiclassical coordinates \eqref{eq:utoz} are here given by \begin{equation} z_0=\frac{1}{u_0},\;\;\;\;z_1=\frac{u_2}{u_1^2},\;\;\;\;z_2=\frac{u_1}{u_2^2}.\label{A2z} \end{equation} The strategy in this case follows {\it verbatim} the discussion of the $A_1$ example. As expected, the solution space of \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} is 6-dimensional, linearly generated by one constant, three logarithmic, and two doubly-logarithmic solutions.
By way of example, we obtain for the logarithmic solutions \begin{eqnarray} \Pi_{A_0} &=& \log z_0, \nn \\ \Pi_{A_1} &=& \log \left(z_1\right)-z_2-8 z_2 z_1^3+\frac{20 z_1^3}{3}-2 z_2 z_1^2+3 z_1^2-\frac{3 z_2^2}{2} +z_2^2 z_1+2 z_1+6 z_0^2 z_2^{4/3} z_1^{8/3}+\dots\nn \\ \Pi_{A_2} &=& \log \left(z_2\right)-z_1+2 z_2-\frac{10 z_1^3}{3}-3 z_0^2 z_2^{4/3} z_1^{8/3}+z_2 z_1^2-\frac{3 z_1^2}{2}-8 z_2^3 z_1-2 z_2^2 z_1+\dots \end{eqnarray} The final result for the prepotential then is \begin{eqnarray} -8\pi^3 \mathrm{i} \mathcal{F}_{A_2} &=& (2\log^2q_1 +2\log q_2 \log q_1+4 \log^2q_2)\log q_0 \nn \\ &-& \frac{20 \log^3 q_1}{27}-\frac{10}{9} \log q_2 \log^2 q_1-\frac{8}{9} \log^2 q_2 \log q_1-\frac{16 \log^3 q_2}{27}\nonumber\\ &+& 2\left(q_1+2q_1 q_2+2q_2\right)+\frac{1}{4}\left(q_1^2+q_1^2 q_2^2+q_2^2\right)+\dots \end{eqnarray} Once again we have matched this at the first few orders in $q_i$ with the instanton calculation\footnote{For the $\mathrm{SU}(N)$ theory at an arbitrary Chern--Simons level $k_{\rm CS}$, see \cite[Section 3]{Tachikawa:2004ur} for the perturbative prepotential $\mathcal{F}_{\mathrm{SU}(N)_{k_{\rm CS}}}^{[0]}$ and \cite{Gottsche:2006bm} for the instanton corrections $\mathcal{F}_{\mathrm{SU}(N)_{k_{\rm CS}}}^{[n]}$ with $n>0$.} of the gauge theory prepotential at Chern--Simons level 1. We have verified in the same manner that the same strategy applies to higher-rank cases as well, such as $\mathcal{G}=\mathrm{SU}(4)$ at vanishing CS level, and that our expressions moreover agree with the direct period integral calculations of \cite{Brini:2008rh}. % \subsubsection{$D_4$}\label{sec:D4} Let us now move on to uncharted territory and test the Picard--Fuchs construction of \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} on the first non-trivial, non-unitary case corresponding to $\mathcal{G}=\mathrm{Spin}(8)$. While a direct analysis of period integrals is too hard to carry out in this case, our method allows us to compute their weak coupling expansion efficiently. The prepotential, Euler vector field, and flat coordinates for this case are respectively given by \begin{align} F^{\rm trig}_{\mathfrak{d}_4}=&\frac{{\rm e}^{2 t_0}}{2}+{\rm e}^{t_0} t_1^2+{\rm e}^{t_0} t_3^2+{\rm e}^{t_0} t_4^2+2 {\rm e}^{\frac{t_0}{2}} t_1 t_3 t_4\nonumber\\ &+\frac{1}{2} t_2 t_1^2+\frac{1}{2} t_2 t_3^2+\frac{1}{2} t_2 t_4^2-\frac{t_1^4}{48}-\frac{t_3^4}{48}-\frac{t_4^4}{48}+\frac{t_2^3}{6}+\frac{1}{2} t_0 t_2^2, \label{eq:prepd4} \end{align} \begin{equation} E= {\partial}_{t_0}+\frac{t_1}{2} {\partial}_{t_1}+t_2 {\partial}_{t_2}+\frac{t_3}{2} {\partial}_{t_3}+\frac{t_4}{2} {\partial}_{t_4}, \end{equation} \begin{equation} t_0=\log u_0,\;\;\;\;t_1=u_0^{\frac12} u_1,\;\;\;\;t_2=u_0(u_2+2),\;\;\;\;t_3=u_0^{\frac12} u_3,\;\;\;\;t_4=u_0^{\frac12} u_4. \end{equation} The expansion coordinates around the large complex structure point in the moduli space read, from \eqref{eq:utoz}, \begin{equation} z_0=\frac{1}{u_0},\;\;\;\;z_1=\frac{u_2}{u_1^2},\;\;\;\;z_2=\frac{u_1 u_3 u_4}{u_2^2},\;\;\;\;z_3=\frac{u_2}{u_3^2},\;\;\;\;z_4=\frac{u_2}{u_4^2}. \label{eq:zd4} \end{equation} Explicit expressions for the differential system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} and its solutions are omitted here as they are obtained in a straightforward manner from \eqref{eq:prepd4}--\eqref{eq:zd4}; they are available upon request. Once again our strategy can be shown to retrieve the gauge theory prepotential for this case as well, which we have verified up to one instanton, and up to $\mathcal{O}(z_i^{12})$.
\footnote{As the calculations rapidly become quite cumbersome at such high rank, for our purposes it is quicker to match the special geometry and gauge theory expressions by a partial reverse-engineering of the solutions of \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz}. We computed the single-logarithmic solutions of \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} -- i.e. the mirror map --, and then used these to find a conjectural expression for the dual periods $\Pi_{B_i}$ in terms of the semiclassical B-model variables $z_i$ by employing the known results for the 1-loop and 1-instanton contributions to the gauge theory prepotential. We then verified explicitly in an expansion around $z=0$ that these are indeed solutions of our differential system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz}.} \subsubsection{$E_6$} Finally, we will briefly treat here the exceptional case of $\mathcal{G}=E_6$. From \cite{Brini:2021pix}, the prepotential of $M^{\rm trig}_{\mathfrak{e}_6}$ reads \begin{eqnarray} F^{\rm trig}_{\mathfrak{e}_6} &=& -\frac{t_1^6}{3240}-\frac{1}{648} t_4 t_1^4+\frac{1}{12} {\rm e}^{\frac{t_0}{3}} t_5 t_1^4+{\rm e}^{t_0} t_1^3+{\rm e}^{\frac{t_0}{2}} t_6 t_1^3-\frac{1}{216} t_4^2 t_1^2+\frac{5}{3} {\rm e}^{\frac{2 t_0}{3}} t_5^2 t_1^2 \nn \\ &+& \frac{1}{6} {\rm e}^{\frac{2 t_0}{3}} t_2 t_1^2-\frac{1}{6} {\rm e}^{\frac{t_0}{3}} t_4 t_5 t_1^2+\frac{1}{6} {\rm e}^{\frac{t_0}{6}} t_5^2 t_6 t_1^2+\frac{1}{6} {\rm e}^{\frac{t_0}{6}} t_2 t_6 t_1^2+\frac{1}{12} {\rm e}^{\frac{t_0}{3}} t_5^4 t_1+\frac{1}{12} {\rm e}^{\frac{t_0}{3}} t_2^2 t_1\nn \\ &+& \frac{1}{6} {\rm e}^{\frac{t_0}{3}} t_2 t_5^2 t_1+3 {\rm e}^{\frac{t_0}{3}} t_5 t_6^2 t_1-\frac{1}{3} t_3 t_4 t_1+3 {\rm e}^{\frac{4 t_0}{3}} t_5 t_1-{\rm e}^{\frac{t_0}{2}} t_4 t_6 t_1+6 {\rm e}^{\frac{5 t_0}{6}} t_5 t_6 t_1 \nn \\ &+& \frac{{\rm e}^{2 t_0}}{2}-\frac{t_5^6}{3240}+\frac{1}{648} t_2 t_5^4-\frac{t_6^4}{16}+\frac{t_2^3}{648}-\frac{t_4^3}{648}+ {\rm e}^{t_0} t_5^3+2 {\rm e}^{\frac{t_0}{2}} t_6^3\nn \\ &+& \frac{1}{2} t_0 t_3^2-\frac{1}{216} t_2^2 t_5^2-\frac{1}{6} {\rm e}^{\frac{2 t_0}{3}} t_4 t_5^2+3 {\rm e}^{t_0} t_6^2+ \frac{3}{2} t_3 t_6^2-\frac{1}{6} {\rm e}^{\frac{2 t_0}{3}} t_2 t_4+\frac{1}{12} {\rm e}^{\frac{t_0}{3}} t_4^2 t_5\nn \\ &+& \frac{1}{3} t_2 t_3 t_5+{\rm e}^{\frac{t_0}{2}} t_5^3 t_6-\frac{1}{6} {\rm e}^{\frac{t_0}{6}} t_4 t_5^2 t_6-\frac{1}{6} {\rm e}^{\frac{t_0}{6}} t_2 t_4 t_6+{\rm e}^{\frac{t_0}{2}} t_2 t_5 t_6, \end{eqnarray} with Euler vector field \begin{equation} E = {\partial}_{t_0}+ \frac{t_1}{3} {\partial}_{t_1}+ \frac{2t_2}{3} {\partial}_{t_2}+ t_3 {\partial}_{t_3}+ \frac{2t_4}{3} {\partial}_{t_4}+ \frac{t_5}{3}{\partial}_{t_5} + \frac{t_6}{2} {\partial}_{t_6} \end{equation} and flat and $z$-coordinates given respectively by \begin{eqnarray} & t_0=\log u_0,\quad t_1=u_0^{1/3} u_1,\quad t_2=u_0^{2/3} \left(u_1^2-6 u_2-12 u_5\right), \nn \\ & t_3=u_0 \left(u_3+2 u_1 u_5+4 u_6+3\right), \quad t_4=u_0^{2/3} \left(12 u_1-u_5^2+6 u_4\right),\nn \\ & t_5=u_0^{1/3} u_5, \quad t_6=\sqrt{u_0} \left(u_6+2\right), \end{eqnarray} and \begin{eqnarray} & z_0=\frac{1}{u_0},\quad z_1=\frac{u_2}{u_1^2},\quad z_2=\frac{u_1 u_3}{u_2^2}, \quad z_3=\frac{u_2 u_4 u_6}{u_3^2}, \nn \\ & z_4=\frac{u_3 u_5}{u_4^2}, \quad z_5=\frac{u_4}{u_5^2},\quad z_6=\frac{u_3}{u_6^2}.
\end{eqnarray} As in the previous case of $\mathcal{G}=D_4$, to deal with the complexity of the system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} it is convenient to focus on the calculation of single-logarithmic solutions from which the mirror map can be constructed, with the dual periods being recovered as an {\it a posteriori} check. The $z$-chart differential system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} contains now an unmanageably large number of terms spawned by the change-of-variables \eqref{eq:XYAB}, thereby impeding a direct computational approach based on solving by brute force for the coefficients in the ansatz \eqref{eq:ansatz}. We adopt a hybrid method to circumvent the issue by considering instead the $t$-chart Picard--Fuchs operators \eqref{eq:PF5Eul}--\eqref{eq:PF5prod}, whose expressions are significantly simpler, working out their action on a monomial $\prod_{i=0}^r z_i^{m_i}(t)$, and finally expressing the result in $z$-coordinates to obtain recursion relations for the coefficients $(a,b,c)$ in the $z$-chart ansatz \eqref{eq:ansatz}. Restricting e.g. to the single-logarithmic solutions ($a_{jk}=0$, $b_{l,J}=\delta_{J0}\delta_{li}$ for $i=0, \dots, r$), we obtain from \eqref{eq:PF5Eul} \begin{eqnarray} \left(J_0+2\right)^2 S_0^{-2} \prod_{i=1}^6 S_i^{-2 \mathfrak{q}_i} c_J &=& 4 \bigg[J_2^2+\left(-4 J_3+2 J_4+2 J_6-1\right) J_2+J_4^2+4 J_3^2 \nn \\ &+& J_6^2-J_4+J_3 \left(-4 J_4-4 J_6+2\right)+2 J_4 J_6-J_6\bigg] c_J, \label{eq:receul} \end{eqnarray} where $S_i c_{(J_0, \dots, J_i, \dots J_r)} \coloneqq c_{(J_0, \dots,J_i-1, \dots J_r)}$ is the left-shift in the $i^{\rm th}$ component of the multi-index $J$ in \eqref{eq:ansatz}. The recursions from \eqref{eq:PF5prod} are obtained similarly; the simplest one is obtained for $(i,j)=(4,2)$, where we obtain \begin{eqnarray} 0 &=& \left(J_1-2 J_2+J_3\right) \left(J_3-2 J_4+J_5\right) c_J-2 \left(J_3-2 J_6+1\right) \left(J_2-2 J_3+J_4+J_6+1\right) S_3S_6 c_J \nn \\ &+& 2 \bigg[J_2^2+\left(-4 J_3+2 J_4+2 J_6-13\right) J_2 \nn \\ &+& 4 J_3^2+J_4^2+J_6^2-13 J_4+J_3 \left(-4 J_4-4 J_6+26\right)+2 J_4 J_6-13 J_6+42\bigg] S_1S^{2}_2 S_3^4 S_4^2 S_5 S_6^2 c_J \nn \\ &-&\bigg[J_2^2+\left(-4 J_3+2 J_4+2 J_6+3\right) J_2+4 J_3^2+J_4^2+J_6^2+3 J_4+2 J_4 J_6+3 J_6\nn \\ &-& 2 J_3 \left(2 J_4+2 J_6+3\right)+2\bigg] S_3 c_J. \label{eq:rec42} \end{eqnarray} The initial data of the recursions are fixed by the coefficients $(b_{0,0}, \dots, b_{r,0})$ in \eqref{eq:ansatz}, leading to an $(r+1)$-dimensional vector space of solutions of \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz} with log-singularities as expected: for example the above differential constraint \eqref{eq:PF5prod} for $(i,j)=(4,2)$ sets \begin{equation} c_{(0,0,0,1,0,0,0)}=-1/2 c_{(0,1,2,4,2,1,2)}=-b_{0,2}+2b_{0,3}-b_{0,4}-b_{0,6}, \end{equation} while \eqref{eq:PF5Eul} sets $c_{(2,4,8,12,8,4,6)}=-b_{0,2}+2b_{0,3}-b_{0,4}-b_{0,6}$. The full set of recursions for all $(i,j)$ in \eqref{eq:PF5prod} and the ensuing reconstruction of the solutions is omitted for reasons of space, and is available upon request.
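In practice, recursions of this type are conveniently iterated on the multi-index lattice in order of increasing $|J|=\sum_i J_i$, after re-indexing so that the most-shifted coefficient appears alone on one side. The following \texttt{Python} skeleton (ours; the weight function and shift data are placeholders for the actual expressions such as \eqref{eq:receul}--\eqref{eq:rec42}) illustrates the bookkeeping:
\begin{verbatim}
# Schematic skeleton (ours): iterate a lattice recursion of the shape
#   w(J) * c_J = sum over pairs (shift s, weight a) of a * c_{J-s},
# solving for c_J in increasing |J|; the seed dictionary encodes the
# initial data b_{l,0} fixing the single-logarithmic solutions.
from itertools import product
import sympy as sp

def iterate_recursion(rank, order, w, rhs, seed):
    c = dict(seed)                       # known coefficients c_J
    for total in range(1, order + 1):
        for J in product(range(total + 1), repeat=rank + 1):
            if sum(J) != total or J in c:
                continue
            if w(J) == 0:
                continue    # undetermined here; fixed by other relations
            acc = sp.Integer(0)
            for s, a in rhs(J):
                K = tuple(j - sj for j, sj in zip(J, s))
                if min(K) >= 0:
                    acc += a * c.get(K, sp.Integer(0))
            c[J] = sp.simplify(acc / w(J))
    return c
\end{verbatim}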
\section{Picard--Fuchs equations: the non-simply-laced case} \label{sec:bcfg} The construction of the previous Section only applies to ADE gauge groups: although there exists a definition of an analogous canonical Frobenius manifold structure on the quotient of the reflection representation of extended affine Weyl groups associated to any root system \cite{MR2070050}, including non-simply-laced cases, these are not expected to retrieve the gauge theory periods as solutions of the differential system \eqref{eq:PF5Eulz}--\eqref{eq:PF5prodz}. The reason for this was already hinted at in \cref{sec:SWcurves}: the mirror theorem of \cite{Brini:2021pix} asserts that the Frobenius manifolds $M_\mathfrak{g}^{\rm trig}$ are isomorphic to certain Hurwitz strata associated to the spectral curves of the relativistic Toda chain attached to the co-extended loop group of $\mathcal{G}$, which in the non-relativistic limit reduce to the usual spectral curves of the Toda chain associated to the {\it untwisted} affine Kac--Moody algebra $\mathfrak{g}^{(1)}$. For non-simply-laced $\mathfrak{g}$, this is different from the {\it twisted} Toda chain relevant for the gauge theory: to account for this, it would in fact be natural to speculate that if one replaced, in the Landau--Ginzburg construction of \cite{Brini:2021pix}, the affine relativistic Toda chain with its twisted (Langlands/Montonen--Olive dual) version, then this would yield a ``twisted'' Frobenius manifold $(M_\mathfrak{g}^{\rm trig})^\vee$ from which the gauge theory periods would be retrieved by the same construction of the previous Section. \medskip Both expectations turn out to be false. As we shall see, the natural Frobenius metric associated to the spectral curves of \cref{sec:SWtwToda} is either degenerate or not flat; and perhaps more surprisingly, the twisted relativistic Toda spectral curves will turn out to be the {\it wrong} integrable system for the pure gauge theory in five compactified dimensions, away from the $R_5\to 0$ limit. As such, a new strategy is required to treat the non-simply-laced cases along the same lines as the ADE setting. In the rest of this Section we will describe one such strategy, partly inspired by previous studies of associativity equations for the prepotentials of the 4d and the (perturbative) 5d theory, which also included non-simply-laced cases \cite{Bonelli:1996qh,Hoevenaars:2002wj, Marshakov:1996ae, Marshakov:1997ny,hoevenaars2005wdvv}. \subsection{The Jacobi algebra of a Seiberg--Witten curve}\label{sec:Jacobi alg} By the mirror theorem of \cite{Brini:2021pix}, the structure constants $\mathsf{C}_{ij}^k$ in the previous section can be read off from the spectral curve of the type-$\mathcal{G}^{(1)}$ relativistic Toda chain, $\mathsf{R}_{\mathcal{G},\rho';u}(\mu,\lambda)$, as follows. Let $\nu: \overline{C}^{\rm red}_{\mathcal{G};u} \to \mathbb{P}^1$ be the Cartesian projection to the $\nu$-axis from the perturbative spectral curve $\overline{C}^{\rm red}_{\mathcal{G};u}\coloneqq \overline{\{\widetilde{\mathsf{R}}_{\mathcal{G},\rho';u}(\mu, \nu/u_0) =0\}}$ for a choice of non-trivial irreducible representation $\rho'$: as the construction below is independent of the particular choice, we will henceforth suppress $\rho'$ from our notation.
Then, from \cite[Thm.~3.1]{Brini:2021pix}, we have that \begin{equation} {\partial}_{t_i} \nu(\mu) ~ {\partial}_{t_j} \nu(\mu) = \mathsf{C}_{ij}^k {\partial}_{t_k} \nu(\mu) {\partial}_{t_r} \nu(\mu) + \mathsf{D}_{ij}(\mu) {\partial}_\mu \nu(\mu) \label{eq:LGalg} \end{equation} for some meromorphic function $\mathsf{D}_{ij}(\mu)$, and where the $t_i$-derivatives are taken at constant $\mu$. This states that $M_\mathfrak{g}^{\rm trig}$ is isomorphic, as a holomorphic family of commutative rings, to the Jacobi algebra of the meromorphic projection $\nu$. By the implicit function theorem, we can then rewrite \eqref{eq:LGalg} as \begin{equation} {\partial}_{t_i} \widetilde{\mathsf{R}}_{\mathcal{G};u} ~ {\partial}_{t_j} \widetilde{\mathsf{R}}_{\mathcal{G};u} = \mathsf{C}_{ij}^k~ {\partial}_{t_k} \widetilde{\mathsf{R}}_{\mathcal{G};u}~ {\partial}_{t_r}\widetilde{\mathsf{R}}_{\mathcal{G};u} ~ \mod (\widetilde{\mathsf{R}}_{\mathcal{G};u}, {\partial}_\mu \widetilde{\mathsf{R}}_{\mathcal{G};u}). \label{eq:LGalgpol} \end{equation} This can be phrased more precisely as follows. Let $\widetilde{V}_{(i)}(\widetilde{\mathsf{R}}_{\mathcal{G};u}) \coloneqq {\partial}_{t_i} \widetilde{\mathsf{R}}_{\mathcal{G};u}/{\partial}_{t_r} \widetilde{\mathsf{R}}_{\mathcal{G};u}$. Then, for all $u \in \mathbb{C}^{r+1}$, the tangent space $T_{t(u)} M_\mathfrak{g}^{\rm trig}$ is isomorphic as a commutative, associative unital algebra to $$\mathcal{V}(\widetilde{\mathsf{R}}_{\mathcal{G};u}) \coloneqq \mathrm{span}_\mathbb{C} \{\widetilde{V}_{(i)}\}_{i=0}^r.$$ The latter is obviously a vector subspace of the quotient ring $\mathbb{C}[{\rm e}^{t_0}, t_1, \dots, t_r][\mu,\nu]/\mathcal{I}(\widetilde{\mathsf{R}}_{\mathcal{G};u})$, where $$\mathcal{I}(\widetilde{\mathsf{R}}_{\mathcal{G};u})\coloneqq\left\langle \widetilde{\mathsf{R}}_{\mathcal{G};u}(\mu, \nu/u_0), {\partial}_\mu \widetilde{\mathsf{R}}_{\mathcal{G};u}(\mu, \nu/u_0)\right\rangle,$$ and \eqref{eq:LGalgpol} states that it is actually a subalgebra, closed under product. The structure constants $\mathsf{C}_{ij}^k$ (although not the Frobenius metric yet) of $M_\mathfrak{g}^{\rm trig}$ can be obtained from \begin{equation} \widetilde{V}_{(i)}(\widetilde{\mathsf{R}}_{\mathcal{G};u})\widetilde{V}_{(j)}(\widetilde{\mathsf{R}}_{\mathcal{G};u})={\mathsf{C}}_{ij}^{k}(t) \widetilde{V}_{(k)}(\widetilde{\mathsf{R}}_{\mathcal{G};u})\;\;\;\;{\rm mod} \;\;\;\;\mathcal{I}(\widetilde{\mathsf{R}}_{\mathcal{G};u}),\label{eq:Valg} \end{equation} in a purely algebraic way, given the knowledge of $\widetilde{\mathsf{R}}_{\mathcal{G};u}$ and the unit vector $e=\widetilde{V}_{(r)}(\widetilde{\mathsf{R}}_{\mathcal{G};u})$. \medskip How does this help in dealing with non-simply-laced cases? In all instances considered in \cref{sec:SWcurves}, including non-simply-laced gauge groups, the construction of SW curves for the gauge theory leads to a spectral polynomial of the form $\mathsf{P}_{\mathcal{G};u}(\mu, \lambda) = \widetilde{\mathsf{P}}_{\mathcal{G};u}(\mu,\lambda+q_0/\lambda)$ for some $\widetilde{\mathsf{P}}_{\mathcal{G};u} \in \mathbb{Z}[\mu, \nu; u_0, \dots, u_r]$, exactly as for the untwisted relativistic Toda polynomials relevant for simply-laced $\mathcal{G}$ in the discussion above: we have $\mathsf{P}_{\mathcal{G};u} = \mathsf{Q}_{\mathcal{G};u}$ in the M-theory engineering curves \eqref{eq:QAn}--\eqref{eq:QDn}, and $\mathsf{P}_{\mathcal{G};u} = \mathsf{R}_{\mathcal{G};u}$ (resp. $\mathsf{P}_{\mathcal{G};u} = \mathsf{T}_{\mathcal{G};u}$) for the untwisted (resp.
twisted) relativistic Toda curves of \eqref{eq:untwcharpol} (resp. \eqref{eq:polT}). It is then natural to look at the possibility of replacing the input polynomial $\widetilde{\mathsf{R}}_{\mathcal{G};u}$ in \eqref{eq:Valg} with the Hanany--Witten/relativistic Toda reduced characteristic polynomials $\widetilde{\mathsf{Q}}_{\mathcal{G};u}$/$\widetilde{\mathsf{T}}_{\mathcal{G};u}$, and construct an associated Frobenius manifold out of them. \medskip Indeed, as we shall show in \cref{claim:ideal} below, to all the curves of \cref{sec:SWcurves} we will be able to associate a holomorphic family of commutative and associative algebras via \eqref{eq:Valg}, whose existence is itself non-trivial. But these won't in general come from a Frobenius manifold when $\mathsf{P}_{\mathcal{G};u}\neq \mathsf{R}_{\mathcal{G};u}$: one can check that the construction of \cite{Dubrovin:1992dz,Brini:2021pix} applied to the curves in \cref{sec:SWMth,sec:SWtwToda} for non-simply-laced $\mathcal{G}$ gives rise to a Frobenius metric that is either degenerate (for $\mathsf{Q}_{B_r;u}$ and $\mathsf{T}_{\mathcal{G};u}$) or curved (for $\mathsf{Q}_{C_r;u}$), so there is no clear notion of a privileged ``flat'' coordinate system where something like \eqref{eq:PF5Eul}--\eqref{eq:PF5prod} could hold. These difficulties can however be side-stepped by imposing the constraints of rigid special K\"ahler geometry: we condense this in the following list of statements. \begin{claim}\label{claim:ideal} Let $\widetilde{\mathsf{P}}_{\mathcal{G};u}$ be equal to one of $\widetilde{\mathsf{R}}_{\mathcal{G};u}$, $\widetilde{\mathsf{Q}}_{\mathcal{G};u}$ or $\widetilde{\mathsf{T}}_{\mathcal{G};u}$. Up to affine-linear transformations, there exists a unique chart $\{t^i(u)\}_{i=0}^r$ such that \ben \item there exist holomorphic functions $\mathsf{C}_{ij}^k(t)$ s.t. \begin{align} \widetilde{V}_{(i)}(\widetilde{\mathsf{P}}_{\mathcal{G};u})\widetilde{V}_{(j)}(\widetilde{\mathsf{P}}_{\mathcal{G};u})={\mathsf{C}}_{ij}^{k}(t) \widetilde{V}_{(k)}(\widetilde{\mathsf{P}}_{\mathcal{G};u})\;\;\;\;{\rm mod} \;\;\;\;\mathcal{I}(\widetilde{\mathsf{P}}_{\mathcal{G};u})\label{eq:Valg2} \end{align} \item the periods associated to the SW curve $\{\mathsf{P}_{\mathcal{G};u}(\mu,\lambda)=0\}$ satisfy the system of $2^{\rm nd}$ order PDEs \begin{empheq}[box=\widefbox]{align} \label{eq:PF5prodgen} {\partial}^2_{t_i t_j} \Pi =& \mathsf{C}_{ij}^k(t) {\partial}^2_{t_r t_k} \Pi \end{empheq} \end{enumerate} \vspace{-.75cm} The coordinates $t_i(u)$ are uniquely fixed by imposing that the inverse $u_i(t)$ is a collection of trigonometric polynomials, \begin{equation} u_0={\rm e}^{t_0}, \quad u_i \in \mathbb{C}[{\rm e}^{-t_0}; t_1, \dots, t_r], \label{eq:ttougen} \end{equation} and that single-logarithmic singular solutions of \eqref{eq:PF5prodgen} exist at the point of large complex structure. \label{claim:coord} \end{claim} Our claim therefore postulates the existence of a differential ideal annihilating the periods of the SW curve, which is specified entirely by the product structure on $\mathcal{V}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$ and by the choice of a canonical system of coordinates $t^i(u)$. The latter behaves as a surrogate for the flat coordinates of the previous section, in spite of the lack of a flat non-degenerate Frobenius metric.
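In practice, the ring structure \eqref{eq:Valg2} can be computed purely algorithmically; this is Step 1 of the strategy outlined below. A minimal \texttt{sympy} sketch (ours, not the authors' code; it illustrates the algorithm rather than an optimised pipeline), using as input the $B_2$ curve \eqref{B2curve} and the special coordinates of \cref{sec:B2} below:
\begin{verbatim}
# Sketch (ours) of the Groebner-basis computation of C_ij^k in (eq:Valg2),
# for the B_2 curve (B2curve) in the t-chart of the B_2 subsection.
import sympy as sp

mu, nu, q, t1, t2 = sp.symbols('mu nu q t1 t2')   # q stands for e^{t_0}
u1 = -(t1**2 - t2)/(4*q)
u2 = (t1**2 - 4*q + t2)/(2*q)
Q = ((mu**2 - 1)*(mu**2 + 1)**2*mu*nu
     + u2*mu**4 + u1*(mu**6 + mu**2) + mu**8 + 1)

# reduced Groebner basis of I = <Q, dQ/dmu>, lex order with mu < nu
G = sp.groebner([sp.expand(Q), sp.expand(sp.diff(Q, mu))],
                nu, mu, order='lex')

dQ = [q*sp.diff(Q, q), sp.diff(Q, t1), sp.diff(Q, t2)]  # d/dt0 = q d/dq
C = sp.symbols('C0:3')                                  # unknown C_ij^k

def structure_constants(i, j, r=2):
    expr = sp.expand(dQ[i]*dQ[j]
                     - sum(Ck*dQ[k]*dQ[r] for k, Ck in enumerate(C)))
    _, rem = sp.reduced(expr, list(G.exprs), nu, mu, order='lex')
    eqs = sp.Poly(sp.expand(rem), nu, mu).coeffs()  # must vanish identically
    return sp.solve(eqs, C, dict=True)

print(structure_constants(1, 1))   # e.g. the C_{11}^k(t) of (eq:Valg2)
\end{verbatim}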
As we did for the ADE gauge groups, we will subject \cref{claim:coord} above to a variety of tests, including comparisons with the gauge theory calculations from \cref{sec:blowup} and direct period integral calculations from the spectral curve, giving in turn an effective method to compute the periods in \eqref{eq:PF5prodgen}--\eqref{eq:PF5Eulgen} around the large complex structure limit. We will proceed, accordingly, in three steps. \begin{description} \item[Step 1: the canonical ring structure.] In analogy with the ADE cases, we assume that some of the special coordinates $t_i$ are determined by the same residue formula as \cite[Lemma~4.1]{Brini:2021pix}, which is a specialisation of the residue formulas of \cite[Lecture~5]{Dubrovin:1994hc} for the flat coordinates of a Hurwitz Frobenius manifold. For the examples below, we will indeed find that all $t_i$ (but perhaps $t_r$) are determined by the residue computation. Again similarly to the simply-laced setting, we further impose that $t_r=u_0 (u_r+f(u_1,\dots,u_{r-1}))$, where $f$ is a polynomial in $u_1,\dots,u_{r-1}$. We then proceed to verify the first point in \cref{claim:coord} by an algebraic construction of the ring structure on $\mathcal{V}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$. To do this we first compute a reduced Gr\"obner basis $\mathrm{GB}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$ for the ideal $\mathcal{I}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$, and then use multi-variate polynomial division to compute the reduction of \eqref{eq:Valg2} w.r.t. $\mathrm{GB}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$. Since the latter is a Gr\"obner basis, the reduction is zero iff the above expression is in the ideal $\mathcal{I}(\widetilde{\mathsf{P}}_{\mathcal{G};u})$, and imposing the vanishing of the remainder of the division gives an {\it a priori} highly overconstrained inhomogeneous linear system for the indeterminates $\mathsf{C}_{ij}^{k}$ (a schematic computer-algebra implementation of this step is sketched at the end of \cref{sec:B2}). While for a general family of plane curves the system would admit no solutions, we will always find that, remarkably, for all the spectral polynomials $\widetilde{\mathsf{P}}_{\mathcal{G};u}$ in \cref{sec:SWcurves} a unique solution exists. \item[Step 2: the special coordinates.] We are now in a position to write down the differential system \eqref{eq:PF5prodgen} in a quasi-polynomial $t$-chart. At this point we still have a parametric dependence on the coefficients of the polynomial $f(u_1,\dots,u_{r-1})$, which we fix as follows. We first write \eqref{eq:PF5prodgen} in the $z$-chart defined by \eqref{eq:utoz}, and look for a condition such that solutions of the form \eqref{eq:ansatz} exist. Remarkably, all coefficients of $f(u_1,\dots,u_{r-1})$ turn out to be fixed this way, by just looking at the classical piece of the solutions, i.e., their limit as $z_0\to 0$. \item[Step 3: taming the solutions.] We still lack the analogue of the quasi-homogeneity condition \eqref{eq:PF5Eul} at this stage, which entails that the solution space of \eqref{eq:PF5prodgen} will be {\it a priori} of much higher dimension than the expected $2r+2$. It turns out that this freedom can be fixed entirely\footnote{With the one exception of the $\mathrm{Sp}(1)$ curves, where a small remaining ambiguity will need additional input to be fixed.}, in either of two ways.
For the spectral curves with $\mathcal{G}=B_r$, associated either to the twisted affine relativistic Toda chain or to the M-theory engineering of \cite{Brandhuber:1997ua}, we find that there exist complex numbers $\alpha$, $\beta \in \mathbb{C}$, and $l\in \{0,\dots, r\}$ such that \begin{empheq}[box=\widefbox]{align} \label{eq:PF5Eulgen} \left({\partial}_{t_0}+\sum_{i=1}^r \mathfrak{q}_i^\vee t_i {\partial}_{t_i}\right)^2 \Pi =& \alpha {\rm e}^{\beta t_0}{\partial}^2_{t_l t_r} \Pi. \end{empheq} In the quasi-homogeneity equation \eqref{eq:PF5Eulgen}, the Euler vector field is specified in terms of the dual Coxeter exponents of $\mathcal{G}$ (i.e. the $\alpha$-basis coefficients of the highest short root), and we allow a slightly more general second derivative term on the r.h.s., where possibly $t_l\neq t_r$.\footnote{Although we believe this can be reduced to the form with $t_l=t_r$ as an equation on cohomology (the periods), it will be easier for us to consider the generalised, equivalent equation \eqref{eq:PF5Eulgen} by seeing it as following from a stronger constraint at the level of forms (the SW differential).} For $\mathcal{G}\neq B_r$, we show that the imposition of the existence of a prepotential whose gradient returns the double-log solutions, ${\partial}_{\Pi_{A_i}}\mathcal{F} = \Pi_{B_i}$, and whose analytic structure at weak coupling has the form % \begin{equation} \mathcal{F}_\mathcal{G}(q_0,\dots, q_r) = C_3(\log q_0, \dots, \log q_r) + \sum_{I \in \mathbb{N}^{r+1}} a_I \prod_j q_j^{I_j},\label{eq:Fconstraint} \end{equation} % allows us to fix all remaining ambiguities uniquely. \end{description} Having outlined our computational strategy, we will now verify the above proposal in several examples. \subsection{$B_2$}\label{sec:B2} From \eqref{eq:QBn}, the perturbative spectral curve for $\mathcal{G}=B_2=\mathrm{Spin}(5)$ reads \begin{equation} \widetilde{\mathsf{Q}}_{B_2}(\mu,\nu;u)=\left(\mu^2-1\right) \left(\mu^2+1\right)^2 \mu \,\nu +u_2 \mu^4+u_1 \left(\mu^6+\mu^2\right)+\mu^8+1. \label{B2curve} \end{equation} As far as \cref{claim:coord} is concerned, we find that for this example, and indeed for all $B_r$ cases we have checked (up to $r=4$), the privileged coordinate set $t_i(u)$ including $t_r$ can be found by the same residue calculation of \cite[Lemma~4.1]{Brini:2021pix} applied to the perturbative SW curve \eqref{B2curve}. We thus get \begin{equation} t_0=\log u_0,\;\;\;\;t_1=u_0^{1/2}\sqrt{2 - 2 u_1 + u_2},\;\;\;\;t_2=u_0(2+2u_1+u_2), \end{equation} with inverse \begin{equation} u_0= {\rm e}^{t_0},~u_1 = -\frac{1}{4} {\rm e}^{-t_0} \left(t_1^2-t_2\right),~u_2= \frac{1}{2} {\rm e}^{-t_0} \left(t_1^2-4 {\rm e}^{t_0}+t_2\right). \end{equation} A reduced Gr\"obner basis for the discriminant ideal $\mathcal{I}(\widetilde{\mathsf{Q}}_{B_2;u})$ can be computed using Buchberger's algorithm with respect to the lexicographic monomial ordering $\mu \prec \nu$. We obtain $\mathrm{GB}(\widetilde{\mathsf{Q}}_{B_2;u}) = \{P(\mu), Q(\mu,\nu)\}$, with \begin{eqnarray} P(\mu) &=& \left(\mu ^2+1\right)^4 \left(4 \left(\mu ^2-1\right)^2 {\rm e}^{t_0}-\mu ^2 t_2\right)+\mu ^2 \left(\mu ^2-1\right)^2 \left(\mu ^4-6 \mu ^2+1\right) t_1^2, \nn \\ Q(\mu,\nu) &=& \mu \Bigg[\left(39 \mu ^8-313 \mu ^6+557 \mu ^4-347 \mu ^2+64\right) t_1^2\nn \\ &+& 4 (\mu^2 -1) \left(39 \mu ^8+116 \mu ^6+78 \mu ^4-68 \mu ^2-101\right) {\rm e}^{t_0} \nn \\ &-& \left(39 \mu ^8+155 \mu ^6+233 \mu ^4+165 \mu ^2+64\right) t_2\Bigg]+128\nu.
\quad \end{eqnarray} Reducing \eqref{eq:Valg2} modulo the ideal $\mathcal{I}(\widetilde{\mathsf{Q}}_{B_2;u})=\left\langle \mathrm{GB}(\widetilde{\mathsf{Q}}_{B_2;u}) \right\rangle$ w.r.t. the reduced basis $\mathrm{GB}(\widetilde{\mathsf{Q}}_{B_2;u})$ and solving for $\mathsf{C}_{ij}^k$ (a mechanical computation, for which we sketch a computer-algebra implementation at the end of this subsection), we obtain, for the structure constants of \eqref{eq:PF5prodgen}, \begin{equation} \mathsf{C}_{0i}^j=\left( \begin{array}{ccc} -t_1^2-16 {\rm e}^{t_0}+t_2 & -16 {\rm e}^{t_0} t_1 & 16 {\rm e}^{t_0} t_2 \\ -2 t_1 & -16 {\rm e}^{t_0} & 0 \\ 1 & 0 & 0 \\ \end{array} \right)_i^j, \end{equation} \begin{equation} \mathsf{C}_{1i}^j=\left( \begin{array}{ccc} -2 t_1 & -16 {\rm e}^{t_0} & 0 \\ -2 & -t_1 & 2 t_2 \\ 0 & 1 & 0 \\ \end{array} \right)_i^j. \end{equation} Finally, let us show how to derive the quasi-homogeneity condition \eqref{eq:PF5Eulgen}. We will directly impose that \eqref{eq:PF5Eulgen} holds as a constraint on the Seiberg--Witten differential $\log \mu\, \mathrm{d}\log \lambda$ for some $\alpha$, $\beta$ and $l$. The full SW curve $\overline{\mathcal{C}}_{B_r;u}$ is hyperelliptic, with a degree two map $\mu:\overline{\mathcal{C}}_{B_r;u} \to \mathbb{P}^1$ realising it as a branched cover of the complex line. In particular there is a subset of homology cycles $\{\gamma_1, \dots, \gamma_{g(\overline{\mathcal{C}}_{B_r;u})}\}$ in $\overline{\mathcal{C}}_{B_r;u}$ such that \begin{equation} \Pi_{\gamma_i} \coloneqq \oint_{\gamma_i} \log \mu\, \mathrm{d}\log \lambda = \int_{l_i} \left(\log \lambda_+ -\log \lambda_-\right) \mathrm{d} \log \mu \label{eq:hyperPi} \end{equation} for chains $l_i=\mu(\gamma_i) \subset \mathbb{P}^1$, where an integration by parts has been performed. Let now $\mathfrak{D}=\sum_{ij} \mathsf{a}_{ij}(u) {\partial}^2_{u_i u_j} + \sum_i \mathsf{b}_i {\partial}_{u_i}$ be a second order differential operator whose symbol $\sigma(\mathfrak{D})$ has vanishing constant term. Then it is easy to show that $\mathfrak{D}(\log \lambda_+(\mu) -\log \lambda_-(\mu)) = P_1/P_2^{3/2}$ for $P_1,P_2 \in \mathbb{C}[\mu; u_0, \dots, u_r]$. Imposing that $P_1 \equiv 0$ identically in $\mu$ gives an in principle overconstrained system of equations in the indeterminates $\mathsf{a}$, $\mathsf{b}$. Happily, this can be shown to non-trivially admit a 1-dimensional space of solutions, parametrised by overall rescalings of $\mathfrak{D}$: in flat coordinates and at the level of periods, this gives \begin{equation} \left({\partial}_{t_0} + \frac12 t_1{\partial}_{t_1} + t_2 {\partial}_{t_2} \right)^2\Pi =-256 {\rm e}^{-t_0}\partial_{t_0}\partial_{t_2}\Pi, \end{equation} completing the construction of the PF system in \eqref{eq:PF5prodgen} and \eqref{eq:PF5Eulgen}. \medskip Armed with this, we proceed as in the analysis for the simply-laced cases. We find three linearly independent single-log solutions in the $z_i$-coordinates \eqref{eq:utoz}, and two double-log solutions, confirming our \cref{claim:coord}. Up to 2-instanton corrections, we find that the perturbative level is unambiguously fixed by \eqref{eq:PF5prodgen}, while order by order in $z_0$, each solution comes with finitely many constants which are equivalently fixed by the quasi-homogeneity equation \eqref{eq:PF5Eulgen} or by imposing the existence of the prepotential.
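The linear-algebra step above is mechanical enough to be scripted in any computer algebra system. The following \texttt{sympy} snippet is a purely illustrative sketch, making no attempt at efficiency, and assuming \texttt{sympy}'s Gr\"obner machinery handles the transcendental coefficients through its generic expression domain; it sets up the ideal $\mathcal{I}(\widetilde{\mathsf{Q}}_{B_2;u})$ in the $t$-chart and solves the remainder conditions for the index pair $(i,j)=(0,0)$, with the values quoted in the first row of $\mathsf{C}_{0i}^j$ above as the expected output.
\begin{verbatim}
import sympy as sp

mu, nu, t0, t1, t2 = sp.symbols('mu nu t0 t1 t2')
c = list(sp.symbols('c0:3'))          # unknowns C_{00}^k, k = 0, 1, 2

# Inverse coordinate change u(t) for B2, as quoted in the text
u0 = sp.exp(t0)
u1 = -sp.Rational(1, 4)*sp.exp(-t0)*(t1**2 - t2)
u2 = sp.Rational(1, 2)*sp.exp(-t0)*(t1**2 - 4*sp.exp(t0) + t2)

# Reduced spectral polynomial, with nu -> nu/u0 as in the ideal I
Q = sp.expand((mu**2 - 1)*(mu**2 + 1)**2*mu*(nu/u0)
              + u2*mu**4 + u1*(mu**6 + mu**2) + mu**8 + 1)

# Groebner basis of <Q, dQ/dmu>, lex order with mu < nu
G = sp.groebner([Q, sp.diff(Q, mu)], nu, mu, order='lex')

# t-derivatives of Q at fixed (mu, nu): the vectors V_(i) up to the
# common normalisation by dQ/dt2
V = [sp.diff(Q, t) for t in (t0, t1, t2)]

# V_0 V_0 - sum_k c_k V_k V_2 must lie in the ideal: its remainder
# under division by G has to vanish identically in (mu, nu)
E = sp.expand(V[0]*V[0] - sum(ck*Vk for ck, Vk in zip(c, V))*V[2])
rem = G.reduce(E)[1]
print(sp.solve(sp.Poly(rem, mu, nu).coeffs(), c))
# expected: c0 = -t1**2 - 16*exp(t0) + t2,
#           c1 = -16*exp(t0)*t1, c2 = 16*exp(t0)*t2
\end{verbatim}
Repeating the same linear solve for the remaining index pairs $(i,j)$ reconstructs the full set of $\mathsf{C}_{ij}^k$, giving a direct check of the ideal membership asserted in \cref{claim:ideal}.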
We find that the final result, \begin{eqnarray} -4\pi^3 \mathrm{i} \mathcal{F}_{B_2} &=& \frac{5}{6} \log ^3q_2+\log q_0 \log ^2q_2+\frac{5}{4} \log q_1 \log ^2q_2+\frac{3}{4} \log ^2q_1 \log q_2+\log q_0 \log q_1 \log q_2 \nn \\ &+& q_2 +q_1 q_2+q_1 q_2^2 +\frac{q_2^2}{8} +\frac{q_2^3}{27} +\frac{q_2^4}{64}+\frac{q_1^2 q_2^2}{8}+\dots \nn \\ &+& q_0 \Big( q_1 q_2 +3 q_1^2 q_2 +5 q_1^3 q_2+4 q_1^2q_2^2 +7 q_1^4 q_2+8 q_1^3q_2^2+3 q_1^2q_2^3 +9 q_1^5q_2 +12 q_1^4q_2^2 \nn \\ &+& 9 q_1^3 q_2^3 +11 q_1^6q_2 +16 q_1^5q_2^2 +15 q_1^4q_2^3 +8 q_1^3q_2^4+\dots\Big) \end{eqnarray} is in precise agreement with the IMS, 1-loop, and 1-instanton prepotential, which we have checked up to $\mathcal{O}(q_i^8)$. \subsection{$B_3$} The reduced characteristic polynomial for the M-theory curve of $\mathcal{G}=B_3$ is given from \eqref{eq:QBn} as \begin{align} \widetilde{\mathsf{Q}}_{B_3;u}(\mu,\nu)=&(\mu^2-1) \mu^3 \left(\mu^2+1\right)^2 \nu \nonumber\\ &+u_3 \mu^6+u_1 \left(\mu^{10}+\mu^2\right)+u_2 \left(\mu^8+\mu^4\right)+\mu^{12}+1. \end{align} We follow exactly the same procedure as for $\mathcal{G}=B_2$ to compute the structure constants of the Jacobi algebra \eqref{eq:Valg} and the flat coordinates. We omit details and present the final result for both. For the coordinates, we get \begin{equation} t_0=\log u_0,~ t_1=u_0^{\frac23}(u_1-1),~t_2=u_0^{1/2} (2 u_1-2u_2 + u_3-2)^{1/2}, ~t_3=u_0(2+2u_1+2u_2+u_3), \end{equation} with polynomial inverses \begin{equation} u_0= {\rm e}^{t_0},~u_1= {\rm e}^{-\frac{2 t_0}{3}} t_1+1, ~u_2= \frac{1}{4} {\rm e}^{-t_0} \left(t_3-t_2^2\right)-1,~u_3= \frac{1}{2} {\rm e}^{-t_0} t_2^2-2 {\rm e}^{-\frac{2 t_0}{3}} t_1+\frac{1}{2} {\rm e}^{-t_0} t_3-2, \end{equation} while the structure constants in the $t$-chart read \begin{equation} \mathsf{C}_{00}^j={ \left( \begin{array}{c} \frac{4 {\rm e}^{\frac{t_0}{3}} t_1 \left(t_1-24 {\rm e}^{\frac{t_0}{3}}\right)+3 t_3-3 t_2^2}{9} \\ \frac{12 {\rm e}^{\frac{2 t_0}{3}} t_1^2+288 {\rm e}^{t_0} t_1+3 \left(t_3-t_2^2\right) t_1-432 {\rm e}^{\frac{4 t_0}{3}}+{\rm e}^{\frac{t_0}{3}} \left(18 \left(5 t_2^2+t_3\right)-8 t_1^3\right)}{27} \\ \left(16 {\rm e}^{t_0}-\frac{16}{3} {\rm e}^{\frac{2 t_0}{3}} t_1\right) t_2 \\ \frac{16 {\rm e}^{\frac{2 t_0}{3}}}{3} \end{array} \right)_j } \end{equation} \begin{equation} \mathsf{C}_{1i}^j=\left( \begin{array}{cccc} \frac{4}{3} {\rm e}^{\frac{t_0}{3}} \left(t_1-6 {\rm e}^{\frac{t_0}{3}}\right) & \frac{1}{9} \left(-8 {\rm e}^{\frac{t_0}{3}} t_1^2-36 {\rm e}^{\frac{2 t_0}{3}} t_1+72 {\rm e}^{t_0}-3 t_2^2+3 t_3\right) & -\frac{16}{3} {\rm e}^{\frac{2 t_0}{3}} t_2 & \frac{16}{3} {\rm e}^{\frac{2 t_0}{3}} t_3 \\ 4 {\rm e}^{\frac{t_0}{3}} & -\frac{4}{3} {\rm e}^{\frac{t_0}{3}} \left(2 t_1+9 {\rm e}^{\frac{t_0}{3}}\right) & 0 & 0 \\ 0 & -2 t_2 & -16 {\rm e}^{\frac{2 t_0}{3}} & 0 \\ 0 & 1 & 0 & 0 \\ \end{array} \right)_i^j \end{equation} \begin{equation} \mathsf{C}_{2i}^j= \left( \begin{array}{cccc} -2 t_2 & 8 {\rm e}^{\frac{t_0}{3}} t_2 & 16 {\rm e}^{t_0}-\frac{32}{3} {\rm e}^{\frac{2 t_0}{3}} t_1 & 0 \\ 0 & -2 t_2 & -16 {\rm e}^{\frac{2 t_0}{3}} & 0 \\ -6 & 2 \left(t_1+6 {\rm e}^{\frac{t_0}{3}}\right) & -t_2 & 2 t_3 \\ 0 & 0 & 1 & 0 \\ \end{array} \right)_i^j, \end{equation} and $\mathsf{C}_{3i}^j=\delta_{i}^j$. As for the $B_2$ case, we also look for a smaller set of differential equations satisfied directly by the SW differential at fixed $\mu$.
It turns out that there is a four-dimensional space of differential operators annihilating \eqref{eq:hyperPi} in this case, and one basis element is of the desired form: \begin{equation} \left({\partial}_{t_0} + \frac23 t_1{\partial}_{t_1} +\frac12 t_2 {\partial}_{t_2} + t_3 {\partial}_{t_3} \right)^2\Pi=-4096 {\rm e}^{-t_0/3}\partial_{t_1}\partial_{t_3}\Pi. \end{equation} Once again the corresponding solutions are found to have the desired asymptotic behaviour at infinity in the Coulomb branch. We verified that the resulting prepotential agrees with the gauge theory calculation up to 1-instanton corrections and up to order $\mathcal{O}(q_i^7)$, $i=1,2,3$. \subsection{$C_1$ at $\theta=0$} For symplectic gauge groups, $\mathcal{G}=C_r$, we have the two different sets of curves \eqref{eq:QCnb} and \eqref{eq:QCny0}--\eqref{eq:QCnypi}, from \cite{Brandhuber:1997ua} and \cite{Hayashi:2017btw,Li:2021rqr} respectively. We consider here, for the purpose of illustration, the rank one case at different theta angles. \medskip For $\theta=0$, \eqref{eq:QCny0} gives \begin{equation} \widetilde{\mathsf{Q}}_{C_1;u,0}(\mu,\nu/u_0)=\mu^3 \nu+\left(\mu^2-1\right)^2 \left(u_1 \mu+\mu^2+1\right)+\frac{\mu^2 \left(\mu^2+1\right)}{u_0}. \end{equation} For the coordinates $t^i(u)$ and the $t$-chart structure constants, we find \begin{equation} t_0=\log u_0,\;\;\;\;t_1=u_0 u_1, \end{equation} \begin{equation} \mathsf{C}_{0i}^j=\left( \begin{array}{cc} -\frac{2 t_1}{3} & \frac{1}{3} {\rm e}^{t_0} \left(4 {\rm e}^{t_0}-1\right) \\ 1 & 0 \\ \end{array} \right)_i^j, \end{equation} and again $\mathsf{C}_{1i}^j=\delta_i^j$.\\ The resulting Picard--Fuchs system is superficially different from the one of $A_1$ in \eqref{A1Jacobi}. Moreover, the absence of an analogue of the quasi-homogeneity condition \eqref{eq:PF5Eulgen} leads to a solution space of dimension higher than $2r+2=4$: while for higher rank the imposition of the existence of a prepotential is sufficient to project to a solution space of the correct dimension, the case of rank one is pathological, and leads to a small finite ambiguity at every instanton level. In this case, perturbative parts and 1-instanton corrections are unambiguously fixed, but for higher instanton corrections we find two free parameters at each even order, and none at odd orders, which we fix by matching with a direct period expansion from the curve. By doing so, we verify that the resulting prepotential is in agreement with the $\mathrm{SU}(2)$ calculation at vanishing $\theta$-angle, which we confirmed in an expansion at weak coupling up to 5-instantons. \subsection{$C_1$ at ${\theta=\pi}$} \label{sec:c1pi} We will jointly consider in this section the curves \eqref{eq:QCnypi} and \eqref{eq:QCnb}. We will show that, while formally different, these surprisingly give rise to the same algebra structure on the discriminant of the perturbative SW curve, and therefore to the same differential system \eqref{eq:PF5prodgen}. As in the previous Section, however, in this rank one situation there will be a finite (two-dimensional) ambiguity at each order of the $q_0$ expansion of the solutions, which as for $\theta=0$ we fix from a direct period calculation from the full SW curve. It turns out that the free parameters thus fixed are {\it different} according to which curve one picks: the respective prepotentials will also be different, with only the curve \eqref{eq:QCnypi} returning the correct gauge theory prepotential.
\\ The perturbative SW curve \eqref{eq:QCnb} of \cite{Brandhuber:1997ua} reads \begin{equation} \widetilde{\mathsf{Q}}_{C_1;u}^\flat(\mu,\nu)=\mu^3 \nu+\left(\mu^2-1\right)^2 \left(u_1 \mu-\mu^2-1\right),\label{C1} \end{equation} whereas the $C_1^{\theta=\pi}$ curve of \cite{Hayashi:2017btw} is given by \begin{equation} \widetilde{\mathsf{Q}}_{C_1;u,\pi}(\mu,\nu)=\mu^3\nu-\left(\mu^2-1\right)^2 \left(u_1 \mu+\mu^2+1\right)+\frac{2 \mu^3}{u_0}.\label{C10} \end{equation} Note that, up to rescalings of $\nu$ and $\mu$, the curves differ only by the last term, which depends on $u_0$ and is therefore proportional to the gauge theory scale: this implies that the 1-loop prepotentials agree for these two curves. It is however less trivial to see whether this equality survives instanton corrections. In the same special coordinates as for $\mathcal{G}=A_1$ and $\mathcal{G}=C_1^{\theta=0}$, \begin{equation} t_0=\log u_0,\;\;\;\;t_1=u_0 u_1, \end{equation} we find that the structure constants for both curves indeed surprisingly agree: \begin{equation} {\mathsf{C}}_{0i}^j=\left( \begin{array}{cc} -\frac{2 t_1}{3} & \frac{4 {\rm e}^{2 t_0}}{3} \\ 1 & 0 \\ \end{array} \right),\label{C1Jacobi} \end{equation} and ${\mathsf{C}}_{1i}^j=\delta_i^j$. We then solve \eqref{eq:PF5prodgen} in an expansion in $z_0=1/u_0$, $z_1=1/\sqrt{u_1}$ around $(z_0,z_1)=(0,0)$. As before, in this rank-1 case the imposition of the existence of a prepotential does not give enough constraints, and the equations \eqref{eq:PF5prodgen} leave a two-dimensional ambiguity at every instanton level, which we fix, at any given order in $q_0$, by an expansion of the period integrals to leading order in $z_1$. It turns out that different values are obtained for these free parameters according to whether one picks \eqref{eq:QCnypi} or \eqref{eq:QCnb}: for example, for the non-trivial single-log solution of \eqref{eq:PF5prodgen}, we find \begin{equation} \Pi_A = \log z_1+z_1^2+C_1 z_0 z_1^3+\frac{3 z_1^4}{2}+6 C_1 z_0 z_1^5+z_1^6 \left(C_2 z_0^2+\frac{10}{3}\right)+30 C_1 z_0 z_1^7+z_1^8 \left(14 C_2 z_0^2+\frac{35}{4}\right)+\mathcal{O}\left(z_1^9\right) \end{equation} where $(C_1,C_2,\dots)=(2,15,\dots)$ for \eqref{eq:QCnypi} and $(C_1, C_2,\dots)=(0,5,\dots)$ for \eqref{eq:QCnb}. These impact the prepotential directly: for the curve \eqref{eq:QCnypi} we find \begin{eqnarray} -4 \pi^3 \mathrm{i}\mathcal{F}_{C_1,\theta=\pi} &=& \frac{1}{8} \log ^2 q_1\log q_0+\frac{1}{12}\log^3 q_1 + q_1+\frac{q_1^2}{8}+\frac{q_1^3}{27}+\frac{q_1^4}{64}+ \dots \nn \\ &-& \frac{q_0}{2} \left(1+ 3 q_1 +5 q_1^2+ 7 q_1^3+ 9 q_1^4+ 11 q_1^{5}+\dots\right) \nn \\ &+& q_0^2 \left(-\frac{q_1}{16}+ \frac{45}{16} q_1^2 + 16 q_1^3 + \frac{875}{16} q_1^4 + 144 q_1^5+ \dots \right) \label{eq:prepC1pi}, \end{eqnarray} % in precise agreement with the genus zero topological string calculation from the engineering geometry $K_{\mathbb{F}_1}$ \cite{Chiang:1999tz}. For the curve \eqref{eq:QCnb} we instead get the correct expression for the 1-loop term, but disagreement is already found at the 1-instanton level for all terms, where e.g. the $[q_0^1 q_1^0]$ coefficient in the expansion of the prepotential is predicted to vanish. This confirms the tension between the results of \cite{Brandhuber:1997ua} on one hand and \cite{Hayashi:2017btw,Li:2021rqr} on the other, ruling out the curve \eqref{eq:QCnb} of \cite{Brandhuber:1997ua} as a candidate Seiberg--Witten geometry for the $\mathrm{Sp}(1)$ theory with no flavours.
\subsection{The relativistic Toda chain on a twisted loop group} \label{sec:PFToda} \subsubsection{$B_2$} From \eqref{eq:twistedB2curve}, the spectral curve of the twisted $B_2$ relativistic Toda chain can be obtained from the reduced characteristic polynomial \begin{equation} \widetilde{\mathsf{T}}_{B_2;u}(\mu,\nu)=u_2 \mu^4-u_1 \left(\mu^3+\mu\right)+\mu^4+1+\left(\mu^2-1\right) \mu \nu.\label{twistedB2} \end{equation} Note that \eqref{twistedB2} is different from \eqref{B2curve}, even after the natural redefinition $\mu \to \sqrt{\mu}$. In the non-relativistic limit, the full curve $\mathsf{T}_{B_2;u}(\mu,\lambda)=\widetilde{\mathsf{T}}_{B_2;u}(\mu,\lambda +q_0/\lambda)=0$ reduces to the four-dimensional $(B_2^{(1)})^\vee = A_3^{(2)}$ curve of \cite{Martinec:1995by}. \medskip Even though $t_2$ is not determined by the residue formula of \cite[Lemma~4.1]{Brini:2021pix}, we can still find it by applying the strategy outlined above. Omitting computational details, the special geometry condition on the dual periods gives the following expressions for the $t$-chart: \begin{equation} t_0=\log u_0,\;\;\;\;t_1=u_0 u_1,\;\;\;\;t_2=u_0(2+u_1+u_2). \end{equation} The Gr\"obner basis calculation for $\mathcal{I}\big(\widetilde{\mathsf{T}}_{B_2;u}\big)$ then gives \begin{eqnarray} & & \mathrm{GB}\big(\widetilde{\mathsf{T}}_{B_2;u}\big) = \Big\{{\rm e}^{-t_0} \left(\left(\mu ^2+4 \mu +1\right) \mu ^2 t_1+\left(\mu ^2+1\right) \left(\left(\mu ^2-1\right)^2 {\rm e}^{t_0}-\mu ^2 t_2\right)\right), \nn \\ & & {\rm e}^{-t_0} \left(2 \nu +3 \mu ^5 {\rm e}^{t_0}-2 \mu ^3 {\rm e}^{t_0}-\left(3 \mu ^2+4\right) \mu t_2+\left(3 \mu ^3+12 \mu ^2+4 \mu +2\right) t_1-\mu {\rm e}^{t_0}\right)\Big\},\quad \end{eqnarray} from which the structure constants of the Jacobi algebra are read off as \begin{equation} \mathsf{C}_{0i}^j= \left( \begin{array}{ccc} -t_1-4 {\rm e}^{t_0}+t_2 & 4 {\rm e}^{t_0} t_1 & 4 {\rm e}^{t_0} t_2 \\ -1 & t_2-t_1 & 3 t_1+t_2 \\ 1 & 0 & 0 \\ \end{array} \right), \end{equation} \begin{equation} \mathsf{C}_{1i}^j= \left( \begin{array}{ccc} -1 & t_2-t_1 & 3 t_1+t_2 \\ {\rm e}^{-t_0} & -2 & 3 \\ 0 & 1 & 0 \\ \end{array} \right). \end{equation} As for the $B_2$ curve of \cite{Brandhuber:1997ua} considered in Section~\ref{sec:B2}, since $\overline{\{\widetilde{\mathsf{T}}_{B_2;u}(\mu, \lambda+q_0/\lambda)=0\}}$ is hyperelliptic, we can directly derive a quasi-homogeneity equation satisfied by the Seiberg--Witten differential in the form \eqref{eq:PF5Eulgen}: \begin{equation} L_E^2\Pi=4 {\rm e}^{-t_0}\partial_{t_0}\partial_{t_2}\Pi. \end{equation} With the above in place, it is now straightforward to compute the prepotential. Surprisingly, this {\it disagrees} with the gauge theory result, away from the four-dimensional $R_5\to 0$ limit, already at the 1-loop level.
We find: \begin{eqnarray} -4\pi^3\mathrm{i} \mathcal{F}_{B_2} &=& \frac{\log^3 q_1}{3}+\frac{1}{2} \log q_0 \log^2 q_1+\frac{1}{2} \log^2 q_1\log q_2+\frac{3}{8} \log q_1\log^2 q_2 \nn \\ &+& \frac{1}{2} \log q_0 \log q_1\log q_2 +\frac{\log^3 q_2}{8}+\frac{1}{4} \log q_0 \log^2 q_2 \nn \\ &+& \frac{q_2}{4}+\frac{q_2^2}{32}+\frac{q_2^3}{108}+q_1+q_1 q_2+\frac{ q_1^2}{8}+\frac{ q_1^2q_2}{4}+\frac{ q_1^2q_2^2}{8} + \frac{q_1^3}{27}+\frac{q_1^3 q_2^3}{27} +\cdots \nn \\ &+& q_0\left(\frac{1}{2} q_1^2 q_2+q_1^3 q_2+\frac{3}{2} q_1^4q_2+ q_1^3q_2^2+2 q_1^5q_2+2 q_1^4q_2^2 +\frac{5}{2} q_1^6q_2 +3 q_1^5q_2^2+\frac{3}{2} q_1^4q_2^3+\dots\right) \nn \\ &+& q_0^2\left(\frac{1}{32} q_1^4 q_2^2+\frac{3}{4} q_1^5q_2^2+\frac{1}{16} q_1^4q_2^3+\frac{65}{16} q_1^6q_2^2+ q_1^5q_2^3+\frac{1}{8} q_1^4q_2^4+\frac{55}{4} q_1^7q_2^2+\frac{99}{16} q_1^6q_2^3+\dots\right). \nn \\ \label{weightedF0B2} \end{eqnarray} We see from \eqref{weightedF0B2} that the W-boson multi-cover contributions to the prepotential take the form $\sum_{\alpha \in \Delta^+(B_2)} 4/(\alpha, \alpha)^2\, \mathrm{Li}_3(q^\alpha)$, where contributions of long roots appear weighted by their inverse square length ($=1/4$), instead of appearing with weight one. Note that this is not incompatible with the curve being correct in the four-dimensional limit, as such a weighting factor can be reabsorbed when $R_5\to 0$ by a rescaling of the Coulomb moduli and an immaterial quadratic shift of the prepotential. It should not come as much of a surprise that the higher order instanton terms also disagree with the gauge theory calculation from the blowup formulas \eqref{1instanton}. \medskip At this point one might wonder whether the mismatch with the instanton calculation is to be ascribed to a failure of our formalism in this case, rather than to a genuine deviation of the twisted Toda spectral curve from the correct description of the low energy theory. A good litmus test is to contrast \eqref{weightedF0B2} with a direct period integral calculation from the curve: as the discrepancy with the gauge theory prepotential affects the perturbative level as well, this can be probed already in the $q_0\to 0$ limit, where by \eqref{eq:lambdatw} the spectral curve reduces to the vanishing locus of \eqref{twistedB2}. This is a rational curve, as the $\mu$-projection is unramified and maps it isomorphically to a $\mathbb{P}^1$.
As a result, the general expression for the triple derivatives of the prepotential boils down to a sum of residues at the divisors of zeroes and poles of $\mu$ and $\nu$: \begin{eqnarray} {\partial}^3_{\log q_i, \log q_j, \log q_k} \mathcal{F}_{B_2} &=& \sum_{\mathrm{d} \nu=0} \frac{q_i q_j q_k{\partial}_{q_i}(\log\mu~\mathrm{d}\log\nu){\partial}_{q_j}(\log\mu~\mathrm{d}\log\nu){\partial}_{q_k}(\log\mu~\mathrm{d}\log\nu)}{\mathrm{d} \log \mu ~\mathrm{d} \log \nu} \nn \\ &=& \sum_{\nu,\mu\in \{0,\infty\}} \frac{q_i q_j q_k{\partial}_{q_i}(\log\nu~\mathrm{d}\log\mu){\partial}_{q_j}(\log\nu~\mathrm{d}\log\mu){\partial}_{q_k}(\log\nu~\mathrm{d}\log\mu)}{\mathrm{d} \log \mu ~\mathrm{d} \log \nu}\nn \\ \label{eq:tripleB2} \end{eqnarray} where in going from the first to the second line we have turned the contour around in the $\mu$-plane and picked up the poles of the integrand in the complement of $\{\mathrm{d} \nu(\mu)=0\}$; we have further used the fact that $\{\mathrm{d} \mu(\nu)=0\}=\emptyset$; and we have finally employed the `thermodynamic identity' ${\partial}_{q_i} \mu\, \mathrm{d}\nu=-{\partial}_{q_i} \nu\, \mathrm{d}\mu$ \cite[Lecture~5]{Dubrovin:1994hc}. The residues in \eqref{eq:tripleB2} are then immediate to compute as, from \eqref{twistedB2}, they are just residues of the rational function $\nu(\mu)$ at $\mu=0$, $\infty$, and ${\rm e}^{\pi\mathrm{i} k/2}$, $k=0,1,2,3$. This returns the $q_0=0$ part of \eqref{weightedF0B2}, confirming the correctness of the proposed Picard--Fuchs system, and with it the discrepancy with the gauge theory. \subsubsection{$G_2$} Our last example is the spectral curve geometry of the twisted relativistic Toda chain associated to the $G_2$ root lattice. The reduced characteristic polynomial is given by \begin{eqnarray} \widetilde{\mathsf{T}}_{G_2;u}(\mu,\nu) &=& (\mu-1)^2 \left[u_2 (\mu+1)^2 \mu^2-u_1^2 \mu^3-u_1 \left(\mu^5+\mu\right)+\sum_{i=0}^6 \mu^i\right]\nn \\ &+& \nu(\mu-1)^2 \mu \left(\mu^4+2 \mu^3-u_1 \mu^2+\mu^2+2 \mu+1\right)-\nu^2 \mu^3 \left(\mu^2+\mu+1\right) \nn \\ &+& \frac{3 (\mu+1)^2 \mu^3}{u_0^2}. \end{eqnarray} The corresponding spectral curve $\{\widetilde{\mathsf{T}}_{G_2;u}(\mu,\lambda+q_0/\lambda)=\mathsf{T}_{G_2;u}(\mu,\lambda)=0\}$ has already appeared in the literature as a factor of the spectral curve for $\mathrm{SU}(N)$ Chern--Simons theory on the Seifert manifold $\Sigma_{2,3,3} \simeq S^3/\mathbb{T}_{24}$, where $\mathbb{T}_{24}$ is the binary tetrahedral group: the spectral parameter dependence of \eqref{eq:lambdatw} is indeed the same as that of \cite[Eq.~(6.33)--(6.35)]{Borot:2015fxa}. This is large-$N$ dual to the topological A-string on the quotient $[\mathcal{O}^{\oplus 2}_{\mathbb{P}^1}(-1)/\mathbb{T}_{24}]$ of the resolved conifold, which engineers the pure gauge theory with $\mathcal{G}=E_6$. The family of spectral curves of the twisted $G_2$ Toda chain then sits as a three-parameter subfamily of the $E_6$ five-dimensional SW curves. It is furthermore immediate to see that it returns the affine $G_2^\vee$ periodic Toda spectral curve of \cite{Martinec:1995by} in the $R_5 \to 0$ limit.
\\ The quasi-polynomiality condition and the condition of existence of single-log solutions at large complex structure uniquely fix the privileged coordinate $t$-frame as \begin{equation} t_0=\log u_0,\;\;\;\;t_1=u_0^{\frac12} \left(u_1+2\right),\;\;\;\;t_2= u_0\left(\frac12 u_1^2+7 u_1+7 + u_2\right), \end{equation} in terms of which the structure constants read \begin{equation} \mathsf{C}_{0i}^j={\footnotesize \left( \begin{array}{ccc} \frac{6 t_2+18 {\rm e}^{t_0}-t_1^2+18 t_1 {\rm e}^{\frac{t_0}{2}}}{4} & \frac{2 t_2 t_1+12 {\rm e}^{\frac{t_0}{2}} \left(4 t_1^2+t_2\right)-3 t_1^3-54 {\rm e}^{t_0} t_1}{8} & \frac{4 t_2 t_1^2+6 {\rm e}^{\frac{t_0}{2}} \left(9 t_1^2+14 t_2\right) t_1-4 t_2^2+18 {\rm e}^{t_0} \left(3 t_1^2-4 t_2-3 t_1^4\right)}{8} \\ -3 {\rm e}^{\frac{t_0}{2}} & -t_1^2+\frac{9}{2} {\rm e}^{\frac{t_0}{2}} t_1+t_2 & {\rm e}^{\frac{t_0}{2}} \left(9 t_1^2+6 t_2\right)+\frac{1}{4} \left(2 t_1 t_2-3 t_1^3\right) \\ 1 & 0 & 0 \\ \end{array} \right)_i^j}, \end{equation} \begin{equation} \mathsf{C}_{1i}^j={\footnotesize \left( \begin{array}{ccc} -3 {\rm e}^{\frac{t_0}{2}} & -t_1^2+\frac{9}{2} {\rm e}^{\frac{t_0}{2}} t_1+t_2 & {\rm e}^{\frac{t_0}{2}} \left(9 t_1^2+6 t_2\right)+\frac{1}{4} \left(2 t_1 t_2-3 t_1^3\right) \\ 2 & -3 t_1 & -\frac{3 t_1^2}{2}+9 {\rm e}^{\frac{t_0}{2}} t_1-t_2 \\ 0 & 1 & 0 \\ \end{array} \right)_i^j, } \end{equation} and $\mathsf{C}_{2i}^j=\delta_i^j$. \medskip Unlike the twisted $B_2$ case, the spectral curve is not hyperelliptic here, and it is more difficult to come up with a partial differential equation in the moduli that is already satisfied at the level of differentials. To fix the prepotential uniquely, we shall first consider the full solution space of \eqref{eq:PF5prodgen}, without imposing any quasi-homogeneity condition of the form \eqref{eq:PF5Eulgen}. We find that the solutions thus obtained possess the following properties in the $z$-chart: \begin{itemize} \item all Taylor coefficients of $\mathcal{O}(z_0^0)$ in the expansion around $z=0$ are unambiguously fixed by \eqref{eq:PF5prodgen} alone; \item order by order in $z_0$, each solution comes with finitely many constant parameters that cannot be fixed purely by \eqref{eq:PF5prodz} due to the lack of the quasi-homogeneity equation \eqref{eq:PF5Eulz}, but imposing the existence of the prepotential in the form \eqref{eq:Fconstraint} \emph{uniquely} fixes such constant parameters. \end{itemize} Exactly as for the previous case of $\mathcal{G}=B_2$, the perturbative prepotential contains as expected a sum of trilogarithms indexed by positive roots, but the long-root contributions are weighted by a relative factor of $4/(\alpha, \alpha)^2=1/9$ compared to the short ones, in disagreement with the gauge theory calculation already at the perturbative level (for $R_5\neq 0$). The higher order corrections in $z_0=q_0$ are computed efficiently and in a similar manner: rather unsurprisingly at this point, they also differ from the blow-up formula calculation of \cite{Keller:2012da}.
We find \begin{eqnarray} -4\pi^3 \mathrm{i} \mathcal{F}_{G_2} &=& \frac{4}{3} \log ^3q_1+\frac{1}{2} \log q_0 \log ^2q_1+2 \log ^2q_1 \log q_2 +\frac{7}{6} \log q_1\log ^2q_2+\frac{7}{27} \log ^3q_2 \nn \\ &+& \frac{1}{6} \log q_0 \log ^2q_2+\frac{1}{2} \log q_0 \log q_2 \log q_1 \nn \\ &+& q_1+\frac{q_2}{9}+\frac{q_1^2}{8}+q_2 q_1+\frac{q_2^2}{72}+\frac{q_1^3}{27}+ q_1^2q_2+\frac{q_2^3}{243}+\frac{q_1^4}{64}+\frac{1}{9} q_1^3q_2+\frac{1}{8} q_1^2q_2^2+\frac{q_2^4}{576}+\frac{q_1^5}{125}+\dots \nn \\ &+& q_0 \bigg(q_1^4 q_2^2+2 q_1^5 q_2^2+3 q_1^6q_2^2 +2 q_1^5q_2^3+4 q_1^7q_2^2+6 q_1^6q_2^3+5 q_1^8q_2^2 +10 q_1^7q_2^3 +3 q_1^6q_2^4+\dots\bigg) \nn \\ &+& q_0^2 \bigg(\frac{1}{8} q_1^8 q_2^4+3 q_1^9 q_2^4+\frac{65}{4} q_1^{10}q_2^4+3 q_1^9q_2^5 +55 q_1^{11}q_2^4 +25 q_1^{10}q_2^5+\dots \bigg) +\mathcal{O}(q_0^3). \end{eqnarray} \section{Conclusion} \label{sec:conclusion} In this paper we formulated an algebraic construction of Picard--Fuchs D-modules for the periods of a class of B-model geometries related to $\mathcal{N}=1$ super Yang--Mills theory on $\mathbb{R}^4 \times S^1$, including non-unitary gauge groups/non-toric engineering geometries. Our proposal passes a host of fine checks involving explicit period integral calculations on the B-model, and/or indirect tests from multi-instanton calculations in the gauge theory/A-model. A first upshot of our study is a direct confirmation of the validity of various proposals in the literature arising from the theory of relativistic integrable systems (for simply-laced gauge groups) and from 5-brane web constructions (for classical groups): in doing so we also resolve a tension between two inequivalent proposals for symplectic gauge groups in \cite{Brandhuber:1997ua} and \cite{Li:2021rqr}, in favour of the latter. A second, surprising consequence is a disproof of the relevance of the twisted version of the affine relativistic Toda chain for non-simply-laced gauge symmetry: the corresponding spectral curves are shown {\it not} to reproduce the low-energy prepotential of the gauge theory for BCFG types, in sharp contrast with the well-established situation for pure $\mathcal{N}=2$ super Yang--Mills theory on $\mathbb{R}^4$. \medskip Our results moreover open several further avenues of investigation, which we defer to future work. We describe some of these below. \bit \item One obvious strand of implications is the possible generalisation to a non-trivial $\Omega$-background, possibly including surface operators. In particular, in the Nekrasov--Shatashvili limit \cite{Nekrasov:2009rc}, it would be fascinating to explore to what extent the commutative-algebra construction of \cref{sec:bcfg} of Picard--Fuchs systems from the ring of regular functions on the discriminant admits a canonical non-commutative quantisation: this aligns well with both the spirit of the quantum curve constructions of \cite{Aganagic:2011mi} and the fact that the NS twisted effective superpotential is known to be computable from an $\hbar$-deformation of the usual special geometry relations \cite{Mironov:2009uv, Mironov:2009dv}. While it is clear how to promote the Seiberg--Witten curve to a quantum finite-difference operator, the notion of a non-commutative discriminant ideal relevant for the constructions of \cref{sec:bcfg} is less obvious.
Moreover, the fact that the system \eqref{eq:PF5prodgen} is holonomic rests on the underlying ring structure on the discriminant being commutative, which guarantees that the structure constants $\mathsf{C}_{ij}^k$ can be viewed as Christoffel symbols of a torsion-free connection on the Coulomb branch: how this feature would persist in the noncommutative setting would need to be clarified. \item As far as the self-dual slice of the $\Omega$-background is concerned, the construction proposed here gives an effective technique for finding the mirror maps in the context of local mirror symmetry, and it would be interesting to combine it with the Chekhov--Eynard--Orantin topological recursion \cite{Chekhov:2005rr,Eynard:2007kz} to compute open and closed higher genus A-model amplitudes in the self-dual $\Omega$-background, as considered in \cite{Alday:2009fs,Brini:2010fc,Awata:2010bz,Kozcaz:2010af} in relation to the AGT correspondence with surface operators, and studied in detail in \cite{Aganagic:2002wv,Brini:2008ik,Borot:2014kda,Borot:2015fxa,Brini:2017gfi} for ADE gauge groups in relation to the large-$N$ duality with Chern--Simons theory on Seifert manifolds and related matrix models. A fascinating related development is the appearance of recursive structures for the self-dual theory arising from the non-autonomous Toda chain in four dimensions, as recently studied in \cite{Bonelli:2021rrg}, raising the prospect of a possible uplift to the 5d setup considered here. \item Our story took place wholly in five dimensions and without hypermultiplets. The absence of matter leads to a symmetry under the involution $\lambda \leftrightarrow q_0/\lambda$ which simplifies the task of determining the PF ideal to calculations on the reduced (perturbative) Seiberg--Witten curves $\{\widetilde{\mathsf{P}}_{\mathcal{G};u}=0\}$, as in \cite{Ito:1997ur}. The lack of such a symmetry may render the calculations more difficult in the case with non-trivial matter content, but it should be possible to extend at least the treatment of \cref{sec:bcfg} to these cases as well. Furthermore, the trigonometric Frobenius manifolds of \cref{sec:ade} have canonical elliptic deformations in terms of Frobenius structures on orbits of Jacobi groups, as in \cite{MR1775220,MR1797874}: it is only natural to conjecture that these would play for a 6D uplift of the theory the same role that their rational (Weyl) and trigonometric (extended affine Weyl) counterparts played for the four- and five-dimensional setups respectively. Note that for $\mathcal{G}=\mathrm{SU}(N)$ their almost-dual Frobenius manifolds \cite{MR2070050} are indeed already known to closely reproduce the six-dimensional perturbative gauge theory prepotential \cite{Marshakov:1997cj} in terms of Beilinson's elliptic polylogarithm, in exactly the same way as the almost-duals of their rational \cite{MR2070050} and trigonometric \cite{Stedman:2021wqs} limit Frobenius manifolds gave the 1-loop prepotential of the four- and five-dimensional gauge theories respectively. \item The surprising failure of the twisted affine relativistic Toda system to reproduce the low energy effective action of the theory with BCFG gauge symmetry raises several interesting questions. The first, obvious one is what the correct integrable model underlying the gauge theory would then be. This should be a relativistic deformation of the twisted affine Toda chain which is qualitatively different from the natural one considered here, obtained by Dynkin folding.
That would be similar to the way in which the integrable models underlying the $\mathrm{SU}(N)$ theory at different Chern--Simons levels provide inequivalent relativistic deformations of the periodic Toda chain, to which they however all reduce in the four-dimensional/non-relativistic limit \cite{Brini:2008rh,Eager:2011dp}. For classical groups, the dimer model strategy of \cite{Eager:2011dp} applied to the M-theory curves of \cite{Brandhuber:1997ua,Li:2021rqr} might be a promising avenue to make headway into this, and possibly hint at a solution encompassing exceptional non-simply-laced gauge groups as well. \item The reverse question is also of some non-trivial relevance, i.e.\ what is the physical and/or geometrical significance of the B-model on the local geometry associated to the spectral curves of the twisted affine relativistic Toda chain? Part of the answer is already known: these curves indeed arise via affine Dynkin folding on a sublocus of the K\"ahler moduli space of the topological A-model on ADE-orbifolds of the resolved conifold, where they have already been found {\it ante litteram} to describe (at small 't Hooft coupling/small K\"ahler volume) the large-$N$ limit of Chern--Simons theory on Seifert manifolds of positive curvature \cite{Borot:2015fxa}. So even though our results in \cref{sec:PFToda} disprove that the resulting special geometry prepotentials are the gauge theory prepotentials of the pure KK theory with $\mathcal{G}=\mathrm{BCFG}$, there is still room for a gauge theory angle on them. For example, although the $G_2$ twisted relativistic Toda curves do not describe the weak coupling expansion of the pure $G_2$ theory in 5d, they do sit as the Seiberg--Witten curves of the $E_6$ theory on a special 3-dimensional locus of its Coulomb branch \cite{Borot:2015fxa} determined by the folding procedure. It would nonetheless be interesting to give a more poignant characterisation of the large radius expansions of \cref{sec:PFToda}, whether intrinsically or in terms of a suitable deformation of the $\mathcal{N}=2$ KK theory with non-simply-laced gauge symmetry. \end{itemize}
\section*{Related work} Network robustness has been evaluated on many different networks and with many different goals. In \cite{callaway2000network}, the authors perform random and targeted node removal experiments on randomly generated networks with varying node degrees and other relevant parameters. The authors of \cite{wang2011modeling} perform simulations similar to \cite{callaway2000network}, but on a different network: a complex software network, which they showed to be more robust than random and scale-free networks to each type of simulated attack. A large number of robustness measures are tested on various graphs in \cite{ellens2013graph, liucomparative}, with some of the authors also re-evaluating the robustness measures after optimizing the graphs. Similarly, the authors of \cite{beygelzimer2005improving} looked at improving robustness through modest linkage alterations, since more invasive network changes are normally not feasible in real-life networks. In \cite{fortuna2011evolution} the authors analyze the package dependency network of the Debian GNU/Linux operating system. They showed that the modularity of the dependency network increased over time, driven by the aspiration to minimize project costs. Although modularity did not decrease the number of incompatibilities within the modules, it did increase the fraction of random packages working properly on a random computer. As shown in \cite{nguyen2010studying}, only a small subset of dependency network measures on the level of a programming language predicts the post-release failures of software. The authors of \cite{decan2018impact} analyzed the impact of security vulnerabilities in the npm package dependency network; their work focused on a concrete list of 400 security reports and how they affect the rest of the network. The authors of \cite{guillaume2004comparison} explore and compare the resilience of scale-free and randomly generated networks and propose two different attack strategies. \section*{Results} \subsection*{Effects of node failures on network robustness} The network robustness was tested in three different cases: failure of random nodes, hub nodes, and important nodes (according to the PageRank \cite{page1999pagerank} score). We treat a node failure as a security vulnerability or an error in the package that prevents it from working correctly. When a node fails, the failure spreads to the nodes that depend on it, propagating through ever deeper levels of dependencies until it reaches all of the connected dependent nodes. From figure (\ref{fig:robustness}) we can see that after the removal of only one node we have already lost 29.7\% of our network. We reach a 60\% network outage after the removal of 12 hubs and their corresponding dependents. After the removal of 130 hubs or important nodes, we reach a network outage of 80\%; after this point, the percentage of affected nodes increases more slowly until it stabilizes at around 90\% outage after the removal of around 8000 nodes. This means that only 10\% of the nodes are left in our network, and that they are weakly dependent or no longer dependent on each other, since removing further nodes no longer has a significant impact on the whole network. Figure (\ref{fig:robustness}) also suggests, at first glance, that our network is robust to random node failures; however, when we removed 10\% (78,233) of the nodes, we observed a network outage of 69\%.
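The failure model above is simple to express with standard graph tooling. The following is a minimal, illustrative \texttt{networkx} sketch (not the exact code used in our experiments); it assumes the network is a \texttt{networkx.DiGraph} with an edge from each package to its dependency, so that the packages affected by a failing node are exactly its ancestors in the graph.
\begin{verbatim}
import networkx as nx

def cascade(G, node):
    # Nodes lost when `node` fails: the node itself plus every package
    # that transitively depends on it. With edges pointing from a
    # package to its dependency, transitive dependents are ancestors.
    return {node} | nx.ancestors(G, node)

def simulate_attack(G, targets):
    # Remove targets one at a time, together with their dependents,
    # recording the cumulative fraction of the original network affected.
    H, n, affected = G.copy(), G.number_of_nodes(), []
    for t in targets:
        if t in H:
            H.remove_nodes_from(cascade(H, t))
        affected.append(1 - H.number_of_nodes() / n)
    return affected

# Hub attack: target nodes by decreasing in-degree, for example:
# hubs = [v for v, _ in sorted(G.in_degree, key=lambda kv: -kv[1])]
# outage_curve = simulate_attack(G, hubs[:50])
\end{verbatim}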
We also tested the robustness in terms of connectivity on the same set of targeted nodes. The difference is that connectivity is not measured by the fraction of failure-affected nodes but by the fraction of nodes in the largest connected component (LCC), and the failure is not propagated to the dependent nodes \cite{guillaume2004comparison}. Our whole network consists of a single LCC, but after the removal of 10\% of the hubs the LCC fraction drops to 3.8\%, and with the removal of 10\% of the PageRank-evaluated nodes our LCC is almost empty (0.0001\%). After the removal of 20\% of the hubs our LCC disintegrates, since it includes only 0.00001\% of the nodes, and our network is no longer connected. In contrast, after the removal of 20\% of the hub nodes on a random graph with the same properties as our network, the LCC was still intact; on the random graph, removing 50\% of the hubs lowers the fraction of the LCC by only 3.9\%, and removing the same number of PageRank-evaluated nodes lowers it by only 13.6\%. \begin{figure}[t]\centering% \includegraphics[width=\linewidth]{npm-graph-50} \caption{\textbf{Robustness of the npm network.} In the figure we can observe the robustness of the network during the simulated failure of 50 hub nodes, random nodes, and the highest PageRank-evaluated nodes. We can see that the network is not at all robust to the failure of hub and PageRank-evaluated nodes, since with the removal of only one node we observe a network outage of 29.7\%. After the removal of 30 hubs, the network outage starts to stabilize at around 90\% of affected nodes.} \label{fig:robustness} \end{figure} \subsection*{Network robustness evolution} The evolution of network robustness is first analyzed by comparing the average number of dependencies (out-degree) of all nodes with that of the 50 most important nodes determined by PageRank. In figure (\ref{fig:avg_outdegree}) we show the average values for each network snapshot from 2012 to 2021. We can see that in the beginning the average number of dependencies grew steadily, both for the whole network and for the top 50 packages. In the last few years the growth has slowed down, especially for the most important packages in the network. Looking directly at the values for the top 50 packages, most of them have 2 or fewer dependencies, with a few having 20 or more (e.g. the well-known web server package \textit{express}). \begin{figure}[h!]\centering% \includegraphics[width=\linewidth]{average_outdegree} \caption{\textbf{Average number of dependencies from 2012 to 2021.} In the figure we show the average out-degree (number of dependencies) of all packages and of the top 50 most important packages by PageRank. Values are shown for each of the yearly network snapshots. We can see that the number of dependencies has been increasing throughout the years, with the growth now slowing down. The average number of dependencies of the top packages dropped below the average of the whole network for the first time in 2021.} \label{fig:avg_outdegree} \end{figure} We also analyze the percentage of the network that depends on the 100 most important packages by PageRank through time. In figure (\ref{fig:perc_network accessible}) we show how the fraction of the affected network changes from 2012 to 2021. In the beginning, the percentage increased steadily until 2015, after which it started dropping.
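A minimal sketch of this per-snapshot measurement (illustrative only; \texttt{G} is a snapshot as a \texttt{networkx.DiGraph} with edges pointing from a package to its dependency, and \texttt{nx.ancestors} plays the role of the depth-first search over reversed edges):
\begin{verbatim}
import networkx as nx

def avg_dependent_fraction(G, k=100):
    # Average, over the k highest-PageRank packages, of the fraction
    # of the network that transitively depends on each of them.
    pr = nx.pagerank(G)
    top = sorted(pr, key=pr.get, reverse=True)[:k]
    n = G.number_of_nodes()
    return sum(len(nx.ancestors(G, t)) / n for t in top) / k
\end{verbatim}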
\begin{figure}[h!]\centering% \includegraphics[width=\linewidth]{percentage_network_accessible} \caption{\textbf{Average percentage of the network that depends on the 100 most important packages by PageRank.} In the figure we show what percentage of the network is affected by the most important packages. We can see that the influence of the most important packages was increasing until 2015, after which it started gradually dropping.} \label{fig:perc_network accessible} \end{figure} \subsection*{Community formation around core packages} When calculating the intersection ratio between popular package communities and their neighborhoods, we found that 2-step neighborhoods provided the best ratios between the fraction of the intersection in the community and the fraction in the node neighborhood. Two examples that stand out are the packages \textit{vue} and \textit{react}, as the intersection covers roughly half or more of both the community and neighborhood sub-graphs, as seen in table (\ref{tbl:fractions}) and figure (\ref{fig:vue_comm_neigh}). Both of these packages, like many others near the top of the PageRank ranking, have few or no dependencies, or depend mostly on their own modules. Out of the 20 tested top-ranked packages according to PageRank, only three had over 10 dependencies, while most had 3 or fewer. Out of the tested packages, $6$ had a community fraction above $40\%$, with the number increasing to $9$ if we lower the bar to $30\%$. When looking at neighborhood fractions, $9$ had a fraction above $40\%$, with the number increasing to $16$ if we lower the bar to $30\%$. \begin{table}[h!]\centering% \caption{Intersection fractions between communities and popular package neighborhoods.} \begin{tabular}{lccc}\toprule & Frac. of community & Frac. of neighborhood & Dependencies \\\midrule vue & $49.1\%$ & $67.1\%$ & $0$ \\ react & $48.5\%$ & $58.1\%$ & $2$ \\\bottomrule \end{tabular} \label{tbl:fractions} \end{table} \section*{Discussion} From figure (\ref{fig:robustness}) we can see that our network is not robust at all to failures of hubs and of nodes ranked highly by PageRank, since after the removal of only one node we have already lost almost one-third of our network. Moreover, our network experiences an outage of 90\% after only 1\% of the hubs are targeted by a malicious attack. We observed that the removal of hubs and the removal of very important nodes have almost the same impact on the network, with the impact of PageRank-evaluated nodes being slightly higher (the average difference between them is 0.004\%, based on 78,233 calculations). The reason behind this is that hubs have high in-degrees and are therefore highly ranked by PageRank, as the algorithm gives a higher value to nodes that are depended on frequently or are depended on by other high-ranked nodes. With the removal of 10\% of randomly selected nodes and their dependents, we observed a large network outage of 69\%. The reason is that our network is very sensitive to the removal of hub nodes: as long as none of the randomly selected nodes is a hub, the network looks highly robust to random node failures, but as soon as a hub is hit we observe large spikes in node outages. Because of that, we cannot really say that our network is robust to random node failures. In terms of connectivity robustness, our network is also not robust at all, as its connectivity collapses after the removal of only 10\% of the PageRank-evaluated nodes.
We used the comparison with the random graph to check that this behavior of our network is not accidental but an actual consequence of its structure. This behavior is expected, and the reason behind our network's rapid disintegration is that it follows a power-law in-degree distribution. With a power-law distribution, the higher the degree, the lower the probability of a node with such a degree occurring, so the number of high-degree nodes is low. By targeting these high-degree nodes, their numbers are quickly reduced and the network loses a lot of its connectedness and therefore collapses. Results of the network robustness evolution show that the average number of package dependencies has been increasing since the creation of the package manager, with the growth now starting to slow down. This corresponds to the initial trend in web software development, where using packages cut down costs and sped up development. In the last few years there has been a growing trend of zero-dependency packages, which reduces security risks, although at the cost of longer development. The latter is especially visible in the most important packages, which tend to be famous web frameworks and the libraries that are often used with them. The communities that form around them and continue their development seem to be lowering the number of dependencies, which can be seen in figure (\ref{fig:avg_outdegree}) in the decline from 2020 to 2021. When analyzing the percentage of the network affected by the most important packages, we see that the trend is initially increasing, then declining after 2015. On average, the extent of the potential security risk posed by an important package to the network is decreasing, since a continually smaller part of the network has depended on such packages in the past few years. This not only means that important packages are being used less, but also that overall dependence on packages down the line is decreasing. Such a trend means a positive change in the robustness of the network. As assumed, popular package neighborhoods are partially contained in communities detected by the Louvain algorithm. We can see that large parts of popular package neighborhoods are part of their respective communities, which sometimes intersect each other. With community packages depending on core packages or specialized core packages, networks are divided according to the framework they belong to or are made for. It would make little to no sense for a package dependent on \textit{vue} to also depend on \textit{react}, since they represent competing web development packages. By following the zero-dependency paradigm, these popular packages reduce the impact of outages, especially if they keep their deployment secure from attacks and unintended errors. Community detection was done using the Louvain algorithm, which might not be perfect for our network structure. The algorithm optimizes modularity, which means that it searches for clusters that have many connections between the nodes within a cluster and few connections between clusters. Such a structure does not exactly represent the communities in the npm dependency network, which mostly consists of dependency trees, without many connections between the packages that depend on the same package. This is because packages normally implement a limited functionality or extend the functionality of the package they depend on. Such packages in most cases don't depend on each other, since adding other functionalities is not their goal.
Because of this, the detected clusters are not optimal, but they do partially show the communities around packages. We also tried other community detection algorithms, but could not obtain results due to memory limitations caused by the rather large network size. Overall, we show that the current state of the network is fragile, since security issues introduced into a few of the most important packages affect a large portion of the network. This is probably not alarming, since these packages are often backed by companies and, with a large community of professionals behind them, are not likely to contain serious problems. The current trend in the robustness of the network is positive, since the average number of dependencies is stabilizing and even dropping for the most important packages. The latter also affect a continually smaller part of the whole network, which likewise brings improvements in terms of security. We suggest that developers of npm packages keep the number of dependencies to a minimum or mostly use well-known packages backed by a serious community. When specifying dependencies, they should pin exact versions and use version-locking mechanisms, which prevent unnoticed automatic updates to newer versions. Updates should be done manually, checking what has changed and making sure that the updated versions have passed security analysis. \section*{Methods} \subsection*{Data} The npm dependency network was extracted and parsed from the official npm package registry \cite{npm_reg}. Each node represents a single npm package, and an edge between two nodes represents the one-way dependency between them, with a link from package A to package B if A depends on B; the network is therefore directed. In its raw form, it contains 1,741,751 nodes and 3,594,421 edges. Since a large fraction of the nodes are isolated from the main network, we decided to use the largest weakly connected component, which contains 782,332 nodes and 2,572,892 edges. The removed nodes are mostly independent islands and do not change the structure of the main network, which is what we analyze. The average degree of our network is 4.567, and the network's in-degree distribution follows a power law, similarly to \cite{fortuna2011evolution}. \subsection*{Approaches} \subsubsection*{Effects of node failures on network robustness} To test the effects of node failures on network robustness, we use three different types of failure simulations. Random node failures represent random errors, while hub and PageRank-evaluated node failures represent targeted attacks. In every iteration of our simulated node failure we choose the targeted node randomly, by in-degree, or by PageRank score. We then create a sub-graph around this node to find all of the nodes that depend on it. After that, we remove this sub-graph from our original network and measure the fraction of affected nodes. With this, we measure the impact of a single node on the whole network. We repeat this procedure until we have removed 10\% of the targeted nodes. For the connectivity robustness of our network, we use a similar approach; the only differences are that we remove nodes in batches and that a node's failure does not spread to its dependents. After the removal of each batch of targeted nodes, we compute the fraction of nodes in the LCC. The same procedure is then repeated on a randomly generated graph with the same number of nodes and edges as our original graph.
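A minimal illustrative sketch of the connectivity measurement (again assuming a \texttt{networkx.DiGraph}; the commented-out baseline uses a $G(n,m)$ random graph as a stand-in for ``a randomly generated graph with the same number of nodes and edges''):
\begin{verbatim}
import networkx as nx

def lcc_curve(G, targets, batch=1000):
    # Remove targets in batches (no failure propagation) and track the
    # fraction of the original node count remaining in the largest
    # weakly connected component.
    H, n, curve = G.copy(), G.number_of_nodes(), []
    for i in range(0, len(targets), batch):
        H.remove_nodes_from(targets[i:i + batch])
        lcc = max(nx.weakly_connected_components(H), key=len,
                  default=set())
        curve.append(len(lcc) / n)
    return curve

# Random baseline with matching size:
# R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(),
#                         directed=True)
\end{verbatim}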
\subsubsection*{Network robustness evolution}
We analyze the evolution of network robustness by first creating network snapshots. The data obtained from the npm registry contain version history and release times, which enable us to extract the network structure at a given point in time. Using this approach, we create ten networks ranging from 2012 to 2021, taking version updates into account. If a package has no new versions in a given year, we use its last existing version.

We first calculate the average out-degree of all nodes in the network for each year. This shows how the average number of dependencies changes over time. Since guidelines and trends favor packages with as few dependencies as possible, we assume this value would drop. Furthermore, we calculate the average out-degree of the 50 packages with the highest PageRank score in each snapshot. This way, we compare the average number of dependencies (out-degree) of packages across the network with that of the most important packages.

In our second approach, we analyze the effect of security issues in the most important packages on the rest of the network. We again calculate the PageRank values for each network snapshot and compute the effect of the 100 nodes with the highest PageRank score. The number of top nodes is fixed at 100 so that we always use only the most important packages in the network; if we used a percentage of the current network size, the calculations would be distorted, since the network does not grow at the same speed as the number of important packages. We then calculate the percentage of nodes in the network that depend on these top packages by running a depth-first search from each of them. The percentages are averaged for each snapshot.

\subsubsection*{Community formation around core packages}
For detecting communities, we use the Louvain algorithm. For this purpose, we treat our graph as undirected, since Louvain does not accept directed graphs. To see whether communities indeed form around popular packages, we take the top 20 packages ranked by PageRank and extract sub-graphs of distances 1, 2, and 3 around these nodes. One step covers the packages that depend directly on a core package; since the dependence is direct, it would make sense for these packages to be in the same community as the core package. Two steps cover either the core package and its extended dependents, or the core package with specialized core packages along with their dependents. As all of these packages are still relatively close or directly connected to the core, their inclusion in the same community would also make sense. Three steps cover the core, the specialized core, and the extended dependent packages of a root package. Since three steps already span a large portion of the network, we did not expect a large overlap with the communities, but tested it in case our expectations were wrong. We then take the community of the root node, extract it from the whole graph, and look at the intersection between the community sub-graph and the package neighborhood sub-graph. From the sizes of these node subsets we compute the fraction of the community and of the node neighborhood that each overlap represents.
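The reach of the most important packages can be sketched as follows (again assuming networkx; \texttt{nx.ancestors} performs the graph search for all packages with a directed path to a target, i.e. all packages depending on it):

\begin{verbatim}
import networkx as nx

def average_reach_of_top_packages(G, k=100):
    """Average fraction of a snapshot that depends, directly or
    transitively, on one of its k highest-PageRank packages."""
    pr = nx.pagerank(G)
    top = sorted(pr, key=pr.get, reverse=True)[:k]
    n = G.number_of_nodes()
    return sum(len(nx.ancestors(G, p)) / n for p in top) / k
\end{verbatim}

\section*{References}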
\section{CONCLUSION} \label{sec:con} In this paper, an AFA-graph is proposed to obtain good results for natural image segmentation. The method uses superpixels of different scales as segmentation primitives. To reduce the noise, we introduce an IKDE method to estimate the mLab features of superpixels. In addition, SPR and APC are applied to select global node sets, which are used to build a NOLRR-graph that updates the adjacency-graph of superpixels at each scale. Experimental results show the good performance and high efficiency of the proposed AFA-graph. We also compare our AFA-graph with the state-of-the-art methods, and it achieves competitive results on five benchmark datasets, namely BSD300, BSD500, MSRC, SBD, and PASCAL VOC. In the future, we plan to extend the proposed framework to include more discriminating features such as texture, shape, and priors to obtain better results. We will also explore the combination of graph-based and deep unsupervised learning to improve image segmentation performance.
\section{EXPERIMENTS AND ANALYSIS} To verify the performance of the proposed AFA-graph, we first introduce the experimental setup and then show the results of our approach and its variations. We then compare our method with the state-of-the-art methods on different datasets. Finally, we present a time complexity analysis of our AFA-graph.
\subsection{Experimental Setup} \subsubsection{Databases} All experiments are carried out on the following publicly available databases: the Berkeley Segmentation Database (BSD)~\cite{martin2001database}, the Microsoft Research Cambridge (MSRC) database~\cite{Shotton2006TextonBoost}, the Stanford Background Dataset (SBD)~\cite{5459211}, and the PASCAL visual object classes 2012 (VOC) dataset~\cite{Everingham10}.
\begin{itemize}
\item BSD300: The BSD300 includes 300 natural images and the ground truth data (each image has about 5 human annotations). Each image has a fixed size of 481$\times$321 pixels.
\item BSD500: As an extended version of the BSD300 dataset, the BSD500 contains 500 natural images. Each image is annotated by 5 different people on average.
\item MSRC: The MSRC contains 591 images and 23 object classes with accurate pixel-wise labeled images. The performance is evaluated using the clean ground-truth object instance labeling of~\cite{Malisiewicz2007Improving}.
\item SBD: The SBD contains 715 images of urban and rural scenes with 8 classes. Each image is approximately 240$\times$320 pixels and contains at least one foreground object. The `layers', `regions', and `surfaces' are all used as ground truth.
\item PASCAL VOC: The segmentation challenge of PASCAL VOC12 includes 2913 images (1464 images in the train set and 1449 images in the val set) with annotated objects of 20 categories. Following the standard setting, we use the val set to test the final performance.
\end{itemize}
\subsubsection{Metrics} To evaluate our method, we adopt four standard measures: the probabilistic rand index (PRI)~\cite{unnikrishnan2007toward}, the variation of information (VoI)~\cite{meilua2007comparing}, the global consistency error (GCE)~\cite{martin2001database}, and the boundary displacement error (BDE)~\cite{freixenet2002yet}. The closer the segmentation result is to the ground truth, the higher the PRI and the smaller the other three measures (VoI, GCE, and BDE). All experiments are conducted on a PC with a 3.40 GHz Intel Xeon E5-2643 v4 processor and 64 GB RAM, running Ubuntu 16.04 and Matlab 2018a.
\subsection{Performance analysis for framework} \label{subsec:different modules} \subsubsection{Influence of parameter $\alpha$ in IKDE} To analyze the robustness of our AFA-graph, we first explore the influence of the exponential factor $\alpha$ in IKDE. As shown in Figs.~\ref{fig:alphainfluence}(a)-(d), we report the PRI, VoI, GCE, and BDE indexes for different values of $\alpha$ on the BSD300 dataset. From Fig.~\ref{fig:alphainfluence}, we find that the best performance is obtained with $\alpha=1$: the PRI is the highest, and the other three indexes are close to their best values.
\begin{figure}[!t] \centering \subfloat[]{ \includegraphics[width=1.65in]{figures/7-1.jpg} } \subfloat[]{ \includegraphics[width=1.65in]{figures/7-2.jpg} } \vspace{-0.5em} \subfloat[]{ \includegraphics[width=1.65in]{figures/7-3.jpg} } \subfloat[]{ \includegraphics[width=1.65in]{figures/7-4.jpg} } \caption{Influence of $\alpha$ in IKDE on the BSD300 dataset. To accurately describe the influence of $\alpha$ on performance, the values of the four indexes are reported to four decimal places. (a) PRI. (b) VoI. (c) GCE. (d) BDE.} \label{fig:alphainfluence} \end{figure}
\subsubsection{Influence of parameters $e$ and $g$ in APC} We then explore the influence of the parameters $e$ and $g$ in APC. We report the results for different $e$ and $g$ on the BSD300 dataset in Figs.~\ref{fig:APCinfluence}(a)-(d). From the results, we find that the best performance is obtained with $e=3$ and $g=5$, and that our method is robust to variations of $e$ and $g$.
\begin{figure}[!t] \centering \subfloat[]{ \includegraphics[width=1.65in]{figures/9-1.jpg} } \subfloat[]{ \includegraphics[width=1.65in]{figures/9-2.jpg} } \vspace{-0.5em} \subfloat[]{ \includegraphics[width=1.65in]{figures/9-3.jpg} } \subfloat[]{ \includegraphics[width=1.65in]{figures/9-4.jpg} } \caption{Influence of $e$ and $g$ in APC on the BSD300 dataset. (a) PRI. (b) VoI. (c) GCE. (d) BDE. Our method is robust to parameter variations.} \label{fig:APCinfluence} \end{figure}
\subsubsection{Influence of parameter $d$ in NOLRR} We also explore the influence of the selected rank $d$ in the proposed NOLRR (\textbf{Algorithm~\ref{alg:olrrgraph}}). As shown in Figs.~\ref{fig:dinfluence}(a)-(d), the proposed NOLRR is likewise robust to variations of this parameter. The PRI is the highest and the other three indexes are close to their best values at $d=50$, so the best performance is obtained with this setting.
\begin{figure}[!t] \centering \subfloat[]{ \includegraphics[width=1.65in]{figures/8-1.jpg} } \subfloat[]{ \includegraphics[width=1.65in]{figures/8-2.jpg} } \vspace{-0.5em} \subfloat[]{ \includegraphics[width=1.65in]{figures/8-3.jpg} } \subfloat[]{ \includegraphics[width=1.65in]{figures/8-4.jpg} } \caption{Influence of $d$ in NOLRR on the BSD300 dataset. To accurately describe the influence of $d$ on performance, the values of the four indexes are reported to four decimal places. (a) PRI. (b) VoI. (c) GCE. (d) BDE. Our method is robust to parameter variations.} \label{fig:dinfluence} \end{figure}
\subsubsection{Denoise methods} As suggested, we have conducted experiments to explore the influence of noise on accuracy. The results on BSD300 are listed in Table~\ref{different denoise}.
Gaussian means that we perform Gaussian filtering on images or features, as done by SFFCM~\cite{Lei2018fuzzy}; the size of the filter is [5, 5], and sigma is set to 1. Bilateral means that we perform shiftable bilateral filtering on images or features, as done by CCP~\cite{7410546}; the size of the filter is [5, 5], and the widths of both the spatial and range Gaussians are set to 5. For IKDE, the exponential factor $\alpha$ is set to 1. We observe that IKDE achieves the best results on all metrics. In Fig.~\ref{fig:graphs1}, the results also show that filtering on images performs worse than filtering on features. Filtering on images may disturb the neighborhood topology between pixels, which is the core of the constructed affinity graph.
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed framework with different denoising methods on the BSD300 dataset. We perform Gaussian, bilateral, and our IKDE filtering on images or features.} \label{different denoise} \begin{tabular}{lcccccc} \toprule Methods & Images & Features & PRI $\uparrow$ & VoI $\downarrow$ & GCE $\downarrow$ & BDE $\downarrow$ \\ \midrule Gaussian & $\surd$ & & 0.81 & 2.22 & 0.24 & 18.56 \\ Gaussian & & $\surd$ & 0.84 & 1.76 & 0.20 & 15.71 \\ Bilateral & $\surd$ & & 0.81 & 2.20 & 0.24 & 19.17 \\ Bilateral & & $\surd$ & 0.84 & 1.68 & 0.18 & 15.26 \\ IKDE & $\surd$ & & 0.81 & 2.22 & 0.25 & 19.17 \\ IKDE & & $\surd$ & 0.84 & 1.67 & 0.19 & 14.72 \\ \bottomrule \end{tabular} \end{table}
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed NOLRR-graph with different basic graphs using mLab features on the BSD300 dataset. A-graph means adjacency-graph.} \label{different visual fea} \begin{tabular}{lccccc} \toprule Methods & IKDE & PRI $\uparrow$ & VoI $\downarrow$ & GCE $\downarrow$ & BDE $\downarrow$ \\ \midrule A-graph & & 0.83 & 1.75 & 0.18 & 15.02 \\ A-graph & $\surd$ & 0.83 & 1.75 & 0.18 & 14.94 \\ NOLRR-graph & & 0.81 & 2.32 & 0.25 & 17.57 \\ NOLRR-graph & $\surd$ & 0.81 & 2.34 & 0.24 & 17.42 \\ A- + NOLRR-graph & & 0.84 & 1.68 & 0.19 & 14.74 \\ A- + NOLRR-graph & $\surd$ & 0.84 & 1.67 & 0.19 & 14.72 \\ \bottomrule \end{tabular} \end{table}
\begin{figure}[!t] \centering \includegraphics[width=0.84in]{figures/5-1-0.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-1-1.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-1-2.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-1-3.jpg} \vspace{0.2em} \includegraphics[width=0.84in]{figures/5-2-0.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-2-1.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-2-2.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-2-3.jpg} \vspace{0.2em} \includegraphics[width=0.84in]{figures/5-3-0.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-3-1.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-3-2.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-3-3.jpg} \vspace{0.2em} \includegraphics[width=0.84in]{figures/5-4-0.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-4-1.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-4-2.jpg}\hspace{-0.1em} \includegraphics[width=0.84in]{figures/5-4-3.jpg} \caption{Visual comparison on the BSD300 dataset obtained by our AFA-graph. From left to right: input images, the results of AFA-graph w/o IKDE, with IKDE on images, and with IKDE on features.
The results with IKDE on features are visually better and often more accurate.} \label{fig:graphs1} \end{figure}
\subsubsection{Graphs} To compare the proposed AFA-graph with other graph constructions, we list the performance of different graphs in Table~\ref{different visual fea}. The setting of scale $k$ is the same as for the adjacency-graph~\cite{li2012segmentation}. To achieve the optimal performance of these graphs, we follow the procedure stated in~\cite{li2012segmentation} and~\cite{wang2015global} and manually select the best group number in Tcut~\cite{li2012segmentation}, ranging from 1 to 40. In the adjacency-graph, the standard deviation of the Gaussian kernel function is set to 20. We construct the NOLRR-graph with the parameter $d$ set to 50. Our method is highly efficient and adapts well to the combination of the local graph (adjacency-graph) and the global graph (NOLRR-graph); it achieves the best performance in comparison with the adjacency-graph and the NOLRR-graph alone. As shown in Fig.~\ref{fig:graphs2}, the adjacency-graph considers only local structure, easily leading to wrong segmentations when objects cover a large part of the image. The NOLRR-graph often produces a dense graph, which cannot satisfy the sparsity of the desired graph. Our AFA-graph combines the two affinity graphs, assimilating their advantages to further improve segmentation performance.
\begin{figure*}[!t] \centering \includegraphics[height=0.77in]{figures/6-1-1.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-1-2.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-1-3.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-1-6.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-1-7.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-1-10.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-1-11.jpg} \vspace{0.3em} \includegraphics[height=0.77in]{figures/6-2-1.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-2-2.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-2-3.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-2-6.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-2-7.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-2-10.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-2-11.jpg} \vspace{0.3em} \includegraphics[height=0.77in]{figures/6-3-1.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-3-2.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-3-3.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-3-6.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-3-7.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-3-10.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-3-11.jpg} \vspace{0.3em} \includegraphics[height=0.77in]{figures/6-4-1.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-4-2.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-4-3.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-4-6.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-4-7.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-4-10.jpg}\hspace{-0.1em} \includegraphics[height=0.77in]{figures/6-4-11.jpg} \caption{Visual comparison on the BSD300 dataset obtained by different affinity graphs. From left to right: input superpixels, the A-graph with its segmentation results, the NOLRR-graph with its results, and our AFA-graph (A- + NOLRR-graph) with its results.
Our AFA-graph assimilates the advantages of both graphs to further improve segmentation performance.} \label{fig:graphs2} \end{figure*}
\subsubsection{Different modules} To see how AFA-graph is affected by different modules, we report the scores of combinations of the different modules used in the baseline. The baseline simply combines the adjacency-graph and the NOLRR-graph by an element-wise sum. Area denotes the node selection method from GL-graph~\cite{wang2015global}. SPR and APC are utilized to select global nodes. If the baseline does not use APC, we use \emph{k}-means with $k=2$ for a fair comparison. The results of the different modules used in the baseline are shown in Table~\ref{different modules}. They show that the segmentation performance of APC with SPR is better than that of Area and \emph{k}-means; the SPR therefore represents superpixels better than the area does. Our method is thus suitable for selecting global nodes when nothing is known about the feature distribution of the superpixels beforehand.
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative comparison of the different methods used in our AFA-graph on the BSD300 dataset. The baseline simply combines the adjacency-graph and the NOLRR-graph by an element-wise sum.} \label{different modules} \begin{tabular}{lcccc} \toprule Methods & PRI $\uparrow$ & VoI $\downarrow$ & GCE $\downarrow$ & BDE $\downarrow$ \\ \midrule Baseline & 0.80 & 2.46 & 0.26 & 15.96 \\ Baseline+IKDE & 0.80 & 2.40 & 0.27 & 15.60 \\ Baseline+IKDE+Area & 0.84 & 1.69 & 0.21 & 14.95 \\ Baseline+IKDE+\emph{k}-means & 0.84 & 1.69 & 0.19 & 15.01 \\ Baseline+IKDE+\emph{k}-means+SPR & 0.84 & 1.68 & 0.19 & 14.94 \\ Baseline+IKDE+APC+SPR & 0.84 & 1.67 & 0.19 & 14.72 \\ \bottomrule \end{tabular} \end{table}
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed framework with state-of-the-art approaches on the BSD300 dataset. We directly take their evaluations reported in publications for fair comparison. The 'C' in class means clustering-based methods.
The 'G' in category means graph-based methods.} \label{tab:BSDS300} \begin{tabular}{lccccc} \toprule Methods &Class & $\textrm{PRI}\uparrow$ & $\textrm{VoI}\downarrow$ & $\textrm{GCE}\downarrow$ & $\textrm{BDE}\downarrow$ \\ \midrule FCM~\cite{Lei2018fuzzy} & C & 0.74 & 2.87 & 0.41 & 13.78 \\ FRFCM~\cite{8265186} & C & 0.75 & 2.62 & 0.36 & 12.87 \\ MS~\cite{comaniciu2002mean} & C & 0.76 & 2.48 & 0.26 & 9.70 \\ SFFCM~\cite{Lei2018fuzzy} & C & 0.78 & 2.02 & 0.26 & 12.90 \\ FCR~\cite{4480125} & C & 0.79 & 2.30 & 0.21 & \textbf{8.99} \\ H\_+R\_Better~\cite{li2018iterative} & C & 0.81 & 1.83 & 0.21 & 12.16 \\ Corr-Cluster~\cite{Kim2013Task} & C & 0.81 & 1.83 & -- & 11.19 \\ HO-CC~\cite{nowozin2014image} & C & 0.81 & 1.74 & -- & 10.38 \\ \midrule FH~\cite{Felzenszwalb2004Efficient} & G & 0.71 & 3.39 & 0.17 & 16.67 \\ Ncut~\cite{shi2000normalized} & G & 0.72 & 2.91 & 0.22 & 17.15 \\ MNCut~\cite{cour2005spectral} & G & 0.76 & 2.47 & 0.19 & 15.10 \\ LFPA~\cite{Tae2013Learning} & G & 0.81 & 1.85 & 0.18 & 12.21 \\ SAS~\cite{li2012segmentation} & G & 0.83 & 1.68 & 0.18 & 11.29 \\ $\ell_0$-Graph~\cite{wang2013graph} & G & 0.84 & 1.99 & 0.23 & 11.19 \\ GL-Graph~\cite{wang2015global} & G & 0.84 & 1.80 & 0.19 & 10.66 \\ AASP-Graph~\cite{zhang2019aaspgraph} & G & 0.84 & 1.65 & \textbf{0.17} & 14.64 \\ AFA-Graph (Gaussian) & G & 0.84 & 1.76 & 0.20 & 15.71 \\ AFA-Graph (Bilateral) & G & 0.84 & 1.68 & 0.18 & 15.26 \\ AFA-Graph (IKDE) & G & \textbf{0.84} & \textbf{1.65} & 0.18 & 15.00 \\ \bottomrule \end{tabular} \end{table} \begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed framework with state-of-the-art approaches on BSD500 dataset. We directly take their evaluations reported in publications for fair comparison.} \label{tab:BSDS500} \begin{tabular}{lccccc} \toprule Methods &Class & $\textrm{PRI}\uparrow$ & $\textrm{VoI}\downarrow$ & $\textrm{GCE}\downarrow$ & $\textrm{BDE}\downarrow$ \\ \midrule DSFCM~\cite{8543645} & C & 0.74 & 2.90 & 0.41 & -- \\ FCM~\cite{Lei2018fuzzy} & C & 0.74 & 2.88 & 0.40 & 13.48 \\ MSFCM~\cite{9120181} & C & 0.74 & 2.85 & 0.40 & -- \\ FRFCM~\cite{8265186} & C & 0.76 & 2.67 & 0.37 & 12.35 \\ HS~\cite{Wu20} & C & 0.76 & 2.39 & 0.26 & 14.03 \\ HO-CC~\cite{nowozin2014image} & C & 0.83 & 1.79 & -- & \textbf{9.77} \\ AFC~\cite{8770118} & C & 0.76 & 2.05 & 0.22 & 12.95 \\ RSFFC~\cite{9162644} & C & 0.78 & 2.12 & 0.28 & -- \\ SFFCM~\cite{Lei2018fuzzy} & C & 0.78 & 2.06 & 0.26 & 12.80 \\ MS~\cite{comaniciu2002mean} & C & 0.79 & 1.85 & 0.26 & -- \\ \midrule MNCut~\cite{cour2005spectral} & G & 0.76 & 2.33 & -- & -- \\ FH~\cite{Felzenszwalb2004Efficient} & G & 0.79 & 2.16 & -- & -- \\ SAS~\cite{li2012segmentation} & G & 0.80 & 1.92 & -- & -- \\ RAG~\cite{7484679} & G & 0.81 & 1.98 & -- & -- \\ $\ell_0$-Graph~\cite{wang2013graph} & G & 0.84 & 2.08 & 0.23 & 11.07 \\ AASP-Graph~\cite{zhang2019aaspgraph} & G & 0.84 & 1.71 & \textbf{0.18} & 13.78 \\ AFA-Graph (Gaussian) & G & 0.83 & 1.80 & 0.20 & 15.18 \\ AFA-Graph (Bilateral) & G & 0.84 & 1.72 & 0.19 & 14.41 \\ AFA-Graph (IKDE) & G & \textbf{0.84} & \textbf{1.70} & 0.19 & 14.14 \\ \bottomrule \end{tabular} \end{table} \subsection{Comparison with state-of-the-art methods} We also report quantitative comparisons with the state-of-the-art methods. The comparison results on BSD300, BSD500, MSRC, SBD, and PASCAL VOC datasets are shown in Table~\ref{tab:BSDS300} to Table~\ref{tab:VOC}, respectively. We directly take their evaluations reported in publications for fair comparison. 
It can be noticed that our AFA-graph performs better than the state-of-the-art methods. We attribute this to the adaptive combination of different graphs, which helps to accurately separate the foreground and background. Our method follows a similar, but not identical, strategy to SAS, $\ell_0$-graph, and GL-graph. Instead of using only the adjacent neighborhoods of superpixels (SAS) or only an $\ell_0$ affinity graph of superpixels ($\ell_0$-graph), we build an AFA-graph by combining the NOLRR-graph and the adjacency-graph, which gives the constructed graph a long-range neighborhood topology together with sparsity and high discriminating power. The main differences between GL-graph and our method are global node selection and graph construction. In GL-graph, the superpixels are simply divided into three parts (small, medium, and large) according to their area. For the small and large sets, all superpixels connect to their adjacent neighbors, while the medium set is used to build an $\ell_0$-graph. However, according to the analysis of SPR and area in Section~\ref{node_selection}, the area does not accurately reflect the distribution of global nodes among superpixels. In our method, the adjacency-graph of all superpixels is updated by the NOLRR-graph, which is built through NOLRR from the global nodes selected by SPR and APC.

Figure~\ref{compsas} shows various segmentation results obtained with SAS, $\ell_0$-graph, GL-graph, and our AFA-graph. The results of the other three methods are the best results reported by their authors. It can be seen that our method achieves a desirable result with less tuning of the group number $k_{T}$ in Tcut (for the ostrich, $k_{T}=2$), because it accurately takes global information into account. In particular, compared with the other three methods, our method obtains an accurate segmentation even in difficult cases where: i) the detected object is highly textured and the background may be highly unstructured, such as the coral, surfer, and skier; ii) objects of the same type appear in a large and fractured area of the image, such as the ostrich, zebras, and racecars. However, the BDE of our method is unsatisfactory. As shown in Fig.~\ref{fig:falseressult}, our method cannot obtain accurate, clear boundaries when the detected object is tiny and its texture is easily confused with the background. The main reason is that our AFA-graph uses only pixel color information, which fails to capture enough contour and texture cues for segmentation.
\begin{figure*}[!t] \centering \includegraphics[width=7in]{figures/10.jpg} \caption{Visual comparison on the BSD300 dataset obtained with SAS~\cite{li2012segmentation}, $\ell_0$-graph~\cite{wang2013graph}, GL-graph~\cite{wang2015global}, and our AFA-graph. Two columns of the comparison results are shown here. From left to right: input images, the results of SAS, $\ell_0$-graph, GL-graph, and AFA-graph. The results of AFA-graph are visually better, in particular often more accurate.} \label{compsas} \end{figure*}
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed framework with state-of-the-art approaches on the MSRC dataset.
We directly take their evaluations reported in publications for fair comparison.} \label{tab:MSRC} \begin{tabular}{lccccc} \toprule Methods &Class & $\textrm{PRI}\uparrow$ & $\textrm{VoI}\downarrow$ & $\textrm{GCE}\downarrow$ & $\textrm{BDE}\downarrow$ \\ \midrule MSFCM~\cite{9120181} & C & 0.68 & 1.80 & 0.30 & -- \\ DSFCM~\cite{8543645} & C & 0.69 & 1.91 & 0.32 & -- \\ FCM~\cite{Lei2018fuzzy} & C & 0.70 & 1.93 & 0.32 & 12.67 \\ FRFCM~\cite{8265186} & C & 0.71 & 1.79 & 0.30 & 12.23 \\ SFFCM~\cite{Lei2018fuzzy} & C & 0.73 & 1.58 & 0.25 & 12.49 \\ RSFFC~\cite{9162644} & C & 0.75 & 1.51 & 0.24 & -- \\ Corr-Cluster~\cite{Kim2013Task} & C & 0.77 & 1.65 & -- & 9.19 \\ HO-CC~\cite{nowozin2014image} & C & 0.78 & 1.59 & -- & \textbf{9.04} \\ \midrule Supervised-NCut~\cite{Kim2013Task} & G & 0.60 & 3.10 & -- & 13.50 \\ MNCut~\cite{cour2005spectral} & G & 0.63 & 2.77 & -- & 11.94 \\ SAS~\cite{li2012segmentation} & G & 0.80 & 1.39 & -- & -- \\ $\ell_0$-Graph~\cite{wang2013graph} & G & 0.82 & 1.29 & 0.15 & 9.36 \\ AASP-Graph~\cite{zhang2019aaspgraph} & G & 0.82 & 1.32 & 0.14 & 13.38 \\ AFA-Graph (Gaussian) & G & 0.80 & 1.38 & 0.15 & 15.85 \\ AFA-Graph (Bilateral) & G & 0.81 & 1.32 & 0.14 & 13.92 \\ AFA-Graph (IKDE) & G & \textbf{0.82} & \textbf{1.29} & \textbf{0.14} & 13.37 \\ \bottomrule \end{tabular} \end{table}
\subsection{Time complexity analysis} Our method includes steps of over-segmentation, feature extraction, global node selection, NOLRR-graph construction, and Tcut. The time complexities of the OMP~\cite{you2016scalable} global node selection and the NOLRR-graph construction are analyzed in Section~\ref{node_selection} and Section~\ref{graph_construction}, respectively. The time complexity of Tcut is $O(k_T|N|^{3/2})$ with a small constant. On the aforementioned computer, our method takes 7.61 seconds in total to segment a 481$\times$321 image from BSD300 on average: 5.11 seconds for generating superpixels, 0.56 seconds for global node selection, 1.12 seconds for building the adaptive affinity graph, and 0.82 seconds for bipartite graph construction and partitioning with Tcut. The average running time (ART) of the proposed AFA-graph is close to that of SAS and slightly faster than those of GL-graph and our previous AASP-graph; the AFA-graph is more efficient than GL-graph and AASP-graph in regard to adaptive node selection and graph construction. In contrast, $\ell_0$-graph, MNCut, GL-graph, and Ncut usually take more than 20, 30, 100, and 150 seconds, respectively, mainly because extracting different features costs too much time. In summary, there are four main reasons why our AFA-graph performs well: i) the proposed IKDE estimates the mLab features of natural images to reduce noise; ii) the combination of SPR and APC selects global nodes that accurately reflect the feature distribution of superpixels; iii) the proposed NOLRR-graph reduces time complexity while improving segmentation accuracy; iv) the AFA-graph combines the adjacency-graph and the NOLRR-graph, assimilating the advantages of both.
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed framework with state-of-the-art approaches on the SBD dataset.} \label{tab:SBD} \begin{tabular}{lccccc} \toprule Methods &Class & $\textrm{PRI}\uparrow$ & $\textrm{VoI}\downarrow$ & $\textrm{GCE}\downarrow$ & $\textrm{BDE}\downarrow$ \\ \midrule SFFCM~\cite{Lei2018fuzzy} & C & 0.66 & 1.89 & 0.22 & 16.44 \\ \midrule $\ell_0$-Graph~\cite{wang2013graph} & G & 0.80 & 1.97 & 0.20 & \textbf{9.96} \\ AASP-Graph~\cite{zhang2019aaspgraph} & G & 0.81 & 1.78 & \textbf{0.17} & 10.61 \\ SAS~\cite{li2012segmentation} & G & 0.81 & 1.78 & 0.17 & 10.52 \\ AFA-Graph (Gaussian) & G & 0.80 & 1.78 & 0.20 & 11.11 \\ AFA-Graph (Bilateral) & G & 0.81 & 1.75 & 0.18 & 10.67 \\ AFA-Graph (IKDE) & G & \textbf{0.81} & \textbf{1.75} & 0.18 & 10.33 \\ \bottomrule \end{tabular} \end{table}
\begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \caption{Quantitative results of the proposed framework with state-of-the-art approaches on the PASCAL VOC dataset.} \label{tab:VOC} \begin{tabular}{lccccc} \toprule Methods &Class & $\textrm{PRI}\uparrow$ & $\textrm{VoI}\downarrow$ & $\textrm{GCE}\downarrow$ & $\textrm{BDE}\downarrow$ \\ \midrule SFFCM~\cite{Lei2018fuzzy} & C & 0.61 & \textbf{1.38} & 0.21 & 37.69 \\ \midrule AASP-Graph~\cite{zhang2019aaspgraph} & G & 0.58 & 1.83 & 0.18 & 41.94 \\ SAS~\cite{li2012segmentation} & G & 0.61 & 1.65 & 0.17 & 39.13 \\ $\ell_0$-Graph~\cite{wang2013graph} & G & 0.61 & 1.59 & 0.17 & 38.73 \\ AFA-Graph (Gaussian) & G & 0.61 & 1.61 & 0.17 & 35.42 \\ AFA-Graph (Bilateral) & G & 0.62 & 1.56 & 0.16 & 34.14 \\ AFA-Graph (IKDE) & G & \textbf{0.62} & 1.55 & \textbf{0.16} & \textbf{34.47} \\ \bottomrule \end{tabular} \end{table}
\begin{figure}[!t] \centering \includegraphics[width=3.3in]{figures/11.jpg} \caption{Failure cases of our AFA-graph. The AFA-graph uses only pixel color information, which fails to capture enough contour and texture cues for segmentation.} \label{fig:falseressult} \end{figure}
\section{Introduction} \label{sec:intro} \IEEEPARstart{A}{s} a fundamental and challenging task in computer vision, image segmentation is the process of decomposing an image into independent regions and plays an important role in many high-level applications~\cite{zhu2016beyond}. Unsupervised methods receive much attention because they require no prior knowledge. In the literature, many unsupervised segmentation methods have been intensively studied~\cite{li2018iterative}. Representative works rely on constructing a reliable affinity graph for the representation of image content, such as the adjacency-graph~\cite{li2012segmentation}, $\ell$$_0$-graph~\cite{wang2013graph}, and global/local affinity graph (GL-graph)~\cite{wang2015global}.
\begin{figure}[!t] \centering \subfloat[]{ \includegraphics[width=1.1in]{figures/1-1.jpg} }\hspace{-0.8em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-2.jpg} }\hspace{-0.8em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-3.jpg} } \vspace{-0.5em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-4.jpg} }\hspace{-0.8em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-5.jpg} }\hspace{-0.8em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-7.jpg} } \vspace{-0.5em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-8.jpg} }\hspace{-0.8em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-9.jpg} }\hspace{-0.8em} \subfloat[]{ \includegraphics[width=1.1in]{figures/1-10.jpg} } \caption{Comparison of different graph-based segmentation methods. (a) Input image. (b)-(e) Superpixels generated by over-segmenting the image at different scales. (f)-(i) Segmentation results of the adjacency-graph~\cite{li2012segmentation}, $\ell_0$-graph~\cite{wang2013graph}, GL-graph~\cite{wang2015global}, and our AFA-graph, respectively. Although superpixel features are extremely complex and vary greatly at different scales, our method enforces the global structure and simultaneously preserves more local information.} \label{introfig} \end{figure}
Clearly, for these affinity graph-based methods, the segmentation performance significantly depends on the effectiveness of the constructed affinity graph, with particular emphasis on the neighborhood topology and the pairwise affinities between nodes (\textit{i.e.}, pixels or superpixels). For example, the adjacency-graph~\cite{li2012segmentation} of coherent superpixels is applied to aggregate cues for segmentation; however, it is unable to collect global grouping cues when objects occupy large areas of the image. The $\ell_0$-graph~\cite{wang2013graph} uses sparse representation, which enables it to capture global grouping cues, but it has a tendency to create isolated regions. As shown in Fig.~\ref{introfig}, GL-graph~\cite{wang2015global} combines the adjacency-graph and the $\ell_0$-graph according to superpixel areas at different scales, leading to a better result than using a single graph; the superpixels are simply divided into three parts (small, medium, and large) according to their area. However, three difficulties remain to be solved: i) these affinity graph-based methods ignore the noise in images and features, which can influence the accuracy of pairwise similarities and further hinder affinity propagation; ii) due to the extreme complexity of superpixel features and their wide variation at different scales, the combination principles of different graphs are unreliable and often rely on empiricism; iii) multi-scale combinatorial grouping with graph construction introduces additional computations, incurring higher computational complexity.

To solve these problems, an \emph{adaptive fusion affinity graph} (AFA-graph) is proposed to segment natural images. The proposed AFA-graph combines the local and global nodes of superpixels at different scales based on affinity propagation. The superpixels are obtained by over-segmenting an input image at different scales and are then filtered by an improved kernel density estimation method. We use affinity propagation clustering to select global nodes among these superpixels according to their subspace-preserving representation.
Moreover, the noise-free online low-rank representation (NOLRR) is used to obtain a NOLRR-graph at each scale via a sparse representation of the mLab features of the global nodes. All superpixels at each scale are used to build an adjacency-graph, which is further updated by the NOLRR-graph. We introduce a bipartite graph to map the relationship between the original image pixels and superpixels, and to enable propagation of grouping cues across superpixels at different scales. Extensive experiments are conducted on five public datasets, namely BSD300, BSD500, MSRC, SBD, and PASCAL VOC, with four metrics (PRI, VoI, GCE, and BDE) for quantitative comparisons. The experimental results show the effectiveness of our AFA-graph compared with the state-of-the-art methods. This work makes the following contributions.
\begin{itemize}
\item We explore the impact of noise on pairwise similarity and affinity propagation between superpixels.
\item We construct an AFA-graph that combines different graphs with sparsity and a high discriminating power for natural image segmentation.
\item We propose a novel NOLRR-graph to improve segmentation performance while reducing the time complexity.
\item Extensive quantitative and qualitative experiments show that our AFA-graph outperforms recent methods on five benchmark datasets.
\end{itemize}
A preliminary conference version of this paper appeared as the adaptive affinity graph with subspace pursuit (AASP-graph)~\cite{zhang2019aaspgraph}. Compared with AASP-graph~\cite{zhang2019aaspgraph}, this study contains: i) an adaptive fusion affinity graph, named AFA-graph, proposed to segment natural images with online low-rank representation; ii) a novel global node selection strategy combining subspace-preserving representation and affinity propagation clustering, a new algorithmic enhancement that further improves global node selection and segmentation performance; iii) a NOLRR-graph that improves segmentation accuracy and reduces computational complexity; iv) extensive experiments validating the effectiveness of the global node selection strategy and the NOLRR-graph. Additional experiments on five datasets and theoretical analyses are also provided to verify the robustness of AFA-graph. The official code is available at \url{https://github.com/Yangzhangcst/AFA-graph}.

The rest of this paper is organized as follows. Related works are reviewed in Section II. The proposed AFA-graph for natural image segmentation is presented in Section III. Experimental results are reported in Section IV, and the paper is concluded in Section V.
\section{FRAMEWORK} In this section, we first give an overview of our proposed AFA-graph. Then, we describe global node selection. Finally, we present the process of NOLRR-graph construction in detail.
\begin{figure*}[!t] \centering \includegraphics[width=7in]{figures/2.jpg} \caption{The overall framework of image segmentation based on the proposed AFA-graph. After over-segmenting an input image as in SAS~\cite{li2012segmentation}, we obtain superpixels at $k$ different scales. Global nodes of the superpixels are selected through subspace-preserving representation (SPR) and affinity propagation clustering (APC) and are then used to build a NOLRR-graph at each scale. The adjacency-graph is constructed from all superpixels and is updated by the NOLRR-graph at each scale.
The updated graphs are fused to obtain the final result through Tcut~\cite{li2012segmentation}.} \label{framework} \end{figure*}
\subsection{Overview} \label{overview} The overall framework of the proposed AFA-graph for natural image segmentation is shown in Fig.~\ref{framework}. The proposed approach primarily consists of three components: superpixel representation, adaptive affinity graph construction, and fusion graph partition. For graph-based segmentation, each superpixel in the feature space is approximated by a linear combination of other superpixels, which are regarded as its neighbors, and their affinities are calculated from the corresponding representation error~\cite{wang2015global}. Let $\textbf{\emph{S}}_k=\{s_i\}^N_{i=1}$ be the superpixels of an input image \emph{\textbf{I}} at scale $k$, where $N$ is the number of superpixels. Formally, such an approximation can be written as:
\begin{equation}
\textbf{\emph{d}}_{j}=\textbf{\emph{D}}\textbf{\emph{c}}_{j}, \quad c_{jj}=0,
\end{equation}
where $\textbf{\emph{d}}_{j}\in\mathbb{R}^n$ is the representation of a superpixel over the dictionary $\textbf{\emph{D}}$, and $\textbf{\emph{c}}_{j}\in\mathbb{R}^N$ is the sparse representation of the superpixel. The constraint $c_{jj}=0$ prevents the self-representation of $\textbf{\emph{d}}_{j}$. At each scale, every superpixel is connected to its adjacent superpixels, forming a local graph. Let $\textbf{\emph{M}}_A$ be the matrix representation of the adjacent neighbors of a superpixel; we represent its feature $\textbf{\emph{f}}_i$ as a linear combination of the elements of $\textbf{\emph{M}}_A$. In practice, we solve the following problem:
\begin{eqnarray}
\textbf{\emph{c}}''_i=\mathop{\arg\min}_{\textbf{\emph{c}}'_i}||\textbf{\emph{f}}_i-\textbf{\emph{M}}_A\textbf{\emph{c}}'_i||_2.
\end{eqnarray}
The affinity coefficient $A_{i,j}$ of the adjacency-graph $\textbf{\emph{A}}$ between superpixels $s_i$ and $s_j$ is calculated as $A_{i,j}=1$ if $i$ equals $j$, and $A_{i,j}=1-(r_{i,j}+r_{j,i})/2$ otherwise, where $r_{i,j}=||\textbf{\emph{f}}_i-\textbf{\emph{c}}''_{i,j}\textbf{\emph{f}}_j||^2_2$. To combine the different graphs, the adjacency-graph $\textbf{\emph{A}}$ is updated by replacing its entries at the global nodes with those of the proposed NOLRR-graph $\textbf{\emph{W}}$, yielding the updated adjacency-graph $\textbf{\emph{A}}'$ at each scale. To map the relationships between pixels and superpixels and to enable propagation of grouping cues across superpixels at different scales, a bipartite graph is built that describes the relationships of pixels to superpixels and of superpixels to superpixels. This bipartite graph is unbalanced, which can be handled by Tcut~\cite{li2012segmentation} to obtain the final segmentation result.
\subsection{Superpixel representation} A superpixel is an irregular block of adjacent pixels with similar features, usually used as a preprocessing step for segmentation. Various visual patterns of a natural image can be captured through superpixels generated by different methods with different parameters. Like GL-graph~\cite{wang2015global}, we over-segment an input image into superpixels using the same parameters as SAS~\cite{li2012segmentation}. The color features of each superpixel are then characterized by the mean value in the CIE L*a*b* space (mLab). In general, all natural images contain noise, which influences the accuracy of pairwise similarities and further hinders affinity propagation. The disturbance of the noise to a natural image is reflected in its statistical features, and the mLab can be regarded as a special statistical feature (histogram) of the image. As pointed out by~\cite{foster2014segmentation}, a peak in the histogram corresponds to a relatively more homogeneous region in the image. The mLab of an image can thus be estimated robustly, such that the estimated feature is less sensitive to local peculiarities. To reduce the noise, we introduce an improved kernel density estimation (IKDE) method to estimate the mLab of natural images, where the histogram is smoothed by single exponential smoothing:
\begin{equation}
\textbf{\emph{f}}_{t}^{in}= \alpha \textbf{\emph{f}}_t+(1-\alpha)\textbf{\emph{f}}_{t-1}^{in},
\end{equation}
where $\alpha \in (0,1]$ is an exponential factor, $\textbf{\emph{f}}_t$ is the original feature at step $t$, and $\textbf{\emph{f}}^{in}_{t}$ is the feature smoothed by IKDE. As shown in Fig.~\ref{ikde}, the essential shape of the mLab features is preserved throughout this process. The IKDE improves the reliability of features, as explained in subsection~\ref{subsec:different modules}.
\begin{figure}[!t] \centering \subfloat[]{ \includegraphics[width=3.3in]{figures//4-3.jpg} } \vspace{0.5em} \subfloat[]{ \includegraphics[width=3.3in]{figures//4-4.jpg} } \caption{Illustration of the IKDE with the exponential smoothing method. (a) Original values of the superpixels in Lab color space. (b) Smoothed values of the superpixels processed by the IKDE.} \label{ikde} \end{figure}
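For clarity, the smoothing step can be sketched as follows (a minimal illustration of the recurrence above; the array layout and names are our own assumptions, not the authors' implementation):

\begin{verbatim}
import numpy as np

def ikde_smooth(F, alpha=1.0):
    """Single exponential smoothing of the per-superpixel mLab
    features: f_in[t] = alpha*f[t] + (1 - alpha)*f_in[t-1]."""
    F_in = np.array(F, dtype=float)   # one feature vector per row
    for t in range(1, len(F_in)):
        F_in[t] = alpha * F_in[t] + (1.0 - alpha) * F_in[t - 1]
    return F_in
\end{verbatim}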
\subsection{Global Node Selection} \label{node_selection} It is shown in~\cite{you2016scalable} that superpixels from different groups can be well approximated by a union of low-dimensional subspaces. One way to achieve this goal is to use sparse subspace clustering~\cite{elhamifar2013sparse}, which separates superpixels into groups such that each group contains only superpixels from the same subspace. Therefore, global nodes can be classified according to their subspace-preserving representation (SPR), which is the affiliation $\textbf{\emph{c}}_{j}$ of a superpixel with a subspace. It has been proved that the sparsest solution of $\textbf{\emph{c}}_{j}$, measured by the $\ell_0$-norm, is unique and conveys the most meaningful information about the superpixels~\cite{MichaelElad10}. The sparse solution is therefore obtained as:
\begin{equation}
\textbf{\emph{c}}_{j}^{*}= \mathop{\arg\min}_{\textbf{\emph{c}}_j}\|\textbf{\emph{c}}_{j}\|_0 \quad s.t.\ \textbf{\emph{f}}_j=\textbf{\emph{F}}\textbf{\emph{c}}_j, \ c_{jj}=0,
\end{equation}
where $\|\cdot\|_0$ denotes the $\ell_0$-norm, which counts the number of nonzero values in a vector, and $\textbf{\emph{F}}=[\ \emph{\textbf{f}}_1,...,\textbf{\emph{f}}_N]\in \mathbb{R}^{n\times N}$ is the smoothed feature matrix. The orthogonal matching pursuit (OMP)~\cite{you2016scalable} method is applied to seek an approximation of this sparsest solution:
\begin{equation}
\textbf{\emph{c}}_{j}^{*}= \mathop{\arg\min}_{\textbf{\emph{c}}_j}\|\textbf{\emph{f}}_j-\textbf{\emph{F}}\textbf{\emph{c}}_j\|_2^2 \quad s.t.\ \|\textbf{\emph{c}}_{j}\|_0\leq \psi, \ c_{jj}=0,
\end{equation}
where the parameter $\psi$ is the maximal number of coefficients for each input smoothed feature $\textbf{\emph{f}}_j$, controlling the sparsity of the solution. The solution $\textbf{\emph{c}}_j^{*}\in \mathbb{R}^{N}$ (the \emph{j}-th column of $\textbf{\emph{C}}^{*}\in \mathbb{R}^{N\times N}$) is computed by OMP$(\textbf{\emph{F}}_{-j}, \textbf{\emph{f}}_j)\in \mathbb{R}^{N-1}$ with a zero inserted in its \emph{j}-th entry, where $\textbf{\emph{F}}_{-j}$ is the smoothed feature matrix with the \emph{j}-th column removed.
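A compact sketch of the SPR computation (using scikit-learn's \texttt{OrthogonalMatchingPursuit} as the OMP solver; this is an illustration under our own conventions, not the authors' implementation):

\begin{verbatim}
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def spr_matrix(F, psi=3):
    """Express each column f_j of F (n x N) as a psi-sparse
    combination of the other columns; returns C* with c_jj = 0."""
    n, N = F.shape
    C = np.zeros((N, N))
    for j in range(N):
        mask = np.arange(N) != j        # exclude f_j itself
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=psi,
                                        fit_intercept=False)
        omp.fit(F[:, mask], F[:, j])
        C[mask, j] = omp.coef_
    return C

# affinity for spectral clustering: M_sp = |C*| + |C*^T|
# M_sp = np.abs(C) + np.abs(C).T
\end{verbatim}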
To better illustrate the relationship between features and the SPR of superpixels, we show superpixels generated by over-segmenting two images at different scales and compare the SPR of the superpixels, computed from mLab features, with the areas of the superpixels in Fig.~\ref{SPR}. It can be seen that, within the same image, the scale of the superpixels influences the SPR less than it influences the area; the SPR therefore represents superpixels better than the area does. As Fig.~\ref{SPR} also shows, the features of superpixels vary at different scales, so global nodes cannot be easily defined in natural images. To address this issue, we propose a novel similarity between any two superpixels and apply affinity propagation clustering (APC)~\cite{frey2007clustering} to the similarity matrix to find global nodes. As shown in Fig.~\ref{similarity}, the proposed similarity $S_{i,j}$ between two superpixels $s_i$ and $s_j$ is defined as a combination of the Euclidean distance $d^E_{ij}$ and the geodesic distance along the probability density function of their feature histogram:
\begin{equation}
S_{i,j} = -\Big(|d^E_{ij}|^e+|\sum_{x=i}^{j-1}d^E(x,x+1)|^g\Big)^{1/2},\ j>i,
\end{equation}
where $e$ and $g$ are hyper-parameters, set to $e=3$ and $g=5$, respectively. APC is well suited to this task, since nothing is known about the feature distribution of the superpixels beforehand. After $\textbf{\emph{c}}_j^{*}$ is computed, the classification of the superpixels is obtained by applying spectral clustering to the affinity matrix $\textbf{\emph{M}}_{sp}=|\textbf{\emph{C}}^{*}|+|{\textbf{\emph{C}}^{*}}^\mathsf{T}|$ with the clustering number provided by the APC. The procedure of global node selection is summarized in \textbf{Algorithm~\ref{alg:global nodes selection}}. We give a more detailed discussion of the reliability of SPR and APC in Section~\ref{subsec:different modules}.
\begin{figure*}[!t] \centering \subfloat[]{ \includegraphics[width=2.24in]{figures/3-1.jpg}} \subfloat[]{ \includegraphics[width=2.24in]{figures/3-2.jpg}} \subfloat[]{ \includegraphics[width=2.24in]{figures/3-3.jpg}} \vspace{-0.5em} \subfloat[]{ \includegraphics[width=2.24in]{figures/3-4.jpg}} \subfloat[]{ \includegraphics[width=2.24in]{figures/3-5.jpg}} \subfloat[]{ \includegraphics[width=2.24in]{figures/3-6.jpg}} \caption{The \emph{Superpixels} (top left) generated by over-segmenting the input images at different scales, with their \emph{SPR} (top right) computed from mLab features and their \emph{Areas} (bottom). From left to right, the number of superpixels in the image decreases gradually. The change of superpixels has less effect on the SPR than on the areas in the same image.} \label{SPR} \end{figure*}
\begin{figure}[!t] \centering \subfloat[]{ \includegraphics[width=1.8in]{figures/4-1.jpg}} \vspace{-0.5em} \subfloat[]{ \includegraphics[width=3in]{figures/4-2.jpg}} \caption{Illustration of the similarity $S_{i,j}$ between two superpixels $s_i$ and $s_j$. (a) Visualization results of mLab features. (b) Geodesic distance along the probability density function of the mLab feature histogram.} \label{similarity} \end{figure}
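The similarity matrix fed to APC can be sketched as follows (scikit-learn's \texttt{AffinityPropagation} with a precomputed affinity stands in for the APC step; names are ours):

\begin{verbatim}
import numpy as np
from sklearn.cluster import AffinityPropagation

def apc_similarity(feats, e=3, g=5):
    """S_ij = -(|d^E_ij|^e + |sum of consecutive distances|^g)^(1/2)
    for j > i, mirrored so the matrix stays symmetric."""
    N = len(feats)
    step = [np.linalg.norm(feats[x] - feats[x + 1])
            for x in range(N - 1)]
    S = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            d_e = np.linalg.norm(feats[i] - feats[j])
            d_g = sum(step[i:j])        # geodesic-style path length
            S[i, j] = S[j, i] = -np.sqrt(d_e ** e + d_g ** g)
    return S

# clustering = AffinityPropagation(affinity="precomputed")
# labels = clustering.fit(apc_similarity(feats)).labels_
\end{verbatim}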
\subsection{NOLRR-graph Construction} \label{graph_construction} To capture long-range grouping cues, we utilize Eq.~(1) to approximate each global node from the others in mLab features; here we choose low-rank minimization for this approximation. LRR is a significant method for segmenting data that are generated from a union of subspaces~\cite{liu2010robust,lu2016face}. Both SSC and LRR utilize the idea of expressing each sample as a linear combination of the remaining samples. LRR solves the following problem:
\begin{equation}
\mathop{\min}_{\textbf{\emph{C}}}{\dfrac{\lambda_1}{2}}{\|\textbf{\emph{F}}-\textbf{\emph{Y}}\textbf{\emph{C}}\|}_F^2+{\| \textbf{\emph{C}}\|}_*,
\end{equation}
where $\textbf{\emph{C}}\in{\mathbb{R}}^{N\times N}$ is the coefficient matrix of a representation of $\textbf{\emph{Y}}\in{\mathbb{R}}^{n\times N}$ over itself. Typically, $\textbf{\emph{Y}}$ is chosen as $\textbf{\emph{F}}$ itself. $\lambda_1$ is a tunable parameter which can be set according to the properties of the norms, ${\|\textbf{\emph{C}}\|}_*$ is the nuclear norm of the matrix, and $\| \cdot \|_F^2$ represents the squared Frobenius norm. However, solving LRR is challenging in terms of time complexity and memory cost. To remedy this, we solve the problem in an online manner, namely noise-free online LRR (NOLRR). The nuclear norm can be written as:
\begin{equation}
\|\textbf{\emph{C}}\|_*=\mathop{\min}_{\textbf{\emph{U}},\textbf{\emph{V}},\textbf{\emph{C}}=\textbf{\emph{U}}\textbf{\emph{V}}^\mathsf{T}}\dfrac{1}{2} \Big(\|\textbf{\emph{U}}\|_F^2+\|\textbf{\emph{V}}\|_F^2\Big),
\end{equation}
where $\textbf{\emph{U}}\in{\mathbb{R}}^{N\times d}$ and $\textbf{\emph{V}}\in{\mathbb{R}}^{N\times d}$. To decouple the rows of $\textbf{\emph{U}}$, we use an auxiliary matrix $\textbf{\emph{D}}=\textbf{\emph{Y}}\textbf{\emph{U}}$ and approximate the term $\textbf{\emph{F}}$ with $\textbf{\emph{D}}\textbf{\emph{V}}^\mathsf{T}$. The matrix $\textbf{\emph{D}}$ is regarded as a basis dictionary of the data, with $\textbf{\emph{V}}$ being the coefficients. We thus obtain a \emph{regularized} version of LRR and solve the following problem:
\begin{equation} \label{eq:rverolrr}
\begin{split}
\mathop{\min}_{\textbf{\emph{D,U,V}}}{\dfrac{\lambda_1}{2}}{\|\textbf{\emph{F}}-\textbf{\emph{D}}\textbf{\emph{V}}^\mathsf{T}\|}_F^2+\dfrac{1}{2} \Big(\|\textbf{\emph{U}}\|_F^2&+\|\textbf{\emph{V}}\|_F^2\Big)\\
&+{\dfrac{\lambda_2}{2}}\|\textbf{\emph{D}}-\textbf{\emph{Y}}\textbf{\emph{U}}\|_F^2.
\end{split}
\end{equation}
Let $\textbf{\emph{f}}_i$, $\textbf{\emph{y}}_i$, $\textbf{\emph{u}}_i$, and $\textbf{\emph{v}}_i$ be the $i$-th columns of the matrices $\textbf{\emph{F}}$, $\textbf{\emph{Y}}$, $\textbf{\emph{U}}^\mathsf{T}$, and $\textbf{\emph{V}}^\mathsf{T}$, respectively.
Solving Eq.~(\ref{eq:rverolrr}) indeed minimizes the following empirical cost function,
\begin{equation} \label{eq:emcostfun}
f_N(\textbf{\emph{D}})\triangleq\frac{1}{N}\sum\limits_{i=1}^N{\ell_1}(\textbf{\emph{f}}_i,\textbf{\emph{D}})+\frac{1}{N}\sum\limits_{i=1}^N{\ell_2}(\textbf{\emph{y}}_i,\textbf{\emph{D}}),
\end{equation}
where the loss functions ${\ell_1}$ and ${\ell_2}$ are defined as
\begin{equation} \label{eq:l1}
{\ell_1}(\textbf{\emph{f}}_i,\textbf{\emph{D}})=\mathop{\min}_{\textbf{\emph{v}}}{\hat{\ell_1}}(\textbf{\emph{f}}_i,\textbf{\emph{D}},\textbf{\emph{v}}),
\end{equation}
\begin{equation} \label{eq:l2}
{\ell_2}(\textbf{\emph{y}}_i,\textbf{\emph{D}})=\mathop{\min}_{\textbf{\emph{u}}}{\hat{\ell_2}}(\textbf{\emph{y}}_i,\textbf{\emph{D}},\textbf{\emph{M}}_{i-1},\textbf{\emph{u}}),
\end{equation}
that is
\begin{equation}
{\hat{\ell_1}}(\textbf{\emph{f}}_i,\textbf{\emph{D}},\textbf{\emph{v}}) \triangleq \frac{\lambda_1}{2}{\|\textbf{\emph{f}}_i-\textbf{\emph{D}}\textbf{\emph{v}}\|}_2^2+\frac{1}{2}{\|\textbf{\emph{v}}\|}_2^2,
\end{equation}
\begin{equation}
{\hat{\ell_2}}(\textbf{\emph{y}}_i,\textbf{\emph{D}},\textbf{\emph{M}}_{i-1},\textbf{\emph{u}}) \! \triangleq \! \frac{1}{2}{\|\textbf{\emph{u}}\|}_2^2+\frac{\lambda_2}{2}{\|\textbf{\emph{D}}-\textbf{\emph{M}}_{i-1}-\textbf{\emph{y}}_i\textbf{\emph{u}}^\mathsf{T}\|}_F^2,
\end{equation}
where
\begin{equation}
\textbf{\emph{M}}_{i-1}=\sum\limits_{j=1}^{i-1}\textbf{\emph{y}}_j\textbf{\emph{u}}_j^\mathsf{T}.
\end{equation}
In the $t$-th time instance, the objective function for updating the basis $\textbf{\emph{D}}$ is defined as
\begin{equation}
g_t(\textbf{\emph{D}}) \! \triangleq \! \frac{1}{t}\Big(\!\sum\limits_{i=1}^t \! {\hat{\ell_1}}(\textbf{\emph{f}}_i,\textbf{\emph{D}},\textbf{\emph{v}}_i)+\! \sum\limits_{i=1}^t \! \frac{1}{2}{\|\textbf{\emph{u}}_i\|}_2^2+\frac{\lambda_2}{2}{\|\textbf{\emph{D}}-\textbf{\emph{M}}_t\|}_F^2\Big).
\end{equation}
This is a surrogate function of the empirical cost function $f_t(\textbf{\emph{D}})$ defined in Eq.~(\ref{eq:emcostfun}), i.e., it provides an upper bound: $g_t(\textbf{\emph{D}}) \geqslant f_t(\textbf{\emph{D}})$. Expanding the first term, $\textbf{\emph{D}}_t$ is given by:
\begin{equation} \label{eq:dt}
\begin{split}
\textbf{\emph{D}}_t=\mathop{\arg\min}_{\textbf{\emph{D}}}{\dfrac{1}{t}}\Big[\dfrac{1}{2}\mathop{\mathrm{Tr}}&\big(\textbf{\emph{D}}^\mathsf{T}\textbf{\emph{D}}(\lambda_1\textbf{\emph{A}}_t +\lambda_2\textbf{\emph{I}}_d)\big) \\
& -\mathop{\mathrm{Tr}}\big(\textbf{\emph{D}}^\mathsf{T}(\lambda_1\textbf{\emph{B}}_t+\lambda_2\textbf{\emph{M}}_t)\big)\Big],
\end{split}
\end{equation}
where $\textbf{\emph{A}}_t=\sum_{i=1}^t\textbf{\emph{v}}_i\textbf{\emph{v}}_i^\mathsf{T}$ and $\textbf{\emph{B}}_t=\sum_{i=1}^t\textbf{\emph{f}}_i\textbf{\emph{v}}_i^\mathsf{T}$. In practice, a block coordinate descent approach~\cite{shen2016online} can be applied to minimize over $\textbf{\emph{D}}$. A graph can then be constructed by collecting all $\textbf{\emph{u}}_i$ and $\textbf{\emph{v}}_i$ to compute the representation matrix $\textbf{\emph{C}}=\textbf{\emph{U}}\textbf{\emph{V}}^\mathsf{T}$, so the NOLRR-graph is built as $\emph{\textbf{W}}=(|\textbf{\emph{C}}|+|{\textbf{\emph{C}}}|^\mathsf{T})/2$. The above NOLRR reduces the memory cost from \emph{O}($N^2$) to \emph{O}($nd$), with $d$ being the estimated rank ($d<n\ll N$), which makes it an appealing solution for large-scale data. The time complexity of the update on the accumulation matrices is \emph{O}($nd$) and that of $\textbf{\emph{D}}_t$ is \emph{O}($nd^2$). The procedure of NOLRR-graph construction is summarized in \textbf{Algorithm~\ref{alg:olrrgraph}}. The NOLRR-graph can improve the performance of natural image segmentation, as explained in Section~\ref{subsec:different modules}.
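A compact sketch of one streaming pass of NOLRR (assuming $\textbf{\emph{Y}}=\textbf{\emph{F}}$ and substituting the closed-form minimizers of the per-sample losses and of Eq.~(\ref{eq:dt}) for the block coordinate descent; initialization and names are ours):

\begin{verbatim}
import numpy as np

def nolrr_graph(F, d=50, lam1=1.0, seed=0):
    """One streaming pass over the columns of F (n x N); returns
    W = (|C| + |C^T|)/2 with C = U V^T (assumes Y = F)."""
    n, N = F.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n, d))         # basis dictionary D_0
    A = np.zeros((d, d)); B = np.zeros((n, d)); M = np.zeros((n, d))
    U = np.zeros((N, d)); V = np.zeros((N, d))
    lam2_ini = 1.0 / np.sqrt(n)
    for t in range(N):
        f = F[:, t]
        lam2 = np.sqrt(t + 1) * lam2_ini
        # closed-form minimizers of the two per-sample losses
        v = np.linalg.solve(lam1 * D.T @ D + np.eye(d),
                            lam1 * D.T @ f)
        u = lam2 * (D - M).T @ f / (1.0 + lam2 * (f @ f))
        # accumulation matrices (M collects y_t u_t^T per its
        # definition, with y_t = f_t here)
        A += np.outer(v, v); B += np.outer(f, v); M += np.outer(f, u)
        # closed-form basis update in place of coordinate descent:
        # D_t = (lam1*B + lam2*M) (lam1*A + lam2*I)^{-1}
        D = np.linalg.solve(lam1 * A + lam2 * np.eye(d),
                            (lam1 * B + lam2 * M).T).T
        U[t] = u; V[t] = v
    C = U @ V.T
    return 0.5 * (np.abs(C) + np.abs(C.T))
\end{verbatim}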
\renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[!t] \caption{Global node selection} \label{alg:global nodes selection} \begin{algorithmic}[1] \REQUIRE Feature $\textbf{\emph{F}}\in \mathbb{R}^{n\times N}$, $e$, $g$, $\psi=3$, $\tau=10^{-6}$; \STATE Compute $\textbf{\emph{c}}_j^{*}$ from OMP$(\textbf{\emph{F}}_{-j},\textbf{\emph{f}}_j)$; \STATE Set $\textbf{\emph{C}}^{*}=[\textbf{\emph{c}}_1^{*},...,\textbf{\emph{c}}_N^{*}]$ and $\textbf{\emph{M}}_{sp}=|\textbf{\emph{C}}^{*}|+|{\textbf{\emph{C}}^{*}}^\mathsf{T}|$; \STATE Compute the classification from $\textbf{\emph{M}}_{sp}$ by spectral clustering with the clustering number provided by the APC. \ENSURE Global nodes. \end{algorithmic} \end{algorithm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[!t] \caption{NOLRR-graph construction} \label{alg:olrrgraph} \begin{algorithmic}[1] \REQUIRE Feature $\textbf{\emph{F}}\in \mathbb{R}^{n\times N}$; parameter $d$; \STATE \textbf{Initialize} $\lambda_1=1$, $\lambda_2^{ini}=1/\sqrt{n}$, basis dictionary $\textbf{\emph{D}}_0\in \mathbb{R}^{n\times d}$, zero matrices $\textbf{\emph{A}}_0\in \mathbb{R}^{d\times d}$, $\textbf{\emph{B}}_0\in \mathbb{R}^{n\times d}$, $\textbf{\emph{M}}_0\in \mathbb{R}^{n\times d}$, $\textbf{\emph{U}}\in \mathbb{R}^{N\times d}$, $\textbf{\emph{V}}\in \mathbb{R}^{N\times d}$. \FOR {$t=1$ to $N$} \STATE Access the $t$-th atom $\textbf{\emph{f}}_t$ and compute $\lambda_2=\sqrt{t}\times \lambda_2^{ini}$; \STATE Compute the coefficients $\textbf{\emph{v}}_t$ and $\textbf{\emph{u}}_t$ using Eq.~(\ref{eq:l1}) and Eq.~(\ref{eq:l2}), respectively; \STATE Update the accumulation matrices \begin{center} $\textbf{\emph{A}}_t \leftarrow \textbf{\emph{A}}_{t-1}+\textbf{\emph{v}}_{t}\textbf{\emph{v}}_{t}^\mathsf{T}$, $\textbf{\emph{B}}_t \leftarrow \textbf{\emph{B}}_{t-1}+\textbf{\emph{f}}_{t}\textbf{\emph{v}}_{t}^\mathsf{T}$, $\textbf{\emph{M}}_t \leftarrow \textbf{\emph{M}}_{t-1}+\textbf{\emph{f}}_{t}\textbf{\emph{u}}_{t}^\mathsf{T}$; \end{center} \STATE Update the basis dictionary using Eq.~(\ref{eq:dt}); \STATE Update the matrices \begin{center} $\textbf{\emph{U}}_{t,:} \leftarrow \textbf{\emph{u}}_{t}$, $\textbf{\emph{V}}_{t,:} \leftarrow \textbf{\emph{v}}_{t}$; \end{center} \ENDFOR \STATE Compute the matrix $\textbf{\emph{C}}=\textbf{\emph{U}}\textbf{\emph{V}}^\mathsf{T}$ and the NOLRR-graph $\emph{\textbf{W}}=(|\textbf{\emph{C}}|+|{\textbf{\emph{C}}}|^\mathsf{T})/2$. \ENSURE NOLRR-graph $\textbf{\emph{W}}$. \end{algorithmic} \end{algorithm}
\section{Related Works} Unsupervised methods segment images without any human intervention. As we focus on unsupervised image segmentation in this paper, this section reviews related methods. A more detailed review of the image segmentation literature can be found in~\cite{zhu2016beyond}. The essence of image segmentation can be regarded as a clustering problem, which groups the pixels into locally homogeneous regions.
Some clustering-based methods such as $k$-means, mean-shift (MS)~\cite{comaniciu2002mean}, fusion of clustering results (FCR)~\cite{4480125}, correlation clustering (Corr-Cluster)~\cite{Kim2013Task}, higher-order correlation clustering (HO-CC)~\cite{nowozin2014image}, deviation-sparse fuzzy c-means (DSFCM)~\cite{8543645}, membership scaling fuzzy c-means (MSFCM)~\cite{9120181}, FRFCM~\cite{8265186}, automatic fuzzy clustering (AFC)~\cite{8770118}, robust self-sparse fuzzy clustering (RSFFC)~\cite{9162644} and superpixel-based fast fuzzy c-means (SFFCM)~\cite{Lei2018fuzzy} are typical examples. In particular, subspace clustering~\cite{you2016scalable,feng2013online} has been extensively studied in computer vision. Many subspace clustering algorithms aim to obtain a structured representation to fit the underlying data, such as sparse subspace clustering~\cite{elhamifar2013sparse} and LRR~\cite{liu2010robust}. Both of them utilize the idea of self-expressiveness, which expresses each sample as a linear combination of the remaining samples~\cite{you2016scalable}. However, such approaches can be computationally expensive and have a high memory cost. One of the most popular ways to alleviate the computational complexity is online implementation~\cite{feng2013online,shen2016online}. Besides clustering-based methods, graph-based methods, which treat segmentation as perceptual grouping and organization, have become among the most popular approaches to image segmentation. They fuse feature and spatial information; examples include normalized cut (Ncut)~\cite{shi2000normalized}, the Felzenszwalb-Huttenlocher (FH) graph-based method~\cite{Felzenszwalb2004Efficient}, the $\ell_0$-graph~\cite{wang2013graph}, iterative ensemble Ncut~\cite{Li2016Iterative}, multi-scale Ncut (MNcut)~\cite{cour2005spectral}, the GL-graph~\cite{wang2015global}, the region adjacency graph (RAG)~\cite{7484679}, and the AASP-graph~\cite{zhang2019aaspgraph}. More recently, Kim et al.~\cite{Tae2013Learning} proposed a multi-layer sparsely connected graph to effectively combine local grouping cues, and then applied semi-supervised learning to define the relevance scores between all pairs of these graph nodes as the affinities of the graph. Wang et al.~\cite{wang2014regularized} introduced the normalized tree partitioning and the average tree partitioning to optimize normalized and average cut over a tree for image segmentation. Saglam and Baykan~\cite{Saglam2017Sequential} used Prim's sequential representation of the minimum spanning tree for image segmentation. Notably, Li et al.~\cite{li2018iterative} proposed a region adjacency cracking method to adaptively loosen the color labeling constraints. Furthermore, a heuristic four-color labeling algorithm was used to establish a uniform appearance of those regions with global consistency. In this method, affinity propagation clustering~\cite{frey2007clustering} is applied to crack the adjacency constraint, which is suitable for tasks with an unknown distribution of regional characteristics. Affinity propagation can adaptively determine the clusters based on the relationships between the given regions. The GL-graph~\cite{wang2015global} is built over superpixels to capture both short- and long-range grouping cues, thereby enabling the propagation of grouping cues between superpixels of different scales via a bipartite graph~\cite{li2012segmentation}.
It can be noticed that combining a global graph with local graphs over superpixels yields better segmentation results.
\section{Introduction} Quantum field theories on noncommutative spaces appeared at the end of the last century \cite{Grosse:1992bm,Doplicher:1994tu,Filk:1996dm, Grosse:1995ar}. These investigations and compactifications of M-theory on the noncommutative torus \cite{Connes:1997cr} motivated the perturbative renormalisation programme of QFT on noncommutative geometries. Whereas these theories are renormalisable at one-loop order \cite{Martin:1999aq, Krajewski:1999ja}, a new class of problems (UV/IR-mixing \cite{Minwalla:1999px,Chepelev:1999tt}) was found at higher loop orders. Two of us (HG+RW) found a way to avoid the UV/IR-mixing problem for scalar fields by understanding that it signals the generation of another marginal coupling \cite{Grosse:2004yu,Grosse:2004wte}. This coupling corresponds to a harmonic oscillator potential and implements a particular duality under Fourier transform \cite{Langmann:2002cc}. The duality-covariant scalar model (with oscillator potential) is perturbatively renormalisable \cite{Grosse:2003aj,Grosse:2004yu}. Moreover, the $\beta$-function of the coupling constant vanishes at the self-duality point \cite{Grosse:2004by,Disertori:2006uy}. The proof \cite{Disertori:2006nq} led to a new solution strategy first formulated by two of us (HG+RW) in \cite{Grosse:2009pa} and then extended in \cite{Grosse:2012uv}. All these developments and results have been reviewed previously in great detail \cite{Szabo:2001kg,Wulkenhaar:2006si,Rivasseau:2007ab,SurveyNCG}. This paper provides the first review of the enormous progress made during the last three years. It started with the exact solution by one of us (RW) with E.~Panzer \cite{Panzer:2018tvy} of the non-linear Dyson-Schwinger equation found in \cite{Grosse:2009pa} for the case of 2-dimensional Moyal space. A renewed interest in higher planar correlation functions \cite{DeJong} established a link to the Hermitian 2-matrix model \cite{Eynard:2005iq} which has a non-mixed sector that follows \emph{topological recursion} \cite{Eynard:2016yaa}. This observation identified the key to generalise \cite{Panzer:2018tvy} to a solution of all quartic matrix models \cite{Grosse:2019jnv} by three of us (AH+HG+RW). After initial investigations of the algebraic-geometrical structure in \cite{Schurmann:2019mzu}, three of us (JB+AH+RW) identified in \cite{Branahl:2020yru} the objects which obey (conjecturally; proved in the planar case \cite{Hock:2021tbl}) \emph{blobbed topological recursion} \cite{Borot:2015hna}, a systematic extension of topological recursion \cite{Eynard:2007kz}. Several properties of these objects have already been investigated \cite{Branahl:2020uxs}. {\footnotesize\tableofcontents} \section{What are quantum fields on quantum spaces?} \label{sec:motiv} \subsection{The free scalar field} Quantum Physics was developed (by Planck, Heisenberg, Schr\"odinger and others) between 1900 and 1926, special relativity by Einstein in 1905. Attempts to combine both led to the Klein-Gordon and the Dirac equations. These equations, coupled to electromagnetic potentials, describe energy levels of the electron and other particles. Certain energy levels predicted to be degenerate are split in nature (Lamb-shift). These tiny corrections are explained by quantum electrodynamics (QED) and other quantum field theories. A standard treatment in quantum field theory consists in expanding the theory around a linear, exactly solvable model. A favourite example is the free scalar field in $D$ dimensions, which arises by canonical quantisation of the Klein-Gordon equation.
The two-point function of the free scalar field has an analytic continuation in time to a two-point function on Euclidean space: \begin{align} G(t,\vec{x}) = \int \frac{d^{D-1}\vec p}{(2\pi)^{D-1} \cdot 2\omega_p} e^{-\omega_p |t| + \mathrm{i} \vec p \vec x },\qquad \omega_p=\sqrt{M^2+\vec{p}^2}\;. \label{Schwinger-2pt} \end{align} According to Minlos' theorem there exists a measure on the space $X$ of tempered distributions such that $G(t,\vec{x})= \int_X d \mu (\phi) \; \phi(t,\vec{x})\phi(0,\vec{0})$, where $\phi(t,\vec{x})$ is a stochastic field. Moments of $d \mu (\phi)$ are understood as Euclidean correlation functions. They fulfil the Osterwalder-Schrader (OS) axioms \cite{Osterwalder:1974tc} of smoothness, Euclidean covariance, OS positivity and symmetry. \subsection{Interacting fields and renormalisation} \label{sec:renorm} The Minlos measure associated with (\ref{Schwinger-2pt}) is the starting point for attempts to construct interacting models (in the Euclidean picture). They are formally obtained by a deformation \begin{align} d\mu(\phi) \mapsto d\mu_{\mathrm{int}}(\phi) = \frac{d\mu(\phi) \;e^{-S_{\mathrm{int}}(\phi)}}{ \int_X d\mu(\phi) \;e^{-S_{\mathrm{int}}(\phi)}}, \label{measure-int} \end{align} where the derivative of the functional $S_{\mathrm{int}}$ is non-linear in $\phi$. As a matter of fact, in all cases of interest this deformation is problematic because there exist (many) moments $\int_X d\mu_{\mathrm{int}}(\phi) \;\phi(t_1,\vec{x}_1)\cdots \phi(t_n,\vec{x}_n)$ which diverge. To produce meaningful quantities a procedure known as renormalisation theory is necessary. Its first step is regularisation, which amounts to understanding the space $X$ of all $\phi$ as the limit of a sequence (or net) of finite-dimensional spaces $X_\alpha$. Then every finite-dimensional space is endowed with its own functional $S^\alpha_{\mathrm{int}}(\phi)$ which is carefully adjusted so that certain moments $\int_{X_\alpha} d\mu_{\mathrm{int}}^\alpha(\phi) \; \phi(x_1)\cdots \phi(x_n)$ stay constant when increasing $\alpha$. Hence, they at least have a limit when approaching $X$. The challenge is to make sure that an adjustment of finitely many moments suffices to render all moments meaningful; the theory is then called renormalisable. In realistic particle physics models, this was only achieved in infinitesimal neighbourhoods of the free theory, which by far miss the required physical parameter values. Much harder are models which require adjustment of infinitely many moments to render infinitely many other moments meaningful; Einstein gravity could be of this type. \subsection{Scalar fields on quantum spaces} This article reviews a framework of quantum field theory where the renormalisation programme sketched in sec.~\ref{sec:renorm} can be fully implemented for truly interacting fields. Our fields do not live on familiar space-time; they live on a quantum space (a \textit{quantised} space whose points follow non-trivial commutation relations -- the main example in this work will be the noncommutative Moyal space). It is conceivable that such quantum spaces could be a good description of our world when gravity and quantum physics are simultaneously relevant \cite{Doplicher:1994tu}. It is probably difficult to give a picture of a quantum space, but it is fairly easy to describe scalar fields on it. For that we let our finite-dimensional spaces $X_\alpha$ of sec.~\ref{sec:renorm} be the spaces $H_N$ of Hermitian $N\times N$-matrices.
Given a sequence $(E_1,E_2,\dots)$ of positive real numbers, the energies, we construct the Minlos measure $d\mu(\phi)$ of a free scalar field on quantum space by the requirement \begin{align} \int_{H_N} d\mu(\phi) \;\phi_{kl}=0,\qquad \int_{H_N} d\mu(\phi) \;\phi_{kl}\phi_{k'l'} = \frac{\delta_{kl'}\delta_{lk'}}{N(E_k+E_l)}, \label{minlos-q} \end{align} and factorisation of higher $n$-point functions. Here, the $(\phi_{kl})$ are the matrix elements of $\phi \in H_N$. The $E_k$ should be viewed as eigenvalues of the Laplacian on our quantum space. Their asymptotic behaviour (for $N\to \infty$) defines a dimension of the quantum space as the smallest number $D$ such that $\sum_{k=1}^\infty E_k^{-D/2-\epsilon}$ converges for all $\epsilon>0$. We deform the Minlos measure (\ref{minlos-q}) as in (\ref{measure-int}) via a quartic functional, \begin{align} d\mu_{\mathrm{int}}(\phi)= \frac{d\mu(\phi) \;e^{-\frac{\lambda N}{4} \mathrm{Tr}(\phi^4)}}{ \int_{H_N} d\mu(\phi) \;e^{-\frac{\lambda N}{4} \mathrm{Tr}(\phi^4)}},\qquad d\mu\text{ as in } (\ref{minlos-q}). \label{minlos-q-int} \end{align} The deformation (\ref{minlos-q-int}) is a quartic analogue of the Kontsevich model \cite{Kontsevich:1992ti} in which a cubic potential $\frac{\mathrm{i} N}{6} \mathrm{Tr}(\phi^3)$ is used to deform (\ref{minlos-q}). The Kontsevich model gives deep insight into the moduli space $\overline{\mathcal{M}}_{g,n}$ of complex curves and provides a rigorous formulation of quantum gravity in two dimensions \cite{Witten:1990hr}. For obvious reasons we call the model which studies (\ref{minlos-q-int}) and its moments the \emph{quartic Kontsevich model}. In dimension $D\geq 2$ (encoded in the $E_k$), moments of (\ref{minlos-q-int}) show the same divergences as discussed in sec.~\ref{sec:renorm} on ordinary space-time. Between 2002 and 2004 we treated them as formal power series in (infinitesimal) $\lambda$. It turned out that an affine rescaling $E_k \mapsto Z E_k + c$ was enough, where $Z=Z(\lambda,N)$ and $c=c(\lambda,N)$ depend only on $\lambda$ and the size $N$ of the matrices, but not on $k$. We actually considered a more general Minlos measure where two further renormalisation parameters $\Omega(\lambda,N)$ and $\tilde{\lambda}(\lambda,N)$ were necessary \cite{Grosse:2004yu}; in lowest $\lambda$-order we found \cite{Grosse:2004by} that $\frac{\Omega^2}{\tilde{\lambda}}$ is independent of $N$. This was a remarkable symmetry which indicated that the Landau ghost problem \cite{Landau:1954??} could be absent in this model. This perspective influenced V.~Rivasseau who, with M.~Disertori, R.~Gurau and J.~Magnen, proved in \cite{Disertori:2006nq} that the model with quartic interaction functional (\ref{minlos-q-int}) tolerates the same $\lambda$ for all matrix sizes $N$, at least for infinitesimal $\lambda$. We understood immediately that their method could potentially provide relations between moments of $d\mu_{\mathrm{int}}(\phi)$. This turned out to be true, with amazing consequences: The model was revealed to be solvable. This solution has two aspects: First, the planar 2-point function of the measure (\ref{minlos-q-int}) satisfies a closed non-linear equation \cite{Grosse:2009pa, Grosse:2012uv}. A solution theory for this equation was developed in \cite{Panzer:2018tvy, Grosse:2019jnv}; it suggested a particular change of variables.
In a second step it was found in \cite{Branahl:2020yru} that special combinations of the correlation functions possess, after complexification and the mentioned change of variables, a beautiful and universal algebraic-geometrical structure: (blobbed) topological recursion. The next section gives a short introduction. We return to the model under consideration in section~\ref{sec:solvingthemodel}. \section{Algebraic geometrical structures} \label{sec:AlgebraicGeometry} \subsection{Topological recursion} Topological recursion (TR) is a universal structure which is common to surprisingly many different topics in algebraic geometry, enumerative geometry, noncommutative geometry, random matrix theory, string theory, knot theory and more. It covers e.g.\ \textit{Witten's conjecture} \cite{Witten:1990hr} about intersection numbers on the moduli space of stable complex curves (proved by Kontsevich \cite{Kontsevich:1992ti}), Mirzakhani's recursion \cite{Mirzakhani:2006fta} for \textit{Weil-Petersson volumes} of bordered Riemann surfaces and generating functions of \textit{Hurwitz numbers} \cite{Bouchard:2007hi} with the same universal structure (see eq.\ (\ref{eq:tr}) below)! The common structure was formulated by B.~Eynard and N.~Orantin \cite{Eynard:2007kz} after insight into the Hermitian 2-matrix model \cite{Chekhov:2006vd}. Since then it has become an active field of research. We refer to \cite{borot19} for an overview covering more than 100 papers. \medskip Topological recursion constructs a family $\omega_{g,n}$ of symmetric meromorphic differentials on products of Riemann surfaces $\Sigma$. These $\omega_{g,n}$ are labeled by the genus $g$ and the number $n$ of marked points of a compact complex curve; they occur as invariants of algebraic curves $P(x,y)=0$ (understood in parametric representation $x(z)$ and $y(z)$). From \textit{initial data} consisting of a ramified covering $x: \Sigma \to \Sigma_0$ of Riemann surfaces, a differential 1-form $\omega_{0,1}(z)=y(z)dx(z)$ and the \emph{Bergman kernel} $\omega_{0,2}(z,u)=B(z,u)=\frac{dzdu}{(z-u)^2}$ (here assuming a genus-0 spectral curve), TR constructs the meromorphic differentials $\omega_{g,n+1}(z_1,...,z_n,z)$ with $2g+n\geq 2$ via the following universal formula (in which we abbreviate $I=\{z_1,...,z_n\}$): \begin{align} \label{eq:tr} & \omega_{g,n+1}(I,z) \\ & =\sum_{\beta_i} \Res\displaylimits_{q\to \beta_i} K_i(z,q)\bigg( \omega_{g-1,n+2}(I, q,\sigma_i(q)) +\hspace*{-1cm} \sum_{\substack{g_1+g_2=g\\ I_1\uplus I_2=I\\ (g_1,I_1)\neq (0,\emptyset)\neq (g_2,I_2)}} \hspace*{-1.1cm} \omega_{g_1,|I_1|+1}(I_1,q) \omega_{g_2,|I_2|+1}(I_2,\sigma_i(q))\!\bigg)\;. \nonumber \end{align} This construction proceeds recursively in the negative Euler characteristic $-\chi=2g+n-2$. Here we need to define: \begin{itemize} \item a sum over the \textit{ramification points} $\beta_i$ of the ramified covering $x:\Sigma\to \Sigma_0$, defined via $dx(\beta_i)=0$. \\\textbf{Example}: $x(z)=z^2$ for a Riemann surface $\Sigma=\hat{\mathbb{C}} :=\mathbb{C}\cup \{\infty\}$ with $\beta=0$. The two sheets merge at $z=\beta=0$, but also at $z= \infty$ which is exceptional. \item the \textit{local Galois involution} $\sigma_i\neq \mathrm{id}$ defined via $x(q)=x(\sigma_i(q))$ near $\beta_i$, which has $\beta_i$ itself as fixed point. \\ \textbf{Example}: $x(z)=z^2$ gives $\sigma(z)=-z$; \item the \textit{recursion kernel} $K_i(z,q) =\frac{\frac{1}{2}\int^{q'=q}_{q'=\sigma_i(q)} B(z,q')}{\omega_{0,1}(q)-\omega_{0,1}(\sigma_i(q))}$ constructed from the initial data.
\end{itemize} To orientate oneself within this jungle of definitions, we turn the master formula into a picture (Fig.~\ref{fig:tr}). The recursion becomes a successive gluing of objects at their boundaries, starting with the recursion kernel and two cylinders and then becoming more and more complicated. \begin{figure}[h!t] \centering \includegraphics[width= 1.04\textwidth]{toprec.pdf} \caption{There are two different ways to obtain the left-hand side $\omega_{g,n}$ by gluing the initial data (recursion kernel with three boundaries) with something of lower topology: Either one glues one object with one genus less and one boundary more $(g-1,n+2)$ with two boundaries of the kernel, creating the missing genus, or one glues two objects with the kernel (causing no genus change). Then one has to take into account all $(g_1,n_1),(g_2,n_2)$ consistent with the left-hand side -- this is the origin of the sum over all possible partitions in the master formula (\ref{eq:tr}).} \label{fig:tr} \end{figure} We conclude this subsection by giving the rather simple initial data of the three previously listed prime examples ($\omega_{0,2}=B$ and $\Sigma = \hat{ \mathbb{C}}=\Sigma_0$ in all cases): \begin{itemize} \item \textbf{Witten's conjecture}: $x(z)=z^2$, $\omega_{0,1}(z)=2z^2dz$; \item \textbf{Weil-Petersson volumes}: $x(z)=z^2$, $\omega_{0,1}(z)=\frac{4z}{\pi}\sin(\pi z)dz$; \item \textbf{Simple Hurwitz numbers}: $x(z)=-z+\log (z)$, $\omega_{0,1}(z)=(1-z)dz$. \end{itemize} \subsection{Blobbed topological recursion} We emphasised that topological recursion covers a large spectrum of examples in enumerative geometry, mathematical physics, etc. The model under consideration fits perfectly into an extension of TR developed in 2015 by G.~Borot and S.~Shadrin \cite{Borot:2015hna}: \emph{Blobbed topological recursion (BTR)}. \smallskip Its philosophy is quite analogous to that of TR; however, the recursion is equipped with an infinite stack of further initial data, successively contributing to each recursion step. More precisely, the meromorphic differentials \begin{align*} \omega_{g,n}(...,z)=\mathcal{P}_z\omega_{g,n}(...,z) +\mathcal{H}_z\omega_{g,n}(...,z) \end{align*} decompose into a \textit{polar part} $\mathcal{P}_z\omega_{g,n}$ (with poles in a selected variable $z$ at ramification points) and a \textit{holomorphic part} $\mathcal{H}_z\omega_{g,n}$ with poles somewhere else. The polar part follows exactly the usual TR \cite{Eynard:2007kz}, whereas the holomorphic part is not given via a universal structure. \begin{figure}[h!] \centering \includegraphics[width= 0.99\textwidth]{btr.png} \caption{Graphical interpretation of blobbed topological recursion: The usual recursion formula is enriched by a holomorphic (at ramification points) add-on $\mathcal{H}_z\omega_{g,n+1}$ (coloured) that appears as a surplus structure in the solution of the loop equations. It has to be seen as additional data that has to be taken into account at each further recursion step. \label{diagrams}} \end{figure} This extended theory was baptised \textit{blobbed} due to the occurring purely holomorphic parts (for $2g+m-2>0$) $\phi_{g,m}(z_1,...,z_{m-1},z)= \mathcal{H}_{z_1}... \mathcal{H}_{z_{m-1}}\mathcal{H}_z\omega_{g,m}(z_1,...,z_{m-1},z)$, called the \textit{blob}.
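Whether blobbed or not, all polar parts are computed by residue formulas of the type (\ref{eq:tr}). As a concrete warm-up with plain TR, the following Python (sympy) sketch of ours evaluates the master formula for the Witten/Airy initial data listed above at $(g,n{+}1)=(0,3)$, where only the $\omega_{0,2}\,\omega_{0,2}$ splittings contribute; the differentials are stripped off and only scalar prefactors are tracked, and the overall sign depends on the orientation convention for the pullback along $\sigma$:

\begin{verbatim}
import sympy as sp

z, q, z1, z2 = sp.symbols('z q z1 z2')

# Airy data: x(z) = z^2, omega_{0,1} = 2 z^2 dz, sigma(q) = -q, beta = 0;
# Bergman kernel B(z,u) = dz du/(z-u)^2.
K = sp.Rational(1, 2)*(1/(z - q) - 1/(z + q))/(4*q**2)  # recursion kernel

# stable splittings B(z1,q) B(z2,sigma(q)) + (z1 <-> z2); the pullback
# d(sigma(q)) = -dq produces the overall minus sign
rhs = -(1/(z1 - q)**2/(z2 + q)**2 + 1/(z2 - q)**2/(z1 + q)**2)

w03 = sp.residue(K*rhs, q, 0)
print(sp.simplify(w03))        # -> -1/(2*z**2*z1**2*z2**2)
\end{verbatim}

Up to the convention-dependent sign, this is the familiar $\omega_{0,3}\propto\frac{dz_1\,dz_2\,dz}{z_1^2z_2^2z^2}$ of the Airy curve. We now return to the blobbed extension.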
A family $\omega_{g,m}$ obeys BTR iff it fulfils \textit{abstract loop equations} \cite{Borot:2013lpa}: \begin{enumerate} \item $\omega_{g,m}$ fulfils the so-called \textit{linear loop equation} if \begin{align} \omega_{g,m+1}(u_1,...,u_m,z)+ \omega_{g,m+1}(u_1,...,u_m,\sigma_i(z))= \mathcal{O}(z-\beta_i)dz \label{lle} \end{align} is a holomorphic linear form for $z \to \beta_i$ with (at least) a simple zero at $\beta_i$. \item $\omega_{g,m}$ fulfils the \textit{quadratic loop equation} if \begin{align} Q^i_{g,m+1}&:=\omega_{g-1,m+2}(u_1,...,u_m,z,\sigma_i(z)) + \hspace*{-0.5cm} \sum_{\substack{g_1+g_2=g \\ I_1\uplus I_2=\{u_1,...,u_m\}}} \hspace*{-0.8cm} \omega_{g_1,|I_1|+1}(I_1,z) \omega_{g_2,|I_2|+1}(I_2,\sigma_i(z)) \nonumber \\[-2ex] &=\mathcal{O}((z-\beta_i)^2)(dz)^2 \label{qle} \end{align} is a holomorphic quadratic form with at least a double zero at $z \to \beta_i$. \end{enumerate} Although these formulae seem simple, a proof that the actual $\omega_{g,m}$ of a certain model fulfil the abstract loop equations may demand some sophisticated techniques. We list three models governed by BTR that were investigated in recent years: \begin{itemize} \item \textbf{Stuffed maps} \cite{Borot:2013fla}: The investigation of \textit{stuffed maps} arising from the multi-trace Hermitian one-matrix model -- not perfectly following the established theory of TR -- gave the motivation to formulate BTR in its full generality (two years later, \cite{Borot:2015hna}). \item \textbf{Tensor models} \cite{Bonzom:2020xaf}: Tensor models are the natural generalisation of matrix models and are now known to be covered by BTR, at least in the case of \textit{quartic melonic interactions}. \item \textbf{Orlov–Scherbin partition functions} \cite{Bychkov:2020yzy}: Using $n$-point differentials corresponding to Kadomtsev-Petviashvili tau functions of hypergeometric type (Orlov–Scherbin partition functions) that follow BTR, the authors were able to reprove previous results and also to establish new enumerative problems in the realm of Hurwitz numbers. \end{itemize} A fourth example is the quartic interacting quantum field theory defined by the measure (\ref{minlos-q-int}). \section{Solving the Model} \label{sec:solvingthemodel} \subsection{The setup} Moments of the measure $d\mu_{\mathrm{int}}(\phi)$ defined in (\ref{minlos-q-int}) come with an intricate substructure. They first decompose into connected functions (or cumulants) \begin{align} \int_{H_N} d\mu_{\mathrm{int}}(\phi)\;\phi_{k_1l_1}\cdots \phi_{k_nl_n} =\sum_{\substack{\text{partitions }\\ \mathcal{P} \text{ of } \{1,\dots,n\} }} \prod_{\text{blocks }\mathcal{B} \in \mathcal{P}} \Big\langle \prod_{i\in \mathcal{B}} \phi_{k_il_i} \Big\rangle_c. \label{partition} \end{align} Because of the invariance of (\ref{minlos-q-int}) under $\phi\mapsto -\phi$, the only contributions come from even $n$ and even block sizes $|\mathcal{B}|$. Take all $k_1,...,k_n$ pairwise different. Then it follows from (\ref{minlos-q}) that contributions to cumulants $\big\langle \phi_{k_1l_1} \cdots \phi_{k_nl_n} \big\rangle_c$ vanish unless the $l_i$ are a permutation of the $k_j$. Any permutation is a product of cycles, and after renaming matrix indices, only cumulants of the form \begin{align} &N^{2-b} G_{|k_1^1\dots k_{n_1}^1|\dots |k_1^b\dots k_{n_b}^b|} :=N^n \Big\langle \prod_{j=1}^b\prod_{i=1}^{n_j} \phi_{k_i^j k_{i+1}^j} \Big\rangle_c \label{eq:cumulants} \end{align} arise, where the index is understood cyclically: $k_{n_j+1}^j\equiv k_1^j$.
Finally, the $G_{\dots}$ come with a grading, which is called genus because it relates to the genus of Riemann surfaces: \begin{align} G_{|k_1^1\dots k_{n_1}^1|\dots |k_1^b\dots k_{n_b}^b|} =\sum_{g=0}^\infty N^{-2g} G^{(g)}_{|k_1^1\dots k_{n_1}^1|\dots |k_1^b\dots k_{n_b}^b|}. \label{genus-exp} \end{align} These $G^{(g)}_{\dots}$ are not independent. They are related by quantum equations of motion, called Dyson-Schwinger equations. Moreover, symmetries of the model give rise to a Ward-Takahashi identity \cite{Disertori:2006nq} \begin{align} 0=\sum_{p=1}^{N} \Big((E_k-E_l) \frac{\partial^2 }{\partial J_{kp} \partial J_{pl}} -J_{lp} \frac{\partial}{\partial J_{kp}}+ J_{pk} \frac{\partial}{\partial J_{pl}}\Big) \int_{H_N} d\mu_{\mathrm{int}}(\phi) \;e^{\mathrm{i} \mathrm{Tr}(\phi J)} . \label{WI-sum} \end{align} It breaks down to further relations between the $G^{(g)}_{\dots}$. All these relations together imply a remarkable pattern \cite{Grosse:2012uv}: The functions $G^{(g)}_{\dots}$ come with a partial order, i.e.\ either two (different) functions are independent, or precisely one is strictly smaller than the other. The relations respect this partial order: A function of interest depends only on finitely many smaller functions. The smallest function is the planar two-point function $G^{(0)}_{|ab|}$ which satisfies a closed non-linear equation \cite{Grosse:2009pa}. The non-linearity makes this equation hard to solve. The solution was eventually achieved with techniques from complex geometry. First, the equation extends to an equation for a holomorphic function $G^{(0)}(\zeta,\eta)$. Let $(e_1,...,e_d)$ be the pairwise different values in $E_1,...,E_N$, which arise with multiplicities $(r_1,...,r_d)$. To deal with the renormalisation problem in the limit $N,d\to \infty$, we have to rescale and shift these values to $e_k \mapsto Z(e_k+\frac{\mu_{bare}^2}{2})$. Then $G^{(0)}_{|ab|}=G^{(0)}(e_a,e_b)$ where $G^{(0)}(\zeta,\eta)$ satisfies the non-linear closed equation \begin{align} &(\zeta+\eta+\mu^2_{bare}) Z G^{(0)}(\zeta,\eta) \label{eq:Gcomplex} \\ &=1-\frac{\lambda}{N} \sum_{k=1}^{d}r_k \Big( ZG^{(0)}(\zeta,\eta) \;ZG^{(0)}(\zeta,e_k) - \frac{ZG^{(0)}(e_k,\eta) -ZG^{(0)}(\zeta,\eta) }{e_{k}-\zeta}\Big) . \nonumber \end{align} For finite $d$ one can safely set $\mu_{bare}^2=0=Z-1$. We already included $\mu_{bare}^2,Z$ in (\ref{eq:Gcomplex}) to prepare the limit $N,d\to \infty$ in which, depending on the growth rate of $r_k$, this equation will suffer from a divergence problem. One has to carefully adjust $Z(N,\lambda)$ and $\mu^2_{bare}(N,\lambda)$ to make $\lim_{N \to \infty}G^{(0)}(\zeta,\eta)$ well-defined. The key step to solve (\ref{eq:Gcomplex}) is a change of variables $\zeta\mapsto z=R^{-1}(\zeta)$ implemented by a biholomorphic mapping $R^{-1}$ depicted in Figure~\ref{fig:complexification}. \begin{figure}[h!] \centering \includegraphics[width= 0.99\textwidth]{complex2.pdf} \caption{Illustration of the change of variables: The biholomorphic map $R:\mathcal{U}\to \mathcal{V}$ with $R(\varepsilon_k)=e_k$ will later be enlarged to a ramified cover $R:\hat{\mathbb{C}}\to \hat{\mathbb{C}}$. Functions on $\mathcal{U}$ will meromorphically continue to the Riemann sphere $\hat{\mathbb{C}}=\mathbb{C}\cup\{\infty\}$. \label{fig:complexification}} \end{figure} For appropriately chosen preimages $R(\varepsilon_k)=e_k$ we introduce another holomorphic function $\mathcal{G}^{(0)}$ by $\mathcal{G}^{(0)}(z,w)=G^{(0)}(R(z),R(w))$.
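Before implementing this change of variables, one can check (\ref{eq:Gcomplex}) order by order in $\lambda$. The following Python (sympy) sketch of ours -- with the simplifying assumptions $\mu^2_{bare}=0=Z-1$, $N=d=3$ and unit multiplicities $r_k=1$ -- verifies that
\begin{align*}
G^{(0)}(\zeta,\eta)=\frac{1}{\zeta+\eta}-\frac{\lambda}{N(\zeta+\eta)^2}\sum_{k=1}^{d} r_k\Big(\frac{1}{\zeta+e_k}+\frac{1}{\eta+e_k}\Big)+\mathcal{O}(\lambda^2)
\end{align*}
solves the equation at first order:

\begin{verbatim}
import sympy as sp

lam, zeta, eta = sp.symbols('lambda zeta eta')
d = 3; N = d                       # finite matrices, unit multiplicities
e = sp.symbols('e1:4', positive=True)
r = [1, 1, 1]

def G(a, b):                       # first-order ansatz for G^(0)(a, b)
    corr = sum(rk*(1/(a + ek) + 1/(b + ek)) for rk, ek in zip(r, e))
    return 1/(a + b) - lam*corr/(N*(a + b)**2)

lhs = (zeta + eta)*G(zeta, eta)    # mu_bare = 0, Z = 1
rhs = 1 - lam/N*sum(
    rk*(G(zeta, eta)*G(zeta, ek) - (G(ek, eta) - G(zeta, eta))/(ek - zeta))
    for rk, ek in zip(r, e))

# the difference vanishes up to and including order lambda^1
print(sp.simplify(sp.series(sp.expand(lhs - rhs), lam, 0, 2).removeO()))
\end{verbatim}

At higher orders this brute-force expansion quickly becomes unwieldy; it is the change of variables that renders the problem solvable.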
We require that $R$ and $\mathcal{G}^{(0)}$ relate by \begin{align} R(z)+\mu^2_{bare}+\frac{\lambda}{N} \sum_{k=1}^{d} r_k Z\mathcal{G}^{(0)}(z,\varepsilon_k) +\frac{\lambda}{N}\sum_{k=1}^{d} \frac{r_k}{R(\varepsilon_k)-R(z)}=-R(-z). \label{fG-ansatz} \end{align} These steps turn (\ref{eq:Gcomplex}) into \begin{align} &(R(w)-R(-z)) Z\mathcal{G}^{(0)}(z,w) =1+\frac{\lambda}{N} \sum_{k=1}^{d}r_k \frac{Z\mathcal{G}^{(0)}(\varepsilon_k,w)}{R(\varepsilon_k)-R(z)} , \label{eq:fG} \end{align} which is a \emph{linear} equation for which a solution theory exists. It expresses $\mathcal{G}^{(0)}$ in terms of the not yet known function $R$. Inserting it into (\ref{fG-ansatz}) yields a complicated equation for $R$. The miracle is that relatively mild assumptions on $R$ allow one to solve this problem: \begin{theorem}[\cite{Schurmann:2019mzu}, building on \cite{Grosse:2019jnv}] \label{throm1} Let $\lambda,e_k>0$ and $\mu^2_{bare}=0=Z-1$ (absent renormalisation). Assume that there is a rational function $R:\hat{\mathbb{C}}\to \hat{\mathbb{C}}$ with \begin{enumerate} \item $R$ has degree $d+1$, is normalised to $R(\infty)=\infty$ and biholomorphically maps a domain $\mathcal{U} \subset \mathbb{C}$ to a neighbourhood $\mathcal{V}$ of a real interval that contains $e_1,\dots,e_d$. \item $R$ satisfies \eqref{fG-ansatz} with $\mathcal{G}^{(0)}$ the solution of \eqref{eq:fG}, where $z,w,\varepsilon_k\in \mathcal{U}$. \end{enumerate} Then the functions $R$ and $\mathcal{G}^{(0)}$ are uniquely identified as \begin{align} R(z)&=z-\frac{\lambda}{N} \sum_{k=1}^d \frac{\varrho_k}{\varepsilon_k+z}\;,\qquad R(\varepsilon_k)=e_k\;,\quad \varrho_k R'(\varepsilon_k)=r_k\;, \label{R} \\ \mathcal{G}^{(0)}(z,w)&=\frac{\displaystyle 1 -\frac{\lambda}{N} \sum_{k=1}^d \frac{r_k}{ (R(z)-R(\varepsilon_k))(R(\varepsilon_k)-R({-}w))} \prod_{j=1}^d \frac{ R(w){-}R({-}\widehat{\varepsilon_k}^j)}{ R(w)-R(\varepsilon_j)} }{R(w)-R(-z)}\;. \label{Gzw-final} \end{align} Here, the solutions of $R(v)=R(z)$ are denoted by $v\in\{z,\hat{z}^1,\dots,\hat{z}^d\}$ with $z\in \mathcal{U}$ when considering $R:\hat{\mathbb{C}}\to \hat{\mathbb{C}}$. The symmetry $\mathcal{G}^{(0)}(z,w)=\mathcal{G}^{(0)}(w,z)$ is automatic. \end{theorem} \noindent We discuss later the renormalisation problem $\mu_{bare}^2(N,\lambda)\neq 0 \neq Z(N,\lambda)-1$ in the limit $N\to \infty$. The change of variables (\ref{R}) identified in the solution (\ref{Gzw-final}) of the two-point function is the starting point for everything else. However, although the equations for all other $G_{..}$ are affine, they cannot be solved directly. It was understood in \cite{Branahl:2020yru} that one has first to introduce two further families of functions, as we will explain in the next subsection.
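To illustrate how the implicit data in \eqref{R} are determined in practice, here is a small numerical sketch of ours (hypothetical values $d=N=1$, $e_1=r_1=1$, $\lambda=0.1$) that solves $R(\varepsilon_1)=e_1$ and $\varrho_1 R'(\varepsilon_1)=r_1$ for $(\varepsilon_1,\varrho_1)$:

\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

lam, e1, r1 = 0.1, 1.0, 1.0                 # d = N = 1, one spectral value

R  = lambda z, eps, rho: z - lam*rho/(eps + z)      # ansatz (R) for d = 1
Rp = lambda z, eps, rho: 1 + lam*rho/(eps + z)**2   # its derivative R'(z)

def conditions(p):
    eps, rho = p
    return [R(eps, eps, rho) - e1,          # R(eps_1) = e_1
            rho*Rp(eps, eps, rho) - r1]     # rho_1 R'(eps_1) = r_1

eps1, rho1 = fsolve(conditions, [e1, r1])
print(eps1, rho1)   # eps_1 slightly above e_1, rho_1 slightly below r_1
\end{verbatim}

For larger $d$ the same root-finding problem is solved for all pairs $(\varepsilon_k,\varrho_k)$ simultaneously.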
\subsection{BTR of the Quartic Kontsevich Model}\label{Sec.BTR} To give an impression of how blobbed topological recursion is related to our model, we first introduce the aforementioned three families of functions, whose importance became clear during an attempt to solve the $2{+}2$-point function: \begin{itemize} \item the cumulants $G^{(g)}_{|k_1^1\dots k_{n_1}^1|\dots |k_1^b\dots k_{n_b}^b|}$ defined in (\ref{eq:cumulants}), (\ref{partition}) and (\ref{genus-exp}), called $(n_1{+}...{+}n_b)$-point functions of genus $g$; \item the \textit{generalised correlation functions} defined as derivatives (here we need $E_k=e_k$ and $r_k=1$; this restriction is later relaxed) \begin{align} T^{(g)}_{q_1,q_2,...,q_m\|k_1^1...k_{n_1}^1|k_1^2...k_{n_2}^2|...|k_1^b...k_{n_b}^b|} :=\frac{(-N)^m\partial^m}{\partial e_{q_1}\partial e_{q_2}...\partial e_{q_m}}G^{(g)}_{|k_1^1...k_{n_1}^1|k_1^2...k_{n_2}^2|...|k_1^b...k_{n_b}^b|}\;; \end{align} \item functions $\Omega^{(g)}_{q_1,...,q_m}$ recursively defined by \begin{align} \Omega^{(g)}_{q_1,...,q_m} &:= \frac{(-N)^{m-1}\partial^{m-1}\Omega^{(g)}_{q_1}}{ \partial e_{q_2}...\partial e_{q_{m}}} +\frac{\delta_{m,2}\delta_{g,0}}{(e_{q_1}-e_{q_2})^2}\qquad \text{for } m\geq 2 \label{eq:Omega-gm} \end{align} and $\Omega^{(g)}_{q} := \frac{1}{N}\sum_{k=1}^N G^{(g)}_{|qk|} +\frac{1}{N^2}G^{(g)}_{|q|q|}$. They are symmetric in their indices and will soon be related to \textit{meromorphic differentials which satisfy BTR}. \end{itemize} A straightforward extension gave birth to \textit{(generalised) Dyson-Schwinger equations} between these functions. They first extend to equations for holomorphic functions on several copies of $\mathcal{V}$ and then via the change of variables $R,R^{-1}$ to equations between meromorphic functions \begin{align} &\mathcal{G}^{(g)}(z_1^1,\dots, z_{n_1}^1|\dots |z_1^b,\dots z_{n_b}^b)\;,\quad \mathcal{T}^{(g)}(u_1,...,u_m\| z_1^1,\dots, z_{n_1}^1|\dots |z_1^b,\dots z_{n_b}^b|)\;, \nonumber \\ &\Omega^{(g)}_m(u_1,...,u_m) \label{families} \end{align} on several copies of $\hat{\mathbb{C}}$. It was shown in \cite{Branahl:2020yru} that these equations can be recursively solved in a triangular pattern of interwoven loop equations connecting these three families $(\mathcal{G}^{(g)},\mathcal{T}^{(g)},\Omega^{(g)}_m)$ of (\ref{families}), see Figure~\ref{fig:euler}. \begin{figure}[h!t] \centering \includegraphics[width= 0.99\textwidth]{dsestructure.pdf} \caption{Illustration of the interwoven solution procedure, ordered by $-\chi$. Theorem \ref{throm1} which simultaneously gives $ \Omega^{(0)}_1(z)=\frac{1}{N}\sum_{k}r_k\mathcal{G}^{(0)}(\varepsilon_k,z)$ and the two-point function $\mathcal{T}^{(0)}(\emptyset\|z,w) = \mathcal{G}^{(0)}(z,w)$ is the starting point. A generic box at position $(g,m+b)$ contains $\Omega_m^{(g)}(u_1,...,u_{m-1},z) \to \mathcal{T}^{(g)}(u_1,...,u_{m-1}\|z,w|)$ and $\mathcal{T}^{(g)}(u_1,...,u_{m-2}\|z|w|)$. \label{fig:euler}} \end{figure} The arrows represent very different difficulties. It is easy to express every next $\mathcal{T}^{(g)}$ in terms of previous $\Omega^{(g')}_{m}$, but the result is an extremely lengthy and complicated equation for the next $\Omega^{(g)}_{m}$ in terms of the previous $\Omega^{(g')}_{m'}$. Obtaining an $(n_1{+}...{+}n_{b})$-point function of genus $g$ from $\mathcal{T}^{(g')},\Omega^{(g')}_{m'}$ in all boxes with $g'\leq g$ and $m'+b'\leq b$ is also easy (unless one wants to make the symmetries manifest).
To our enormous surprise, the solution of the first of these very difficult equations for $\Omega^{(g)}_m$ with $2g+m-2\geq 0$ turned out to be ravishingly simple and structured. After the solution of $\Omega_m^{(g)}$ for $(g,m)\in\{(0,2),(0,3),(0,4),(1,1)\}$ in \cite{Branahl:2020yru} via the interwoven equations, it became nearly obvious that the meromorphic differentials $\omega_{g,n}$ defined by \begin{align} \omega_{g,n}(z_1,...,z_n):=\lambda^{2-2g-n}\Omega^{(g)}_{n}(z_1,...,z_n) dR(z_1)\cdots dR(z_n) \end{align} obey BTR. In this process the change of variables $R$ is again of central importance. It provides the spectral curve $(x:\hat{\mathbb{C}}\to \hat{\mathbb{C}}, \omega_{0,1}=ydx,\omega_{0,2})$ discussed in Sec.~\ref{sec:AlgebraicGeometry} with \begin{align}\label{specurve} x(z)=R(z)\;,\quad y(z)=-R(-z)\;,\quad \omega_{0,2}(u,z)=\frac{du\,dz}{(u-z)^2}+\frac{du\,dz}{(u+z)^2}\;. \end{align} We underline the appearance of some additional initial data in $\omega_{0,2}$, namely $\frac{du\,dz}{(u+z)^2}=-B(u,-z)$ (Bergman kernel with one changed sign). The next steps will consist in identifying structures and equations directly for the family $\omega_{g,n}$, avoiding the $\mathcal{T}^{(g)}$. This task was accomplished for the planar sector $\omega_{0,n}$ in \cite{Hock:2021tbl}. The symmetry of the spectral curve, $y(z)=-x(-z)$ and $\omega_{0,2}(u,z)=B(u,z)-B(u,-z)$, played a key r\^ole. This symmetry extends to a deep involution identity \begin{align} & \omega_{0,|I|+1}(I,q) +\omega_{0,|I|+1}(I,- q) \label{eq:flip-om} \\ &=-\sum_{s=2}^{|I|} \sum_{I_1\uplus ...\uplus I_s=I} \frac{1}{s} \Res\displaylimits_{z\to q} \Big( \frac{dR(-q) dR(z)}{(R(-z)-R(-q))^{s}} \prod_{j=1}^s \frac{\omega_{0,|I_j|+1}(I_j,z)}{dR(z)} \Big)\;. \nonumber \end{align} With considerable combinatorial effort it was possible to prove that this involution identity completely determines the meromorphic differentials $\omega_{0,n+1}$, leading to the following structure astonishingly similar to usual TR: \begin{theorem}[\cite{Hock:2021tbl}]\label{thmBTRplan} Assume that $z\mapsto \omega_{0,m+1}(u_1,...,u_m,z)$ is for $m\geq 2$ holomorphic at $z=-\beta_i$ and $z=u_k$ and has poles at most in points where the rhs of \eqref{eq:flip-om} has poles. Then equation \eqref{eq:flip-om} is for $I=\{u_1,...,u_m\}$ with $m\geq 2$ uniquely solved by \begin{align} \omega_{0,|I|+1}(I,z) &= \sum_{i=1}^r \Res\displaylimits_{q\to \beta_i}K_i(z,q) \sum_{I_1\uplus I_2=I} \omega_{0,|I_1|+1}(I_1,q)\omega_{0,|I_2|+1}(I_2,\sigma_i(q)) \label{sol:omega} \\ &-\sum_{k=1}^m d_{u_k} \Big[\Res\displaylimits_{q\to - u_k} \sum_{I_1\uplus I_2=I} \tilde{K}(z,q,u_k) d_{u_k}^{-1}\big( \omega_{0,|I_1|+1}(I_1,q) \omega_{0,|I_2|+1}(I_2,q)\big)\Big]\,. \nonumber \end{align} Here $\beta_1,...,\beta_r$ are the ramification points of the ramified covering $R:\hat{\mathbb{C}}\to \hat{\mathbb{C}}$ given in \eqref{R} and $\sigma_i\neq \mathrm{id}$ denotes the local Galois involution in the vicinity of $\beta_i$, i.e.\ $R(\sigma_i(z))=R(z)$, $\lim_{z\to \beta_i}\sigma_i(z)=\beta_i$. By $d_{u_k}$ we denote the exterior differential in $u_k$, which on 1-forms has a right inverse $d^{-1}_u \omega(u)=\int_{u'=\infty}^{u'=u}\omega(u')$. The recursion kernels are given by \begin{align} K_i(z,q)&:= \frac{\frac{1}{2} (\frac{dz}{z-q}-\frac{dz}{z-\sigma_i(q)}) }{dR(\sigma_i(q))(R(-\sigma_i(q))-R(-q))}\;,\qquad \nonumber \\ \tilde{K}(z,q,u)&:= \frac{\frac{1}{2}\big(\frac{dz}{z-q}-\frac{dz}{z+u}\big)}{dR(q) (R(u)-R(-q))}\;.
\label{eq:kernel} \end{align} The solution \eqref{sol:omega}$+$\eqref{eq:kernel} coincides with the solution of the interwoven loop equations depicted in Fig.~\ref{fig:euler}. The linear and quadratic loop equations \eqref{lle} and \eqref{qle} hold. The symmetry of the rhs of \eqref{eq:flip-om} under $q\mapsto -q$ is automatic. \end{theorem} To give an explicit formula for the holomorphic part as well, using the structure of topological recursion itself, is to the best of our knowledge exceptional. Higher genera $g$ are under current investigation and require again an arduous solution of the interwoven loop equations by hand. These non-planar symmetric meromorphic differentials have an additional pole of higher order at the fixed point $q=0$ of the involution $q \to -q$. Figure~\ref{othermm} illustrates similarities between the quartic Kontsevich model, the original Kontsevich model \cite{Kontsevich:1992ti}, and the Hermitian one- and two-matrix models \cite{Eynard:2016yaa,Eynard:2002kg,Chekhov:2006vd}. \begin{figure}[h!] \centering \includegraphics[width= 0.99\textwidth]{other_mm.png} \caption{Despite the completely different mathematical structures behind the models, we can reach the limits of both the Hermitian one- and two-matrix model. Moreover, we stress that the global Galois involution of the one-matrix model and the Kontsevich model turns into the global symmetry of the spectral curve belonging to their more intricate siblings. \label{othermm}} \end{figure} Although the modifications to go from one model to the other seem mild, the mathematics of the four models differs drastically. But they all fit into some flavour of topological recursion so that there is a fruitful exchange of methods. \subsection{Solution strategy of all quartic models} In the two previous subsections we assumed finite matrices, in particular a truncated energy spectrum at finite $E_N$. The relations to ordinary QFTs appear in a limit $N\to \infty$ and depend on the \textit{(spectral) dimension} encoded in the $E_k$. In this subsection we describe this limit process. Note that the limit $N\to \infty$ turns rational functions into transcendental functions so that most algebraic structures get lost. Future research projects will address the questions of whether parts of (B)TR survive and whether the surviving algebraic structures are compatible with renormalisation. This subsection addresses the simplest topological sector $(g,n)=(0,1)$. Introducing the measure $\varrho_0(t):=\frac{1}{N}\sum_{k=1}^dr_k\delta(t-e_k)$ we can turn the non-linear equation \eqref{eq:Gcomplex} into the integral equation \begin{align} &\bigg[\zeta+\eta+\mu^2_{bare}+\lambda\Xint-_0^{\Lambda^2}dt\,\varrho_0(t)\bigg(ZG^{(0)}(\zeta,t)+\frac{1}{t-\zeta}\bigg)\bigg]Z G^{(0)}(\zeta,\eta) \label{eq:GcomplexCont} \\ &=1+\lambda\Xint-_0^{\Lambda^2} dt\,\varrho_0(t) \Big( \frac{ZG^{(0)}(t,\eta) }{t-\zeta}\Big) , \nonumber \end{align} where $\Xint-$ is the Cauchy principal value and $\Lambda^2=\mathrm{max}_{k}(e_k)$. The limit $N\to \infty$ will be achieved in two steps. In the first step we interpret the measure $\varrho_0$ as a H\"older-continuous function. The renormalisation constants $Z,\mu^2_{bare}$ obtain a dependence on $\Lambda^2$ which in the second step is sent to $\infty$.
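Numerically, the principal-value integrals $\Xint-$ in \eqref{eq:GcomplexCont} can be handled by Cauchy-weight quadrature. A small Python sketch of ours with $\varrho_0=1$ checks the textbook value $\Xint-_0^{\Lambda^2}\frac{dt}{t-a}=\log\frac{\Lambda^2-a}{a}$:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

L2, a = 100.0, 3.0   # cutoff Lambda^2 and evaluation point, 0 < a < Lambda^2
# p.v. integral of 1/(t - a) over [0, Lambda^2] via scipy's Cauchy weight
pv, _ = quad(lambda t: 1.0, 0, L2, weight='cauchy', wvar=a)
print(pv, np.log((L2 - a)/a))     # the two values agree
\end{verbatim}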
Furthermore, the spectral dimension $D$ is also provided by an integral representation depending on the asymptotics of $\varrho_0(t)$ for $\Lambda^2\to \infty$: it is the smallest $D$ such that the integral \begin{align}\label{specdim} \sum_{k=1}^{\infty} E_k^{-D/2-\epsilon} =\lim_{\Lambda^2\to\infty}\int_0^{\Lambda^2} dt\, \frac{\varrho_0(t)}{(1+t)^{D/2+\epsilon}} \end{align} converges for all $\epsilon>0$. \begin{example} \begin{itemize} \item For an asymptotically constant measure $\varrho_0(t)\sim \mathrm{const}$, the spectral dimension becomes $D=2$. The integral on the lhs of \eqref{eq:GcomplexCont} diverges logarithmically in $\Lambda^2$, which can be absorbed by $\mu^2_{bare}(\Lambda^2)$. The field renormalisation is set to $Z=1$. \item For an asymptotically linear measure $\varrho_0(t)\sim t$, the spectral dimension becomes $D=4$. The renormalisation constants $\mu^2_{bare}(\Lambda^2)$ and $Z(\Lambda^2)$ have to be adapted carefully such that $\lim_{\Lambda^2 \to\infty}G^{(0)}(\zeta,\eta)$ converges. \item For a measure with asymptotic behaviour $\varrho_0(t)\sim t^a$ and $a\geq 2$, the spectral dimension becomes $D\geq 6$. In this case, the model is not renormalisable anymore. \end{itemize} \end{example} Assuming that the expression in the square brackets in \eqref{eq:GcomplexCont} is known, the powerful solution theory of singular integral equations \cite{Tricomi85} provides the explicit expression \cite{Grosse:2012uv} \begin{align*} ZG^{(0)}(a,b)=\frac{e^{\mathcal{H}^\Lambda_a[\tau_b(\bullet)]}\sin\tau_b(a)}{ \lambda \pi \varrho_0(a)}\qquad a,b\in [0,\Lambda^2], \end{align*} where $\mathcal{H}^\Lambda_a[f(\bullet)]:=\frac{1}{\pi}\Xint-_{0}^{\Lambda^2}\frac{dt \,f(t)}{t-a}$ is the finite Hilbert transform. Inserting this ansatz into (\ref{eq:GcomplexCont}) gives a consistency equation \cite{Panzer:2018tvy,Grosse:2019jnv} for the angle function $\tau_a:(0,\Lambda^2)\to [0,\pi]$ for $\lambda>0$ and $\tau_a:(0,\Lambda^2)\to [-\pi,0]$ for $\lambda<0$: \begin{align}\label{selfcont} \frac{\cot \tau_a(p)}{\lambda \pi \varrho_0(p)}=a+p+\mu^2_{bare}(\Lambda^2) +\lambda\pi \mathcal{H}_p^\Lambda [\varrho_0(\bullet)]+\frac{1}{\pi}\int_0^{\Lambda^2}dt\, \tau_p(t). \end{align} The solution of the angle function was first found in \cite{Panzer:2018tvy} for the special case $\varrho_0(t)=1$ and generalised for any H\"older-continuous measure with spectral dimension $D<6$ in \cite{Grosse:2019jnv}. First, we need the following implicitly defined construction: \begin{definition}\label{defR} Let $D\in \{0,2,4\}$ (otherwise take $2\lfloor \frac{D}{2}\rfloor$) and $\mu^2>0$. Define implicitly the complex function $R_D(z)$ and the deformed measure $\varrho_\lambda(t)$ by the system of equations \begin{align*} R_D(z)=z-\lambda (-z)^{\frac{D}{2}}\int_{\nu_D}^{\Lambda_D^2} \frac{dt\,\varrho_\lambda(t)}{(t+\mu^2)^{\frac{D}{2}}(t+\mu^2+z)}\qquad \text{and}\qquad \varrho_\lambda(t)=\varrho_0(R_D(t)), \end{align*} where $R_D(\Lambda^2_D)=\Lambda^2$ and $R_D(\nu_D)=0$. The limit $\Lambda^2\to \infty$ converges. \end{definition} \noindent For general $\varrho_0$, this implicitly defined system of equations cannot be solved exactly. We will give in Sec. \ref{sec.Moyal} two examples corresponding to the 2- and 4-dimensional Moyal space, where $R_D$ and $\varrho_\lambda$ can be found explicitly.
Nevertheless, a formal expansion in $\lambda$ of $R_D$ and $\varrho_\lambda$ can be achieved recursively already in the general case, starting with \begin{align} \varrho_\lambda(t)&=\varrho_0(t)+\mathcal{O}(\lambda),\quad \nonumber \\ R_D(z)&=z-\lambda (-z)^{\frac{D}{2}}\int_{0}^{\Lambda^2}\frac{dt\,\varrho_0(t)}{(t+\overline{\mu}^2)^{\frac{D}{2}}(t+\overline{\mu}^2+z)}+\mathcal{O}(\lambda^2), \label{1stroh} \end{align} where $\mu^2=\overline{\mu}^2+\mathcal{O}(\lambda)$. The expression is convergent for $\Lambda^2\to\infty$. One can prove that the complex function $R_D(z)$ is biholomorphic from the right half plane $\{z\in\mathbb{C}:\mathrm{Re}(z)>-\frac{\mu^2}{2}\}$ onto a domain $U_D\subset\mathbb{C}$ containing $[0,\infty)$ for real $\lambda$ \cite{Grosse:2019jnv}. We define on this domain $U_D$ the inverse $R^{-1}_D$ (which is not globally defined on $\mathbb{C}$). Then, the solution of the angle function is obtained by \begin{theorem}[\cite{Grosse:2019jnv}]\label{Thm:tausolve} Let $I_D:U_D\setminus[0,\Lambda^2]\ni w\mapsto I_D(w)\in \mathbb{C}$ be defined by \begin{align*} I_D(w):=-R_D(-\mu^2-R^{-1}_D(w)). \end{align*} The consistency relation \eqref{selfcont} is solved by \begin{align*} \tau_a(p)=\lim_{\epsilon\to 0}\mathrm{Im} \big(\log(a+I_D(p+\mathrm{i}\epsilon))\big), \end{align*} where $\mu^2_{bare}(\Lambda^2)$ is related to $\mu^2$ by \begin{align*} &D=0:&& \mu^2_{bare}(\Lambda^2)=\mu^2\;,\\ &D=2:&& \mu^2_{bare}(\Lambda^2)=\mu^2-2\lambda\int_{R_D^{-1}(0)}^{R_D^{-1}(\Lambda^2)}\frac{dt\,\varrho_\lambda(t)}{t+\mu^2}\;,\\ &D=4:&& \mu^2_{bare}(\Lambda^2)=\mu^2-\lambda\mu^2\int_{R_D^{-1}(0)}^{R_D^{-1}(\Lambda^2)}\frac{dt\,\varrho_\lambda(t)}{(t+\mu^2)^2}-2\lambda\int_{R_D^{-1}(0)}^{R_D^{-1}(\Lambda^2)}\frac{dt\,\varrho_\lambda(t)}{t+\mu^2}\;. \end{align*} The angle function $\tau_a(p)$ converges in the limit $\Lambda^2\to\infty$. \end{theorem}\noindent Equivalent to Theorem \ref{Thm:tausolve} is the statement that the expression in the square brackets of \eqref{eq:GcomplexCont} is equal to $I_D$, that is \begin{align*} \zeta+\mu^2_{bare}+\lambda\int_0^{\Lambda^2}dt\,\varrho_0(t)\bigg(ZG^{(0)}(\zeta,t)+\frac{1}{t-\zeta}\bigg)=I_D(\zeta)=-R_D(-\mu^2-R^{-1}_D(\zeta)), \end{align*} for $\zeta\in U_D\setminus[0,\Lambda^2]$. The proof of the theorem is essentially achieved by inserting the solution of $\tau_a(p)$ into the rhs of the consistency relation \eqref{selfcont}, using the system of implicitly defined functions from Definition \ref{defR} and applying the Lagrange-B\"urmann inversion theorem, a generalisation of the Lagrange inversion theorem. We refer to \cite{Grosse:2019jnv} for details. In conclusion, Theorem \ref{Thm:tausolve} together with Definition \ref{defR} provides the solution of the initial topology $(g,n)=(0,1)$, generalising the first part of Theorem \ref{throm1} to higher dimensions.
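As a cross-check of \eqref{1stroh} for $D=2$, $\varrho_0=1$ and $\overline{\mu}^2=1$, the first-order integral can be done in closed form. A quick symbolic sketch of ours in Python (sympy):

\begin{verbatim}
import sympy as sp

z, t = sp.symbols('z t', positive=True)
# integral in (1stroh) for D = 2, rho_0 = 1, mu_bar^2 = 1
I = sp.integrate(1/((t + 1)*(t + 1 + z)), (t, 0, sp.oo))
print(sp.simplify(I))                      # -> log(z + 1)/z
R2 = z - sp.Symbol('lambda')*(-z)*I        # first order of R_2(z)
print(sp.simplify(R2))                     # -> z + lambda*log(z + 1)
\end{verbatim}

Hence $R_2(z)=z+\lambda\log(1+z)+\mathcal{O}(\lambda^2)$, anticipating the exact result on the 2-dimensional Moyal space in Sec.~\ref{sec.Moyal}.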
The second part of Theorem \ref{throm1}, i.e.\ the explicit expression for the 2-point correlation function, extends as follows to $N\to \infty$: \begin{theorem}[\cite{Grosse:2019jnv}]\label{Thm:2P} The renormalised 2-point function of the $\lambda\phi^4$ matricial QFT-model in $D$ dimensions is given by \begin{align*} G(a,b):=\frac{\mu^{2 \delta_{4,D}} \exp (N_D(a,b))}{\mu^2+a+b} \end{align*} where \begin{align*} N_D(a,b)=\frac{1}{2\pi \mathrm{i}}\int_{-\infty}^\infty& dt\bigg\{\log\big(a-R_D(-\tfrac{\mu^2}{2}-\mathrm{i}t)\big)\frac{d}{dt}\log\big(b-R_D(-\tfrac{\mu^2}{2}+\mathrm{i}t)\big)\\ &-\log\big(a-(-\tfrac{\mu^2}{2}-\mathrm{i}t)\big)\frac{d}{dt}\log\big(b-(-\tfrac{\mu^2}{2}+\mathrm{i}t)\big)\\ &-\delta_{4,D}\log\big(-R_D(-\tfrac{\mu^2}{2}-\mathrm{i}t)\big)\frac{d}{dt}\log\big(-R_D(-\tfrac{\mu^2}{2}+\mathrm{i}t)\big)\\ &+\delta_{4,D}\log\big(-(-\tfrac{\mu^2}{2}-\mathrm{i}t)\big)\frac{d}{dt}\log\big(-(-\tfrac{\mu^2}{2}+\mathrm{i}t)\big) \bigg\} \end{align*} and $R_D$ is built via Definition \ref{defR}. For $D\in \{0,2\}$ and for some restricted cases in $D=4$ (including the $D=4$ Moyal space), there is an alternative representation \begin{align}\label{2Pasy} &G(a,b) \\ &=\frac{(\mu^2{+}a{+}b)\exp\Bigg\{ \mbox{\small$\displaystyle\frac{1}{2\pi \mathrm{i}} \int_{-\infty}^\infty dt \log\bigg(\frac{a-R_D(-\frac{\mu^2}{2}{-}\mathrm{i} t)}{a-(-\frac{\mu^2}{2}{-}\mathrm{i} t)}\bigg)\frac{d}{dt}\log\bigg(\frac{b-R_D(-\frac{\mu^2}{2}{+}\mathrm{i} t)}{ b-(-\frac{\mu^2}{2}{+}\mathrm{i} t)}\bigg)$}\Bigg\}}{(\mu^2+b+R_D^{-1}(a))(\mu^2+a+R_D^{-1}(b))}.\nonumber \end{align} \end{theorem} \noindent We emphasise that $G(a,b)$, as a 2-point function, is by definition symmetric under $a\leftrightarrow b$. This symmetry is revealed by integration by parts within the exponential in \eqref{2Pasy}. Theorem \ref{Thm:2P} provides an exact formula for the 2-point function on an open neighbourhood of $\lambda=0$, on which a convergent expansion exists. The following example gives the first-order contribution: \begin{example}\label{ex:2P} Let $\mu^2=1$ for convenience. The first-order expansion of $\varrho_\lambda$ and $R_D$ was already given in \eqref{1stroh}. Correspondingly, we get for the inverse \begin{align*} R_D^{-1}(a)=a+\lambda(-a)^{\frac{D}{2}}\int_{0}^{\Lambda^2}\frac{dt\,\varrho_0(t)}{(t+1)^{\frac{D}{2}}(t+1+a)}+\mathcal{O}(\lambda^2). \end{align*} Expanding \eqref{2Pasy} of Theorem \ref{Thm:2P}, where the exponential does not contribute at first order since each logarithm starts at order $\lambda$, yields the convergent first-order expression \begin{align*} G(a,b)=\frac{1}{1+a+b}-\frac{\lambda}{(1+a+b)^2}\bigg(&(-a)^{\frac{D}{2}}\int_{0}^{\infty}\frac{dt\,\varrho_0(t)}{(t+1)^{\frac{D}{2}}(t+1+a)}\\ &+(-b)^{\frac{D}{2}}\int_{0}^{\infty}\frac{dt\,\varrho_0(t)}{(t+1)^{\frac{D}{2}}(t+1+b)}\bigg)+\mathcal{O}(\lambda^2). \end{align*} \end{example} \noindent The reader may also compute the second-order contribution in $\lambda$, where an iterated integral occurs with the canonical measure $dt\, \varrho_0(t)$. For the exponential, the contour of the integral should be deformed, and the integrand expands similarly to the computation carried out in \cite[Sec. 7]{Panzer:2018tvy}. The structure of the solution is still very abstract. The next section will convey more insight into the implicit definitions of $\varrho_\lambda$ and $R_D$ and their structure. This implicit system of equations will be solved for two examples, the 2- and 4-dimensional Moyal space.
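For $D=2$, $\varrho_0=1$ and $\mu^2=1$, the integrals in Example~\ref{ex:2P} reduce to logarithms. A short numerical sketch of ours (sample values chosen inside the convergence region) confirms this:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lam, a, b = 0.1, 0.5, 2.0

def I(x):   # (-x)^{D/2} * integral of Example ex:2P for D = 2
    val, _ = quad(lambda t: 1.0/((t + 1)*(t + 1 + x)), 0, np.inf)
    return -x*val

G_first = 1/(1 + a + b) - lam/(1 + a + b)**2*(I(a) + I(b))
closed  = 1/(1 + a + b) + lam*(np.log1p(a) + np.log1p(b))/(1 + a + b)**2
print(G_first, closed)      # the two numbers coincide
\end{verbatim}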
\section{Scalar QFT on Moyal space} \label{sec.Moyal} Given a real skew-symmetric $D\times D$-matrix $\Theta$, the associative but noncommutative Moyal product between Schwartz functions $f,g\in \mathcal{S}(\mathbb{R}^D)$ is defined by \begin{align} (f\star g)(x)=\int_{\mathbb{R}^D\times \mathbb{R}^D} \frac{dy\,dk}{(2\pi)^D} \;f(x+\tfrac{1}{2}\Theta k) g(x+y) e^{\mathrm{i}\langle k,y\rangle}. \end{align} It was understood in \cite{Grosse:2003aj,Grosse:2004yu} that the scalar QFT on Moyal space which arises from the action functional \begin{align} S(\phi) &:= \int_{\mathbb{R}^D} \frac{dx}{(8\pi)^{D/2}} \Big( \frac{1}{2} \phi \star (-\Delta +4\Omega^2 \|\Theta^{-1}x\|^2+\mu^2) \phi +\frac{\lambda}{4} \phi\star \phi\star\phi \star \phi\Big)(x) \label{GW-action} \end{align} is perturbatively renormalisable in dimensions $D=2$ and $D=4$. It was also noticed \cite{Grosse:2004by} that the renormalisation group (RG) flow of the effective coupling constant (in $D=4$, lowest $\lambda$-order) is bounded and that $\Omega=1$ is an RG fixed point. Further investigations therefore focused on $\Omega=1$ \cite{Disertori:2006uy}, for which the RG-flow of the coupling constant in $D=4$ was proved to be bounded at any order in perturbation theory \cite{Disertori:2006nq}. Methods developed in the proofs of these results were decisive in the derivation \cite{Grosse:2009pa, Grosse:2012uv} of the non-linear equation \eqref{eq:Gcomplex} we started with. \subsection{$D=2$} In two dimensions we have $\Theta=\left(\begin{smallmatrix} 0 & \theta \\ -\theta & 0\end{smallmatrix}\right)$. We introduce a family $(f_{kl})_{k,l\in\mathbb{N}}$ of Schwartz functions by \begin{align} f_{kl}(x_1,x_2)=2(-1)^k \sqrt{\frac{k!}{l!}} (x_1+\mathrm{i}x_2)^{l-k} L^{l-k}_k\Big(\frac{2x_1^2+2x_2^2}{\theta}\Big) e^{-\frac{x_1^2+x_2^2}{\theta}}, \end{align} where $L^\alpha_k$ are associated Laguerre polynomials. It is straightforward to prove \begin{align} \overline{f_{kl}}&=f_{lk}\;,\qquad f_{kl}\star f_{mn}=\delta_{lm} f_{kn}\;,\qquad \int_{\mathbb{R}^2} dx\;f_{kl}(x)=2\pi \theta \delta_{kl}\;, \nonumber \\ & (-\partial_{x_1}^2-\partial_{x_2}^2+ \frac{4}{\theta^2} (x_1^2+x_2^2)) f_{kl}(x_1,x_2)= \frac{4}{\theta} (k+l+1) f_{kl}(x_1,x_2). \end{align} Therefore, expanding $\phi(x)=\sum_{k,l=0}^\infty \phi_{kl} f_{kl}(x)$, we turn \eqref{GW-action} for $D=2$ into $S(\phi)=\lim_{N\to \infty} S_N(\phi)$ with \begin{align} S_N(\phi)=\frac{\theta}{4}\Big( \sum_{k,l=0}^{N-1}\Big( \frac{4}{\theta}k +\frac{\mu^2}{2}+\frac{2}{\theta}\Big)\phi_{kl}\phi_{lk} +\frac{\lambda}{4} \sum_{k,l,m,n=0}^{N-1} \phi_{kl}\phi_{lm}\phi_{mn}\phi_{nk}\Big). \end{align} Setting here $\frac{\theta}{4}=\frac{N}{\Lambda^2}$ and $E_k=\Lambda^2 \frac{k}{N}+ \frac{\mu^2}{2}+\frac{\Lambda^2}{2N}$ we recover the Minlos measure (\ref{minlos-q-int}) as \begin{align} d\mu_{int}(\phi)= \frac{1}{\mathcal{Z}} \exp(-\Lambda^2 S_N(\phi)) \prod_{k=0}^{N-1}d\phi_{kk} \prod_{k=1}^{N-1}\prod_{l=0}^k d \mathrm{Re}\phi_{kl} \, d \mathrm{Im}\phi_{kl}\;. \end{align} After a shift by the lowest spectral value $E_0$, which is absorbed in the bare mass, we identify $e_k=\Lambda^2 \frac{k}{N}$ and $r_k=1$ in the spectral measure $\varrho_0(t)=\frac{1}{N}\sum_{k=0}^{N-1} \delta(t-e_k)$. For $N\to \infty$ this converges, in the sense of Riemann sums, to the characteristic function on $[0,\Lambda^2]$.
Compared with Definition \ref{defR} we recover $D=2$ and obtain for the function $R_2$ in the limit $\Lambda^2\to \infty$ \begin{align} R_2(z)=z+\lambda z\int_0^\infty\frac{dt}{(t+\mu^2)(t+\mu^2+z)}=z+\lambda \log\Big(1+\frac{z}{\mu^2}\Big). \end{align} The function $R_2$ is holomorphic on $\mathbb{C}\setminus]-\infty,-\mu^2]$. It can be inverted there in terms of the Lambert-$W$ function defined by $W(z)e^{W(z)}=z$ to \begin{align*} R^{-1}_2(z)=\lambda W\Big(\frac{\mu^2}{\lambda}e^{\frac{z+\mu^2}{\lambda}}\Big)-\mu^2. \end{align*} Similar to the logarithm, the Lambert-$W$ function has infinitely many branches $W_k$ labelled by $k\in \mathbb{Z}$, where the inverse $R^{-1}$ is obtained by the principal branch $W_0$. Following \cite{Panzer:2018tvy}, this inverse admits a convergent expansion \begin{align*} R^{-1}_2(z)=\sum_{n=1}^\infty \frac{(-\lambda)^n}{n!}\frac{d^{n-1}}{dz^{n-1}}\log\Big(1+\frac{z}{\mu^2}\Big)^n \end{align*} consisting only of powers of the logarithm. Carrying out higher order computations on the $D=2$ Moyal plane shows that further transcendental functions occur which are not representable via powers of logarithms. These functions are generated by the integral inside the exponential of \eqref{2Pasy}, they are essentially in the class of Nielsen polylogarithms (see \cite{Panzer:2018tvy} for more detials). Applying the construction of Sec. \ref{Sec.BTR}, a completely different structure of the spectral curve is revealed, which is built of \begin{align*} x(z)=z+\lambda \log\Big(1+\frac{z}{\mu^2}\Big),\qquad y(z)=-x(-z-\mu^2)=z+\mu^2-\lambda\log\Big(-\frac{z}{\mu^2}\Big). \end{align*} This curve is no longer of algebraic nature. The continuum limit from an algebraic curve (as in \eqref{specurve}) to a transcendental spectral curve is still rather mysterious. The number of branches of $R^{-1}$ tends in the limit to infinity, where the number of ramification points increases as well. However, these ramification points accumulate in the continuum to a single ramification point $\beta$ at \begin{align*} x'(\beta)=R'_2(\beta)=1+\frac{\lambda}{\mu^2+\beta}=0 \quad\Rightarrow\quad \beta=-\mu^2-\lambda. \end{align*} In the limit $\lambda\to 0$, the ramification point $\beta$ approaches the singularity of the logarithm, where the full construction becomes meaningless. However for positive $\lambda$, we may ask whether the model is still governed by the universal structure of Theorem \ref{thmBTRplan}. This would imply rationality for all $\Omega^{(0)}_n(z_1,...,z_n)$ in $z_i$. Inserting $z_i=R_2^{-1}(x_i)$, we could verify that the $\Omega^{(0)}_n$ are expanded only into powers of logarithms, and therefore remain in a simple class of functions. Whereas, correlation functions in general are generated by generalisations of polylogarithms, like Nielsen polylogarithms. The local Galois involution becomes fairly easy to handle: \begin{lemma} Let $\mathcal{V}:= \{z\in \mathbb{C}\;|~|z-\beta| <\lambda\}$ and $\tilde z(z)=\frac{\exp[(1+z)/\lambda](1+z)}{\lambda}$ with $\tilde z(\beta)=-1/e$, the threefold branch point. Then the local Galois involution $\sigma_{2}(z) $ with $R_2(z)=R_2(\sigma_{2}(z))$ and fixed point $\sigma_2(\beta)=\beta$ is piecewise defined within three branches of the Lambert W-function as \begin{align*} &\sigma_{2}(z) = -1+\lambda W_k(\tilde z) \\ &\text{with } \begin{cases} k=-1 & \text{if $\{\mathrm{Re}(\tilde z)\geq -1/e ,|\mathrm{Im}(\tilde z)| \cot [|\mathrm{Im}(\tilde z)|] \leq \mathrm{Re}(\tilde z) \}$}\\ k=1 & \text{c.c. 
of the $k=-1$ sector } \\ k=0 & \text{else} \end{cases} \end{align*} \end{lemma} Figure~\ref{galois2d} illustrates this local Galois involution. \begin{figure}[h!] \centering \includegraphics[width= 0.99\textwidth]{2d_moyal.pdf} \caption{Symbolic illustration of the local Galois involution $\sigma(z)$ (red), which can be defined piecewise using the three branches of the Lambert $W$-function $W_{0,\pm 1}$. The definition as Galois involution holds for $ |z-\beta|<\lambda$, giving a concrete meaning to the term \textit{local} involution. \label{galois2d}} \end{figure} \subsection{$D=4$} By a transformation of variables one can achieve $\Theta=\left(\begin{smallmatrix} 0 & \theta \\ -\theta & 0\end{smallmatrix}\right)\otimes \mathbb{I}_{2\times 2}$. We expand $\phi(x_1,x_2,x_3,x_4)=\sum_{k_1,l_1,k_2,l_2=0}^\infty \phi_{\substack{k_1l_1\\k_2l_2}} f_{k_1l_1}(x_1,x_2)f_{k_2l_2}(x_3,x_4)$. Using the Cantor bijection $P\binom{k_1}{k_2} := k_2+\frac{1}{2}(k_1+k_2 )(k_1+k_2+1)$ between $\mathbb{N}^2$ and $\mathbb{N}$ we can map $\phi_{\substack{k_1l_1\\k_2l_2}}$ to standard matrix elements $\phi_{kl}$. Setting $|k|:=k_1+k_2$ if $P\binom{k_1}{k_2}=k$, the action functional \eqref{GW-action} takes for $D=4$ the form $S(\phi)=\lim_{N\to \infty} S_N(\phi)$ with \begin{align*} S_N(\phi)=\Big(\frac{\theta}{4}\Big)^2\Bigg( \sum_{\substack{k,l=0 \\ |k|,|l|< \sqrt{N}}}^\infty \Big(\frac{4}{\theta} |k| +\frac{\mu^2}{2}+\frac{4}{\theta}\Big) \phi_{kl}\phi_{lk} +\frac{\lambda}{4} \sum_{\substack{k,l,m,n=0 \\ |k|,|l|,|m|,|n|< \sqrt{N}}}^\infty \hspace*{-1ex} \phi_{kl}\phi_{lm}\phi_{mn}\phi_{nk} \Bigg). \end{align*} There are $n+1$ natural numbers $k$ with $|k|=n$ and $\frac{(n+1)(n+2)}{2}$ natural numbers $k$ with $|k|\leq n$. Setting $\frac{\theta}{4}=\frac{\sqrt{N}}{\Lambda^2}$ and $e_n=\Lambda^2 \frac{n}{\sqrt{N}}+ \frac{\mu^2}{2}+\frac{\Lambda^2}{\sqrt{N}}$ we recover the Minlos measure (\ref{minlos-q-int}) as \begin{align} d\mu_{int}(\phi)= \frac{1}{\mathcal{Z}} \exp(-\Lambda^4 S_N(\phi)) d\phi \end{align} where the spectral values $e_n$ have multiplicity $r_n=n+1$. After a shift by the lowest spectral value $e_0$, which is absorbed in the bare mass, the spectral measure $\varrho_0(t)=\frac{1}{N}\sum_{n=0}^{\sqrt{N}-1} r_n \delta(t-e_n)$ converges for $N\to \infty$, in the sense of Riemann sums, to $t\chi_{[0,\Lambda^2]}(t)$, where $\chi_{[0,\Lambda^2]}$ is the characteristic function of $[0,\Lambda^2]$. Compared with Definition \ref{defR} we recover $D=4$ and obtain the function $R_4$ in the limit $\Lambda^2\to \infty$ as \begin{align}\label{linInt} R_4(z)=z-\lambda z^2\int_0^\infty \frac{dt\, R_4(t)}{(t+\mu^2)^2(t+\mu^2+z)}. \end{align} The leading-order expansion is easily computed: \begin{align*} R_4(z)=z+\lambda \bigg( z-\mu^2\Big(1+\frac{z}{\mu^2}\Big)\log\Big(1+\frac{z}{\mu^2}\Big)\bigg)+\mathcal{O}(\lambda^2). \end{align*}
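This first order can be reproduced symbolically by inserting $R_4(t)\approx t$ into the right-hand side of \eqref{linInt}; a minimal sketch (our own illustration, assuming Python with \texttt{sympy} and setting $\mu^2=1$):
\begin{verbatim}
import sympy as sp

lam = sp.Symbol('lambda', positive=True)
t = sp.Symbol('t', positive=True)
z = sp.Symbol('z', positive=True)

# first order in lambda: insert R_4(t) = t + O(lambda) into the
# right-hand side of the integral equation (with mu^2 = 1)
integral = sp.integrate(t / ((t + 1)**2 * (t + 1 + z)), (t, 0, sp.oo))
first_order = z - lam * z**2 * integral

expected = z + lam * (z - (1 + z) * sp.log(1 + z))
print(sp.simplify(first_order - expected))  # expected: 0
\end{verbatim}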
From this expansion one would expect that $R_4(z)$ has linear asymptotics for $z$ tending to infinity. However, this is not the case: the asymptotic behaviour only becomes visible after resummation of all orders in $\lambda$, which is given in terms of a Gau\ss{}ian hypergeometric function: \begin{theorem}[\cite{Grosse:2019qps}] The linear integral equation \eqref{linInt} is solved by \begin{align} R_4(z)=z \;_2F_1\Big(\genfrac{}{}{0pt}{}{ \alpha_\lambda,\;1-\alpha_\lambda}{2}\Big|-\frac{z}{\mu^2}\Big),\quad \text{where } ~ \alpha_\lambda:=\begin{cases} \frac{\arcsin(\lambda\pi)}{\pi} & \text{for }|\lambda|\leq \frac{1}{\pi} \;,\\ \frac{1}{2} +\mathrm{i} \frac{\mathrm{arcosh}(\lambda\pi)}{\pi} & \text{for } \lambda \geq \frac{1}{\pi}\;. \end{cases} \end{align} \end{theorem}\noindent The proof is obtained by inserting the result into the linear integral equation and using identities of the Gaussian hypergeometric function. The important fact is that the linear dependence on $\lambda$ in the integral equation \eqref{linInt} is packed into a highly nonlinear dependence, via the $\arcsin$-function, in the coefficients of the hypergeometric function. As a consequence, $\varrho_\lambda$ and $\varrho_0$ show different asymptotic behaviour. It is well-known that the hypergeometric function behaves like \begin{align*} \,_2F_1\Big(\genfrac{}{}{0pt}{}{ a,\;1-a}{2}\Big|-x\Big) \stackrel{x\to \infty}{\sim} \frac{1}{x^a}\quad \text{for } |a|<\frac{1}{2}. \end{align*} Together with the definition of the spectral dimension \eqref{specdim}, we conclude: \begin{corollary}[\cite{Grosse:2019qps}] For $|\lambda|<\frac{1}{\pi}$, the deformed measure $\varrho_\lambda=R_4$ of four-dimensional Moyal space has the spectral dimension $D=4-2\frac{\arcsin(\lambda \pi)}{\pi}$. \end{corollary}\noindent On the four-dimensional Moyal space the theory thus shows a dimension drop to an effective spectral dimension, related to an effective spectral measure; this is revealed only through knowledge of the exact solution. From a quantum field theoretical perspective, this dimension drop is the most important result. It means \emph{that the $\lambda\phi^{*4}$-QFT model on 4D Moyal space is non-trivial}, i.e.\ consistent over any (energy) scale $\Lambda^2\to \infty$. If $R_4(z)\propto z$ held, as suggested by the perturbative expansion, then the integral in \eqref{linInt} would have to be restricted to $[0,\Lambda^2]$ (otherwise $R_4^{-1}$ would not be defined on $\mathbb{R}_+$). We recall that the Landau ghost problem \cite{Landau:1954??}, or triviality, is a severe threat to quantum field theory. It almost killed renormalisation theory in the 1960s, before the discovery of asymptotic freedom rescued non-Abelian Yang-Mills theories. But these theories are complicated; they require a non-perturbative treatment to deal with confinement. A simple 4D QFT-model without a triviality problem was not known so far. For the $\lambda\phi^4$-model, triviality was proved in $D=4+\epsilon$ dimensions \cite{Aizenman:1981du, Frohlich:1982tw} in the 1980s. (Non-)triviality in $D=4$ dimensions remained an open problem for almost 40 years; only recently Aizenman and Duminil-Copin achieved the proof \cite{Aizenman:2019yuo} of (marginal) triviality. Therefore, the construction of a simple, solvable and non-trivial QFT-model in four dimensions (albeit on a noncommutative space) is a major achievement for renormalisation theory.
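Both closed forms lend themselves to a quick numerical cross-check. The following sketch (our own illustration, assuming Python with \texttt{numpy}/\texttt{scipy} and $\mu^2=1$) verifies $R_2(R_2^{-1}(x))=x$ via the principal branch $W_0$ and tests that the hypergeometric expression reproduces itself under the integral equation \eqref{linInt}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1, lambertw

lam, mu2 = 0.1, 1.0

# D = 2: inverse of R_2 via the principal branch W_0
R2 = lambda z: z + lam * np.log(1 + z / mu2)
R2_inv = lambda x: lam * lambertw(mu2 / lam * np.exp((x + mu2) / lam)).real - mu2
print(R2(R2_inv(2.0)) - 2.0)               # expected: ~1e-15

# D = 4: alpha_lambda = arcsin(lambda*pi)/pi for |lambda| <= 1/pi
a = np.arcsin(lam * np.pi) / np.pi
R4 = lambda z: z * hyp2f1(a, 1 - a, 2, -z / mu2)

z = 5.0
integral, _ = quad(lambda t: R4(t) / ((t + mu2)**2 * (t + mu2 + z)), 0, np.inf)
print(z - lam * z**2 * integral - R4(z))   # expected: ~0 (quadrature error)
\end{verbatim}
Up to quadrature errors, both printed numbers should vanish.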
In the next step one aims to apply the construction of (B)TR as explained in Sec.~\ref{Sec.BTR}. But here we are faced with a problem: Defining $x(z)=R_4(z)$, the derivative $x'(z)$ is no longer in the class of rational functions, which was a fundamental assumption in (B)TR. In other words, the full construction of (B)TR, based on algebraic rather than transcendental ramification points, fails directly from the beginning. It remains to investigate in a long-term project whether an adapted formulation of (B)TR can extend the previous results to four dimensions. \section{Cross-checks: perturbative expansions} In present-day particle physics, one often depends on approximative methods such as expansions of correlation functions into series of Feynman graphs. A main advantage of investigating the present toy model is undoubtedly the fact that we only use the Feynman graph expansion to perform a cross-check of our exact solutions. Due to its nature as a matrix model, the expansion of the cumulants creates \textit{ribbon graphs} (also called \textit{fat Feynman graphs}). By their duality to maps on surfaces, we also have points of contact with enumerative geometry and combinatorics. \subsection{Ribbon graphs} In order to obtain a perturbative series of the cumulants (\ref{partition}), one expands the exponential $\exp(-\frac{\lambda N}{4} \mathrm{Tr}(\phi^4))$ inside the measure (\ref{minlos-q-int}) into a (formal) power series in the \textit{coupling constant} $\lambda$. As a result we obtain products of matrix elements $\phi_{kl}$ from the expansion $(\mathrm{Tr}(\phi^4))^v$ of the exponential and from the products present in (\ref{partition}), integrated against the Gau\ss{} measure (\ref{minlos-q}). The resulting sum over pairings can be interpreted as a sum over ribbon graphs. A ribbon has two strands, which carry a labelling. Two strands connected by a four-valent vertex are identified. We denote by $\mathfrak{G}^{v,\pi}_{k_1,...,k_n}$ the set of labelled connected ribbon graphs with $v$ four-valent vertices and $n$ one-valent vertices, where the strands connected to the one-valent vertices (from the $n$ factors in (\ref{partition})) are labelled by $(k_1,k_{\pi(1)})$, \dots, $(k_n,k_{\pi(n)})$. It is assumed here that the $k_i$ in (\ref{partition}) are pairwise different so that $l_i=k_{\pi(i)}$ for a permutation $\pi\in \mathcal{S}_{n}$. Let a graph $\Gamma$ have $r=2v+n/2$ ribbons and $s(\Gamma)$ loops (closed strands). We associate a weight $\varpi(\Gamma)$ to $\Gamma$ by applying the following \textit{Feynman rules}: \begin{itemize}\itemsep -1pt \item every 4-valent ribbon-vertex carries a factor $-\lambda$ \item every ribbon with strands labelled by $p,q$ carries a factor $\frac{1}{e_p+e_q}$ (\textit{propagator}) \item multiply all factors and apply the summation operator $\frac{1}{N^s}\sum_{l_1,..,l_s=1}^N r_{l_1}...r_{l_s}$ over the $s=s(\Gamma)$ loops (closed strands) labelled by $l_1,...,l_s$. \end{itemize} Then the cumulants expand for pairwise different $k_1,...,k_n$ and $n$ even into the following series: \begin{align} \sum_{v=0}^\infty \sum_{\Gamma \in \mathfrak{G}^{v,\pi}_{k_1,...,k_n}} N^{v-r+n+s(\Gamma)} \varpi(\Gamma). \label{expansion} \end{align} The cyclic order at the ribbon vertices gives an orientation of the full ribbon graph, so that we achieve a natural embedding of the graph into a Riemann surface. The exponent $v-r+n+s(\Gamma)$ equals the Euler characteristic $\chi=2-2g-b$ of this Riemann surface, with $b$ the number of boundary components and $g$ the genus.
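For bookkeeping it is convenient to read off the topology mechanically from $r=2v+n/2$ and the relation $v-r+n+s=2-2g-b$; a tiny helper (our own illustration, not part of any cited code):
\begin{verbatim}
def genus(v, n, s, b):
    # r = 2v + n/2 ribbons; the exponent v - r + n + s equals 2 - 2g - b
    r = 2 * v + n // 2
    chi = v - r + n + s
    assert (2 - b - chi) % 2 == 0
    return (2 - b - chi) // 2

# order-lambda tadpole of the 2-point function:
# v = 1 vertex, n = 2 external legs, s = 1 closed strand, b = 1 boundary
print(genus(1, 2, 1, 1))   # 0, i.e. planar
\end{verbatim}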
Furthermore, the permutation $\pi$ splits $(k_1,k_{\pi(1)})$, \dots, $(k_n,k_{\pi(n)})$ into blocks with cyclic symmetry, see \eqref{eq:cumulants}. This suggests the notation $G_{|k_1^1\dots k_{n_1}^1|\dots |k_1^b\dots k_{n_b}^b|}$, where the blocks are separated by the vertical lines and each block is cyclically symmetric (see \cite{Branahl:2020uxs} for more details). We will denote the set of these ribbon graphs by $\mathfrak{G}^{v}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|} =\bigcup_{g=0}^\infty \mathfrak{G}^{g,v}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|}$, so that finally the expansion reads:\enlargethispage{5mm} \begin{align} G^{(g)}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|} =\sum_{v=0}^\infty \sum_{\Gamma \in \mathfrak{G}^{g,v}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|}} \varpi(\Gamma)\;. \label{cumulants-3} \end{align} \subsection{Perturbative renormalisation of ribbon graphs} As mentioned before, a QFT is achieved in a continuum limit $N\to \infty$. In Sec.~\ref{sec.Moyal} we have implemented this limit in two steps. We first passed to a continuous spectral measure $\varrho_0$ on a finite interval $[0,\Lambda^2]$ and then arrived at a QFT in the limit $\Lambda^2\to \infty$. This procedure gives rise to slightly different Feynman rules. The summations converge to integrals, which are not necessarily convergent (by themselves) after removing the cut-off $\Lambda^2$. The perturbative renormalisation that avoids these divergences is well understood and of a similar structure to renormalisation in local QFT. The overlapping divergences are controlled by Zimmermann's forest formula (see e.g. \cite{Hock:2020rje}). In the work \cite{Thurigen:2021zwf} even more is proved. It is known that certain graphs (so-called 2-graphs, of which ribbon graphs are a special case) carry a Hopf-algebraic structure. The application of the antipode of the Hopf algebra proves Zimmermann's forest formula for 2-graphs in general \cite{Thurigen:2021ntr}. This is analogous to the techniques of Connes and Kreimer \cite{Connes:1999yr}. We refer to these works and references therein for more details, but want to present a rather trivial example, where the forest consists only of the empty forest. \begin{example} The two graphs of the 2-point function of order $\lambda$ are given in Fig. \ref{fig:2PGraphs}. \begin{figure}[h!] \centering \def\svgwidth{400pt} \input{2PGraphs.pdf_tex} \caption{All graphs contributing to the 2-point function $G^{(0)}_{|ab|}$ up to order $\lambda^2$, where the upper strand is labelled by $a$ and the lower by $b$ for each graph. Topologically, some graphs are the same but are different elements of $\mathfrak{G}^{0,v}_{|ab|}$ due to different labellings.} \label{fig:2PGraphs} \end{figure} The forest formula is trivial and realised by a Taylor subtraction depending on the dimension of the theory given by the spectral measure $\varrho_0(t) dt$. In $D$ dimensions this measure behaves asymptotically as $\varrho_0(t) dt\sim t^{\frac{D}{2}-1}dt$. For the first graph of order $\lambda$ with external labelling $a,b$, we get (put $\mu^2=1$ for convenience) \begin{align*} &\frac{-\lambda}{(1+a+b)^2}\int_0^\infty \varrho(t)dt\bigg(\frac{1}{1+a+t}-T^{D/2-1}_a\Big(\frac{1}{1+a+t}\Big)\bigg)\\ =&\frac{-\lambda}{(1+a+b)^2}\int_0^\infty \varrho(t)dt\bigg(\frac{1}{1+a+t}- \frac{1}{1+t}-\dots-\frac{a^{\frac{D}{2}-1}}{(\frac{D}{2}-1)!}\frac{d^{\frac{D}{2}-1}}{dw^{\frac{D}{2}-1}}\frac{1}{1+t+w}\Big\vert_{w=0}\bigg)\\ =&\frac{-\lambda}{(1+a+b)^2}\int_0^\infty \varrho(t)dt\,\frac{(-a)^{\frac{D}{2}}}{(1+a+t)(1+t)^{\frac{D}{2}}}.
\end{align*} Taking the second graph of Fig. \ref{fig:2PGraphs} into account, we only have to interchange $a$ and $b$. Adding both expressions, we confirm the result of Example \ref{ex:2P} for any $D$, which was the expansion of the explicit result. \end{example} We refer to \cite{Hock:2020rje} for further explicit examples. Already the second order of the 2-point function involves tricky computations, since the full beauty of Zimmermann's forest formula is required for the sunrise graph at $D=4$. An interested reader could try this computation and compare it with the expansion of the exact solution of Theorem \ref{Thm:2P}, where a contour integral around the branch cut of a sectionally holomorphic function has to be evaluated. \subsection{Boundary creation} In this subsection we focus on the perturbative interpretation of the objects constructed in Sec.~\ref{Sec.BTR}. The objects $\Omega_{g,n}$ form the fundamental building blocks of the theory. We forget about the continuum limit and work in the discrete setting again, since their construction in $D=4$ is not yet understood. It was shown in \cite{Branahl:2020uxs} that the $\Omega^{(g)}_{q_1,...,q_m}$ are special polynomials of the cumulants $G^{(g)}_{\dots}$. For $m=1$ this is the definition $\Omega^{(g)}_{q} := \frac{1}{N}\sum_{k=1}^N G^{(g)}_{|qk|}+\frac{1}{N^2}G^{(g)}_{|q|q|}$ given after (\ref{eq:Omega-gm}). For $m=2$ we have: \begin{proposition}[\cite{Branahl:2020uxs}] \label{prop:Om02G} For $q_1\neq q_2$ one has \begin{align} \Omega^{(g)}_{q_1,q_2} &= \frac{\delta_{g,0}}{(e_{q_1}-e_{q_2})^2} +\sum_{g_1+g_2=g} G^{(g_1)}_{|q_1q_2|} G^{(g_2)}_{|q_1q_2|} \nonumber \\ & + \frac{1}{N^2}\sum_{k,l=1}^N G^{(g)}_{|q_1k|q_2l|} +\frac{1}{N}\sum_{k=1}^N \Big(G^{(g)}_{|q_1kq_1q_2|}+G^{(g)}_{|q_2kq_2q_1|} +G^{(g)}_{|q_1kq_2k|}\Big) \nonumber \\ & +\frac{1}{N} \sum_{k=1}^N \Big(G^{(g-1)}_{|q_1k|q_2|q_2|} +G^{(g-1)}_{|q_2k|q_1|q_1|}\Big) +G^{(g-1)}_{|q_1q_2q_2|q_2|}+G^{(g-1)}_{|q_2q_1q_1|q_1|} \nonumber \\ &+\sum_{g_1+g_2=g-1} G^{(g_1)}_{|q_1|q_2|} G^{(g_2)}_{|q_1|q_2|} +G^{(g-2)}_{|q_1|q_1|q_2|q_2|}\;. \end{align} \end{proposition} Inserting the ribbon graph expansions of the cumulants $G^{(g)}$, one obtains in this way a ribbon graph expansion of the $\Omega^{(g)}_{q_1,...,q_m}$. There is an alternative way to produce these ribbon graph expansions. We recall from (\ref{eq:Omega-gm}) that the $\Omega^{(g)}_{q_1,...,q_m}$ are defined as derivatives of $\Omega_{q_1}^{(g)}$ with respect to the spectral values $e_{q_i}$. We declare this derivative to be a \textit{boundary creation operator} $\hat{T}_q:=-N \frac{\partial}{\partial e_q}$ (looking at its action, its name is self-explanatory). One can go one step further and consider a primitive of $\Omega^{(g)}_q$ under the creation operator, $\Omega_q^{(g)}=:\hat{T}_q \mathcal{F}^{(g)}$. The free energies $\mathcal{F}^{(g)}$ defined in this way agree with the genus expansion of the logarithm of the partition function itself: \begin{align} \log \mathcal{Z} =\sum_{g=0}^\infty\sum_{v=0}^\infty \sum_{\Gamma_0\in\mathfrak{G}^{g,v}_\emptyset} \frac{N^{2-2g}\varpi(\Gamma_0)}{|\mathrm{Aut}(\Gamma_0)|}\;, \end{align} where $\varpi(\Gamma_0)$ is given by the same Feynman rules as before. This expansion creates closed ribbon graphs without any boundary (in physics vocabulary: \textit{vacuum graphs}). In contrast to the previously considered open graphs, closed graphs have a non-trivial automorphism group. Let us focus on the planar sector to illustrate the structures.
We start with $\mathcal{F}^{(0)}$ and describe geometrically how the operator $\hat{T}_q$ produces $\Omega^{(0)}_q$ with one boundary more. In the Quartic Kontsevich model the $g=0$ free energy has an expression as a residue at the poles $a$ of $\omega_{0,1}(z)$: \begin{align*}\label{f0} \mathcal{F}^{(0)} = \frac{1}{2} \sum_a \biggl[ \Res\displaylimits_{q \to a} \omega_{0,1}(q)V_a(q) +t_a \mu_a \biggr] -\frac{\lambda}{2N}\sum_k r_k e_k^2 + \frac{\lambda^2}{2N^2} \sum_{k,i, k \neq i} r_ir_k \log(e_i-e_k) \\ \quad \mu_a:= \lim_{q \to a} \big( V_a(q) -t_a\log[\xi_a(q)]- \int \omega_{0,1} \big)\;. \end{align*} (see \cite{Branahl:2020uxs} for the definition of the potential $V_a$, the temperatures $t_a$ and the local variables $\xi_a(q)$). We give in Fig. \ref{fig:freeenergy} the elements of $\mathfrak{G}^{0,v}_\emptyset$ up to $v=2$. \begin{figure}[h!] \centering \def\svgwidth{400pt} \input{freeenergy.pdf_tex} \caption{The graph at order $\lambda^0$ is added as the empty ribbon graph. All these graphs contribute to the free energy of genus 0 up to order $\lambda^2$. These graphs are elements of $\mathfrak{G}^{0,v}_\emptyset$. The melon graph $\Gamma_M$ (in the second row) has $ |\mathrm{Aut}(\Gamma_M)|=8$ and the other four graphs $ |\mathrm{Aut}(\Gamma)|=2$. } \label{fig:freeenergy} \end{figure} Let us pick the order $\lambda^1$ to visualise the perturbative action of the boundary creation operator. The vacuum graph at this order, with strands labelled by $n,k,l$, contributes $\sim \frac{1}{(e_n{+}e_k)(e_n{+}e_l)}$. The creation operator cuts each strand of the ribbon graph, giving rise to graphs with one boundary and one loop; there are four ways to cut the ribbons, but only two are distinct (pairs of cuts coincide by symmetry), which explains pictorially the order $2$ of the automorphism group of this graph. The two distinct cuts together yield \begin{align*} \sim\; 2 \times \frac{1}{(e_q{+}e_n)^2} \bigg(\frac{1}{e_n{+}e_k} + \frac{1}{e_q{+}e_k} \bigg)\;, \end{align*} which is precisely the $\mathcal{O}(\lambda^1)$-contribution to the planar $2$-point function. \noindent One can perform exactly the same technique at $\mathcal{O}(\lambda^2)$, giving the graphs in Fig.~\ref{fig:2PGraphs}. We see that the initial data $\Omega^{(0)}_{q_1}(z)$ have to be identified with the partially summed $2$-point function. Repeated application of the creation operator gives the ribbon graph expansion of the higher $\Omega^{(g)}_{q_1,...,q_m}$. \subsection{Enumeration of graphs} A natural question might be the following: can one give a closed form for the number of graphs contributing to the cumulants and the $\Omega^{(g)}_{q_1,...,q_m}$ at a given order? In this subsection we give a partial answer by exploiting the duality (interchanging the r\^ole of vertices and faces) between ribbon graphs and maps on surfaces with marked faces. The observation that the Hermitian 1-matrix model is governed by topological recursion triggered enormous progress in the enumeration of maps with $k$-angulations and $m$ marked faces of arbitrary lengths (explained in a very readable way in \cite{Eynard:2016yaa}). Allowing only a quartic potential, a connection to the correlation functions of the Quartic Kontsevich Model seems natural. However, despite having the same partition function if one sets $d=1$ (an $N$-fold degenerate eigenvalue $e_1=e$, giving the same weight to every strand in the ribbon graph), one has to look carefully at the definition of the objects studied in the original works, namely cumulants like $\langle \prod_{i=1}^b \mathrm{Tr}\, \phi^{L_i} \rangle_c$ for a sequence $(L_i)_{i=1}^b$ of natural numbers. Luckily, a very recent investigation \cite{Borot:2021eif} showed that when exchanging the r\^ole of $x$ and $y$ as the ingredients of the spectral curve of the topological recursion in the 1-matrix model one counts the so-called \textit{fully simple maps}. These are equivalent to our correlation functions $G^{(g)}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|}$ when replacing the traces (summed over all indices, possibly equal) by a set of $b$ pairwise disjoint cycles $\gamma_i=(k^i_1,...,k_{n_i}^i)$ of length $l(\gamma_i)=n_i$. This formulation takes pairwise different indices into account. Therefore, only a subset of the \textit{ordinary} graphs/maps from the former investigation of the 1MM can be generated. These fully simple maps can be concretely characterised by allowing only boundaries where no more than two edges belonging to the boundary are incident to a vertex. The enumeration of this kind of map was investigated for quadrangulations by Bernardi and Fusy \cite{bernardi2017bijections}. Building on their results, we can relate (the $\lambda$-expansion of) our correlation functions $G^{(0)}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|}$ ($n_i$ even) for $d=1$ to the number of (planar) fully simple maps/ribbon graphs: \begin{align*} G^{(0)}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|} \Big|_{d=1} =\sum_{n=0}^{\infty} \frac{3^{b+n-2}(\#n_e-1)!}{n!(3l_h+b+n-2)!} \prod_{i=1}^b n_i \binom{\frac{3n_i}{2}}{\frac{n_i}{2}} \cdot \frac{(-\lambda)^{n+l_h+b-2}}{(2e)^{2(n+l_h+b-1)}}\;.
\end{align*} Here $l_h:=\frac{1}{2}\sum_i n_i$ is the half boundary length and $\#n_e:=3l_h+2b+2n-4$ the number of edges. For $b=1$, $n_1=2$ and thus $l_h= 1$, one recovers the famous result of Tutte for the number of rooted planar quadrangulations, $\frac{2 \cdot 3^n(2n)!}{n!(n+2)!}$, obtained as the coefficient of $\frac{(-\lambda)^n}{(2e)^{2n+2}}$ in the 2-point function. Compare the first terms of this sequence $(1,2,9,54,...)$ with Fig.~\ref{fig:2PGraphs}. Once again, we can apply the creation operator to the $d=1$ result for $\mathcal{F}^{(0)}$, which reads \begin{align*} & \mathcal{F}^{(0)}= \sum_{n=1}^{\infty} 3^n \frac{(2n-1)!}{n!(n+2)!} \frac{(-\lambda)^n}{(2e)^{2n}} =\frac{-\lambda}{2(2e)^2}+\frac{9(-\lambda)^2}{8(2e)^4} +\frac{9(-\lambda)^3}{2(2e)^6} +\frac{189(-\lambda)^4}{8(2e)^8} +... \end{align*} For example, the orders $2,2,8$ of the automorphism groups of the graphs contributing at $\mathcal{O}(\lambda^2)$ in Fig.~\ref{fig:freeenergy} produce the coefficient $\frac{1}{2}+\frac{1}{2}+\frac{1}{8}=\frac{9}{8}$ in front of $\frac{(-\lambda)^2}{(2e)^4}$. Acting with $-\frac{\partial}{\partial e}$ on $\mathcal{F}^{(0)}$ gives the aforementioned numbers $\frac{2 \cdot 3^n(2n)!}{n!(n+2)!}$, providing again evidence for the correctness of the creation operator. We give a very brief outlook on the transition from $ G^{(g)}_{|k_1^1...k_{n_1}^1|...|k_1^b...k_{n_b}^b|}$ (fully simple quadrangulations) to $\Omega_{q_1,...,q_n}^{(g)}$. Currently, we know that the $\Omega_{q}^{(g)}$ generate the number of \textit{ordinary} quadrangulations as in the Hermitian one-matrix model. The basic relation $\Omega^{(g)}_{q} := \frac{1}{N}\sum_{k=1}^N G^{(g)}_{|qk|}+\frac{1}{N^2}G^{(g)}_{|q|q|}$ extends the relation for $g=1$ found in \cite{Borot:2017agy} to any genus $g$. Moreover, we remark that the pure TR constituents of $\Omega_{q}^{(g)}$ seem to be generating functions of only the \textit{bipartite} rooted quadrangulations (so far known for $g=0,1,2$). An interpretation of all $\Omega_{q_1,...,q_n}^{(g)}$, $n>1$, with and without blobs, especially in a closed or recursive form, is under current investigation.
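These counting statements are easily tested by machine. The following sketch (our own illustration, assuming Python with \texttt{sympy}) truncates the series for $\mathcal{F}^{(0)}$, applies $-\frac{\partial}{\partial e}$ and compares the resulting coefficients with Tutte's numbers:
\begin{verbatim}
import sympy as sp

lam, e = sp.symbols('lambda e', positive=True)
order = 6

# truncated genus-0 free energy at d = 1
F0 = sum(3**n * sp.factorial(2*n - 1) / (sp.factorial(n) * sp.factorial(n + 2))
         * (-lam)**n / (2*e)**(2*n) for n in range(1, order))

G = sp.expand(-sp.diff(F0, e))   # action of the creation operator at d = 1

for n in range(1, order):
    tutte = 2 * 3**n * sp.factorial(2*n) / (sp.factorial(n) * sp.factorial(n + 2))
    coeff = G.coeff(lam, n) * (-1)**n * (2*e)**(2*n + 1)
    assert sp.simplify(coeff - tutte) == 0   # matches Tutte's numbers

print([2 * 3**n * sp.factorial(2*n) / (sp.factorial(n) * sp.factorial(n + 2))
       for n in range(4)])                   # [1, 2, 9, 54]
\end{verbatim}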
\section{Summary and outlook} In this article we reviewed the accomplishments of the last two decades towards the exact solution of a scalar quantum field theory on noncommutative geometries of various dimensions. We highlighted how the recursive structure in the solution theory fits into a larger picture in complex algebraic geometry. After having introduced our understanding of quantum fields on a noncommutative space as well as the powerful machinery of (blobbed) topological recursion, we mainly focused on results of the previous three years: with the discovery of meromorphic differentials in our model which are governed by blobbed topological recursion, the long-term project of its exact solution seems to be gradually coming to an end. We are convinced of having found the most suitable structures to put us on the path to a complete understanding of the underlying recursive patterns in the quantum field theory. We interpret this plethora of fascinating algebraic structures as the main reason for the exact solvability of this class of QFT-models. This is highly exceptional. We started with the most concrete results in the finite-matrix (zero-dimensional) case, the quartic Kontsevich model, and explained how it relates to structures in algebraic geometry such as ramified coverings of Riemann surfaces, meromorphic differential forms and the moduli space of complex curves. Then we explained how, in the simplest topological sector, one can take the limit to quantum field theory models on 2- and 4-dimensional Moyal space. The most remarkable result here is the absence of any triviality problem. We concluded with a glimpse into perturbation theory, which we not only see as a useful cross-check of our exact results, but also as an active area of research with interesting connections to enumerative geometry and number theory. The mind-blowing algebro-geometric tool of topological recursion shall lead us one day to the long-awaited property of integrability of a quantum field theory. To reach this goal at the horizon, and also to explore further structures and phenomena, many questions are still on the agenda. We finally list an excerpt: \begin{itemize} \item Does a generic structure of the holomorphic part of $\omega_{g,n}$ also exist in the non-planar sector? What does it look like (also taking the free energies into consideration)? \item The recursion formula of the holomorphic add-ons shares many characteristics with usual topological recursion. Can we achieve these results also with pure TR by changing the spectral curve (e.g.\ to one of genus 1)? \item How can we formulate BTR in 4 dimensions, where no algebraic ramification point is available anymore? \item Does the particular combination of Feynman graphs contributing to $\omega_{g,n}$ have a special meaning in quantum field theory? \item Which property of the moduli space of stable complex curves is captured by the intersection numbers generated by the quartic Kontsevich model? \item Is there an integrable hierarchy behind our quantum field theory? \end{itemize} We are looking forward to many interesting insights in the not-too-distant future and invite the reader to follow our progress towards blobbed topological recursion of a noncommutative quantum field theory! \section*{Acknowledgements} Our work was supported\footnote{``Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 427320536 -- SFB 1442, as well as under Germany's Excellence Strategy EXC 2044 390685587, Mathematics M\"unster: Dynamics -- Geometry -- Structure.''} by the Cluster of Excellence \emph{Mathematics M\"unster} and the CRC 1442 \emph{Geometry: Deformations and Rigidity}. AH is supported through the Walter-Benjamin fellowship\footnote{``Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 465029630.''}. \input{corfu_draft.bbl} \end{document}
\section{Introduction} In many industrial applications a \textit{nominal solution} ${x^{nom}}$ of an optimisation problem $P$ is intended to be used on a regular basis. This is for example the case in railway scheduling, where a single plan can be repeated each week. Due to uncertainties on the data of $P$, ${x^{nom}}$ may occasionally become infeasible and thus require adaptation. The \textit{quality robustness} of $x^{nom}$ characterizes how small the deterioration of the objective function of $P$ is when changes occur. Robust and stochastic methods tend to optimize the quality robustness, which may not be the most relevant approach. Indeed, since solution $x^{nom}$ is regularly implemented, people involved in its realisation (e.g., employees, clients, ...) are accustomed to it. As a consequence, they may, by habit, disrupt the realisation of the adapted solution $x^{s}$. For example, users may miss a train if it leaves earlier than usual, or an operator may activate a railroad switch which is no longer required. These disruptions may lead to incidents and customer dissatisfaction, or even to the need to adapt the solution once again. We call \textit{solution costs} such indirect costs that come from the adaptation of ${x^{nom}}$ into $x^{s}$. The lower the solution costs are, the higher the \textit{solution robustness} is~\cite{sevaux2004genetic,vandevonder2005use}. In this paper, we introduce a solution robustness approach which minimizes the solution cost while bounding the nominal objective of the nominal solution. We model the solution cost between two solutions by a distance. The key idea is that the closer the solutions are to each other, the less likely the human errors are. In Section~\ref{sec:struct} we formally present our solution robustness approach and introduce the nominal, the reactive and the proactive problems. We discuss links with existing approaches in Section~\ref{sec:literature}. Sections~\ref{sec:mcf} and~\ref{sec:mf} are dedicated to two network problems, namely the min-cost flow problem and the max-flow problem, within the framework of solution robustness. We characterize the complexity of the reactive and the proactive problems for two different solution distances. Finally, we present a case study on a railroad planning problem in Section~\ref{sec:experiments}, highlighting the benefits of the proactive approach over the reactive approach. We also compare our approach with two other approaches from the literature. \section{Reactive and proactive robustness solutions} \label{sec:struct} We consider the deterministic version of an optimisation problem $\nom{P}$. We call this version the \textit{nominal problem}. Its data correspond to the normal situation: \begin{center} $\nom{P}\left\{ \begin{array}{lll} \min & {f(x)}\\ \mbox{s.t.} & {x\in X} \end{array} \right.$ \end{center} \noindent where $X\subset\mathbb R^n$ is the feasible solution set of $\overline P$ and its objective function $f:\mathbb R^n\to\mathbb R$ is called the \textit{nominal objective}. We model the uncertainty on the data by a set of scenarios which may occur, $\mathcal S=\s{\xi_i}_{i=1}^{|\mathcal S|}$. To each scenario $\xi_i$ is associated a feasible set $X_i$ and a positive weight $w_i\in\mathbb R^+$ which represents its importance. The weight of a scenario can for example be chosen according to its probability of occurrence or its severity. We now introduce two problems which represent two possible approaches to handle the uncertainty.
In the \textit{reactive problem}, the uncertainty has not been anticipated and a feasible solution is obtained once the uncertainty is revealed. In the \textit{proactive problem}, a nominal solution and one solution for each scenario are computed a priori. \subsection{Reactive robust solutions} \label{sec:reac} We suppose that a solution ${x^{nom}}$ of the nominal problem $\nom P$ is known and that scenario $\xi_i$ occurs at operating time (i.e., a few days or hours before its realisation). The adaptation of ${x^{nom}}$ to scenario~$\xi_i$ may not be a trivial problem. This is all the more true when many resources are required for the concrete implementation of a solution. For example, in railway scheduling, different services must communicate to synchronize their resources and trains may have to be transferred in advance between stations. Consequently, the solution cost of the new solution may become more crucial than optimizing its nominal objective, and ignoring it may even lead to infeasibilities. Indeed, there may simply not be enough time to implement drastic modifications that would minimize the nominal objective but would require the coordination of multiple services and resources. Moreover, modifications of a recurrent solution at operating time increase the risk of human errors. This motivates us to define a \textit{reactive problem} which provides a \textit{reactive solution} $x^r$ in the feasible set $X_i$ of $\xi_i$ whose distance to $x^{nom}$ is minimal: \begin{center} ${P^{r}(\xi_i, {x^{nom}})}\left\{ \begin{array}{lll} \min & d(x^{r}, {x^{nom}})\\ \mbox{s.t.} & {x^{r}\in X_i} \end{array} \right.$ \end{center} \noindent where $d:\mathbb R^n\times\mathbb R^n\to\mathbb R$ is a distance function between $x^{r}$ and ${x^{nom}}$. The reactive problem is considered once the uncertainty is revealed. We now introduce the proactive problem, which makes it possible to anticipate the uncertainty. \subsection{Proactive robust solutions} \label{sec:proac} A reactive solution $x^r$ may have a high nominal objective value $f(x^r)$ as $f$ is not taken into account in the reactive problem. Furthermore, the solution cost may also be significant as it has not been anticipated. To address these issues, we introduce the \textit{proactive problem} $P^{p}$ whose variables are a \textit{proactive solution} $x^{p}$ of the nominal problem as well as an \textit{adapted solution} $x^i$ for each scenario $\xi_i\in\mathcal S$: \begin{empheq}[left={P^{p}(\mathcal S\hbox{,\,} c^*)}\empheqlbrace]{alignat=2} \min~ & \sum_{\xi_i\in\mathcal S} w_i\,d(x^{p}, x^i) \label{eq:pro_objective}\\ \mbox{s.t.~} & x^{p}\in X \label{eq:pro_A}\\ & x^{i}\in X_i & \forall \xi_i\in\mathcal S\label{eq:pro_s}\\ & f(x^p) = c^*\label{eq:pro_price} \end{empheq} \noindent where $c^*$ is the optimal value of the nominal problem. Objective~\eqref{eq:pro_objective} minimizes the weighted sum of the solution costs over all the scenarios. Constraints~\eqref{eq:pro_A} and~\eqref{eq:pro_s} ensure that $x^{p}$ is feasible for $(\nom P)$ and that $x^i$ is feasible for scenario $\xi_i$. Finally, Constraint~\eqref{eq:pro_price} ensures that the nominal objective of $x^{p}$ is equal to $c^*$. Consequently, $x^{p}$ is an optimal solution of the nominal problem which additionally minimizes the weighted sum of the solution costs over $\mathcal S$. Since the weights $w_i$ are positive in problem $P^{p}(\mathcal S, c^*)$, each solution $x^i$ is an optimal solution of $P^r(\xi_i, x^p)$. As a consequence, if the nominal problem has a unique optimal solution $x^*$, then solving $P^p(\mathcal S, c^*)$ is equivalent to solving $\overline P$ and $P^r(\xi_i, x^*)$ for $i\in\s{1, ..., |\mathcal S|}$.
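To make these definitions concrete, the following sketch (our own illustration, assuming Python with the \texttt{PuLP} modelling library; the toy feasible sets and all names are ours) implements the two-step resolution for the distance $d_{val}$, linearised by auxiliary variables: first solve $\nom P$ to obtain $c^*$, then solve $P^p(\mathcal S, c^*)$:
\begin{verbatim}
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

# toy data (illustrative): f(x) = x1 + 2*x2, X = {x >= 0 : x1 + x2 >= 1},
# scenario i tightens the constraint to x1 + x2 >= rhs[i]
rhs, w = [1.2, 1.5], [1.0, 1.0]

# step 1: nominal problem, gives c*
nom = LpProblem("nominal", LpMinimize)
x = [LpVariable(f"x{j}", lowBound=0) for j in range(2)]
nom += x[0] + 2 * x[1]
nom += x[0] + x[1] >= 1
nom.solve()
c_star = value(nom.objective)

# step 2: proactive problem P^p(S, c*)
pro = LpProblem("proactive", LpMinimize)
xp = [LpVariable(f"xp{j}", lowBound=0) for j in range(2)]
xs = [[LpVariable(f"xs{i}_{j}", lowBound=0) for j in range(2)]
      for i in range(len(rhs))]
d = [[LpVariable(f"d{i}_{j}", lowBound=0) for j in range(2)]
     for i in range(len(rhs))]          # d[i][j] >= |xp[j] - xs[i][j]|
pro += lpSum(w[i] * d[i][j] for i in range(len(rhs)) for j in range(2))
pro += xp[0] + xp[1] >= 1               # xp in X
pro += xp[0] + 2 * xp[1] == c_star      # f(xp) = c*
for i in range(len(rhs)):
    pro += xs[i][0] + xs[i][1] >= rhs[i]     # xs[i] in X_i
    for j in range(2):
        pro += d[i][j] >= xp[j] - xs[i][j]
        pro += d[i][j] >= xs[i][j] - xp[j]
pro.solve()
print([v.value() for v in xp], value(pro.objective))
\end{verbatim}
On this toy instance the proactive solution is $x^p=(1,0)$ with a total solution cost of $0.7$.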
Note that the proactive problem $P^p$ requires knowing the optimal value $c^*$ of the nominal problem. This should generally not be a problem as $\nom{P}$ is likely to be significantly easier to solve than $P^p$, given that $\nom{P}$ only finds one solution $\nom x$ while $P^p$ simultaneously finds $|\mathcal S|+1$ solutions. The proactive problem finds an optimal solution of the nominal problem. However, the solution cost may be further improved by allowing a given flexibility on the nominal objective of the proactive solution $x^{p}$. To allow such flexibility, Constraint~\eqref{eq:pro_price} can be relaxed into the following constraint: \begin{equation} f(x^{p}) \leq c^* (1+\varepsilon), \label{eq:pro_price_eps} \end{equation} \noindent where $\varepsilon\in \mathbb R^+$ represents the maximal allowed relative increase of the nominal objective value of $x^{p}$. We now introduce two distances for the reactive and proactive problems. \subsection{Defining solution costs with two distances} A first intuitive distance corresponds to the $\ell_1$ norm, which we call the \textit{distance in values}. \begin{definition} \label{def:value} {\textit{Distance in values}} $d_{val}(x^1,x^2)\triangleq\displaystyle\sum_{j=1}^n |x^1_j - x^2_j|$. \end{definition} Depending on the context, $d_{val}$ may not be the most relevant distance. In railway scheduling for example, increasing by $1$ the number of units on an existing train is much cheaper than creating a new train. Indeed, the creation of a train requires checking the availability of a locomotive and of agents as well as the compatibility of the train schedule with that of other trains. Thus, we also introduce a \textit{distance in structure}. Let $\mathbb 1_{b}$ be equal to $1$ if condition $b$ is true and $0$ otherwise. \begin{definition} \label{def:structure} {\textit{Distance in structure}} $d_{struct}(x^1, x^2)\triangleq\displaystyle\sum_{j=1}^n |\mathbb 1_{x^1_j > 0} - \mathbb 1_{x^2_j > 0}|$. \end{definition} Note that some problems may contain several subsets of variables. In that case, it may be relevant to apply these distances to some subsets only. \subsection{Literature review on solution cost in robust optimization} \label{sec:literature} In the line of~\cite{soyster1973convex}, robust optimization approaches often seek a nominal solution which is feasible for any value of the data in an uncertainty set~\cite{ben1998robust,bertsimas2004price}. In this context, the solution cost is equal to $0$ as the solution is not modified once the uncertainty is revealed, and the \textit{price of robustness}~\cite{bertsimas2004price} corresponds to the minimal increase of the nominal objective required to ensure the feasibility of a single solution in all possible scenarios. In many concrete problems a competitive nominal solution may not be feasible for all possible realisations of the uncertainty. To alleviate this problem, two-stage robust optimization approaches have been introduced, where the value of a set of recourse variables is fixed only once the uncertainty is revealed~\cite{atamturk2007two,ben2004adjustable}. To ease the resolution of such problems, restrictions are often imposed on the recourse variables~\cite{ben2004adjustable}.
In adjustable robust problems, the value of the recourse variables can even be directly deduced from the value of the uncertain data~\cite{ben2003new,gupta2001provisioning,ouorou2007model,poss2013affine}, thus significantly reducing the complexity of the problem. In two-stage robust optimization approaches, the nominal objective is generally optimized and the solution cost is only indirectly considered through the choice of the recourse variables. The article~\cite{bendotti2019anchor} constitutes an exception, as the authors consider a scheduling problem in which the nominal objective is bounded and the weighted sum of anchored jobs is maximized. A job is said to be \textit{anchored} if it cannot be rescheduled once the uncertainty is revealed. This objective can be viewed as a way to limit the solution cost. The authors consider uncertain processing times and show that maximizing the weight of the anchored jobs is polynomial for the box uncertainty set. It is, however, NP-hard for both the budgeted and the polytope uncertainty sets. Liebchen et al. introduced a general two-stage framework, called \textit{recoverable robustness}, in which a solution $x$ and a recovery algorithm $A$ are determined such that the nominal objective is optimized and, once the uncertainty is revealed, it is guaranteed that the application of $A$ on $x$ leads to a feasible solution~\cite{liebchen2009concept}. Restrictions on $A$ can be imposed to limit its recovery actions, its complexity or even the distance between $x$ and the recovered solution. This last constraint could be used to bound the solution cost. However, it is generally not easy to ensure that these restrictions are satisfied for any realization of the uncertainty. In the solution robustness framework that we define in Section~\ref{sec:proac}, the proactive problem is able to minimize the solution cost at the price of an additional set of variables for each scenario considered. Two examples of recoverable robustness applied to shortest path problems are introduced in~\cite{busing2012recoverable}. In \textit{$k$-distance recoverable robustness}, the recovery actions are limited since at most $k$ new arcs can be used once the uncertainty is revealed. In the second approach, called \textit{rent recoverable robustness}, an edge is said to be \textit{rented} if it is used in the first stage and \textit{bought} if it is used in the second stage. For each scenario $s$ and each arc $e$, the cost $c^s_{e}$ incurred for both renting and buying arc $e$ is defined. The cost of the arc is lower if it is only rented ($\alpha c^s_{e}$ with $\alpha\in]0, 1[$) and higher if it is only bought ($(1+\beta)c^s_{e}$ with $\beta\geq 0$). This second approach is more flexible than the first one since the number of new edges used in the second stage is not constrained, but it is also less generic. Indeed, the notion of renting and buying edges is relevant in problems such as railway scheduling, where the network is owned by one company and exploited by others, but it is not suitable for all applications. The authors study the complexity of both variants on three uncertainty sets and show that only the rent recoverable robustness on the interval set is polynomial. In Section~\ref{sec:experiments} we compare the $k$-distance recoverable robustness and the anchored approach to our proactive approach. Another type of recoverable robustness, called \textit{recovery-to-optimality}, has been introduced in~\cite{goerigk2014recovery}.
In this context, when a scenario $s$ occurs, the nominal solution $x^{nom}$ must be adapted into a solution $x^s$ with an optimal nominal objective which, as a secondary objective, minimizes a recovery cost $d(x^{nom}, x^s)$. The objective is then to find a nominal solution which minimizes the average or the worst recovery cost over all the scenarios. The recovery cost can correspond to a solution cost or a regret. In addition to imposing the optimality of $x^s$, this approach is different from ours as it does not necessarily impose the feasibility of the nominal solution $x^{nom}$. Consequently, recovery-to-optimality appears more relevant when the solution cost is less important than the nominal objective value, as $x^{nom}$ may require a significant solution cost in order to be transformed into a solution $x^s$ with an optimal nominal objective for scenario $s$. This is for example the case when the uncertainty is revealed early (e.g., months or years before implementation) or, more generally, when the changes are unlikely to lead to errors (e.g., few human interventions). On the contrary, our approach is more appropriate when the solution cost is at least as important as the nominal objective value. For example, this occurs when the uncertainty is revealed late (e.g., a few minutes or hours before implementation) or when the cost of any error is high (e.g., when lives are at stake). The authors apply their approach to a linear program for timetabling in public transportation. Recovery-to-optimality requires the optimality of $x^s$ for each scenario $s$, which is harder to ensure than the optimality of $x^{nom}$ required by our approach. As a consequence, they consider a heuristic in which optimal solutions for a subset of scenarios $\mathcal S$ are first computed, and then a solution $x$ minimizing the recovery cost to these solutions is obtained. The notion of $(a, b)$-supermodel introduced in~\cite{ginsberg1998supermodels} is related to the solution cost. A solution $x$ is said to be an \textit{$(a, b)$-supermodel} if, whenever the value of at most $a$ variables in $x$ is changed, a feasible solution of the original problem can still be obtained by modifying the value of at most $b$ other variables. The authors show that a $(1, 1)$-supermodel of SAT can be obtained by solving a larger instance of SAT. The originality of this approach is the fact that the uncertainty is not on the data but on the value of the solution itself. The identification of schedules that are efficient both in terms of nominal objective and solution cost has been considered under different names: predictability~\cite{mehta1998predictable,o1999predictable}, stability~\cite{herroelen2004construction,leus2003generation,leus2004branch,leus2005complexity}, solution robustness~\cite{sevaux2004genetic,vandevonder2005use}. Exact approaches are rarely considered to solve such problems. One exception is~\cite{leus2004branch}, in which a dedicated branch-and-bound algorithm is used to solve a single-machine scheduling problem in which a single job is anticipated to be disrupted. In other works, heuristic approaches are generally considered. For example, in~\cite{sevaux2004genetic} the authors present a multi-objective genetic algorithm in which both the solution cost and the nominal objective are optimized. Their problem shares features with both the reactive problem $P^r$ (defined in Section~\ref{sec:reac}) and the proactive problem $P^p$ (defined in Section~\ref{sec:proac}).
Indeed, as in $P^r$, a nominal solution $x^{nom}$ is fixed and we seek a solution $x$ whose distance to $x^{nom}$ is minimized; similarly to $P^p$, the cost of $x$ over several scenarios is additionally minimized. The approach presented in this paper has already been applied to a scheduling problem of SNCF, the French national railway company~\cite{lucas2020planification,lucas2019reducing}. Below, we apply this framework to two network problems, namely the integer min-cost flow and the integer max-flow problems. It is well known that the deterministic version of both of these problems can be solved with polynomial-time algorithms~\cite{ahuja1988network}, but we show that this is not the case for their proactive counterparts. We consider uncertain arc demands for the min-cost flow problem and uncertain arc capacities for the max-flow problem. \section{Solution robustness for the min-cost flow problem with uncertain arc demands} \label{sec:mcf} The min-cost flow problem can be stated as: \vspace{.3cm} \noindent\fbox{ \parbox{0.97\textwidth}{\textsc{Min-Cost Flow Problem} \newline \begin{tabular}[\textwidth]{p{0.06\textwidth}p{0.85\textwidth}} Input: & A digraph $G = (V, A)$ with arc demands $\ell_a\in\mathbb N$, capacities $u_a\in\mathbb N$ and unitary costs $c_a\in\mathbb R^+$ for each arc $a\in A$, and node demands $b_v\in\mathbb Z$ for each vertex $v\in V$ ($b_v>0$ if $v$ is a \textit{supply node}, $b_v <0$ if $v$ is a \textit{demand node} and $b_v=0$ if $v$ is a \textit{transshipment node}).\\ Output: & Find an integer flow with minimal cost. \end{tabular} } } \vspace{.3cm} We assume that the uncertainties are on the arc demands $\ell$. We denote by $\ell^\xi$ the arc demands associated with a scenario $\xi\in \mathcal S$. We characterize the complexity of the four reactive and proactive min-cost flow problems associated with distances $d_{val}$ and $d_{struct}$, as represented in Table~\ref{tab:complexity}. Note that all these problems are in $\mathcal N\mathcal P$. \begin{table}[H] \centering \renewcommand{\arraystretch}{1.4} \begin{tabular}{ccc} \hline \multirow{2}{*}{\textbf{Problem}}& \multicolumn{2}{c}{\textbf{Distance}}\\ & $d_{val}$ & $d_{struct}$ \\\hline \multirow{2}{*}{\textbf{Proactive}} & \multirow{2}{*}{$\mathcal N\mathcal P$-hard (Section~\ref{sec:mcf_dval})} & $\mathcal N\mathcal P$-hard (Section~\ref{sec:mcf_dstruct})\\ & & \small{even with $1$ scenario}\\\hline {\textbf{Reactive}} & {Polynomial (Section~\ref{sec:mcf_dval})} & $\mathcal N\mathcal P$-hard (Section~\ref{sec:mcf_dstruct})\\\hline \end{tabular} \caption{Complexity of the min-cost flow problems with uncertain arc demands associated with distances $d_{val}$ and $d_{struct}$.} \label{tab:complexity} \end{table} \subsection{Complexity of the robust min-cost flow with distance $d_{val}$} \label{sec:mcf_dval} We show in this section that for distance $d_{val}$ the proactive problem is $\mathcal N\mathcal P$-hard and that the reactive problem is polynomial. \begin{theorem} $MCF_{d_{val}}^p$ is $\mathcal N\mathcal P$-hard. \label{th:MCF_DV_P} \end{theorem} \begin{proof} We prove this result with a reduction from problem $3$-SAT. This problem considers a boolean expression composed of $n$ variables $X=\s{x_1, ..., x_n}$ and $m$ clauses $\s{C_1, ..., C_m}$. Let a \textit{literal} be either a boolean variable $x_i$ or its negation $\overline x_i$. Each clause of a $3$-SAT problem is a disjunction of three literals (e.g. $C_1=(x_1\vee\overline x_2\vee x_3)$).
The aim of this problem is to determine whether there exists an assignment of the variables which satisfies all the clauses. We now present how to construct an instance $I_{MCF}$ of the optimization problem $MCF_{d_{val}}^p$ from an instance $I_{SAT}$ of the feasibility problem $3$-SAT. Instance $I_{MCF}$ has $m$ scenarios. We prove that an optimal solution of $I_{MCF}$ leads to a solution cost of value $4mn$ if and only if $I_{SAT}$ is a yes-instance. \vspace{.3cm} \noindent{\textbf{Construction of $I_{MCF}$}} As represented in Figure~\ref{fig:MCF_DV_P_G}, the set of nodes $V$ of the constructed graph $G=(V, A)$ is composed of: \begin{itemize} \item one source node $s$ with supply $b_s=n$ and one sink node $t$ with demand $b_t=-n$; \item for each boolean variable $x_i$ one \textit{variable node} $x_i$ and two \textit{literal nodes} $l_i$ and $\overline l_i$ with no demand: $b_{x_i}=b_{l_i}=b_{\overline l_i}=0$; \item for each clause $C_p$ one \textit{clause node} $C_p$ with no demand: $b_{C_p}=0$. \end{itemize} \noindent The arc set $A$ is composed of the following subsets: \begin{itemize} \item $A_{sl}$ contains one arc from $s$ to each literal node; \item $A_{lx}$ contains one arc from each literal node to its corresponding variable node; \item $A_{lC}$ contains one arc $(l, C_p)$ for each clause $C_p$ and each literal $l$ in this clause; \item $A_{xt}$ contains one arc from each variable node to $t$; \item $A_{Ct}$ contains one arc from each clause node to $t$; \item $A_{st}$ contains one arc from $s$ to $t$. \end{itemize} Note that the size of the constructed graph is polynomial in $n$ and $m$. It has $3n + m + 2$ nodes and $5n + 4m+1$ arcs. The capacity of any arc is $m$ and its cost is $0$. Consequently, $c^*$ is equal to $0$. The nominal arc demand of all the arcs is $0$ except for the $n$ arcs of $A_{xt}$ where it is equal to $1$. Instance $I_{MCF}$ contains $m$ scenarios, one scenario $\xi_p$ per clause $C_p$. For each scenario $\xi_p$ the weight $w_p$ is equal to $1$, and the arc demands are all equal to $0$ except $\ell_{s,t}^{\xi_p}$, which is equal to $n-1$, and $\ell_{C_p,t}^{\xi_p}$, which is equal to $1$ (see Figure~\ref{fig:MCF_DV_P_G}). \vspace{.3cm} \noindent{\textbf{Solving $I_{SAT}$ from a solution of $I_{MCF}$}} Let $f$ be a nominal feasible flow of $I_{MCF}$. Since the graph does not contain any cycle, the $n$ units supplied by $s$ are necessarily sent on $n$ different paths to satisfy the arc demands $\ell_{x_i,t}=1$ for all $i\in\s{1, ..., n}$ (see the example in Figure~\ref{fig:MCF_DV_P_N}). \begin{figure}[H] \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MCF_G.pdf} \caption{Graph and scenarios of $I_{MCF}$.
Only the positive demands are represented.} \label{fig:MCF_DV_P_G} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MCF_NS.pdf} \caption{Nominal flow $f$.} \label{fig:MCF_DV_P_N} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MCF_S1.pdf} \caption{Flow of the first scenario $f^{\xi_1}$.} \label{fig:MCF_DV_P_S1} \end{subfigure}\hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MCF_S2.pdf} \caption{Flow of the second scenario $f^{\xi_2}$.} \label{fig:MCF_DV_P_S2} \end{subfigure} \caption{Example of the reduction to $MCF^p_{d_{val}}$ of an instance of the $3$-SAT problem with $3$ variables and $2$ clauses $C_1=(x_1\vee\overline x_2\vee x_3)$ and $C_2=(\overline x_1\vee\overline x_2\vee\overline x_3)$. Figures~\ref{fig:MCF_DV_P_N},~\ref{fig:MCF_DV_P_S1} and~\ref{fig:MCF_DV_P_S2} represent an optimal solution $(f, f^{\xi_1}, f^{\xi_2})$ to $MCF^p_{d_{val}}$. The arcs with a flow of value $0$ are colored in gray.} \label{fig:MCF_DV_P} \end{figure} Let $f^{\xi_p}$ be the flow of scenario $\xi_p$. To satisfy the lower bounds $\ell_{s,t}^{\xi_p}=n-1$ and $\ell_{C_p,t}^{\xi_p}=1$, flow $f^{\xi_p}$ necessarily sends $n-1$ units of flow on arc $(s,t)$ and $1$ unit of flow on a path $(s, l, C_p, t)$ with $l$ a literal included in clause $C_p$ (see Figures~\ref{fig:MCF_DV_P_S1} and~\ref{fig:MCF_DV_P_S2}). As represented in Table~\ref{tab:MCF_DV_P_costs}, each scenario leads to a solution cost of value $4n$ or $4n+2$. A cost of $4n$ is obtained if and only if the arc of $A_{sl}$ used in $f^{\xi_p}$ is also used in the nominal flow. \begin{table} \centering\begin{tabular}{cccc} \hline \textbf{Arc sets} & $\sum_a f_a$ & $\sum_a f^{\xi_p}_a$ & \textbf{Solution cost} \\\hline $A_{st}$ & $0$ & $n-1$ & $n-1$\\ $A_{lx}$ & $n$ & $0$ & $n$\\ $A_{xt}$ & $n$ & $0$ & $n$\\ $A_{lC}$ & $0$ & $1$ & $1$\\ $A_{Ct}$ & $0$ & $1$ & $1$\\ $A_{sl}$ & $n$ & $1$ & $n-1$ or $n+1$\\\hline & & \textbf{Total} & $4n$ or $4n+2$\\\hline \end{tabular} \caption{Solution cost incurred by scenario $\xi_p$ for each set of arcs. The solution cost associated with $A_{sl}$ is $n-1$ if the arc of $A_{sl}$ used in $f^{\xi_p}$ is also used in the nominal flow.} \label{tab:MCF_DV_P_costs} \end{table} Consequently, the solution cost over all the scenarios is contained in $[4mn, 4mn+2m]$. A cost of $4mn$ is obtained if and only if all the arcs of $A_{sl}$ used by the scenarios are also used in the nominal flow (i.e., if setting to true all the variables $x_i$ such that $f_{s,l_i} > 0$ and to false the others makes it possible to satisfy all the clauses). \end{proof} \begin{definition} (\cite{ahuja1988network}, Section 1.2) A \textit{convex cost flow problem} is a min-cost flow problem where the cost of an arc is a piecewise linear convex function of its flow. \end{definition} \begin{property} $MCF_{d_{val}}^r$ is a convex cost flow problem. \end{property} \begin{proof} The cost of an arc $a\in A$ is $|x^r_{a}-x^{nom}_{a}|$, which is a convex piecewise linear function of the flow $x^r_{a}$. \end{proof} \begin{theorem}(\cite{ahuja1988network}, Section 14.3) A convex cost flow problem with piecewise linear convex cost functions can be transformed into a min-cost flow problem. \label{thm:ahuja_convex_cost_flow} \end{theorem} \begin{corollary} $MCF_{d_{val}}^r$ is a polynomial problem.
\end{corollary} \subsection{Complexity of the robust min-cost flow with distance $d_{struct}$} \label{sec:mcf_dstruct} We show that for distance $d_{struct}$ both the proactive and the reactive problems are $\mathcal N\mathcal P$-hard. \begin{theorem}$MCF_{d_{struct}}^p$ is $\mathcal N\mathcal P$-hard.\label{thm:MCFadaptD}\end{theorem} \begin{proof} We prove the theorem with a reduction from problem $3$-Partition. This problem considers a set $E=\s{1, 2, ..., 3m}$ of $3m$ elements and a positive integer $B$. To each element $i\in E$ is associated a size $s(i)\in]\frac B 4, \frac B 2[$ such that $\sum_{i\in E} s(i)=mB$. The aim of this problem is to determine whether there exists a partition $\s{E_1, ..., E_m}$ of $E$ into $m$ subsets such that the size of each subset $E_j$, $\sum_{i\in E_j}s(i)$, is equal to $B$. Note that since $s(i)\in]\frac B 4, \frac B 2[$, {the cumulative size of any $2$ elements is lower than $B$ while that of any $4$ elements is greater than $B$. As a consequence, }each set necessarily contains exactly $3$ elements. We now present how to construct an instance $I_{MCF}$ of the optimization problem $MCF_{d_{struct}}^p$ from an instance $I_{3P}$ of the feasibility problem $3$-Partition and we prove that an optimal solution of $I_{MCF}$ has a solution cost of $7m$ if and only if $I_{3P}$ is a yes-instance. \vspace{.3cm} \noindent{\textbf{Construction of $I_{MCF}$}} {Figure~\ref{fig:redGraph} illustrates the graph obtained for an instance of problem $3$-Partition with $m=2$ and $B=100$.} The set of nodes $V$ of the constructed graph $G=(V, A)$ is composed of one node $\alpha$, one node $V_i$ for each element $i\in E$ and one \textit{subset node} $E_j$ for each subset $j\in\s{1, ..., m}$. The demand of all the nodes is equal to $0$. \begin{figure}[H] \begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics{./img/reduction_reactif_delta_v3_G.pdf} \caption{{Graph of $I_{MCF}$. All node demands are $0$. For any arc, demand is $0$, capacity is $B$ and unitary cost is $1$.}} \label{fig:redGraph} \end{subfigure}\hspace{.5cm} \begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics{./img/reduction_reactif_delta_v3_L.pdf} \caption{{Positive demands of the scenario.}} \label{fig:redDemands} \end{subfigure} \centering\begin{subfigure}[t]{0.46\textwidth} \centering \includegraphics{./img/reduction_reactif_delta_v3_S.pdf} \caption{{Optimal proactive flow inducing the $3$-partition $E_1=\s{1, 2, 6}$ and $E_2=\s{3, 4, 5}$. The arcs with a flow of value $0$ are colored in gray.}} \label{fig:redsolution} \end{subfigure} \caption{{Example of the reduction to $MCF^p_{d_{struct}}$ of an instance of the $3$-Partition problem with $m=2$, $B=100$ and elements of size $(30, 30, 30, 35, 35, 40)$.}} \label{fig:redDR} \end{figure} The arc set $A$ contains one arc $(\alpha, V_i)$ for each $i\in E$ and one arc $(V_i, E_j)$ for each $i\in E$ and each $j\in\s{1, ..., m}$. The graph also contains an additional set of arcs, denoted by $A_{E\alpha}$, which contains one arc from each subset node to $\alpha$. Note that the size of the graph is polynomial in $m$ as it contains $4 m + 1$ nodes and $3m^2+4m$ arcs. For any arc, the nominal demand is $0$, the capacity is $B$ and the unitary cost is $1$. It follows that the only optimal nominal solution is the empty flow, with objective value $c^*=0$.
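{For illustration only, the constructed graph can be built programmatically. The following is a minimal sketch assuming the \texttt{networkx} Python library, with hypothetical names; the scenario demands introduced below are not part of the graph itself.}
\begin{verbatim}
import networkx as nx

def build_reduction_graph(sizes, m, B):
    # sizes: the 3m element sizes s(i), with sum(sizes) == m * B.
    # Every arc has nominal demand 0, capacity B and unitary cost 1.
    G = nx.DiGraph()
    for i in range(len(sizes)):
        G.add_edge("alpha", f"V{i}", capacity=B, cost=1)
        for j in range(m):
            G.add_edge(f"V{i}", f"E{j}", capacity=B, cost=1)
    for j in range(m):
        G.add_edge(f"E{j}", "alpha", capacity=B, cost=1)  # A_{E alpha}
    return G  # 4m + 1 nodes and 3m^2 + 4m arcs
\end{verbatim}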
A single scenario $\xi$ is considered in which the only arc demands different from $0$ are the demands $\ell^\xi_{\alpha, V_i}$ of arcs $(\alpha, V_i)$ for all $i\in E$, which are set to $s(i)$ {(see Figure~\ref{fig:redDemands})}. \vspace{.3cm} \noindent{\textbf{Solving $I_{3P}$ from a solution of $I_{MCF}$}} Since $c^*=0$, the only feasible proactive solution is the empty flow. Let $f^\xi$ be a feasible solution for scenario $\xi$. Observe that the outgoing flow from $\alpha$ is at least $Bm$, since the demands of the arcs $(\alpha, V_i)$ sum to $\sum_{i\in E}s(i)=Bm$. Moreover, by flow conservation at $\alpha$ (all node demands are $0$), the same amount must return to $\alpha$ through the $m$ arcs $(E_j, \alpha)$, which have a capacity of $B$ each. Thus, the arcs $(E_j,\alpha)$ must all be saturated and the flow running through $\alpha$ is exactly $Bm$. Since the flow on each arc $(E_j, \alpha)$ is $0$ in the nominal flow and $B$ in $f^\xi$, these $m$ arcs induce a solution cost of $m$. We now show that at least an additional cost of $2|E|$ is incurred by the remaining arcs. Each $V_i$ has an ingoing flow of value $s(i)$. Consequently, $f^\xi$ must send a total flow of $s(i)$ on at least one of the $m$ following paths $\s{(\alpha, V_i, E_j)}_{j=1}^m$. To sum up, $f^\xi$ uses at least $2$ additional arcs for each element $i\in E$, leading to a solution cost of at least $m+2|E|=7m$ since $|E|=3m$ (see Figure~\ref{fig:redsolution}). The use of several paths to satisfy a demand $\ell^\xi_{\alpha, V_i}$ would increase the solution cost as several arcs from $\s{(V_i, E_j)}_{j=1}^m$ would be used. Consequently, a solution cost of $7m$ is obtained if and only if for each $i\in E$ exactly one path is used to satisfy the demand $\ell^\xi_{\alpha, V_i}$ (i.e., if a partition with subsets of size $B$ is obtained by assigning each element $i$ to the unique set $E_j$ such that $f^\xi_{V_i, E_j}> 0$). \end{proof} \begin{theorem}$MCF_{d_{struct}}^r$ is $\mathcal N\mathcal P$-hard. \end{theorem} \begin{proof} {In the reduction considered in the proof of Theorem~\ref{thm:MCFadaptD}, the proactive flow is unique and known a priori as it is necessarily empty. Consequently, this flow can be given as an input of the problem rather than being part of its solution without altering the validity of the reduction.} {This new reduction leads to an instance of $MCF_{d_{struct}}^r$ since the nominal flow is fixed and only $1$ scenario is considered.} \end{proof} \section{Solution robustness for max-flow problem with uncertain arc capacities} \label{sec:mf} The max-flow problem can be stated as: \vspace{.3cm} \noindent\fbox{ \parbox{0.97\textwidth}{\textsc{Max-flow Problem} \newline \begin{tabular}[\textwidth]{p{0.06\textwidth}p{0.85\textwidth}} Input: & A digraph $G = (V, A)$ with a source $s\in V$, a sink $t\in V$ and capacities $u_a\in\mathbb N$ on the flow of each arc $a\in A$.\\ Output: & Find an integer flow with maximum value. \end{tabular} } } \vspace{.3cm} We consider max-flow problems with uncertainties on the capacities $u$. Let $\mathcal S$ be a set of scenarios and denote by $u^\xi$ the capacities associated with a scenario $\xi\in \mathcal S$. {Note that the associated reactive and proactive problems are not particular cases of the ones considered in the previous section as the uncertainty is not on the arc demands.} In the remainder of this section, we determine the complexity of the four reactive and proactive max-flow problems associated with distances $d_{val}$ and $d_{struct}$, as represented in Table~\ref{tab:complexityMF}.
\begin{table}[H] \centering \renewcommand{\arraystretch}{1.4} \begin{tabular}{ccc} \hline \multirow{2}{*}{\textbf{Problem}}& \multicolumn{2}{c}{\textbf{Distance}}\\ & $d_{val}$ & $d_{struct}$ \\\hline \multirow{2}{*}{\textbf{Proactive}} & \multirow{2}{*}{$\mathcal N\mathcal P$-hard (Section~\ref{sec:mf_dval})} & $\mathcal N\mathcal P$-hard (Section~\ref{sec:mf_dstruct})\\ & & \small{even with $1$ scenario}\\\hline {\textbf{Reactive}} & {Polynomial (Section~\ref{sec:mf_dval})} & $\mathcal N\mathcal P$-hard (Section~\ref{sec:mf_dstruct})\\\hline \end{tabular} \caption{{Complexity of the max-flow problems with uncertain arc capacities associated with distances $d_{val}$ and $d_{struct}$.}} \label{tab:complexityMF} \end{table} \subsection{Complexity of the robust max-flow with distance $d_{val}$} \label{sec:mf_dval} We show in this section that for distance $d_{val}$ the proactive problem is $\mathcal N\mathcal P$-hard and that the reactive problem is polynomial. \begin{theorem} $MF_{d_{val}}^p$ is $\mathcal N\mathcal P$-hard. \end{theorem} \begin{proof} The proof of this theorem is similar to the one of Theorem~\ref{th:MCF_DV_P}. An instance $I_{MF}$ of the optimization problem $MF_{d_{val}}^p$ is constructed from an instance $I_{SAT}$ of the feasibility problem $3$-SAT and we prove that $I_{MF}$ has an optimal solution with a solution cost equal to $m(3n+2m-1)$ if and only if $I_{SAT}$ is a yes-instance. \vspace{.3cm} \noindent{\textbf{Construction of $I_{MF}$}} As represented in Figure~\ref{fig:MF_DV_P}, the graph associated with $I_{MF}$ has the same nodes and arcs as the one in the proof of Theorem~\ref{th:MCF_DV_P}, except that the arc $(s,t)$ does not exist. The graph also includes an additional set of arcs $A_{sC}$ which contains one arc between $s$ and each clause node. {The nominal capacity of every arc is {$+\infty$, except for the arcs in $A_{xt}$ and $A_{Ct}$, for which it is equal to $1$, and the arcs in $A_{lC}$, for which it is null.}} {As proved below, $c^*$ is equal to $n+m$.} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MF_v2_G.pdf} \caption{Graph of $I_{MF}$. } \label{fig:MF_DV_P_G} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MF_v2_N.pdf} \caption{Nominal flow $f$. {The nominal capacities different from $+\infty$ are represented between brackets.}} \label{fig:MF_DV_P_N} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MF_v2_S1.pdf} \caption{Flow $f^{\xi_1}$ of the first scenario $\xi_1$. {The scenario capacities different from $+\infty$ are represented between brackets.}} \label{fig:MF_DV_P_S1} \end{subfigure}\hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_x_MF_v2_S2.pdf} \caption{Flow $f^{\xi_2}$ of the second scenario $\xi_2$. {The scenario capacities different from $+\infty$ are represented between brackets.}} \label{fig:MF_DV_P_S2} \end{subfigure} \caption{{Example of the reduction to $MF^p_{d_{val}}$ of an instance of the $3$-SAT problem with $3$ variables and $2$ clauses $C_1=(x_1\vee\overline x_2\vee x_3)$ and $C_2=(\overline x_1\vee\overline x_2\vee\overline x_3)$. Figures~\ref{fig:MF_DV_P_N},~\ref{fig:MF_DV_P_S1} and~\ref{fig:MF_DV_P_S2} represent an optimal solution $(f, f^{\xi_1}, f^{\xi_2})$.
The arcs with a flow of value $0$ are colored in gray.}} \label{fig:MF_DV_P} \end{figure} {Instance $I_{MF}$ contains $m$ scenarios $\s{\xi_1, ..., \xi_m}$ with weights equal to $1$. In scenario $\xi_p$, the capacity of every arc is {$+\infty$, except for the arcs in $A_{xt}$ and $A_{sC}$ and the arcs $(C_q, t)$ for all $q\neq p$, whose capacity is $0$, and the arc $(C_p, t)$, whose capacity $u^{\xi_p}_{C_p, t}$ is equal to $1$} (see Figure~\ref{fig:MF_DV_P_G}). } \vspace{.3cm} \noindent{\textbf{Solving $I_{SAT}$ from a solution of $I_{MF}$}} Let $f$ be a nominal feasible flow of $I_{MF}$. Since the $n+m$ arcs arriving at $t$ all have a nominal capacity of $1$, the nominal flow value is at most $n+m=c^*$. To reach this value, flow $f$ necessarily sends one unit of flow for each clause $C_p$ on path $(s, C_p, t)$ and one unit of flow for each variable $x_i$ on one of the following two paths $(s, l_i, x_i, t)$ or $(s, \overline l_i, x_i, t)$ (see example in Figure~\ref{fig:MF_DV_P_N}). {Let $f^{\xi_p}$ be the flow of scenario $\xi_p$. Due to the scenario capacities, $f^{\xi_p}$ can either be an empty flow (solution $f^{\xi_p,0}$) or send $1$ unit of flow from $s$ to $t$ through a path $(s, l, C_p, t)$ with $l$ a literal of clause $C_p$ (solution $f^{\xi_p,l}$) (see Figures~\ref{fig:MF_DV_P_S1} and~\ref{fig:MF_DV_P_S2}). As represented in Table~\ref{tab:MF_DV_P_costs}, solution $f^{\xi_p,0}$ leads to a solution cost of value $3n+2m$ and $f^{\xi_p,l}$ to a solution cost in $[3n+2m-1,3n+2m+1]$. A cost of $3n+2m-1$ is obtained if and only if the arc of $A_{sl}$ used in $f^{\xi_p,l}$ is also used in the nominal flow.} \begin{table} \centering\begin{tabular}{cccccc} \hline \multirow{2}{*}{\textbf{Arc sets}} & \multirow{2}{*}{$\sum_a f_a$} & \multirow{2}{*}{$\sum_a f^{\xi_p,0}_a$} & \multirow{2}{*}{$\sum_a f^{\xi_p,l}_a$} & \multicolumn{2}{c}{\textbf{Solution cost}}\\ & & & & $\sum_a |f^{\xi_p,0}_a-f_a|$ & $\sum_a |f^{\xi_p,l}_a - f_a|$ \\\hline $A_{sC}$ & $m$ & $0$ & $0$ & $m$ & $m$\\ $A_{lC}$ & $0$ & $0$ & $1$ & $0$ & $1$\\ $A_{Ct}$ & $m$ & $0$ & $1$ & $m$ & $m-1$\\ $A_{sl}$ & $n$ & $0$ & $1$ & $n$ & $n\pm 1$ \\ $A_{lx}$ & $n$ & $0$ & $0$ & $n$ & $n$\\ $A_{xt}$ & $n$ & $0$ & $0$ & $n$ & $n$\\\hline & & & {\textbf{Total}} & {$3n+2m$} & $3n+2m\pm 1$\\\hline \end{tabular} \caption{{Solution cost incurred by scenario $\xi_p$ for each set of arcs. The solution cost is reduced for $A_{sl}$ if the arc used in $f^{\xi_p,l}$ is also used in the nominal flow.}} \label{tab:MF_DV_P_costs} \end{table} Consequently, the solution cost over all the scenarios is between $m (3n+2m-1)$ and $m (3n+2m+1)$. An optimal cost of $m(3n+2m-1)$ is obtained if and only if all the arcs of $A_{sl}$ used by the scenarios are also used in the nominal flow (i.e., if setting to true all the variables $x_i$ such that $f_{s,l_i} > 0$ and to false the others makes it possible to satisfy all the clauses). \end{proof} We show that $MF_{d_{val}}^r$ corresponds to a \textit{convex cost flow problem} which is known to be polynomial. \begin{property} $MF_{d_{val}}^r$ is a polynomial problem. \end{property} \begin{proof} $MF_{d_{val}}^r$ is a convex cost flow problem since the cost of an arc $a\in A$ is $|x^r_{a}-x_{a}|$. {Since a max-flow problem is a particular min-cost flow problem, according to Theorem~\ref{thm:ahuja_convex_cost_flow}, $MF_{d_{val}}^r$ is a polynomial problem.} \end{proof} \subsection{Complexity of the robust max-flow with distance $d_{struct}$} \label{sec:mf_dstruct} We show that for distance $d_{struct}$ both the proactive and the reactive problems are $\mathcal N\mathcal P$-hard.
\begin{theorem}$MF_{d_{struct}}^p$ is $\mathcal N\mathcal P$-hard. \label{thm:MFadaptD}\end{theorem} \begin{proof} Similarly to the proof of Theorem~\ref{th:MCF_DV_P} for $MCF_{d_{val}}^p$, we use a reduction from problem $3$-SAT. Let $I_{SAT}$ be an instance of the feasibility problem $3$-SAT with $m$ clauses and $n$ variables. As represented in Figure~\ref{fig:MF_DS_P}, we create an instance $I_{MF}$ of the optimization problem $MF_{d_{struct}}^p$ with a single scenario $\xi$. We prove that an optimal solution of $I_{MF}$ leads to a solution cost of $3n + 2m$ if and only if $I_{SAT}$ is a yes-instance. \vspace{.3cm} \noindent{\textbf{Construction of $I_{MF}$}} In $I_{MF}$, the set of nodes $V$ of graph $G=(V, A)$ is composed of: \begin{itemize} \item $2$ nodes $s$ and $t$; \item {$2n$ \textit{literal nodes} $\s{l_i}_{i=1}^{n}$ and $\s{\overline l_i}_{i=1}^{n}$; } \item {$3n$ \textit{variable nodes} $\s{x^1_i, x^2_i,x^3_i}_{i=1}^n$; } \item {$2m$ \textit{clause nodes} $\s{C^1_p,C^2_p}_{p=1}^{m}$.} \end{itemize} \noindent {The arc set $A$ is composed of the following subsets:} \begin{itemize} \item {$A_{s,l}$ which contains one arc from $s$ to each literal node;} \item {$A_{s,x}$ which contains one arc from $s$ to each first variable node $x^1_i$;} \item {$A_{l,x}$ which contains one arc from each literal node to its corresponding first variable node $x^1$;} \item {$A_{x}$ which contains arcs $(x^1_i, x_i^2)$ and $(x^2_i, x^3_i)$ for each $i$;} \item {$A_{x, t}$ which contains one arc from each last variable node $x^3_i$ to $t$;} \item {$A_{s, C}$ which contains one arc from $s$ to each first clause node $C^1_p$;} \item {$A_{l,C}$ which contains one arc $(l, C^1_p)$ for each clause $C_p$ and each literal $l$ in $C_p$. Hence, $3$ such arcs exist for each clause;} \item {$A_C$ which contains one arc $(C^1_p, C^{2}_p)$ for each clause $C_p$;} \item {$A_{C, t}$ which contains one arc from each clause node $C^{2}_p$ to $t$.} \end{itemize} {Note that the size of the constructed graph is polynomial in $n$ and $m$. It has $5n+2m+2$ nodes and $8n+6m$ arcs.} {As proved below, the value of $c^*$ is equal to $n+m$. The nominal capacities $u$ and the scenario capacities $u^\xi$ for each arc subset are presented in Table~\ref{tab:MF_DS_P}.} \begin{figure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_delta_MF_NComplet.pdf} \caption{Nominal capacities.} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_delta_MF_NSolution.pdf} \caption{Unique optimal nominal solution with flow value $n+m$.} \label{fig:MF_DS_P_N} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_delta_MF_RComplet.pdf} \caption{Capacities of the scenario.} \label{fig:MF_DS_P_RComplet} \end{subfigure}\hfill \begin{subfigure}{0.45\textwidth} \centering \includegraphics{./img/reduction_adaptatif_delta_MF_RSolution.pdf} \caption{Flow of the scenario of an optimal solution.} \label{fig:MF_DS_P_RSolution} \end{subfigure} \caption{{Example of the reduction to $MF^p_{d_{struct}}$ of an instance of the $3$-SAT problem with $3$ variables and $2$ clauses $C_1=(x_1\vee\overline x_2\vee x_3)$ and $C_2=(\overline x_1\vee\overline x_2\vee\overline x_3)$.
The arcs with a capacity or a flow of value $0$ are colored in gray.}} \label{fig:MF_DS_P} \end{figure} \begin{table} \centering\begin{tabular}{lcc} \hline \multirow{2}{*}{\textbf{Arc sets}} & \multicolumn{2}{c}{\textbf{Capacities}} \\ & $u$ & $u^\xi$\\ \hline $A_{s,l}$ & $0$ & $m+1$ \\ $A_{s,x}$, $A_{s, C}$ & $1$ & $0$\\ $A_{l,x}$, $A_{l,C}$ & $0$ & $1$\\ $A_{x}$, $A_{x,t}$, $A_C$, $A_{C,t}$ & $1$ & $1$\\ \hline \end{tabular} \caption{Capacities of each set of arcs for the nominal case and scenario $\xi$ in instance $I_{MF}$.} \label{tab:MF_DS_P} \end{table} \vspace{.3cm} \noindent{\textbf{Solving $I_{SAT}$ from a solution of $I_{MF}$}} {Let $f$ be a nominal flow of $I_{MF}$. As exemplified in Figure~\ref{fig:MF_DS_P_N}, the maximal nominal flow value is $c^*=n+m$ and it can only be reached by a unique flow, which sends:} \begin{itemize} \item {$1$ unit for each clause $C_p$ through path $(s, C^1_p, C^2_p,t)$; and} \item {$1$ unit for each variable $x_i$ through path $(s,x^1_i, x^2_i, x^3_i,t)$.} \end{itemize} {Let $f^\xi$ be the flow of the scenario. An empty flow $f^\xi$ would lead to a solution cost of $4n+3m$ (i.e., the number of arcs used by the nominal flow). This can be improved by sending, for each variable $x$, $1$ unit of flow along one of the two following paths: $P_x=(s,l, x^1, x^2, x^3, t)$ or $\overline P_x=(s, \overline l, x^1, x^2, x^3, t)$. Assume without loss of generality that only $P_x$ is used. This reduces the solution cost by $1$. Indeed, $3$ arcs of the path are also used in the nominal flow ($(x^1, x^2)$, $(x^2, x^3)$, $(x^3, t)$) and $2$ are not ($(s, l)$ and $(l, x^1)$). Note that both $P_x$ and $\overline P_x$ cannot be used simultaneously as the capacity of $(x^1, x^2)$ is equal to $1$. Thus, the solution cost of $f^\xi$ is reduced by $n$ by sending $1$ unit of flow on either $P_x$ or $\overline P_x$ for each variable $x$ (see Figure~\ref{fig:MF_DS_P_RSolution}). Let $A^\xi_{sl}$ be the subset of arcs of $A_{sl}$ now used in $f^\xi$.} {To further reduce the solution cost, $f^\xi$ must use arcs from $A_C$ and $A_{C,t}$. The clause nodes associated with a clause $C$ can only be reached by $f^\xi$ through a path $(s, l, C^1, C^{2}, t)$ with $l$ one of the three literals included in $C$. If $(s,l)\in A^\xi_{sl}$, the use of this path reduces the solution cost by $1$ (since $(C^1, C^2)$ and $(C^2,t)$ are also used in $f$ and $(l, C^1)$ is not), otherwise the solution cost remains the same. Thus, the solution cost can be further reduced by at most $m$ if each clause node is reached through a path which includes an arc in $A^\xi_{sl}$.} {In conclusion, there exists an optimal solution to $I_{MF}$ with solution cost equal to $3n+2m$ if and only if $I_{SAT}$ is satisfiable. In that case, the solution of $I_{SAT}$ consists in setting to true the variables $x_i$ such that $f^\xi_{s,l_i}>0$ and to false the others. } \end{proof} \begin{corollary}$MF_{d_{struct}}^r$ is $\mathcal N\mathcal P$-hard. \end{corollary} \begin{proof} {The reduction used in the proof of Theorem~\ref{thm:MFadaptD} can also be used to prove this theorem since it only requires one scenario.
The main difference is that the unique optimal nominal solution (exemplified in Figure~\ref{fig:MF_DS_P_N}) is given as an input of the reactive problem.} \end{proof} \renewcommand{\arraystretch}{1.3} \section{A case study - The Line Optimization Problem with uncertain demands} \label{sec:experiments} {In this section, we apply our solution robustness approach to a railroad planning problem inspired by~\cite{bussieck1997optimal,goossens2004branch}. We compare our approach with two approaches from the literature and highlight their similarities and differences. We then assess the solution cost reduction when relaxing the optimality constraint on the nominal objective of our approach. Finally, we show the efficiency of the proactive approach over the reactive approach.} The Line Optimization Problem occurs in a railway system with periodic timetables. A line in an urban transportation network is a path from a departure station to an arrival station with stops in intermediary stations. The frequency of a line is the number of times a train has to be operated on this line in a given time interval in order to cover passenger demands. We consider the problem of determining the lines deployment and frequencies in order to minimize the line costs. \subsection{Deterministic problem description} \label{sec:det_ol} We are given the set $\mathcal V $ of stations and the network $\mathcal N=(\mathcal V , E,d)$ where edges of $E$ represent the undirected links between stations and $d_e$ is the length of $e\in E$. An origin-destination matrix OD provides the number of passengers that need to travel from any station $s$ to any other station $s'$ within one hour. We consider a set $L$ of potential lines. A line $\ell$ is characterised by a pair $(s^1_\ell, s^2_\ell)$ of departure and arrival stations together with a shortest path linking the two stations. Trains are planned to run on the deployed lines and all trains are assumed to have the same carriage capacity of $C$ passengers. Therefore, if a line $\ell$ is scheduled with frequency $f$, then up to $C \times f$ passengers per hour may be carried on each edge of line $\ell$. Due to technical considerations, the frequency of any line is limited by a given value $\mbox{MF}\in\mathbb N$. We are also given the cost $K_\ell$ of deploying line $\ell$ and the unitary cost $K'_\ell$ associated with the frequency of line $\ell$. We introduce a binary variable $x_\ell$ equal to $1$ if line $\ell\in L$ is deployed and $0$ otherwise. When $\ell$ is deployed, variable $f_\ell$ indicates the hourly frequency of line $\ell$. A solution $(x, f)$ is feasible for a given origin-destination matrix OD if it is included in set $\mathcal F(\mbox{OD})$ defined by the following constraints: \begin{numcases}{\mathcal F(\mbox{OD})} C \times \displaystyle\sum_{\ell\in L: e \in \ell} f_{\ell} \geq \displaystyle\sum_{\ell\in L: e \in \ell} \mbox{OD}(s^1_\ell, s^2_\ell) & $e \in E$ \label{eq:demand}\\ f_\ell \leq \mbox{MF}\,x_\ell & $\ell\in L$ \label{eq:link}\\ x_\ell \leq f_\ell & $\ell\in L$ \label{eq:link2}\\ f_\ell \in \mathbb{Z}_+ & $\ell\in L$ \nonumber\\ x_\ell \in \{0,1\} & $\ell\in L$ \nonumber \end{numcases} \noindent Constraints~\eqref{eq:demand} enforce the demand satisfaction on each edge. Constraints~\eqref{eq:link} and~\eqref{eq:link2} link variables $x_\ell$ and $f_\ell$. If line $\ell$ is not deployed then $x_\ell$ and $f_\ell$ are equal to zero, otherwise $f_\ell$ is not larger than $\mbox{MF}$.
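{For illustration, the feasibility set $\mathcal F(\mbox{OD})$ can be encoded as follows. This is a minimal sketch assuming the \texttt{pulp} Python library; the containers \texttt{lines}, \texttt{edges}, \texttt{lines\_through} (the lines whose path uses a given edge) and \texttt{OD} (the demand of the endpoints of each line) are hypothetical names for the data introduced above.}
\begin{verbatim}
import pulp

def feasibility_model(OD, lines, edges, lines_through, C, MF):
    # Encode the constraints of F(OD) on variables (x, f).
    prob = pulp.LpProblem("line_optimization", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", lines, cat="Binary")
    f = pulp.LpVariable.dicts("f", lines, lowBound=0, cat="Integer")
    for e in edges:
        # demand satisfaction on edge e
        prob += (C * pulp.lpSum(f[l] for l in lines_through[e])
                 >= pulp.lpSum(OD[l] for l in lines_through[e]))
    for l in lines:
        prob += f[l] <= MF * x[l]  # no frequency on an undeployed line
        prob += x[l] <= f[l]       # a deployed line runs at least once
    return prob, x, f
\end{verbatim}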
\bigskip The nominal objective of a solution $(x, f)$ is defined as \begin{equation} \label{eq:deterministicObjective} \mathcal NO(x, f) = \sum_{\ell\in L}(K_\ell x_\ell + K'_\ell f_\ell) \end{equation} Let OD$^0$ be the nominal origin-destination matrix. The deterministic model is defined as \begin{center} $\overline P\left\{ \begin{array}{ll} \min ~ \mathcal NO(x^0, f^0)\\ \mbox{s.t.} ~(x^0, f^0)\in \mathcal F(OD^0) \end{array} \right.$ \end{center} We now assume that the OD-matrix is uncertain within a set $\mathcal S$ of scenarios. We are given the set $\s{OD^1, ..., OD^{|\mathcal S|}}$ of alternative matrices. {To handle the uncertainty, this case study compares three robust approaches: our proactive approach, the anchored approach~\cite{bendotti2019anchor} and the $k$-distance approach~\cite{busing2012recoverable}. Each approach either constrains or optimizes differences between the nominal solution and the solutions of the scenarios. Our proactive approach minimizes an average distance between the nominal and the scenario solutions. The anchored approach maximises the number of variables whose value is identical in all solutions, while the $k$-distance approach requires that, for each scenario, at most $k$ variables take a different value in the nominal solution and in the scenario solution.} {We consider two cases in which the differences between solutions are either measured on the frequency variables $f$ or on the line deployment variables $x$. For the proactive approach this can be viewed as considering the distance in values $d_{val}$ over $f$ in the first case and the distance in structure $d_{struct}$ over the lines deployment variables $x$ in the second case.} {We show how this problem can be modelled for each approach as a compact integer linear program. The compactness of the formulations is made possible by the fact that the uncertainty is represented by a finite set of scenarios $\mathcal S$ rather than by an uncertainty set.} \subsection{Solution cost based on frequencies} \label{sec:robfreq} {In this section, the differences between solutions are measured through the frequency variables $f^0$ of the nominal solution and $\s{f^s}_{s\in\mathcal S}$ of the scenario solutions.} \medskip \noindent \textbf{Proactive approach}\\ As presented previously, the proactive approach minimizes the {average solution cost between the nominal solution $(x^0, f^0)$ and the scenario solutions $\s{(x^s, f^s)}_{s\in \mathcal S}$} while {setting the nominal objective $\mathcal NO(x^0, f^0)$ to its optimal value $c^*$}. {The solution cost based on frequencies corresponds to the distance in values $d_{val}$ from Definition~\ref{def:value} applied to the frequency variables $f^0$ and $\s{f^s}_{s\in\mathcal S}$ (i.e., $\sum_{s\in \mathcal S}\sum_{\ell\in L} |f^0_{\ell}-f^s_{\ell}|$)}. We consider variable $df_\ell^s$ equal to $|f^s_\ell-f^0_\ell|$ for each $\ell\in L$ and $s \in\mathcal S$: \begin{center} $P^p_f(\mathcal S, c^*)\left\{ \begin{array}{lll} \min & \sum_{s\in\mathcal S}\sum_{\ell\in L} df^s_\ell\\ \mbox{s.t.} & (x^0, f^0)\in \mathcal F(OD^0)\\ & (x^s, f^s)\in \mathcal F(OD^s) & s\in\mathcal S\\ & \mathcal NO(x^0, f^0)= c^*\\ & f^s_\ell - f^0_\ell \leq df^s_\ell & s\in\mathcal S,~\ell\in L\\ & -f^s_\ell + f^0_\ell \leq df^s_\ell & s\in\mathcal S,~\ell\in L \end{array} \right.$ \end{center} \bigskip \noindent \textbf{Anchored approach}\\ A nominal variable is said to be \textit{anchored} {if its value is the same in the nominal solution and in all scenario solutions}.
For each line $\ell\in L$, variable $a_\ell$ is equal to $1$ if $f^0_\ell$ is anchored and $0$ otherwise. This approach maximizes the number of anchored variables while ensuring that the nominal objective of the nominal solution is optimal: \begin{center} $P^a_f(\mathcal S, c^*)\left\{ \begin{array}{lll} \max & \sum_{\ell\in L} a_\ell\\ \mbox{s.t.} &(x^0, f^0)\in \mathcal F(OD^0)\\ & (x^s, f^s)\in \mathcal F(OD^s) & s\in\mathcal S\\ & \mathcal NO(x^0, f^0)= c^*\\ & f^s_\ell - f^0_\ell \leq \mbox{MF}( 1-a_\ell) & s\in\mathcal S,~\ell\in L\\ & -f^s_\ell + f^0_\ell \leq \mbox{MF}( 1-a_\ell) & s\in\mathcal S,~\ell\in L\\ & a_\ell\in\s{0, 1} & \ell\in L \end{array} \right.$ \end{center} \noindent In any optimal solution, $a_\ell$ is equal to one unless $( f^s_\ell - f^0_\ell)$ is different from zero for some scenario $s$. \bigskip \noindent \textbf{$k$-distance approach}\\ In this approach, for each scenario $s\in\mathcal S$, the number of lines $\ell\in L$ for which $f^0_\ell$ and $f^s_\ell$ take different values is limited to $k\in\mathbb N$, and the nominal objective of the nominal solution $(x^0, f^0)$ is minimized. Variable $\delta^s_\ell$ is equal to $1$ if the values of $f^0_\ell$ and $f^s_\ell$ are different and $0$ otherwise. \begin{center} $P^d_f(\mathcal S, k)\left\{ \begin{array}{lll} \min & \mathcal NO(x^0, f^0)\\ \mbox{s.t.}& (x^0, f^0)\in \mathcal F(OD^0)\\ & (x^s, f^s)\in \mathcal F(OD^s) & s\in\mathcal S\\ & f^s_\ell - f^0_\ell \leq \mbox{MF}~ \delta^s_\ell & s\in\mathcal S,~\ell\in L\\ & -f^s_\ell + f^0_\ell \leq \mbox{MF}~ \delta^s_\ell & s\in\mathcal S,~\ell\in L\\ & \sum_{\ell\in L} \delta^s_\ell\leq k & s\in\mathcal S\\ & \delta^s_\ell \in\s{0, 1} & s\in\mathcal S,~\ell\in L \end{array} \right.$ \end{center} \subsection{Solution cost based on lines deployment} \label{sec:roblines} {In this section, the differences between the solutions are measured through the lines deployment variables $x^0$ of the nominal solution and $\s{x^s}_{s\in\mathcal S}$ of the scenario solutions.} \medskip \noindent \textbf{Proactive approach}\\ {The solution cost based on lines deployment corresponds to the distance in structure $d_{struct}$ from Definition~\ref{def:structure} applied to the lines deployment variables $x^0$ and $\s{x^s}_{s\in\mathcal S}$ (i.e., $\sum_{s\in \mathcal S}\sum_{\ell\in L} |\mathbb 1_{x^0_{\ell}>0}-\mathbb 1_{x^s_{\ell}>0}|$)}. We consider variable $dx_\ell^s$ equal to {$|\mathbb 1_{x^s_\ell>0}-\mathbb 1_{x^0_\ell>0}|$} for each $\ell\in L$ and $s\in\mathcal S$. \begin{center} $P^p_x(\mathcal S, c^*)\left\{ \begin{array}{lll} \min & \sum_{s\in \mathcal S}\sum_{\ell\in L} dx^s_\ell\\ \mbox{s.t.} & (x^0, f^0)\in \mathcal F(OD^0)\\ & (x^s, f^s)\in \mathcal F(OD^s) & s\in\mathcal S\\ & \mathcal NO(x^0, f^0)= c^*\\ & x^s_\ell - x^0_\ell \leq dx^s_\ell & s\in\mathcal S,~\ell\in L\\ & -x^s_\ell + x^0_\ell \leq dx^s_\ell & s\in\mathcal S,~\ell\in L \end{array} \right.$ \end{center} \bigskip \noindent \textbf{Anchored approach}\\ For each line $\ell\in L$, the binary variable $a_\ell$ is equal to $1$ if and only if $x^0_\ell$ is anchored, i.e., $x^s_\ell=x^0_\ell$ for all $s\in\mathcal S$.
\begin{center} $P^a_x(\mathcal S, c^*)\left\{ \begin{array}{lll} \max & \sum_{\ell\in L} a_\ell\\ \mbox{s.t.} &(x^0, f^0)\in \mathcal F(OD^0)\\ & (x^s, f^s)\in \mathcal F(OD^s) & s\in\mathcal S\\ & \mathcal NO(x^0, f^0)= c^*\\ &x^s_\ell - x^0_\ell \leq 1-a_\ell & s\in\mathcal S,~\ell\in L\\ &- x^s_\ell + x^0_\ell \leq 1-a_\ell & s\in\mathcal S,~\ell\in L\\ & a_\ell\in\s{0, 1} & \ell\in L \end{array} \right.$ \end{center} \noindent \textbf{$k$-distance approach}\\ Variable $\delta^s_\ell$ is equal to $1$ if the values of $x^0_\ell$ and $x^s_\ell$ are different and $0$ otherwise. \begin{center} $P^d_x(\mathcal S, k)\left\{ \begin{array}{lll} \min & \mathcal NO(x^0, f^0)\\ \mbox{s.t.}& (x^0, f^0)\in \mathcal F(OD^0)\\ & (x^s, f^s)\in \mathcal F(OD^s) & s\in\mathcal S\\ & x^s_\ell - x^0_\ell \leq \delta^s_\ell & s\in\mathcal S,~\ell\in L\\ & -x^s_\ell + x^0_\ell \leq \delta^s_\ell & s\in\mathcal S,~\ell\in L\\ & \sum_{\ell\in L} \delta^s_\ell\leq k & s\in\mathcal S\\ & \delta_\ell^s\in\s{0, 1} & s\in\mathcal S,~\ell\in L \end{array} \right.$ \end{center} \subsection{Numerical results} We adapt an instance from data considered in~\cite{bussieck1997optimal}\footnote{\url{https://www.gams.com/latest/gamslib_ml/libhtml/gamslib_lop.html}} which provides an OD-matrix $OD^0$ and a network $\mathcal N=(\mathcal V , E, d)$ with $|\mathcal V|=23$, $|E|=31$ and $|L|=210$. The set of lines $L$ contains one line $\ell_{ij}$ for each pair of stations $i,j\in \mathcal V $ such that $OD^0_{ij}>0$ or $OD^0_{ji}>0$. The maximal frequency MF is fixed to $6$, which corresponds to one train on the line every $10$ minutes when a time interval of $1$ hour is considered. The line opening costs $\s{K_\ell}_{\ell\in L}$ are deduced from {the costs provided in the instance} and have a mean value of $1300$. A fixed frequency cost $\s{K'_\ell}_{\ell\in L}$ of 100 is considered. The capacity of each train is fixed to $C=200$. A set of $10$ uncertain OD-matrices $\s{OD^1, ..., OD^{10}}$ is generated. To create a reasonable deviation from the nominal OD matrix $OD^0$, the value $OD_{ij}^s$ has a $20\%$ chance of being equal to $OD^0_{ij}$, otherwise it is uniformly drawn from $[0.9\times OD^0_{ij}, 1.1\times OD^0_{ij}]$. Consequently, $OD^s_{ij}=0$ whenever $OD^0_{ij}=0$. \subsubsection{Approaches comparison} The results obtained for the robustness in frequencies and the robustness in lines deployment for the considered approaches are presented in Tables~\ref{tab:freqApproaches} and~\ref{tab:ldApproaches}, respectively. All the problems have been solved to optimality in less than an hour. In these tables, three instances with the first 2, the first 6, and all 10 scenarios are considered and this number of scenarios is reported in the first column. The $k$-distance approach requires fixing the value of parameter $k\in\mathbb N$, which represents the maximal number of differences between the nominal solution and any scenario solution. Since there is no a priori relevant choice for this parameter, we set its value to $0$, $1$, $2$, $4$ and $10$. Consequently, seven rows are associated with each instance, one for the proactive approach, one for the anchored approach, and five for the $k$-distance approach. The proactive and the anchored approaches both ensure that the nominal objective of the nominal solution is equal to its optimal value $c^*$. This value is obtained by solving the nominal problem $\overline P$ a priori.
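{Concretely, this a priori resolution amounts to adding the nominal objective~\eqref{eq:deterministicObjective} to the feasibility sketch of Section~\ref{sec:det_ol} and solving it once. The following hypothetical continuation of that sketch assumes \texttt{K} and \texttt{Kp} hold the costs $K_\ell$ and $K'_\ell$:}
\begin{verbatim}
# Solve the nominal problem P-bar once to obtain c*.
prob, x, f = feasibility_model(OD0, lines, edges, lines_through,
                               C=200, MF=6)
prob += pulp.lpSum(K[l] * x[l] + Kp[l] * f[l] for l in lines)  # NO(x, f)
prob.solve()
c_star = pulp.value(prob.objective)  # 81742 on this instance
\end{verbatim}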
{Note that $\overline P$ is only solved once as $c^*=81742$ depends neither on the number of scenarios nor on the solution cost.} {For each row of Table~\ref{tab:freqApproaches}, a nominal solution $(x^0, f^0)$ as well as scenario solutions $\s{(x^s, f^s)}_{s\in\mathcal S}$ are obtained by solving $P^p_f(\mathcal S, c^*)$, $P^a_f(\mathcal S, c^*)$, or $P^d_f(\mathcal S, k)$. From these solutions, we compute a posteriori the distance $d_{val}=\sum_{s\in\mathcal S}\sum_{\ell\in L} |f^s_\ell-f^0_\ell|$, the number of anchored line frequencies $\sum_{\ell\in L} \mathbb 1_{f^0_\ell=f^s_\ell\, \forall s\in\mathcal S}$ and the nominal objective $\mathcal NO (x^0, f^0)$ through~\eqref{eq:deterministicObjective}.} The nominal objective is reported in the third column. {Since its optimality is imposed by the proactive and the anchored approaches, in both cases it is equal to $c^*=81742$ regardless of the number of scenarios.} This objective is larger for the $k$-distance approach whenever $k$ is small. This highlights a drawback of the $k$-distance approach, which does not allow constraining the nominal objective of the nominal solution. In other applications, a low value of $k$ could even lead to infeasibilities. The number of differences in frequencies $d_{val}$ is reported in the fourth column of Table~\ref{tab:freqApproaches}. {The proactive approach computes a nominal solution of nominal objective $81742$ which additionally minimizes this number of differences ({e.g.}, $50$ for two scenarios).} The number of anchored line frequencies is reported in the fifth column. {Among all solutions of nominal objective value $81742$, the anchored approach returns one which maximizes the number of anchored lines (e.g., $200$ out of $210$ for two scenarios)}. Table~\ref{tab:ldApproaches} contains similar results. Nominal and scenario solutions are computed by solving $P^p_x(\mathcal S, c^*)$, $P^a_x(\mathcal S, c^*)$, or $P^d_x(\mathcal S, k)$. The fourth column contains the number of differences in lines deployment $d_{struct}=\sum_{s\in\mathcal S}\sum_{\ell\in L} |x^s_\ell-x^0_\ell|$ while the fifth column contains the number of lines whose deployment is anchored $\sum_{\ell\in L} \mathbb 1_{x^0_\ell=x^s_\ell\, \forall s\in\mathcal S}$. {The proactive and the anchored approaches do not lead to the same solutions as the solution cost is not optimal in the anchored approach and the number of anchored lines is not optimal in the proactive approach.} These differences increase with the number of scenarios as it becomes harder to satisfy both objectives simultaneously. Note that the increase in {$d_{val}$ and $d_{struct}$} incurred by the anchored approach is often greater than the decrease in the number of anchored variables incurred by the proactive approach. This is due to the fact that anchoring a variable is quite constraining since it requires its value to be identical in the nominal solution and in all the scenario solutions. The advantage is that it guarantees that parts of the nominal solution will not be disrupted. However, this comes at a price in terms of flexibility, which is reflected by an increase of the {solution costs $d_{val}$ and $d_{struct}$}. In the $k$-distance approach, an increase of the nominal objective for low values of $k$ makes it possible to obtain better {solution costs} than the proactive approach and more anchored variables than the anchored approach.
{In particular, when $k=0$ the scenario solutions are necessarily identical to the nominal solution, which leads to a solution cost of $0$ and an anchoring of all the $210$ lines.} However, we observe a quick deterioration of these two objectives when $k$ increases, showing once again that the choice of parameter $k$ can be challenging. {Moreover, the value $k$ is shared by all scenarios while some of them may require fewer changes than others, which can lead to less suitable scenario solutions.} \begin{table}[h] \centering \begin{tabular}{cllr@{}lr@{}l*{20}{l}} \hline \multirow{2}{*}{\textbf{$|\mathcal S|$}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{$\mathcal NO (x^0, f^0) $} & \multicolumn{2}{c}{\textbf{Lines frequency}} & \multicolumn{2}{c}{\textbf{Number of}} & \\ & & & \multicolumn{2}{c}{\textbf{differences} $d_{val}$} & \multicolumn{2}{c}{\textbf{anchored lines}} & \\ \hline \multirow{7}{*}{2} & Proactive & \textbf{81742} & \textbf{50}& & 196&~(-2\%) & \\ & Anchored & {81742}& $\quad$76&~(+52\%) & $\quad$\textbf{200}& & \\ & 0-distance & 89028~(+9\%) & 0&~(-100\%) & 210&~(+5\%) & \\ & 1-distance & 86663~(+6\%) & 12&~(-76\%) & 208&~(+4\%) & \\ & 2-distance & 84740~(+4\%) & 19&~(-62\%) & 206&~(+3\%) & \\ & 4-distance & 82156~(+1\%) & 37&~(-26\%) & 202&~(+1\%) & \\ & 10-distance & 81742 & 81&~(+62\%) & 196&~(-2\%) & \\ \hline \multirow{7}{*}{6} & Proactive & \textbf{81742} & \textbf{120}& & 186&~(-7\%) & \\ & Anchored & {81742} & 246&~(+105\%) & \textbf{199}& & \\ & 0-distance & 91998~(+13\%) & 0&~(-100\%) & 210&~(+6\%) & \\ & 1-distance & 87274~(+7\%) & 28&~(-77\%) & 204&~(+3\%) & \\ & 2-distance & 85178~(+4\%) & 53&~(-56\%) & 200&~(+1\%) & \\ & 4-distance & 82158~(+1\%) & 105&~(-12\%) & 193&~(-3\%) & \\ & 10-distance & 81742 & 259&~(+116\%) & 176&~(-12\%) & \\ \hline \multirow{7}{*}{10} & Proactive & \textbf{81742} & \textbf{196}& & 183&~(-8\%) & \\ & Anchored & {81742} & 449&~(+129\%) & \textbf{198}&& \\ & 0-distance & 94308~(+15\%) & 0&~(-100\%) & 210&~(+6\%) & \\ & 1-distance & 89733~(+10\%) & 44&~(-78\%) & 201&~(+2\%) & \\ & 2-distance & 86485~(+6\%) & 98&~(-50\%) & 200&~(+1\%) & \\ & 4-distance & 83224~(+2\%) & 191&~(-3\%) & 187&~(-6\%) & \\ & 10-distance & 81742 & 436&~(+122\%) & 161&~(-19\%) & \\ \hline \end{tabular}\caption{Result of each approach when considering the solution cost based on frequencies.
{For a given number of scenarios $|\mathcal S|$, the percentage in a cell corresponds to the relative change between the cell value and the value in bold in the same column.}} \label{tab:freqApproaches} \end{table} \begin{table}[h] \centering \begin{tabular}{cllr@{}lr@{}l*{20}{l}} \hline \multirow{2}{*}{\textbf{$|\mathcal S|$}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{$\mathcal NO (x^0, f^0) $} & \multicolumn{2}{c}{\textbf{Lines deployment}} & \multicolumn{2}{c}{\textbf{Number of}} & \\ & & & \multicolumn{2}{c}{\textbf{differences} $d_{struct}$} & \multicolumn{2}{c}{\textbf{anchored lines}} & \\ \hline \multirow{7}{*}{2} & Proactive & \textbf{81742} & $\quad$\textbf{10}& & $\quad$200&~(-1\%) & \\ & Anchored & {81742}& 13&~(+30\%) & \textbf{202}&& \\ & 0-distance & 85709~(+5\%) & 0&~(-100\%) & 210&~(+4\%) & \\ & 1-distance & 84486~(+3\%) & 2&~(-80\%) & 208&~(+3\%) & \\ & 2-distance & 83340~(+2\%) & 4&~(-60\%) & 206&~(+2\%) & \\ & 4-distance & 81950~($<$1\%) & 8&~(-20\%) & 203&~($<$1\%) & \\ & 10-distance & 81742 & 17&~(+70\%) & 195&~(-3\%) & \\ \hline \multirow{7}{*}{6} & Proactive & \textbf{81742} & \textbf{23}& & 196&~(-3\%) & \\ & Anchored & {81742}& 40&~(+74\%) & \textbf{202}& & \\ & 0-distance & 87598~(+7\%) & 0&~(-100\%) & 210&~(+4\%) & \\ & 1-distance & 84774~(+4\%) & 6&~(-74\%) & 204&~(+1\%) & \\ & 2-distance & 83621~(+2\%) & 11&~(-52\%) & 202& & \\ & 4-distance & 81952~($<$1\%) & 22&~(-4\%) & 195&~(-3\%) & \\ & 10-distance & 81742 & 53&~(+130\%) & 177&~(-12\%) & \\ \hline \multirow{7}{*}{10} & Proactive & \textbf{81742}& \textbf{37}&& 190&~(-5\%) & \\ & Anchored & {81742} & 69&~(+86\%) & \textbf{200}&& \\ & 0-distance & 88907~(+9\%) & 0&~(-100\%) & 210&~(+5\%) & \\ & 1-distance & 85921~(+5\%) & 10&~(-73\%) & 203&~(+2\%) & \\ & 2-distance & 84045~(+3\%) & 17&~(-54\%) & 198&~(-1\%) & \\ & 4-distance & 82634~(+1\%) & 37& & 191&~(-4\%) & \\ & 10-distance & 81742 & 98&~(+165\%) & 165&~(-18\%) & \\ \hline \end{tabular}\caption{Result of each approach when considering the solution cost based on lines deployment. {For a given number of scenarios $|\mathcal S|$, the percentage in a cell corresponds to the relative change between the cell value and the value in bold in the same column.}} \label{tab:ldApproaches} \end{table} \subsubsection{Flexibility on the nominal objective} Imposing an optimal nominal objective value on the nominal solution of the proactive approach may be too restrictive, as small increases of the nominal objective may lead to significant decreases of the solution cost. As presented in Equation~\eqref{eq:pro_price_eps}, a parameter $\varepsilon\geq 0$ which corresponds to an acceptable percentage of increase of the nominal objective can be introduced for this purpose. Consequently, the constraint in $P^p_f$ and $P^p_x$ which ensures that the nominal objective is equal to $c^*$ can be replaced by \begin{equation} \mathcal NO(x^0, f^0)\leq c^* (1+\varepsilon). \end{equation} Tables~\ref{tab:freqEps} and~\ref{tab:ldEps} present the results obtained for four values of $\varepsilon$ ($0\%$, $1\%$, $2\%$ and $5\%$) for the robustness in frequencies and the robustness in lines deployment, respectively. In both cases, allowing an increase of the nominal objective makes it possible to significantly reduce the solution cost and increase the number of anchored variables. In particular, for $\varepsilon$ equal to $5\%$, the solution cost is at most equal to $1$ and the number of anchored variables is at least equal to $209$ out of the $210$ lines.
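{In the hypothetical \texttt{pulp} sketch introduced earlier, this relaxation is a one-line change: the equality on the nominal objective is replaced by an inequality (here \texttt{x0}, \texttt{f0} and \texttt{eps} are placeholder names for the nominal variables and the parameter $\varepsilon$):}
\begin{verbatim}
# NO(x0, f0) == c_star  is replaced by  NO(x0, f0) <= c_star * (1 + eps)
nominal_obj = pulp.lpSum(K[l] * x0[l] + Kp[l] * f0[l] for l in lines)
prob += nominal_obj <= c_star * (1 + eps)
\end{verbatim}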
Similar results have been observed in Tables~\ref{tab:freqApproaches} and~\ref{tab:ldApproaches} with the $k$-distance approach for small values of $k$. The major difference with the proactive approach is that it seems easier for a decision maker to choose an acceptable percentage~$\varepsilon$ of deterioration of the nominal objective rather than a maximal number $k$ of allowed differences in terms of variable values. \begin{table}[h] \begin{subtable}{0.45\linewidth} \centering\begin{tabular}{@{}cccc@{}*{20}{l}} \hline\multirow{2}{*}{\textbf{$|\mathcal S|$}} & \multirow{2}{*}{$\mathbf{\varepsilon}$} & \multirow{2}{*}{$\mathcal NO (x^0, f^0) $} & \textbf{Lines frequency} \\ & & & \textbf{differences $d_{val}$}\\ \hline \multirow{4}{*}{2} & 0\% & 81742 & 50 & \\ & 1\% & 82557 & 29 & \\ & 2\% & 83348 & 24 & \\ & 5\% & 85797 & 11 & \\ \hline \multirow{4}{*}{6} & 0\% & 81742 & 120 & \\ & 1\% & 82558 & 71 & \\ & 2\% & 83339 & 55 & \\ & 5\% & 85789 & 32 & \\ \hline \multirow{4}{*}{10} & 0\% & 81742 & 196 & \\ & 1\% & 82558 & 121 & \\ & 2\% & 83338 & 93 & \\ & 5\% & 85790 & 59 & \\ \hline \end{tabular}\caption{Solution cost based on the frequencies.} \label{tab:freqEps} \end{subtable} \begin{subtable}{0.45\linewidth} \centering\begin{tabular}{@{}cccc@{}*{20}{l}} \hline\multirow{2}{*}{\textbf{$|\mathcal S|$}} & \multirow{2}{*}{$\mathbf{\varepsilon}$} & \multirow{2}{*}{$\mathcal NO (x^0, f^0) $} & \textbf{Lines deployment} \\ & & & \textbf{differences $d_{struct}$}\\ \hline \multirow{4}{*}{2} & 0\% & 81742 & 10 & \\ & 1\% & 82448 & 6\\ & 2\% & 83340 & 4\\ & 5\% & 85822 & 0\\ \hline \multirow{4}{*}{6} & 0\% & 81742 & 23 & \\ & 1\% & 82349 & 16\\ & 2\% & 83360 & 11\\ & 5\% & 85809 & 2\\ \hline \multirow{4}{*}{10} & 0\% & 81742 & 37 & \\ & 1\% & 82449 & 25\\ & 2\% & 83286 & 19\\ & 5\% & 85734 & 6\\ \hline \end{tabular}\caption{Solution cost based on the lines deployment.} \label{tab:ldEps} \end{subtable} \caption{Influence of parameter $\varepsilon$ of our proactive approach on the solution cost.} \label{tab:eps} \end{table} \subsection{Proactive and reactive approaches comparison} \label{sec:pro_reac} {The reactive approach is considered when the uncertainty has not been anticipated and the nominal solution $(x^0, f^0)$ is infeasible for a scenario $s$. In that context, the reactive problem $P^r$ provides a reactive solution $(x^r, f^r)$ feasible for $s$ and whose solution cost with respect to $(x^0, f^0)$ is minimal. Problem $P^r$ is similar to the proactive problem $P^p$ except that only one scenario $s\in \mathcal S$ is considered and that the nominal solution $(x^0, f^0)$ is given as an input rather than an output. The reactive line optimization problem associated with the solution cost based on frequencies can be modelled by considering a variable $d_\ell$ equal to $|f^r_\ell - f^0_\ell|$ for each $\ell\in L$:} \begin{center} $P^r(s, x^0, f^0)\left\{ \begin{array}{ll@{\hspace{-.1cm}}l} \min & \sum_{\ell\in L} d_\ell\\ \mbox{s.t.}& (x^r, f^r)\in \mathcal F(OD^s)\\ & f^r_\ell - f^0_\ell \leq d_\ell & \ell\in L\\ & -f^r_\ell + f^0_\ell \leq d_\ell & \ell\in L \end{array} \right.$ \end{center} \vspace{.3cm} {We do not present the reactive model for the solution cost based on lines deployment as it only requires replacing $f^r_\ell$ and $f^0_\ell$ by $x^r_\ell$ and $x^0_\ell$, respectively, in the last two sets of constraints.} {Let $Q^*$ be the set of nominal solutions with an optimal nominal objective $c^*$.
The aim of our proactive approach can be viewed as finding a solution in $Q^*$ which minimizes the sum of the solution costs over the scenarios in set $\mathcal S$. To assess the efficiency of the proactive solution, we compare its solution cost to the ones of four other solutions from $Q^*$. These four solutions have been selected to be as different as possible in order to be representative of $Q^*$ (see Appendix~\ref{sec:div_sol} for more details on the selection process).} {Tables~\ref{tab:reacF} and~\ref{tab:reacX} present the results obtained when considering the solution cost based on frequencies and lines deployment, respectively. {For each considered nominal solution $(x, f)$, the solution cost $v(P^r(s, x, f))$ of any scenario $s\in\mathcal S$ is obtained by solving the reactive problem $P^r(s, x, f)$. The sum of the solution costs over all scenarios in $\mathcal S$ is then obtained by computing $\sum_{s\in\mathcal S} v(P^r(s, x, f))$ and this value is represented in the third column of both tables.} The proactive solution necessarily returns the optimal solution cost and the others lead to a mean increase of $8\%$ with a maximal increase of $24\%$. This significant variability of the solution cost among solutions from $Q^*$ highlights the relevance of considering the proactive approach. In Table~\ref{tab:reacF}, the first reactive solution is very similar to the proactive solution as its solution cost is identical for $2$ and $6$ scenarios and it is only incremented for $10$ scenarios. In Table~\ref{tab:reacX} one reactive solution always leads to an optimal solution cost but it is not always the same depending on the number of scenarios. This shows that the choice of the scenarios is a sensitive task that may significantly impact the proactive solution.} \begin{table}[h] \begin{subtable}{0.45\linewidth} \centering\begin{tabular}{cl@{}r@{}l@{}} \hline \multirow{2}{*}{$|\mathcal S|$} & \multirow{2}{*}{\textbf{Solution}} & \multicolumn{2}{c}{\textbf{Lines frequency}}\\ & & \multicolumn{2}{c}{\textbf{differences} $d_{val}$}\\\hline \multirow{5}{*}{2} & Proactive & \textbf{50} & \\ & Reactive $(x^A, f^A)$ & $\qquad$50 & \\ & Reactive $(x^B, f^B)$ & 57&~(+14\%) \\ & Reactive $(x^C, f^C)$ & 62&~(+24\%) \\ & Reactive $(x^D, f^D)$ & 56&~(+12\%) \\ \hline \multirow{5}{*}{6} & Proactive & \textbf{120} \\ & Reactive $(x^A, f^A)$ & 120 & \\ & Reactive $(x^B, f^B)$ & 123&~(+2\%) \\ & Reactive $(x^C, f^C)$ & 133&~(+11\%) \\ & Reactive $(x^D, f^D)$ & 121&~(+1\%) \\ \hline \multirow{5}{*}{10} & Proactive& \textbf{196} & \\ & Reactive $(x^A, f^A)$ & 197&~(+1\%) \\ & Reactive $(x^B, f^B)$ & 211&~(+8\%) \\ & Reactive $(x^C, f^C)$ & 220&~(+12\%) \\ & Reactive $(x^D, f^D)$ & 204&~(+4\%) \\ \hline \end{tabular}\caption{Solution cost based on frequencies.} \label{tab:reacF} \end{subtable} \begin{subtable}{0.45\linewidth} \centering\begin{tabular}{cl@{}r@{}l@{}} \hline \multirow{2}{*}{$|\mathcal S|$} & \multirow{2}{*}{\textbf{Solution}} & \multicolumn{2}{c}{\textbf{Lines deployment}}\\ & & \multicolumn{2}{c}{\textbf{differences} $d_{struct}$}\\\hline \multirow{5}{*}{2} & Proactive& \textbf{10} & \\ & Reactive $(x^A, f^A)$& 10 & \\ & Reactive $(x^B, f^B)$& $\qquad\quad$12&~(+20\%) \\ & Reactive $(x^C, f^C)$& 11&~(+10\%) \\ & Reactive $(x^D, f^D)$& 11&~(+10\%) \\ \hline \multirow{5}{*}{6} & Proactive& \textbf{23} & \\ & Reactive $(x^A, f^A)$& 26&~(+13\%) \\ & Reactive $(x^B, f^B)$& 28&~(+22\%) \\ & Reactive $(x^C, f^C)$& 24&~(+4\%) \\ & Reactive $(x^D, f^D)$& 23 & \\ \hline \multirow{5}{*}{10} & 
Proactive& \textbf{37} & \\ & Reactive $(x^A, f^A)$ & 38&~(+3\%) \\ & Reactive $(x^B, f^B)$ & 43&~(+16\%) \\ & Reactive $(x^C, f^C)$ & 39&~(+5\%) \\ & Reactive $(x^D, f^D)$ & 37 & \\ \hline \end{tabular}\caption{Solution cost based on lines deployment.} \label{tab:reacX} \end{subtable} \caption{Comparison of the proactive and the reactive approaches with four different optimal nominal solutions $(x^A, f^A)$, $(x^B, f^B)$, $(x^C, f^C)$ and $(x^D, f^D)$. {For each solution $(x^i, f^i)$ and each scenario $s\in\mathcal S$ the reactive problem $P^r(s, x^i, f^i)$ is solved. The values in the table correspond to the sum of the solution costs obtained over all scenarios. For a given number of scenarios $|\mathcal S|$, the percentage in a cell corresponds to the relative change between the cell value and the value in bold.}} \end{table} \section*{Conclusion} {We introduced a new robust approach which optimizes the solution robustness by minimizing the solution cost over a discrete set of scenarios while ensuring the optimality of the nominal objective. We proved that the proactive counterparts of two polynomial network flow problems are $\mathcal N\mathcal P$-hard and that their reactive counterparts are polynomial for $d_{val}$ and $\mathcal N\mathcal P$-hard for $d_{struct}$.} {We showed in a case study of a railroad planning problem that a proactive solution can significantly reduce the solution cost compared to other solutions with an optimal nominal objective. Relaxing the optimality constraint on the nominal objective can also further reduce the solution cost. Unlike the $k$-distance approach, the proactive approach does not require the definition of a parameter $k$, which may prove difficult to fix. We also observed that the anchored approach tends to increase the solution cost more than the proactive approach decreases the number of anchored lines.} {In future works, it would be interesting to study the complexity of other problems or distances in this framework, all the more so if it enables the identification of polynomial proactive integer problems. The discrete set of scenarios $\mathcal S$ could also be replaced by classical sets such as box, budgeted or polytope uncertainty sets. This may lead to more challenging problems as it would no longer be possible to define a compact formulation which associates a set of variables to each possible scenario. Finally, rather than imposing a bound on the nominal objective, the proactive problem could be solved as a bi-objective problem in which both the solution cost and the nominal objective are minimized.} \section*{Acknowledgement} This work originated from R\'emi Lucas's PhD thesis~\cite{lucas2020planification}. The authors thank Fran\c cois Ramond, R\'emi Chevrier and R\'emi Lucas for the fruitful collaboration during this thesis. \clearpage \section*{Appendix}
\section{Introduction} \label{sec:intro} The goal of semantic segmentation is to classify each pixel in an image into semantically meaningful classes. The data-driven representation learning enabled by deep neural networks has led to accurate and compelling segmentation results~\cite{long2015fully, chen2018encoder}. However, most segmentation methods rely on large datasets with full annotations, which are expensive and labour-intensive to collect. Furthermore, the predictable categories are often constrained to those that have been annotated in the dataset. It is, therefore, of great value to reduce the annotation burden by enabling unseen class prediction with only a few training examples. Specifically, given a small number of \emph{support} images with ground truth binary masks of an unseen \emph{target} class, the task is to segment the regions of the same class in unannotated \emph{query} images. This is known as few-shot semantic segmentation (FSS)~\cite{shaban2017one}. While existing methods~\cite{wang2019panet,tian2020prior} have achieved significant improvements over fine-tuning and foreground-background segmentation baselines~\cite{shaban2017one}, we observed a significant drop in performance when the \emph{query} image contains multiple classes (Figure~\ref{fig:num_class}), \emph{i.e.} the more classes \textcolor{black}{(including both training and testing classes defined in the dataset)} the image has, the more difficult it is to segment the \emph{target} class precisely. For example, on the 1-shot PASCAL-5i~\cite{pascal-voc-2012, shaban2017one} benchmark, the current state-of-the-art (PFENet~\cite{tian2020prior}) achieved a meanIoU score of $66.1\%$ for \emph{query} images with one class but only $32.7\%$ for \emph{query} images with four classes. A similar trend has also been observed in the 5-shot results and with other benchmarks such as few-shot COCO~\cite{lin2014microsoft}. We conjecture that the performance drop is caused by the presence of the other non-target classes in the \emph{query} images during training; here, the target class is the class of segmentation interest, whilst in binary segmentation tasks all the non-target classes are labelled as background regardless of their differences. Although the \emph{target}-versus-\emph{non-target} binary classification paradigm seems appropriate for images with a single target class, it may inhibit the model from learning meaningful representations from the \emph{non-target} pixels. Given practical limitations such as finite data and imperfect labels, further discriminating these non-target classes can potentially aid classifying \emph{target} pixels, similar to multi-task learning~\cite{he2017mask}. This is especially true in few-shot learning, in which data-efficient model updating is sought after. Following the above intuition, we propose a novel self-supervised training strategy that generates pseudo-classes in the \emph{query} images. For each episode, a pseudo-class is created by sampling superpixels with high activation from the \emph{query} background. Together with the \emph{query} image, an extra pseudo image-mask pair can be generated to assist training. By guiding the model to discriminate the pseudo-class from the background, this training paradigm forces the model to distinguish possible non-target classes (those unannotated in the training set) present in the background of the \emph{query} images.
As detailed in the remainder of the paper, the proposed method leads to consistent improvement over different architectures and different datasets. Our contributions are summarised as follows: \begin{itemize} \setlength\itemsep{-0.2em} \item We propose a novel self-supervised task that generates pseudo-classes for few-shot semantic segmentation, which encourages the model to learn a more discriminative feature space while adapting to \emph{novel} target classes. \item We highlight the significant performance drop of existing methods as the number of classes in \emph{query} images increases. To the best of our knowledge, it is the first time this issue is highlighted and investigated. \item We present an extensive set of 1-shot and 5-shot experiments that demonstrate the significant improvement from the proposed method, for both PASCAL-5i and COCO datasets. The performance on \emph{query} images with multiple classes was improved and the imbalance between classes was mitigated. \end{itemize} \section{Related Work} \label{sec:related-work} \subsection{Few-shot Semantic Segmentation} OSLSM~\cite{shaban2017one} first introduced the task of few-shot segmentation and demonstrated the effectiveness of episodic training, comparing favourably with a fine-tuning baseline. PLNet~\cite{dong2018few} extracted class-wise prototypes from the support set and made predictions based on cosine similarity between pixels and prototypes. This two-step architecture was later inherited by many works in this field. Building on PLNet, several works proposed to extract more information from the support set to further improve the performance. For instance, PPNet~\cite{liu2020part} and RPMM~\cite{yang2020prototype} generated multiple prototypes for each class through clustering and Expectation-Maximisation. PGNet~\cite{zhang2019pyramid} and DAN~\cite{wang2020few} adopted attention units to adjust the foreground prototype according to query pixel features. OANet~\cite{zhao2020objectness} incorporated an objectness segmentation module to calculate the objectness prior. FWB~\cite{Nguyen_2019_ICCV} introduced regularisation to suppress support image background activation and boosted the prototype with an ensemble of experts. Other improvements can be achieved by refining the feature comparison module. In particular, multi-scale comparison was utilised to overcome spatial inconsistency~\cite{zhang2019canet,tian2020prior,zhang2019pyramid,yang2020prototype}. PANet~\cite{wang2019panet} described a prototype alignment regularisation which encouraged the resulting segmentation model to perform few-shot learning in the reverse direction. However, as discussed, these methods share a common limitation of binary masks during episodic training: pixels not belonging to the sampled target class in query images are all labelled as the same background class, regardless of how semantically different or similar they are. In this work, we address this issue by generating pseudo-classes through superpixels, which creates more training classes and enables discrimination of these non-query classes from the background to provide a better training signal. \subsection{Superpixel Segmentation} Superpixel segmentation is an over-segmentation method that groups pixels coherently based on handcrafted features. It is widely used in image segmentation tasks to reduce computational costs~\cite{hwang2019segsort,mivcuvslik2009semantic}.
As superpixels provide additional local information, they have been used in few-shot semantic segmentation to compensate for the lack of annotated data. For example, SSL-ALPNet~\cite{ouyang2020self} used superpixel-based pseudo-labels to replace manual annotation. An important difference in our use of pseudo-labels is that they are designed to remedy the performance drop when multiple classes are present in the \emph{query} image. PPNet~\cite{liu2020part} enriched support prototypes by extracting features from unlabelled support images based on superpixel segmentation. Rather than focusing on the support set, in this work, we sample superpixels to define pseudo-classes only on the background of the query images. \section{Method} \label{sec:method} \begin{figure*}[ht] \centering \includegraphics[width=.85\linewidth]{img/pipeline.pdf} \caption{An illustration of training data flow. The upper diagram demonstrates the supervised pathway which predicts the query mask, conditioned on the given support example. The lower diagram demonstrates the self-supervised pathway which generates a pseudo-class and predicts the generated mask, conditioned on the pseudo-support example. Networks are shared between the two pathways during training.} \label{fig:pipeline} \end{figure*} \subsection{Task Description} \label{sec:task_desc} Few-shot semantic segmentation aims to learn a model that can segment novel classes when given only a few annotated examples of these classes. Specifically, denote $(I, M(c))$ as an image-mask pair, where $I$ represents an RGB image and $M(c)$ represents its binary mask of a class $c$. The model is trained on a \emph{base} dataset $\mathcal{D}_{base} = \{(I_i, M_i(c)) \mid c \in \mathcal{C}_{base}\}^N_{i=1}$ and tested on a \emph{novel} dataset $\mathcal{D}_{novel} = \{(I_j, M_j(c)) \mid c \in \mathcal{C}_{novel}\}^{N'}_{j=1}$, where \emph{base} classes and \emph{novel} classes are mutually exclusive, \emph{i.e.} $\mathcal{C}_{base} \cap \mathcal{C}_{novel} = \emptyset$. Following the benchmark introduced by \cite{shaban2017one}, the evaluation takes place in an episodic manner: a \emph{target} class $c \in \mathcal{C}_{novel}$ is sampled for each \emph{episode}, consisting of a \emph{support} set $\mathcal{S} = \{ (I^s_i, M^s_i(c)) \}^k_{i=1}$ and a \emph{query} image-mask pair $(I^q, M^q(c))$ of the \emph{target} class. The model is tasked to predict the \emph{query} mask $M^q(c)$ given the \emph{query} image $I^q$ and the \emph{support} set $\mathcal{S}$. \begin{figure} \centering \begin{tabular}{cccc} \includegraphics[width=2.5cm]{img/sampling/image.jpg} & \includegraphics[width=2.5cm]{img/sampling/superpixel.png} & \includegraphics[width=2.5cm]{img/sampling/masked_superpixel.png} & \includegraphics[width=2.5cm]{img/sampling/chosen_superpixel.png} \\ (a)&(b)&(c)&(d) \end{tabular} \caption{Step-by-step breakdown of pseudo-mask generation for self-supervision. (a) Input query image. (b) Generate superpixels in the query image. (c) Remove superpixels that coincide with the original class ground truth. (d) Randomly sample a superpixel with high corresponding feature activation, and convert it into a binary mask.} \label{fig:superpixel_gen} \end{figure} \subsection{Self-supervision from Pseudo-classes} \label{sec:superpixel_method} Most state-of-the-art few-shot semantic segmentation methods follow the episodic training paradigm.
For each iteration, an \emph{episode}, consisting of a \emph{support} set $\mathcal{S} = \{ (I^s_i, M^s_i(c)) \}^k_{i=1}$ and a \emph{query} image-mask pair $(I^q, M^q(c))$ with \emph{target} class $c \in \mathcal{C}_{base}$, is sampled from the \emph{base} dataset $\mathcal{D}_{base}$. As shown in Figure~\ref{fig:pipeline}, both \emph{query} and \emph{support} images are encoded by the feature extractor into feature maps. The \emph{target} class prototype is then derived by global average pooling over all foreground pixels of the support feature maps. The comparison module makes the final prediction based on the input \emph{query} feature maps and the \emph{target} class prototype. The model is optimised to minimise $\mathcal{L}(M^q(c),\hat{M}^q(c))$, the cross-entropy loss between the predicted \emph{query} mask $\hat{M}^q(c)$ and the ground-truth \emph{query} mask $M^q(c)$. Denote $\mathcal{L}(M,\hat{M})$ as the spatially averaged cross-entropy loss between ground-truth $M$ and prediction $\hat{M}$: \begin{equation} \mathcal{L}(M,\hat{M}) = -\frac{1}{WH} \sum^{W}_{x=1} \sum^{H}_{y=1} \left[ M_{(x,y)} \log \hat{M}_{(x,y)} + (1 - M_{(x,y)}) \log (1 - \hat{M}_{(x,y)})\right], \end{equation} where $M_{(x,y)}$ represents the value of mask $M$ at pixel $(x,y)$. Under this training paradigm, \emph{non-target}-class objects with different semantic meanings in the \emph{query} image are all regarded as a single background class. The proposed self-supervised task aims to better utilise the information in the background pixels. First, we apply a superpixel segmentation method (such as that of Felzenszwalb \emph{et al.}~\cite{felzenszwalb2004efficient}) to $I^q$. For each superpixel, denote its region as the pseudo-class $\Dot{c}$ and the binary mask representing the region as the pseudo-mask $M^p(\Dot{c})\in\{0,1\}^{H\times W}$. Second, we refine each pseudo-mask $M^p(\Dot{c})$ with the aid of the ground-truth mask $M^q(c)$ by removing pixels of the \emph{target} class $c$, \begin{align} \tilde{M}^p(\Dot{c}) = M^p(\Dot{c}) \odot (1 - M^q(c)), \end{align} where $\odot$ denotes the element-wise multiplication. Then, a class activation score $s(\Dot{c})$ is calculated for each pseudo-class $\Dot{c}$ by averaging the extracted query feature $F^q$ over the pseudo-mask foreground and channels, specifically: \begin{align} s(\Dot{c}) &= \frac{\sum^{W}_{x=1} \sum^{H}_{y=1} \sum^{d}_{z=1} F^q_{(x,y,z)} \tilde{M}^p_{(x,y)}(\Dot{c})}{\sum^{W}_{x=1} \sum^{H}_{y=1} \sum^{d}_{z=1} \tilde{M}^p_{(x,y)}(\Dot{c})}. \label{eq:activation_score} \end{align} Lastly, a pseudo-class $\hat{c}$ is randomly selected among those with the top-5 scores, so that the selected pseudo-class is more likely to be non-background while keeping diversity between pseudo-classes. The corresponding refined mask is denoted as $\tilde{M}^p(\hat{c})$. As shown in Figure~\ref{fig:pipeline}, the additional image-mask pair $(I^q, \tilde{M}^p(\hat{c}))$ serves as both \emph{pseudo-support} and \emph{pseudo-query}, forming a \emph{pseudo support-query} pair for $I^q$ in the current training iteration. Specifically, the self-supervised task is defined as: given the \emph{pseudo-support} example $(I^q, \tilde{M}^p(\hat{c}))$, the model is required to predict the same mask $\tilde{M}^p(\hat{c})$ for the pseudo-class $\hat{c}$. The corresponding prediction is denoted as $\hat{M}^p(\hat{c})\in [0, 1]^{H \times W}$. Similarly, we compute a cross-entropy loss $\mathcal{L}(\tilde{M}^p(\hat{c}), \hat{M}^p(\hat{c}))$ for this self-supervision.
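For concreteness, the following is a minimal sketch of the pseudo-class generation procedure described above, assuming a NumPy/PyTorch environment with scikit-image providing the superpixel segmentation; all function and variable names are illustrative rather than excerpts of an actual implementation.
\begin{verbatim}
# Illustrative sketch of pseudo-class generation (not an excerpt of
# the actual training code).
import numpy as np
import torch
from skimage.segmentation import felzenszwalb

def sample_pseudo_class(image, target_mask, query_feat, top_n=5):
    """image: HxWx3; target_mask: HxW in {0,1}; query_feat: dxHxW."""
    # (1) Over-segment the query image into superpixels.
    seg = felzenszwalb(image, scale=100, sigma=0.8, min_size=200)
    scores, masks = [], []
    for c in np.unique(seg):
        # (2) Refine the pseudo-mask by removing target-class pixels.
        m = ((seg == c) & (target_mask == 0)).astype(np.float32)
        if m.sum() == 0:
            continue
        # (3) Activation score: query feature summed over the mask and
        #     channels, divided by d * |mask| (Eq. (3)).
        m_t = torch.from_numpy(m)
        s = (query_feat * m_t).sum() / (query_feat.shape[0] * m_t.sum())
        scores.append(float(s))
        masks.append(m)
    # (4) Sample one pseudo-class among the top-n activation scores.
    top = np.argsort(scores)[::-1][:top_n]
    return masks[int(np.random.choice(top))]
\end{verbatim}
The returned mask, paired with $I^q$, then serves as both the pseudo-support and pseudo-query mask in the self-supervised pathway.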
The total loss is therefore a weighted sum of the supervised loss with the target class $c$ and the self-supervised loss with the pseudo-class $\hat{c}$, \begin{equation}\label{eq:total_loss} \mathcal{L}_\text{total} = \mathcal{L}(M^q(c), \hat{M}^q(c)) + \alpha \mathcal{L}(\tilde{M}^p(\hat{c}), \hat{M}^p(\hat{c})), \end{equation} where $\alpha$ is a scaling coefficient to control the contribution of the self-supervised loss towards the overall objective. \section{Experiments} \subsection{Implementation Details} \label{sec:imp_details} We evaluated our methods on the PASCAL-5i~\cite{shaban2017one} (which consists of PASCAL VOC 2012~\cite{pascal-voc-2012} with extended annotations from SDS~\cite{hariharan2014simultaneous}) and COCO~\cite{lin2014microsoft} datasets. Each dataset has 4 configurations (folds) of base-novel class splits, following the OSLSM~\cite{shaban2017one} and FWB~\cite{Nguyen_2019_ICCV} protocols. We chose PFENet~\cite{tian2020prior}, the current state-of-the-art, as our baseline architecture. We also followed the training and evaluation setting of the original paper~\cite{tian2020prior} while removing the training-free prior generation module. The method of Felzenszwalb \emph{et al.}~\cite{felzenszwalb2004efficient} was used for superpixel segmentation, with scale set to 100, sigma to 0.8 and min\_size to 200. The derived superpixel segmentations were further refined by the training (base) class masks. The hyper-parameter $\alpha$ was set to $0.5$ in equation \eqref{eq:total_loss}. We report the meanIoU metric for evaluation, which is the intersection-over-union (IoU) averaged over classes. For 5-shot evaluation, the individual 1-shot models were used without fine-tuning, and we averaged the prototype vectors of the supports before feeding them to the comparison module. For brevity, the prefix ``SS-'' denotes models trained with the proposed self-supervision. \subsection{Results} \label{sec:result} \CatchFileDef{\pascaltable}{pascal_table.tex}{} \begin{table}[ht] \centering \pascaltable \caption{meanIoU results on PASCAL-5i~\cite{shaban2017one}. Best performance with the same model and overall are respectively bolded and starred. ``SS-'' refers to models with self-supervision.} \label{tab:pascal_table} \end{table} \CatchFileDef{\cocotable}{coco_table.tex}{} \begin{table}[ht] \centering \cocotable \caption{meanIoU results on COCO~\cite{lin2014microsoft}. Best performance with the same model and overall are respectively bolded and starred. ``SS-'' refers to models with self-supervision.} \label{tab:coco_table} \end{table} The meanIoU performances per fold for the PASCAL-5i 1-shot and 5-shot tasks are presented in Table~\ref{tab:pascal_table}. By incorporating the self-supervision during the training of ResNet-50 PFENet, we achieved $63.2\%$ meanIoU on the PASCAL-5i 1-shot task, improving its baseline performance by $2.5\%$. For the 5-shot task, we achieved $68.6\%$ meanIoU with a $6.7\%$ absolute improvement. This sets a new state-of-the-art on the PASCAL-5i dataset for both 1-shot and 5-shot. On the COCO benchmark (Table~\ref{tab:coco_table}), we achieved $37.5\%$ and $41.8\%$ meanIoU for 1-shot and 5-shot using VGG-16 PFENet, setting, to the best of our knowledge, a new state-of-the-art. \begin{figure} \begin{tabular}{cc} \includegraphics[width=3.9cm]{img/class_iou_pfenet_pascal.pdf} & \includegraphics[width=7.4cm]{img/class_iou_pfenet_coco.pdf} \\ (a) PASCAL-5i & (b) COCO \end{tabular} \caption{Class IoUs reached by PFENet with and without superpixel self-supervision.
X-axis corresponds to different classes. A larger version with class names annotated is available in the supplementary material.} \label{fig:iou_by_class} \end{figure} Additionally, we present the IoU values per class in Figure~\ref{fig:iou_by_class} for PFENet with and without self-supervision on both PASCAL-5i and COCO. Most of the improvement came from previously under-performing classes (\emph{e.g.} person, dining table and potted plant), and thus led to a more class-wise balanced performance. The meanIoU performances on images with different numbers of classes are plotted in Figure~\ref{fig:num_class}. \textcolor{black}{Here the number of classes is defined by the ground-truth labels. Therefore, for PASCAL and COCO, the maximum available numbers of classes in a query image are respectively 20 and 80.} From Figure~\ref{fig:num_class}(a) and Figure~\ref{fig:num_class}(c), one can observe that the proposed method consistently improved the performance when multiple classes were present in the query image, on both the PASCAL-5i and COCO datasets. This explains the higher relative improvement in performance achieved on COCO ($11.8\%$) compared to PASCAL-5i ($3.1\%$) when the same backbone (VGG-16) was adopted. \begin{figure} \centering \includegraphics[width=8.0cm]{img/pfenet_qualitative.pdf} \caption{Qualitative results of ResNet-50 PFENet with and without the proposed self-supervision. More examples are available in the supplementary materials.} \label{fig:qualitative_example} \end{figure} \textcolor{black}{ Figure~\ref{fig:qualitative_example} shows some qualitative examples achieved by ResNet-50 PFENet with and without the proposed self-supervision. When multiple objects exist in the query image, PFENet succeeds in detecting them from the background but fails to filter out non-target-class objects. This failure in discriminating class differences has been rectified by the proposed self-supervised task, which is likely to increase class diversity by introducing pseudo-classes. } To further understand the impact of the proposed self-supervised task, we used t-SNE~\cite{ljpvd2008visualizing} to visualise the foreground embedding from query images in Figure~\ref{fig:feature_space}. Each dot corresponds to the global average pooled query foreground feature from a testing episode, and different colours represent different classes. The point clouds of different classes tend to be better disentangled, which qualitatively supports our theory and explains the observed results. \begin{figure}[ht] \centering \begin{tabular}{ccc} \includegraphics[width=2.8cm]{img/num_class_pfenet_pascal.pdf} & \includegraphics[width=2.8cm]{img/num_class_panet_pascal.pdf} & \includegraphics[width=5.6cm]{img/num_class_pfenet_coco.pdf} \\ (a) PASCAL-5i & (b) PASCAL-5i & (c) COCO \end{tabular} \caption{meanIoUs reached with and without superpixel self-supervision when different numbers of classes occur in the query image.} \label{fig:num_class} \end{figure} \begin{figure}[ht] \begin{tabular}{cccc} \includegraphics[width=2.8cm]{img/feature_space/pascal_pfenet.png} & \includegraphics[width=2.8cm]{img/feature_space/pascal_ours.png} & \includegraphics[width=2.8cm]{img/feature_space/coco_pfenet.png} & \includegraphics[width=2.8cm]{img/feature_space/coco_ours.png} \\ (a) PFENet & (b) SS-PFENet & (c) PFENet & (d) SS-PFENet \end{tabular} \caption{Visual comparison between feature spaces from PFENet trained with and without the proposed self-supervision.
(a) and (b) were trained on PASCAL-5i~\cite{shaban2017one} excluding \emph{fold2} classes; (c) and (d) were trained on COCO excluding \emph{fold3} classes.} \label{fig:feature_space} \end{figure} \subsection{Ablation Studies} \label{sec:ablation} \subsubsection{Model Architecture} \label{sec:ablation-model-arch} To test the applicability of the proposed self-supervision across different architectures, further experiments were performed on PANet~\cite{wang2019panet}, an intrinsically different architecture that adopts cosine similarity to compare both foreground and background prototypes with each pixel and make predictions. As shown in Table~\ref{tab:pascal_table} and Table~\ref{tab:coco_table}, the proposed method achieved $4.1\%$ and $2.6\%$ absolute improvement on the PASCAL-5i 1-shot and 5-shot tasks, as well as $5.2\%$ and $8.0\%$ absolute improvement on the COCO 1-shot and 5-shot tasks. The consistent improvement on multi-class query images was also observed across architectures, as shown in Figure~\ref{fig:num_class}(b). \subsubsection{Pseudo-class Generation} We compared three other pseudo-class generation processes: 1) Gridding: divide the image evenly into a $10\times 10$ grid, 2) SLIC~\cite{achanta2012slic}: segment the image through iterative clustering, setting compactness to 10 and n\_segments to 100, 3) HED contour detector~\cite{xie2015holistically}: leverage deep supervision to detect edges. The meanIoU scores on the PASCAL-5i 1-shot task are reported in Table~\ref{tab:superpixel_table}. Gridding leads to a significant performance drop to 57.5\%, lower than training without self-supervision. The possible reason may be the presence of multiple classes together with the background in the same grid cell, making the pseudo-classes misleading. We chose to use the second-best algorithm, Felzenszwalb, instead of HED because HED was pre-trained on the SDS dataset~\cite{hariharan2014simultaneous}, which is part of our test dataset and may cause information leakage. \CatchFileDef{\superpixeltable}{superpixel_table.tex}{} \begin{table}[ht] \centering \superpixeltable \caption{Performance of ResNet-50 PFENet on the PASCAL-5i 1-shot task with self-supervision using different superpixel segmentation algorithms.} \label{tab:superpixel_table} \end{table} We also compared three different pseudo-class sampling strategies: 1) select the superpixel with the highest activation score calculated in Eq.~\ref{eq:activation_score}, 2) sample a superpixel from among those with the top-5 activation scores, 3) sample a superpixel from the entire image randomly. The meanIoUs on the PASCAL-5i 1-shot task are reported in Table~\ref{tab:toppick_table}, with the ``top-5'' strategy performing the best. This is understandable: on the one hand, top-5 provides more pseudo-classes for training than top-1; on the other hand, selecting a random superpixel without ranking may treat an over-segmented background region (\emph{e.g.} part of the sky) as different from other background regions (\emph{e.g.} other superpixels of the sky), thus misguiding the model.
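For reference, the gridding and SLIC baselines compared above can be generated in a few lines; this is a hedged sketch with the parameters reported in this section (scikit-image assumed, helper names illustrative):
\begin{verbatim}
import numpy as np
from skimage.segmentation import slic

def grid_pseudo_labels(h, w, n=10):
    # Divide the image evenly into an n x n grid; each cell is a
    # candidate pseudo-class, refined and scored as in the method.
    ys = np.arange(h) * n // h
    xs = np.arange(w) * n // w
    return ys[:, None] * n + xs[None, :]

def slic_pseudo_labels(image):
    # Iterative-clustering superpixels with the settings used above.
    return slic(image, n_segments=100, compactness=10)
\end{verbatim}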
\CatchFileDef{\toppicktable}{toppick_table.tex}{} \begin{table}[ht] \centering \toppicktable \caption{Performance of ResNet-50 PFENet on the PASCAL-5i 1-shot task with self-supervision when choosing the pseudo-class from superpixels with the top-n activation scores.} \label{tab:toppick_table} \end{table} \textcolor{black}{ \subsubsection{Effect of Novel Class in Training Background} Though no labels of the testing classes are available during training, their presence in the training images could still confer an advantage when sampled \emph{pseudo-classes} overlap with those regions. This ablation study examines the performance of the proposed method without such an advantage, by masking all pixels belonging to the testing classes in the training images so that testing-class objects cannot be part of any pseudo-class during training. As shown in Table~\ref{tab:mask_novel_table}, the meanIoU decreased by only $0.1\%$ and $0.4\%$ respectively for the PANet and PFENet architectures. This minor decrease indicates the ability of the proposed self-supervision to generalise to completely unseen novel classes. } \CatchFileDef{\masknoveltable}{mask_novel_table.tex}{} \begin{table}[ht] \centering \masknoveltable \caption{meanIoU results on PASCAL-5i~\cite{shaban2017one}. All models adopt ResNet-50 as the backbone, and the best performance achieved by the same model is bolded. ``SS-'' refers to models with self-supervision and ``-mask'' refers to models trained with pixels belonging to novel classes masked.} \label{tab:mask_novel_table} \end{table} \subsubsection{Effect of $\alpha$ Parameter} \CatchFileDef{\alphatable}{alpha_table.tex}{} \begin{table}[ht] \centering \alphatable \caption{Performance of ResNet-50 PFENet on PASCAL-5i~\cite{shaban2017one} with self-supervision using different $\alpha$ values.} \label{tab:alpha_table} \end{table} \textcolor{black}{ We varied the self-supervised loss scaling factor $\alpha$ and report the performance achieved on PASCAL-5i~\cite{shaban2017one} in Table~\ref{tab:alpha_table}. $\alpha = 0.5$ appears to be the optimal value. This is reasonable because the pseudo-labels generated by superpixel segmentation are inaccurate (over-segmentation) and thus less aligned with our objective than ground-truth annotations. } \section{Conclusion} \label{sec:conclusion} In this work, we first raised the issue of the dependency between few-shot segmentation performance and the inherent complexity of individual \emph{query} images, specifically, the presence of additional classes. We have provided quantitative evidence showing that segmentation accuracy for \emph{novel} classes decreases as more latent objects appear in the background alongside the \emph{target} class of interest. We directly address the lack of background-class supervision by proposing a novel strategy that generates self-supervising pseudo-labels for the non-target classes. Extensive experimental results have shown that the proposed approach consistently improved few-shot segmentation performance on images that contain multiple classes of objects, for different network architectures on multiple datasets. Finally, we showed that different classes were indeed better disentangled in the embedding space using our method. The proposed methodology is not limited by the choice of pseudo-supervision generation method; here, we proposed a superpixel-based approach that is simple and effective.
Future work will investigate alternatives that, for instance, are based on higher-level features, to scale better to intra-class variations such as shape and colour, and will study their potential latent relationship to model adaptability and predictive performance.
\begin{document} \twocolumn[ \aistatstitle{Diversified Sampling for Batched Bayesian Optimization with Determinantal Point Processes} \aistatsauthor{ Elvis Nava \And Mojm\'ir Mutn\'y \And Andreas Krause } \aistatsaddress{ ETH Zurich\\\texttt{elvis.nava@ai.ethz.ch} \And ETH Zurich\\\texttt{mojmir.mutny@inf.ethz.ch} \And ETH Zurich\\\texttt{krausea@inf.ethz.ch} } ] \begin{abstract} \looseness -1 In Bayesian Optimization (BO) we study black-box function optimization with noisy point evaluations and Bayesian priors. Convergence of BO can be greatly sped up by batching, where multiple evaluations of the black-box function are performed in a single round. The main difficulty in this setting is to propose, at the same time, diverse and informative batches of evaluation points. In this work, we introduce {\em DPP-Batch Bayesian Optimization (DPP-BBO)}, a universal framework for inducing batch diversity in sampling-based BO by leveraging the repulsive properties of Determinantal Point Processes (DPP) to naturally diversify the batch sampling procedure. We illustrate this framework by formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to sample from it. We then prove novel Bayesian simple regret bounds for both classical batched TS and our counterpart DPP-TS, with the latter bound being tighter. Our real-world, as well as synthetic, experiments demonstrate improved performance of DPP-BBO over classical batching methods with Gaussian process and Cox process models. \end{abstract} \begin{figure*}[h] \vspace{-0.4cm} \begin{tabular}{c@{\hskip-5mm}c@{\hskip-5mm}c} \includegraphics[width=60mm]{{figs/1-pmax.pdf}} & \includegraphics[width=60mm]{{figs/2-phal.pdf}} & \includegraphics[width=60mm]{{figs/3-pdpp.pdf}} \end{tabular} \vspace{-0.4cm} \caption{\looseness -1 Diversification Demonstration. Given a Gaussian Process posterior on $f$ defined over $\mathcal{X} = [-0.5,0.5]$, we sample a batch of $B=2$ evaluation points for our next optimization iteration using a randomized Batch BO algorithm. With Thompson Sampling (a), this corresponds to sampling from the symmetric 2d distribution $P_{\max}(x_1,x_2) = p_{\max}(x_1)p_{\max}(x_2)$. We wish to sample diverse batches; therefore, we would like to reduce the probability mass near the diagonal (where $x_1 = x_2$). To do so, we can use hallucinated observations (b) or our DPP-TS sampling distribution (c), which exploits DPP repulsion properties. It is apparent how $P_{\text{DPP-TS}}$, by assigning much less probability mass to locations near the diagonal, disfavors the selection of non-diverse batches.} \label{fig:pmaxfigs} \end{figure*} \section{INTRODUCTION} Gradient-free optimization of noisy black-box functions is a broadly relevant problem setting, with a multitude of applications such as de-novo molecule design \citep{gonzalez_bayesian_2015}, electron laser calibration \citep{Kirschner2019, Kirschner2019b}, and hyperparameter selection \citep{snoek_practical_2012}, among many others. Several algorithms have been devised for such problems, some with theoretical guarantees, broadly referred to as Bayesian optimization \citep{Mockus1982} or multi-armed bandits \citep{berry_bandit_1985}. Our work falls within Bayesian optimization, as we assume a known prior for the unknown function and use evaluated points to update our belief about it.
In BO, the optimization procedure is performed sequentially by evaluating the noisy function on locations informed by past observations. In many real-world applications, multiple evaluations (experiments) can be executed in parallel. We refer to this setting as {\em batched Bayesian optimization (Batch BO)}. This is a common situation when the experimental process is easily parallelizable, such as in high-throughput wetlab experiments, or parallel training of multiple ML models on a cluster. A main concern in the batched setting is \emph{batch diversification}: guaranteeing that the selected experimental batch does not perform redundant evaluations. We tackle this problem via Determinantal Point Processes (DPP) \citep{kulesza_determinantal_2012}, a family of repulsive stochastic processes on sets of items. DPPs have already been successfully employed for Experimental Design \citep{derezinski_bayesian_2020}, optimization \citep{Mutny2020b}, and in combination with a deterministic batched Bayesian optimization algorithm \citep{kathuria_batched_2016}. In this work, we show how DPP-based diversification can naturally, and in a principled manner, be integrated into randomized algorithms for BO. Of special interest is the Thompson sampling BO algorithm, which is randomized but universally applicable \citep{Thompson1933}, often empirically outperforms UCB \citep{Chapelle2011}, and in some cases has better computational properties \citep{Mutny2020}. \subsection{Our Contribution} In this work we introduce a framework for randomized Batched Bayesian Optimization diversification through DPPs (DPP-BBO). Our main result is an algorithm called DPP-TS, which samples from a Regularized DPP, capturing both Thompson Sampling (posterior sampling) and information-theoretic batch diversity. We use a Markov Chain Monte Carlo (MCMC) approach adapted from the DPP literature \citep{anari_monte_2016} to sample batches for this new algorithm. We establish improved Bayesian Simple Regret bounds for DPP-TS compared to classical batching schemes for Thompson Sampling, and experimentally demonstrate its effectiveness w.r.t.~BO baselines and existing techniques, both on synthetic and real-world data. Lastly, we demonstrate the generality of our diversification framework by applying it to an alternative randomized BO algorithm called \emph{Perturbed History Exploration} (PHE) \citep{kveton_perturbed-history_2020}, and extend it to cover Cox Process models in addition to classically assumed Gaussian Processes. \section{BACKGROUND}\label{background} \paragraph{Bayesian Optimization} The problem setting for Bayesian Optimization (BO) is as follows: we select a sequence of actions $x_t \in \mathcal{X}$, where $t$ denotes the \textit{iteration count} so that $t \in [1,T]$, and $\mathcal{X}$ is the action domain, which is either discrete or continuous. For each chosen action $x_t$, we observe a noisy reward $y_t = f(x_t) + \epsilon_t$ in sequence, where $f : \mathcal{X} \rightarrow \mathbb{R}$ is the unknown reward function, and $\epsilon_t$ are assumed to be i.i.d.~Gaussian s.t.~$\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ with known variance. Most BO algorithms select each point $x_t$ through maximization of an \textit{acquisition function} $x_t = \argmax_{x \in \mathcal{X}} u_t(x|D_{t-1})$, determined by the state of an internal Bayesian model of $f$.
We indicate with $D_{t-1} = \lbrace (x_1,y_1), \ldots, (x_{t-1},y_{t-1}) \rbrace$ the filtration consisting of the history of evaluation points and observations up to and including step $t-1$, on which the model is conditioned. The main algorithmic design choices in BO are which acquisition function and which internal Bayesian model of $f$ to use. \paragraph{Gaussian Processes} Obtaining any theoretical convergence guarantees in infinite or continuous domains is impossible without assumptions on the structure of $f$. A common assumption in Bayesian Optimization is that $f$ is a sample from a Gaussian Process (GP) \citep{rasmussen_gaussian_2005} prior, which has the property of being versatile yet allowing for posterior updates to be obtained in closed form. Many BO algorithms make use of an internal GP model of $f$, which is initialized as a prior and then sequentially updated from feedback. This GP is parametrized by a kernel function $k(x,x^\prime)$ and a mean function $\mu(x)$. To denote that $f$ is sampled from the GP, we write $f \sim GP(\mu, k)$. \paragraph{Regret Minimization} We quantify our progress towards maximizing the unknown $f$ via the notion of \textit{regret}. In particular, we define the \textit{instantaneous regret} of action $x_t$ as $r_t = f(x^\star) - f(x_t)$, with $x^\star = \argmax_{x \in \mathcal{X}} f(x)$ being the optimal action. A common objective for BO is that of minimizing {\em Bayesian Cumulative Regret} $\text{BCR}_T = \mathbb{E}\left[\sum_{t=1}^T r_t\right] = \mathbb{E}\left[\sum_{t=1}^T \left( f(x^\star) - f(x_t) \right)\right]$, where the expectation is over the prior of $f$, observation noise and algorithmic randomness. Obtaining bounds on the cumulative regret that scale sublinearly in $T$ allows us to prove convergence of the \textit{average regret} $\text{BCR}_T / T$, therefore also minimizing the {\em Bayesian Simple Regret} $\text{BSR}_T = \mathbb{E}\left[\min_{t \in [1,T]} r_t\right] = \mathbb{E}\left[\min_{t \in [1,T]} f(x^\star) - f(x_t) \right]$ and guaranteeing convergence of our optimization of $f$. \paragraph{Batch Bayesian Optimization} We define {\em Batch Bayesian Optimization (BBO)} as the setting where, instead of sequentially proposing and evaluating points, our algorithms propose a batch of points of size $B$ at every iteration $t$. Importantly, the batch must be finalized {\em before} obtaining any feedback for the elements within it. Batched Bayesian Optimization algorithms encounter two main challenges with respect to performance and theoretical guarantees: proposing diverse evaluation batches, and obtaining regret bounds competitive with full-feedback sequential algorithms, sublinear in the total number of experiments $BT$, where $T$ denotes the iteration count and $B$ the batch size. \paragraph{Determinantal Point Processes (DPPs)} \citep{kulesza_determinantal_2012} are a family of point processes characterized by the property of \textit{repulsion}. We define a point process $P$ over a set $\mathcal{X}$ as a probability measure over subsets of $\mathcal{X}$. Given a similarity measure for pairs of points in the form of a kernel function, Determinantal Point Processes place high probability on subsets that are \textit{diverse} according to the kernel. We will now describe DPPs for finite domains due to their simplicity; however, their definition can be extended to continuous $\mathcal{X}$.
For our purposes, we restrict our focus to L-ensemble DPPs: given a so-called L-ensemble kernel $L$ defined as a matrix over the entire (finite) domain $\mathcal{X}$, a Determinantal Point Process $P_L$ is defined as the point process such that the probability of sampling the set $X \subseteq \mathcal{X}$ is proportional to the determinant of the kernel matrix $L_X$ restricted to $X$ \begin{equation} P_{L}(X) \propto \det\left( L_X \right)\text{.} \end{equation} \looseness -1 Remarkably, the required normalizing constant can be obtained in closed form as $\sum_{X \subseteq \mathcal{X}} \det(L_X) = \det(L+I)$. If the kernel $L$ is such that $L_{ij} = l(x_i, x_j)$, for $x_i, x_j \in \mathcal{X}$, encodes the similarity between any pair of points $x_i$ and $x_j$, then the determinant $\det\left( L_X \right)$ will be greater for diverse sets $X$. Intuitively, for the linear kernel, diversity can be measured by the volume of the $|X|$-dimensional parallelepiped spanned by the vectors in $X$ \citep[see Section 2.2.1 from][]{kulesza_determinantal_2012}. For our application, we require sampling of batches of points with a specific predetermined size. For this purpose, we focus on $k$-DPPs. A $k$-DPP $P_L^k$ over $\mathcal{X}$ is a distribution over subsets of $\mathcal{X}$ with fixed cardinality $k$, such that the probability of sampling a specific subset $X$ is proportional to that for the generic DPP case: $P_L^k(X) = \frac{\det\left( L_X \right)}{\sum_{X^\prime \subseteq \mathcal{X}, |X^\prime| = k} \det\left(L_{X^\prime}\right)}$. Sampling from DPPs and $k$-DPPs can be done with a number of efficient exact or approximate algorithms. The seminal exact sampling procedure for $k$-DPPs from \citet{deshpande_efficient_2010} requires time $O(kN^{\omega + 1}\log N)$ in the batch size $k$ and the size of the domain $N$, with $\omega$ being the exponent of the arithmetic complexity of matrix multiplication. This does not scale well to large domains, nor does it work for the continuous case. Fortunately, an efficient MCMC sampling scheme with complexity $O(Nk\log(\epsilon^{-1}))$, introduced by \citet{anari_monte_2016}, works much better in practice. Variants of such MCMC schemes have been proven to also work for continuous domains \citep{rezaei_polynomial_2019}.
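As a concrete illustration of these definitions, the following brute-force sketch (NumPy assumed) enumerates all $k$-subsets of a small finite domain and normalizes the determinants; it is meant only to make the distribution tangible, since practical use relies on the samplers cited above.
\begin{verbatim}
import itertools
import numpy as np

def k_dpp_probs(L, k):
    """Exact k-DPP probabilities, P(X) proportional to det(L_X),
    on a small finite domain (brute-force enumeration)."""
    n = L.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    dets = np.array([np.linalg.det(L[np.ix_(s, s)]) for s in subsets])
    return subsets, dets / dets.sum()

# Example: three points where items 0 and 1 are nearly identical.
L = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
subsets, probs = k_dpp_probs(L, k=2)
# The diverse pairs {0,2} and {1,2} each receive roughly five times
# the probability mass of the redundant pair {0,1}.
\end{verbatim}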
This bound matches lower bounds in $T$ \citep{Scarlett2017}. \paragraph{Batched UCB and Pure Exploration}\label{bucbbackground} For Batched BO, heuristic algorithms such as Simulation Matching \citep{azimi_batch_2010} or Local Penalization \citep{pmlr-v51-gonzalez16a} attempt to solve the problem of generating informative and diverse evaluation point batches, albeit without theoretical guarantees on regret. In particular, Local Penalization selects explicitly diversified batches by greedily penalizing already-sampled points with penalization factors in the acquisition function. \citet{desautels_parallelizing_2014} are the first to provide a theoretically justified batched algorithm, introducing GP-BUCB, a batched variant of GP-UCB. To induce diversity within batches, they use \textit{hallucinated observations}, so that ${x_{\text{GP-BUCB}}}_{t,b}$ is sampled by maximizing a UCB based on the hallucinated posterior $\tilde{D}_{t,b-1}$. The hallucinated history is constructed by using the posterior mean in place of the observed reward for points with delayed feedback. GP-BUCB attains a cumulative regret bound of $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$, which, however, requires an \textit{initialization phase} before the deployment of the actual algorithm. For the first $T_{\text{init}}$ iterations, the evaluations are chosen with Uncertainty Sampling, picking the point satisfying $x_t = \argmax_{x \in \mathcal{X}} \sigma_t(x)$, effectively exploring the whole domain, which limits the practicality of the method. To alleviate this, \citet{contal_parallel_2013} introduce the alternative GP-UCB Pure Exploration (GP-UCB-PE) algorithm, which mixes the UCB acquisition function with a Pure Exploration strategy. When sampling a batch at iteration $t$, GP-UCB-PE operates in two phases: the first point of each batch is sampled with standard GP-UCB, while the remaining $B-1$ points are sampled by first defining a \textit{high probability region} $\mathfrak{R}^+$ for the maximizer, and then performing Uncertainty Sampling ${x_{\text{UCB-PE}}}_{t,b} = \argmax_{x \in \mathfrak{R}^+} \sigma_{t,b}(x)$. GP-UCB-PE's cumulative regret is bounded by $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$ without an initialization phase, as opposed to GP-BUCB. \paragraph{Batched TS} \citet{kandasamy_parallelised_2018} are the first to consider batching with Thompson sampling and GPs. They propose to simply resample multiple times from the posterior within each batch, effectively lifting the Thompson Sampling algorithm as-is to the batched case. By repeating TS sampling for each point within the batch, they bound the Bayesian cumulative regret by $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$. It is possible but not required to use hallucinated observations (hal-TS). However, an initialization phase identical to that of GP-BUCB is needed for their proof of the bound to hold. A novel result from our work is an improved proof technique such that the initialization phase for Batched TS is not required for the Bayesian simple regret version of the bound to hold. \paragraph{DPPs in Batched BO} \citet{kathuria_batched_2016} use Determinantal Point Process sampling to define a variation of GP-UCB-PE \citep{contal_parallel_2013}, called UCB-DPP-SAMPLE.
They observe that the Uncertainty Sampling phase of GP-UCB-PE corresponds to greedy maximization of the posterior covariance matrix determinant $\det\left({K_{t,1}}_X\right)$ with respect to batches $X$ of size $B-1$ from $\mathfrak{R}^+$, with ${K_{t,1}}_X$ being the covariance matrix produced by the posterior kernel of the GP after step $(t,1)$ and restricted to the set $X$. Finding the $(B-1)$-sized submatrix of maximum determinant is an NP-hard problem, and picking each element greedily so that it maximizes $\sigma^2_{t,b}(x) = k_{t,b}(x,x)$ fails to guarantee the best solution. Maximizing the above determinant is also equivalent to maximizing $\det\left({L_{t,1}}_X\right)$ for the DPP L-ensemble kernel defined as $L_{t,1} = I + \sigma^{-2}K_{t,1}$, called the \emph{mutual information kernel} \citep{kathuria_batched_2016}. Instead of selecting the last $B-1$ points of each batch with Uncertainty Sampling, UCB-DPP-SAMPLE samples them from a $(B-1)$-DPP restricted to $\mathfrak{R}^+$ with the mutual information L-ensemble kernel $L_{t,1} = I + \sigma^{-2}K_{t,1}$. \citet{kathuria_batched_2016} provide a bound for UCB-DPP-SAMPLE as a variation of the $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$ bound for GP-UCB-PE. However, as we illustrate in Appendix \ref{appendix:kathuriabound}, their bound is \emph{necessarily worse} than the existing one for GP-UCB-PE. The concurrent work of \citet{nguyen_optimal_2021} is another recent example of DPP usage in BBO diversification, proposing DPP sampling (with the DPP kernel informed by a GP posterior) as a method of diverse batch selection, demonstrating good performance in experimental tasks, but without known theoretical regret guarantees. \section{THE DPP-BBO FRAMEWORK}\label{dppbboframework} A key insight our approach relies on is to view Thompson Sampling as a procedure that samples at each step from a \textit{maximum distribution} $p_{\max}$ over $\mathcal{X}$, so that $x_t \sim p_{\max, t}$ with \begin{gather} p_{\max,t}(x) = \mathbb{E}_{\tilde{f} \sim \text{Post}_t}\bigg[ \mathds{1}[x = \argmax_{x^\prime \in \mathcal{X}}\tilde{f}(x^\prime)]\bigg] \text{.} \end{gather} A simple approach towards Batched Thompson Sampling is to obtain a batch $X_t$ of evaluation points (with $|X_t| = B$) by sampling $B$ times from the posterior in each round. This can again be interpreted as \begin{equation} X_t \sim P_{\max,t} ~ \text{with} ~ P_{\max,t}(X) = \prod_{x_b \in X} p_{\max,t}(x_b) \text{.} \end{equation} This way, we can view Thompson Sampling, or any other randomized Batch BO algorithm, as iteratively sampling from a batch distribution over $\mathcal{X}^B$ dependent on $t$. The main downside of this simple approach is that {\em independently} obtaining multiple samples may lead to redundancy. As a remedy, in our DPP-BBO framework, we modify such sampling distributions by {\em reweighting} them with a DPP likelihood. This technique is general, and allows us to apply DPP diversification to {\em any} randomized BBO algorithm with batch sampling likelihood $P_{\text{A},t}(X)$. \begin{definition}[DPP-BBO Sampling Likelihood]\label{dppbbodef} The batch sampling likelihood of generic DPP-BBO at step $t$ is \begin{equation} P_{\text{DPP-BBO}, t}(X) \propto P_{\text{A}, t}(X) \det({L_t}_X) \end{equation} with $L_t$ being a DPP L-ensemble kernel defined over the domain $\mathcal{X}$. \end{definition} Notice that the domain $\mathcal{X}$ does not need to be discrete, even though we introduced the approach on discrete ground sets in order to simplify notation.
This is in contrast to the existing DPP-based BO algorithm from \citet{kathuria_batched_2016}, which requires the domain to be discrete in order to efficiently sample the DPP restricted to the arbitrary region $\mathfrak{R}^+$ in the general case. We now proceed to justify our formulation, defining the DPP-Thompson Sampling (DPP-TS) procedure in the process. \subsection{The DPP quality-diversity decomposition} DPPs capture element diversity but also take into account element \textit{quality} independently of the similarity measure, as illustrated by \citet{kulesza_determinantal_2012}. Namely, L-ensemble DPPs can be decomposed into a quality-diversity representation, so that the entries of the L-ensemble kernel for the DPP are expressed as $L_{ij} = q_i \phi_i^\top \phi_j q_j$, with $q_i \in \mathbb{R}^+$ representing the \textit{quality} of an item $i$, and $\phi_i \in \mathbb{R}^m$, $\|\phi_i\| = 1$, being normalized \textit{diversity} features. We also define $S$ with $S_{ij} = \phi_i^\top \phi_j$. This allows us to represent the DPP model as $P_L(X) \propto \left(\prod_{i \in X} q_i^2 \right) \det(S_X)$. We then consider a $k$-DPP with L-ensemble kernel $L$ in its quality-diversity representation, and re-weight the quality values of items by their likelihood under a randomized Bayesian Optimization sampling scheme $p_{\text{A},t}(x)$, such as the Thompson Sampling likelihood $p_{\max,t}(x)$. Following this approach, we can obtain a new $k$-DPP likelihood by renormalizing the product of the Thompson Sampling likelihood of the batch $P_{\max}$ and an existing DPP likelihood $P_L$ for $X = \lbrace x_1, \ldots, x_B \rbrace$: \begin{align} \label{pbatch} P_{\text{DPP-TS}}(X) & \propto \left(\prod_{x_b \in X} p_{\max}(x_b) \right) P_L(X)\\ & \propto \left(\prod_{x_b \in X} p_{\max}(x_b) \right) \det(L_X)\\ & \propto \left(\prod_{x_b \in X} p_{\max}(x_b) \cdot q_{x_b}^2 \right) \det(S_X) \end{align} The result is a $k$-DPP with L-ensemble kernel $\tilde{L}_{ij} = \sqrt{p_{\max}(x_i)p_{\max}(x_j)}L_{ij}$, generalizing the sampling distribution for batched TS as a stochastic process with repulsive properties. To recover original batched TS, we just need to set $L_t = I$. \subsection{The Mutual Information Kernel} For our choice of kernel, we follow the insight from \citet{kathuria_batched_2016} and use $L_t = I + \sigma^{-2} K_t$, with $K_t$ being the GP posterior kernel at step $t$. Consequently, the DPP log-likelihood of a set $X$ at time $t$ is proportional to the mutual information between the true function $f$ and the observations obtained from $X$: $I(f_X;\mathbf{y}_X|\mathbf{y}_{1:t-1,1:B}) = \frac{1}{2} \log \det (I + \sigma^{-2}{K_t}_X)$ (see Appendix \ref{infobackground}). This is an example of a so-called {\em Regularized $k$-DPP}: a $k$-DPP such that a symmetric positive semidefinite regularization matrix $A$ is added to an original unregularized L-ensemble DPP kernel, here for the particular case of $A = \lambda I$. In such a setting, we allow for the same element to be selected multiple times and enforce that any set $X$ must have nonzero probability of being selected. By tuning the strength of the regularization, we can tune how extreme we wish our similarity repulsion to be.
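Before stating the combined sampling distribution, it is worth making its two ingredients concrete: a draw from $p_{\max,t}$ is obtained by sampling one posterior realization and maximizing it, and the DPP weight of a candidate batch is a regularized log-determinant of the posterior covariance. A minimal sketch over a discrete ground set follows (NumPy assumed; the posterior sampling and covariance interfaces are illustrative assumptions, not a specific library API):
\begin{verbatim}
import numpy as np

def sample_from_p_max(posterior_sample, X_grid):
    """One Thompson draw: maximize a sampled posterior realization."""
    f_tilde = posterior_sample(X_grid)  # one realization over the grid
    return int(np.argmax(f_tilde))      # index of the maximizer

def mi_dpp_logdet(K_post_X, sigma2):
    """log det(I + sigma^-2 K_X), i.e. twice the mutual information
    between f_X and the batch observations y_X."""
    L_X = np.eye(K_post_X.shape[0]) + K_post_X / sigma2
    return np.linalg.slogdet(L_X)[1]
\end{verbatim}
The definition below combines exactly these two quantities, and the MCMC sampler that follows only ever accesses them through these two operations.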
\begin{definition}[DPP-TS Sampling Likelihood]\label{dpptsdef} The batch sampling likelihood of DPP-TS at step $t$ is \begin{equation} P_{\text{DPP-TS}, t}(X) \propto P_{\max, t}(X) \det(I + \sigma^{-2}{K_t}_X) \text{.} \end{equation} \end{definition} In Figure~\ref{fig:pmaxfigs}, we illustrate the $|X|=2$ case to compare the original $P_{\max}$ TS distribution, a TS variant with hallucinated observations, and $P_{\text{DPP-TS}}$ with its repulsion properties. \subsection{Markov Chain Monte Carlo for DPP-BBO} Sampling from the mutual information DPP component $\det(I + \sigma^{-2}{K_t}_X)$ of DPP-TS on its own can be done easily and efficiently, as numerous algorithms exist for both exact and approximate sampling from $k$-DPPs \citep{kulesza_determinantal_2012}. Likewise, we assume we are in a setting in which Thompson Sampling on its own can be performed relatively efficiently, as sampling from $P_{\max, t}$ reduces to sampling a function realization $\tilde{f}$ from the posterior, e.g. $\tilde{f} \sim \text{GP}(\mu_t,K_t)$, and maximizing $\tilde{f}$ over $\mathcal{X}$. However, when sampling from the product of the two distributions, we must resort to tools of approximate inference. The main issue with adopting standard approaches is that computation of the explicit likelihood $P_{\text{DPP-TS}, t}$ is {\em doubly intractable}: computation of $P_{\max, t}$ is intractable on its own, and it appears in the numerator of $P_{\text{DPP-TS}, t}$ before normalization. Our approach for sampling from $P_{\text{DPP-TS}, t}$ relies on a Markov Chain Monte Carlo (MCMC) sampler. We construct an ergodic Markov Chain over batches from $\Omega = \{ X \mid X \subset \mathcal{X}, |X| = k \}$ with transition kernel $T(X^{\prime} | X)$ such that the detailed balance equation $Q(X)T(X^{\prime} | X) = Q(X^{\prime})T(X | X^{\prime})$ is satisfied almost surely with respect to $P_{\text{DPP-TS}, t}$, with $Q(X) = P_{\max, t}(X)\det({L_t}_X)$ being the unnormalized potential of $P_{\text{DPP-TS}, t}$. If $Q(X)$ were tractable, we could use the standard Metropolis-Hastings algorithm \citep{hastings_monte_1970}, which satisfies the detailed balance equation. The problem with naively using Metropolis-Hastings MCMC sampling is that our $Q(X) = P_{\max, t}(X)\det({L_t}_X)$ contains $P_{\max, t}(X) = \prod_{x_b \in X} p_{\max, t}(x_b)$, which is intractable and cannot be computed on the fly. As previously stated, the only thing we can easily do is sample from it, by sampling $\tilde{f}$ and then maximizing it. However, if we modify the standard Metropolis-Hastings MCMC algorithm by using $p_{\max, t}$ proposals, we obtain Algorithm \ref{algomcmc}, which satisfies detailed balance. We refer to Appendix \ref{mcmcappendix} for the proof. This algorithm can be interpreted as a variant of an existing $k$-DPP sampler proposed by \citet{anari_monte_2016}.
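A compact sketch of this chain is given below (the pseudo-code follows in Algorithm \ref{algomcmc}); the Thompson draw and the kernel evaluation on a batch are assumed interfaces, and the $p_{\max, t}$ proposal terms cancel in the Hastings ratio, which is why only the determinant ratio appears in the acceptance probability.
\begin{verbatim}
import numpy as np

def dpp_ts_mcmc(sample_p_max, L_on, B, n_steps=500, rng=None):
    """MCMC for P_DPP-TS: swap one point, propose from p_max,
    accept with the determinant ratio of the DPP kernel."""
    if rng is None:
        rng = np.random.default_rng()
    X = [sample_p_max() for _ in range(B)]     # random initial batch
    log_det = np.linalg.slogdet(L_on(X))[1]
    for _ in range(n_steps):
        b = rng.integers(B)                    # point to replace
        X_prop = X[:b] + [sample_p_max()] + X[b + 1:]
        log_det_prop = np.linalg.slogdet(L_on(X_prop))[1]
        # alpha = min(1, det(L_X') / det(L_X)), computed in log space
        if np.log(rng.random()) < log_det_prop - log_det:
            X, log_det = X_prop, log_det_prop
    return X
\end{verbatim}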
\begin{algorithm}[H] \caption{DPP-TS MCMC sampler}\label{algomcmc} \begin{algorithmic} \State pick random initial batch $X$ \Repeat \State uniformly pick point $x_b \in X$ to replace \State sample candidate point $x_b^{\prime} \sim p_{\max, t}(x_b^{\prime})$ \State define $X^{\prime} = \left(X \setminus \{x_b\}\right) \cup \{x_b^{\prime}\}$ \State accept with probability $\alpha = \min \left\lbrace 1, \frac{\det({L_t}_{X^{\prime}})}{\det({L_t}_X)} \right\rbrace$ \If {accepted} \State $X = X^{\prime}$ \EndIf \Until converged \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{DPP-TS Algorithm}\label{dpptsalgo} \begin{algorithmic} \item \textbf{Input:} Action space $\mathcal{X}$, GP prior $\mu_1$, $k_1(.,.)$, history $D_0 = \lbrace\rbrace$ \For{$t = 1, \ldots, T$} \State Sample $X_t \sim P_{\text{DPP-TS}, t}(X_t)$ with Alg.\ \ref{algomcmc} \State Observe $y_{t,b} = f(x_{t,b}) + \epsilon_{t,b}$ for $b \in [1,B]$ \State Add observations to history $D_t = D_{t-1} \cup \lbrace (x_{t,1},y_{t,1}),\ldots,(x_{t,B},y_{t,B}) \rbrace$ \State Update the GP with $D_t$ to get $\mu_{t+1}$, $k_{t+1}(.,.)$ \EndFor \end{algorithmic} \end{algorithm} \subsection{DPP-TS} Given the sampling distribution (Definition \ref{dpptsdef}) and the above MCMC algorithm, we can now fully specify the overall procedure of our DPP-TS sampling algorithm, summarized in Algorithm \ref{dpptsalgo}. \section{BAYESIAN REGRET BOUNDS} We now establish bounds on the Bayesian regret. Instead of assuming the existence of a fixed true $f$, we assume that the true function is sampled from a Gaussian Process prior $f \sim GP(0,K)$. In particular, our regret bounds are obtained on a variant of Bayes regret called the Bayes Batch Cumulative Regret $\text{BBCR}_{T,B} = \mathbb{E}\left[ \sum_{t=1}^T \min_{b \in [1,B]} r_{t,b} \right] = \mathbb{E}\left[ \sum_{t=1}^T \min_{b \in [1,B]} \left( f(x^{\star}) - f(x_{t,b}) \right) \right]$, which only considers the best instantaneous regret within each batch, as we make use of proof techniques from \citet{contal_parallel_2013} involving such a formulation. It is straightforward to see that by bounding BBCR we at the same time bound the Bayes Simple Regret (introduced in Section \ref{background}), as $\text{BSR}_{T,B} \leq \text{BBCR}_{T,B} / T$, similarly to how $\text{BSR}_{T,B} \leq \text{BCR}_{T,B} / (TB)$. \subsection{Improved bound on BBCR for Batched Thompson Sampling} Our first theoretical contribution is an improved version of the bound on Bayesian Simple Regret from \citet{kandasamy_parallelised_2018}. In contrast to prior work, our version of the algorithm requires no initialization procedure to guarantee sublinear regret. Unlike the original Gaussian TS Bayesian bounds from \citet{russo_learning_2014}, \citet{kandasamy_parallelised_2018} analyze the problem over a continuous domain. Therefore, their analysis requires an additional assumption previously used in the Bayesian continuous-domain GP-UCB bound \citep{srinivas_gaussian_2010}. \begin{assumption}[Gradients of GP Sample Paths]\label{gradassumpt} Let $\mathcal{X} \subseteq [0,l]^d$ be compact and convex with $d \in \mathbb{N}$ and $l>0$, and let $f \sim \text{GP}(0,K)$ where $K$ is a stationary kernel. Moreover, there exist constants $a,b > 0$ such that $P\left( \sup_{x\in \mathcal{X}} \left| \frac{\partial f(x)}{\partial x_i} \right| > L \right) \leq a e^{-(L/b)^2} \quad \forall L > 0, \forall i \in \lbrace 1, \ldots, d \rbrace$. \end{assumption} Using the above assumption, we can show the following theorem.
\begin{theorem}[BBCR Bound for Batched TS]\label{tsbatchbound} If $f \sim GP(0,K)$ with covariance kernel bounded by 1 and noise model $\mathcal{N}(0,\sigma^2)$, and either \begin{itemize} \item Case 1: finite $\mathcal{X}$ and $\beta_t = 2\ln\left(\frac{B(t^2 + 1)|\mathcal{X}|}{\sqrt{2\pi}}\right)$; \item Case 2: compact and convex $\mathcal{X} \subseteq [0,l]^d$, with Assumption \ref{gradassumpt} satisfied and $\beta_t = 4(d+1)\log(Bt) + 2d\log(dab\sqrt{\pi})$; \end{itemize} then Batched Thompson Sampling attains Bayes Batch Cumulative Regret of \emph{ \begin{equation} {\text{BBCR}_{\text{TS}}}_{T,B} \leq \frac{C_1}{B} + \sqrt{ C_2 \frac{T}{B} \beta_T \gamma_{TB}} \end{equation} } with $C_1 = 1$ for Case 1, $C_1 = \frac{\pi^2}{6} + \frac{\sqrt{2\pi}}{12}$ for Case 2, and $C_2 = \frac{2}{\log(1 + \sigma^{-2})}$. \end{theorem} Therefore, ${\text{BSR}_{\text{TS}}}_{T,B} \leq \frac{C_1}{TB} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}}$. We point to Appendix \ref{tsbbcrboundproof} for the proof. The bound from \citet{kandasamy_parallelised_2018} (without an initialization phase) is similar, except for the presence of an $\exp(C)$ factor in the square-root term, which scales linearly with $B$. Our version of the bound does not contain $\exp(C)$, thus allowing for sublinear regret in $B$. \subsection{BBCR bound for DPP-TS} We now shift the focus to our novel DPP-TS algorithm and obtain an equivalent bound. To do so, we modify the algorithm we developed and introduce DPP-TS-alt, so that for every batch: a) for the first sample in the batch $x_{\text{DPP-TS-alt}\; t,1}$, we sample from $p_{\max, t}$ as in standard Thompson Sampling; b) for all the other samples $x_{\text{DPP-TS-alt}\; t,b}$ with $b \in [2,B]$, we sample from the joint $P_{\text{DPP-TS}, t}$, using the most up-to-date posterior covariance matrix $K_{t,1}$ to define the DPP kernel. The reason why we introduced DPP-TS as such, and not DPP-TS-alt in the first place, is both simplicity and the fact that in practice their performance is virtually identical (see Appendix \ref{appendix:addexp}). We have the following theorem. \begin{theorem}[BBCR Bound for DPP-TS]\label{dpptsbatchbound} Consider the same assumptions as for Theorem \ref{tsbatchbound}. Then DPP-TS (in its DPP-TS-alt variant) attains Bayes Batch Cumulative Regret of \emph{ \begin{equation} {\text{BBCR}_{\text{DPP-TS}}}_{T,B} \leq \frac{C_1}{B} + \sqrt{ C_2 \frac{T}{B} \beta_T \gamma_{TB}} -C_3 \end{equation} } with $C_1 = 1$ for Case 1, $C_1 = \frac{\pi^2}{6} + \frac{\sqrt{2\pi}}{12}$ for Case 2, $C_2 = \frac{2}{\log(1 + \sigma^{-2})}$, and $-C_3 < 0$ (defined in Appendix \ref{dpptsbbcrboundproof}). \end{theorem} We can thus obtain ${\text{BSR}_{\text{DPP-TS}}}_{T,B} \leq \frac{C_1}{TB} - \frac{C_3}{T} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}}$. Moreover, this bound is necessarily tighter than that for standard TS: $\frac{C_1}{TB} - \frac{C_3}{T} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}} \leq \frac{C_1}{TB} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}}$. We point to Appendix \ref{dpptsbbcrboundproof} for the proof. \begin{figure*}[h] \includegraphics[width=\textwidth]{{figs/experiments.pdf}} \caption{Comprehensive experimental comparisons between DPP-TS and classic BBO techniques for Simple Regret (log scale): \textbf{a)} $f$ sampled from Squared Exponential GP; \textbf{b)} Rosenbrock; \textbf{c)} Styblinski-Tang; \textbf{d)} Michalewicz; \textbf{e)} PHE experiment with $f$ sampled from QFF Squared Exponential GP; \textbf{f)} Cox process sensing experiment on the Porto taxi dataset.
The named functions are defined in Section \ref{dppexpsynth}. Overall, DPP-TS outperforms or equals the other algorithms, quickly sampling good maximizers thanks to improved batch diversification.} \vspace{-0.3cm} \label{fig:experimentcomp} \end{figure*} \section{EXPERIMENTS AND COMPARISONS}\label{expsection} To make the case for our algorithmic framework's effectiveness in practice, we perform a series of benchmark tests on synthetic and real-world optimization problems, comparing DPP-BBO against classic BBO algorithms on Simple Regret metrics. (Cumulative Regret comparisons feature in Appendix \ref{appendix:addexp}.) \subsection{DPP-TS Comparisons on Synthetic Data} \label{dppexpsynth} We first compare DPP-TS on synthetic benchmarks against regular batched TS, GP-BUCB, hallucinated TS (Batched Thompson Sampling with hallucinations as in GP-BUCB), Pure DPP Exploration (DPP sampling from the DPP component of DPP-TS) and Uniform Exploration (uniform random sampling over the domain). We exclude algorithms that are not applicable to continuous domains. Figure \ref{fig:experimentcomp} details a number of such comparisons on synthetic benchmark functions under different settings, averaged over 15 experimental runs. For Figures \ref{fig:experimentcomp}.a and \ref{fig:experimentcomp}.b we optimize over a discrete finite domain $\mathcal{X}$, using an exact Gaussian Process prior with a squared exponential kernel. The acquisition function is maximized by explicit maximization over the discretized domain. For Figures \ref{fig:experimentcomp}.c and \ref{fig:experimentcomp}.d, we optimize over a continuous domain $\mathcal{X} = [0,l]^d$, using an approximate Gaussian Process prior specified with Quadrature Fourier Features \citep{mutny_efficient_2018}. These functions are additive and, hence, the optimization can be done dimension-wise. When optimizing the one-dimensional projection of the acquisition function we use first-order gradient descent with restarts. The specific benchmarks we use are the Rosenbrock function $f(x) = 100(x_2 - x_1^2)^2 + (x_1 - 1)^2$; the Styblinski-Tang function $f(x) = \frac{1}{2} \sum_{i=1}^d \left(x_i^4 - 16x_i^2 + 5x_i \right)$; and the Michalewicz function $f(x) = -\sum_{i=1}^d \sin\left(x_i\right)\sin^{2d}\left(ix_i^2 / \pi\right)$. Overall, DPP-TS converges very quickly to sampling good maximizers, almost always beating or at least equaling the Simple Regret performance of the other algorithms, while exhibiting low-variance behavior. The added diversity from the DPP sampling procedure appears to favor quickly finding better maxima while not getting stuck in suboptimal but high-confidence regions, as often seems to happen with GP-UCB. A series of additional experiments is discussed in Appendix \ref{appendix:addexp}, including experiments on Cumulative Regret, DPP-TS with parametrized DPP kernels, and a comparison between DPP-TS and DPP-TS-alt which shows them to be of equivalent performance in practice. \subsection{DPP-Perturbed History Exploration} To further demonstrate the effectiveness and versatility of the DPP-BBO framework, we apply it to the recently introduced Perturbed History Exploration (PHE) algorithm \citep{kveton_perturbed-history_2020}. PHE is a BO algorithm which is agnostic to the specific model $f_\theta$ chosen for modeling $f$.
Assuming that rewards are bounded, and given a parameter $a$, the algorithm introduces {\em pseudo-rewards} $a$ for each observation in its global history, and at each step maximizes its learned perturbed $f_\theta$ to propose a new evaluation point. We can interpret this procedure as sampling from $p_{\text{PHE}, t}(x)$, with the stochastic component stemming from the pseudo-reward generation. Given this, we can define DPP-PHE as $P_{\text{DPP-PHE}, t}(X) \propto \left(\prod_{x \in X} p_{\text{PHE}, t}(x)\right) \det(I + \sigma^{-2}{K_t}_X)$ where $K_t$ is an approximation of the Bayesian posterior covariance for the $f_\theta$ model. Figure \ref{fig:experimentcomp}.e experimentally compares PHE and DPP-PHE for $a=0.5$ and $a=1$ on a synthetic function (over a continuous $\mathcal{X}$) sampled from a 1-d squared exponential GP prior, while using as internal model a QFF GP regression. We can see that DPP-PHE improves on the Simple Regret when compared to regular PHE for the same $a$. \subsection{DPP-TS and Cox Process Sensing} \looseness -1 To benchmark our DPP-TS algorithm on a real world setting and demonstrate the versatility of the modeling choice, we turn to a Cox Process Sensing problem in the form of taxi routing on a 2-dimensional city grid, as considered by \citet{Mutny2021a}. Given a dataset of geo-localized taxi cab hails in Porto and a subdivision of the city into an 8x8 grid, we aim to learn the best locations where to schedule a fleet of taxis while, at beginning of each day - corresponding to a single iteration, we only observe the taxi hailing events in the grid cells which had vehicles scheduled to them. \looseness -1 We put a Gaussian process prior on the unknown rate function of a Poisson process, yielding a Cox Process with Poisson Process likelihood. The likelihood of observing a realization $\mathcal{D} = \lbrace x_n \rbrace_{n=1}^N$ over the domain $\mathcal{X}$ for a Poisson Process with rate function $\lambda(.)$ is $p(\mathcal{D} \mathop{} | \mathop{} \lambda(.)) = \exp(-\int_{\mathcal{X}}{\lambda(x)\mathop{}\!\mathrm{d}x})\prod_{n}{\lambda(x_n)}$. This Poisson process specification is used in the construction of a Cox process model, which is $p(\mathcal{D},\lambda(.),\Theta) = p\left(\mathcal{D}\mathop{}|\mathop{}\lambda(.)\right) \cdot p(\lambda(.)\mathop{}|\mathop{}\Theta) \cdot p(\Theta)$, with $\lambda(.)$ being a Gaussian Process conditioned on being positive-valued over the domain. We adopt the inference scheme along with the approximation scheme to maintain positivity of the rate function from \citet{Mutny2021}. The samples from the posterior are obtained via Langevin dynamics. In our experiment, we compare TS for Cox Process Sensing from \citet{Mutny2021a} with our DPP-TS approach, leveraging our diversifying process to improve city coverage by our scheduled taxi fleets. As DPP kernel, we use the mutual information kernel that is obtained when the posterior for the rate function is approximated with a Gaussian distribution, known as the Laplace Approximation. In Figure \ref{fig:experimentcomp}.f we depict allocation of 5 taxis to city blocks and report the simple regret. DPP-TS reliably achieves lower simple regret than standard Thompson Sampling sensing with resampling. 
\section{CONCLUSIONS} In this work we introduced DPP-BBO, a natural and easily applicable framework for enhancing batch diversity in BBO algorithms which works in more settings than previous diversification strategies: it is directly applicable to the continuous domain case, when due to approximation and non-standard models we are unable to compute hallucinations or confidence intervals (as in the Cox process example), or more generally when used in combination with any randomized BBO sampling scheme or arbitrary diversity kernel. Moreover, for DPP-TS we show improved theoretical guarantees and strong practical performance on simple regret. \subsubsection*{Acknowledgements} This research was supported by the ETH AI Center and the SNSF grant 407540 167212 through the NRP 75 Big Data program. This publication was created as part of NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation. \bibliographystyle{apalike} \subsubsection*{\bibname}} \begin{document} \twocolumn[ \aistatstitle{Diversified Sampling for Batched Bayesian Optimization with Determinantal Point Processes} \aistatsauthor{ Elvis Nava \And Mojm\'ir Mutn\'y \And Andreas Krause } \aistatsaddress{ ETH Zurich\\\texttt{elvis.nava@ai.ethz.ch} \And ETH Zurich\\\texttt{mojmir.mutny@inf.ethz.ch} \And ETH Zurich\\\texttt{krausea@inf.ethz.ch} } ] \begin{abstract} \looseness -1 In Bayesian Optimization (BO) we study black-box function optimization with noisy point evaluations and Bayesian priors. Convergence of BO can be greatly sped up by batching, where multiple evaluations of the black-box function are performed in a single round. The main difficulty in this setting is to propose at the same time diverse and informative batches of evaluation points. In this work, we introduce {\em DPP-Batch Bayesian Optimization (DPP-BBO)}, a universal framework for inducing batch diversity in sampling based BO by leveraging the repulsive properties of Determinantal Point Processes (DPP) to naturally diversify the batch sampling procedure. We illustrate this framework by formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to sample from it. We then prove novel Bayesian simple regret bounds for both classical batched TS as well as our counterpart DPP-TS, with the latter bound being tighter. Our real-world, as well as synthetic, experiments demonstrate improved performance of DPP-BBO over classical batching methods with Gaussian process and Cox process models. \end{abstract} \begin{figure*}[h] \vspace{-0.4cm} \begin{tabular}{c@{\hskip-5mm}c@{\hskip-5mm}c} \includegraphics[width=60mm]{{figs/1-pmax.pdf}} & \includegraphics[width=60mm]{{figs/2-phal.pdf}} & \includegraphics[width=60mm]{{figs/3-pdpp.pdf}} \end{tabular} \vspace{-0.4cm} \caption{\looseness -1 Diversification Demonstration. Given a Gaussian Process posterior on $f$ defined over $\mathcal{X} = [-0.5,0.5]$, we sample a batch of $B=2$ evaluation points for our next optimization iteration using a randomized Batch BO algorithm. With Thompson Sampling (a), this corresponds to sampling from the symmetric 2d distribution $P_{\max}(x_1,x_2) = p_{\max}(x_1)p_{\max}(x_2)$. We wish to sample diverse batches, therefore we would like to reduce the probability mass near the diagonal (where $x_1 = x_2$). To do so, we can use hallucinated observations (b) or our DPP-TS sampling distribution (c), which exploits DPP repulsion properties. 
It is apparent how $P_{\text{DPP-TS}}$, by assigning much less probability mass to locations near the diagonal, disfavors the selection of non-diverse batches.} \label{fig:pmaxfigs} \end{figure*} \section{INTRODUCTION} Gradient-free optimization of noisy black-box functions is a broadly relevant problem setting, with a multitude of applications such as de-novo molecule design \citep{gonzalez_bayesian_2015}, electron laser calibration \citep{Kirschner2019, Kirschner2019b}, and hyperparameter selection \citep{snoek_practical_2012} among many others. Several algorithms have been devised for such problems, some with theoretical guarantees, broadly referred to as Bayesian optimization \citep{Mockus1982} or multi-armed bandits \citep{berry_bandit_1985}. Our work falls under Bayesian optimization, as we assume a known prior for the unknown function and use evaluated points to update our belief about the function. In BO, the optimization procedure is performed sequentially by evaluating the noisy function at locations informed by past observations. In many real-world applications, multiple evaluations (experiments) can be executed in parallel. We refer to this setting as {\em batched Bayesian optimization (Batch BO)}. This is a common situation when the experimental process is easily parallelizable, such as in high-throughput wetlab experiments, or parallel training of multiple ML models on a cluster. A main concern in the batched setting is \emph{batch diversification}: guaranteeing that the selected experimental batch does not perform redundant evaluations. We tackle this problem via Determinantal Point Processes (DPP) \citep{kulesza_determinantal_2012}, a family of repulsive stochastic processes on sets of items. DPPs have already been successfully employed for Experimental Design \citep{derezinski_bayesian_2020}, optimization \citep{Mutny2020b}, and in combination with a deterministic batched Bayesian optimization algorithm \citep{kathuria_batched_2016}. In this work, we show how DPP-based diversification can naturally, and in a principled manner, be integrated into randomized algorithms for BO. Of special interest is the Thompson sampling BO algorithm, which is randomized but universally applicable \citep{Thompson1933}, often empirically outperforms UCB \citep{Chapelle2011}, and in some cases has better computational properties \citep{Mutny2020}. \subsection{Our Contribution} In this work we introduce a framework for randomized Batched Bayesian Optimization diversification through DPPs (DPP-BBO). Our main result is an algorithm called DPP-TS, which samples from a Regularized DPP, capturing both Thompson Sampling (posterior sampling) and information-theoretic batch diversity. We use a Markov Chain Monte Carlo (MCMC) approach adapted from the DPP literature \citep{anari_monte_2016} to sample batches for this new algorithm. We establish improved Bayesian Simple Regret bounds for DPP-TS compared to classical batching schemes for Thompson Sampling, and experimentally demonstrate its effectiveness w.r.t.~BO baselines and existing techniques, both on synthetic and real-world data. Lastly, we demonstrate the generality of our diversification framework by applying it to an alternative randomized BO algorithm called \emph{Perturbed History Exploration} (PHE) \citep{kveton_perturbed-history_2020}, and by extending it to cover Cox Process models in addition to the classically assumed Gaussian Processes.
\section{BACKGROUND}\label{background} \paragraph{Bayesian Optimization} The problem setting for Bayesian Optimization (BO) is as follows: we select a sequence of actions $x_t \in \mathcal{X}$, where $t$ denotes the \textit{iteration count} so that $t \in [1,T]$, and $\mathcal{X}$ is the action domain, which is either discrete or continuous. For each chosen action $x_t$, we observe a noisy reward $y_t = f(x_t) + \epsilon_t$ in sequence, where $f : \mathcal{X} \rightarrow \mathbb{R}$ is the unknown reward function, and the $\epsilon_t$ are assumed to be i.i.d.~Gaussian, $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$, with known variance. Most BO algorithms select each point $x_t$ through maximization of an \textit{acquisition function} $x_t = \argmax_{x \in \mathcal{X}} u_t(x|D_{t-1})$, determined by the state of an internal Bayesian model of $f$. We indicate with $D_{t-1} = \lbrace (x_1,y_1), \ldots, (x_{t-1},y_{t-1}) \rbrace$ the filtration consisting of the history of evaluation points and observations up to and including step $t-1$, on which the model is conditioned. The main algorithmic design choices in BO are which acquisition function and which internal Bayesian model of $f$ to use. \paragraph{Gaussian Processes} Obtaining any theoretical convergence guarantees in infinite or continuous domains is impossible without assumptions on the structure of $f$. A common assumption in Bayesian Optimization is that $f$ is a sample from a Gaussian Process (GP) \citep{rasmussen_gaussian_2005} prior, which has the property of being versatile yet allowing for posterior updates to be obtained in closed form. Many BO algorithms make use of an internal GP model of $f$, which is initialized as a prior and then sequentially updated from feedback. This GP is parametrized by a kernel function $k(x,x^\prime)$ and a mean function $\mu(x)$. To denote that $f$ is sampled from the GP, we write $f \sim GP(\mu, k)$. \paragraph{Regret Minimization} We quantify our progress towards maximizing the unknown $f$ via the notion of \textit{regret}. In particular, we define the \textit{instantaneous regret} of action $x_t$ as $r_t =f(x^\star) - f(x_t)$, with $x^\star = \argmax_{x \in \mathcal{X}} f(x)$ being the optimal action. A common objective for BO is that of minimizing {\em Bayesian Cumulative Regret} $\text{BCR}_T = \mathbb{E}\left[\sum_{t=1}^T r_t\right] = \mathbb{E}\left[\sum_{t=1}^T \left( f(x^\star) - f(x_t) \right)\right]$, where the expectation is over the prior of $f$, observation noise and algorithmic randomness. Obtaining bounds on the cumulative regret that scale sublinearly in $T$ allows us to prove convergence of the \textit{average regret} $\text{BCR}_T / T$, therefore also minimizing the {\em Bayesian Simple Regret} $\text{BSR}_T = \mathbb{E}\left[\min_{t \in [1,T]} r_t\right] = \mathbb{E}\left[\min_{t \in [1,T]} f(x^\star) - f(x_t) \right]$ and guaranteeing convergence of our optimization of $f$. \paragraph{Batch Bayesian Optimization} We define {\em Batch Bayesian Optimization (BBO)} as the setting where, instead of sequentially proposing and evaluating points, our algorithms propose a batch of points of size $B$ at every iteration $t$. Importantly, the batch must be finalized {\em before} obtaining any feedback for the elements within it; the sketch below illustrates this protocol.
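As a concrete illustration of this loop (our sketch, not code from any of the cited works; it assumes a discretized one-dimensional domain and a squared exponential kernel), the following Python snippet simulates batched BO with exact GP posterior updates. The trivial batch rule shown corresponds to the Uniform Exploration baseline used later in our experiments; the algorithms discussed below replace it with informed choices based on the posterior.

\begin{verbatim}
import numpy as np

def se_kernel(a, b, ls=0.2):
    # squared exponential kernel k(x, x') = exp(-(x - x')^2 / (2 ls^2))
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, sigma2=0.01):
    # closed-form GP posterior mean and covariance at test points Xs
    if len(X) == 0:
        return np.zeros(len(Xs)), se_kernel(Xs, Xs)  # the prior itself
    K = se_kernel(X, X) + sigma2 * np.eye(len(X))
    A = np.linalg.solve(K, se_kernel(X, Xs))
    return A.T @ y, se_kernel(Xs, Xs) - se_kernel(X, Xs).T @ A

rng = np.random.default_rng(0)
domain = np.linspace(0.0, 1.0, 200)
f = lambda x: np.sin(6.0 * x)        # stand-in for the unknown reward
B, T, sigma2 = 4, 10, 0.01
X_hist, y_hist = np.empty(0), np.empty(0)
for t in range(T):
    mu, cov = gp_posterior(X_hist, y_hist, domain, sigma2)
    # the whole batch is fixed *before* any feedback is received
    batch = rng.choice(domain, size=B)   # Uniform Exploration placeholder
    y_new = f(batch) + rng.normal(0.0, np.sqrt(sigma2), size=B)
    X_hist = np.concatenate([X_hist, batch])
    y_hist = np.concatenate([y_hist, y_new])
\end{verbatim}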
Batched Bayesian Optimization algorithms encounter two main challenges with respect to performance and theoretical guarantees: proposing diverse evaluation batches, and obtaining regret bounds competitive with full-feedback sequential algorithms, sublinear in the total number of experiments $BT$, where $T$ denotes the number of iterations and $B$ the batch size. \paragraph{Determinantal Point Processes (DPPs)} \citep{kulesza_determinantal_2012} are a family of point processes characterized by the property of \textit{repulsion}. We define a point process $P$ over a set $\mathcal{X}$ as a probability measure over subsets of $\mathcal{X}$. Given a similarity measure for pairs of points in the form of a kernel function, Determinantal Point Processes place high probability on subsets that are \textit{diverse} according to the kernel. We will now describe DPPs for finite domains due to their simplicity; however, their definition can be extended to continuous $\mathcal{X}$. For our purposes, we restrict our focus to L-ensemble DPPs: given a so-called L-ensemble kernel $L$ defined as a matrix over the entire (finite) domain $\mathcal{X}$, a Determinantal Point Process $P_L$ is defined as the point process such that the probability of sampling the set $X \subseteq \mathcal{X}$ is proportional to the determinant of the kernel matrix $L_X$ restricted to $X$ \begin{equation} P_{L}(X) \propto \det\left( L_X \right)\text{.} \end{equation} \looseness -1 Remarkably, the required normalizing constant can be obtained in closed form as $\sum_{X \subseteq \mathcal{X}} \det(L_X) = \det(L+I)$. If the kernel $L$ is such that $L_{ij} = l(x_i, x_j)$, for $x_i, x_j \in \mathcal{X}$, encodes the similarity between any pair of points $x_i$ and $x_j$, then the determinant $\det\left( L_X \right)$ will be greater for diverse sets $X$. Intuitively, for the linear kernel, diversity can be measured by the area of the $|X|$-dimensional parallelepiped spanned by the vectors in $X$ \citep[see Section 2.2.1 from][]{kulesza_determinantal_2012}. For our application, we require sampling of batches of points with a specific predetermined size. For this purpose, we focus on $k$-DPPs. A $k$-DPP $P_L^k$ over $\mathcal{X}$ is a distribution over subsets of $\mathcal{X}$ with fixed cardinality $k$, such that the probability of sampling a specific subset $X$ is proportional to that for the generic DPP case: $P_L^k(X) = \frac{\det\left( L_X \right)}{\sum_{X^\prime \subseteq \mathcal{X}, |X^\prime| = k} \det\left(L_{X^\prime}\right)}$. Sampling from DPPs and $k$-DPPs can be done with a number of efficient exact or approximate algorithms. The seminal exact sampling procedure for $k$-DPPs from \citet{deshpande_efficient_2010} requires time $O(kN^{\omega + 1}\log N)$ in the batch size $k$ and the size of the domain $N$, with $\omega$ being the exponent of the arithmetic complexity of matrix multiplication. This does not scale well for large domains, nor does it work for the continuous case. Fortunately, an efficient MCMC sampling scheme with complexity of $O(Nk\log(\epsilon^{-1}))$ introduced by \citet{anari_monte_2016} works much better in practice. Variants of such MCMC schemes have been proven to also work for continuous domains \citep{rezaei_polynomial_2019}. \section{RELATED WORK} A number of different acquisition functions have been proposed for Bayesian Optimization, such as Probability of Improvement, Expected Improvement, and Upper Confidence Bound (UCB), among many others \citep[cf., ][]{brochu_tutorial_2010}.
The Gaussian process version of UCB \citep[GP-UCB,][]{srinivas_gaussian_2010} is a popular technique based on a deterministic acquisition function, with sublinear regret bounds for common kernels. \paragraph{Thompson Sampling} Thompson Sampling is an intuitive and theoretically sound BO algorithm using a randomized acquisition function \citep{Thompson1933,russo_tutorial_2020}. When choosing the next evaluation point, we sample a realization from the current posterior modeling the objective function, and use this as the acquisition function to maximize: $x_t = \argmax_{x \in \mathcal{X}}\tilde{f}(x)$, where $\tilde{f}$ is the sampled function, e.g. $\tilde{f} \sim \text{GP}(\mu_t,K_t)$. Bayesian Cumulative Regret was first bounded as $O(\sqrt{T\gamma_T})$ by \citet{russo_learning_2014}, where $\gamma_T$ is the maximum mutual information obtainable from $T$ observations (for more details on this well-established quantity, see Appendix \ref{infobackground}). This bound matches lower bounds in $T$ \citep{Scarlett2017}. \paragraph{Batched UCB and Pure Exploration}\label{bucbbackground} For Batched BO, heuristic algorithms such as Simulation Matching \citep{azimi_batch_2010} or Local Penalization \citep{pmlr-v51-gonzalez16a} attempt to solve the problem of generating informative and diverse evaluation point batches, albeit without theoretical guarantees on regret. In particular, Local Penalization selects explicitly diversified batches by greedily penalizing already-sampled points with penalization factors in the acquisition function. \citet{desautels_parallelizing_2014} are the first to provide a theoretically justified batched algorithm, introducing GP-BUCB, a batched variant of GP-UCB. To induce diversity within batches, they use \textit{hallucinated observations}, so that ${x_{\text{GP-BUCB}}}_{t,b}$ is sampled by maximizing a UCB based on the hallucinated posterior $\tilde{D}_{t,b-1}$. The hallucinated history is constructed by using the posterior mean in place of the observed reward for points with delayed feedback. GP-BUCB attains a cumulative regret bound of $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$, which, however, requires an \textit{initialization phase} before the deployment of the actual algorithm. For the first $T_{\text{init}}$ iterations, the evaluations are chosen with Uncertainty Sampling, picking the point satisfying $x_t = \argmax_{x \in \mathcal{X}} \sigma_t(x)$, effectively exploring the whole domain, which limits the practicality of the method. To alleviate this, \citet{contal_parallel_2013} introduce the alternative GP-UCB Pure Exploration \citep[GP-UCB-PE,][]{contal_parallel_2013}, which mixes the UCB acquisition function with a Pure Exploration strategy. Sampling a batch at iteration $t$, GP-UCB-PE operates in two phases: the first point of each batch is sampled with standard GP-UCB, while the remaining $B-1$ points are sampled by first defining a \textit{high probability region} $\mathfrak{R}^+$ for the maximizer, and then performing Uncertainty Sampling ${x_{\text{UCB-PE}}}_{t,b} = \argmax_{x \in \mathfrak{R}^+} \sigma_{t,b}(x)$. GP-UCB-PE's cumulative regret is bounded by $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$ without an initialization phase, as opposed to GP-BUCB. \paragraph{Batched TS} \citet{kandasamy_parallelised_2018} are the first to consider batching with Thompson sampling and GPs. They propose to simply resample multiple times from the posterior within each batch, effectively lifting the Thompson Sampling algorithm as-is to the batched case.
By repeating TS sampling for each point within the batch, they bound the Bayesian cumulative regret by $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$. It is possible but not required to use hallucinated observations (hal-TS). However, an initialization phase identical to that of GP-BUCB is needed for their proof of the bound to hold. A novel result from our work is an improved proof technique such that the initialization phase for Batched TS is not required for the Bayesian simple regret version of the bound to hold. \paragraph{DPPs in Batched BO} \citet{kathuria_batched_2016} use Determinantal Point Process sampling to define a variation of GP-UCB-PE \citep{contal_parallel_2013}, called UCB-DPP-SAMPLE. They observe that the Uncertainty Sampling phase of GP-UCB-PE corresponds to greedy maximization of the posterior covariance matrix determinant $\det\left({K_{t,1}}_X\right)$ with respect to batches $X$ of size $B-1$ from $\mathfrak{R}^+$, with ${K_{t,1}}_X$ being the covariance matrix produced by the posterior kernel of the GP after step $(t,1)$ and restricted to the set $X$. Finding the $(B-1)$-sized submatrix with the maximum determinant is an NP-hard problem, and picking each element greedily so that it maximizes $\sigma^2_{t,b}(x) = k_{t,b}(x,x)$ fails to guarantee the best solution. Maximizing the above determinant is also equivalent to maximizing $\det\left({L_{t,1}}_X\right)$ for the DPP L-ensemble kernel defined as $L_{t,1} = I + \sigma^{-2}K_{t,1}$, called the \emph{mutual information kernel} \citep{kathuria_batched_2016}. Instead of selecting the last $B-1$ points of each batch with Uncertainty Sampling, UCB-DPP-SAMPLE samples them from a $(B-1)$-DPP restricted to $\mathfrak{R}^+$ with the mutual information L-ensemble kernel $L_{t,1} = I + \sigma^{-2}K_{t,1}$. \citet{kathuria_batched_2016} provide a bound for UCB-DPP-SAMPLE as a variation of the $O\left(\sqrt{TB \beta_{TB} \gamma_{TB}}\right)$ bound for GP-UCB-PE. However, as we illustrate in Appendix \ref{appendix:kathuriabound}, their bound is \emph{necessarily worse} than the existing one for GP-UCB-PE. The concurrent work of \citet{nguyen_optimal_2021} is another recent example of DPP usage in BBO diversification, proposing DPP sampling (with the DPP kernel informed by a GP posterior) as a method of diverse batch selection and demonstrating good performance in experimental tasks, but without known theoretical regret guarantees. \section{THE DPP-BBO FRAMEWORK}\label{dppbboframework} A key insight our approach relies on is to view Thompson Sampling as a procedure that samples at each step from a \textit{maximum distribution} $p_{\max}$ over $\mathcal{X}$, so that $x_t \sim p_{\max, t}$ with \begin{gather} p_{\max,t}(x) = \mathbb{E}_{\tilde{f} \sim \text{Post}_t}\bigg[ \mathds{1}[x = \argmax_{x^\prime \in \mathcal{X}}\tilde{f}(x^\prime)]\bigg] \text{.} \end{gather} A simple approach towards Batched Thompson Sampling is to obtain a batch $X_t$ of evaluation points (with $|X_t| = B$) by sampling $B$ times from the posterior in each round. This can again be interpreted as \begin{equation} X_t \sim P_{\max,t} ~ \text{with} ~ P_{\max,t}(X) = \prod_{x_b \in X} p_{\max,t}(x_b) \text{.} \end{equation} This way, we can view Thompson Sampling or any other randomized Batch BO algorithm as iteratively sampling from a batch distribution over $\mathcal{X}^B$ dependent on $t$. The main downside of this simple approach is that {\em independently} obtaining multiple samples may lead to redundancy; the sketch below makes this concrete.
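On a discrete domain, sampling a batch from $P_{\max,t}$ is straightforward: draw $B$ independent functions from the posterior and maximize each. The sketch below (ours, purely illustrative) does exactly this; nothing prevents the $B$ maximizers from (nearly) coinciding, which is the redundancy just described.

\begin{verbatim}
import numpy as np

def sample_pmax_batch(mu, cov, B, rng):
    # X_t ~ P_max,t: B independent Thompson draws, one argmax each
    jitter = 1e-8 * np.eye(len(mu))       # numerical stabilizer
    fs = rng.multivariate_normal(mu, cov + jitter, size=B)
    return np.argmax(fs, axis=1)          # indices into the domain
\end{verbatim}

Combined with the loop sketched in Section \ref{background}, one would set \texttt{batch = domain[sample\_pmax\_batch(mu, cov, B, rng)]}.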
As a remedy, in our DPP-BBO framework, we modify such sampling distributions by {\em reweighing} them by a DPP likelihood. This technique is general, and allows us to apply DPP diversification to {\em any} randomized BBO algorithm with batch sampling likelihood $P_{\text{A},t}(X)$. \begin{definition}[DPP-BBO Sampling Likelihood]\label{dppbbodef} The batch sampling likelihood of generic DPP-BBO at step $t$ is \begin{equation} P_{\text{DPP-BBO}, t}(X) \propto P_{\text{A}, t}(X) \det({L_t}_X) \end{equation} with $L_t$ being a DPP L-ensemble kernel defined over the domain $\mathcal{X}$. \end{definition} Notice that the domain $\mathcal{X}$ does not need to be discrete, even though we introduced the approach on discrete ground sets in order to simplify notation. This is in contrast to the existing DPP-based BO algorithm from \citet{kathuria_batched_2016}, which requires the domain to be discrete in order to efficiently sample the DPP restricted to the arbitrary region $\mathfrak{R}^+$ in the general case. We now proceed to justify our formulation, defining the DPP-Thompson Sampling (DPP-TS) procedure in the process. \subsection{The DPP quality-diversity decomposition} DPPs capture element diversity but also take into account element \textit{quality} independently of the similarity measure, as illustrated by \citet{kulesza_determinantal_2012}. Namely, L-ensemble DPPs can be decomposed into a quality-diversity representation, so that the entries of the L-ensemble kernel for the DPP are expressed as $L_{ij} = q_i \phi_i^\top \phi_j q_j$ with $q_i \in \mathbb{R}^+$ representing the \textit{quality} of an item $i$, and $\phi_i \in \mathbb{R}^m$, $\|\phi_i\| = 1$ being normalized \textit{diversity} features. We also define $S$ with $S_{ij} = \phi_i^\top \phi_j$. This allows us to represent the DPP model as $P_L(X) \propto \left(\prod_{i \in X} q_i^2 \right) \det(S_X)$. We then consider a k-DPP with L-ensemble kernel $L$ in its quality-diversity representation, and re-weigh the quality values of items by their likelihood under a Bayesian Optimization random sampling scheme $P_{A,t}(x)$ such as Thompson Sampling $P_{\max,t}(x)$. Following this approach, we can obtain a new k-DPP likelihood by renormalizing the product of the Thompson Sampling likelihood of the batch $P_{\max}$ and an existing DPP likelihood $P_L$ for $X = \lbrace x_1, \ldots, x_B \rbrace$: \begin{align} \label{pbatch} P_{\text{DPP-TS}}(X) & \propto \left(\prod_{x_b \in X} p_{\max}(x_b) \right) P_L(X)\\ & \propto \left(\prod_{x_b \in X} p_{\max}(x_b) \right) \det(L_X)\\ & \propto \left(\prod_{x_b \in X} p_{\max}(x_b) \cdot q_{x_b}^2 \right) \det(S_X) \end{align} The result is a k-DPP with L-ensemble kernel $\tilde{L}_{ij} = \sqrt{p_{\max}(x_i)p_{\max}(x_j)}L_{ij}$, generalizing the sampling distribution for batched TS to a stochastic process with repulsive properties. To recover the original batched TS, we just need to set $L_t = I$. \subsection{The Mutual Information Kernel} For our choice of kernel, we follow the insight from \citet{kathuria_batched_2016} and use $L_t = I + \sigma^{-2} K_t$, with $K_t$ being the GP posterior kernel at step $t$. Consequently, the DPP loglikelihood of a set $X$ at time $t$ is proportional to the mutual information between the true function $f$ and the observations obtained from $X$: $I(f_X;\mathbf{y}_X|\mathbf{y}_{1:t-1,1:B}) = \frac{1}{2} \log \det (I + \sigma^{-2}{K_t}_X)$ (see Appendix \ref{infobackground}).
This is an example of a so-called {\em Regularized k-DPP}, a k-DPP such that a symmetric positive semidefinite regularization matrix $A$ is added to an original unregularized L-ensemble DPP kernel, for the particular case of $A = \lambda I$. In such a setting, we allow for the same element to be selected multiple times and enforce that any set $X$ must have nonzero probability of being selected. By tuning the strength of the regularization, we can tune how extreme we wish our similarity repulsion to be. \begin{definition}[DPP-TS Sampling Likelihood]\label{dpptsdef} The batch sampling likelihood of DPP-TS at step $t$ is \begin{equation} P_{\text{DPP-TS}, t}(X) \propto P_{\max, t}(X) \det(I + \sigma^{-2}{K_t}_X) \text{.} \end{equation} \end{definition} In Figure~\ref{fig:pmaxfigs}, we illustrate the $|X|=2$ case to compare the original $P_{\max}$ TS distribution, a TS variant with hallucinated observations, and $P_{\text{DPP-TS}}$ with its repulsion properties. \subsection{Markov Chain Monte Carlo for DPP-BBO} Sampling from the mutual information DPP component $\det(I + \sigma^{-2}{K_t}_X)$ of DPP-TS on its own can be done easily and efficiently, as numerous algorithms exist for both exact and approximate sampling from k-DPPs \citep{kulesza_determinantal_2012}. Likewise, we assume we are in a setting in which Thompson Sampling on its own can be performed relatively efficiently, as sampling from $P_{\max, t}$ reduces to sampling a function realization $\tilde{f}$ from the posterior, e.g. $\text{GP}(\mu_t,K_t)$, and maximizing $\tilde{f}$ over $\mathcal{X}$. However, when sampling from the product of the two distributions, we must resort to tools of approximate inference. The main issue with adopting standard approaches is that computation of the explicit likelihood $P_{\text{DPP-TS}, t}$ is {\em doubly intractable}: computation of $P_{\max, t}$ is intractable on its own, and it appears in the numerator of $P_{\text{DPP-TS}, t}$ before normalization. Our approach for sampling from $P_{\text{DPP-TS}, t}$ relies on a Markov Chain Monte Carlo (MCMC) sampler. We construct an ergodic Markov Chain over batches from $\Omega = \{ X ~|~ \; X \subset \mathcal{X}, |X| = k \}$ with transition kernel $T(X^{\prime} | X)$ such that the detailed balance equation $Q(X)T(X^{\prime} | X) = Q(X^{\prime})T(X | X^{\prime})$ is satisfied almost surely with respect to $P_{\text{DPP-TS}, t}$, with $Q(X) = P_{\max, t}(X)\det({L_t}_X)$ being the unnormalized potential of $P_{\text{DPP-TS}, t}$. If $Q(X)$ were tractable, we could use the standard Metropolis-Hastings algorithm \citep{hastings_monte_1970}, which satisfies the detailed balance equation. The problem with naively using Metropolis-Hastings MCMC sampling is that our $Q(X) = P_{\max, t}(X)\det({L_t}_X)$ contains $P_{\max, t}(X) = \prod_{x_b \in X} p_{\max, t}(x_b)$, which is intractable and cannot be computed on the fly. As previously stated, the only thing we can easily do is sample from it by sampling $\tilde{f}$ and then maximizing it. However, if we modify the standard Metropolis-Hastings MCMC algorithm by using $p_{\max, t}$ proposals, we obtain Algorithm \ref{algomcmc}, which satisfies detailed balance. We refer to Appendix \ref{mcmcappendix} for the proof. This algorithm can be interpreted as a variant of an existing k-DPP sampler proposed by \citet{anari_monte_2016}; a Python transcription is sketched below, followed by the formal pseudocode in Algorithm \ref{algomcmc}.
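The following sketch (ours, for illustration; it assumes a finite domain whose points are indexed by integers, with \texttt{L} the full L-ensemble matrix and \texttt{sample\_pmax} a routine returning one posterior maximizer index) transcribes the sampler into Python. Because proposals are drawn from $p_{\max, t}$ itself, the intractable $P_{\max, t}$ factors cancel out of the Metropolis ratio, and only the determinant ratio remains in the acceptance probability.

\begin{verbatim}
import numpy as np

def dpp_ts_mcmc(sample_pmax, L, X_init, n_steps, rng):
    # Markov chain over batches targeting P(X) ~ P_max(X) * det(L_X)
    X = list(X_init)
    for _ in range(n_steps):
        b = rng.integers(len(X))       # uniformly pick a slot to replace
        X_new = list(X)
        X_new[b] = sample_pmax()       # candidate x' ~ p_max,t
        ratio = (np.linalg.det(L[np.ix_(X_new, X_new)])
                 / np.linalg.det(L[np.ix_(X, X)]))
        if rng.random() < min(1.0, ratio):   # alpha from Algorithm 1
            X = X_new                  # accept the swap
    return X
\end{verbatim}

With the mutual information kernel, \texttt{L} is \texttt{np.eye(n) + K / sigma2} for a posterior covariance matrix \texttt{K} over a domain of size \texttt{n}, and \texttt{sample\_pmax} can reuse a single Thompson draw as in the earlier sketch. Note that in this index-based sketch a proposal that duplicates an element already in the batch is always rejected, since the restricted determinant vanishes.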
\begin{algorithm}[H] \caption{DPP-TS MCMC sampler}\label{algomcmc} \begin{algorithmic} \State pick random initial batch $X$ \Repeat \State uniformly pick point $x_b \in X$ to replace \State sample candidate point $x_b^{\prime} \sim p_{\max, t}(x_b^{\prime})$ \State define $X^{\prime} = \left(X \setminus \{x_b\}\right) \cup \{x_b^{\prime}\}$ \State accept with probability $\alpha = \min \left\lbrace 1, \frac{\det({L_t}_{X^{\prime}})}{\det({L_t}_X)} \right \rbrace$ \If {accepted} \State $X = X^{\prime}$ \EndIf \Until{converged} \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{DPP-TS Algorithm}\label{dpptsalgo} \begin{algorithmic} \State \textbf{Input:} Action space $\mathcal{X}$, GP prior $\mu_1$, $k_1(.,.)$, history $D_0 = \lbrace\rbrace$ \For{$t = 1, \ldots, T$} \State Sample $X_t \sim P_{\text{DPP-TS}, t}(X_t)$ with Alg.\ \ref{algomcmc} \State Observe $y_{t,b} = f(x_{t,b}) + \epsilon_{t,b}$ for $b \in [1,B]$ \State Add observations to history $D_t = D_{t-1} \cup \lbrace (x_{t,1},y_{t,1}),\ldots,(x_{t,B},y_{t,B}) \rbrace$ \State Update the GP with $D_t$ to get $\mu_{t+1}$, $k_{t+1}(.,.)$ \EndFor \end{algorithmic} \end{algorithm} \subsection{DPP-TS} Given the sampling distribution (Definition \ref{dpptsdef}) and the above MCMC algorithm, we can now fully specify the overall procedure for our DPP-TS sampling algorithm, summarized in Algorithm \ref{dpptsalgo}. \section{BAYESIAN REGRET BOUNDS} We now establish bounds on the Bayesian regret. Instead of assuming the existence of a fixed true $f$, we assume that the true function is sampled from a Gaussian Process prior $f \sim GP(0,K)$. In particular, our regret bounds are obtained on a variant of Bayes regret called Bayes Batch Cumulative Regret $\text{BBCR}_{T,B} = \mathbb{E}\left[ \sum_{t=1}^T \min_{b \in [1,B]} r_{t,b} \right] = \mathbb{E}\left[ \sum_{t=1}^T \min_{b \in [1,B]} \left( f(x^{\star}) - f(x_{t,b}) \right) \right]$ which only considers the best instantaneous regret within each batch, as we make use of proof techniques from \citet{contal_parallel_2013} involving such a formulation. It is straightforward to see that by bounding BBCR we at the same time bound the Bayes Simple Regret (introduced in Section \ref{background}), as $\text{BSR}_{T,B} \leq \text{BBCR}_{T,B} / T$, similarly to how $\text{BSR}_{T,B} \leq \text{BCR}_{T,B} / TB$. \subsection{Improved bound on BBCR for Batched Thompson Sampling} Our first theoretical contribution is an improved version of the bound on Bayesian Simple Regret from \citet{kandasamy_parallelised_2018}. Our version of the algorithm requires no initialization procedure to guarantee sublinear regret, in contrast to prior work. Unlike the original Gaussian TS Bayesian bounds from \citet{russo_learning_2014}, \citet{kandasamy_parallelised_2018} analyze the problem over a continuous domain. Therefore, their analysis requires an additional assumption previously used in the Bayesian continuous-domain GP-UCB bound \citep{srinivas_gaussian_2010}. \begin{assumption}[Gradients of GP Sample Paths]\label{gradassumpt} Let $\mathcal{X} \subseteq [0,l]^d$ be compact and convex with $d \in \mathbb{N}$ and $l>0$, and let $f \sim \text{GP}(0,K)$, where $K$ is a stationary kernel. Moreover, there exist constants $a,b > 0$ such that $P\left( \sup_{x\in \mathcal{X}} \left| \frac{\partial f(x)}{\partial x_i} \right| > L \right) \leq a e^{-(L/b)^2} \quad \forall L > 0, \forall i \in \lbrace 1, \ldots, d \rbrace$. \end{assumption} Using the above assumption, we can show the following theorem.
\begin{theorem}[BBCR Bound for Batched TS]\label{tsbatchbound} If $f \sim GP(0,K)$ with covariance kernel bounded by 1 and noise model $\mathcal{N}(0,\sigma^2)$, and either \begin{itemize} \item Case 1: finite $\mathcal{X}$ and $\beta_t = 2\ln\left(\frac{B(t^2 + 1)|\mathcal{X}|}{\sqrt{2\pi}}\right)$; \item Case 2: compact and convex $\mathcal{X} \subseteq [0,l]^d$, with Assumption \ref{gradassumpt} satisfied and $\beta_t = 4(d+1)\log(Bt) + 2d\log(dab\sqrt{\pi})$. \end{itemize} Then Batched Thompson Sampling attains Bayes Batch Cumulative Regret of \emph{ \begin{equation} {\text{BBCR}_{\text{TS}}}_{T,B} \leq \frac{C_1}{B} + \sqrt{ C_2 \frac{T}{B} \beta_T \gamma_{TB}} \end{equation} } with $C_1 = 1$ for Case 1, $C_1 = \frac{\pi^2}{6} + \frac{\sqrt{2\pi}}{12}$ for Case 2, and $C_2 = \frac{2}{\log(1 + \sigma^{-2})}$. \end{theorem} Therefore, ${\text{BSR}_{\text{TS}}}_{T,B} \leq \frac{C_1}{TB} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}}$. We point to Appendix \ref{tsbbcrboundproof} for the proof. The bound from \citet{kandasamy_parallelised_2018} (without an initialization phase) is similar, except for the presence of an $\exp(C)$ factor in the square root term, which scales linearly with $B$. Our version of the bound does not contain $\exp(C)$, thus allowing for sublinear regret in $B$. \subsection{BBCR bound for DPP-TS} We now shift the focus to our novel DPP-TS algorithm and obtain an equivalent bound. To do so, we modify the algorithm we developed and introduce DPP-TS-alt, so that for every batch: a) For the first sample in the batch $x_{\text{DPP-TS-alt}\; t,1}$, we sample from $p_{\max, t}$ as in standard Thompson Sampling; b) For all the other samples $x_{\text{DPP-TS-alt}\; t,b}$ with $b \in [2,B]$, we sample from the joint $P_{\text{DPP-TS }t}$, using the most up-to-date posterior covariance matrix $K_{t,1}$ to define the DPP kernel. The reason why we introduced DPP-TS as such, and not DPP-TS-alt in the first place, is both for simplicity and because in practice their performance is virtually identical (see Appendix \ref{appendix:addexp}). We have the following: \begin{theorem}[BBCR Bound for DPP-TS]\label{dpptsbatchbound} Consider the same assumptions as for Theorem \ref{tsbatchbound}. Then DPP-TS (in its DPP-TS-alt variant) attains Bayes Batch Cumulative Regret of \emph{ \begin{equation} {\text{BBCR}_{\text{DPP-TS}}}_{T,B} \leq \frac{C_1}{B} + \sqrt{ C_2 \frac{T}{B} \beta_T \gamma_{TB}} -C_3 \end{equation} } with $C_1 = 1$ for Case 1, $C_1 = \frac{\pi^2}{6} + \frac{\sqrt{2\pi}}{12}$ for Case 2, $C_2 = \frac{2}{\log(1 + \sigma^{-2})}$, and $-C_3 < 0$ (defined in Appendix \ref{dpptsbbcrboundproof}). \end{theorem} We can thus obtain ${\text{BSR}_{\text{DPP-TS}}}_{T,B} \leq \frac{C_1}{TB} - \frac{C_3}{T} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}}$. Moreover, this bound is necessarily tighter than that for standard TS: $\frac{C_1}{TB} - \frac{C_3}{T} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}} \leq \frac{C_1}{TB} + \sqrt{ C_2 \frac{1}{TB} \beta_T \gamma_{TB}}$. We point to Appendix \ref{dpptsbbcrboundproof} for the proof. \begin{figure*}[h] \includegraphics[width=\textwidth]{{figs/experiments.pdf}} \caption{Comprehensive experimental comparisons between DPP-TS and classic BBO techniques for Simple Regret (log scale): \textbf{a)} $f$ sampled from Squared Exponential GP; \textbf{b)} Rosenbrock; \textbf{c)} Styblinski-Tang; \textbf{d)} Michalewicz; \textbf{e)} PHE experiment with $f$ sampled from QFF Squared Exponential GP; \textbf{f)} Cox process sensing experiment on the Porto taxi dataset.
The named functions are defined in Section \ref{dppexpsynth}. Overall, DPP-TS outperforms or equals the other algorithms, quickly sampling good maximizers thanks to improved batch diversification.} \vspace{-0.3cm} \label{fig:experimentcomp} \end{figure*} \section{EXPERIMENTS AND COMPARISONS}\label{expsection} To make the case for our algorithmic framework's effectiveness in practice, we perform a series of benchmark tests on synthetic and real-world optimization problems, comparing DPP-BBO against classic BBO algorithms on Simple Regret metrics. (Cumulative Regret comparisons feature in Appendix \ref{appendix:addexp}.) \subsection{DPP-TS Comparisons on Synthetic Data} \label{dppexpsynth} We first compare DPP-TS on synthetic benchmarks against regular batched TS, GP-BUCB, hallucinated TS (Batched Thompson Sampling with hallucinations as in GP-BUCB), Pure DPP Exploration (DPP sampling from the DPP component of DPP-TS) and Uniform Exploration (uniform random sampling over the domain). We exclude algorithms that are not applicable to continuous domains. Figure \ref{fig:experimentcomp} details a number of such comparisons on synthetic benchmark functions under different settings, averaged over 15 experimental runs. For \ref{fig:experimentcomp}.a and \ref{fig:experimentcomp}.b we optimize over a discrete finite domain $\mathcal{X}$, using an exact Gaussian Process prior with a squared exponential kernel. The acquisition function is maximized by explicit maximization over the discretized domain. For \ref{fig:experimentcomp}.c and \ref{fig:experimentcomp}.d, we optimize over a continuous domain $\mathcal{X} = [0,l]^d$, using an approximate Gaussian Process prior specified with Quadrature Fourier Features \citep{mutny_efficient_2018}. These functions are additive and, hence, the optimization can be done dimension-wise. When optimizing the one-dimensional projection of the acquisition function we use first-order gradient descent with restarts. Specific benchmarks we use are the Rosenbrock function $f(x) = 100(x_2 - x_1^2)^2 + (x_1 - 1)^2$; the Styblinski-Tang function $f(x) = \frac{1}{2} \sum_{i=1}^d \left(x_i^4 - 16x_i^2 + 5x_i \right)$; and the Michalewicz function $f(x) = -\sum_{i=1}^d \sin\left(x_i\right)\sin^{2d}\left(ix_i^2 / \pi\right)$. Overall, DPP-TS converges very quickly to sampling good maximizers, almost always beating or at least equaling the Simple Regret performance of the other algorithms, while exhibiting low-variance behavior. The added diversity from the DPP sampling procedure appears to favor quickly finding better maxima while not getting stuck in suboptimal but high-confidence regions, as seems to often happen to GP-UCB. A series of additional experiments is discussed in Appendix \ref{appendix:addexp}, including experiments on Cumulative Regret, DPP-TS with parametrized DPP kernels, and a comparison between DPP-TS and DPP-TS-alt which shows them to be of equivalent performance in practice. \subsection{DPP-Perturbed History Exploration} To further demonstrate the effectiveness and versatility of the DPP-BBO framework, we apply it to the recently introduced Perturbed History Exploration (PHE) algorithm \citep{kveton_perturbed-history_2020}. PHE is a BO algorithm which is agnostic of the specific model $f_\theta$ chosen for modeling $f$.
Assuming that rewards are bounded, and given a parameter $a$, the algorithm introduces $a$ {\em pseudo-rewards} for each observation in its global history, and at each step maximizes its learned perturbed $f_\theta$ to propose a new evaluation point. We can interpret this procedure as sampling from $p_{\text{PHE}, t}(x)$, with the stochastic component stemming from the pseudo-reward generation. Given this, we can define DPP-PHE as $P_{\text{DPP-PHE}, t}(X) \propto \left(\prod_{x \in X} p_{\text{PHE}, t}(x)\right) \det(I + \sigma^{-2}{K_t}_X)$ where $K_t$ is an approximation of the Bayesian posterior covariance for the $f_\theta$ model. Figure \ref{fig:experimentcomp}.e experimentally compares PHE and DPP-PHE for $a=0.5$ and $a=1$ on a synthetic function (over a continuous $\mathcal{X}$) sampled from a 1-d squared exponential GP prior, using a QFF GP regression as the internal model. We can see that DPP-PHE improves on the Simple Regret when compared to regular PHE for the same $a$. \subsection{DPP-TS and Cox Process Sensing} \looseness -1 To benchmark our DPP-TS algorithm in a real-world setting and demonstrate the versatility of the modeling choice, we turn to a Cox Process Sensing problem in the form of taxi routing on a 2-dimensional city grid, as considered by \citet{Mutny2021a}. Given a dataset of geo-localized taxi cab hails in Porto and a subdivision of the city into an $8 \times 8$ grid, we aim to learn the best locations at which to schedule a fleet of taxis; at the beginning of each day (corresponding to a single iteration), we only observe the taxi hailing events in the grid cells that had vehicles scheduled to them. \looseness -1 We put a Gaussian process prior on the unknown rate function of a Poisson process, yielding a Cox Process with Poisson Process likelihood. The likelihood of observing a realization $\mathcal{D} = \lbrace x_n \rbrace_{n=1}^N$ over the domain $\mathcal{X}$ for a Poisson Process with rate function $\lambda(.)$ is $p(\mathcal{D} \mathop{} | \mathop{} \lambda(.)) = \exp(-\int_{\mathcal{X}}{\lambda(x)\mathop{}\!\mathrm{d}x})\prod_{n}{\lambda(x_n)}$. This Poisson process specification is used in the construction of a Cox process model, which is $p(\mathcal{D},\lambda(.),\Theta) = p\left(\mathcal{D}\mathop{}|\mathop{}\lambda(.)\right) \cdot p(\lambda(.)\mathop{}|\mathop{}\Theta) \cdot p(\Theta)$, with $\lambda(.)$ being a Gaussian Process conditioned on being positive-valued over the domain. We adopt the inference scheme, along with the approximation scheme to maintain positivity of the rate function, from \citet{Mutny2021}. The samples from the posterior are obtained via Langevin dynamics. In our experiment, we compare TS for Cox Process Sensing from \citet{Mutny2021a} with our DPP-TS approach, leveraging our diversifying process to improve city coverage by our scheduled taxi fleets. As the DPP kernel, we use the mutual information kernel that is obtained when the posterior for the rate function is approximated with a Gaussian distribution, known as the Laplace Approximation. In Figure \ref{fig:experimentcomp}.f we depict the allocation of 5 taxis to city blocks and report the simple regret. DPP-TS reliably achieves lower simple regret than standard Thompson Sampling sensing with resampling.
\section{CONCLUSIONS} In this work we introduced DPP-BBO, a natural and easily applicable framework for enhancing batch diversity in BBO algorithms, which works in more settings than previous diversification strategies: it applies directly to continuous domains; it applies when, due to approximations and non-standard models, we are unable to compute hallucinations or confidence intervals (as in the Cox process example); and, more generally, it can be used in combination with any randomized BBO sampling scheme and arbitrary diversity kernel. Moreover, for DPP-TS we show improved theoretical guarantees and strong practical performance on simple regret. \subsubsection*{Acknowledgements} This research was supported by the ETH AI Center and the SNSF grant 407540 167212 through the NRP 75 Big Data program. This publication was created as part of NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation. \bibliographystyle{apalike}
\section*{Introduction} Counterfactuals are subjunctive conditionals with a false antecedent: "If I were 5 meters tall, I would be the tallest human being alive", "If there were no gravity, it would be harder to walk down the streets", "If I had chosen to bet tails, I would have won", etc. If we try to provide a formal semantics for these statements, we will find out that counterfactual conditionals require intensional logic: the validity of expressions of the form "if it were $\phi$, then it would be $\psi$" cannot be defined just in terms of the truth-values of $\phi$ and $\psi$. Counterfactuals with a true consequent may be either true, as in (2), or false, as in (1), and the same holds for counterfactuals with false consequents (the annotation "$0 > 1$; $v$" records a conditional with a false antecedent and a true consequent whose overall truth-value is $v$): \begin{equation} \mbox{If my heart stopped, I would still be able to run (0 $>$ 1; 0)} \end{equation} \begin{equation} \mbox{If I were bald, I would still be able to run (0 $>$ 1; 1)} \end{equation} Moreover, strict implication is not a good candidate either: if we try to use formulas of the form $\Box (\phi \rightarrow \psi)$ of any normal modal logic, where $\Box$ is interpreted as the necessity operator, \textit{strengthening the antecedent} will be valid, i.e. \[ \Box (\phi \rightarrow \psi) \vdash \Box((\phi \land \chi)\rightarrow \psi) \] But it is strange to think that from \begin{center} \textit{If you had cut your finger, you would not need any medical help} \end{center} it follows that \begin{center} \textit{If you had cut your finger and your throat, you would not need any medical help} \end{center} The most common semantic framework for the logic of counterfactual conditionals, which escapes these pitfalls, is the so-called comparative similarity analysis, proposed by Robert Stalnaker \cite{stalnaker1968} and David Lewis \cite{lewis1973}. The basic idea may be expressed as follows: \begin{center} \textit{"If it were $\phi$, then it would be $\psi$"} is true at some possible world $w$ iff in all $\phi$-worlds accessible from $w$, which differ minimally from $w$ itself, $\psi$ is true. \end{center} Since we evaluate the expression with respect to a subset of the accessible worlds that depends on the antecedent, \textit{strengthening the antecedent} is easy to falsify. The proper formal definition of the comparative similarity semantics and the corresponding logic are well-established results. But new questions arise when we think of adding temporality to the framework. Since the very idea of counterfactual statements might imply alternative possibilities (we will discuss this in detail in Section 2), the most natural way to represent the notion is to define the logic of counterfactuals with respect to branching time structures. But then we face both formal and conceptual problems. In this paper we will give an overview of these concerns and propose a minimal solution. In Section 1, the simplest logic of counterfactuals \textbf{P} is presented. In Section 2, we discuss the main metaphysical problems connected with time and counterfactual conditionals: determinism versus indeterminism, and the difference between historical and plain counterfactuals. Section 3 presents the Ockhamist branching time temporal logic OBTL, which has several nice properties suitable for counterfactuals. In the last section we combine the logics presented in Sections 1 and 3, taking into account the concerns observed in Section 2. \section{Minimal logic P} We will now precisely present the logic of counterfactuals and its comparative similarity semantics. The logic is the simplest system \textbf{P}.
Other popular systems are usually extensions of \textbf{P}: we can add more axioms depending on our metaphysical views on possible worlds and the similarity relations on them. We will look at \textbf{P} for the sake of simplicity. The syntax of our language is defined in BNF form as follows. For a countable set of propositions $Var$ and $p \in Var$: \[\phi, \psi : = p \: | \: \neg \phi \: | \: \phi \lor \psi \: | \: \phi \rightsquigarrow \psi \] $\phi \rightsquigarrow \psi$ means "if it were $\phi$, then it would be $\psi$". Other Boolean connectives are interpreted in a standard way. Semantically our logic is defined on frames of the form: \[ \mathcal{F}_1 = \langle W, \langle \lessdot_w \rangle_{w \in W}, f \rangle \] \begin{itemize} \item $W \neq \emptyset$ is a non-empty set of possible worlds. \item $\langle \lessdot_w \rangle_{w \in W}$ is a family of strict partial orderings on $W$, one for every world $w \in W$. The orderings represent comparative similarity relations. For example, for some $x, y, w \in W$, $x \lessdot_w y$ means that $x$ is more similar to $w$ than $y$. For every $w \in W$, its corresponding comparative similarity relation is defined on a subset $W_w \subseteq W$ of worlds accessible from $w$. \item $f: W \mapsto \{ \lessdot_w \}_{w \in W}$ is a function mapping each world to its corresponding strict partial ordering. \end{itemize} As usual, a frame is extended to a model $\mathcal{M} = \langle\mathcal{F}_1 ,\nu \rangle$, where $\nu: Var \mapsto 2^W$ is an evaluation function, mapping each proposition to the subset of possible worlds in which that proposition is true. The formal semantics is natural here: $\\$ \noindent$\mathcal{M}, w \models p \mbox{ iff } w \in \nu(p)$\\ $\mathcal{M}, w \models \neg \phi \mbox{ iff } \mathcal{M}, w \not\models \phi $\\ $\mathcal{M}, w \models \phi \lor \psi \mbox{ iff } \mathcal{M}, w \models \phi \mbox{ or } \mathcal{M},w \models \psi$ \\ $\mathcal{M}, w \models \phi \rightsquigarrow \psi \mbox{ iff } \mathcal{M}, v \models \psi \mbox{ for every world } v\in W_w \mbox{ satisfying } \phi \mbox{ that is } \lessdot_w\mbox{-minimal (i.e., maximally similar to } w\mbox{) among the } \phi\mbox{-worlds in } W_w$ $\\$ The adequate logic of the frames defined above is namely \textbf{P}. It is sound and complete with respect to the class of frames defined earlier \cite{burg1981}: $ \\$ \noindent(PL) All tautologies of classic propositional logic \\ (CI) $\phi \rightsquigarrow \phi$\\ (CC) $((\phi \rightsquigarrow \psi) \land (\phi \rightsquigarrow \chi)) \rightarrow (\phi \rightsquigarrow (\psi \land \chi))$\\ (CW) $(\phi \rightsquigarrow \psi) \rightarrow (\phi \rightsquigarrow (\psi \lor \chi))$\\ (SA) $((\phi \rightsquigarrow \psi) \land (\phi \rightsquigarrow \chi)) \rightarrow ((\phi \land \psi)\rightsquigarrow \chi) $\\ (AD) $((\phi \rightsquigarrow \chi) \land (\psi \rightsquigarrow \chi)) \rightarrow ((\phi \lor \psi)\rightsquigarrow \chi)$ \\ (REA) If $\vdash \phi \leftrightarrow \psi$ then $(\phi \rightsquigarrow \chi) \leftrightarrow (\psi \rightsquigarrow \chi)$\\ (REC) If $\vdash \phi \leftrightarrow \psi$ then $(\chi \rightsquigarrow \phi) \leftrightarrow (\chi \rightsquigarrow \psi)$\\ \noindent (MP) \AxiomC{$\phi, \phi \rightarrow \psi$} \UnaryInfC{$\psi$} \DisplayProof $\\$ Note that (SA) is the restricted form of antecedent strengthening (also known as cautious monotonicity); the unrestricted form is invalid, as discussed in the Introduction. As we have noted before, other axioms can be added if we impose more metaphysically motivated restrictions on the comparative similarity relation. But we will omit this discussion and work with the simplest system.
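To see the truth clause for $\rightsquigarrow$ in action, here is a minimal model-checking sketch (our illustration, not part of the formal development) for finite frames: worlds are integers, \texttt{less[w]} encodes $\lessdot_w$ as a set of pairs, and formulas are given by their extensions.

\begin{verbatim}
def most_similar(W_w, less_w, phi):
    # the <_w-minimal (most similar to w) phi-worlds within W_w
    cand = [v for v in W_w if v in phi]
    return [v for v in cand if not any((u, v) in less_w for u in cand)]

def cf(w, acc, less, phi, psi):
    # M, w |= phi ~> psi iff psi holds at every most-similar phi-world
    return all(v in psi for v in most_similar(acc[w], less[w], phi))

# toy model: from world 0, world 1 is more similar than world 2
acc  = {0: [1, 2]}        # W_0, the worlds accessible from 0
less = {0: {(1, 2)}}      # 1 <_0 2
phi, psi = {1, 2}, {1}    # extensions of phi and psi
print(cf(0, acc, less, phi, psi))   # True: the closest phi-world is 1
print(cf(0, acc, less, {2}, psi))   # False: strengthening fails
\end{verbatim}

The second call illustrates the failure of unrestricted antecedent strengthening: restricting the antecedent to $\{2\}$ changes the most similar antecedent-world, and the consequent no longer holds there.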
\section{Metaphysical concerns} Now we need to apply the logic we have defined earlier to temporally sensitive expressions. But before going into the details of the temporal logic framework we want to use, we need to sort out several metaphysical concerns about time and alternative courses of events. \subsection{Plain versus Historical Counterfactuals} Some subjunctive conditionals may be interpreted in two different ways. For example, \begin{center} \textit{If I were 3 meters tall, I would be the tallest person in the world} \end{center} We can understand it as a claim merely about the notion of being me: if we take that notion and change one of the properties it implies -- namely, my height -- the resulting notion of me will necessarily imply the property of being the tallest man in the world. The other way is to think of it as a thought about the actual me and an alternative course of events. If we imagine some alternative history, as similar to the real history as possible, in which I am in fact 3 meters tall, then in that alternative history I will be the tallest. But on that interpretation, it may be the case that a possible course of events resulting in me being that tall presupposes my having a different genetic heritage; consequently, it may presuppose a different course of human evolution, in which 3 meters is the average height of a mature man, so the original statement might be false. Or another example: \begin{center} \textit{If John were not late, he wouldn't drive when it's dark} \end{center} Again, we can understand it as saying that the very notion of John not being late presupposes the property of not driving in the darkness, or we can think of just the alternative course of events in the real world in which John was not late, while every other aspect of the history stays the same as long as it is compatible with John not being late. It seems obvious that the second interpretation is more natural in that context. In some other cases, the first way of interpreting counterfactuals will be more natural: \begin{center} \textit{If the square root of 2 were a rational number, it would be possible to express it as a ratio of 2 integers} \end{center} It is hard even to imagine an alternative course of events in the real world in which the square root of 2 would be a rational number, so the expression is more likely to be a purely conceptual one. Wawer and Wroński proposed the distinction between \textit{plain} and \textit{historical} counterfactuals to capture these two interpretations \cite{waw2015}. While pure comparative similarity analysis is enough for plain counterfactuals, we need to combine it with a suitable temporal logic to capture historical ones. \subsection{Local Miracles and Indeterminism} The next problem is the indeterminism associated with historical counterfactual conditionals: we are considering the possibility of an alternative course of events in the actual world, i.e. at some point, given the same past and settled laws of nature, the future might be different. Several authors argue that we should either stay neutral or take the side of determinism when we are dealing with counterfactuals \cite{mul2007}. The most remarkable case is Lewis himself: in his article "Are We Free to Break the Laws?", Lewis argues for compatibilism (the idea that causal determinism is compatible with free will) with the help of a counterfactual definition of the ability to act otherwise \cite{lewis1981}.
We will consider the following premises: \begin{enumerate} \item The events of the past ($H$) and the laws of nature ($L$) causally determine the future course of events. \item Every human action, for example, raising one's hand ($R$), as an event, is predetermined by $H$ and $L$: $\Box((H \land L) \rightarrow R)$. \item Therefore, if one acts against this predetermination, for example by not raising one's hand ($\neg R$), then either the past was different ($\neg H$) or the laws of nature were broken ($\neg L$). \end{enumerate} Lewis claims that we do have the ability to act otherwise, i.e. to do something such that, if we did it, the laws of nature would be broken ($\neg R \rightsquigarrow \neg L$). But it does not mean that we are able to break the laws of nature: that hypothetical law-breaking event might occur before our action and not be caused by us. As Placek and Müller noticed \cite{mul2007}, the position seems a bit ambivalent: while stating determinism, Lewis speaks of \textit{local miracles}, i.e. events after which the course of events starts to deviate from the path predetermined by the past and the laws of nature. This picture, where we have different histories and forks, is almost isomorphic to the indeterministic theory we need for historical counterfactuals. We will see the similarities in the next section. \section{Branching Time Temporal Logic} As we have noticed before, the very notion of historical counterfactual statements implies a certain kind of indeterminism, especially when it comes to time and so-called future contingents. Luckily, we have a well-developed formal theory of indeterminism without a thin red line, i.e. a theory which states that alternative courses of events are real and there is no unique actual scenario that happened, happens and will continue happening. It is the theory of Ockhamist branching time, developed by Arthur Prior, Richmond Thomason, Nuel Belnap and other contributors \cite{belnap2001}. If we don't want to stick to indeterminism and wish to save the Lewisian view, we can regard these alternative courses of events not as real, but rather as mere possibilities of local miracles. The main advantage of the Ockhamist branching time theory in the context of counterfactuals is that it allows both expressions about time and expressions about historical possibility/necessity. "It will be hot tomorrow, but it is possible that it will be cold" is a consistent, perfectly well-formed statement, and so is "if it were the case that it will be cold, we would need to find a warm coat". The basic idea is to represent time with the help of a generalization of McTaggart's B-series: we postulate a non-empty set of moments $M$, which is ordered by a precedence relation: $m_1 \succ m_2$ means that the moment $m_1$ comes earlier than the moment $m_2$. This relation should be a partial order, i.e. it should allow some moments to be incomparable with respect to precedence; we will think of such pairs of moments in the following manner: if $m_1 \not\succ m_2$ and $m_2 \not\succ m_1$, then $m_1$ and $m_2$ belong to different \textit{histories}, i.e. full courses of events, ways things might be. Then we may define certain linearly ordered subsets of $M$, which will represent histories. Obviously, these subsets should be \textit{maximal} linearly ordered subsets of $M$ in order to represent full histories. Thus, the flow of time may be represented by a tree-like structure, where every node designates a moment, edges designate precedence relations, and every maximal path designates a history (see Figure 1 for an illustration).
Another important feature is \textit{backward linearity}: for any $m_1, m_2, m_3 \in M$, if $m_1 \succ m_3$ and $m_2 \succ m_3$, then $m_2 \succ m_1$ or $m_1 \succ m_2$ or $m_1 = m_2$. It means that the past is settled: alternative possibilities lie only in the future. Now we can formally define our Ockhamist branching time temporal logic OBTL. The syntax can be presented in BNF form as follows. Given a countable set $Var$, for any propositional variable $p \in Var$: \[ \phi, \psi := p \: | \: \neg \phi \: | \: \phi \lor \psi \: | \: G \phi \: | \: H \phi \: | \: \Box \phi \] Atomic propositions and Boolean connectives have their standard meaning. $G \phi$ reads as "at every moment in the future, $\phi$", $H\phi$ -- "at every moment in the past, $\phi$", and $\Box \phi$ -- "it is historically necessary that $\phi$", which means that $\phi$ holds at the current moment in all possible alternative histories. As usual, all modal operators have duals: \begin{center} $F \phi \equiv \neg G \neg \phi $\\ $P \phi \equiv \neg H \neg \phi $\\ $\Diamond \phi \equiv \neg \Box \neg \phi$ \end{center} $F\phi$ stands for the truth of $\phi$ at \textit{some} moment in the future, $P \phi$ -- for $\phi$ at some moment in the past, and $\Diamond \phi$ -- for $\phi$ being true at the current moment in at least some possible history. Thus, the Ockhamist branching time temporal logic is defined on a frame \[\mathcal{F}_2 = \langle M, H, \succ \rangle\] where $M = \{ m_1, m_2,... \}$ is a non-empty countable set of moments, $\succ \subseteq M^2$ is a partial ordering defined on $M$, and $H = \{ h_1, h_2,...\}$ is a set of histories, each history being a maximal linearly $\succ$-ordered subset of $M$. A model is $\mathcal{M} = \langle \mathcal{F}_2, \nu \rangle$, where $\mathcal{F}_2$ is a frame and $\nu: Var \mapsto 2^{M \times H}$ is a standard evaluation function, mapping every atomic proposition to a set of moment/history pairs (we will call them \textit{points}). The key difference from linear temporal logics is that evaluation depends not only on moments, but on histories as well: the same moment may satisfy different formulas depending on which possible history is taking place. \begin{figure}[h] \centering \includegraphics{ex.pdf} \caption{An example of a BT frame} \label{fig:bt_frame} \end{figure} By $H_m$ we will denote the set of histories running through the moment $m$: $H_m = \{ h \: | \: m \in h \}$. Now we can provide the semantics for the OBTL language. \begin{align*} \mathcal{M}, m/h &\models p \mbox{ iff } m/h \in \nu(p) \\ \mathcal{M}, m/h &\models \phi \lor \psi \mbox{ iff } \mathcal{M}, m/h \models \phi \mbox{ or } \mathcal{M}, m/h \models \psi\\ \mathcal{M}, m/h &\models \neg \phi \mbox{ iff } \mathcal{M}, m/h \not\models \phi\\ \mathcal{M}, m/h &\models P\phi \mbox{ iff } \exists m' \in h\, (m'\succ m \land \mathcal{M}, m'/h \models \phi) \\ \mathcal{M}, m/h &\models F\phi \mbox{ iff } \exists m' \in h\, (m \succ m' \land \mathcal{M}, m'/h \models \phi)\\ \mathcal{M}, m/h &\models \Box \phi \mbox{ iff } \forall h' \in H_m\, (\mathcal{M}, m/h' \models \phi)\\ \end{align*} A complete axiomatization of OBTL is still an open issue. We will use the logic proposed by Mark Reynolds in \cite{rey2002}. Moreover, we are going to treat it as a black box: no interference with the OBTL axioms will be made; we will just extend the axiomatics with other systems and axioms, regardless of what the OBTL axiomatics actually looks like.
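To make the semantic clauses above concrete, the following sketch (ours, purely illustrative) evaluates formulas at moment/history points of a small finite frame. Note how $Fp$ holds at the point $m_0/h_1$ while $\Box Fp$ fails there: this is exactly the future-contingents pattern of "it will be hot tomorrow, but it is possible that it will be cold".

\begin{verbatim}
# Illustrative only: the OBTL clauses on a finite frame.
HISTORIES = {"h1": ("m0", "m1", "m3"),
             "h2": ("m0", "m1", "m4"),
             "h3": ("m0", "m2")}
VAL = {"p": {("m3", "h1")}}  # nu(p): p holds only at the point m3/h1

def holds(formula, m, h):
    op, *args = formula
    line, i = HISTORIES[h], HISTORIES[h].index(m)
    if op == "atom":    # m/h |= p  iff  m/h in nu(p)
        return (m, h) in VAL[args[0]]
    if op == "not":
        return not holds(args[0], m, h)
    if op == "or":
        return holds(args[0], m, h) or holds(args[1], m, h)
    if op == "P":       # some earlier moment of the same history
        return any(holds(args[0], m2, h) for m2 in line[:i])
    if op == "F":       # some later moment of the SAME history
        return any(holds(args[0], m2, h) for m2 in line[i + 1:])
    if op == "box":     # every history running through m
        return all(holds(args[0], m, h2)
                   for h2, l in HISTORIES.items() if m in l)
    raise ValueError(op)

Fp = ("F", ("atom", "p"))
print(holds(Fp, "m0", "h1"))           # True: on h1, p will hold
print(holds(("box", Fp), "m0", "h1"))  # False: not settled at m0
\end{verbatim}

($G$ and $H$ are recovered from $F$ and $P$ via the dualities above.)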
We will stick to the following variant proposed by Reynolds, which is proved to be sound: $\\$ \noindent All tautologies of classical propositional logic \\ K4 for both the $G$ and $H$ operators\\ S5 for the $\Box$ operator\\ (L1) $\phi \rightarrow GP\phi$\\ (L1') $\phi \rightarrow HF \phi$\\ (L2) $F \phi \rightarrow G(F\phi \lor \phi \lor P \phi)$\\ (L2') $P \phi \rightarrow H(P \phi \lor \phi \lor F \phi)$\\ (L3) $\Box H \phi \equiv H \Box \phi$\\ (L4) $P \Box \phi \rightarrow \Box P \phi$\\ (L5) $\Box G \phi \rightarrow G \Box \phi$\\ (L6) $G \bot \rightarrow \Box G \bot$ $\\$ The logic is closed under the following inference rules: $\\$ \noindent(MP) \AxiomC{$\phi, \phi \rightarrow \psi$} \UnaryInfC{$\psi$} \DisplayProof \\ (RN) \AxiomC{$\phi$} \UnaryInfC{$\Box \phi$} \DisplayProof \AxiomC{$\phi$} \UnaryInfC{$G \phi$} \DisplayProof \AxiomC{$\phi$} \UnaryInfC{$H \phi$} \DisplayProof \vspace{3mm} \noindent(IRR) \AxiomC{$(p \land H \neg p)\rightarrow \phi$} \UnaryInfC{$\phi$} \DisplayProof where $\phi$ does not contain $p$. \section{Combining logics} \subsection{Degrees of strictness} Now we face our main task: how do we combine \textbf{P} with OBTL in the most meaningful and formally correct way? The biggest conceptual concern is the scope of the comparative similarity relations: which moment/history pairs should be comparable? In the original theory proposed by Lewis, we can compare logically/nomically accessible worlds (depending on how strict we want our counterfactual to be), i.e. we can compare how similar two arbitrary worlds $w_1, w_2$ are to $w$ provided that $w_1$ and $w_2$ are logically consistent with/share the same laws of nature as $w$ \cite{lewis1973}. When it comes to historical counterfactuals, there are plenty of candidates for the adequate restriction. The first one is \textit{co-presence}: we can compare only those moment/history pairs which occur at the same time in different histories. For example, if we want to evaluate the sentence \textit{"If I were polite today, you would not complain about how annoying I am with being rude all the time"}, we need to bear in mind exactly those points of alternative histories at which 1) I was polite on the given day and which 2) are co-present with the moment of utterance of the original sentence: if we consider a history where I am polite today, it is still hard to tell whether I would face the complaint in the nearest future, or 3 years ago, in that alternative history. We can even strengthen the co-presence restriction to historical accessibility, stating that only historically accessible moment/history pairs are comparable. \begin{figure}[h] \centering \includegraphics[scale=0.7]{tour} \caption{Tournament tree} \label{fig:tournament} \end{figure} To illustrate the distinction, let us consider the following example. We have a tournament of philosophers: Aristotle (a), Berkeley (b), Chrysippus (c) and Descartes (d) are participating. In Figure 2, we can see an Ockhamist branching time model of the tournament. It starts with the battle of Aristotle and Berkeley; the winner is going to compete with Chrysippus in the next round, and the winner of that round is going to challenge Descartes at the final stage (for the sake of simplicity, we have not graphically illustrated the evaluation function of this model, but the truth/falsity of the propositions we are going to mention should be obvious: for example, at the point $(b \: vs. \: c/h_1)$ it is true that Berkeley has won the previous round and will win the current one).
For example, suppose Aristotle wins the first match and faces Chrysippus. The fan of Stoicism states: \begin{center} \textit{"If Chrysippus were debating Berkeley, he would easily refute his claims and win"}. \end{center} If we take co-presence as the only restriction on the comparative similarity relation, then we need to take those moment/history pairs which are the most similar to $(a \: vs.\: c/h_3)$ and in which Chrysippus confronts Berkeley, drawn from the following set: \[\{(a \: vs.\: c/h_3), (a \: vs.\: c/h_4), (b \: vs.\: c/h_1), (b \: vs.\: c/h_2) \}\] Basically, the Chrysippus supporter is right with his counterfactual statement if $(b \: vs.\: c/h_2)$ is more similar to $(a \: vs.\: c/h_3)$ than $(b \: vs.\: c/h_1)$ is. But if we want to compare only historically accessible points, then we need to consider only $\{(a \: vs.\: c/h_3), (a \: vs.\: c/h_4) \}$ (a small computational sketch of both comparison classes is given below). Neither of these points models the situation of Chrysippus debating Berkeley, so the given counterfactual is vacuously true: it is historically impossible for the battle mentioned in the antecedent to occur. In that case, the Chrysippus supporter should be more accurate and reformulate his claim: \begin{center} \textit{"In the past it was true that if Berkeley had defeated Aristotle and were debating Chrysippus, the latter would easily refute Berkeley's claims and win"}. \end{center} Intuitively, the reformulation is more precise: by saying \textit{"If Chrysippus were debating Berkeley..."}, what was meant is the case in which the previous round, Aristotle versus Berkeley, ended differently. Obviously, historical accessibility is a stronger notion than co-presence: all historically accessible points are co-present. But even co-presence is not so simple. Let us consider the following counterfactual: \begin{center} \textit{"If I were 17 years old, I would not be able to buy a bottle of wine"}. \end{center} In terms of comparative similarity analysis, if we take co-presence as a valid restriction, we have to interpret that sentence in a very unnatural way: in some history, as similar to the actual one as possible, in which time goes in a deviant way and I am younger than the actual me \textit{at the same moment}, I am not able to purchase a bottle of wine. Whereas the most intuitively appealing interpretation is simply this: when I was 17, I couldn't buy a bottle of wine. \begin{figure}[h] \centering \includegraphics{timetrav.pdf} \caption{The bottle of wine example} \label{fig:wine} \end{figure} Figure 3 illustrates the corresponding model. If we evaluate our counterfactual at $(m_2/h_1)$, where I am 18 years old, then we should look either at the co-present $(m_4/h_2)$, where I am somehow 17 years old (maybe I have spent some time on another planet of the Solar System after turning 17), or at $(m_1/h_1)$, the actual past when I was 17. The example is not so artificial: it is common to express statements about the actual past in the counterfactual manner: \textit{"If I were younger, I would be able to...", "If today were yesterday, he might not be late"}, etc. But not all "if today were yesterday" cases are inconsistent with co-presence: sometimes it is important to preserve some essential features of the current moment. \textit{"If he were younger, he would marry her"} may be an example: obviously, it is not synonymous with \textit{"He would marry her in the past, when he was young"}. To conclude, historical counterfactuals are extremely vague, so we have to be careful not to impose overly strict definitions.
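Both candidate restrictions from the tournament discussion are mechanical enough to compute. The following sketch (ours, purely illustrative; only the first two rounds are encoded, so histories that differ only at the final stage share the same encoded prefix) reproduces the two comparison classes for the point $(a \: vs.\: c/h_3)$:

\begin{verbatim}
# Illustrative only: co-presence vs. historical accessibility
# in the tournament model (first two rounds only).
HISTORIES = {
    "h1": ("a_vs_b", "b_vs_c"),  # Berkeley wins round 1
    "h2": ("a_vs_b", "b_vs_c"),
    "h3": ("a_vs_b", "a_vs_c"),  # Aristotle wins round 1
    "h4": ("a_vs_b", "a_vs_c"),
}

def co_present(m, h):
    """Points occurring at the same temporal level as m/h."""
    lvl = HISTORIES[h].index(m)
    return {(line[lvl], h2) for h2, line in HISTORIES.items()}

def hist_accessible(m, h):
    """Points sharing the very same moment m."""
    return {(m, h2) for h2, line in HISTORIES.items() if m in line}

print(sorted(co_present("a_vs_c", "h3")))
# [('a_vs_c','h3'), ('a_vs_c','h4'), ('b_vs_c','h1'), ('b_vs_c','h2')]
print(sorted(hist_accessible("a_vs_c", "h3")))
# [('a_vs_c','h3'), ('a_vs_c','h4')]
\end{verbatim}

A comparative similarity ordering $\lessdot_{m/h}$ would then select the most similar points within whichever of these two classes one adopts.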
That is our motivation to favour as weak a definition as possible, allowing different context-dependent evaluations. We will regard neither co-presence nor historical accessibility as a necessary restriction on comparative similarity relations. A stronger system with co-presence was proposed by Canavotto \cite{canavotto2020}. Another motivation for abandoning the restrictions is the metaphysics behind Ockhamist branching time theory: all moment/history pairs occur in one and the same world, \textit{our world}. In terms of Lewis's theory, all points are both logically and nomically accessible from each other. \subsection{\textbf{P}+OBTL} Now we will formally describe the logic $\mathcal{L}_{PBT}$, corresponding to our understanding of historical counterfactuals. The language of the logic may be defined as follows. For a countable set of propositions $Var$ and $p \in Var$: \[ \phi, \psi := p \: | \: \neg \phi \: |\: \phi \lor \psi \: | \: \phi \rightsquigarrow \psi \: | \: G \phi \: | \: H \phi \: | \: \Box \phi \] As we can see, it is just a combination of the \textbf{P} and OBTL languages. The $\mathcal{L}_{PBT}$ frame is a tuple: \[ \mathcal{F}_3 = \langle M, H, \succ, \langle \lessdot_{m/h}\rangle_{m/h \in I} \rangle \] \begin{itemize} \item $M \neq \emptyset$ is a non-empty countable set of moments, $\succ$ is a partial ordering on $M$, and $H\subseteq 2^M$ is a set of histories, i.e. maximal linearly $\succ$-ordered subsets of $M$, just as in an OBTL frame. We will denote the set of all points in the frame by $I \subseteq (M \times H)$. \item $\langle \lessdot_{m/h}\rangle_{m/h \in I}$ is a tuple of strict partial orderings on $I$, defined for every point $m/h \in I$. The orderings represent comparative similarity relations. \end{itemize} As usual, a frame $\mathcal{F}_3$ is extended to a model $\mathcal{M}= \langle \mathcal{F}_3, \nu \rangle$ by adding a valuation function $\nu: Var \mapsto 2^{M \times H}$, which maps every proposition to the set of points at which the proposition is true. The semantics stays the same for propositions, Boolean connectives and modal operators as it was in OBTL. Naturally, the semantics for $\rightsquigarrow$ is defined as follows: $\\$ \noindent $\mathcal{M}, m/h \models \phi \rightsquigarrow \psi \mbox{ iff } \mathcal{M}, m'/h' \models \psi$ for every point $m'/h'$ that is $\lessdot_{m/h}$-maximal among the points satisfying $\mathcal{M}, m'/h' \models \phi$ $\\$ We can see that semantically $\mathcal{L}_{PBT}$ is nothing more than a fusion of the minimal counterfactual logic \textbf{P} and the Ockhamist branching time temporal logic OBTL. Consequently, if we combine the \textbf{P} axioms with the OBTL axioms (i.e. join the two sets of axioms and define the resulting logic as closed under the rules of inference of both systems), we will preserve completeness \cite{fine1996}. In order to show this explicitly, we need to redefine our frame in the following manner: \[ \mathcal{F}_{3'} = \langle W, \succ, R_{\Box}, \langle \lessdot_w \rangle_{w \in W} \rangle \] \begin{itemize} \item $W \neq \emptyset$ is a non-empty countable set of possible worlds or states \item $\succ \subseteq W \times W$ is a partial ordering on $W$ \item $R_{\Box} \subseteq W \times W$ is an equivalence (i.e. transitive, reflexive and symmetric) relation defined on $W$: for arbitrary $w_1, w_2 \in W$, if $w_1 R_{\Box}w_2$, then $w_2$ is historically accessible from $w_1$, i.e.
$w_1$ and $w_2$ are points sharing the same moment, but possibly contained in different histories \item $ \langle \lessdot_w \rangle_{w \in W}$ is a tuple of strict partial orderings defined on $W$, one for every possible state. Their definition and meaning stay the same as in \textbf{P}. \end{itemize} Now we can clearly see that the frame $\mathcal{F}_{3'}$ shares the same domain $W$ and the same tuple of relations $\langle \lessdot_w \rangle_{w \in W}$ as in the case of $\mathcal{F}_1$. We have added the $R_{\Box}$ and $\succ$ relations, corresponding to the OBTL frame $\mathcal{F}_2$: the only difference is that instead of defining histories as $\succ$-maximal subsets of $W$ and evaluating expressions on points as moment/history pairs, we have defined an equivalence relation on the points sharing the same moment. This won't affect our logic: we can present the family of sets of moment/history pairs sharing the same moment, $ \{\{ m/h \: | \: h \in H_m \} \} _{m \in M}$, as a partition of the set of all points $W$, and it is trivial to show that we can define an equivalence relation on $W$ corresponding to that partition -- and it is exactly $R_{\Box}$. After extending the frame $\mathcal{F}_{3'}$ to a model in the usual manner, the only semantically different definition we will see is that for $\Box \phi$ formulas. The other definitions stay the same, with the only change being that we use states $w \in W$ instead of moment/history pairs $ m/h \in (M \times H )$: \begin{equation*} \mathcal{M}, w \models \Box \phi \mbox{ iff } \forall w' \in W (w R_{\Box} w' \rightarrow \mathcal{M}, w' \models \phi) \end{equation*} Now we can observe that: \begin{enumerate} \item the $\mathcal{L}_{PBT}$ language is freely generated by the union of the signatures of $\textbf{P}$ and $OBTL$; \item the $\mathcal{L}_{PBT}$ model is obtained by combining relations defined on the same domain $W$, keeping the same evaluation function $\nu$. \end{enumerate} Thus, by the results proved in \cite{fine1996}, the logic $\mathcal{L}_{PBT} = \textbf{P} \bigoplus OBTL$, which is generated by the simple union of the axiom schemas and rules of inference of both $\textbf{P}$ and $OBTL$, is sound and complete with regard to $\mathcal{F}_{3'}$ frames (provided that $OBTL$ is complete with regard to $\mathcal{F}_2$ frames). No constraints on the interplay of the relations were imposed; hence, no multimodal axioms are needed. $\mathcal{L}_{PBT}$ is a conservative extension of both $\textbf{P}$ and $OBTL$. \section{Conclusion} We have presented a formal analysis of temporally sensitive counterfactual conditionals, using the fusion of the Ockhamist branching time temporal logic and the minimal counterfactual logic \textbf{P}. We have given an overview of the problems, both formal and metaphysical, which occur when combining counterfactuals with temporal logics, and explained our motivation to use exactly this formal apparatus. Nevertheless, we are aware of the variety of problems we have refrained from touching. We have not elaborated on formal constraints on comparative similarity relations and the corresponding axioms, nor on the philosophical motivations to impose them. The vast discussion on determinism versus indeterminism was only briefly mentioned as well. Moreover, we have paid no attention to the spatial aspect of the problem, considering only the temporal one. We have knowingly left these gaps in our overview in order to concentrate on a single aspect. This leaves the possibility to continue our work. \clearpage
\section{Introduction} Given a first order elliptic operator $D$ with symbol $c$, one can look for a zeroth-order perturbation ${\mathcal A}$ such that all $W^{1,2}$-solutions of the equation \begin{equation} \label{eq:1} D_s\xi\ :=\ (D+s{\mathcal A})\xi\ =\ 0 \end{equation} concentrate along submanifolds $Z_\ell$ as $\lvert s \rvert \to\infty$. In \cite{m} we gave a simple algebraic criterion satisfied by $c$ and ${\mathcal A}$ (see also Definition~\ref{defn:Main_Def_CP}) that ensures localization of solutions of $D_s\xi=0$ in the sense of Proposition~\ref{prop:concentration_Prop}. We also provided a general method for constructing examples of $c$ and ${\mathcal A}$ satisfying this criterion. This paper is a sequel to \cite{m}, proving two main results: The first is a Spectral Decomposition Theorem for operators $D_s$ (Theorem~\ref{Th:mainT}) that satisfy the concentration condition \eqref{eq:cond}, two transversality conditions, and two degeneration conditions. The Spectral Decomposition Theorem then states that: \begin{itemize} \item The eigenvectors corresponding to the low eigenvalues of $D^*_sD_s$ concentrate near the singular set of the perturbation bundle map ${\mathcal A}$ as $\lvert s \rvert \to\infty$. \item The eigenvalues of $D^*_sD_s$ corresponding to the eigensections that do not concentrate grow at least linearly in $\lvert s \rvert$ as $\lvert s\rvert\to\infty$. \item The components of the critical set of the perturbation bundle map ${\mathcal A}$ are submanifolds $Z_\ell$, and each determines an associated decomposition of the normal bundle of $Z_\ell$, giving a precise asymptotic formula for the solutions of \eqref{eq:1} when $\lvert s \rvert$ is large. \end{itemize} Our second main result is an Index Localization Theorem, which follows from the Spectral Decomposition Theorem. It gives an Atiyah-Singer type index formula describing how the index of $D$ decomposes as a sum of local indices associated with specific Dirac operators on suitable bundles over the submanifolds $Z_\ell$. The concentration condition \eqref{eq:cond} was previously found by I. Prokhorenkov and K.~Richardson in \cite{pr}. They classified the complex linear perturbations ${\mathcal A}$ that satisfy \eqref{eq:cond} but found few examples, all of which concentrate at points. They proved a spectral decomposition theorem and an index localization theorem in the case where the submanifolds $Z_\ell$ are finite unions of points. Our result generalizes that theorem to the more general case where the $Z_\ell$ are submanifolds. Using our general construction of perturbations ${\mathcal A}$ from spinors described in \cite{m}, the Index Localization formula yields some interesting conclusions about the index of $D$, described by local information along the zero set of a spinor. Part of our analysis of the asymptotics of the solutions when $s$ is large is similar to the work \cite{bl} of J.-M. Bismut and G. Lebeau. They use adiabatic estimates to examine in detail the case where ${\mathcal A}$ is a Witten deformation on a compact complex manifold with singular set a complex submanifold. \medskip This paper has nine sections. Section~\ref{sec:Concentration_Principle_for_Dirac_Operators} reviews the concentration condition, improves on some analytic consequences described in \cite{m} and states the main assumptions and results, which are proved in later sections.
An important part of the story relies on the vector bundles $S^{0\pm}_\ell$ and $S^{1\pm}_\ell$ over the submanifolds $Z_\ell$ that are introduced for the statement of Theorem~\ref{Th:mainT} and described in detail later. Section~\ref{Sec:structure_of_A_near_the_singular_set} is divided into two subsections. In the first subsection, we examine the perturbation term ${\mathcal A}$, regarding it as a section of a bundle of linear maps and requiring it to be transverse to certain subvarieties where the linear maps jump in rank. The transversality condition allows us to write down a Taylor series expansion of ${\mathcal A}$ in the normal directions along each connected component $Z_\ell$ of $Z_{\mathcal A}$. The 1-jet terms of ${\mathcal A}$, together with the assumptions of Theorem~\ref{Th:mainT}, establish the existence of the vector bundles $S^{0\pm}_\ell$ and $S^{1\pm}_\ell$ and describe their geometry in relation to the kernel bundles $\ker {\mathcal A}$ and $\ker {\mathcal A}^*$. In the second subsection we examine the geometry of the Clifford connection 1-form along the singular set and introduce a more appealing connection that fits better with the geometry of the bundles described earlier. In Section~\ref{sec:structure_of_D_sA_along_the_normal_fibers}, we examine the geometric structure of $D + s{\mathcal A}$ near the singular set $Z_{\mathcal A}$ of ${\mathcal A}$. The transversality Assumption allows us to write down a Taylor expansion of the coefficients of the operator $D$ in the normal directions along each connected component $Z_\ell$ of $Z_{\mathcal A}$. The expansion decomposes into the sum of a ``vertical'' operator $\slashed{D}_0$ that acts on sections along each fiber of the normal bundle, and a ``horizontal'' operator $\bar D^Z$ that acts on sections of the bundle $\pi^* S^+_\ell \to N$ over the normal bundle $N \to Z_\ell$ of each component $Z_\ell$ of $Z_{\mathcal A}$. These local operators are, in turn, used to describe the local models of the solutions that we use in the construction of the asymptotic formula for the eigenvectors of the global low eigenvalues when $s$ is sufficiently large. In Section~\ref{sec:Properties_of_the_operators_slashed_D_s_and_D_Z.} we review some well-known properties of the horizontal and vertical operators introduced in Section~\ref{sec:structure_of_D_sA_along_the_normal_fibers}. The vertical operator is well known to be associated with a Euclidean harmonic oscillator in the normal directions. In Section~\ref{sec:harmonic_oscillator} we introduce weighted Sobolev spaces related to the aforementioned harmonic oscillator. The mapping properties of the vertical operator allow us to build a bootstrap argument for the weights of the Gaussian approximate solutions. In Section~\ref{sec:The_operator_bar_D_Z_and_the_horizontal_derivatives} we examine the mapping properties of the horizontal operator with respect to these spaces. The technical analysis needed to prove the Spectral Separation Theorem is carried out in Section~\ref{sec:Separation_of_the_spectrum} and Section~\ref{sec:A_Poincare_type_inequality}. Using the horizontal operator $D^Z_+$ and a splicing procedure, we define a space of approximate eigensections, and work through the estimates needed to show that these approximate eigensections are uniquely corrected to true eigensections.
In Section~\ref{sec:Morse_Bott_example} we provide an example from Morse-Bott homology theory where the Euler characteristic of a closed manifold is written as a signed sum of the Euler characteristics of the critical submanifolds of a Morse-Bott function. Finally, we provide an appendix with the statements and proofs that are too long for the main text. In subsections \ref{subApp:tau_j_tau_a_frames}-\ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z} of the Appendix we describe the construction and properties of an adapted connection that extends the connection introduced in Section~\ref{Sec:structure_of_A_near_the_singular_set} over the total space of the normal bundle $N\to Z$ of the singular component $Z$. The adapted connection is used for the analysis carried out in Sections~\ref{sec:Separation_of_the_spectrum} and \ref{sec:A_Poincare_type_inequality}. The topology of the subvarieties that produce the singular set $Z_{\mathcal A}$ is the next step in the pursuit of a connection between the Index Localization formula and characteristic numbers of these subvarieties. We will pursue this objective in another article. A good reference for them is \cite{k2}. \section*{Acknowledgements} The author would like to express his gratitude to his advisor, Professor T.H. Parker, for introducing him to the subject, for his suggestions and for his continuous encouragement, and to \'{A}kos Nagy for numerous engaging conversations on the subject. \medskip \vspace{1cm} \setcounter{equation}{0} \section{Concentration Principle for Dirac Operators} \label{sec:Concentration_Principle_for_Dirac_Operators} This section reviews some very general conditions, described in \cite{m}, under which a family $D_s$ of deformed Dirac operators concentrates. Furthermore, it describes the necessary assumptions and states the two main theorems of the paper. \vspace{8mm} Let $(X,\, g_X)$ be a closed Riemannian manifold and $E$ a real Riemannian vector bundle over $X$. We say that $E$ admits a $\mathbb{Z}_2$-graded Clifford structure if there exist an orthogonal decomposition $E=E^0\oplus E^1$ and a bundle map $c: T^*X\to \mathrm{End}(E)$ so that $c(u)$ interchanges $E^0$ with $E^1$ and satisfies the Clifford relations, \begin{equation} \label{eq:Clifford_relations} c(u)c(v)+c(v)c(u)\ =\ -2 g_X(u, v)\ 1_E, \end{equation} for all $u, v\in T^*X$. We will often denote $u_\cdot = c(u)$. Let also $\nabla : \Gamma(E) \to \Gamma(T^*X\otimes E)$ be a connection compatible with the Clifford structure; by definition, $\nabla$ is compatible with the Clifford structure if it is compatible with the Riemannian metric of $E$, if it preserves the $\mathbb{Z}_2$ grading and if it satisfies the parallelism condition $\nabla c = 0$. The composition \begin{equation} \label{eq:Def_Dirac} D = c\circ \nabla : C^\infty(X; E^0)\rightarrow C^\infty(X; E^1), \end{equation} then defines a Dirac operator. Given a real bundle map ${\mathcal A}: E^0\to E^1$ we form the family of operators \begin{align*} D_s=D+s{\mathcal A}, \end{align*} where $ s\in\mathbb{R}$. Furthermore, using the Riemannian metrics on the bundles $E^0$ and $E^1$ and the Riemannian volume form $d\mathrm{vol}^X$, we can form the adjoint ${\mathcal A}^*$ and the formal $L^2$ adjoint $D_s^*= D^*+ s{\mathcal A}^*$ of $D_s$.
We recall from \cite{m} the definition of a concentrating pair: \begin{defn}[Concentrating pairs] \label{defn:Main_Def_CP} In the above context, we say that $(c, {\mathcal A})$ is a {\em concentrating pair} if it satisfies the algebraic condition \begin{align} \label{eq:cond} {\mathcal A}^*\circ c(u) = c(u)^*\circ {\mathcal A}, \qquad \text{for every $u\in T^*X$.} \end{align} \end{defn} It is proven in \cite[Lemma 2.2]{m} that $(c, {\mathcal A})$ being a concentrating pair is equivalent to the differential operator \begin{equation} \label{eq:bundle_cross_terms} B_{\mathcal A} = D^*\circ {\mathcal A} + {\mathcal A}^* \circ D, \end{equation} being a bundle map. In the analysis of the family $D+s{\mathcal A}$, a key role is played by the {\em singular set of ${\mathcal A}$}, defined as \[ Z_{\mathcal A} : = \left\{p\in X:\, \ker {\mathcal A}(p)\ne 0 \right\}, \] that is, the set where ${\mathcal A}$ fails to be injective. Since the bundles $E^0$ and $E^1$ have the same rank, a bundle map ${\mathcal A}:E^0\to E^1$ will necessarily have index zero. However, maps satisfying condition \eqref{eq:cond} are far from generic, and they may have a nontrivial kernel bundle everywhere, in which case the preceding definition would give $Z_{\mathcal A} = X$. In that situation the kernel bundles $\ker{\mathcal A} \to X$ and $\ker{\mathcal A}^* \to X$ will be $\text{Cl}(T^*X)$-submodules of $E^0$ and $E^1$ over $X$, and the critical set $Z_{\mathcal A}$ is defined instead as the jumping locus of the generic dimension of the kernel in $X$. Note that the dimensions of the kernel and cokernel bundles of ${\mathcal A}$ will jump by the same amount in each of the connected components of $Z_{\mathcal A}$. In this case we can consider the kernel subbundles of $E^0$ and $E^1$ over $X\setminus Z_{\mathcal A}$, and we will assume that we can extend them to subbundles of constant rank over the whole of $X$. The Clifford compatible connection can be chosen to preserve these subbundles. Therefore the Dirac equation can be split into the parts of $D$ and ${\mathcal A}$ living in the orthogonal complements of these bundles, reducing the problem to the case where ${\mathcal A} :E^0 \to E^1$ is an isomorphism away from the set $Z_{\mathcal A}$. Fix $\delta>0$, set $Z_{\mathcal A}(\delta)$ to be the $\delta$-neighborhood of $Z_{\mathcal A}$ on which the distance function is well defined, and let \[ \Omega(\delta)=X\setminus Z_{\mathcal A}(\delta) \] be its complement. The following proposition from \cite{m} shows the importance of the concentration condition \eqref{eq:cond}. \begin{prop}[Concentration Principle] \label{prop:concentration_Prop} For each $C>0$ there exist $C'=C'(\delta, {\mathcal A}, C)>0$ and $s_0 = s_0(\delta, {\mathcal A})>0$ such that whenever $\lvert s\rvert > s_0$ and $\xi\in C^{\infty}(E)$ is a section with $L^2$ norm 1 satisfying $\|D_s\xi\|^2_{L^2(X)}\leq C \lvert s \rvert $, one has the estimate \begin{equation} \label{eq:concentration_estimate} \int_{\Omega(\delta)} \lvert\xi\rvert^2\, d\mathrm{vol}^X\ <\frac{C'}{\lvert s\rvert}. \end{equation} \end{prop} We have the following improvement: \begin{prop}[Improved concentration principle] \label{prop:improved_concentration_Prop} Let $\ell \in \mathbb{N}_0,\ 0<\alpha<1$ and $C_1>0$, and choose $\delta>0$ small enough so that $100(\ell+1) \delta$ is a radius of a tubular neighborhood on which the distance from $Z_{\mathcal A}$ is defined.
Then there exist $s_0 = s_0(\delta, C_1, c, {\mathcal A}),\ \epsilon = \epsilon(\delta, \ell, \alpha, {\mathcal A})$ and $C'=C'(\delta, \ell, \alpha, {\mathcal A})$, all positive numbers, such that whenever $\lvert s \rvert > s_0$ and $\xi\in C^{\infty}(X;E)$ is a section satisfying $D_s^* D_s \xi = \lambda_s \xi$ with $\lambda_s < C_1 \lvert s\rvert$, one has the estimate \begin{equation} \label{eq:improved_concentration_estimate} \|\xi\|_{C^{\ell, \alpha}(\Omega(\delta))}\ < C' \lvert s\rvert^{-\tfrac{\ell}{2}} e^{-\epsilon\lvert s\rvert}\|\xi\|_{L^2(X)}. \end{equation} \end{prop} The proof is located in Appendix subsection~\ref{subApp:various_analytical_proofs}. \begin{rem} \begin{enumerate} \item The dependence of the constant $C'$ in \eqref{eq:concentration_estimate} on $\delta$ can be made explicit when one has an estimate of the form $\lvert{\mathcal A}\xi\rvert^2 \geq r^a \lvert\xi\rvert^2$ on a sufficiently small tubular neighborhood of $Z_{\mathcal A}$, where $r$ is the distance from $Z_{\mathcal A}$ and $a>0$. Assumption~\ref{Assumption:normal_rates} below gives such an estimate. \item Propositions~\ref{prop:concentration_Prop} and \ref{prop:improved_concentration_Prop} are applicable to general concentrating pairs $(\sigma, {\mathcal A})$ where the symbol $\sigma$ is merely elliptic. \end{enumerate} \end{rem} Propositions~\ref{prop:concentration_Prop} and \ref{prop:improved_concentration_Prop} also hold for the adjoint operator $D^*_s$ because of the following lemma: \begin{lemma} \label{lem:lr} The concentration condition \eqref{eq:cond} is equivalent to \begin{equation} \label{eq:cond_version2} c(u)\circ {\mathcal A}^* = {\mathcal A}\circ c(u)^*\ \quad \forall u\in T^*X. \end{equation} Hence $D + s{\mathcal A}$ concentrates if and only if the adjoint operator $D^* + s{\mathcal A}^*$ does. \end{lemma} By Proposition~\ref{prop:concentration_Prop}, the eigensections $\xi$ satisfying $D_s^*D_s \xi = \lambda(s) \xi$ with $\lambda (s)=O(s)$ concentrate around $Z_{\mathcal A}$ for large $s$. An interesting question arises as to what extent these localized solutions can be reconstructed using local data obtained from $Z_{\mathcal A}$. The answer will be given in Theorem~\ref{Th:mainT}. We start by describing the assumptions we will use throughout the paper in proving the main theorem. First, we impose conditions on ${\mathcal A}$ that will guarantee that the connected components $Z_\ell$ of the singular set $Z_{\mathcal A}$ are submanifolds and that the rank of ${\mathcal A}$ is constant on each $Z_\ell$. For this, we regard ${\mathcal A}$ as a section of a subbundle ${\mathcal L}$ of $\mathrm{Hom}(E^0,E^1)$ as in the following diagram: \begin{align*} \label{diag:Ldiagram} \xymatrix{ \ \ {\mathcal L}\ \ \ar@{^{(}->}[r] \ar[d] & \mathrm{Hom}(E^0, E^1) \supseteq {\mathcal F}^l\\ X \ar@/^1pc/[u]^{{\mathcal A}} & } \end{align*} Here ${\mathcal L}$ is a bundle that parametrizes some family of linear maps ${\mathcal A}:E^0\to E^1$ satisfying the concentration condition \eqref{eq:cond} for the operator \eqref{eq:Def_Dirac}; that is, each $A\in{\mathcal L}$ satisfies $A^*\circ c(u) = c(u)^*\circ A$ for every $u\in T^*X$. Inside the total space of the bundle $\mathrm{Hom}(E^0, E^1)$, the set of linear maps with $l$-dimensional kernel is a submanifold $\mathcal{F}^l$; because $E^0$ and $E^1$ have the same rank, this submanifold has codimension~$l^2$. In all of our examples in \cite{m}, the set ${\mathcal L}\cap {\mathcal F}^l$ is a manifold for every $l$.
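\medskip \noindent \textbf{A toy illustration.} Before stating the assumptions, the following elementary one-dimensional illustration may help orient the reader; it is essentially the circle version of the Witten deformation, it is ours rather than taken from \cite{m}, and none of the results below depend on it. Take $X=S^1$ with coordinate $\theta$, let $E^0=E^1=\underline{\mathbb{R}}$ be trivial real line bundles, and let $c(d\theta)$ interchange $E^0$ and $E^1$ with $c(d\theta)^2=-1$, so that \eqref{eq:Clifford_relations} holds and $D=\tfrac{d}{d\theta}$. Any $a\in C^\infty(S^1)$ then defines a bundle map ${\mathcal A}$ acting as multiplication by $a$, and ${\mathcal A}$ satisfies the concentration condition \eqref{eq:cond}: since $c(d\theta)^*=-c(d\theta)$, both sides of \eqref{eq:cond} reduce to multiplication by $a$ on $E^0$. Here $Z_{\mathcal A}=a^{-1}(0)$ and \[ D_s^*D_s\ =\ -\frac{d^2}{d\theta^2} + s^2a^2 - s a', \] so the potential term $s^2a^2$ forces sections with $\|D_s\xi\|^2_{L^2}\leq C\lvert s\rvert$ to concentrate near the zeros of $a$, in accordance with Proposition~\ref{prop:concentration_Prop}. When the zeros of $a$ are non-degenerate, for $s>0$ the low-eigenvalue eigensections of $D_s^*D_s$ localize like Gaussians at the zeros where $a'>0$, and those of $D_sD_s^*$ at the zeros where $a'<0$; the signed count of such zeros on $S^1$ vanishes, consistent with $\mathrm{index\,}D=0$ and the Index Localization Theorem below.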
\bigskip \noindent\textbf{Transversality Assumptions:} \begin{enumerate} \addtocounter{enumi}{0} \item \label{Assumption:transversality1} As a section of ${\mathcal L}$, ${\mathcal A}$ is transverse to ${\mathcal L}\cap {\mathcal F}^l$ for every $l$, and these intersections occur at points where ${\mathcal L}\cap {\mathcal F}^l$ is a manifold. \item \label{Assumption:transversality2} $Z_\ell$ is closed for all $\ell$. \end{enumerate} As a consequence of the Implicit Function Theorem, ${\mathcal A}^{-1}({\mathcal L}\cap {\mathcal F}^l)\subseteq X$ will be a submanifold of $X$ for every $l$. The singular set decomposes as a union of these critical submanifolds and, further, as a union of connected components $Z_\ell$: \begin{equation} \label{eq:def_Z_l} Z_{\mathcal A}\, =\, \bigcup_l {\mathcal A}^{-1}({\mathcal L}\cap {\mathcal F}^l)\, =\,\bigcup_\ell Z_\ell. \end{equation} The bundle map ${\mathcal A}$ has constant rank along each $Z_\ell$, and $\ker {\mathcal A}$ and $\ker {\mathcal A}^*$ are well defined bundles over $Z_\ell$. Fix one critical $m$-dimensional submanifold $Z:= Z_\ell$ with normal bundle $\pi: N \to Z$. As explained in Appendix~\ref{App:Fermi_coordinates_setup_near_the_singular_set}, for small $\varepsilon>0$ the exponential map identifies an open tubular neighborhood $B_Z(2\varepsilon)\subset X$ of $Z$ with the neighborhood of the zero section $\mathcal{N}:=\mathcal{N}_\varepsilon \subset N$. There are principal frame bundle isomorphisms and induced bundle isomorphisms, \begin{equation} \label{eq:exp_diffeomorphism} I:= T\exp: TN\vert_{\mathcal{N}} \to TX\vert_{B_Z(2\varepsilon)}\quad \text{and} \quad {\mathcal I}: \tilde{E} \to E\vert_{B_Z(2\varepsilon)}. \end{equation} Note that when restricted to the zero section of $N$, these bundle maps become identities. Let $S^0$ and $S^1$ denote the bundles obtained by parallel translating $\ker {\mathcal A}\to Z$ and $\ker {\mathcal A}^* \to Z$ along the rays emanating from the core set of $\mathcal{N}$, with corresponding bundles of orthogonal projections $P^i: E^i\vert_Z \to S^i\vert_Z,\, i =0,1$. The next two assumptions concern the infinitesimal behavior of ${\mathcal A}$ near each $Z \subset Z_{\mathcal A}$: \bigskip \noindent \textbf{Non-degeneracy Assumption:} \begin{enumerate} \addtocounter{enumi}{2} \item \label{Assumption:normal_rates} Set $\tilde {\mathcal A} = {\mathcal I}^{-1}{\mathcal A} {\mathcal I}$ and let $\bar{\mathcal A}$ be the restriction of $\tilde{\mathcal A}$ to the zero section. We require \[ \left.\tilde{\mathcal A}^* \tilde{\mathcal A} \right\vert_{S^0} = r^2\left(Q^0 + \left.\frac{1}{2}\bar{\mathcal A}^* \bar A_{rr}\right\vert_{S^0} \right)+ O(r^3), \] where $r$ is the distance function from $Z$ and $Q^0$ is a positive-definite symmetric endomorphism of the bundle $S^0$. \end{enumerate} Comparing with the expansion of Proposition~\ref{prop:properties_of_perturbation_term_A}\eqref{prop:Taylor_expansions_ of perturbation_term} for $\left.\tilde{\mathcal A}^*\tilde{\mathcal A}\right\vert_{S^0}$, the condition replaces the $\text{Cl}_n^0(TX\vert_Z)$-invariant term $\left.\tfrac{x_\alpha x_\beta}{r^2} (A^*_\alpha A_\beta)\right\vert_{S^0}$ with a matrix $Q^0$.
The statement of the non-degeneracy assumption is twofold: 1) For every $v = v_\alpha e_\alpha\in N,\ \lvert v\rvert=1$, by Schur's lemma there are $\text{Cl}_n^0(TX\vert_Z)$-invariant decompositions \[ S^0\vert_Z = \bigoplus^{\ell(v)}_{k=1} S_{k,v}\quad \text{and}\quad \left.v_\alpha v_\beta (A^*_\alpha A_\beta)\right\vert_{S^0} = \sum_k \lambda_k^2(v) P_{S_{k, v}}, \] for some $\lambda_k(v)\in [0,\infty)$, where $P_{S_{k, v}}:S^0\vert_Z \to S_{k,v}$ is a $\text{Cl}_n^0(TX\vert_Z)$-invariant orthogonal projection. The assumption guarantees that the decomposition does not depend on the radial direction $v\in N,\ \lvert v\rvert=1$. 2) $Q^0$ has trivial kernel as an element of $\mathrm{End}(S^0)$. An eigenvalue of the section $Q^0: Z \to \mathrm{End}(S^0\vert_Z)$ is a function $\lambda^2:Z \to (0,\infty)$ such that the section $Q^0 - \lambda^2 1_{S^0\vert_Z}: Z \to \mathrm{End}(S^0\vert_Z)$ has image consisting solely of singular matrices. By \cite[Ch.~II, Th.~6.8, p.~122]{k} we can always choose a family of eigenvalues (possibly with repetitions) that are smooth functions $\{\lambda_\ell :Z \to (0, \infty)\}_{\ell=1}^d$. In Lemma~\ref{lem:basic_properties_of_M_v_w} it is proven that Assumption~\ref{Assumption:normal_rates} implies an analogous expansion for the bundle map, \[ \left.\tilde{\mathcal A}\tilde{\mathcal A}^*\right\vert_{S^1} = r^2\left(Q^1 + \left.\frac{1}{2}\bar{\mathcal A} \bar A^*_{rr}\right\vert_{S^1} \right)+ O(r^3). \] From equation \eqref{eq:Q0_vs_Q1} of that lemma it also follows that $Q^1: S^1\vert_Z \to S^1\vert_Z$ is a $\text{Cl}^0(T^*X\vert_Z)$-invariant map and that the matrices $Q^i,\ i=0,1$ have the same spectrum. By Schur's lemma, we have a decomposition \begin{equation} \label{eq:eigenspaces_of_Q_i} S^i\vert_Z = \bigoplus_\ell S^i_\ell, \end{equation} into the $\text{Cl}^0(T^*X\vert_Z)$-invariant eigenspaces of the distinct eigenvalues $\{\lambda_\ell^2\}_\ell$ of $Q^i$. Our final assumption guarantees that the eigenbundles of this decomposition have constant rank: \newpage \noindent \textbf{Stable degenerations:} \begin{enumerate} \addtocounter{enumi}{3} \item \label{Assumption:stable_degenerations} We require that every two members of the family $\{\lambda_\ell:Z \to (0,\infty)\}_{\ell=1}^d$ are either identical or have graphs that do not intersect. \end{enumerate} The summands of the decomposition \eqref{eq:eigenspaces_of_Q_i} are also $\text{Cl}^0(T^*X\vert_Z)$-submodules. By Assumption~\ref{Assumption:stable_degenerations}, the graphs of the eigenvalues $\{\lambda_\ell\}_\ell$ do not intersect, and therefore the family of vector spaces $\{(S_\ell^i)_p\}_{p \in Z}$ has constant rank and forms a vector bundle over $Z$, for every $\ell$ and every $i=0,1$. We introduce a $\text{Cl}^0(T^*Z)$-invariant section of $\mathrm{End}(S^i\vert_Z)$ defined by the composition, \begin{equation} \label{eq:IntroDefCp} \xymatrix{ C^i: S^i\vert_Z \ar[r]^-{\nabla{\mathcal A}^i} & T^*X\vert_Z \otimes E^{1-i}\vert_Z \ar[r]^-{\iota_N^* \otimes P^{1-i}} & N^* \otimes S^{1-i}\vert_Z \ar[r]^-{- c} & S^i\vert_Z }, \end{equation} where $\iota_N : N \hookrightarrow TX\vert_Z$ is the inclusion and ${\mathcal A}^0={\mathcal A},\ {\mathcal A}^1={\mathcal A}^*$. We set $C = C^0\oplus C^1$. It is proven in Proposition~\ref{prop:properties_of_compatible_subspaces} that $C^i$ is a symmetric operator that respects the decompositions \eqref{eq:eigenspaces_of_Q_i}.
It is further proven that $C^i$ has eigenvalues $\{(n-m-2 k)\lambda_\ell\}_{k=0}^{n-m}$ and that the corresponding eigenspaces are $\text{Cl}^0(T^*Z)$-modules of constant rank. It is the eigenspaces with eigenvalues $\{\pm (n-m) \lambda_\ell\}_\ell$ that are important: \begin{defn} \label{defn:IntroDefSp} For each component $Z$ of $Z_{\mathcal A}$ with $S\vert_Z= \ker{\mathcal A} \oplus \ker{\mathcal A}^*$ and every $k\in \{0, \dots, n-m\}$, let $S^i_{\ell k}$ denote the eigenspace of $C^i$ with eigenvalue $(n-m - 2k) \lambda_\ell$ for $i=0,1$, and set $S_{\ell k } = S^0_{\ell k } \oplus S^1_{\ell k}$. In particular, when $k=0$ or $k = \dim N$, we define \[ S^{i\pm}_\ell = \{\text{eigenspace of $C^i$ with eigenvalue $\pm (n-m) \lambda_\ell$}\}, \] and set \[ S^{i\pm} = \bigoplus_\ell S^{i\pm}_\ell, \qquad S^\pm_\ell = S_\ell^{0\pm} \oplus S_\ell^{1\pm}, \qquad S^{\pm} := S^{0\pm} \oplus S^{1\pm}, \] for every $i=0,1$, with corresponding bundles of orthogonal projections $P^{i\pm},\ P^{i\pm}_\ell$, etc., where the projection indices follow the same placement as the indices of the spaces they project onto. \end{defn} In particular, $S^\pm=S^{0\pm} \oplus S^{1\pm}$ are bundles of $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z)$-modules over $Z$, with Clifford multiplication defined as the restriction \[ c_Z: = c: T^*Z \otimes S^{i\pm} \to S^{(1-i)\pm},\ i=0,1. \] They possess a natural connection, compatible with the Clifford structure, given by \[ \nabla^\pm:=\sum_\ell P^{i\pm}_\ell\circ \nabla\vert_{TZ} : C^\infty(Z; S^{i\pm}) \to C^\infty(Z; T^*Z\otimes S^{i\pm}). \] \begin{defn} \label{defn:Dirac_operator_component} On each $m$-dimensional component $Z\subset Z_{\mathcal A}$ we have a triple $( S^\pm, \nabla^\pm, c_Z) $. We define a Dirac operator, \[ D^Z_\pm \ :=\ c_Z\circ \nabla^\pm : C^\infty(Z; S^{0\pm}) \to C^\infty(Z; S^{1\pm}). \] Its formal $L^2$-adjoint is denoted by $D^{Z*}_\pm$. Finally, let ${\mathcal B}_{i\pm}^Z: S^{i\pm} \to S^{(1-i)\pm}$, \[ {\mathcal B}_{i\pm}^Z:= \begin{cases} \sum_{\ell, \ell'} C_{\ell, \ell'} P^{1\pm}_\ell\circ\left( {\mathcal B}^0 + \frac{1}{2(\lambda_\ell + \lambda_{\ell'})} \sum_\alpha \bar A_{\alpha\alpha} \right) \circ P^{0\pm}_{\ell'}, & \quad \text{when $i=0$}, \\ \sum_{\ell, \ell'} C_{\ell, \ell'} P^{0\pm}_\ell\circ\left( {\mathcal B}^1 + \frac{1}{2(\lambda_\ell + \lambda_{\ell'})} \sum_\alpha \bar A_{\alpha\alpha}^* \right) \circ P^{1\pm}_{\ell'}, & \quad \text{when $i=1$}, \end{cases} \] where the ${\mathcal B}^i$ are defined in the expansions of Lemma~\ref{lem:Dtaylorexp} and \[ C_{\ell, \ell'} = (2\pi)^{\tfrac{n-m}{2}} \frac{(\lambda_\ell \lambda_{\ell'})^{\tfrac{n-m}{4}}}{ (\lambda_\ell + \lambda_{\ell'})^{\tfrac{n-m}{2}}}. \] \end{defn} \vspace{4mm} The main result of this paper is a converse of Proposition~\ref{prop:concentration_Prop}. Recall that Proposition~\ref{prop:concentration_Prop} shows that, for each $C$, the eigensections $\xi$ satisfying $D_s^*D_s \xi = \lambda(s) \xi$ with $\lvert\lambda(s)\rvert\le C$ concentrate around $\bigcup_\ell Z_\ell$ for large $\lvert s\rvert$. Let $\mathrm{span}^0(s, K)$ be the span of the eigenvectors corresponding to eigenvalues $\lambda(s) \leq K$ of $D^*_s D_s$, and denote the dimension of this space by $N^0(s, K)$. Similarly, denote by $\mathrm{span}^1(s, K)$ and $N^1(s,K)$ the span and dimension of the corresponding eigenvectors for the operator $D_s D^*_s$. The following Spectral Separation Theorem shows that these localized solutions can be reconstructed using local data obtained from $Z_{\mathcal A}$.
\begin{spectrumseparationtheorem*} \label{Th:mainT} Suppose that $D_s=D+s{\mathcal A}$ satisfies Assumptions~\ref{Assumption:transversality1}-\ref{Assumption:stable_degenerations} above. Then there exist $\lambda_0>0$ and $s_0>0$ with the following significance: For every $s>s_0$, there exist vector space isomorphisms \[ \mathrm{span}^0(s, \lambda_0) \,\overset{\cong} \longrightarrow\,\bigoplus_{Z\in \mathrm{Comp}(Z_{\mathcal A})} \ker (D^Z_+ + {\mathcal B}_{0+}^Z), \] and \[ \mathrm{span}^1(s , \lambda_0) \overset{\cong}\longrightarrow\bigoplus_{Z\in \mathrm{Comp}(Z_{\mathcal A})} \ker (D^{Z*}_+ + {\mathcal B}^Z_{1+}), \] where $Z$ runs through the set $\mathrm{Comp}(Z_{\mathcal A})$ of connected components of the singular set $Z_{\mathcal A}$. Furthermore, $N^i (s, \lambda_0) = N^i(s, C_1 s^{-1/2})$ for every $s> s_0$ and every $i=0,1$, where $C_1$ is the constant of Theorem~\ref{Th:hvalue} \eqref{eq:est1}. Also, for every $s< -s_0$, the isomorphisms above hold after replacing $\ker (D^Z_+ + {\mathcal B}_{0+}^Z)$ with $\ker (D^Z_- + {\mathcal B}_{0-}^Z)$ and $ \ker (D^{Z*}_+ + {\mathcal B}_{1+}^Z)$ with $\ker (D^{Z*}_- + {\mathcal B}_{1-}^Z)$, and $N^i (s, \lambda_0) = N^i(s, C_1 \lvert s\rvert^{-1/2}),\ i=0,1$. \end{spectrumseparationtheorem*} As a corollary we get the following localization of the index: \begin{indexlocalizationtheorem*} \label{Th:index_localization_theorem} Suppose that $D_s=D+s{\mathcal A}$ satisfies Assumptions~\ref{Assumption:transversality1}-\ref{Assumption:stable_degenerations} above. Then the index of $D$ can be written as a sum of local indices as \[ \mathrm{index\,} D = \sum_{Z\in \mathrm{Comp}(Z_{\mathcal A})} \mathrm{index\,} D^Z_\pm. \] \end{indexlocalizationtheorem*} \begin{proof} By the Spectral Separation Theorem there exist $\lambda_0>0$ and $s_0>0$ so that for every $\lvert s\rvert>s_0$ \begin{align*} \mathrm{index\,} D_s &= \dim \ker D_s - \dim \ker D_s^* \\ &= \dim \ker D^*_sD_s - \dim \ker D_sD_s^* \\ &= \dim \mathrm{span}^0(s, \lambda_0) - \dim \mathrm{span}^1(s, \lambda_0) \\ &= \sum_{Z\in \mathrm{Comp}(Z_{\mathcal A})}\mathrm{index\,} D^Z_\pm, \end{align*} where the third equality holds because $D^*_sD_s$ and $D_sD_s^*$ have the same spectrum and their eigenspaces corresponding to a common nonzero eigenvalue are isomorphic. We also use the fact that the index remains unchanged under compact perturbations, so that $\mathrm{index\,} D = \mathrm{index\,} D_s$. This finishes the proof. \end{proof} \begin{rem} \begin{enumerate} \item Due to the conclusion of its last sentence, the Spectral Separation Theorem is a direct generalization of the localization theorem proved in \cite{pr}. \item By Lemma~\ref{lemma:whew} of the Appendix, the term $\sum_\alpha \bar A_{\alpha\alpha}$ in Definition~\ref{defn:Dirac_operator_component} can be assumed to be zero without affecting the generality of the statement of Theorem~\ref{Th:mainT}. \item The non-degeneracy Assumption~\ref{Assumption:normal_rates} can be weakened for the proof of the Spectral Separation Theorem. It is included to make the construction of the approximation simpler. In general, one has to examine the various normal vanishing rates of the eigenvalues of ${\mathcal A}^*{\mathcal A}$ and correct the expansions in Corollary~\ref{cor:taylorexp} to higher order in $r$. The re-scaling $\{w_\alpha= \sqrt{s} x_\alpha\}_\alpha$ required to change the problem into a regular one will then change accordingly.
In particular, the bundles of solutions examined here would have a layer structure corresponding to those rates, and possibly a jumping locus in their dimension. \end{enumerate} \end{rem} The proof of the Spectral Separation Theorem relies on the existence of a gap in the spectrum of the operators $D^*_s D_s$ and $D_s D^*_s$ for $\lvert s \rvert \gg 0$. The proof of the existence of this gap relies on a splicing construction of approximate eigenvectors associated to the lower part of the spectrum. The inequalities of Theorem~\ref{Th:hvalue} are used to prove this association. In particular, inequality \eqref{eq:est2} is an analogue of a Poincar\'e inequality, and its proof is the heart of the argument. By Lemma~\ref{lemma:local_implies_global}, the proof of inequality \eqref{eq:est2} is essentially local in nature, and one has to study the perturbative behavior of the density function $\lvert D_s \eta\rvert^2\, d\mathrm{vol}^X$ for sections $\eta$ supported in tubular neighborhoods of a component $Z$ of the critical set $Z_{\mathcal A}$. Using the maps \eqref{eq:exp_diffeomorphism}, the metric tensor $g_X$ and the volume form $d\mathrm{vol}^X$ pull back to $g = \exp^* g_X$ and $d\mathrm{vol}^\mathcal{N} = \exp^* d\mathrm{vol}^X$ on $TN\vert_\mathcal{N}$, respectively. \begin{defn} \label{eq:tilde_D} Set $\tilde D$ for the Dirac operator ${\mathcal I}^{-1} D {\mathcal I}$, $\tilde{\mathcal A}$ for ${\mathcal I}^{-1} {\mathcal A} {\mathcal I}$ and $\tilde c$ for ${\mathcal I}^{-1} \circ c \circ ((I^*)^{-1} \otimes {\mathcal I})$. Also let $\tilde\nabla^{T{\mathcal N}}$ and $\tilde\nabla^{\tilde E}$ denote the corresponding connections induced by the Levi-Civita connection on $TX$ and the Clifford connection on $E$, respectively. \end{defn} The association \[ C^\infty(B_Z(2\varepsilon); E\vert_{B_Z(2\varepsilon)}) \to C^\infty( \mathcal{N} ; \tilde E),\ \eta \mapsto \tilde\eta:= {\mathcal I}^{-1} \eta \circ \exp, \] satisfies $\widetilde{D \eta} = \tilde{D} \tilde \eta$. Also \[ \int_{B_Z(2\varepsilon)} \lvert D_s \eta\rvert^2\, d \mathrm{vol}^X = \int_\mathcal{N} \lvert\tilde D_s \tilde \eta\rvert^2\, d \mathrm{vol}^\mathcal{N} = \int_\mathcal{N}\lvert\tau^{-1}(\tilde D_s \tilde \eta)\rvert^2\, d\mathrm{vol}^\mathcal{N}, \] where in the last equality we used the parallel transport map $\tau$ introduced in Appendix~\ref{App:Taylor_Expansions_in_Fermi_coordinates} \eqref{eq:parallel_transport_map}. Recall the volume element $d\mathrm{vol}^N$ introduced in Appendix~\ref{subApp:The_expansion_of_the_volume_from_along_Z}. By \eqref{eq:density_comparison}, the density $d\mathrm{vol}^\mathcal{N}$ can be replaced by the density $d\mathrm{vol}^N$, and we have to prove the inequalities of Theorem~\ref{Th:hvalue} for the density function $\lvert\tau^{-1}(\tilde D_s \tilde \eta)\rvert^2\, d\mathrm{vol}^N$. We study the perturbative behavior of the operator $\tau^{-1}\circ\tilde D_s : C^\infty(\mathcal{N}; \pi^*(E^0\vert_Z)) \to C^\infty(\mathcal{N}; \pi^*(E^1\vert_Z))$ in two parts: in Section~\ref{Sec:structure_of_A_near_the_singular_set} we analyze the infinitesimal data of the perturbation term $\tilde {\mathcal A}$, and in Section~\ref{sec:structure_of_D_sA_along_the_normal_fibers} we analyze the perturbative behavior of $\tau^{-1}\circ\tilde D_s$.
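\medskip \noindent \textbf{The toy illustration, continued.} In the one-dimensional illustration given before the Transversality Assumptions (which, we stress again, is ours and purely expository), this entire analysis can be carried out by hand. Near a zero $p$ of $a$ with $\lambda = a'(p)>0$, write $x$ for the normal coordinate; for $s>0$ the re-scaling $w = \sqrt{s}\, x$ gives \[ D_s^*D_s\ =\ s\left(-\frac{d^2}{dw^2} + \lambda^2 w^2 - \lambda\right) + O(\sqrt{s}\,), \] i.e.\ $s$ times a harmonic oscillator shifted so that its ground state $e^{-\lambda w^2/2} = e^{-s\lambda x^2/2}$ has eigenvalue zero. The ground state is the Gaussian local model for the concentrating eigensections, while the excited states have eigenvalues $2k\lambda s,\ k\geq 1$: this is the spectral gap of order $s$ whose existence, in the general setting, is established by the splicing construction outlined above.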
\vspace{1cm} \section{Structure of \texorpdfstring{${\mathcal A}$}{} near the singular set} \label{Sec:structure_of_A_near_the_singular_set} In proving the Spectral Separation Theorem we will have to analyze the geometry of the operator \[ \tilde D_s = \tilde c\circ \tilde\nabla + s \tilde{\mathcal A}: C^\infty(\mathcal{N}; \tilde E^0)\rightarrow C^\infty(\mathcal{N};\tilde E^1), \] near the singular set $Z$, that is, the corresponding connected component of $Z_{\mathcal A}$, with tubular neighborhood $\mathcal{N}\subset N$. The idea is to expand into Taylor series along the normal directions of $Z$. \bigskip \subsection{The data from the 1-jet of \texorpdfstring{$\tilde{\mathcal A}$}{} and \texorpdfstring{$\tilde{\mathcal A}^*\tilde{\mathcal A}$}{} along \texorpdfstring{$Z$}{}.} Fix a connected component $Z$ of the critical set $Z_{\mathcal A}$, an $m$-dimensional submanifold of the $n$-dimensional manifold $X$. Recall that $\pi:N \rightarrow Z$ is the normal bundle of $Z$ in $X$ and set $S(N) \rightarrow Z$ to be the normal sphere bundle. Our first task is to understand the perturbation term $\tilde{\mathcal A}$ on a tubular neighborhood $\mathcal{N}$ of $Z$. Note that $\tilde {\mathcal A}\vert_Z = {\mathcal A}\vert_Z$. \begin{defn} \label{defn:kernel_bundles} We introduce the rank-$d$ subbundles, \[ S_p^0:=\ker {\mathcal A}_p\subset E^0_p,\qquad S^1_p : =\ker {\mathcal A}^*_p\subset E^1_p \qquad \text{and}\qquad S_p = S^0_p \oplus S^1_p, \] as $p$ runs over $Z$, and extend them by parallel transport, along the normal radial geodesics of $Z$, to bundles $S^0, S^1 \to \mathcal{N}$ defined over a sufficiently small tubular neighborhood $\mathcal{N}$ of $Z$. We denote by $P^i$ and $P_\perp^i$ the orthogonal projections onto $S^i$ and its complement $(S^i)^\perp$, respectively. \end{defn} Since $\mathrm{index\,} {\mathcal A}=0$, the bundles $S^i,\ i=0,1$ have equal rank, and a consequence of the concentration condition \eqref{eq:cond} is \begin{align} \label{eq:spin_action_on_S} u_\cdot S^i= S^{1-i}, \qquad u_\cdot (S^i)^\perp = (S^{1-i})^\perp, \end{align} for every $i=0,1$ and every $u\in T^*X\vert_\mathcal{N}$. Therefore the bundles $S^0\oplus S^1$ and $(S^0)^\perp \oplus (S^1)^\perp$ are both $\mathbb{Z}_2$-graded $\text{Cl}^0(T^*X\vert_Z)$-modules. By differentiating relations \eqref{eq:cond} and \eqref{eq:cond_version2} we also get \begin{align} \label{eq:dcond} v_\cdot \nabla_u {\mathcal A} = - \nabla_u{\mathcal A}^* v_\cdot \qquad \mbox{and}\qquad \nabla_u {\mathcal A}\, v_\cdot= - v_\cdot\nabla_u{\mathcal A}^*, \end{align} for every $u\in N,\ v\in T^*X\vert_Z$. Since ${\mathcal A}$ is transverse to ${\mathcal L}\cap {\mathcal F}^\ell$ along the normal directions of $Z_\ell = Z \subset X$, by Proposition~\ref{prop:properties_of_perturbation_term_A} \eqref{eq:transversality_assumption_with_respect_to_connection} we have \[ \nabla_u{\mathcal A}(S^0)\subseteq S^1 \qquad \mbox{and}\qquad \nabla_u{\mathcal A}^*(S^1)\subseteq S^0, \] for every $u \in N$. Using the preceding relations with equations \eqref{eq:spin_action_on_S}, we make the following definition: \begin{defn} Let $S(N) \rightarrow Z$ be the normal sphere bundle of $Z$ in $X$. For every $v\in N$ let $v^*$ be its algebraic dual.
The bundle maps, \begin{equation} \begin{aligned} \label{defn:IntroDefM} M^i: S(N) &\rightarrow \mathrm{End}(S^i\vert_Z),\quad v\mapsto M_v^i := \begin{cases}- v^*_\cdot \nabla_v{\mathcal A}, & \quad \text{if $i=0$,} \\ - v^*_\cdot \nabla_v{\mathcal A}^*, & \quad \text{if $i=1$,} \end{cases} \\ M: S(N) &\rightarrow \mathrm{End}(S\vert_Z),\quad v\mapsto M_v^0 \oplus M_v^1, \end{aligned} \end{equation} will be of the utmost importance. \end{defn} \begin{proposition} \label{prop:spectrum_and_eigenspaces_as_Clifford_submodules} Assume that $n = \dim X > \dim Z= m>0$. Fix $v\in S(N)$ and $w\in S(TX)\vert_Z$ perpendicular to $v$, and set $p=\pi(v) \in Z$. Given an eigenvalue $\lambda_v:=\lambda$ of $M^i_v$, we denote by $S_\lambda^i \subset S^i_p$ the associated eigenspace and set $S_\lambda = S_\lambda^0 \oplus S_\lambda^1$. The matrices $M^i_v,\ i=0,1$ enjoy the following properties: \begin{enumerate} \item The matrix $M_v^0$ is symmetric, with spectrum a finite set symmetric around the origin. Moreover, the map \[ v^*_\cdot w^*_\cdot :S^0_\lambda \to S_{-\lambda}^0, \] is an isomorphism. \item The matrices $M_v^i,\ i=0,1$ have the same spectrum, and the Clifford multiplications \[ w^*_\cdot : S_\lambda^i \to S^{1-i}_\lambda \quad \text{and}\quad v^*_\cdot : S_\lambda^i \to S_{-\lambda}^{1-i}, \] induce isomorphisms for every $i=0,1$. \item We have the following submodules of $S_p$: \begin{align*} S_\lambda\oplus S_{-\lambda} &\qquad \text{ is a $\text{Cl}(T^*_pX)$-module,} \\ S_\lambda^i\oplus S_{-\lambda}^i &\quad \text{ is a $\text{Cl}^0(N^*_p)$-module,} \\ S^i_\lambda \oplus S^{1-i}_\lambda &\quad \text{ is a $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z_p)$-module,} \end{align*} for every $i=0,1$. \end{enumerate} \end{proposition} \begin{proof} Let $\{e^A\}_{A= j, \alpha}$ be an orthonormal basis of $T^* X_p$ so that $e^1= v^*$ and $e^2=w^*$. For an ordered string $1\leq i_1 < \dots < i_k \leq n$ we set $I = (i_1, \dots, i_k),\ \lvert I\rvert = k$ and $e^I = e^{i_1}_\cdot \dots e^{i_k}_\cdot$. The proof of the proposition follows from the identity, \begin{equation} \label{eq:basic_identity} M_v^i e^I = \begin{cases} e^I M_v^i,& \quad \text{if $\lvert I\rvert$ is even and $1\notin I$}, \\ - e^I M_v^i,& \quad \text{if $\lvert I\rvert$ is even and $1\in I$}, \\ e^I M_v^{1-i},& \quad \text{if $\lvert I\rvert$ is odd and $1\notin I$}, \\ -e^I M_v^{1-i},& \quad \text{if $\lvert I\rvert$ is odd and $1\in I$}. \end{cases} \end{equation} The identity is a direct consequence of \eqref{eq:Clifford_relations}, \eqref{eq:dcond} and the definitions of $M^i_v$ in \eqref{defn:IntroDefM}. \end{proof} We now study the properties of the matrix functions $S(N)\ni v\mapsto M^i_v \in \mathrm{End}(S^i\vert_Z)$. By \cite[Ch.~II, Th.~6.8, p.~122]{k}, we can always choose a family of eigenvalues (possibly with repetitions) that are smooth functions $\{\lambda_\ell :S(N) \to \mathbb{R} \}_{\ell=1}^d$, so that $M^0_v- \lambda_\ell(v) 1_{S^0\vert_Z}$ is a singular matrix for every $v\in S(N)$ and every $\ell$. Associated to each such $\lambda_\ell$ is a bundle of eigenspaces $\{(S_\ell^0)_v\}_{v\in S(N)}$, a subbundle of $\pi^*(S^0\vert_Z)$ with fibers of varying dimension, whose jumping locus in $S(N)$ is located where the different graphs of the eigenvalues intersect.
Assumptions~\ref{Assumption:normal_rates} and \ref{Assumption:stable_degenerations} guarantee two main properties: 1) the bundles of eigenspaces have constant ranks and are pullbacks under $\pi: S(N)\to Z$ of eigenbundles defined on the core set $Z$, and 2) the eigenvalues are functions on the core set $Z$. The following two propositions explain why this is the case. The following second order term in the expansion of $\tilde{\mathcal A}^*\tilde{\mathcal A}$ along $Z$ will be important: \begin{defn} \begin{enumerate} \item We introduce the bundle maps, \begin{align*} M^0: N\otimes N &\to \mathrm{Sym}(S^0),\quad v\otimes w \mapsto \left.\frac{1}{2}\left(\nabla_v{\mathcal A}^*\nabla_w{\mathcal A} + \nabla_w{\mathcal A}^*\nabla_v{\mathcal A}\right)\right\vert_{S^0\vert_Z}, \\ M^1: N\otimes N &\to \mathrm{Sym}(S^1),\quad v\otimes w \mapsto \left.\frac{1}{2}\left(\nabla_v{\mathcal A} \nabla_w{\mathcal A}^* + \nabla_w{\mathcal A} \nabla_v{\mathcal A}^*\right)\right\vert_{S^1\vert_Z}. \end{align*} We also introduce \[ M: N\otimes N \to \mathrm{Sym}(S^0)\oplus \mathrm{Sym}(S^1),\quad v\otimes w \mapsto M^0_{v,w} \oplus M^1_{v,w}. \] \item We say that $\{M^i_{v,w}\}_{v,w\in S(N)}$ is compatible with the inner product $g_X$, or simply compatible, if \begin{equation} \label{eq:compatibility_assumption_2} M^i_{v,w} \equiv 0 \quad \text{whenever} \quad g_X(v,w)=0, \end{equation} for every $v,w\in N$. If both $M^0$ and $M^1$ are compatible, we say that $M$ is compatible. \end{enumerate} \end{defn} \begin{lemma} \label{lem:basic_properties_of_M_v_w} The term $M_{v,w}$ satisfies the following identities: \begin{enumerate} \item For every $v\in S(N)$, we have $M_{v,v} = M_v^2$. \item For every $v,w\in N$ the equation, \begin{equation} \label{eq:commutator_vs_second_order_term} [M_v, M_w] = 2 w^*_\cdot v^*_\cdot (M_{v,w} - g(v,w) M_v M_w), \end{equation} holds. \item For every $u\in TX\vert_Z,\ v,w\in N$ and every $i=0,1$, the relation, \begin{equation} \label{eq:quadratic_term_vs_adjoint_quadratic_term} u^*_\cdot M^i_{v,w} = M^{1-i}_{v,w} u^*_\cdot, \end{equation} holds. \item Assumption~\ref{Assumption:normal_rates} is equivalent to $M^0_{v,w}$ being compatible and to $M^0_{v,v}$ being invertible for some $v\in S(N)$. \item If $\tilde{\mathcal A}^*\tilde{\mathcal A}$ satisfies Assumption~\ref{Assumption:normal_rates} then so does $\tilde{\mathcal A} \tilde{\mathcal A}^*$; that is, there exists a positive-definite symmetric endomorphism $Q^1$ of the bundle $S^1$, so that \[ \left.\tilde{\mathcal A}\tilde{\mathcal A}^*\right\vert_{S^1} = r^2\left(Q^1 + \left.\frac{1}{2}\bar{\mathcal A}\bar A^*_{rr}\right\vert_{S^1}\right)+ O(r^3). \] Furthermore, \begin{equation} \label{eq:Q0_vs_Q1} u^*_\cdot Q^i = Q^{1-i}u^*_\cdot, \end{equation} the endomorphisms $Q^0$ and $Q^1$ share the same spectrum, and they are $\text{Cl}^0(T^*X)$-equivariant. \end{enumerate} \end{lemma} \begin{proof} All the identities are direct consequences of \eqref{eq:Clifford_relations} and \eqref{eq:dcond}. For the last two entries, working in a Fermi chart $(\mathcal{N}_U, (x_j, x_\alpha)_{j,\alpha})$ with $v = \sum_\alpha \tfrac{x_\alpha}{r}\partial_\alpha \in S(N)$, we have that \[ M^0_{v,v} = \sum_{\alpha, \beta}\frac{x_\alpha x_\beta}{r^2} M^0_{\alpha, \beta} =\sum_{\alpha, \beta}\frac{x_\alpha x_\beta}{r^2}\left. \bar A_\alpha^* \bar A_\beta\right\vert_{S^0}.
\] By comparing terms in the expansion of Proposition~\ref{prop:properties_of_perturbation_term_A} \eqref{eq:jet1}, it follows that Assumption~\ref{Assumption:normal_rates} is equivalent to $M^0_{v,v} = Q^0$ for every $v\in S(N)$, for $Q^0$ a positive-definite matrix; this is a quadratic relation in $v\in S(N)$. By polarization, this is equivalent to $M^0_{v,w}$ satisfying condition \eqref{eq:compatibility_assumption_2} with $i=0$ and to $M^0_{v,v}$ being invertible for some $v\in S(N)$. Equation \eqref{eq:quadratic_term_vs_adjoint_quadratic_term} then shows that $M^0_{v,w}$ being compatible is equivalent to $M^1_{v,w}$ being compatible. It follows that if $\tilde{\mathcal A}^*\tilde{\mathcal A}$ satisfies Assumption~\ref{Assumption:normal_rates} then so does $\tilde{\mathcal A} \tilde{\mathcal A}^*$, and there exists a positive-definite symmetric endomorphism $Q^1$ of the bundle $S^1$, so that \begin{equation*} \left.\tilde{\mathcal A}\tilde{\mathcal A}^*\right\vert_{S^1} = r^2\left(\left.Q^1 + \frac{1}{2}\bar{\mathcal A}\bar A^*_{rr}\right\vert_{S^1}\right)+ O(r^3). \end{equation*} Finally, for $v=w$, equation \eqref{eq:quadratic_term_vs_adjoint_quadratic_term} becomes \eqref{eq:Q0_vs_Q1}. The last two assertions then follow from this equation. \end{proof} The following proposition shows that Assumption~\ref{Assumption:normal_rates} implies that the eigenbundles of $M_v$ are pullbacks under $\pi:S(N)\to Z$ of eigenbundles defined on the core set $Z$ and that the corresponding eigenvalues are functions on the core set $Z$: \begin{prop} \label{prop:more_properties} We use the notation from Proposition~\ref{prop:spectrum_and_eigenspaces_as_Clifford_submodules}. \begin{enumerate} \item \label{lem:commutativity_of_M_v_w} The family $\{M_{v,w}\}_{v,w\in S(N)}$ commutes if and only if the subfamily $\{M_v^2\}_{v\in S(N)}$ commutes, and that happens if and only if the direct sum of the eigenbundles $S_\lambda\oplus S_{-\lambda} \to S(N)$ can be pushed forward to subbundles of the bundle $S\vert_Z\to Z$, for every eigenvalue $\lambda: S(N) \to \mathbb{R}$ of the family $\{M_v: v\in S(N)\}$. \item \label{rem:metric_compatibility1} If $\{M_{v,w}\}_{v,w \in S(N)}$ is compatible with $g_X$, then there exists $Q: Z \to \mathrm{Sym}\mathrm{End}(S^0) \oplus \mathrm{Sym}\mathrm{End}(S^1)\vert_Z$ so that \[ M_{v,w} = g_X(v,w)Q, \qquad \text{for every $v,w \in N$}. \] \item \label{rem:metric_compatibility2} Assume the squares $\{M_v^2: v \in S(N)\}$ are invertible matrices, independent of $v\in S(N)$ but dependent on $\pi(v)$. Denote their common value by $Q$. Then $Q$ is a symmetric matrix and the family $\{M_v\}_{v\in S(N)}$ respects the eigenspaces of $Q$. Moreover, if $\lambda: S(N) \to \mathbb{R}$ is an eigenvalue of this family then $\lambda$ is a function on the core set $Z$. \end{enumerate} \end{prop} \begin{proof} In proving \eqref{lem:commutativity_of_M_v_w}, let $v,w\in S(N)$ and set $e= (v+w)/\lvert v+w\rvert$, another unit vector. By polarization of the identity $M_{u,u} = M^2_u$, we obtain \[ M_{v,w} = \frac{1}{2}(\lvert v+w\rvert^2 M^2_e -M^2_v - M^2_w). \] Hence, if the family $\{M_v^2\}_{v\in S(N)}$ commutes, then the family $\{M_{v,w}\}_{v,w \in S(N)}$ commutes as well. On the other hand, the family $\{M_v^2\}_{v\in S(N)}$ commutes if and only if the matrices in the family are simultaneously diagonalizable. Since $M_v^2$ is the direct sum of the $(M_v^i)^2,\ i=0,1$, these matrices satisfy analogous commutativity properties.
Thus, given an eigenvalue $\lambda = \lambda(v)$ of $M_v^i$, its eigenspace $S_\lambda^i\oplus S_{-\lambda}^i$ is common to all $(M^i_w)^2$ with $\pi(w)=\pi(v)\in Z$. Assume now that $\{M_{v,w}\}_{v,w\in S(N)}$ satisfies condition \eqref{eq:compatibility_assumption_2}. Given perpendicular vectors $v,w\in S(N)$, we get another pair of unit perpendicular vectors $e^{\pm}= (v \pm w)/\sqrt{2}$, and by the definition of compatibility we have \[ 0=M_{e^+, e^-} = \frac{1}{2} (M^2_v - M^2_w) \quad \Longrightarrow \quad M_v^2 = M^2_w. \] It follows that the family $\{M^2_v\}_{v\in S(N)}$ consists of a single positive-definite symmetric matrix $Q: S\vert_Z \to S\vert_Z$. Given an orthonormal basis $\{e_\alpha\}$ of $N$ and arbitrary unit vectors $v= \sum_\alpha v_\alpha e_\alpha$ and $w= \sum_\alpha w_\alpha e_\alpha$, we obtain \[ M_{v,w}= \sum_{\alpha, \beta} v_\alpha w_\beta M_{\alpha, \beta} = \sum_\alpha v_\alpha w_\alpha M_\alpha^2 = g_X(v,w)Q, \] which is \eqref{rem:metric_compatibility1}. Finally, the first part of \eqref{rem:metric_compatibility2} follows since $[Q, M_v] = M_v^3 - M_v^3 =0$, for every $v\in S(N)$. Given an eigenvalue $\lambda: S(N) \to \mathbb{R}$ of $\{M_v\}_{v\in S(N)}$ we have that $\lambda^2$ is an eigenvalue of $Q$ and therefore a non-vanishing function on the core set $Z$. But $\lambda$ is continuous and of constant sign as a function on $S(N)$, and therefore it descends to a function on the core set $Z$. \end{proof} From now on we assume Assumptions~\ref{Assumption:normal_rates} and \ref{Assumption:stable_degenerations}. It is evident from Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility2} that the family $\{M_v\}_{v\in S(N)}$ respects the decomposition \eqref{eq:eigenspaces_of_Q_i} into eigenbundles of $Q$. Recall the definitions of the bundles $S^i_{\ell k}$ in Definition~\ref{defn:IntroDefSp}. We proceed to describe the structure of a single $S_\ell = S^0_\ell \oplus S^1_\ell$ for a fixed eigenvalue $\lambda_\ell:Z \to (0, \infty)$: \begin{prop}[Structure of compatible subspaces] \label{prop:properties_of_compatible_subspaces} Assume $0<m=\dim Z< n = \dim X$. \begin{enumerate} \item We have an alternate description of $S_\ell^{i\pm}$ as, \begin{equation} \label{defn:positive_negative_espaces1} S_\ell^{i\pm} = \bigcap_{v\in S(N)}\{ \xi\in S^i_\ell: M_v^i\xi = \pm\lambda_\ell \xi\}, \end{equation} for every $i=0,1$. \item $S^\pm_\ell$ are both $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z)$-modules. Furthermore, \begin{equation} \label{prop:structure_of_compatible_subspaces2} \bigcap_{v,w \in S(N)} \ker[M^i_v, M^i_w] = S_\ell^{i+} \oplus S_\ell^{i-}, \end{equation} for every $i=0,1$. \item \label{prop:structure_of_compatible_subspaces3} We view $\text{Cl}(N^*)$ as an irreducible $\mathbb{Z}_2$-graded $\text{Cl}(N^*)$-left module and $S^+_\ell = S_\ell^{0+} \oplus S_\ell^{1+}$ as an irreducible $\mathbb{Z}_2$-graded $\text{Cl}(T^*Z)$-module. Then there is a bundle isomorphism of $\text{Cl}(N^*)\hat\otimes \text{Cl}(T^*Z)$-modules \begin{equation} \label{eq:graded_tensor_product_decomposition} c_\ell : \text{Cl}(N^*) \hat\otimes S^+_\ell\to S_\ell, \end{equation} induced by the Clifford multiplication of $S$ as a $\text{Cl}(T^*X\vert_Z)$-module, where $\hat \otimes$ denotes the $\mathbb{Z}_2$-graded tensor product. Moreover, the principal $\mathrm{SO}(\dim S_\ell)$-bundle of $S_\ell$ is reduced to the product of the principal $\mathrm{SO}$-bundles of $N$ and $S^+_\ell$ respectively.
\item Under the identification of bundles $\Lambda^* N^* \simeq \text{Cl}(N^*)$, the bundle isomorphism \eqref{eq:graded_tensor_product_decomposition} restricts to a bundle isomorphism \[ c_\ell: \Lambda^k N^* \otimes S^{+ i}_\ell \to \begin{cases} S^i_{\ell k},& \quad \text{if $k$ is even,} \\ S^{1-i}_{\ell k}, & \quad \text{if $k$ is odd,} \end{cases} \] for every $i=0,1$. In particular, there is an orthogonal decomposition \begin{equation} \label{eq:decompositions_of_S_ell} S^i_\ell =\left( \bigoplus_{k\, \text{even}} S^i_{\ell k} \right) \oplus \left( \bigoplus_{k\, \text{odd}} S^{1-i}_{\ell k} \right), \end{equation} for every $i=0,1$. \end{enumerate} \end{prop} \begin{proof} Throughout the proof, we fix an orthonormal basis $\{e_\alpha\}_\alpha$ of $N$ with dual basis $\{e^\alpha\}_\alpha$, work with $i=0$, and set $M_\alpha^0:= M^0_{e_\alpha}$; a similar proof holds for $i=1$. Combining Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility1} with equation \eqref{eq:commutator_vs_second_order_term}, we have \begin{equation} \label{eq:com_1} [ M_v^0, M_w^0 ] = 2 w^*_\cdot v^*_\cdot g(v, w)(Q^0 - M_v^0 M_w^0) = 2 w^*_\cdot v^*_\cdot g(v, w)M_v^0(M_v^0 - M_w^0), \end{equation} for any $v,w\in S(N)$. Hence \begin{equation} \label{eq:dihotomy} \xi \in \ker[M_v^0, M_w^0] \qquad \Longleftrightarrow \qquad v\perp w \quad \text{or}\quad M_v^0 \xi = M_w^0\xi. \end{equation} In particular we have $[M_\alpha^0, M_\beta^0] = 0$ for every $\alpha, \beta$, so that the family $\{M_\alpha^0\}_\alpha$ commutes. Recall the bundle map $C^0$ introduced in \eqref{eq:IntroDefCp}. By Proposition~\ref{prop:properties_of_perturbation_term_A} \ref{eq:transversality_assumption_with_respect_to_connection}, $C^0 = \sum_\alpha M_\alpha^0$. In particular \[ [C^0, M_\alpha^0]= \sum_\beta [M_\beta^0, M_\alpha^0] =0, \] and since any unit vector $v\in S(N)$ can be completed to an orthonormal basis of $N$, we have $[C^0, M_v^0]=0$; therefore $M^0_v$ preserves the eigenspaces of $C^0$, for every $v \in S(N)$. We calculate these eigenspaces by using the description of $C^0$ with respect to the orthonormal basis $\{e_\alpha\}_\alpha$. The members of the family $\{M_\alpha^0\}_\alpha$ are commuting symmetric matrices that preserve $S_\ell^0$ so that \[ (M^0_\alpha)^2 = Q^0\rvert_{S_\ell^0} = \lambda_\ell^2 1_{S^0_\ell}. \] Hence there exists a common eigenvector $\eta \in S^0_\ell$ and a string $I$ with \[ M_\alpha^0 \eta =\begin{cases} \lambda_\ell \eta,& \quad \text{if $\alpha\notin I$,}\\ - \lambda_\ell \eta,& \quad \text{if $\alpha\in I$.} \end{cases} \] Recall that for every string $I= (\alpha_1, \cdots,\alpha_h)$ of ordered positive integers we denote $e^I_\cdot = e^{\alpha_1}_\cdot \dots e^{\alpha_h}_\cdot \in \text{Cl}(N^*)$. Define \begin{equation} \label{eq:xi_minus_from_xi_plus} \xi^{0+} = \begin{cases} e^I_\cdot \eta,& \quad \text{if $\lvert I\rvert$ is even,}\\ e^I_\cdot e_\cdot \eta,& \quad \text{if $\lvert I\rvert$ is odd,} \end{cases} \quad \text{and}\quad \xi^{0-} = \begin{cases} d\mathrm{vol}^N_\cdot \xi^{0+},& \quad \text{if $\dim N$ is even,}\\ d\mathrm{vol}^N_\cdot e_\cdot \xi^{0+},& \quad \text{if $\dim N$ is odd,} \end{cases} \end{equation} so that, by \eqref{eq:basic_identity}, $M_\alpha^0 \xi^{0\pm} = \pm \lambda_\ell \xi^{0\pm}$, for every $\alpha$, where $e\in T^*Z$ is a unit covector (since $m>0$). In particular $\xi^{0\pm} \in S^{0\pm}_\ell$, so that $S^{0\pm}_\ell$ are nontrivial subspaces of $S^0_\ell$ and therefore eigenspaces of $C^0$.
Given the string $I= (\alpha_1, \cdots,\alpha_h)$, we define a $\text{Cl}^0(T^*Z)$-module by \[ S^0_{\ell I} := \bigcap_{j=1}^{\lvert I\rvert}\{\xi \in S^0_\ell: M_{\alpha_j}^0 \xi = -\lambda_\ell \xi\}, \] for every $1\leq \lvert I\rvert \leq \dim N-1$. Notice that $S^0_{\ell I}$ and $S^0_{\ell J}$ are orthogonal when $I\neq J$. A similar construction in $S^1_\ell$ yields $S^1_{\ell I}$. Finally, there are isometries \[ A_I : = \begin{cases}e^I_\cdot : S^{i+}_\ell \to S^i_{\ell I} &\quad \text{when $\lvert I\rvert$ is even,}\\ e^I_\cdot : S^{i+}_\ell \to S^{1-i}_{\ell I} &\quad \text{when $\lvert I\rvert$ is odd,} \end{cases} \] for every $i=0,1$. It follows that the eigenspaces $S^i_{\ell I}$ are nontrivial, all of equal dimension, and we obtain the decompositions \begin{equation} \label{eq:decomposition_of_S_0_given_frame_e_a} S^i_\ell\vert_Z = \bigoplus_I S^i_{\ell I} = \bigoplus_{\{I: \lvert I\rvert= \text{even}\}} A_I S^{i+}_\ell \oplus \bigoplus_{\{I: \lvert I\rvert= \text{odd}\}} A_I S^{(1- i)+}_\ell, \end{equation} for every $i=0,1$. This decomposition depends on our choice of frame $\{e^\alpha\}_\alpha$. However, for each $0\leq k \leq n-m$, the component $ \bigoplus_{\{I:\lvert I\rvert=k\}} S^i_{\ell I}$ is the eigenspace $S^i_{\ell k}$ of the matrix $C^i$, corresponding to the eigenvalue $(n-m-2k)\lambda_\ell$. Therefore, for every $k$, the component is independent of the choice of the frame. Decomposition \eqref{eq:decompositions_of_S_ell} follows. When $k=0$ or $k=\dim N$, every $\xi \in S^0_{\ell k}$ satisfies \[ M^0_\alpha \xi = \begin{cases} \lambda_\ell \xi, & \quad \text{if $k=0$,} \\ -\lambda_\ell \xi , & \quad \text{if $k=\dim N$,} \end{cases} \] for every $\alpha$. Given $v= \sum_\alpha v_\alpha e_\alpha \in N$ with $\lvert v\rvert=1$ and using the expansion, \begin{equation} \label{eq:M_v_expansion} M_v = \sum_\alpha v_\alpha^2M_\alpha + \sum_{\alpha<\beta} v_\alpha v_\beta e^\alpha_\cdot e^\beta_\cdot (M_\alpha - M_\beta), \end{equation} we obtain \[ M_v^0 \xi = \begin{cases} \lambda_\ell \xi, & \quad \text{if $k=0$,} \\ -\lambda_\ell \xi , & \quad \text{if $k=\dim N$,} \end{cases} \] for every $v\in S(N)$. This proves \eqref{defn:positive_negative_espaces1}. By \eqref{eq:dihotomy} \[ \xi \in \bigcap_{v,w\in S(N)} \ker[M^0_v, M^0_w] \qquad \Longleftrightarrow \qquad \{M_v^0\xi\}_{v\in S(N)} \quad \text{is a singleton.} \] This implies the first inclusion in \eqref{prop:structure_of_compatible_subspaces2}. For the reverse inclusion of \eqref{prop:structure_of_compatible_subspaces2}, assume $\xi \in S^0_\ell$ is such that $[M_v^0, M^0_w]\xi=0$ for all $v,w$. Using decomposition \eqref{eq:decomposition_of_S_0_given_frame_e_a}, there exist $a_I^i\in \mathbb{R}$ and linearly independent vectors $\xi_I^i\in S^{i+}_\ell$ so that \[ \xi = \sum_{\lvert I\rvert\ \text{even}} a^0_I e^I_\cdot \xi^0_I + \sum_{\lvert I\rvert\ \text{odd}} a^1_I e^I_\cdot \xi^1_I. \] Clearly, whenever $\lvert I\rvert\neq 0, \dim N$, $\lvert I\rvert$ is even and $a^0_I \neq 0$, there exist $\alpha\in I$ and $\beta\notin I$. But then \[ M^0_\alpha e^I_\cdot \xi^0_I = -\lambda_\ell e^I_\cdot \xi^0_I\quad \text{and} \quad M^0_\beta e^I_\cdot \xi^0_I = \lambda_\ell e^I_\cdot \xi^0_I, \] so that $\{M_\gamma^0\xi\}_\gamma$ is not a singleton and $\xi$ cannot belong to the left-hand side of \eqref{prop:structure_of_compatible_subspaces2}. A similar statement holds when $\lvert I\rvert\neq 0 , \dim N$ and $\lvert I\rvert$ is odd.
Therefore $\lvert I\rvert=0$ or $\lvert I\rvert= \dim N= n-m$ and \[ \xi = \begin{cases} a_0^0 \xi_0^0 + a_{n-m}^0 d\mathrm{vol}^N_\cdot\xi^0_{n-m} ,& \quad \text{if $n-m$ is even,} \\ a_0^0 \xi_0^0 + a_{n-m}^1 d\mathrm{vol}^N_\cdot \xi^1_{n-m} ,& \quad \text{if $n-m$ is odd.} \end{cases} \] Observe that in each case $ d\mathrm{vol}^N_\cdot\xi^0_{n-m},\ d\mathrm{vol}^N_\cdot \xi^1_{n-m}\in S^{0-}_\ell$. This finishes the proof of the other inclusion in \eqref{prop:structure_of_compatible_subspaces2}. To prove \eqref{eq:graded_tensor_product_decomposition} we observe that the inclusion bundle maps, \begin{equation*} (\text{Cl}^0(N^*) \otimes S^{0+}_\ell) \oplus (\text{Cl}^1(N^*)\otimes S^{1+}_\ell) \hookrightarrow (\text{Cl}^0(T^*X\vert_Z) \otimes S^{0+}_\ell) \oplus (\text{Cl}^1(T^*X\vert_Z)\otimes S^{1+}_\ell ) \rightarrow S^0_\ell\vert_Z, \end{equation*} and \begin{equation*} (\text{Cl}^0(N^*) \otimes S^{1+}_\ell) \oplus (\text{Cl}^1(N^*)\otimes S^{0+}_\ell) \hookrightarrow (\text{Cl}^0(T^*X\vert_Z) \otimes S^{1+}_\ell) \oplus (\text{Cl}^1(T^*X\vert_Z)\otimes S^{0+}_\ell ) \rightarrow S^1_\ell\vert_Z, \end{equation*} define isomorphisms on the fibers, as a consequence of the decomposition \eqref{eq:decompositions_of_S_ell} and its analogue for $S^1_\ell\vert_Z$. Under the identification $\Lambda^* N^* \simeq \text{Cl}(N^*)$ we have that $\Lambda^k N^*\otimes S^{0+}_\ell$ is a bundle isomorphic to the eigenbundle $S^0_{\ell k}$, for every $0\leq k \leq n-m$. This finishes the proof of \eqref{eq:graded_tensor_product_decomposition} and \eqref{eq:decompositions_of_S_ell}, and with it the proof of the proposition. \end{proof} \medskip \subsection{Connection \texorpdfstring{$\bar\nabla$}{} emerges from the 1-jet of the connection \texorpdfstring{$\nabla^{E\vert_Z}$}{} along \texorpdfstring{$Z$}{}.} Recall now the Fermi coordinates $(\mathcal{N}_U, (x_j, x_\alpha)_{j,\alpha})$ and the frames $\{e_j, e_\alpha\}_{j,\alpha}$ of $N\vert_U$, centered at $p\in Z$, introduced in Appendix~\ref{App:Fermi_coordinates_setup_near_the_singular_set}. Let $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ be orthonormal frames trivializing $E^0\vert_U$ and $E^1\vert_U$. Using Proposition~\ref{prop:properties_of_compatible_subspaces}, we choose the frames $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$ to respect the decomposition \eqref{eq:decompositions_of_S_ell} and the decomposition of each $S^i_{\ell k}$ into simultaneous eigenspaces of $\{M_\alpha\}_\alpha$. In particular we obtain a trivialization of $S^{i+}_\ell\vert_U$. From Proposition~\ref{prop:properties_of_compatible_subspaces}, we have the decompositions and bundle maps \begin{equation} \label{eq:decompositions} \begin{aligned} S^0\vert_Z &= \bigoplus_\ell S_\ell^0, \qquad c_\ell : \text{Cl}^0(N^*)\otimes S^{0+}_\ell\oplus \text{Cl}^1(N^*)\otimes S^{1+}_\ell \rightarrow S_\ell^0 , \\ S^1\vert_Z &= \bigoplus_\ell S^1_\ell, \qquad c_\ell: \text{Cl}^0(N^*)\otimes S^{1+}_\ell\oplus \text{Cl}^1(N^*)\otimes S^{0+}_\ell \rightarrow S_\ell^1, \end{aligned} \end{equation} where the maps $c_\ell$ are isometric isomorphisms, and the structure group of $S^i\vert_Z$ is reduced to the product of $SO(n-m)$ and the structure group of $S^{i+}$. The Clifford bundles $\text{Cl}^i (N^*)$ admit an $SO(n-m)$-connection induced by $\nabla^N$. Based on the decompositions \eqref{eq:decompositions}, we introduce a new connection on $E^i\vert_Z$: \begin{defn} \label{eq:connection_bar_nabla} Let $v\in TZ$.
By decompositions \eqref{eq:decompositions}, a given section $\xi \in C^\infty(Z; S^0_\ell \oplus S^1_\ell)$ is a sum of elements of the form $c(w_i) \xi^{i+}$ for some uniquely defined $w_i\in C^\infty( Z ; \text{Cl}(N^*))$ and some $\xi^{i+} \in C^\infty(Z; S^{i+}_\ell),\, i =0,1$. We define the connection \[ \bar\nabla_v (c(w_i) \xi^{i+}):= c(\nabla^{N^*}_v w_i) \xi^{i+} + ( c(w_i) \circ P_\ell^{i+})(\nabla^{E^i\vert_Z}_v \xi^{i+}), \] for every $i=0,1$. When $\xi \in C^\infty(Z; (S^i\vert_Z)^\perp)$, we define \[ \bar\nabla_v \xi = (1_{E^i\vert_Z}- P^i)(\nabla^{E^i\vert_Z}_v\xi). \] \end{defn} The connection $\bar\nabla$ satisfies the following basic properties: \begin{prop} \label{prop:basic_restriction_connection_properties} \begin{enumerate} \item By definition, $\bar\nabla$ preserves the space of sections of the bundles $S^i\vert_Z,\ (S^i\vert_Z)^\perp$ and $S^i_\ell,\ S^{i+}_\ell$, for every $\ell$ and every $i=0,1$. Moreover, it is compatible with the metrics of $E^i\vert_Z, \ i=0,1$, and reduces, by definition, to a sum of connections, one for each summand of the decomposition of $S^i\vert_Z$ into the eigenbundles $S^i_{\ell k}$ introduced in decompositions \eqref{eq:eigenspaces_of_Q_i} and \eqref{eq:decompositions_of_S_ell}. \item Let $\xi\in C^\infty(Z; S^i\vert_Z)$. Then $\bar\nabla$ satisfies, \begin{equation} \label{prop:basic_restriction_connection_properties2} [\bar\nabla, c(w)] \xi = \begin{cases} c( \nabla^{N^*} w) \xi,& \quad \text{if $w\in C^\infty(Z; N^*)$,} \\ c(\nabla^{T^*Z}w)\xi,& \quad \text{if $w\in C^\infty(Z; T^*Z)$.} \end{cases} \end{equation} When $\xi\in C^\infty(Z; (S^i\vert_Z)^\perp)$ and $w\in C^\infty(Z; T^*\mathcal{N}\vert_Z)$, then \begin{equation} \label{prop:basic_restriction_connection_properties1} [\bar\nabla, c(w)] \xi = c(\nabla^{T^*\mathcal{N}\vert_Z} w) \xi. \end{equation} \end{enumerate} \end{prop} The proof is provided in Appendix subsection~\ref{subApp:The_expansion_of_the_Spin_connection_along_Z}. Finally, we consider the difference \begin{equation} \label{eq:remainder_term} B^i: TZ \to \mathrm{End}(E^i\vert_Z),\quad v \mapsto \nabla_v - \bar\nabla_v, \end{equation} for every $i=0,1$. By the definition of $\bar \nabla$, it follows that \begin{equation} \label{eq:properties_of_remainder_term} B_v^i(S^{i+}_\ell) \perp S^{i+}_\ell \qquad \text{and} \qquad B_v^i ((S^i\vert_Z)^\perp) \subset S^i\vert_Z, \end{equation} for every $\ell$ and every $i=0,1$. By Proposition~\ref{prop:basic_restriction_connection_properties}, $B_v^i$ is a skew-symmetric bundle map satisfying \[ B_v^i( w_\cdot \xi) = (p_Z\nabla^{T^*X\vert_Z}_v w)_\cdot \xi + w_\cdot B_v^{1-i} \xi, \] for every $v\in TZ,\ i=0,1$ and every $w\in C^\infty(Z; N^*)$ and $\xi\in C^\infty(Z; S^i\vert_Z)$, where $p_Z:T^*X\vert_Z \to T^*Z$ is the orthogonal projection. The bundle maps $B^i,\ i=0,1$ appear in the definition of the terms ${\mathcal B}^i$ in the expansions of Lemma~\ref{lem:Dtaylorexp}, which will be the main objective of the following section. \medskip \vspace{1cm} \setcounter{equation}{0} \section{Structure of \texorpdfstring{$D + s{\mathcal A}$}{} along the normal fibers} \label{sec:structure_of_D_sA_along_the_normal_fibers} Recall that $Z\subset X$ is an $m$-dimensional submanifold with normal bundle $\pi:\mathbb{R}^{n-m} \hookrightarrow N\to Z$. We study the perturbative behavior of the operator $\tau^{-1}\circ\tilde D_s : C^\infty(\mathcal{N}; \pi^*(E^0\vert_Z)) \to C^\infty(\mathcal{N}; \pi^*(E^1\vert_Z))$.
The expansions will be carried out for the diffeomorphic copies $g = \exp^* g_X$, the connection $\tilde \nabla$, the volume form $d\mathrm{vol}^N$, the Clifford structure $\tilde c = {\mathcal I}^{-1} \circ c \circ (I \otimes {\mathcal I})$ and the perturbation $\tilde {\mathcal A}$, as defined in Definition~\ref{eq:tilde_D}. Throughout the section, we use bundle coordinates $(N\vert_U, (x_j, x_\alpha)_{j,\alpha})$ of the total space $N$ that are restricted to the tubular neighborhood $\mathcal{N}_\varepsilon = \mathcal{N}$ and frames $\{\sigma_\ell\}_\ell$ of $E^0\vert_U$ and $\{f_k\}_k$ of $E^1\vert_U$ that were introduced in Appendix~\ref{App:Taylor_Expansions_in_Fermi_coordinates}, obeying relations \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames} and \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}. Assumption~\ref{Assumption:normal_rates} guarantees that the singular behavior of the perturbation term $\tilde {\mathcal A}$ of the operator $ \tau^{-1} \circ\tilde D_s$ becomes regular after the rescaling $\{w_\alpha = \sqrt{s} x_\alpha\}_\alpha$. Recall the decomposition $TN = {\mathcal V} \oplus {\mathcal H}$ into vertical and horizontal distributions introduced in Appendix~\ref{subApp:The_total_space_of_the_normal_bundle_N_to_Z}. The terms in the perturbation series of $\tilde D$ involving derivation fields in the normal distribution will re-scale to order $O(\sqrt{s})$, after the re-scaling is applied, introducing the operator $\slashed D_0$. Since the metric in the fibers of $N$ becomes Euclidean in the blow up, this is a Euclidean Dirac operator in the normal directions. The terms involving derivation fields in the horizontal distribution remain of order $O(1)$, introducing the horizontal operator $\bar D^Z$. Each of these differential operators is originally defined to act on sections of the bundles $\pi^*(E\vert_Z)\to \mathcal{N}$ of the tubular neighborhood $ \mathcal{N}\subset N$. However, these operators make sense on sections of the bundles $\pi^*(E\vert_Z)\to N$ over the total space of the normal bundle $\pi : N \to Z$. Recall the operators $\mathfrak{c}_N,\ \nabla^{\mathcal V}$ and $\nabla^{\mathcal H}$ introduced in Appendix subsections~\ref{subApp:tau_j_tau_a_frames} to \ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z}. We have the following definitions: \begin{defn} \label{defn:vrtical_horizontal_Dirac} We define $\slashed D_0$ by composing \begin{equation*} \xymatrix{ C^\infty(N; \pi^*(E^0\vert_Z)) \ar[r]^-{\bar\nabla^{\mathcal V}} & C^\infty(N; {\mathcal V}^* \otimes \pi^*(E^0\vert_Z)) \ar[r]^-{\mathfrak{c}_N} & C^\infty(N; \pi^*(E^1\vert_Z)), } \end{equation*} and $\bar D^Z$ by composing \begin{equation*} \xymatrix{ C^\infty(N; \pi^*(E^0\vert_Z)) \ar[r]^-{\bar\nabla^{\mathcal H}} & C^\infty(N; {\mathcal H}^* \otimes \pi^*(E^0\vert_Z)) \ar[r]^-{\mathfrak{c}_N} & C^\infty(N; \pi^*(E^1\vert_Z)). } \end{equation*} We restrict $\slashed D_0$ to sections of the sub-bundles $\pi^*(S^i\vert_Z)$ of $\pi^*(E^i\vert_Z),\, i =0,1$ and recall the term $\bar A_r$ introduced in \eqref{eq:1st_jet_2nd_jet_of_A}. Define \[ \slashed{D}_s : C^\infty(N; \pi^*(S^0\vert_Z)) \rightarrow C^\infty(N; \pi^*(S^1\vert_Z)), \quad \xi\mapsto \slashed D_0\xi + s r \bar A_r\xi, \] for every $s\in \mathbb{R}$.
\end{defn} \begin{rem} \label{rem:properties_of_horizontal_and_vertical_operators} Given a section $\xi: N \to \pi^*(E\vert_Z)$, we use the same letter $\xi=(\xi_1,\dots, \xi_d)$ to denote its coordinates with respect to the frames $\{\sigma_\ell\}_\ell,\ \{f_k\}_k$. Also recall the lifts $\{h_A = H e_A\}_{A=j,\alpha}$ with their coframes $\{h^A = (H^*)^{-1}(e^A) \}_{A=j,\alpha}$ and the matrix expressions $c^A$ of $\mathfrak{c}_N(h^A)$ with respect to these frames, introduced in Appendix subsections~\ref{subApp:tau_j_tau_a_frames} to \ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z}: \begin{enumerate} \item \label{rem:local_symbol_expressions} Let $v = v_j h^j + v_\alpha h^\alpha\in T^*N$. The symbol maps are described in local coordinates by \begin{align*} \sigma_{\slashed D_0}: T^*N\otimes \pi^*(E^0\vert_Z) &\to \pi^* (E^1\vert_Z), \quad v\otimes \xi \mapsto v_\alpha c^\alpha \xi, \\ \sigma_{\bar D^Z}: T^*N\otimes \pi^*(E^0\vert_Z) &\to \pi^* (E^1\vert_Z), \quad v\otimes \xi \mapsto (v_j - v_\alpha x_\beta \bar\omega_{j\beta}^\alpha) c^j \xi. \end{align*} \item \label{rem:local_expression_of_slashed_D_0} For fixed $z\in Z$, the operator $\slashed D_0: C^\infty(N_z; E_z^0) \to C^\infty(N_z; E^1_z)$ is a Euclidean Dirac operator because its expression in local coordinates is \[ \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha}\xi = c^\alpha (h_\alpha \xi) = c^\alpha \partial_\alpha \xi. \] Recalling that $ r \bar A_r= x_\alpha \bar A_\alpha = x_\alpha c^\alpha M^0_\alpha$, we have the local expression \[ \slashed D_s \xi = c^\alpha( h_\alpha + s x_\alpha M_\alpha^0) \xi. \] \item \label{rem:local_expression_of_D_Z} By using \eqref{eq:pullback_bar_connection_components}, the operator $\bar D^Z$ has local expression \[ \bar D^Z \xi = \mathfrak{c}_N(h^j) \bar\nabla_{h_j}\xi = c^j( h_j + \phi^0_j)\xi. \] \item \label{rem:respectful_eigenspaces} Recall the eigenbundles $S^i_{\ell k} \to Z$ of $C^i,\, i =0,1$ in Definition~\ref{defn:IntroDefSp}. Recall also the construction of the covariant derivative $ \bar\nabla_{h_j}$ in Appendix~\ref{subApp:The_pullback_bundle_E_Z_bar_nabla_E_Z_c_Z}. By Proposition~\ref{prop:basic_extension_bar_connection_properties} \eqref{prop:basic_extension_bar_connection_properties3}, the operators $\slashed D_s$ and $\bar D^Z$ decompose into blocks, with each block being a differential operator on sections $C^\infty(N; \pi^*S^0_{\ell k}) \to C^\infty(N; \pi^*S^1_{\ell k})$, for every $\ell$ and every $k=0,\dots, n-m$. \end{enumerate} \end{rem} The following expansions are proven in \cite{bl}[Theorem 8.18, p. 93]. We include the proof for completeness: \begin{lemma} \label{lem:Dtaylorexp} Let $\eta = \tau \xi$ where $\xi : \mathcal{N}_U\rightarrow \pi^*(E^i\vert_Z)$. Along the fibers of $\mathcal{N}_U \to U$, we have the expansion \[ \tau^{-1} \tilde D \eta = \slashed D_0 \xi + \bar D^Z \xi + {\mathcal B}^0\xi + O(r^2 \partial^{\mathcal V} + r \partial^{\mathcal H} + r)\xi, \] where $r$ is the distance function from the core set. \end{lemma} \begin{proof} We use Hitchin's dot notation $\tilde c(v) = v_\cdot$ throughout the proof. By linearity, it suffices to work with $\eta=\tau\xi = f \sigma_k$, for some $f:\mathcal{N}_U \to \mathbb{R}$, and use the expression of $\tilde D$ in local frames, \[ \tilde D \eta = \tau^\alpha_\cdot \tilde \nabla_{\tau_\alpha} \eta + \tau^j_\cdot \tilde\nabla_{\tau_j} \eta.
\] Since the Clifford multiplication is $\tilde\nabla^\mathcal{N}$-parallel, we have \[ \tau^\alpha_\cdot \sigma_k = \tau( c^\alpha \sigma_k) \quad \text{and}\quad \tau^j_\cdot \sigma_k = \tau ( c^j \sigma_k). \] By \eqref{eq:on_frames_expansions}, \[ \tau_\alpha = h_\alpha + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}), \] and by \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames}, \begin{align*} \tilde\nabla_{\tau_\alpha} \eta &= (\tau_\alpha f) \sigma_k + f\theta_\alpha^0 \sigma_k \\ &=(h_\alpha f) \sigma_k + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f \sigma_k). \end{align*} Hence, for the normal part, we estimate \begin{align*} \tau^\alpha_{\cdot}\tilde \nabla_{\tau_\alpha}\eta &= ( h_\alpha f) \tau^\alpha_\cdot \sigma_k + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f \sigma_k) \\ &= (h_\alpha f) \tau c^\alpha \sigma_k +\, O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + r)(f \sigma_k) \\ &= \tau \left[ \slashed D_0 \xi +\, O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + r)\xi \right], \end{align*} where the last equality follows by Remark~\ref{rem:properties_of_horizontal_and_vertical_operators} \ref{rem:local_expression_of_slashed_D_0}. By \eqref{eq:on_frames_expansions}, \[ \tau_j = h_j + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}), \] and by \eqref{eq:Clifford_connection_one_form_local_representations_in_on_frames} and \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}, the connection in the horizontal directions expands as \begin{align*} \tilde \nabla_{\tau_j}\eta &= (\tau_j f) \sigma_k + f\theta_j^0 \sigma_k \\ &= (h_j f) \sigma_k + f{(\phi_j^0 + B_j^0)}_k^l\sigma_l + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f\sigma_k), \end{align*} where the term $B_j^0 = B_{jk}^{0l} \sigma^k\otimes \sigma_l$ is introduced in \eqref{eq:remainder_term}. It follows that \begin{align*} \tau^j_{\cdot} \tilde \nabla_{\tau_j}\eta &= (h_j f) \tau^j_\cdot\sigma_k + f{(\phi_j^0 + B_j^0)}_k^l \tau^j_\cdot\sigma_l + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f\sigma_k) \\ &= \tau[ (h_j f) c^j\sigma_k + f{(\phi_j^0 + B_j^0)}_k^l c^j \sigma_l + O(r^2\partial^{\mathcal V} + r\partial^{\mathcal H}+ r)(f\sigma_k)], \end{align*} that is, by using Remark~\ref{rem:properties_of_horizontal_and_vertical_operators} \ref{rem:local_expression_of_D_Z}, \begin{align*} \tau^j_{\cdot}\tilde\nabla_{\tau_j}\eta = \tau \left[ \bar D^Z\xi\,+ {\mathcal B}^0\xi +\, O(r^2 \partial^{\mathcal V} + r\partial^{\mathcal H} + r)\xi \right]. \end{align*} Here ${\mathcal B}^0:= c^j B_j^0\in \mathrm{Hom}(E^0\vert_Z; E^1\vert_Z)$. Adding up the two preceding expressions and using the local expressions for $\slashed D_0$ and $\bar D^Z$ in Remark~\ref{rem:properties_of_horizontal_and_vertical_operators}, we obtain the required expansion. \end{proof} In Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms}, the properties of these vertical and horizontal operators are presented. The horizontal operator satisfies a Weitzenbock formula. This formula arises naturally once one introduces the metric $g^{TN}$ and the Clifford action $\mathfrak{c}_N$ on $TN$. The connections $\bar\nabla^{\pi^*(E\vert_Z)}$ and $\bar\nabla^{TN}$ introduced in Appendix subsection~\ref{subApp:The_total_space_of_the_normal_bundle_N_to_Z} become compatible with the Riemannian metric and the Clifford multiplication by Proposition~\ref{prop:basic_extension_bar_connection_properties}.
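As a consistency check on the $O(\sqrt{s})$ behavior announced at the beginning of the section, the local expression of Remark~\ref{rem:properties_of_horizontal_and_vertical_operators} \ref{rem:local_expression_of_slashed_D_0} makes the rescaling explicit. The following is a sketch in a fixed fiber, where $h_\alpha = \partial_\alpha$, under the substitution $w_\alpha = \sqrt{s}\, x_\alpha$: \[ \slashed D_s = c^\alpha\left(\partial_{x_\alpha} + s x_\alpha M^0_\alpha\right) = \sqrt{s}\; c^\alpha\left(\partial_{w_\alpha} + w_\alpha M^0_\alpha\right), \] since $\partial_{x_\alpha} = \sqrt{s}\,\partial_{w_\alpha}$ and $s x_\alpha = \sqrt{s}\, w_\alpha$. After dividing by $\sqrt{s}$, the resulting operator in the $w$-coordinates is independent of $s$: a Euclidean Dirac operator coupled to the linear potential $w_\alpha M^0_\alpha$. This is the regularization guaranteed by Assumption~\ref{Assumption:normal_rates}.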
The following proposition calculates, in local frames, the formal adjoints of $\slashed D_s$ and $\bar D^Z$ on the total space $(N, g^{TN}, d\mathrm{vol}^N)$. \begin{proposition} \label{prop:cross_adjoint} The formal adjoints of the operators $\slashed D_s$ and $\bar D^Z$ with respect to the metric on $\pi^*(S^0\vert_Z)$ and the volume form $d\mathrm{vol}^N$ are computed in local coordinates by \begin{equation} \label{eq:local_expressions_for_formal_adjoints} \slashed D^*_s = c^\alpha (\partial_\alpha + sx_\alpha M^1_\alpha) \qquad \text{and}\qquad \bar D^{Z*} = c^j(h_j + \phi_j^1). \end{equation} \end{proposition} \begin{proof} Let $\xi_i: N \to \pi^*(S^i\vert_Z),\ i=0,1$ be smooth sections such that at least one of them is compactly supported. We define the vector field $Y\in C^\infty(N; TN)$ by its action on covectors, \[ Y: T^*N \to \mathbb{R},\ v \mapsto \langle \mathfrak{c}_N(v) \xi_1, \xi_2\rangle, \] and we decompose it into its vertical and horizontal parts $Y = Y_{\mathcal V} + Y_{\mathcal H}$. We then define the $(n-1)$-forms \[ \omega^{\mathcal V} = \iota_{Y_{\mathcal V}} d \mathrm{vol}^N \quad \text{and} \quad \omega^{\mathcal H} = \iota_{Y_{\mathcal H}} d \mathrm{vol}^N. \] We have the equations, \begin{align*} d \omega^{\mathcal V} &= ( \langle \slashed D_0 \xi_1, \xi_2\rangle - \langle \xi_1, \slashed D_0^* \xi_2 \rangle)\, d \mathrm{vol}^N, \\ d \omega^{\mathcal H} &= ( \langle \bar D^Z \xi_1, \xi_2\rangle - \langle \xi_1, \bar D^{Z*} \xi_2 \rangle)\, d \mathrm{vol}^N. \end{align*} We prove the second identity. Calculating over $N_p$, as in the proof of Proposition~\ref{prop:basic_extension_bar_connection_properties}, we have ${\mathcal L}_{Y_{\mathcal H}} d \mathrm{vol}^N = h_j(Y_j)\, d\mathrm{vol}^N$ and \[ h_j Y_j = \langle \mathfrak{c}_N(h^j) \bar\nabla_{h_j} \xi_1, \xi_2\rangle + \langle \mathfrak{c}_N(h^j) \xi_1, \bar\nabla_{h_j}\xi_2\rangle = \langle \bar D^Z \xi_1, \xi_2\rangle - \langle \xi_1, \bar D^{Z*} \xi_2\rangle. \] The first identity is proven analogously. The proof of the proposition follows. \end{proof} Combining Lemma~\ref{lem:Dtaylorexp} with the expansion in Appendix~\ref{subApp:The_expansion_of_A_A*A_nabla_A_along_Z} \eqref{eq:jet0}, we obtain the expansions for $\tilde D_s$. The same computation and the local expressions given in Proposition~\ref{prop:cross_adjoint} give analogous expansions for $\tilde D_s^*$, the $L^2(\mathcal{N})$ formal adjoint of $\tilde D_s$ with respect to the density function $d\mathrm{vol}^\mathcal{N}$. The expansions are described in the following corollary: \begin{cor} \label{cor:taylorexp} $\tilde D + s\tilde {\mathcal A}$ expands along the normal directions of the singular set $Z$ with respect to $\xi_i \in C^\infty(\mathcal{N}; \pi^*(S^i\vert_Z)),\, i=0,1$, as \begin{equation} \label{eq:taylorexp} (\tilde D+ s \tilde{\mathcal A})\tau\xi_0 = \tau \left(\slashed{D}_s+ \bar D^Z + {\mathcal B}^0 + \frac{1}{2} sr^2 \bar A_{rr}\right) \xi_0 + O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H }+r + sr^3)\tau\xi_0, \end{equation} and \begin{equation*} (\tilde D^*+ s\tilde {\mathcal A}^*)\tau\xi_1 = \tau \left(\slashed{D}_s^*+ \bar D^{Z*} + {\mathcal B}^1 + \frac{1}{2} sr^2 \bar A^*_{rr}\right) \xi_1 + O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} +r + sr^3)\tau\xi_1, \end{equation*} as $r \to 0^+$.
When $\xi_i\in C^\infty(\mathcal{N} ; \pi^*(S^i\vert_Z)^\perp)$ then, \begin{equation} \label{eq:taylorexp1} (\tilde D + s \tilde {\mathcal A}) \tau \xi_0 = \tau(\slashed D_0 + \bar D^Z + {\mathcal B}^0) \xi_0 + s\bar{\mathcal A} \tau \xi_0 + O( r^2\partial^{\mathcal V}+ r\partial^{\mathcal H} +(1+s)r)\tau\xi_0, \end{equation} and \begin{equation*} (\tilde D + s\tilde {\mathcal A})^* \tau \xi_1 = \tau(\slashed D_0^* + \bar D^{Z*} + {\mathcal B}^1) \xi_1 + s \bar{\mathcal A}^* \tau \xi_1 +O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} +(1+s)r)\tau\xi_1, \end{equation*} as $r\to 0^+$. \end{cor} \begin{rem} \begin{enumerate} \item We note that in the preceding expansions, the $L^2$-adjoint of $\tilde D_0 : C^\infty (\mathcal{N}; \tilde E^0\vert_\mathcal{N}) \to C^\infty(\mathcal{N}; \tilde E^1\vert_\mathcal{N})$ on the left-hand side, denoted by $\tilde D^*_0$, is computed with respect to the metrics on $\tilde E$ and $T^*\mathcal{N}$ and with respect to the volume form $d\mathrm{vol}^\mathcal{N}$. On the right-hand side, the adjoints $\slashed{D}_0^*$ and $\bar D^{Z*}$ are computed with respect to the pullback metric on $\pi^*E$, the metric $g^{TN}$ and the volume form $d\mathrm{vol}^N$. \end{enumerate} \end{rem} \medskip \vspace{1cm} \setcounter{equation}{0} \section{Properties of the operators \texorpdfstring{$\slashed D_s$}{} and \texorpdfstring{$\bar D^Z$}{}.} \label{sec:Properties_of_the_operators_slashed_D_s_and_D_Z.} In this section we review some well-known Weitzenbock type formulas from \cite{bl} for the operators $\slashed D_s$ and $\bar D^Z$ introduced in the preceding section. In particular, the Weitzenbock identity of the operator $\slashed D_0$ involves a harmonic oscillator. As proven in Proposition~\ref{prop:basic_extension_bar_connection_properties} \eqref{prop:basic_extension_bar_connection_properties1} of the Appendix, the connection $\bar\nabla$ on the bundle $\pi^*E \to N$ is Clifford compatible with the connection $\bar\nabla^{TN}$ (introduced in Appendix~\ref{subApp:The_total_space_of_the_normal_bundle_N_to_Z}). The latter has nontrivial torsion $T$, described explicitly in Proposition~\ref{prop:basic_extension_bar_connection_properties}. Throughout the section we will use the constructions and frames of Appendices~\ref{App:Fermi_coordinates_setup_near_the_singular_set} and \ref{App:Taylor_Expansions_in_Fermi_coordinates}. We work on a bundle chart $(N\vert_U, (x_j, x_\alpha)_{j,\alpha})$ where the normal coordinates $(U, (x_j)_j)$ are centered at $p \in U$. Recall the frames $\{e_j\}_j$, $\nabla^{TZ}$-parallel at $p\in Z$, the frames $\{e_\alpha\}_\alpha$, $\nabla^N$-parallel at $p$, and the horizontal lifts $\{h_A = {\mathcal H}(e_A)\}_{A=j,\alpha}$. By using relations \eqref{eq:local_components_for_bar_nabla_TN} and \eqref{eq:connection_comp_rates}, the connection components of $\bar\nabla^{TN}$ vanish over $N_p$ so that $\bar\nabla_{h_A} = h_A, \ A=j, \alpha$. Recall also the frames $\{\sigma_k, f_\ell\}_{k,\ell}$ trivializing $\tilde E\vert_{\mathcal{N}_U}$, introduced in Appendix~\ref{App:Taylor_Expansions_in_Fermi_coordinates}. Similarly, by using relations \eqref{eq:pullback_bar_connection_components} and \eqref{eq:comparing_tilde_nabla_orthonormal_to_bar_nabla_orthonormal}, the connection components of $\bar\nabla^{\pi^*(E^i\vert_Z)},\ i=0,1$ vanish over $N_p$ so that $\bar\nabla_{h_A} = h_A,\ A=j,\alpha$.
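Before turning to the precise statements, we record the one-dimensional model to which the identities below reduce. This is a sketch for $n-m=1$, on a summand of $S^0_\ell$ where $M^0$ acts as multiplication by a constant $\lambda_\ell>0$. There, $\slashed D_s = c^1(\partial_x + s\lambda_\ell x)$, and the Weitzenbock identity \eqref{eq:slashed_D_s_Weitzenbock} below specializes, on the $k=0$ summand, to \[ \slashed D_s^* \slashed D_s = -\partial_x^2 + s^2\lambda_\ell^2 x^2 - s\lambda_\ell, \] a harmonic oscillator shifted by its ground state energy $s\lambda_\ell$. Indeed, \[ \left(-\partial_x^2 + s^2\lambda_\ell^2 x^2\right) e^{-\tfrac{1}{2}s\lambda_\ell x^2} = s\lambda_\ell\, e^{-\tfrac{1}{2}s\lambda_\ell x^2}, \] so the Gaussian spans the kernel of $\slashed D_s^*\slashed D_s$ in this model, anticipating the kernel computation of the next section.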
Recall $Q= Q^0 \oplus Q^1$ introduced in Proposition~\ref{prop:more_properties} \eqref{rem:metric_compatibility1} and $C = C^0 \oplus C^1$ introduced in \eqref{eq:IntroDefCp}. Recall also that $M_\alpha =M_\alpha^0 \oplus M_\alpha^1$ satisfies $C = \sum_\alpha M_\alpha$ and $M_\alpha^2 = Q$, for every $\alpha$. Recall the Clifford map $\mathfrak{c}_N$, introduced in Definition~\ref{defn:definition_of_mathfrac_Clifford}, with local representation the matrices $c^A,\ A=j,\alpha$. By employing relations \eqref{eq:horizontal_lift}, \eqref{eq:basic_extension_bar_connection_properties1} and \eqref{eq:connection_comp_rates} over $N_p$, \begin{equation} \label{eq:c_N_commutator_identities} \begin{aligned} \relax[\bar\nabla_{h_j}, c^k] &= [\bar\nabla_{h_j}, \mathfrak{c}_N(h^k)] = \bar\omega_{jk}^l(p)c^l =0, \\ [\bar\nabla_{h_j}, c^\alpha] &= [\bar\nabla_{h_j}, \mathfrak{c}_N(h^\alpha)] = \bar\omega_{j\alpha}^\beta(p) c^\beta =0, \\ [\bar\nabla_{h_j} , x_\alpha c^\alpha ] &= [ h_j, x_\alpha ] c^\alpha+ x_\alpha[\bar\nabla_{h_j}, c^\alpha]= -x_\beta\bar\omega_{j\beta}^\alpha(p) c^\alpha =0, \\ [\bar\nabla_{h_\alpha}, c^A]&= [\bar\nabla_{h_\alpha}, \mathfrak{c}_N(h^A)] = \begin{cases} \bar\omega_{\alpha j}^k(p)c^k, & \quad \text{if $A=j$},\\ \bar\omega_{\alpha \beta}^\gamma(p) c^\gamma,&\quad \text{if $A=\beta$} \end{cases} =0. \end{aligned} \end{equation} Finally, recall the expressions in local frames of the operators $\slashed D_0$ and $\bar D^Z$ and their symbols, described in Remark~\ref{rem:properties_of_horizontal_and_vertical_operators}. In the aforementioned frames, above the fiber $N_p$, they become \begin{equation} \label{eq:local_expressions_over_N_p} \begin{aligned} \sigma_{\slashed D_0}(v)\xi &= v_\alpha c^\alpha\xi, \quad \slashed D_0 \xi = \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha}\xi = c^\alpha \partial_\alpha\xi, \\ \sigma_{\bar D^Z}(v)\xi &= v_j c^j\xi, \quad \bar D^Z\xi = \mathfrak{c}_N(h^j)\bar\nabla_{h_j}\xi = c^jh_j \xi, \end{aligned} \end{equation} where $v= v_jh^j+v_\alpha h^\alpha\in T_p^*N$ and $\xi\in C^\infty(N; \pi^*(E^0\vert_Z))$. \begin{proposition} \label{prop:Weitzenbock_identities_and_cross_terms} \begin{enumerate} \item For every $\xi \in C^\infty(N; \pi^* (S^0\vert_Z))$, \begin{equation} \label{eq:slashed_D_s_Weitzenbock} \slashed D_s^* \slashed D_s \xi = (- \Delta + s^2 r^2 Q^0 - sC^0)\xi, \end{equation} where $\Delta = \sum_\alpha \partial^2_\alpha$ is the Euclidean Laplacian in the fibers $N_z,\, z \in Z$, and \begin{equation} \label{eq:bar_D_z_Weitzenbock} \bar D^{Z*}\bar D^Z\xi = \bar\nabla^{{\mathcal H}*}\bar\nabla^{\mathcal H} \xi - (c_T\bar\nabla) \xi + F\xi , \end{equation} where $F$ is the Clifford contraction of the curvature of the bundle $(S^0\vert_Z, \bar\nabla) \to Z$ and $c_T \bar\nabla$ is the Clifford contraction $\frac{1}{2}\mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) T_{jk}^\alpha \bar\nabla_{h_\alpha}$. \item On sections of the bundle $\pi^*(E^0\vert_Z)\to N$, the term \begin{equation} \label{eq:cross_terms3} (\slashed D_0 + \bar D^Z)^* \circ \bar{\mathcal A} + \bar{\mathcal A}^* \circ(\slashed D_0 + \bar D^Z), \end{equation} is a zeroth order operator. \item For every $\xi \in C^\infty(N; \pi^*(S^{0+}_\ell))$, \begin{equation} \label{eq:cross_terms} (\slashed D_s^* \bar D^Z + \bar D^{Z*} \slashed D_s )\xi=- s x_\alpha \mathfrak{c}_N(h^\alpha) \mathfrak{c}_N( \pi^*d\lambda_\ell)\xi.
\end{equation} \item On sections of the bundle $\pi^*(E^0\vert_Z)\to N$, the term \begin{equation} \label{eq:cross_terms1} \slashed D_s^* \bar D^Z + \bar D^{Z*} \slashed D_s, \end{equation} is a zeroth order operator with coefficients of order $s\,O(r)$. In particular \begin{equation} \label{eq:cross_terms2} \slashed D^*_0 \bar D^Z + \bar D^{Z*} \slashed D_0 \equiv 0. \end{equation} \item For every $v\in C^\infty(N;TN)$ and every $\xi\in C^\infty(N;\pi^*E^1)$, \begin{equation} \label{eq:cross_terms25} [\slashed D_s^*, \bar\nabla_v] \xi = - s[\bar\nabla_v, r \bar A_r^*] \xi. \end{equation} \end{enumerate} Analogous facts hold for the operators $ \slashed D_s \slashed D_s^*,\ \bar D^Z \bar D^{Z*}$ and $\slashed D_s \bar D^{Z*} + \bar D^Z \slashed D_s^*$. \end{proposition} \begin{proof} Most of the calculations involved in the proof of the proposition are carried out over the fiber $N_p$ of the total space $N$, where expressions \eqref{eq:local_expressions_over_N_p} hold. To prove \eqref{eq:slashed_D_s_Weitzenbock} we use \eqref{eq:local_expressions_over_N_p} over $N_p$ and calculate \begin{align*} \slashed D_s^* \slashed D_s \xi &= \sum_{\alpha, \beta}(c^\alpha\partial_\alpha + s x_\alpha \bar A^*_\alpha) (c^\beta \partial_\beta + s x_\beta \bar A_\beta) \xi \\ &=\sum_{\alpha, \beta} ( c^\alpha c^\beta \partial^2_{\alpha \beta} + sx_\alpha(c^\beta \bar A_\alpha + \bar A^*_\alpha c^\beta) \partial_\beta + s^2 x_\alpha x_\beta \bar A^*_\alpha \bar A_\beta ) \xi + s \sum_\alpha c^\alpha \bar A_\alpha \xi \\ &= \frac{1}{2}\sum_{\alpha, \beta} [(c^\alpha c^\beta + c^\beta c^\alpha) \partial^2_{\alpha \beta} + s^2 x_\alpha x_\beta (\bar A^*_\alpha \bar A_\beta +\bar A_\beta^* \bar A_\alpha)] \xi - s\sum_\alpha M^0_\alpha \xi, \end{align*} where in the last line we used equation \eqref{eq:dcond} to eliminate the cross terms. By Proposition~\ref{prop:more_properties} \ref{rem:metric_compatibility1}, we have $M_{\alpha,\beta}^0 = \frac{1}{2}( \bar A^*_\alpha \bar A_\beta +\bar A_\beta^* \bar A_\alpha) = \delta_{\alpha \beta}Q^0$ and by the Clifford relations \eqref{eq:Clifford_relations} we obtain $c^\alpha c^\beta + c^\beta c^\alpha = -2 \delta_{\alpha \beta}$. Identity \eqref{eq:slashed_D_s_Weitzenbock} follows. Identity \eqref{eq:bar_D_z_Weitzenbock} is treated by the usual calculation for Weitzenbock type formulas. We use the local expressions \eqref{eq:local_expressions_over_N_p} of $\bar D^Z$ and $\bar D^{Z*}$ over $N_p$, together with \eqref{eq:c_N_commutator_identities}, and calculate: \begin{align*} \bar D^{Z*} (\bar D^Z\xi) &= \mathfrak{c}_N(h^j) \bar\nabla_{h_j} (\mathfrak{c}_N(h^k) \bar\nabla_{h_k}\xi) \\ &= \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) (\bar\nabla_{h_j}\bar\nabla_{h_k} \xi) \\ &= - \bar\nabla_{h_k}\bar\nabla_{h_k} \xi + \sum_{j<k} \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) \left(\bar\nabla_{h_j}\bar\nabla_{h_k} -\bar\nabla_{h_k}\bar\nabla_{h_j}\right) \xi \\ &= \bar\nabla^{{\mathcal H}*}\bar\nabla^{\mathcal H} \xi + \sum_{j<k} \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) \left( \mathrm{Hess}(h_j, h_k) - \mathrm{Hess}(h_k, h_j)\right) \xi \\ &= \bar\nabla^{{\mathcal H}*}\bar\nabla^{\mathcal H} \xi + \sum_{j<k} \mathfrak{c}_N(h^j) \mathfrak{c}_N(h^k) \left(F^{\pi^*S}(h_j, h_k)\xi - \bar\nabla_{T(h_j, h_k)}\xi \right). \end{align*} In the last equality we used \eqref{eq:basic_extension_bar_connection_properties5}.
For the proof of \eqref{eq:cross_terms3} we calculate, over $N_p$, the symbols of the operators involved, \begin{align*} [\sigma_{\slashed D_0 + \bar D^Z} (v)\circ \bar {\mathcal A}+ \bar{\mathcal A}^* \circ\sigma_{\slashed D_0 + \bar D^Z} (v)] \xi &= [ (v_\alpha c^\alpha + v_j c^j) \circ \bar {\mathcal A} + \bar{\mathcal A}^* \circ (v_\alpha c^\alpha + v_j c^j)]\xi, \end{align*} where the last expression vanishes by equation \eqref{eq:cond}. Hence the associated differential operator is of zeroth order. To prove \eqref{eq:cross_terms2} we use the Clifford relations and \eqref{eq:c_N_commutator_identities} over $N_p$ to calculate, \begin{align*} (\slashed D_0^* \bar D^Z + \bar D^{Z*} \slashed D_0)\xi &= \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha}(\mathfrak{c}_N(h^j) \bar\nabla_{h_j} \xi) + \mathfrak{c}_N(h^j) \bar\nabla_{h_j}( \mathfrak{c}_N(h^\alpha) \bar\nabla_{h_\alpha} \xi) \\ &=\mathfrak{c}_N(h^\alpha)\mathfrak{c}_N(h^j) \left(\bar\nabla_{h_\alpha} \bar\nabla_{h_j} - \bar\nabla_{h_j} \bar\nabla_{h_\alpha} \right) \xi \\ &= \mathfrak{c}_N(h^\alpha)\mathfrak{c}_N(h^j) \left(\mathrm{Hess}(h_\alpha, h_j) - \mathrm{Hess}(h_j, h_\alpha) \right) \xi. \end{align*} But the last term vanishes, by the symmetry of the Hessian in Proposition~\ref{prop:basic_extension_bar_connection_properties} \eqref{prop:basic_extension_bar_connection_properties5}. To prove \eqref{eq:cross_terms25} we calculate over $N_p$, \begin{align*} [\slashed D_s^*, \bar\nabla_v] \xi &= \mathfrak{c}_N(h^\alpha) \left(\bar\nabla_{h_\alpha} \bar\nabla_v - \bar\nabla_v \bar\nabla_{h_\alpha} \right) \xi - s[\bar\nabla_v, r \bar A_r^*] \xi \\ &= \mathfrak{c}_N(h^\alpha) \left(\mathrm{Hess}(h_\alpha, v) - \mathrm{Hess}(v, h_\alpha) \right) \xi - s[\bar\nabla_v, r \bar A_r^*] \xi \\ &= - s[\bar\nabla_v, r \bar A_r^*] \xi, \end{align*} where in the intermediate equality we used again the symmetry of the Hessian. To prove \eqref{eq:cross_terms1}, we use the local expression in frames, $r \bar A_r = x_\alpha \bar A_\alpha$, and we employ \eqref{eq:horizontal_lift} to obtain $[h_j , x_\alpha ]= - x_\beta\bar\omega_{j \beta}^\alpha $. We then calculate at any point of $N_U$, \begin{align*} (\bar D^Z)^* (r \bar A_r \xi) + r \bar A^*_r (\bar D^Z \xi) &= c^j (h_j + \phi_j^1)( x_\alpha \bar A_\alpha\xi) + x_\alpha \bar A^*_\alpha ( c^j(h_j + \phi_j^0) \xi ) \\ &= c^j [h_j + \phi_j , x_\alpha\bar A_\alpha] \xi + x_\alpha ( c^j \bar A_\alpha + \bar A_\alpha^* c^j) (h_j + \phi_j^0) \xi \\ &= x_\alpha( c^j [e_j + \phi_j , \bar A_\alpha] - \bar\omega_{j\alpha}^\beta \bar A_\beta ) \xi, \end{align*} where in the last equality $c^j \bar A_\alpha + \bar A_\alpha^* c^j =0$ by equation \eqref{eq:cond}. It follows that the term is a zeroth order operator with coefficients of order $O(r)$, as $r \to 0$. To prove \eqref{eq:cross_terms}, we write $\bar A_\alpha = c^\alpha M^0_\alpha$ and note that $M_\alpha^0 \xi = \lambda_\ell \xi$ for sections $\xi$ of $\pi^*(S^{0+}_\ell)$. Hence, continuing the preceding calculation in this case over $N_p$, \begin{align*} (\bar D^Z)^* (r \bar A_r \xi) + r \bar A^*_r (\bar D^Z \xi) &= c^j [\bar\nabla_{h_j} , x_\alpha\bar A_\alpha]\xi \\ &= c^j[\bar\nabla_{h_j} , x_\alpha c^\alpha M^0_\alpha] \xi \\ &= x_\alpha c^j c^\alpha [h_j , \lambda_\ell ] \xi, \quad (\text{by \eqref{eq:c_N_commutator_identities}}) \\ &=- x_\alpha c^\alpha (e_j(\lambda_\ell) c^j)\xi \\ &= - x_\alpha \mathfrak{c}_N(h^\alpha) \mathfrak{c}_N( \pi^* d\lambda_\ell) \xi.
\end{align*} Both the left- and the right-hand sides of the preceding equations are independent of coordinates and frames, and therefore the equations hold in any reference frame and are true everywhere on the total space $N$. The proof is now complete. \end{proof} Using Proposition~\ref{prop:properties_of_compatible_subspaces} we decompose $S\vert_Z$ into eigenbundles $S_\ell$ of $Q$ and further, in \eqref{eq:decompositions_of_S_ell}, into eigenbundles $S_{\ell k}$ of $C$ with eigenvalue $(n-m- 2k)\lambda_\ell$. Fixing $z\in Z$, we calculate explicitly the eigenvalues of $\slashed D^2_s\vert_z: L^2(N_z; (S_{\ell k})_z) \to L^2(N_z; (S_{\ell k})_z)$. By equation \eqref{eq:slashed_D_s_Weitzenbock}, \[ ( - \Delta + s^2 r^2 Q - sC )\xi = ( - \Delta + s^2 r^2\lambda_\ell^2 - s(n-m-2k)\lambda_\ell )\xi = \lambda \xi. \] Changing variables $\{w_\alpha =(s\lambda_\ell)^{1/2} x_\alpha\}_\alpha$ and setting $\tilde \xi(w) = \xi(x)$, we obtain \[ -\Delta \tilde \xi + r^2 \tilde \xi = \left[(n-m - 2k) + \frac{\lambda}{s\lambda_\ell} \right] \tilde\xi. \] It follows by \cite{p}[Th. 2.2.2] that \[ (n-m - 2k) + \frac{\lambda}{s\lambda_\ell} \in \{ 2\lvert a\rvert + n-m:\ a \in \mathbb{Z}^{n-m}_{\geq 0}\}, \] so that the spectrum of $\slashed D^2_s\vert_z$ as an unbounded operator on $L^2(N_z; (S_{\ell})_z)$ is given by, \begin{equation} \label{eq:spectrum_of_Delta_s} \{2s\lambda_\ell (\lvert a\rvert + k):\ a \in \mathbb{Z}^{n-m}_{\geq 0},\ k\in \{0,\dots ,n-m\}\}, \end{equation} for every $\ell$. In particular, $\lambda=0$ is an eigenvalue if and only if $a=0$ and $k=0$, in which case the kernel is \[ \ker (\slashed D_s\vert_z)= \bigoplus_\ell \varphi_{s\ell}\cdot (S^+_\ell)_z, \] where, \[ \varphi_{s\ell}:N \to \mathbb{R}, \quad \varphi_{s\ell}(v)= (s\lambda_\ell)^{\tfrac{n-m}{4}} e^{- \tfrac{1}{2} s \lambda_\ell \lvert v\rvert^2} . \] Consider the map $N \to Z\times [0, +\infty),\ v \mapsto (\pi(v), \lvert v\rvert)$. We then see that $\varphi_{s\ell}$ is the pullback under the preceding map of $(z,r) \mapsto (s\lambda_\ell)^{\tfrac{n-m}{4}} e^{- \tfrac{1}{2}s\lambda_\ell r^2} $. \medskip \vspace{1cm} \section{The harmonic oscillator in normal directions} \label{sec:harmonic_oscillator} In this section we obtain estimates for the derivatives in the normal directions of sections $\xi: N \to \pi^*S^0$. We begin with the following definition: \begin{defn} \label{defn:eighted_norms_spaces} Given $\xi\in C_c^\infty(N)$ and $k, l\in \mathbb{N}_0$, we define the norms \begin{align*} \|\xi\|_{0,2,k,0}^2:&= \int_N (1+ r^2)^{k+1}\lvert\xi\rvert^2\, d\mathrm{vol}^N, \\ \|\xi\|_{1,2,k,l}^2 :&=\|\xi\|_{0,2,k,0}^2 + \int_{N} \left[(1+r^2)^k \lvert\bar\nabla^{\mathcal V} \xi\rvert^2 + l(1+r^2)^{l-1} \lvert\bar\nabla^{\mathcal H} \xi\rvert^2\right]\, d\mathrm{vol}^N, \end{align*} and define the spaces $L^2_k(N)$ and $W^{1,2}_{k, l}(N)$ by completion of $C_c^\infty(N)$ in each of these respective norms. When $k=-1$ we set $L^2_{-1}(N) := L^2(N)$. \end{defn} Note that, with these definitions, the space $W^{1,2}_{k,0}(N)$ consists of sections admitting weak derivatives only in the normal directions.
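To illustrate the weighted norms, one can verify directly that the fiberwise kernel elements of the preceding section lie in every weighted space. The following computation is a sketch for $\xi = \varphi_{s\ell}\cdot \eta$ with $\eta\in L^2(Z; S^{0+}_\ell)$: \[ \|\varphi_{s\ell}\cdot \eta\|_{0,2,k,0}^2 = \int_Z \lvert\eta(z)\rvert^2 \left( \int_{N_z} (1+r^2)^{k+1}\, (s\lambda_\ell)^{\tfrac{n-m}{2}}\, e^{-s\lambda_\ell r^2}\, dv\right)\, dz < \infty, \] since the inner integral is a finite Gaussian moment, bounded uniformly in $z\in Z$ as long as $\lambda_\ell$ is bounded away from zero, as in the setting of Proposition~\ref{prop:vertical_cross} below. The same Gaussian decay absorbs the polynomial weights on the derivative terms, which substantiates the inclusions $\ker\slashed D_s \subset \bigcap_{k} W^{1,2}_{k,0}(N)$ recorded in Remark~\ref{rem:observations_on_W_1,2_k_0} below.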
By Theorem~\ref{thm:approximation_theorem_for_weighted_spaces}, \begin{align*} L^2_k(N) &:= \{ \xi\in L^2(N): \| \xi\|_{0,2,k,0}<\infty\}, \\ W^{1,2}_{k,l}(N) &:= \{ \xi\in W^{1,2}(N): \| \xi\|_{1,2,k,l}<\infty\}, \quad l \in \mathbb{N}, \\ W^{1,2}_{k,0}(N) &:= \{ \xi\in L^2(N) : \bar\nabla^{\mathcal V}\xi\in L^2(N), \ \text{and}\ \| \xi\|_{1,2,k,0}<\infty\}, \end{align*} for every $k\in \mathbb{N}_0$, and therefore we can use approximations by test functions with compact support. We have embeddings $W^{1,2}_{k+1, l}(N) \subset W^{1,2}_{k, l}(N)$. Furthermore, $L^2_k(N),\ W^{1,2}_{k,0}(N)$ and $W^{1,2}_{k,l}(N),\ l \leq k+2$ are $C^\infty(Z;\mathbb{R})$-modules, and multiplication by a fixed function is a continuous map on these spaces. Finally, the operator $\slashed D_s: L^2(N) \to L^2(N)$ is densely defined with domain $W^{1,2}_{0,0}(N)$ and satisfies $\slashed D_s( W^{1,2}_{k+1,0}(N)) \subset L^2_k(N)$, for every $k\geq -1$. We now summarize some basic properties following from well-known facts about the harmonic oscillator: \begin{prop} \label{prop:vertical_cross} \begin{enumerate} \item Let $k\geq 0$ and set $C_0 = \min_{\ell , Z} \lambda_\ell$ and $C_1 = \max_{\ell, Z} \lambda_\ell$. There exists a constant $C = C( n-m, C_0, C_1)>0 $ so that, \begin{equation} \label{eq:elliptic_estimate1_k>=0} s\|r^{k+1}\xi\|_{L^2(N)} + \| r^k \bar\nabla^{\mathcal V} \xi \|_{L^2(N)} \leq C\left(\sum_{u=0}^k s^{-\tfrac{u}{2}} \| r^{k-u} \slashed D_s \xi\|_{L^2(N)} + s^{\tfrac{1-k}{2}}\|\xi \|_{L^2(N)}\right), \end{equation} for every $s>0$ and every $\xi \in W^{1,2}_{k,0}(N;\pi^*(S^0\vert_Z))$. \item The operator, \[ \slashed D_s : L^2(N; \pi^*( S^0\vert_Z))\to L^2(N; \pi^* (S^1\vert_Z)), \] is closed and $\ker \slashed D_s \subset L^2(N; \pi^*( S^0\vert_Z))$ is a closed subspace. \item There exists $s_0>0$ so that, when $s>s_0$, the kernel of $\slashed D_s$ is given explicitly by, \begin{equation} \label{eq:L_2_kernel_D_s_calculation} \ker \slashed D_s = \bigoplus_\ell \varphi_{s\ell}\cdot L^2 (Z; S^{0+}_\ell), \end{equation} and \begin{equation} \label{eq:kernel_D_s_calculation} \ker \slashed D_s \cap W^{1,2}(N; \pi^*(S^0\vert_Z)) = \bigoplus_\ell \varphi_{s\ell} \cdot W^{1,2}(Z; S^{0+}_\ell). \end{equation} \item For every $s> 0$ and every $\xi\in (\ker \slashed D_s)^{\perp_{L^2}}$, we have the spectral estimate \begin{equation} \label{eq:spectral_estimate} 2s C_0\|\xi\|_{L^2(N)}^2 \leq \|\slashed D_s\xi\|_{L^2(N)}^2. \end{equation} \item $\slashed D_s$ has closed range and we have the relations, \begin{equation} \label{eq:Fredholm_alternative} (\ker \slashed D_s)^{\perp_{L^2}} = \mathrm{Im\,}\slashed D_s^* \quad \text{and} \quad (\ker \slashed D_s^*)^{\perp_{L^2}} = \mathrm{Im\,}\slashed D_s. \end{equation} \end{enumerate} \end{prop} \begin{proof} We will use Hitchin's dot notation $v_\cdot$ throughout the proof instead of $\mathfrak{c}_N(v)$. Recall the decomposition \eqref{eq:decompositions_of_S_ell} into eigenbundles of $Q^0$ and further into eigenbundles of $C^0$. We work with compactly supported smooth sections $\xi: N \to \pi^*S_{\ell t}^0$. For these we have \begin{align} \label{eq:N_Weitzenboock_2} \slashed D_s^* \slashed D_s\xi = (- \Delta + s^2 r^2 \lambda_\ell^2 - s(n-m-2t)\lambda_\ell )\xi.
\end{align} Multiplying \eqref{eq:N_Weitzenboock_2} by $r^{2k}$, taking inner products with $\xi$ and integrating by parts, we obtain, \begin{align*} \| r^k \slashed D_s\xi \|_2^2 + 2k\langle \slashed D_s \xi, r^{2k-1} dr_\cdot \xi\rangle_2 =& \sum_\alpha(\| r^k \partial_\alpha \xi\|_2^2 + 2k\langle\partial_\alpha \xi, x_\alpha r^{2k-2} \xi\rangle_2) \\ &+ s^2\|r^{k+1} \lambda_\ell \xi \|_2^2 + s(2t-n+m)\langle r^{2k} \lambda_\ell\xi,\xi\rangle_2. \end{align*} By applying Peter-Paul inequality to the first couple of cross terms and absorbing alike terms we obtain an estimate, \begin{equation} \label{eq:basic_ineq_for_S} s^2 C_0^2\|r^{k+1} \xi\|_2^2 + \|r^k \bar\nabla^{\mathcal V} \xi\|_2^2 \leq 6(C_1+1)(n-m)(\|r^k\slashed D_s \xi\|_2^2 + \|(k + \sqrt{s} r) r^{k-1} \xi\|_2^2), \end{equation} for every $k\in \mathbb{N}_0$, every $t\in \{0, \dots, n-m\}$ and every $s>0$. Estimate \eqref{eq:elliptic_estimate1_k>=0} is then proven, by induction on $k$, using \eqref{eq:basic_ineq_for_S} in inductive step. The fact that $\slashed D_s: L^2(N; \pi^*(S^0\vert_Z)) \to L^2(N; \pi^*(S^0\vert_Z))$ is closed, follows from estimate \eqref{eq:elliptic_estimate1_k>=0} for $k=0$. Also $\ker \slashed D_s$ is the kernel of a closed operator and therefore, a closed subspace of $L^2$. We have already shown that, \[ \ker \slashed D_s\cap C^\infty (N; \pi^* (S^0\vert_Z)) = \bigoplus_\ell \varphi_{s\ell} \cdot C^\infty(Z; S^{0+}_\ell). \] Given $\xi = \varphi_{s\ell}\cdot \eta$, with $\eta \in C^\infty(Z; S^{0+}_\ell)$ and using the substitution $t = \sqrt{s \lambda_\ell} r $, the $L^2$-norms are computed explicitly to give, \begin{equation*} \|\xi\|_{L^2(N)}^2 = \left\lvert S^{n-m-1}\right\rvert\left(\int_0^\infty t^{n-m-1} e^{-t^2}\, dr\right) \|\eta\|_{L^2(Z)}^2 = \frac{1}{2} \left\lvert S^{n-m-1} \right\rvert \Gamma \left(\frac{n-m}{2} \right) \|\eta\|_{L^2(Z)}^2, \end{equation*} that is, \begin{equation} \label{eq:Gaussian_evaluation} \|\xi\|_{L^2(N)}^2 = \pi^{\tfrac{n-m }{2}} \|\eta\|_{L^2(Z)}^2. \end{equation} Since $\bar\nabla \xi = d \varphi_{s\ell } \otimes \eta + \varphi_{s\ell} \bar\nabla \eta$, a computation using estimate \eqref{eq:elliptic_estimate1_k>=0} with $k=0, 1$ shows that there exist constants $C_2, C_3$ with \[ C_2 \| \eta\|_{W^{1,2}(Z)} \leq \|\xi\|_{W^{1,2}(N)} \leq \sqrt{s} C_3 \|\eta\|_{W^{1,2}(Z)}. \] Since $\ker \slashed D_s$ is the $L^2$ closure of $ \bigoplus_\ell\varphi_{s\ell} \cdot C^\infty(Z; S^{0+}_\ell)$, equations \eqref{eq:L_2_kernel_D_s_calculation} and \eqref{eq:kernel_D_s_calculation} follow. Recall that, by the calculation of the spectrum of $\slashed D^2_s\vert_z$ in \eqref{eq:spectrum_of_Delta_s}, the constant $2s\min_{\ell , Z} \lambda_\ell$ is a lower bound. Given now $\xi \in C^\infty(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$, Lemma~\ref{lemma:restriction_to_fibers_of_N} implies that the restriction on the fiber $N_z,\ \xi_z \in (\ker\slashed D_s\vert_z)^{\perp_{L^2}}$. Using the spectral decomposition of $\Delta_s\vert_z : L^2(N_z) \to L^2(N_z)$ in \cite{p}[Th. 2.2.2], we have, \[ 2s\min_{\ell , Z} \lambda_\ell \|\xi\|_2^2(z) \leq \langle \Delta_s \xi, \xi\rangle_2(z) = \| \slashed D_s \xi\|_2^2(z), \] for every $z\in Z$. Hence integrating over $Z$, \[ 2s\min_{\ell , Z} \lambda_\ell \|\xi\|_2^2 \leq \|\slashed D_s\xi\|_2^2. \] By completion, this inequality holds for every $\xi\in W^{1,2}_{1,0}(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$. This proves \eqref{eq:spectral_estimate}. 
Finally, estimate \eqref{eq:spectral_estimate} shows that $\slashed D_s$ has closed range. Relations \eqref{eq:Fredholm_alternative} then follow as general results for closed operators admitting adjoints with closed range (see \cite{k}[Ch. IV, Th. 5.13, pp.234]). \end{proof} \begin{rem} \label{rem:observations_on_W_1,2_k_0} \begin{enumerate} \item We have inclusions, \label{rem:kernel_infinite_inclusions} \[ \ker\slashed D_s \subset \bigcap_{\ell \in \mathbb{N}_0} W^{1,2}_{\ell,0}(N). \] \item \label{rem:decompositions} Because of Remark \ref{rem:observations_on_W_1,2_k_0} \eqref{rem:kernel_infinite_inclusions}, we have decompositions \begin{align*} L^2_k(N) &= (L^2_k(N)\cap\ker \slashed D_s) \oplus [L^2_k(N)\cap(\ker \slashed D_s)^{\perp_{L^2}}], \quad k\geq -1 \\ W^{1,2}_{k,0}(N) &= (W^{1,2}_{k,0}(N)\cap\ker \slashed D_s) \oplus [W^{1,2}_{k,0}(N)\cap(\ker \slashed D_s)^{\perp_{L^2}}], \quad k \geq 0. \end{align*} Given any $\xi \in L^2(N)$ we decompose it into, \[ \xi = \xi^0 + \xi^1, \quad \xi^0\in \ker \slashed D_s \quad \text{and} \quad \xi^1 \in (\ker \slashed D_s)^{\perp_{L^2}}. \] Henceforth we denote these projections with these notations. \item \label{rem:module_structures} The subspace $\ker \slashed D_s$ is a $C^\infty(Z; \mathbb{R})$-module. The same holds true for $W^{1,2}_{k,\ell}(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$, for every $k, \ell\geq 0$. \item Analogue estimates and decompositions hold for $\slashed D_s^*$. \end{enumerate} \end{rem} We used the following lemma: \begin{lemma} \label{lemma:restriction_to_fibers_of_N} Let $\xi \in C^\infty(N; \pi^*(S^0\vert_Z)) \cap(\ker\slashed D_s)^{\perp_{L^2}}$. Then the restriction on the fiber $\xi\vert_{N_p}=: \xi_p$ is a section of $(\ker\slashed D_s\vert_p)^{\perp_{L^2}}$, for every $p\in Z$. \end{lemma} \begin{proof} Let $\varrho_\varepsilon(z) = \varrho( d_Z(z,p)/ \varepsilon)$ where $d_Z(z,p)$ is the distance of $z$ from $p$ in $Z$ and $\varrho$ is a cutoff function with $\rho(0)=1$ and $\mathrm{supp\,} \varrho \subset [0,1]$. For every $\eta \in \ker\slashed D_s$ we have $\varrho_\varepsilon \cdot \eta \in\ker\slashed D_s$. By Fubini's Theorem, \bigskip \begin{align*} 0 &= \frac{1}{\mathrm{vol}(B(p, \varepsilon)\cap Z)} \int_N \langle \xi, \varrho_\varepsilon \cdot \eta\rangle\, d\mathrm{vol}^N \\ &= \frac{1}{\mathrm{vol}(B(p, \varepsilon)\cap Z)} \int_{B(p, \varepsilon)\cap Z}\varrho_\varepsilon(z)\cdot \int_{N_z} \langle \xi, \eta\rangle\, dv dz \to \int_{N_p} \langle \xi, \eta\rangle\, dv, \end{align*} as $\varepsilon \to 0^+$. Hence for every $p\in Z$, we obtain $\xi\vert_{N_p} \in (\ker(\slashed D_s)\vert_p)^{\perp_{L^2}}$. \end{proof} As a corollary we prove the following proposition with a bootstrap argument on $k\in \mathbb{N}_0$ for $\slashed D_s$ on $W^{1,2}_{k,0}$-spaces: \begin{prop} \label{prop:bootstrap_on_k} Assume $ s>0$ and $k\geq 0$. Then \[ \slashed D_s : W^{1,2}_{k,0}(N)\cap (\ker\slashed D_s)^{\perp_{L^2}} \to L^2_{k-1}(N)\cap (\ker\slashed D_s^*)^{\perp_{L^2}}, \] defines an isomorphism of $C^\infty(Z;\mathbb{R})$-modules, for every $k \geq 0$. The inverse operator $G_s$ is defined and is bounded. More precisely, there exist constant $C = C(n-m, C_0, C_1, k)>0$ so that every $\xi\in W^{1,2}_{k,0}(N)\cap (\ker\slashed D_s)^{\perp_{L^2}} $ obeys an estimate, \begin{equation} \label{eq:bootstrap_estimate} s\|r^{k+1}\xi\|_2 + \| r^k \bar\nabla^{\mathcal V} \xi \|_2 \leq C\sum_{u=0}^k s^{-\tfrac{u}{2}} \| r^{k-u} \slashed D_s \xi\|_2. 
\end{equation} \end{prop} \begin{proof} The operator $\slashed D_s : W^{1,2}_{k,0}(N)\cap (\ker\slashed D_s)^{\perp_{L^2}} \to L^2_{k-1}(N)\cap (\ker\slashed D_s^*)^{\perp_{L^2}}$ is injective and we have to prove that it is surjective. Let $\eta \in L^2_k(N)\cap (\ker\slashed D_s^*)^{\perp_{L^2}}$. By Proposition~\ref{prop:vertical_cross} \eqref{eq:Fredholm_alternative}, there exist unique $\xi \in W^{1,2}_{0,0}(N)\cap (\ker \slashed D_s)^{\perp_{L^2}}$ with $\slashed D_s\xi = \eta$. We prove that $\xi$ is of class $W^{1,2}_{k,0}$. Let $\rho : [0, +\infty) \to [0,1]$ with $\mathrm{supp\,} \rho \subset [0,2]$ and $ \rho^{-1}(\{1\}) = [0,1]$ and $\lvert d\rho\rvert \leq 1$. Define sequence $\rho_j(r) = \rho(r/ j)$ so that $\rho_j \to 1$ uniformly on compact subsets of $N$ and $\lvert d\rho_j\rvert \leq 1/j$. Then $\xi_j = \rho_j \cdot \xi \in \bigcap_\ell W^{1,2}_{\ell,0}(N)$ and by Dominated convergence, $\xi_j \to \xi$ and $\bar\nabla_\alpha \xi_j \to \bar\nabla_\alpha \xi$ in $ L^2(N)$ for every $\alpha$. Setting $\eta_j = \slashed D_s \xi_j$, by Dominated convergence, we also have that $\eta_j \to \eta$ in $L^2(N)$. \hfill \textit{Claim:} $\{\xi_j\}_j$ is a Cauchy sequence in $W^{1,2}_{k,0}$-norm. \proof[Proof of claim] By \eqref{eq:elliptic_estimate1_k>=0} with $k=0$, \[ s\|r(\xi_j- \xi)\|_2 \leq C(\|\eta_j - \eta\|_2 + \sqrt{s}\|\xi_j- \xi\|_2), \] so that $\xi_j \to \xi$ in $W^{1,2}_{0,0}(N)$. We work using induction. Suppose that $\{\xi_j\}_j$ is Cauchy in $W^{1,2}_{\ell-1,0}(N)$ for some $1\leq\ell\leq k$. Then $\xi \in W^{1,2}_{\ell-1,0}(N)$ and $\{r^\ell \eta_j\}_j$ is Cauchy in $L^2(N)$, since \[ \|r^\ell (\eta_j - \eta)\|_2 \leq \|r^\ell d \rho_{j\cdot} \xi\|_2 + \| (1- \rho_j) r^\ell \eta\|_2 \leq \frac{1}{j} \|r^\ell \xi\|_2 + \| (1- \rho_j) r^\ell \eta\|_2 \to 0. \] The convergence holds since $r^\ell \xi \in L^2(N)$ by inductive assumption and $r^\ell \eta \in L^2(N)$ by our initial assumption, so that Dominated convergence applies to the last term. But then, by \eqref{eq:elliptic_estimate1_k>=0} with $k=\ell$, \begin{equation*} s\|r^{\ell+1}(\xi_j- \xi_i)\|_2 + \|r^\ell \bar\nabla^{\mathcal V} (\xi_j- \xi_i)\|_2 \leq C\left(\sum_{u=0}^\ell s^{-\tfrac{u}{2}}\|r^{\ell - u}(\eta_j - \eta_i)\|_2 + s^{\tfrac{1-\ell}{2}}\|\xi_j- \xi_i\|_2\right), \end{equation*} which proves that $\{\xi_j\}_j$ is Cauchy in $W^{1,2}_{\ell,0}(N)$ and therefore $\xi \in W^{1,2}_{\ell,0}(N)$. It follows that $\{\xi_j\}_j$ is Cauchy in $W^{1,2}_{k,0}(N)$ and therefore $\xi \in W^{1,2}_{k,0}(N)$. Estimate \eqref{eq:bootstrap_estimate} follows from an application of \eqref{eq:elliptic_estimate1_k>=0}, followed by an application of \eqref{eq:spectral_estimate}. The boundedness of the inverse operator follows from the Open Mapping Theorem. \end{proof} \medskip \vspace{1cm} \setcounter{equation}{0} \section{The operator \texorpdfstring{$\bar D^Z$}{} and the horizontal derivatives} \label{sec:The_operator_bar_D_Z_and_the_horizontal_derivatives} Throughout the paragraph we will work with function spaces of sections of the bundle $\pi^*(S^0\vert_Z)\to N$, unless if we specify otherwise. The domain of $\bar D^Z : L^2(N) \to L^2(N)$ is the space $W^{1,2}_{1,1}(N)$ so that $\bar D^Z$ is densely defined. Also $\bar D^Z(W^{1,2}_{k, l}(N)) \subset L^2_{\min\{k-2, l-2\}}(N)$, for every $k\geq 1,\ l\geq 1$. Fix coordinates $ (\pi^{-1}(U), (x_j, x_\alpha)_{j,\alpha})$, orthonormal frames $\{e_j\}_j$ with their lifts $\{h_j\}_j$ and orthonormal frame $\{\sigma_k\}_k$ on $S^0\vert_U$. 
Recall operator $D^Z_+$ from Definition~\ref{defn:Dirac_operator_component}. \begin{prop} \begin{enumerate} \item There exist a constant $C>0$ so that, for every $s>0$ and every $\xi\in W^{1,2}_{1,1}(N)$, \begin{equation} \label{eq:elliptic_estimate2} \|\bar \nabla^{\mathcal H} \xi\|_{L^2(N)}^2 \leq \|\bar D^Z \xi\|_{L^2(N)}^2 + C( s^{-1}\|\slashed D_s \xi\|_{L^2(N)}^2 + \|r\slashed D_s \xi\|_{L^2(N)}^2 + \|\xi \|_{L^2(N)}). \end{equation} \item The operators $\bar D^Z$ and $D^Z_+$ are linked in the following way; for every $\xi = \sum_\ell \xi_\ell,\ \xi_\ell \in C^\infty(Z; S^{0+}_\ell)$, the operator $\xi \mapsto \xi^0 = \sum_\ell \varphi_{s \ell} \cdot \xi_\ell \in C^\infty(N; \pi^* S^{0+}_\ell)$ is defined in Remark~\ref{rem:observations_on_W_1,2_k_0} \eqref{rem:decompositions}. We have \begin{equation} \label{eq:br_D_Z_versus_D_Z_+} \bar D^Z \xi^0 = \sum_\ell \mathfrak{c}_N(\pi^* d \ln \varphi_{s\ell}) P_\ell^{0+} \xi^0 + (D^Z_+ \xi)^0, \end{equation} where $P^{0+}_\ell S^0 \to S^{0+}_\ell$ is the orthogonal projection. \item For every $k\geq 0$ \begin{equation} \label{eq:aux_estimate_2} \| r^k\bar\nabla^{\mathcal H}(\xi^0) \|_{L^2(N)} \leq Cs^{-\tfrac{k}{2}} (\|[(D^Z_++{\mathcal B}_{0+}^Z)\xi]^0\|_{L^2(N)} + \|\xi^0\|_{L^2(N)}) \end{equation} \end{enumerate} \end{prop} \begin{proof} Let $\xi\in C^\infty_c(\pi^{-1}(U);\pi^*(S^0\vert_Z))$. Applying $L^2$-norms with $\xi$ in \eqref{eq:bar_D_z_Weitzenbock} and integrating by parts, \[ \|\bar\nabla^{\mathcal H}\xi\|_{L^2(N)}^2 \leq \|\bar D^Z\xi\|_{L^2(N)}^2 - \langle c_T \bar\nabla \xi, \xi\rangle_{L^2(N)} + C\|\xi\|_{L^2(N)}^2. \] However by the expression of the torsion $T$ in Proposition~\ref{prop:basic_extension_bar_connection_properties}, we have that \[ \lvert\langle c_T \bar\nabla \xi, \xi\rangle_{L^2(N)}\rvert \leq C\|r \bar\nabla^{\mathcal V} \xi\|_{L^2(N)} \|\xi\|_{L^2(N)}. \] Applying Cauchy-Schwartz followed by estimate \eqref{eq:elliptic_estimate1_k>=0}, we have \[ \|\bar\nabla^{\mathcal H}\xi\|_{L^2(N)}^2 \leq \|\bar D^Z \xi\|_{L^2(N)}^2+ C( s^{-1}\|\slashed D_s \xi\|_{L^2(N)}^2 + \|r\slashed D_s \xi\|_{L^2(N)}^2 + \|\xi \|_{L^2(N)}), \] as required. Proving \eqref{eq:br_D_Z_versus_D_Z_+} is a straightforward calculation: \begin{align*} \bar D^Z \xi^0 &= \mathfrak{c}_N(h^j)( h_j(\varphi_{s\ell}) \cdot \xi_\ell + \varphi_{s\ell} \cdot \bar\nabla_{h_j} \xi_\ell) \\ &= e_j(\varphi_{s\ell}) \cdot c_j\xi_\ell + \varphi_{s\ell} \cdot c_j\bar\nabla_{e_j} \xi_\ell, \qquad (\text{since $\pi_* h_j = e_j$}) \\ &= \mathfrak{c}_N(\pi^* d \ln \varphi_{s\ell}) P_\ell^{0+} \xi^0 + (D^Z_+ \xi)^0. \end{align*} Also, for every $k\geq 0$, \begin{align*} \| r^k\bar\nabla^{\mathcal H}(\xi^0) \|_{L^2(N)}&\leq C\sum_\ell(\|r^k \lvert d_Z\varphi_{s\ell}\rvert \xi_\ell\|_{L^2(N)} + \|r^k\varphi_{s\ell} \bar\nabla\xi_\ell\|_{L^2(N)}) \\ &\leq C\|r^k(1+s r^2)\xi^0\|_{L^2(N)} + C\|r^k (\bar\nabla\xi)^0\|_{L^2(N)}) \\ &\leq Cs^{-\tfrac{k}{2}}( \|\xi^0\|_{L^2(N)} + \|(\bar\nabla \xi)^0 \|_{L^2(N)})\qquad (\text{by \eqref{eq:elliptic_estimate1_k>=0}}) \\ &= C\pi^{\tfrac{n-m}{4}}s^{-\tfrac{k}{2}} \|\xi\|_{W^{1,2}(Z)},\qquad (\text{by \eqref{eq:Gaussian_evaluation}}) \\ &\leq C\pi^{\tfrac{n-m}{4}}s^{-\tfrac{k}{2}}(\|(D^Z_+ + {\mathcal B}^Z_{0+})\xi\|_{L^2(Z)} +\|\xi\|_{L^2(Z)}) \\ &= Cs^{-\tfrac{k}{2}}(\|[(D^Z_+ + {\mathcal B}^Z_{0+})\xi]^0\|_{L^2(N)} + \|\xi^0\|_{L^2(N)}),\qquad (\text{by \eqref{eq:Gaussian_evaluation}}) \end{align*} where in the fifth row we used an elliptic estimate for $D^Z_+ + {\mathcal B}^Z_{0+}$. The proof is complete. 
\end{proof} We now prove an analogue of Proposition~\ref{prop:bootstrap_on_k} but involving both the vertical and horizontal derivatives: \begin{prop} \label{prop:horizontal_regularity} Let $s> 0$ and $\xi \in W^{1,2}_{0,0}(N) \cap (\ker \slashed D_s)^{\perp_{L^2}}$ so that $\slashed D_s \xi\in W^{1,2}_{k-1,k}(N)$ for some $k\geq 1$. Then $\xi \in W^{1,2}_{k,k+1}(N)$ and there are estimates: \begin{equation} \label{eq:horizontal_regularity_1} \|\bar\nabla^{\mathcal H} \xi \|_{L^2(N)} \leq Cs^{-1/2}\left(\|\bar\nabla^{\mathcal H} ( \slashed D_s \xi)\|_{L^2(N)} + \|\slashed D_s\xi\|_{L^2(N)}\right). \end{equation} \end{prop} \begin{proof} By Remark~\ref{rem:observations_on_W_1,2_k_0} \eqref{rem:module_structures}, the spaces $W^{1,2}_{k-1,k}(N)\cap (\ker \slashed D_s)^{\perp_{L^2}},\ k\geq 1$ are $C^\infty(Z; \mathbb{R})$-modules. Hence by multiplying $\xi$ with members of a partition of unity of $Z$ subordinated in a trivializations of $N \to Z$, we may assume without loss of generality that $\xi$ is supported in a single chart $\pi^{-1}(U)$ of $N$ with coordinates $(x_j, x_\alpha)_{j, \alpha}$. First we prove that $\xi$ posses weak derivatives whose weighted Sobolev class we compute. Let $\eta\in C^\infty(N)$ be a test function and decompose $\eta= \eta^0 + \eta^1$. Set $\xi_1 = G_s \eta^1$, where $G_s$ is the Green's operator of $\slashed D^*_s$, defined in Proposition~\ref{prop:bootstrap_on_k}. Then $\xi_1,\ \eta^1$ are both of class $W^{1,2}_{\ell,0}(N)$, for every $\ell\geq 0$. For fixed $j$, \[ \langle \bar\nabla_{h_j} \eta, \xi\rangle_{L^2(N)} = \langle \bar\nabla_{h_j} \eta^0, \xi\rangle_{L^2(N)} + \langle \bar\nabla_{h_j} \eta^1, \xi\rangle_{L^2(N)}, \] and we proceed to evaluate the two terms on the right hand side. Write $\eta^0 = \sum_\ell \varphi_{s\ell} \eta_\ell$ for some $\eta_\ell \in C^\infty(Z; S^{0+}_\ell)$. Then \[ \bar\nabla_{h_j} \eta^0 = \sum_\ell \left[ \left( \frac{n-m}{2} - s \lambda_\ell r^2\right) \frac{e_j (\lambda_\ell)}{2\lambda_\ell} \varphi_{s\ell} \eta_\ell + \varphi_{s\ell}\bar\nabla_{e_j} \eta_\ell \right], \] where the last term of the right hand side is again an element of $\ker\slashed D_s$. Since $\xi \in (\ker\slashed D_s)^{\perp_{L^2}}$, we obtain \[ \langle \bar\nabla_{h_j} \eta^0, \xi\rangle_{L^2(N)} = \langle \eta^0, K_j\xi\rangle_{L^2(N)}, \] where \[ K_j\xi := \sum_\ell \left[ \frac{n-m}{2} - s \lambda_\ell r^2\right] \frac{e_j(\lambda_\ell)}{2\lambda_\ell} P_\ell^{0+}\xi. \] On the other hand, using the identity \eqref{eq:cross_terms25} that is \[ \slashed D_s^* (\bar\nabla_{h_j} \xi_1) = \bar\nabla_{h_j} ( \slashed D_s^* \xi_1) - sr[\bar\nabla_{h_j}, \bar A_r^*] \xi_1, \] and integrating by parts, we obtain, \begin{align*} \langle \bar\nabla_{h_j} \eta^1, \xi\rangle_{L^2(N)} &= \langle \bar\nabla_{h_j} \slashed D_s^* \xi_1, \xi\rangle_{L^2(N)} \\ &= \langle \slashed D_s^* \bar\nabla_{h_j} \xi_1, \xi\rangle_{L^2(N)} - s \langle r [\bar\nabla_{h_j}, \bar A_r^*] \xi_1, \xi \rangle_{L^2(N)} \\ &= - \langle \eta^1, G_s^* (\bar\nabla_{h_j} \slashed D_s \xi)^1\rangle_{L^2(N)} + \langle \eta^1, G_s^* (O(sr) \xi)^1\rangle_{L^2(N)}. 
\end{align*} Therefore, the weak derivative in the direction $h_j$ is defined as a section $\pi^{-1}(U) \to \pi^*S^{0+}$, by \begin{equation} \label{eq:weak_derivatives} \bar\nabla_{h_j} \xi = (K_j\xi)^0 + G_s^* (\bar\nabla_{h_j} \slashed D_s \xi + O(sr)\xi)^1 \end{equation} and the global horizontal derivative is defined as $\bar\nabla^{\mathcal H} \xi = h^j\otimes \bar\nabla_{h_j} \xi \in C^\infty(N; {\mathcal H}\otimes \pi^*S^{0+})$ and is supported again in $\pi^{-1}(U)$. Combining assumption $\slashed D_s \xi \in W^{1,2}_{k-1, k}(N) \subset L^2_{k-1}(N)$ with Proposition~\ref{prop:bootstrap_on_k}, we have that $\xi \in W^{1,2}_{k,0}(N)$. By expression \eqref{eq:weak_derivatives} we have $\bar\nabla^{\mathcal H} \xi \in W^{1,2}_{k-1,0}(N) \subset L^2_{k-1}(N)$ so that $\xi\in W_{k,k+1}^{1,2}(N)$ as required. We next prove estimate \eqref{eq:horizontal_regularity_1}. We start by obtaining estimates on the first term of \eqref{eq:weak_derivatives}: let $\eta\in \ker\slashed D_s \cap C^\infty(N)$, with $\|\eta\|_{L^2(N)} \leq1$ and $\eta = \sum_\ell\varphi_{s\ell} \cdot \eta_\ell$ for some $\eta_\ell \in C^\infty (Z; S^+_\ell)$. Then \begin{align*} \lvert\langle K_j\xi , \eta\rangle_{L^2(N)}\rvert &= \lvert\langle \xi , K_j\eta\rangle_{L^2(N)}\rvert \\ &\leq C \|\xi\|_{L^2(N)} \|(1+ sr^2)\eta\|_{L^2(N)} \\ &\leq C\|\xi\|_{L^2(N)} \|\eta\|_{L^2(N)},\qquad (\text{by using \eqref{eq:elliptic_estimate1_k>=0} on $\eta$ with $k=1$}) \\ &\leq C s^{-1/2}\|\slashed D_s\xi\|_{L^2(N)}. \qquad (\text{by \eqref{eq:spectral_estimate} applied on $\xi$}) \end{align*} Therefore the $L^2$-norm of the projection to $\ker \slashed D_s$, estimates, \begin{equation} \label{eq:projection1_estimate} \|(K_j\xi)^0 \|_{L^2(N)} = \sup_{\eta\in \ker \slashed D_s, \|\eta\|_2 \leq 1} \lvert\langle K_j\xi , \eta\rangle_{L^2(N)}\rvert \leq Cs^{-1/2}\|\slashed D_s\xi\|_2. \end{equation} To estimate the $L^2$-norm of the second term of the right hand side of \eqref{eq:weak_derivatives}, we use \eqref{eq:spectral_estimate} so that, \[ \| G_s^* (\bar\nabla_{h_j} \slashed D_s \xi + O( sr)\xi)\|_{L^2(N)} \leq Cs^{-1/2}\|\bar\nabla_{h_j} ( \slashed D_s \xi) + O( sr)\xi\|_{L^2(N)}. \] By using \eqref{eq:bootstrap_estimate} on the second term of the right hand side of the preceding inequality, we obtain \[ \| G_s^* (\bar\nabla_{h_j} \slashed D_s \xi + O( sr)\xi)\|_{L^2(N)} \leq Cs^{-1/2}\left(\| \bar\nabla_{h_j} ( \slashed D_s\xi)\|_{L^2(N)} + \|\slashed D_s\xi\|_{L^2(N)}\right). \] Combining the preceding estimate with \eqref{eq:projection1_estimate} we obtain \eqref{eq:horizontal_regularity_1}. \end{proof} \medskip \vspace{1cm} \setcounter{equation}{0} \section{Separation of the spectrum} \label{sec:Separation_of_the_spectrum} Our main goal for this section is to prove the Spectrum Separation Theorem stated in the introduction. For that purpose we will use the bundles $S^+_\ell$ introduced in Definition~\ref{defn:IntroDefSp} and define a space of approximate solutions to the equation $D_s\xi = 0$. The space of approximate solutions is linearly isomorphic to a certain ``thickening'' of $\ker D_s$ by ``low'' eigenspaces of $D_s^*D_s$ for large $s$. The same result will apply to $\ker D_s^*$. The ``thickening'' will occur by a phenomenon of separation of the spectrum of $D_s^*D_s$ into low and high eigenvalues for large $s$. 
The following lemma will be enough for our purposes: \begin{lemma} \label{lemma:spectrum} Let $L : H\rightarrow H'$ be a densely defined closed operator with between the Hilbert spaces $H, H'$ so that $L^*L$ has descrete spectrum. Denote by $E_\mu$ the $\mu$- eigenspace of $L^*L$. Suppose $V$ is a $k$-dimensional subspace of $H$ so that \[ \lvert Lv\rvert^2 \leq C_1 \lvert v\rvert^2 \qquad\mbox{and}\qquad \lvert Lw\rvert^2 \geq C_2 \lvert w\rvert^2 \] for every $v\in V$ and every $w\in V^\perp$. Then there exist consecutive eigenvalues $\mu_1, \mu_2$ of $L^*L$ so that $\mu_1 \leq C_1, \,\mu_2 \geq C_2$. If in addition $4C_1<C_2$ then the \textbf{orthogonal projection} \[ P : \bigoplus_{\mu\leq \mu_1} E_\mu \rightarrow V, \] is an isomorphism. \end{lemma} \begin{proof} Let $\mu_1$ be the $k$-th eigenvalue of the self-adjoint operator $L^*L$, counted with multiplicity, and $\mu_2$ be the next eigenvalue. Denote by $G_k(H)$ the set of $k$- dimensional subspaces of $H$ and set $W = \oplus_{\mu\leq \mu_1} E_\mu$, also $k$-dimensional. By the Rayleigh quotients we have \[ \mu_2 = \max_{S\in G_k(H)}\left\{ \inf_{v\in S^\perp, \lvert v\rvert=1}\lvert Lv\rvert^2\right\} \geq \inf_{v\in V^\perp, \lvert v\rvert=1}\lvert Lv\rvert^2 \geq C_2, \] and also \[ \mu_1 = \max_{S\in G_{k-1}(H)}\left\{ \inf_{v\in S^\perp, \lvert v\rvert=1}\lvert Lv\rvert^2\right\}. \] But for any $k-1$-dimensional subspace $S\subset H$ there exist a vector $v_S\in S^\perp\cap V$ of unit length so that \[ \mu_1 \leq \max_{S\in G_{k-1}(H)}\left\{\lvert Lv_S\rvert^2\right\}\leq C_1, \] as required. Finally, given $w\in W$ write $w = v_0 + v_1$ with $v_0 = P(w)$ and $v_1\in V^\perp$. Then \begin{equation*} C_2\lvert w - P(w)\rvert^2 = C_2\lvert v_1\rvert^2 \leq \lvert Lv_1\rvert^2\leq 2(\lvert Lw\rvert^2 + \lvert Lv_0\rvert^2) \leq 2(\mu_1 + C_1)\lvert w\rvert^2 \leq 4C_1 \lvert w\rvert^2, \end{equation*} and so $\lvert 1_W - P\rvert^2 \leq 4\tfrac{C_1}{C_2}$. If additionally $4C_1<C_2$ and $P(w)=0$ for some $w\neq 0$ then \[ \lvert w\rvert^2 = \lvert w-P(w)\rvert^2 \leq \lvert 1_W - P\rvert^2\lvert w\rvert^2 < \lvert w\rvert^2, \] a contradiction. Hence $P$ is injective and by dimension count an isomorphism. \end{proof} We have to construct an appropriate space $V$ that will be viewed as the subspace of approximate solutions to the problem $L\xi = D_s \xi =0$. This is achieved by the following splicing construction on a fixed $m$ - dimensional component $Z=Z_\ell$ of the critical set $Z_{\mathcal A}$. Recall the subspaces $S^{+i} = \bigoplus_\ell S^{+i}_\ell$ with projections $P^{i+} = \sum_\ell P^{i+}_\ell$ and $P^{i-} :S^i \vert_Z \to (S^{i+})^\perp \subset S^i,\ i=0,1$. Recall also the operators $D^Z_+$ and ${\mathcal B}^Z_+$ introduced in Definition~\ref{defn:Dirac_operator_component}. Fix a cutoff function $\rho_\varepsilon^Z: X \to [0,1]$, supported in $B_Z(2\varepsilon) = \{ p\in X: r_Z(p) < 2\varepsilon\}$, where $r_Z$ is the distance function of a component $Z$ and taking the value $1$ in $B_Z(\varepsilon)$, so that $\lvert d\rho_\varepsilon\rvert \leq 1/\varepsilon$. Define also the bundle map ${\mathcal P}_s: \pi^*(S^{0+}\vert_Z) \to \pi^*(S^1\vert_Z)$, by \begin{align*} {\mathcal P}_s:= \sum_\ell c_Z(d_Z (\ln\varphi_{s \ell})) P_\ell^{0+} + P^1\circ \left({\mathcal B}^0+ \frac{1}{2} s r^2 \bar A_{rr} \right) - {\mathcal B}^Z_{0+}, \end{align*} where ${\mathcal B}^Z_{0+}$ is introduced in Definition~\ref{defn:Dirac_operator_component}. 
\begin{defn} \label{def:space_of_approx_solutions} Given $s>0,\ \ell$ and section $\xi\in \ker (D^Z_+ + {\mathcal B}^Z_{0+}),\ \xi = \sum_\ell \xi_\ell$, set \[ \xi^0 := \sum_\ell \varphi_{s\ell} \cdot\xi_\ell, \] and define $\xi^1 \in (\ker\slashed D_s)^{\perp_{L^2}}$ and $\xi^2\in C^\infty(N; \pi^*(S^0\vert_Z)^\perp)$, by solving \begin{align} \label{eq:balansing_condition_1} \slashed D_s \xi^1 + {\mathcal P}_s\xi^0 &=0, \\ \label{eq:balansing_condition_2} s\bar{\mathcal A} \xi^2 + \left(1_{E^1\vert_Z} - P^1\right)\circ \left( {\mathcal B}^0 + \frac{1}{2}s r^2 \bar A_{rr}\right) \xi^0 &=0. \end{align} We define an approximate low eigenvector of $\tilde D_s$, by \[ \xi_s := \xi^0 + \xi^1 + \xi^2 \in C^\infty(N; \pi^*(S^0\vert_Z)). \] Given $\varepsilon>0$, we define the spaces of approximate low eigenvectors of $D_s$ by, \begin{align*} V_{s,\varepsilon}^Z &:= \left\{ (\rho_\varepsilon^Z\cdot{\mathcal I}\circ\tau)(\xi_s \circ \exp^{-1}) : \xi \in \ker (D^Z_+ + {\mathcal B}^Z_+) \right \} \\ V_{s, \varepsilon} &:= \bigoplus_{ Z\in \mathrm{Comp}(Z_{\mathcal A})} V_{s,\varepsilon}^Z, \end{align*} where ${\mathcal I}$ is introduced in \eqref{eq:exp_diffeomorphism} and $\tau: C^\infty (\mathcal{N}_\varepsilon; \pi^*(E\vert_Z)) \to C^\infty( \mathcal{N}; \tilde E\vert_{\mathcal{N})_\varepsilon}$ is the parallel transport map with respect to the $\tilde\nabla^{\tilde E}$ introduced in \eqref{eq:parallel_transport_map}. We have analogue definitions of approximate low eigenvector for $D_s^*$ and we denote the subspace of approximate solutions by $W^Z_{s, \varepsilon}$. \end{defn} Note that by expansion \eqref{eq:taylorexp} elements of $W^Z_{s, \varepsilon}$ will be associated to sections in the kernel of $D^{Z*}_+ + {\mathcal B}^Z_{1+}$. Elements of $V_{s, \varepsilon}$ are smooth sections of the bundle $E^0\to X$ that are compactly supported on the tubular neighborhood $B(Z, 2\varepsilon) \subset X$ of $Z$. Let \[ V_{s, \varepsilon}^\perp :=V_{s, \varepsilon}^{\perp_{L^2}}\cap W^{1,2}(X; E). \] \begin{theorem} \label{Th:hvalue} If $D_s$ satisfies Assumptions~\ref{Assumption:transversality1}-\ref{Assumption:stable_degenerations} then there exist $\varepsilon_0>0$ and $s_0>0$ and constants $C_i = C_i(s_0)>0, \,i =1,2$ so that for every $0<\varepsilon<\varepsilon_0$ and every $s>s_0$, \begin{enumerate}[(a)] \item for every $\eta \in V_{s, \varepsilon}$, \begin{align} \label{eq:est1} \|D_s\eta\|_{L^2(X)} \leq C_1 s^{-1/2}\|\eta\|_{L^2(X)}, \end{align} \item or every $\eta \in V_{s, \varepsilon}^\perp$, \begin{align} \label{eq:est2} \| D_s\eta\|_{L^2(X)} \geq C_2\|\eta\|_{L^2(X)}. \end{align} \end{enumerate} Since the $L^2$-adjoint operator $D_s^*$satisfies the same assumptions, the constants $s_0,\, C_1,\, C_2$ can be chosen to satisfy simultaneously the analogue estimates for $D_s^*$ in place of $D_s$ and the space $W^Z_{s, \varepsilon}$ in place of $V^Z_{s, \varepsilon}$. \end{theorem} \begin{rem} \label{rem:comments_on_Th_hvalue} \begin{enumerate} \item It suffices to prove estimate \eqref{eq:est2} for an $L^2$-dense subspace of $V_{s, \varepsilon}^{\perp_{L^2}}$. Also notice that $V_{s, \varepsilon}^{Z_1}$ is $L^2$-perpendicular to $V_{s, \varepsilon}^{Z_2}$ for $Z_1 \neq Z_2$ since their corresponding sections have disjoint supports. \item($L^2$-norms using different densities) Estimate \eqref{eq:est1} refers to sections supported in tubular neighborhoods of the components of the critical set. 
Also in Lemma~\ref{lemma:local_implies_global}, we prove that we only have to show estimate \eqref{eq:est2} for sections supported in these tubular neighborhoods. Let $\xi\in C^\infty_c(\mathcal{N}; \pi^*(E\vert_Z))$ and $\eta = ({\mathcal I} \circ \tau) \xi \circ \exp^{-1}$. We have that, \[ \int_X \lvert\eta\rvert^2 \, d\mathrm{vol}^X = \int_{\mathcal{N}_\varepsilon} \lvert\xi\rvert^2\, d\mathrm{vol}^{\mathcal{N}_\varepsilon} \] and that, \[ \int_X \lvert D_s \eta\rvert^2 \, d\mathrm{vol}^X = \int_{\mathcal{N}_\varepsilon} \lvert\tilde D_s (\tau \xi)\rvert^2 \, d\mathrm{vol}^{\mathcal{N}_\varepsilon}. \] By estimate \eqref{eq:density_comparison}, these norms are equivalent to the corresponding $L^2(N)$ norms in the total space $N$ with volume form $d\mathrm{vol}^N$, of the normal bundles of the components of the critical set $Z_{\mathcal A}$. The later norms are the ones used in the consequent analytic proofs. \end{enumerate} \end{rem} The rest of this section is devoted to proving estimate \eqref{eq:est1}. The proof of estimate \eqref{eq:est2} will be given in the next section. The following lemma establishes existence and uniqueness of equations \eqref{eq:balansing_condition_1} and \eqref{eq:balansing_condition_2} and provide estimates of the solutions: \begin{lemma} \label{lem:balancing_condition} For every section $\xi\in \ker (D^Z_+ + {\mathcal B}^Z_{0+})$, there exist unique sections $\psi \in (\ker \slashed D_s^*)^{\perp_{L^2}}\cap \left(\bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*(S^0\vert_Z))\right)$ and $\zeta \in \bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*(S^0\vert_Z)^\perp)$ satisfying equations \eqref{eq:balansing_condition_1} and \eqref{eq:balansing_condition_2} respectively. Moreover $\psi$ and $\zeta$ obey the following estimates, \begin{align} \label{eq:Gaussian_estimate_1} \|\psi\|_{L^2(N)} &\leq C s^{-1/2}\|\xi^0\|_{L^2(N)}, \\ \label{eq:Gaussian_estimate_2} s\|r^{k+1}\psi\|_{L^2(N)} + \|r^k \bar\nabla^{\mathcal V} \psi\|_{L^2(N)} &\leq C s^{-\tfrac{k}{2}}\|\xi^0\|_{L^2(N)}, \, k\geq 0, \\ \label{eq:Gaussian_estimate_3} \|\bar\nabla^{\mathcal H} \psi\|_{{L^2(N)}} &\leq C s^{-1/2}\|\xi^0\|_{L^2(N)}, \\ \label{eq:Gaussian_estimate_4} s\|r^{k+1}\zeta \|_{L^2(N)}+ (k+1)\|r^k \bar\nabla^{\mathcal V} \zeta\|_{L^2(N)} &\leq s^{-\tfrac{k+1}{2}}\|\xi^0\|_{L^2(N)}, \, k\geq -1, \\ \label{eq:Gaussian_estimate_5} \|r^k \bar\nabla^{\mathcal H} \zeta\|_{L^2(N)} &\leq Cs^{-1 - \tfrac{k}{2}}\| \xi^0\|_{L^2(N)}, \, k\geq 0. \end{align} \end{lemma} \begin{proof} First we claim that \[ {\mathcal P}_s\xi^0 \in (\ker \slashed D_s^*)^{\perp_{L^2}} \cap W^{1,2}_{k,0}(N),\ \forall k\in \mathbb{N}_0. \] By direct calculation, for every $\ell$, \begin{equation} \label{eq:H_derivatives_of_kernel_sections} d_Z(\ln\varphi_{s\ell}) = \left(\frac{n-m}{2} - s \lambda_\ell r^2\right)\frac{d_Z\lambda_\ell}{2\lambda_\ell}. \end{equation} This is a Hermite polynomial in $r$ that is $L^2$ perpendicular to the $\ker \slashed D_s$: an arbitrary section in $\ker \slashed D_s$ is a section of the form $\phi_{s\ell'}\cdot \theta$, with $\theta\in L^2(Z ; S_{\ell'}^+)$. By applying change of coordinates $\{y_\alpha = \sqrt{s \lambda_\ell} x_\alpha\}_\alpha$, \begin{equation*} \langle c_Z(d_Z(\ln \varphi_{s\ell}))P^{1+}_\ell \xi^0 , \varphi_{s\ell'}\cdot \theta\rangle_{L^2(N)} = \delta_{\ell , \ell'} \int_N \left(\frac{n-m}{2} - \lvert y\rvert^2\right)\frac{1}{2\lambda_\ell} e^{-\lvert y\rvert^2} \langle c_Z(d \lambda_\ell) \xi, \theta\rangle\, d\mathrm{vol}^N. 
\end{equation*} The integral is zero when $\ell = \ell'$, since its polar part is \[ \int_0^\infty \left(\frac{n-m}{2} - r^2\right) r^{n-m-1} e^{-r^2} dr = \left(\frac{n-m}{2}\right) \Gamma\left(\frac{n-m}{2}\right) - \Gamma\left(\frac{n-m}{2} +1\right) =0. \] This proves the claim for the first term in the expression of ${\mathcal P}_s\xi^0$. For the rest of the terms we calculate the the $L^2$-projections to the subspace $\ker \slashed D_s$ as \begin{equation*} \langle {\mathcal B}^0 \xi^0 , \varphi_{s \ell'} \cdot \theta \rangle_{L^2(N)} = s^{\tfrac{n-m}{2}}\sum_\ell \int_N (\lambda_\ell \lambda_{\ell'})^{\tfrac{n-m}{4}} \exp\left(-\frac{1}{2}s(\lambda_\ell + \lambda_{\ell'})\lvert x\rvert^2\right) \langle {\mathcal B}^0 \xi_\ell, \theta\rangle\, d\mathrm{vol}^N. \end{equation*} By applying change of coordinates $\{y_\alpha = [\tfrac{1}{2}s (\lambda_\ell+ \lambda_{\ell'})]^{1/2} x_\alpha\}_\alpha$ and then calculating the polar part of the resulting integral, we obtain the constant $C_{\ell, \ell'}$ of Definition~\ref{defn:Dirac_operator_component}. Similarly, using the expression $r^2 \bar A_{rr} = x_\alpha x_\beta \bar A_{\alpha \beta}$, \begin{equation*} s\langle r^2 \bar A_{rr} \xi^0 , \varphi_{s \ell'} \cdot \theta \rangle_{L^2(N)} = s^{\tfrac{n-m}{2}+1}\sum_{\ell, \alpha, \beta} \int_N (\lambda_\ell \lambda_{\ell'})^{\tfrac{n-m}{4}} x_\alpha x_\beta \exp\left(-\frac{1}{2}s(\lambda_\ell + \lambda_{\ell'})\lvert x\rvert^2\right) \langle \bar A_{\alpha \beta} \xi_\ell, \theta\rangle\, d\mathrm{vol}^N. \end{equation*} The resulting integral is zero when $\alpha \neq \beta$ and when $\alpha = \beta$ we apply the same change of variables as with the preceding integral and we write the resulting integral as product of $n-m-1$ one dimensional Gaussian integrals, one for each normal coordinate different than the $x_\alpha$-coordinate and an integral that is $\frac{1}{2}\Gamma\left(\frac{3}{2}\right)$. The final result of the integration along the normal directions is $\frac{C_{\ell, \ell'}}{\lambda_\ell + \lambda_{\ell'}}$. Hence the $L^2$-projection of the term $\left({\mathcal B}^0 + \frac{1}{2} sr^2 \bar A_{rr}\right)\xi^0$ onto $\ker \slashed D_s$ is ${\mathcal B}^Z_{0+} \xi^0$. This finishes the proof of the claim. The claim together with Proposition~\ref{prop:bootstrap_on_k} prove that equation \eqref{eq:balansing_condition_1} has a unique solution $\psi$. Since the right hand side, is a Hermite polynomial in $r$, it belongs to the space $\bigcap_{k,l} W^{1,2}_{k,l}(N)$. By Propositions~\ref{prop:horizontal_regularity}, we have that $\psi\in \bigcap_{k,l} W^{1,2}_{k,l}(N)$ and by \eqref{eq:spectral_estimate} and \eqref{eq:elliptic_estimate1_k>=0} with $k=1$ we obtain, \begin{align} \sqrt{s}\|\psi\|_{L^2(N)} &\leq C\|\slashed D_s \psi\|_{L^2(N)} \nonumber \\ \|\slashed D_s \psi\|_{L^2(N)} &= C \| {\mathcal P}_s \xi^0\|_{L^2(N)} \leq C\| (1+ sr^2) \xi^0\|_{L^2(N)} \leq C\|\xi^0\|_{L^2(N)},\label{eq:aux_estimate_1} \end{align} so that estimate \eqref{eq:Gaussian_estimate_1} is proved. Estimate \eqref{eq:Gaussian_estimate_2} follows by combining \eqref{eq:bootstrap_estimate} with \eqref{eq:aux_estimate_1}. 
Finally \begin{align*} \|\bar\nabla^{\mathcal H}\psi\|_{L^2(N)} &\leq Cs^{-1/2}\left(\|\bar\nabla^{\mathcal H} ({\mathcal P}_s \xi^0)\|_{L^2(N)} + \| \slashed D_s \psi\|_{L^2(N)}\right), \quad \text{by \eqref{eq:horizontal_regularity_1},} \\ &\leq Cs^{-1/2}\left(\| (1+sr^2)(\lvert\xi^0\rvert+\lvert \bar\nabla^{\mathcal H} (\xi^0)\rvert)\|_{L^2(N)} + \| \slashed D_s \psi\|_{L^2(N)}\right) \\ &\leq Cs^{-1/2}\|\xi^0\|_{L^2(N)}, \qquad (\text{by \eqref{eq:elliptic_estimate1_k>=0}, \eqref{eq:aux_estimate_1}and \eqref{eq:aux_estimate_2}}) \end{align*} so that estimate \eqref{eq:Gaussian_estimate_3} is proved. Equation \eqref{eq:balansing_condition_2} is solvable in $\zeta$ since ${\mathcal A}$ is invertible in the subspaces where the equation is defined. Moreover we have poinwise estimates \begin{align*} sr^{k+1}\lvert\zeta\rvert &\leq (1+ sr^2) r^{k+1} \lvert\xi^0\rvert \\ r^k \lvert\bar\nabla^{\mathcal V} \zeta\rvert &\leq Cr^{k+1} (1+sr^2) \lvert\xi^0\rvert \\ sr^k\lvert\bar\nabla^{\mathcal H} \zeta\rvert&\leq r^k(1 + sr^2)[\lvert\xi^0\rvert + \lvert\bar\nabla^{\mathcal H} (\xi^0)\rvert]. \end{align*} Applying $L^2$-norms and using \eqref{eq:elliptic_estimate1_k>=0} and \eqref{eq:aux_estimate_2}, we obtain estimates \eqref{eq:Gaussian_estimate_4} and \eqref{eq:Gaussian_estimate_5}. \end{proof} We now proceed to the \begin{proof}[Proof of estimate \eqref{eq:est1} in Theorem \ref{Th:hvalue}] Choose $\xi = \sum_\ell \xi_\ell\in \ker (D^Z_+ + {\mathcal B}^Z_{0+})$ and set $\eta = {\mathcal I} \tilde \eta$ where $\tilde\eta = \rho_\varepsilon^Z \cdot \tau \xi_s$ and $\xi_s = \xi^0 + \xi^1 + \xi^2$. Denote $\rho_\varepsilon^Z = \rho_\varepsilon$. The Taylor expansion from Corollary~\ref{cor:taylorexp} gives \begin{equation} \label{eq:cor} \begin{aligned} (\tilde D + s \tilde {\mathcal A})\eta &= d\rho_{\varepsilon\cdot} \tau\xi_s + \rho_\varepsilon \cdot \tau O\left( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + r + sr^3\right)\xi^0 \\ &\quad + \rho_\varepsilon \cdot \tau O\left( r\partial^{\mathcal V} + \partial^{\mathcal H} + 1 + sr^2\right)\xi^1 \\ &\quad + \rho_\varepsilon \cdot \tau O\left(\partial^{\mathcal V} + \partial^{\mathcal H} + 1 + sr\right)\xi^2. \end{aligned} \end{equation} Here the lower order terms vanished because we used the simplifications coming from equations \begin{align*} \slashed D_s \xi^0 &=0 \\ (\bar D^Z +{\mathcal B}^Z_{0+}) \xi^0 &= \left(\sum_\ell c_Z(d_Z(\ln \varphi_{s\ell})) P^{0+}_\ell\right) \xi^0, \qquad (\text{by \eqref{eq:br_D_Z_versus_D_Z_+}}) \end{align*} together with the equations \eqref{eq:balansing_condition_1} for $\xi^1$ and and \eqref{eq:balansing_condition_2} for $\xi^2$. By using \eqref{eq:elliptic_estimate1_k>=0} and \eqref{eq:aux_estimate_2} for $\xi^0$ and \eqref{eq:Gaussian_estimate_1} up to \eqref{eq:Gaussian_estimate_3} for $\xi^1$ and estimates \eqref{eq:Gaussian_estimate_4} and \eqref{eq:Gaussian_estimate_5} for $\xi^2$, we obtain that the $L^2(N)$ norm of the error terms of expansion \eqref{eq:cor} are bounded by $Cs^{-1/2}\|\xi^0\|_{L^2(N)}$. 
Finally, because $d\rho$ has support outside the $\varepsilon$-neighborhood of $Z_{\mathcal A}$, the $L^2(N)$ norm of the first term on the right hand side is bounded as \begin{equation*} \int_\mathcal{N} \left\lvert d\rho_{\varepsilon\cdot}\tau\xi^0\right\rvert^2\, d\mathrm{vol}^N \leq C\|\xi\|_{L^2(Z)}^2 \int_{\varepsilon\sqrt{s}}^\infty r^{n-m-1}e^{-r^2}\, dr \leq C \pi^{\tfrac{m-n}{2}} e^{- \tfrac{s\varepsilon^2}{2}}\|\xi^0\|_{L^2(N)}^2, \quad \text{(by \eqref{eq:Gaussian_evaluation}),} \end{equation*} and \[ \|d\rho_{\varepsilon\cdot}\tau(\xi^1 + \xi^2)\|_{L^2(N)} \leq Cs^{-1}\|\xi^0\|_{L^2(N)}^2. \] Putting all together we obtain, \begin{equation} \label{eq:aux_estimate_3} \int_\mathcal{N} \lvert(\tilde D + s \tilde {\mathcal A}) \tilde\eta\rvert^2 \, d\mathrm{vol}^N \leq C s^{-1}\|\xi^0\|_{L^2(N)}^2. \end{equation} Finally we show that \begin{equation} \label{eq:aux_estimate_4} \|\xi^0\|_{L^2(N)} \leq 2\|\rho_\varepsilon \cdot\xi^0\|_{L^2(N)} \end{equation} for every $s$ sufficiently large. Indeed, by using the change to $\{y_\alpha = \sqrt{s \lambda_\ell} x_\alpha\}_\alpha$ on each component of the section $\xi^0 = \sum_\ell (\xi_\ell)^0$, we estimate \begin{equation*} \|\rho_\varepsilon \cdot (\xi_\ell)^0\|^2_{L^2(N)} \geq \int_{B(Z, \varepsilon \sqrt{s})} e^{-\lvert y\rvert^2}\lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N = \left\lvert S^{n-m-1}\right\rvert\|\xi_\ell\|^2_{L^2(Z)} \int^{\epsilon\sqrt{s}}_0r^{n-m-1}e^{- r^2}\, dr. \end{equation*} But there exist $s_0 = s_0(\varepsilon)>0$, so that \[ \int^{\epsilon\sqrt{s}}_0 r^{n-m-1}e^{- r^2}\, dr> \tfrac{1}{4} \int_0^\infty r^{n-m-1}e^{- r^2}\, dr, \] for every $s>s_0$. Estimate \eqref{eq:aux_estimate_4} follow. Finally, using that $\tilde\eta = \rho_\varepsilon \tau(\xi^0+ \xi^1+ \xi^2)$ and \eqref{eq:aux_estimate_4}, \[ \|\tilde\eta\|_{L^2(N)}^2 \geq \frac{1}{4}\|\xi^0\|_{L^2(N)}^2 + 2 \langle\rho_\varepsilon \xi^0, \rho_\varepsilon (\xi^1+ \xi^2)\rangle_{L^2(N)} \] where the cross terms estimate as, \begin{align*} \lvert 2 \langle\rho_\varepsilon \xi^0, \rho_\varepsilon (\xi^1+ \xi^2)\rangle_{L^2(N)}\rvert &\leq \frac{1}{16}\|\xi^0\|_{L^2(N)}^2 + 16\| \xi^1 + \xi^2\|_{L^2(N)}^2 \\ &\leq (\frac{1}{16} + Cs^{-1})\|\xi^0\|_{L^2(N)}^2, \qquad (\text{by \eqref{eq:Gaussian_estimate_1} and \eqref{eq:Gaussian_estimate_4}}) \\ &\leq \frac{1}{8}\|\xi^0\|_{L^2(N)}^2, \end{align*} for $s>0$ sufficiently large. Therefore, we obtain \[ \|\tilde\eta\|_{L^2(N)}^2 \geq \frac{1}{8} \|\xi^0\|_{L^2(N)}^2. \] Combining this last inequality with \eqref{eq:aux_estimate_3}, we obtain \[ \int_\mathcal{N}\lvert\tilde D_s \tilde \eta\rvert^2\, d\mathrm{vol}^N \leq C s^{-1}\int_\mathcal{N} \lvert\tilde \eta\rvert^2\, d\mathrm{vol}^N. \] By \eqref{eq:density_comparison}, the volume densities $d\mathrm{vol}^\mathcal{N}$ and $d\mathrm{vol}^N$ are equivalent. Therefore, the preceding inequality holds for the density $d\mathrm{vol}^\mathcal{N}$. Since $\| D_s \eta\|_{L^2(X)} = \|\tilde D_s \tilde\eta\|_{L^2(\mathcal{N})}$ and $\|\eta\|_{L^2(X)} = \|\tilde \eta\|_{L^2(\mathcal{N})}$, estimate \eqref{eq:est1} follows. \end{proof} \vspace{2mm} Applying Lemma \ref{lemma:spectrum} we get a proof of Spectrum Separation Theorem stated in the introduction: \begin{proof}[Proof of Spectral Separation Theorem.] By Theorem~\ref{Th:hvalue}, we may choose $s_0>0$ so that the constants satisfy $4\tfrac{C_1}{s}< C_2$ for every $s> s_0$. 
We apply Lemma~\ref{lemma:spectrum} with $L = D_s$ and $H = L^2(X, E^0),\, H' = L^2(X, E^1)$ and $V_{s, \varepsilon}$ constructed in Definition~\ref{def:space_of_approx_solutions}. The analogue versions of Theorem~\ref{Th:hvalue} for the adjoint operator $D_s^*$ are also applied. As a result, we obtain that the for the first eigenvalue $\lambda_0$ of $D_s^* D_s$ and $D_s D^*_s$ satisfying $\lambda_0 \leq C_1 s^{-1/2}$, the orthogonal projections, \begin{equation*} \Pi^0 :\mathrm{span}^0(s, \lambda_0) \simeq V_{s, \varepsilon} \simeq \bigoplus_{Z \in \mathrm{Comp}(Z_{\mathcal A})} \ker\{ D^Z_+ + {\mathcal B}^Z_{0+} : C^\infty(Z ; S^{0+}\vert_Z) \rightarrow C^\infty(Z; S^{1+}\vert_Z)\}, \end{equation*} and \begin{equation*} \Pi^1:\mathrm{span}^1(s, \lambda_0) \simeq W_{s, \varepsilon} \simeq \bigoplus_{Z \in \mathrm{Comp}(Z_{\mathcal A})} \ker\{ D^{Z*}_+ + {\mathcal B}^Z_{1+} : C^\infty(Z ; S^{1+}\vert_Z) \rightarrow C^\infty(Z; S^{0+}\vert_Z)\}, \end{equation*} are both linear isomorphisms, for every $s>s_0$. It also follows that $N^i(s,\lambda_0) = N^i(s, C_1 s^{-1/2})$,\ i =0,1. This completes the proof when $s> s_0$. By replacing ${\mathcal A}$ with $- {\mathcal A}$, the preceding considerations prove an analogue theorem for $D_s$ with $s$ being large and negative. The bundle where approximate sections are constructed, is then changing from $S^{i+}$ to $S^{i-},\, i =0,1$. \end{proof} \begin{rem} Combining Theorem \ref{Th:hvalue} with the proof of Lemma \ref{lemma:spectrum} we obtain a bound on the error of the approximate eigensections that is if $\|\xi\|_{L^2(X)} =1$ and $D_s\xi = 0$ we have, \[ \| \xi - \Pi^i(\xi)\|_{L^2(X)}^2 \leq \frac{4C_1}{sC_2} \rightarrow 0 \quad \mbox{as}\quad s\rightarrow \infty,\quad i =0,1. \] \end{rem} \medskip \vspace{1cm} \setcounter{equation}{0} \section{A Poincar\'{e} type inequality} \label{sec:A_Poincare_type_inequality} This section is entirely devoted to the proof of estimate \eqref{eq:est2} of Theorem \ref{Th:hvalue}. We start by reducing the proof of the estimate to a local estimate for sections supported in the tubular neighborhood $B_{Z_{\mathcal A}}(4\varepsilon)$: \begin{lemma} \label{lemma:local_implies_global} If estimate (\ref{eq:est2}) is true for $\eta\in V_{s, \varepsilon}^\perp$ supported in $B_{Z_{\mathcal A}}(4\varepsilon)$ then it is true for every $\eta\in V_{s, \varepsilon}^\perp$. \end{lemma} \begin{proof} Let $\eta\in V_{s, \varepsilon}^\perp$ and recall the cutoff $\rho_\varepsilon$ used in Definition~\ref{def:space_of_approx_solutions} and define $\rho' = \rho_{2\varepsilon} :X\rightarrow [0,1]$, a bump function supported in $B_{Z_{\mathcal A}}(4\varepsilon)$ with $\rho' \equiv 1$ in $B_{Z_{\mathcal A}}(2\varepsilon)$. Write $\eta = \rho'\eta + (1-\rho')\eta = \eta_1 + \eta_2$ with supp $\eta_1 \subset B_{Z_{\mathcal A}}(4\varepsilon)$ and supp $\eta_2\subset X\backslash B_{Z_{\mathcal A}}(4\varepsilon) = \Omega(4\varepsilon)$. Then \begin{equation} \label{eq:interpol} \|D_s\eta\|^2_{L^2(X)} = \|D_s\eta_1\|^2_{L^2(X)} + \|D_s\eta_2\|^2_{L^2(X)} +2\langle D_s\eta_1, D_s\eta_2\rangle_{L^2(X)}. \end{equation} Since $\rho'\cdot \rho_\varepsilon = \rho_\varepsilon$ we have $\eta_1\in V_{s, \varepsilon}^\perp$ and by assumption there exist $C_1=C_1(\varepsilon)>0$ and $s_0 = s_0(\varepsilon)>0$ so that \[ \|D_s\eta_1\|^2_{L^2(X)}\geq C_1\|\eta_1\|^2_{L^2(X)} \] for every $s> s_0$. 
Also, by a concentration estimate \[ \|D_s\eta_2\|^2_{L^2(X)} \geq s^2\|{\mathcal A}\eta_2\|^2_{L^2(X)} - s\lvert\langle \eta_2, B_{\mathcal A}\eta_2\rangle_{L^2(X)}\rvert \geq (s^2 \kappa^2_{2\varepsilon} - s C_0)\|\eta_2\|^2_{L^2(X)}. \] with constants $\kappa_{2\varepsilon}$ and $C_0$ as in \eqref{eq:useful_constants}. To estimate the cross terms we calculate \[ D_s \eta_1\, =\, \rho' D_s\eta + (d\rho')_\cdot \eta,\qquad D_s\eta_2\, =\, (1-\rho') D_s\eta - (d\rho')_\cdot \eta \] and hence \begin{align*} \langle D_s\eta_1, D_s\eta_2\rangle_{L^2(X)} =& \int_X\rho'(1-\rho') \left\lvert D_s\eta\right\rvert^2\, d\mathrm{vol}^X + \int_X(1-2\rho')\langle (d\rho')_\cdot \eta, D_s\eta\rangle\, d\mathrm{vol}^X - \int_X \left\lvert(d\rho')_\cdot \eta\right\rvert^2\, d\mathrm{vol}^X \\ \geq& -\frac{1}{2}\|D_s\eta\|^2_{L^2(X)} -\frac{3}{2} \int _X \left\lvert(d\rho')_\cdot\eta\right\rvert^2\, d\mathrm{vol}^X. \end{align*} We used that $\lvert ab\rvert\leq \tfrac{1}{2}(a^2 + b^2)$ and that $(1-2\rho')^2\leq 1$. But $(d\rho')_\cdot \eta $ is supported in $\Omega(2\varepsilon)$ hence by a concentration estimate applied again \begin{align*} \int _X \left\lvert(d\rho')_\cdot\eta\right\rvert^2\, d\mathrm{vol}^X &\leq C_\varepsilon \int_{\Omega(2\varepsilon)}\left\lvert\eta\right\rvert^2\, d\mathrm{vol}^X \leq \frac{C_\varepsilon}{s^2 \kappa^2_{2\varepsilon}} \|D_s \eta\|^2_{L^2(X)} + \frac{C_\varepsilon C_0}{s \kappa^2_{2\varepsilon}}\|\eta\|_{L^2(X)}^2 \\ &\leq \frac{1}{3} \|D_s\eta\|^2_{L^2(X)} + \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \end{align*} for $s$ large enough. Hence \[ \langle D_s\eta_1, D_s\eta_2\rangle_{L^2(X)} \geq - \|D_s\eta\|^2_{L^2(X)} - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2. \] Substituting to \eqref{eq:interpol} and absorbing the first term in the left hand side there is an $s_1 = s_1(\varepsilon)$ with \begin{align*} 3\|D_s\eta\|^2_{L^2(X)}&\geq \|D_s\eta_1\|^2_{L^2(X)} + (s^2 \kappa^2_{2\varepsilon} - s C_0)\|\eta_2\|^2_{L^2(X)} - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \\ &\geq C_1(\|\eta_1\|^2_{L^2(X)} + \|\eta_2\|^2_{L^2(X)}) - \frac{C_\varepsilon}{s}\|\eta\|_{L^2(X)}^2 \\ & \geq C_1 \|\eta\|^2_{L^2(X)}, \end{align*} for every $s\geq s_1$. \end{proof} Since $L^2$-norms are additive on sections with disjoint supports, we can work with $\eta \in V_s^\perp$ so that the support of $\eta$ lies in a tubular neighborhood $B(Z, 4\varepsilon)$ of some individual singular component $Z$ of $Z_{\mathcal A}$. There the distance function $r$ from the set $Z$ is defined and we have the following generalization of estimate \eqref{eq:est2} of Theorem \ref{Th:hvalue}: \begin{lemma} \label{lemma:aux_estimate} There exist $\varepsilon_0>0$ and $C= C(\varepsilon_0)$ so that for every $\varepsilon \in (0, \varepsilon_0)$ there exist $s_0(\varepsilon)>0$ with the following property: for every $s > s_0$ and every $\eta \in V_{s, \varepsilon}^\perp \cap W^{1,2}_0(B_Z(4\varepsilon) ; E^0)$ and every $k=0,1,2$, \begin{align} \label{eq:est2'} \| D_s\eta\|_{L^2(X)} \geq s^{k/2}C\| r^k\eta\|_{L^2(X)}. 
\end{align} \end{lemma} As mentioned in Remark~\ref{rem:comments_on_Th_hvalue}, the exponential map identifies diffeomorphically this neighborhood to a neighborhood $\mathcal{N}_{2\varepsilon}$ of the zero section on the total space of the normal bundle $N$ of $Z$ and by using the maps introduced in \eqref{eq:exp_diffeomorphism}, we can prove estimate \eqref{eq:est2'} for the diffeomorphic copies $\tilde D_s$ of $D_s$ and $\tilde V_{s, \varepsilon}^\perp\cap W^{1,2}_0(\mathcal{N}_{2\varepsilon} ; \tilde E^0\vert_{\mathcal{N}_{2\varepsilon}}) $ of $V_{s, \varepsilon}^\perp \cap W^{1,2}_0( B_Z(4\varepsilon) ; E^0\vert_{B_Z(4\varepsilon)})$. The tubular neighborhood admits two different volume elements namely the pullback volume $ d\mathrm{vol}^\mathcal{N} = \exp^* d\mathrm{vol}^X$ and the volume form $d\mathrm{vol}^N$ introduced in Appendix~\ref{subApp:The_expansion_of_the_volume_from_along_Z}. The corresponding densities are equivalent per Appendix~\eqref{eq:density_comparison}. We prove estimate \eqref{eq:est2'} for the $L^2(N)$-norms and function spaces induced by $d\mathrm{vol}^N$. In the following lemmas until the end of the paragraph, we use the following conventions: given $\eta =\tau \xi$ and $\xi\in C^\infty_c(\mathcal{N}_{2\varepsilon}; \pi^*(E^0\vert_Z))$, we decompose $\eta = \eta_1 + \eta_2 = \tau (\xi_1 + \xi_2)$ where $\xi_1 = P^0 \xi $ and $\xi_2 = (1_{E^0\vert_Z} - P^0)\xi$ are sections of the bundles $\pi^*(S^0\vert_Z)$ and $\pi^*(S^0\vert_Z)^\perp$ respectively. It follows that $\xi_1$ and $\xi_2$ belong in different $\text{Cl}_n$-modules. We further decompose $\xi_1 = \xi^0_1 +\xi^1_1$ where $\xi_1^0 \in \ker \slashed D_s$ and $\xi^1_1 \in (\ker \slashed D_s)^{\perp_{L^2}} \cap \left(\bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*(S^0\vert_Z)) \right)$. We have the following basic estimate: \begin{lemma} There exist $s_0, \varepsilon_0>0$ and a constant $C=C(s_0, \varepsilon_0) >0$ so that for every $\varepsilon \in (0, \varepsilon_0)$, every $s>s_0$ and every $\eta =\tau \xi \in C^\infty_c(\mathcal{N}_{2\varepsilon}; \pi^*(E^0\vert_Z))$, we have an estimate, \begin{multline} \label{eq:Taylor_estimate} \| \slashed D_s \xi_1^1\|_{L^2(N)} + \| \slashed D_0 \xi_2\|_{L^2(N)} + s\|\bar{\mathcal A} \xi_2\|_{L^2(N)} \\ + \| \bar \nabla^{\mathcal H} \xi_1^0\|_{L^2(N)} +\| \bar \nabla^{\mathcal H} \xi_1^1\|_{L^2(N)} + \| \bar \nabla^{\mathcal H} \xi_2\|_{L^2(N)} \\ \leq C(\|\tilde D_s \eta\|_{L^2(N)} + \|\eta\|_{L^2(N)}). \end{multline} \end{lemma} \begin{proof} It is enough to prove the estimate for a section $\xi$, supported in a bundle chart $(\pi^{-1}(U), (x_j, x_\alpha)_{j,\alpha})$. We first prove the auxiliary estimates, \begin{align} \label{eq:auxiliary_estimate1} \|\slashed D_s \xi_1\|_{L^2(N)}^2 + \|\bar D^Z\xi_1\|_{L^2(N)}^2 &\leq C(\|(\slashed D_s + \bar D^Z)\xi_1\|^2_{L^2(N)} + \|\xi_1\|_{L^2(N)}^2), \end{align} then \begin{equation} \label{eq:auxiliary_estimate1.5} \|\bar\nabla^{\mathcal H} \xi_1^0\|^2_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_1^1\|^2_{L^2(N)} \leq C (\|\slashed D_s \xi_1^1\|^2_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_1\|^2_{L^2(N)} + \|\xi_1\|^2_{L^2(N)}), \end{equation} and \begin{equation} \label{eq:auxiliary_estimate2} \| \slashed D_0 \xi_2\|_{L^2(N)}^2 + \|\bar D^Z\xi_2\|_{L^2(N)}^2 + s^2 \|\xi_2\|_{L^2(N)}^2 \leq C\|(\slashed D_0 + \bar D^Z + s\bar {\mathcal A})\xi_2\|^2_{L^2(N)}. 
\end{equation} In proving \eqref{eq:auxiliary_estimate1} we expand the right hand side and we are led in estimating the cross term, \begin{equation} \label{eq:auxiliary_cross_terms} \begin{aligned} 2\langle \slashed D_s \xi_1, \bar D^Z\xi_1\rangle_{L^2(N)} =&\, \langle (\slashed D_s ^*\bar D^Z+ \bar D^{Z*} \slashed D_s) \xi_1,\xi_1\rangle_{L^2(N)} \\ =&\, \langle (\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s) \xi^0_1,\xi^0_1\rangle_{L^2(N)} \\ &+ 2 \langle (\slashed D_s^* \bar D^{Z*}+ D^{Z*} \slashed D_s) \xi^0_1,\xi^1_1\rangle_{L^2(N)} \\ &+ \langle (\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s) \xi^1_1,\xi^1_1\rangle_{L^2(N)}. \end{aligned} \end{equation} We further decompose $\xi^0_1 = \sum_\ell \xi^0_{1 \ell}$ where $\xi^0_{1 \ell} \in\bigcap_{k,l} W^{1,2}_{k,l}(N; \pi^*S_\ell^{0+})$. Then by using \eqref{eq:cross_terms}, the pointwise inner product is \[ \langle (\slashed D_s^* \bar D^Z+ D^{Z*} \slashed D_s) \xi^0_1,\xi^0_1\rangle(v) = - s \sum_{\alpha, \ell} x_\alpha \langle \mathfrak{c}_N (h^\alpha) \mathfrak{c}_N( \pi^* d\lambda_\ell)\xi^0_{1\ell}, \xi^0_{1\ell} \rangle(v)=0, \] because $\mathfrak{c}_N(h^\alpha) \mathfrak{c}_N (\pi^*d\lambda_\ell)\xi^0_{1\ell}$ belong to the eigenspace of $C^0$ with eigenvalue $(n-m-1)\lambda_\ell$ and $\xi^0_{1\ell}$ belong to the eigenspace with eigenvalue $(n-m)\lambda_\ell$. By Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms} \eqref{eq:cross_terms1} the operator $\slashed D_s^* \bar D^Z+ \bar D^{Z*} \slashed D_s$ is a bundle map with coefficients vanishing up to $sO(r)$ as $r \to 0^+$. Therefore the remaining terms in \eqref{eq:auxiliary_cross_terms} are estimated above by, \begin{align*} Cs(\| \xi^0_1\|_{L^2(N)} + \|\xi^1_1\|_{L^2(N)}) \|r\xi^1_1\|_{L^2(N)} &\leq C\|\xi_1\|_{L^2(N)}\|\slashed D_s \xi^1_1\|_{L^2(N)} \\ &\leq \delta \|\slashed D_s \xi_1\|_{L^2(N)}^2 + C\delta^{-1}\|\xi_1\|_{L^2(N)}^2, \end{align*} where we used \eqref{eq:bootstrap_estimate} with $k=0$. The cross term \eqref{eq:auxiliary_cross_terms} therefore estimates as \[ 2\lvert\langle \slashed D_s \xi_1, \bar D^Z \xi_1\rangle_{L^2(N)}\rvert \leq \delta\|\slashed D_s \xi_1\|^2_{L^2(N)} + C\delta^{-1} \|\xi_1\|^2_{L^2(N)}, \] for an updated constant $C$. Combining the preceding estimates and choosing $\delta>0$ small enough as suggested by the preceding constants, the term $\|\slashed D_s \xi_1\|^2_{L^2(N)}$ is absorbed to the left hand side of the expansion thus arriving at inequality \eqref{eq:auxiliary_estimate1}. To prove inequality \eqref{eq:auxiliary_estimate1.5} we again expand the right hand side and this time, we estimate the cross term, \[ 2\vert\langle \bar \nabla^{\mathcal H} \xi^0_1, \bar\nabla^{\mathcal H} \xi_1^1\rangle_{L^2(N)}\vert = 2\vert\langle \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1, \xi_1^1\rangle_{L^2(N)}\vert. 
\] Using again the decomposition $\xi_1^0 = \sum_\ell \xi_{1\ell}^0$, with $\xi_{1\ell}^0 = \varphi_{s \ell} \cdot \zeta_\ell$, we calculate explicitly, \begin{align*} \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1 &= - \sum_{i,\ell} \bar \nabla_{h_i} \bar\nabla_{h_i} \xi_{1\ell}^0 \\ &=- \sum_{i,\ell}\bar \nabla_{h_i}(M_{i\ell}\cdot \xi^0_{s\ell} + \varphi_{s\ell} \cdot \bar\nabla_{e_i} \zeta_\ell) \\ &= -\sum_{i,\ell}[(e_i(M_{s\ell}) - M_{i\ell}^2) \cdot \xi^0_{s\ell} +2M_{i\ell} \cdot\bar\nabla_{h_i}\xi_{s\ell}^0] + (\bar\nabla^*\bar\nabla \zeta)^0, \end{align*} where $\zeta = \sum_\ell \zeta_{\ell}$ and $M_{i\ell} :=\left(\frac{n-m}{2} - sr^2 \lambda_\ell\right) \frac{e_i(\lambda_\ell)}{2\lambda_\ell}$. We then estimate, \begin{align*} \lvert\langle \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1 , \xi_1^1\rangle_{L^2(N)}\rvert \leq &\, C\|(1+ sr^2 + s^2 r^4)\xi^0_1\|_{L^2(N)}\| \xi_1^1\|_{L^2(N)} +C\|\bar\nabla^{\mathcal H}\xi_1^0\|_{L^2(N)} \|(1+sr^2)\xi_1^1\|_{L^2(N)} \\ \leq &\, C\|\xi_1^0\|_{L^2(N)} \|\xi_1^1\|_{L^2(N)} + C\|\bar\nabla^{\mathcal H}\xi_1^0\|_{L^2(N)} [\|\xi_1^1\|_{L^2(N)}+ (\varepsilon + s^{-1/2})\|\slashed D_s\xi_1^1\|_{L^2(N)}], \end{align*} and by applying Cauchy-Schwartz, \begin{equation} \label{eq:auxiliary_cross_terms_3} 2\lvert\langle \bar\nabla^{{\mathcal H}*}\bar \nabla^{\mathcal H} \xi^0_1 , \xi_1^1\rangle_{L^2(N)}\rvert \leq \frac{1}{2} \|\bar\nabla^{\mathcal H} \xi_1^0\|^2_{L^2(N)} + C(\|\slashed D_s \xi_1^1\|^2_{L^2(N)} + \|\xi_1\|^2_{L^2(N)}), \end{equation} where in the third line of the preceding estimate, we applied \eqref{eq:elliptic_estimate1_k>=0} on $\xi_1^0$ with $k=1$ and $k=3$ and we applied \eqref{eq:bootstrap_estimate} with $k=1$ on $\xi_1^1$. Absorbing the first term of \eqref{eq:auxiliary_cross_terms_3} on the left hand side of the expansion, we obtain \eqref{eq:auxiliary_estimate1.5}. By Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms} \eqref{eq:cross_terms3}, the operator $ \bar{\mathcal A}^*\circ(\slashed D_0 + \bar D^Z) + (\slashed D_0 + \bar D^Z)^* \circ \bar{\mathcal A}$ is a bundle map and we estimate \begin{align*} 2 \lvert\langle (\slashed D_0 + \bar D^Z) \xi_2, \bar{\mathcal A} \xi_2 \rangle_{L^2(N)}\rvert \leq C \| \xi_2\|_{L^2(N)}^2 \leq C \| \bar{\mathcal A} \eta_2\|_{L^2(N)}^2, \end{align*} and by Proposition~\ref{prop:Weitzenbock_identities_and_cross_terms} \eqref{eq:cross_terms2} $\slashed D_0^* \bar D^Z + \bar D^{Z*} \slashed D_0 \equiv 0$, so that \begin{multline*} \|\slashed D_0 \xi_2\|_{L^2(N)}^2+ \|\bar D^Z\xi_2\|_{L^2(N)}^2 + s^2 \|\bar{\mathcal A} \xi_2\|_{L^2(N)}^2 \\ \leq \|(\slashed D_0 + \bar D^Z + s \bar{\mathcal A}) \xi_2\|_{L^2(N)}^2 + 2s \lvert\langle (\slashed D_0 + \bar D^Z) \xi_2, \bar{\mathcal A} \xi_2 \rangle_{L^2(N)}\rvert \\ \leq \|(\slashed D_0 + \bar D^Z + s \bar{\mathcal A}) \xi_2\|_{L^2(N)}^2 +C s\| \bar{\mathcal A} \eta_2\|_{L^2(N)}^2. \end{multline*} Choosing $s>0$ large enough we absorb the term $s\| \bar{\mathcal A} \eta_2\|_{L^2(N)}^2$ to the left hand side of the preceding inequality thus, obtaining \eqref{eq:auxiliary_estimate2}. Finally we prove \eqref{eq:Taylor_estimate}: by combining expansions \eqref{eq:taylorexp} and \eqref{eq:taylorexp1} and rearranging terms, we obtain \begin{equation} \label{eq:Taylorexp1} \tau[(\slashed D_s + \bar D^Z)\xi_1 + (\slashed D_0 + \bar D^Z + s\bar{\mathcal A})\xi_2] = D_s \eta + O(r^2 \partial^{\mathcal V} + r\partial^{\mathcal H} + 1) \xi + O(sr^2)\xi_1 + O(sr) \xi_2. 
\end{equation} Taking $L^2$-norms on and applying \eqref{eq:auxiliary_estimate1} and \eqref{eq:auxiliary_estimate2} \begin{multline} \label{eq:long_inequality} \| \slashed D_s \xi_1\|_{L^2(N)} + \| \bar D^Z \xi_1\|_{L^2(N)} + \| \slashed D_0 \xi_2\|_{L^2(N)} + \| \bar D^Z \xi_2\|_{L^2(N)} + s\|\bar{\mathcal A} \xi_2\|_{L^2(N)} \\ \leq C\| D_s \eta\|_{L^2(N)} + \| O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} + 1 + sr^2) \tau\xi_1\|_{L^2(N)} \\ + \|O( r^2\partial^{\mathcal V} + r\partial^{\mathcal H} +1+sr)\tau\xi_2\|_{L^2(N)} + C\|\xi_1\|_{L^2(N)}. \end{multline} The $L^2$ norm over the tubular region $\mathcal{N}_\varepsilon$ of the error terms for $\xi_1$ in the right hand side of \eqref{eq:Taylorexp1} are estimated as, \begin{equation*} \|O(r^2) \partial^{\mathcal V} \xi_1\|_{L^2(N)} \leq C[(\varepsilon^2+ \varepsilon s^{-1/2} + s^{-1} )\|\slashed D_s \xi_1\|_{L^2(N)} + s^{-1/2}\|\xi_1\|_{L^2(N)}], \end{equation*} by applying \eqref{eq:elliptic_estimate1_k>=0} with $k=2$, then \begin{equation*} \|r \partial^{\mathcal H} \xi_1\|_{L^2(N)} \leq C\varepsilon[(\varepsilon+ s^{-1/2})\|\slashed D_s\xi_1 \|_{L^2(N)} +\|\bar D^Z \xi_1\|_{L^2(N)} + \|\xi_1\|_{L^2(N)})], \end{equation*} by using \eqref{eq:elliptic_estimate2}, and \begin{equation*} s\|O(r^2) \xi_1\|_{L^2(N)} \leq C[(s^{-1/2} + \varepsilon)\|\slashed D_s \xi_1\|_{L^2(N)} + \|\xi_1\|_{L^2(N)}], \end{equation*} by using \eqref{eq:elliptic_estimate1_k>=0} with $k=1$. For the corresponding error terms for $\xi_2$, \[ \| O(r^2 \partial^{\mathcal V}+ r \partial^{\mathcal H} + 1+ sr) \xi_2\|_2 \leq C\varepsilon(\|\slashed D_0 \eta_2\|_2 + \|\bar \nabla^{\mathcal H} \xi_2\|_2 + s\|\bar{\mathcal A} \xi_2\|_2 ) + C\|\xi_2\|_2. \] By combining the estimates of the error terms and the preceding estimates with \eqref{eq:long_inequality} and absorbing terms on the left hand side and choosing at first $\varepsilon$ small enough and then $s>0$ large enough, we obtain \[ \| \slashed D_s \xi_1\|_2 + \| \bar \nabla^{\mathcal H} \xi_1\|_2 + \| \slashed D_0 \xi_2\|_2 + \| \bar \nabla^{\mathcal H} \xi_2\|_2 + s\|\bar{\mathcal A} \xi_2\|_2 \leq C(\| D_s \eta\|_2 + \|\eta\|_2). \] Finally, by combining the preceding estimate with \eqref{eq:auxiliary_estimate1.5}, we obtain estimate \eqref{eq:Taylor_estimate}. \end{proof} In the proofs of the following lemmas, we use the re-scaling $\{y_\alpha = \sqrt{s} x_\alpha\}_\alpha$. This is independent of the Fermi coordinates defining a global diffeomorphism of the tubular neighborhoods $\mathcal{N}^\varepsilon \to \mathcal{N}^{\sqrt{s}\varepsilon}$. The volume element re-scales accordingly as $d\mathrm{vol}^N_x = s^{\frac{m-n}{2}} d\mathrm{vol}^N_y$. Recall the orthogonal projections $P^i: E^i\vert_Z \to S^i\vert_Z,\, i =0,1$ introduced in Section~\ref{sec:Concentration_Principle_for_Dirac_Operators}. \begin{lemma} \label{lemma:perps_of_app_solutions} Suppose there exists sequence $\{s_j\}_j$ of positive numbers with no accumulation point and a sequence $\{\eta_j\} \subset W^{1,2}_0(\mathcal{N}_{2\varepsilon} , \tilde E^0\vert_{\mathcal{N}_{2\varepsilon}})$ satisfying $\sup_j \|\eta_j\|_{L^2(N)} < \infty$ and $\|\tilde D_{s_j}\eta_j\|_{L^2(N)}^2 \rightarrow 0$ as $j\rightarrow \infty$. Then $P^1 \eta_j \to 0$ in $L^2(N)$ and, after re-scaling, there exist a subsequence of $\{\xi_j\}$, of $P^0 \eta_j$ that converges $L^2_{\text{loc}}$-strongly and $W^{1,2}$-weakly on $N$, to a section $\sum_\ell\phi_{1\ell}\xi_\ell$ with $\xi_\ell\in W^{1,2}(Z, S^{0+}_\ell)$. 
Furthermore, the section $\bar\xi = \sum_\ell \xi_\ell :Z \to S^{0+}$ satisfies \begin{equation} \label{eq:limiting_conditions0} (D^Z_+ + {\mathcal B}^Z_{0+})\bar\xi = 0. \end{equation} \end{lemma} \begin{proof} We decompose $\eta_j = \eta_{j1} + \eta_{j2}$ into sections of $S^0$ and $(S^0)^\perp$ correspondingly. We re-scale the sequence $\{\eta_j\}_j$ around the critical set $Z$: recall the Fermi coordinates $(\mathcal{N}_U ,\,(x_k,\, x_\alpha)_{k,\alpha})$ and the parallel transport map $\tau$ from \eqref{eq:parallel_transport_map}. We define the re-scaled sections of $\pi^*S^0 \to\mathcal{N}^{2\sqrt{s_j}\varepsilon}_U$ by \begin{equation} \label{eq:modified_section} \tau\xi_{jl}(x_k,y_\alpha) = s_j^{\tfrac{m-n}{4}} \eta_{jl}\left(x_k,\frac{y_\alpha}{\sqrt{s_j}}\right),\ l=1,2, \quad \text{and}\quad \xi_j = \xi_{j1} + \xi_{j2}. \end{equation} The re-scaling is defined independently of the choice of the Fermi coordinates, allowing the components of $\xi_j$ over the various charts $\{\mathcal{N}_U: U\subset Z\}$ to patch together, defining sections over the tubular neighborhood $\mathcal{N}_{\sqrt{s_j}\varepsilon}$. We decompose further $\xi_{j1} = \xi_{j1}^0 + \xi_{j1}^1$ with $\xi_{j1}^0\in \ker \slashed D_1$ and $\xi_{j1}^1 \perp_{L^2} \overline{\ker\slashed D_1}^{L^2}$. The operator $\bar D^Z$ and the derivatives $\bar\nabla^{\mathcal H}$ remain invariant under the change of variables $\{y_\alpha = x_\alpha\sqrt{s_j}\}_\alpha$, while the operator $\slashed D_{s_j}$ changes to $\sqrt{s_j}\slashed D_1$. Changing variables on the left hand side of \eqref{eq:Taylor_estimate}, we obtain \begin{multline} \label{eq:Taylor_sequence} \sqrt{s_j}\|\slashed D_1 \xi_{j1}\|_{L^2(N)} + \sqrt{s_j}\| \slashed D_0 \xi_{j2}\|_{L^2(N)} + s_j\| \xi_{j2}\|_{L^2(N)} \\ + \|\bar \nabla^{\mathcal H}\xi_{j1}^0\|_{L^2(N)} + \|\bar \nabla^{\mathcal H}\xi_{j1}^1\|_{L^2(N)} + \|\bar\nabla^{\mathcal H}\xi_{j2}\|_{L^2(N)} \\ \leq C(\|\tilde D_{s_j} \eta_j\|_{L^2(N)} + \|\eta_j\|_{L^2(N)}) \leq C, \end{multline} for every $j$. We now deal with each individual component of \eqref{eq:Taylor_sequence}: \bigskip \begin{case}[The sequence $\{\xi_{j1}^0\}_j$]\hfill Changing variables in the integrals on the left hand side of estimate \eqref{eq:elliptic_estimate1_k>=0} with $k=0$ and using \eqref{eq:Taylor_sequence}, we obtain a uniform bound, \begin{equation*} \|\bar\nabla \xi_{j1}^0 \|_{L^2(N)} \leq \|\bar\nabla^{\mathcal V} \xi_{j1}^0 \|_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_{j1}^0 \|_{L^2(N)} \leq C\|\xi^0_{j 1}\|_{L^2(N)} + \|\bar\nabla^{\mathcal H} \xi_{j1}^0 \|_{L^2(N)} \leq C, \end{equation*} for every $j$. Hence $\sup_j\|\xi_{j1}^0\|_{W^{1,2}(N)} < \infty$. By the weak compactness of the unit ball in $W^{1,2}(N)$, there exists a subsequence, denoted again by $\{\xi_{j1}^0\}_j$, converging weakly in $W^{1,2}(N)$ to a section $\xi\in W^{1,2}(N)$. By Rellich's Theorem, for every $T>0$, there exists a subsequence, denoted again by $\{\xi_{j1}^0\}_j$, converging in $L^2(B(Z,T))$ to $\xi\vert_{B(Z,T)}$. Notice that we can choose a subsequence $\{\xi_{j1}^0\}_j$ so that $\xi_{j1}^0 \to \xi$ in the $L^2_{loc}(N)$ topology. Indeed, this follows by applying Rellich's Theorem successively on the neighborhoods $B(Z, T+1)$, $T\in \mathbb{N}$, each time extracting a further subsequence, and then applying a diagonal argument on $T$. It follows that $\{\xi_{j1}^0\}_j$ converges in $L^2_{loc}(N)$ and weakly in $W^{1,2}(N)$ to $\xi$.
We denote the norm of $L^2(B(Z,T))$ by $\|\cdot\|_{2,T}$. By weak lower semicontinuity, $\|\slashed D_1\xi\|_{2,T}=0$ for every $T>0$, so that $\slashed D_1\xi=0$. \end{case} \begin{case}[The sequence $\{\xi_{j1}^1\}_j$]\hfill We have the estimates, \begin{align*} \|\bar\nabla^{\mathcal V} \xi_{j1}^1 \|_{L^2(N)} &\leq C\|\slashed D_1 \xi_{j1}\|_{L^2(N)} \leq C s_j^{-1/2}, \qquad (\text{by \eqref{eq:Taylor_sequence}}) \\ \|\bar\nabla^{\mathcal H} \xi_{j1}^1\|_{L^2(N)} &\leq C,\qquad (\text{by \eqref{eq:Taylor_sequence}}) \\ \sup_j\sqrt{s_j}\|\xi_{j1}^1\|_{L^2(N)} &\leq C \sup_j \sqrt{s_j}\|\slashed D_1 \xi_{j1}^1\|_{L^2(N)}< \infty. \qquad (\text{by \eqref{eq:spectral_estimate}}) \end{align*} Hence $\{\xi_{j1}^1\}_j$ converges strongly in $L^2(N)$ to zero and weakly in $W^{1,2}(N)$ to the zero section. Also, by weak compactness in $L^2(N)$, there exists a weak limit $\psi_1\in L^2(N)$ with \[ \sqrt{s_j} \xi_{j1}^1 \rightharpoonup \psi_1 \] in $L^2(N)$. By weak lower semicontinuity of the $L^2(N)$-norm, for every $T'<T$, every $0<h<\tfrac{1}{2}(T-T')$ and every $\alpha$, the difference quotients in the fiber directions at $(p,v)\in N$, \[ \partial_\alpha^h \psi_1(p,v) := \frac{1}{h}[\psi_1(p,v + h e_\alpha) - \psi_1(p,v)]\in E_p , \] satisfy \begin{align*} \|\partial_\alpha^h \psi_1\|_{2,T'} &\leq \liminf_j \sqrt{s_j}\|\partial^h_\alpha\xi_{j1}^1\|_{2, T'} \\ &\leq \limsup_j C_T \sqrt{s_j}\|\partial_\alpha\xi_{j1}^1\|_{2, T} \\ &\leq \limsup_j C_T \sqrt{s_j}\|\slashed D_1 \xi_{j1}^1\|_{L^2(N)}< \infty, \end{align*} where in the last two lines of the preceding estimate we used \eqref{eq:bootstrap_estimate} with $k=0$ and \eqref{eq:Taylor_sequence}. Hence $\psi_1$ has uniform $L^2(N)$-bounds on the difference quotients in the normal directions and therefore has weak derivatives in the normal directions that are bounded in $L^2(N)$. Since $\sup_j \|\sqrt{s_j} \slashed D_1 \xi_{j1}\|_{L^2(N)}<\infty$, it follows by Lemma~\ref{lemma:weak_convergence}, applied to the sequence $\{\sqrt{s_j} \xi_{j1}- \psi_1\}_j$, that $\sqrt{s_j} \slashed D_1 \xi_{j1}\rightharpoonup \slashed D_1\psi_1$ weakly in $L^2(N)$. \end{case} \begin{case}[The sequence $\{\xi_{j2}\}_j$]\hfill By \eqref{eq:Taylor_sequence}, we have the estimates \begin{align*} \|\bar\nabla^{\mathcal V} \xi_{j2} \|_{L^2(N)} &= \| \slashed D_0 \xi_{j2}\|_{L^2(N)} \leq C s_j^{-1/2} \\ \|\bar\nabla^{\mathcal H} \xi_{j2} \|_{L^2(N)} &\leq C \\ s_j\| \xi_{j2}\|_{L^2(N)} &\leq C. \end{align*} Hence $\{\xi_{j2}\}_j$ converges strongly in $L^2(N)$ and weakly in $W^{1,2}(N)$ to the zero section. Similarly, the sequence $\{\sqrt{s_j} \xi_{j2}\}_j$ converges strongly in $L^2(N)$ to zero and the sequence $\{\sqrt{s_j} \slashed D_0\xi_{j2}\}_j$ is bounded in $L^2(N)$. By Lemma~\ref{lemma:weak_convergence}, the latter sequence converges weakly in $L^2$ to zero. Finally, by weak compactness in $L^2(N)$, there exists $\psi_2\in L^2(N)$ so that \[ s_j \xi_{j2} \rightharpoonup \psi_2. \] \end{case} To summarize, we have the $L^2(N)$-weak limits, \begin{align*} (\sqrt{s_j} \slashed D_1 + \bar D^Z)\xi_{j1} &\rightharpoonup \slashed D_1 \psi_1 + \bar D^Z\xi, \\ \sqrt{s_j} \slashed D_0 \xi_{j2},\ \bar D^Z\xi_{j2} & \rightharpoonup 0, \\ s_j \xi_{j2}&\rightharpoonup \psi_2.
\end{align*} Using the expansions \eqref{eq:taylorexp} and \eqref{eq:taylorexp1}, \begin{align*} \tau^{-1}\tilde D_{s_j} \eta_j =& \left(\sqrt{s_j}\slashed D_1 + \bar D^Z + {\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right)\xi_{j1} +(\sqrt{s_j}\slashed D_0 + \bar D^Z + s_j\bar{\mathcal A})\xi_{j2} \\ &+s_j^{-1/2} O(r^2 \partial^{\mathcal N} + r\partial^{\mathcal H})\xi_j + s^{-1/2}_jO(r + r^3)\xi_{j1} +\tau O(1 + s_j^{1/2} r)\xi_{j2} . \end{align*} The $L^2(\mathcal{N}^T)$-norm of the error terms is bounded by $s_j^{-1/2} C_{T, \varepsilon} \| \xi_j\|_{1,2, T}$ for large $j$. By our assumption that $\tilde D_{s_j} \eta_j \to 0 $ in $L^2(N)$, we obtain that \[ \left\|\left(\sqrt{s_j}\slashed D_1 + \bar D^Z + {\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right)\xi_{j1} + (\sqrt{s_j}\slashed D_0 + \bar D^Z + s_j\bar{\mathcal A})\xi_{j2}\right\|_{2,T} \to 0 \] as $j\to \infty$, for every $T>0$. By weak lower semicontinuity, we conclude that $\xi, \psi_1, \psi_2$ satisfy the system of mutually $L^2(N)$-orthogonal components, \begin{align*} \slashed D_1 \xi&=0, \\ \slashed D_1 \psi_1 + \left(\bar D^Z + P^0\circ \left({\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right) \right) \xi&=0, \\ \bar{\mathcal A} \psi_2 + (1- P^0)\circ \left( {\mathcal B}^0 + \frac{1}{2}r^2 \bar A_{rr}\right)\xi &=0. \end{align*} By Proposition~\ref{prop:horizontal_regularity}, we obtain that $\psi_1 \in W^{1,2}_{k,l}(N)$, for every $k,l>0$. We further break the second equation into components. We start by using the decomposition $S^0 = \bigoplus_\ell S^0_\ell$, so that \[ \xi = \sum_\ell \varphi_{1\ell}\cdot \xi_\ell\qquad \text{and} \qquad \bar\xi:= \sum_\ell \xi_\ell, \] where $\varphi_{1\ell} = \lambda_\ell^{\tfrac{n-m}{4}} \exp\left(-\tfrac{1}{2} \lambda_\ell r^2\right)$ and $\xi_\ell \in W^{1,2}(Z; S^{0+}_\ell)$. By \eqref{eq:br_D_Z_versus_D_Z_+}, \[ \bar D^Z \xi = \sum_\ell (\mathfrak{c}_N(\pi^* d \ln\varphi_\ell) \xi_\ell + \varphi_\ell\cdot D^Z_+\xi_\ell). \] In the proof of the claim in Lemma~\ref{lem:balancing_condition} we calculated the components of the second equation of the system belonging to $\ker\slashed D_1$. These give the equation $D^Z_+\bar\xi + {\mathcal B}^Z_{0+}\bar \xi =0$. The remaining terms of the aforementioned equation are $L^2$-perpendicular to $\ker\slashed D_1$. They give $\slashed D_1 \psi_1 + {\mathcal P}_1 \xi =0$. This completes the proof of the existence of $\xi$ with the asserted properties. \end{proof} We used the following lemma: \begin{lemma} \label{lemma:weak_convergence} Let $\{\xi_j\}_j$ be a sequence converging weakly to zero in $L^2(N)$ and possessing weak directional derivatives in $L^2$ in the directions necessary to guarantee the existence of the sequence $\{L\xi_j\}_j$, for a given first-order differential operator $L$. Assume $\sup_j\|L\xi_j\|_{L^2(N)} < \infty$. Then the sequence $\{L\xi_j\}_j$ converges weakly to zero in $L^2$. \end{lemma} \begin{proof} Let $\sup_j \|L\xi_j\|_{L^2(N)} = M$ and choose $\psi \in L^2(N)$ and $\varepsilon>0$. Choose a smooth compactly supported section $\chi\in W^{1,2}(N)$ with $\| \chi - \psi\|_{L^2(N)} < \varepsilon/M$. Then \begin{equation*} \lvert\langle L\xi_j, \psi\rangle_{L^2(N)}\rvert \leq \lvert\langle L\xi_j, \chi\rangle_{L^2(N)}\rvert + \|L\xi_j\|_{L^2(N)}\| \psi - \chi\|_{L^2(N)} \leq \lvert\langle \xi_j, L^*\chi\rangle_{L^2(N)}\rvert + \varepsilon, \end{equation*} so that $\limsup_j \lvert\langle L\xi_j, \psi\rangle_{L^2(N)}\rvert\leq \varepsilon$.
Since $\varepsilon$ was chosen arbitrarily, we have that $\lim_j \langle L\xi_j, \psi\rangle_{L^2(N)} =0$. This concludes the proof. \end{proof} \begin{proof}[Proof of estimate \eqref{eq:est2'} in Lemma \ref{lemma:aux_estimate}] This is a Poincar\'{e}-type inequality and we prove it by contradiction. Fix $k\in \{0,1,2\}$. Negating the statement of Lemma \ref{lemma:aux_estimate} for $\varepsilon_0 = 1/j$ and $C=j$, there exists $0< \varepsilon_j < 1/j$ with the following significance: there is an unbounded sequence $\{s_u(\varepsilon_j)\}_u$ and sections $\eta_u(\varepsilon_j) \in\tilde V^\perp_{\varepsilon_j, s_u}\cap C^\infty_c(\mathcal{N}_{2\varepsilon_j} ; \tilde E^0)$ so that \[ \int_N\lvert\eta_u\rvert^2\, d\mathrm{vol}^\mathcal{N}=1 \quad \text{and}\quad j\| \tilde D_{s_u} \eta_u\|_{L^2(N)} \leq s_u^{\tfrac{k}{2}}\| r^k \eta_u\|_{L^2(N)}, \quad \text{for every $u\in \mathbb{N}$.} \] In particular, we set $s_j$ to be the first term of the unbounded sequence $\{s_u\}_u$ that is bigger than $\frac{j^2}{\varepsilon_j^2}$. We also set $\eta_j$ to be the compactly supported section associated to $s_j$, and denote $\mathcal{N}_j := \mathcal{N}_{2\varepsilon_j}$ and $\tilde V_j^\perp:= \tilde V^{\perp_{L^2}}_{\varepsilon_j, s_j}\cap C^\infty_c(\mathcal{N}_j ; \tilde E^0\vert_{\mathcal{N}_j})$. By induction on $j\in \mathbb{N}$, we obtain sequences $\{\varepsilon_j\}_j \subset (0, 1/j),\ \{s_j\}_j \subset (j, \infty)$ and $\{\eta_j\}_j \subset \tilde V^\perp_j$ so that \[ \int_N \lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N} =1 \quad \text{and}\quad j\|\tilde D_{s_j}\eta_j\|_{L^2(N)}\leq s_j^{\tfrac{k}{2}}\|r^k\eta_j\|_{L^2(N)}, \quad \text{for every $j\in \mathbb{N}$.} \] When $k=0$, this implies that $\|\tilde D_{s_j}\eta_j\|_{L^2(N)} \to 0$. When $k=1$ or $2$, set $\tau\bar \eta_j = \eta_j$ and estimate \begin{equation*} s_j^{\tfrac{k}{2}}\|r^k\bar\eta_j\|_{L^2(N)} \leq C \left( (s_j^{-1/2} + \varepsilon_j) \|\slashed D_{s_j} \bar \eta_j\|_{L^2(N)} + \|\eta_j\|_{L^2(N)}\right) \leq C( \|\tilde D_{s_j} \eta_j\|_{L^2(N)} + 1), \end{equation*} where in the first inequality we used \eqref{eq:elliptic_estimate1_k>=0} with $k=1$ or $k=2$, and in the second one we used \eqref{eq:Taylor_estimate} and the fact that $\|\eta_j\|_{L^2(N)} \leq 2$. It follows that \[ j\|\tilde D_{s_j}\eta_j\|_{L^2(N)}\leq C( \|\tilde D_{s_j} \eta_j\|_{L^2(N)} + 1), \] for every $j$, in which case we obtain again that $\|\tilde D_{s_j}\eta_j\|_{L^2(N)} \to 0$. We recall the re-scaled sequence $\{\xi_j\}_j\subset W^{1,2}(N; \pi^*(E^0\vert_Z))$ of $\{\eta_j\}_j$ introduced in \eqref{eq:modified_section}. By Lemma~\ref{lemma:perps_of_app_solutions}, there exists a subsequence, denoted again by $\{\xi_j\}_j$, that converges $L^2_{loc}$-strongly and $W^{1,2}$-weakly to a section $\sum_\ell \varphi_\ell \xi_\ell$, where $\xi_\ell \in W^{1,2}(Z; S^{0+}_\ell)$ satisfies $(D^Z_+ + {\mathcal B}^Z_{0+})\xi_\ell = 0$ for every $\ell$ and $\varphi_\ell := \lambda_\ell^{\tfrac{n-m}{4}} \exp\left( -\frac{1}{2}\lambda_\ell r^2 \right)$. \medskip \textit{Claim:} $\xi_\ell \equiv 0$ for every $\ell$. \proof[Proof of claim] By assumption, $\eta_j \perp_{L^2} \tilde V_{s_j, \varepsilon_j}$. For every $j$ we construct $\xi_{s_j} = \xi_j^0 + \xi_j^1 + \xi_j^2$, where $\xi_j^0 = \sum_\ell \varphi_{s_j \ell} \cdot \xi_\ell$ and $\xi_j^1,\ \xi_j^2$ are constructed by equations \eqref{eq:balansing_condition_1} and \eqref{eq:balansing_condition_2} respectively.
Using Definition~\ref{def:space_of_approx_solutions}, we have $\rho_{\varepsilon_j} \cdot \tau\xi_{s_j}\perp_{L^2} \eta_j$, for every $j$. We denote by $d\mathrm{vol}_j$ the density whose density function is the pullback of $d\mathrm{vol}^\mathcal{N}/ d \mathrm{vol}^N$ under the re-scaling $\{y_\alpha = \sqrt{s_j} x_\alpha\}_\alpha$. The orthogonality condition reads \begin{equation} \label{eq:orthogonality_in_L_2} 0=\int_{\mathcal{N}_j} \langle \eta_j, \rho_{\varepsilon_j} \cdot \tau\xi_{s_j}\rangle\, d\mathrm{vol}^\mathcal{N} = \sum_\ell\int_N \langle \xi_j, \rho_{\varepsilon_j\sqrt{s_j}} \cdot \varphi_\ell\cdot\xi_\ell\rangle\, d\mathrm{vol}_j + \int_{\mathcal{N}_j} \langle \eta_j, \rho_{\varepsilon_j} \cdot \tau (\xi^1_j + \xi^2_j)\rangle\, d\mathrm{vol}^\mathcal{N}. \end{equation} The second integral of the right hand side obeys the bound \begin{equation*} \left\lvert\int_{\mathcal{N}_j} \langle \eta_j, \rho_{\varepsilon_j} \cdot \tau(\xi^1_j + \xi^2_j)\rangle\, d\mathrm{vol}^\mathcal{N}\right\rvert \leq C\|\eta_j\|_{L^2(N)} (\|\xi^1_j \|_{L^2(N)} + \|\xi^2_j \|_{L^2(N)}) \leq Cs_j^{-1/2}\sum_\ell\|\xi_\ell\|_{L^2(N)}, \end{equation*} where in the last inequality we used the estimates of Lemma~\ref{lem:balancing_condition}. It follows that the second integral of the right hand side vanishes as $j\to \infty$. Here we also used that the density $d\mathrm{vol}^\mathcal{N}$ is equivalent to $d\mathrm{vol}^N$. On the other hand, by construction $\varepsilon_j \sqrt{s_j} >j$ for every $j$, and therefore $\rho_{\varepsilon_j \sqrt{s_j} } \to 1$ uniformly on compact subsets of $N$. Also, by expansion \eqref{eq:volume-expansion}, we have $\lim_j \lvert d\mathrm{vol}_j - d\mathrm{vol}^N\rvert =0$. Hence for every $T>0$, \begin{align*} \sum_\ell\int_{B(Z,T)} \varphi_\ell^2 \cdot \lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N & = \lim_j \sum_\ell\int_{B(Z,T)} \langle \rho_{\varepsilon_j\sqrt{s_j}} \cdot\xi_j, \varphi_\ell\cdot\xi_\ell\rangle\, d\mathrm{vol}_j \\ &= - \lim_j\sum_\ell\int_{B(Z,T)^c} \langle \xi_j, \rho_{\varepsilon_j\sqrt{s_j}} \cdot \varphi_\ell\cdot\xi_\ell\rangle\, d\mathrm{vol}_j \\ &\leq \limsup_j \sum_\ell \int_{B(Z,T)^c}\lvert\langle\xi_j, \varphi_\ell \xi_\ell\rangle\rvert\, d\mathrm{vol}_j \\ &\leq \sum_\ell\left(\int_{B(Z,T)^c}\varphi_\ell^2\lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N\right)^{1/2}, \end{align*} where in the second line we used \eqref{eq:orthogonality_in_L_2}; the inequality in the last line follows from the Cauchy-Schwarz inequality and the fact that $\lim_j \int_N \lvert\xi_j\rvert^2 \, d\mathrm{vol}_j =1$. Letting $T\rightarrow \infty$ and using the fact that $\varphi_\ell \cdot\xi_\ell\in L^2(N)$, we see that $\xi_\ell \equiv 0$, finishing the proof of the claim. Finally, we fix $T>0$ and decompose the region $\mathcal{N}_j$ where $\eta_j$ is supported as \[ \mathcal{N}_j = B(Z,T/\sqrt{s_j}) \cup \big(\mathcal{N}_j \cap B(Z,T/\sqrt{s_j})^c\big). \] It follows that for every $T>0$, \[ \lim_j\int_{B(Z,T/\sqrt{s_j})}\lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N}\, =\, \lim_j \int_{B(Z,T)} \lvert\xi_j\rvert^2\, d\mathrm{vol}_j\, =\, \int_{B(Z, T)}\sum_\ell\varphi_\ell^2\cdot\lvert\xi_\ell\rvert^2\, d\mathrm{vol}^N\, =\, 0, \] and therefore \[ \lim_j \int_{B(Z, T/\sqrt{s_j})^c}\lvert\eta_j\rvert^2 d\mathrm{vol}^\mathcal{N} = 1 -\lim_j \int_{B(Z,T/\sqrt{s_j})}\lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N} = 1. \] We now obtain a contradiction from the concentration estimate.
Since $\eta_j$ is compactly supported in $\mathcal{N}_j$, by Lemma \ref{lemma:normal_rates} it satisfies the pointwise estimate \[ \lvert\tilde {\mathcal A} \eta_j\rvert^2 \geq Cr^2 \lvert\eta_j\rvert^2. \] Then, by a concentration estimate, \begin{equation*} \int_{\mathcal{N}_j}\lvert\tilde D_{s_j}\eta_j\rvert^2 \, d\mathrm{vol}^\mathcal{N} \geq s_j^2\int_{B(Z, T/\sqrt{s_j})^c}\lvert \tilde{\mathcal A}(\eta_j)\rvert^2\, d\mathrm{vol}^\mathcal{N} - C_1 s_j \geq s_j\left(C T^2 \int_{B(Z,T/(\sqrt{s_j}))^c}\lvert\eta_j\rvert^2\, d\mathrm{vol}^\mathcal{N} - C_1\right). \end{equation*} But then, for $T> 2\sqrt{C_1/C} $, the preceding estimate contradicts \[ \lim _j \int_\mathcal{N} \lvert\tilde D_{s_j}\eta_j\rvert^2\, d \mathrm{vol}^\mathcal{N} \leq 2\lim _j \int_\mathcal{N} \lvert\tilde D_{s_j}\eta_j\rvert^2\, d \mathrm{vol}^N =0, \] where we used the volume inequality \eqref{eq:density_comparison}. This contradiction proves estimate \eqref{eq:est2'}. \end{proof} Finally, we turn to estimate \eqref{eq:est2}. \begin{proof}[Proof of estimate \eqref{eq:est2} of Theorem \ref{Th:hvalue}] Estimate \eqref{eq:est2} follows, for compactly supported sections $\eta \in V_{s, \varepsilon}^\perp \cap W^{1,2}_0(B_Z(4\varepsilon) ; E^0)$, from estimate \eqref{eq:est2'} by setting $k=0$. By Lemma~\ref{lemma:local_implies_global}, the same estimate is true for general sections $\eta \in V_{s, \varepsilon}^\perp$. This completes the proof. \end{proof} \medskip \vspace{1cm} \section{Morse-Bott example} \label{sec:Morse_Bott_example} On a closed Riemannian manifold $(X^n,g)$ the bundle $E\oplus F = {\Lambda}^\mathrm{ev} T^*X \oplus {\Lambda}^{odd}T^*X$ is a Clifford algebra bundle in two ways: \begin{align} \label{eq:hatcl} c(v) = v\wedge - \iota_{v^\#}\qquad \mbox{and}\qquad \hat{c}(w) = w\wedge + \iota_{w^\#}, \end{align} for $v,w\in T^*X$. One checks that these anti-commute: \begin{align} \label{eq:twocliff} c(v)\hat{c}(w)\ =\ - \hat{c}(w) c(v). \end{align} Note that $D=d+d^*$ is a first-order operator whose symbol is $c$. Fix a Morse-Bott function $f$ with critical $m_\ell$-dimensional submanifolds $Z_\ell$ and normal bundles $N_\ell$, so that the Hessian $\text{Hess}(f)_\ell:N_\ell \to N_\ell$ is symmetric, nondegenerate, and of Morse index $q_\ell$. Then Theorem~\ref{Th:mainT} shows that the low eigenvectors of \[ D_s = D + s{\mathcal A}_f = (d + d^*) + s\hat{c}(df) : \Omega^{ev}(X)\rightarrow \Omega^{odd}(X) \] concentrate around the critical submanifolds $Z_\ell$; fix a critical set $Z=Z_\ell$ of dimension $m$ with normal bundle $N$ and Morse index $q$. The splitting $T^*X\vert_Z= T^*Z\oplus N^*$ gives decompositions \begin{align*} \Lambda^{ev}X\vert_Z &= \Lambda^{ev}Z \otimes \Lambda^{ev}N \oplus \Lambda^{odd}Z\otimes \Lambda^{odd}N \\ \Lambda^{odd}X\vert_Z &= \Lambda^{ev}Z \otimes \Lambda^{odd}N \oplus \Lambda^{odd}Z \otimes \Lambda^{ev}N. \end{align*} The normal bundle $N$ is orientable and decomposes further into the positive, $N^+\to Z$, and negative, $N^-\to Z$, eigenbundles of the Hessian, the latter of rank $q$. In Morse-Bott coordinates near $p\in Z$ the function takes the form \[ f(x_i ,x_\alpha)= \sum_{\alpha= 1}^{n-m} \eta_\alpha x_\alpha^2 ,\quad \text{where} \quad \eta_\alpha = \begin{cases} -1&\quad \text{when $\alpha \leq q$} \\ 1&\quad \text{when $\alpha \geq q+1$} \end{cases}, \] and the nondegenerate Hessian has the form $\text{Hess}(f)\vert_Z = \text{diag}(\eta_\alpha)$.
Then the operators \[ M_\alpha^0 = - \eta_\alpha c(dx^\alpha) \hat{c}(dx^\alpha): {\Lambda}^\mathrm{ev} T_p^*X\rightarrow {\Lambda}^\mathrm{ev} T_p^*X, \ \text{for every $\alpha$,} \] are invertible, self-adjoint, have symmetric spectrum of eigenvalues $\pm 1$, and commute with each other. \begin{lemma} \begin{itemize} \item If $Z$ has index $q$, then the real line bundle $ \Lambda^qN^- \to Z$ is trivial. \item The $+1$-eigenspace of $M_\alpha^0$ is \[ S^{0+}_\alpha =\begin{cases} \{\xi\in \Lambda^{ev}X: \xi\wedge dx^\alpha=0\},&\quad \text{when $\alpha\leq q$} \\ \{\xi\in \Lambda^{ev}X: \iota_{\partial_\alpha}\xi =0\} ,&\quad \text{when $\alpha\geq q+1$} \end{cases}. \] An analogous description holds for $M_\alpha^1$, but with $\Lambda^{odd}X$ in place of $\Lambda^{ev}X$. \item If $q$ is even, then $S^{0+} \cong \Lambda^{ev}Z \otimes \Lambda^qN^-$ and $S^{1+} \cong \Lambda^{odd}Z \otimes \Lambda^qN^- $, and \item If $q$ is odd, then $S^{0+} \cong \Lambda^{odd}Z \otimes \Lambda^qN^-$ and $S^{1+}\cong \Lambda^{ev}Z \otimes \Lambda^qN^-$. \item The Clifford map $c:T^*Z \otimes \Lambda^*X \to \Lambda^*X $ restricts to the map $\bar c:T^*Z\otimes \Lambda^{ev}Z \to \Lambda^{odd}Z$, and $c_Z:T^*Z \otimes \Lambda^{ev}Z \otimes \Lambda^q N^- \to \Lambda^{odd}Z \otimes \Lambda^q N^-$ is given by $c_Z = \bar c\otimes 1_{\Lambda^q N^-}$ when $q$ is even, and by $c_Z = \bar c^*\otimes 1_{\Lambda^q N^-}$ when $q$ is odd. \end{itemize} \end{lemma} \begin{proof} The first bullet follows because $N^-\to Z$ is an orientable bundle. For the second bullet we use an orthonormal coframe $\{e^j\}_j$ at $p\in Z$ and decompose $\phi = \sum_I \lambda_I e^I$. Then, using \eqref{eq:hatcl}, $\phi\in S^{0+}_\alpha$ if and only if \begin{align*} \phi &= M_\alpha^0 \phi = - \eta_\alpha c(dx^\alpha) \hat{c}(dx^\alpha)\phi = -\eta_\alpha( dx^\alpha\wedge(\iota_{\partial_\alpha}\phi)\, -\, \iota_{\partial_\alpha}(dx^\alpha\wedge \phi)) \\ &= \eta_\alpha( \phi\, - \,2dx^\alpha\wedge (\iota_{\partial_\alpha} \phi)) \\ &= \eta_\alpha\left(\sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha\notin I\}} \lambda_I e^I - \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha \in I\}} \lambda_I e^I \right) \\ &= \begin{cases} \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha\notin I\}} \lambda_I e^I - \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha \in I\}} \lambda_I e^I , \quad \text{if $\alpha>q$} \\ -\sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha\notin I\}} \lambda_I e^I + \sum_{\{I: \lvert I\rvert\, \text{even},\ \alpha \in I\}} \lambda_I e^I , \quad \text{if $\alpha \leq q$}, \end{cases} \end{align*} where in the fourth equality we used Cartan's identity. It follows that when $\alpha>q$, $\lambda_I=0$ whenever $\alpha\in I$, and when $\alpha\leq q$, $\lambda_I=0$ whenever $\alpha\notin I$. Therefore we obtain the descriptions in the second bullet. Continuing the preceding argument, if $\phi \in \bigcap_\alpha S^{0+}_\alpha$, then $\lambda_I=0$ unless $\{1, \dots, q\} \subset I$ and $\{q+1, \dots, n-m\} \cap I = \emptyset$. Hence if $q$ is even, the third bullet holds, and if $q$ is odd, the fourth bullet holds. The last bullet is an easy consequence of the preceding bullets. \end{proof} The Levi-Civita connection of $X$ restricts to $Z$ and, together with the restricted Clifford action, induces the operator $d_Z+ d_Z^*: \Lambda^{ev}Z \to \Lambda^{odd}Z$.
The Localization Theorem, together with the Poincar\'e-Hopf Theorem, then gives \begin{align*} \chi(X) &= \mathrm{index\,}\left(d+d^*: C^\infty(X;\Lambda^{ev}X) \to C^\infty(X; \Lambda^{odd}X) \right) \\ &= \sum_\ell (-1)^{q_\ell} \mathrm{index\,}\left(d_\ell+d^*_\ell: C^\infty(Z_\ell;\Lambda^{ev}Z_\ell) \to C^\infty(Z_\ell; \Lambda^{odd}Z_\ell) \right) \\ &= \sum_\ell (-1)^{q_\ell} \chi(Z_\ell), \end{align*} a well-known identity from Morse-Bott homology. This recovers, in the Morse-Bott case, the localization of E.~Witten's well-known paper on Morse theory \cite{w1}.
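As a quick illustration of the preceding identity, consider a standard example (ours, not drawn from the text above): take $X = S^2 \subset \mathbb{R}^3$ with the Morse-Bott function $f(x,y,z) = z^2$. Its critical sets are the equator $\{z=0\}\cong S^1$, along which $f$ attains a nondegenerate minimum in the normal directions ($q=0$), and the two poles, which are nondegenerate maxima ($q=2$). The identity then reads \[ \chi(S^2) = (-1)^0\,\chi(S^1) + (-1)^2\,\chi(\mathrm{pt}) + (-1)^2\,\chi(\mathrm{pt}) = 0 + 1 + 1 = 2, \] as expected.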
\section{Introduction} \label{sec:intro} In the last century, the accelerated growth of urban areas has given rise to challenges at a variety of levels. Among these, mobility stands out. The ability to efficiently move people and goods is critical to a city's social and economic success \cite{de2014navigability,jiang2016timegeo,abbar2018structural}. It is unsurprising, then, that urban planners have devoted an enormous amount of economic and engineering effort to enhancing the efficiency of road networks, bus lines, and metro systems \cite{GAKENHEIMER1999671}. Unlike transportation modes that operate in exclusive spaces, such as metro lines, the uncontrolled rise of urban automotive mobility has gone hand in hand with the degradation of other modes of transportation. Of all these alternative modes, walking has suffered the most, due in large part to the fact that the share of the streetscape allotted to vehicles invades and interferes with the pedestrian space. Nevertheless, cities exhibit a growing tendency to stop and reverse this process by fostering more active, citizen-friendly transportation modes --foot, bike and personal mobility vehicles, which compete for this public space \cite{cervero2003}. One logical consequence of this paradigm shift is the increased level of interaction between pedestrians and motor vehicles, largely due to the overlapping use of common (or adjacent) spaces such as roads, sidewalks, and zebra-crossings. This increase gives rise to an important negative side-effect: a growth in pedestrian injuries and fatalities. Data from the National Highway Traffic Safety Administration (NHTSA) of the United States indicate that the number of pedestrian fatalities per year is rising in the U.S. \cite{fars2019}. After a steady decline from the mid-1990s to a low in 2009, there has been a clear and consistent reversal until 2017 (the last year of available data), when pedestrian fatalities surpassed the previous 23-year high set in 1995. Traditionally, pedestrian safety research has focused on the impact of structural factors (e.g. road lanes \cite{ukkusuri2012role}, traffic network structure \cite{rifaat2011effect,moeinaddini2014relationship}, existence of a direct line-of-sight between objects \cite{mecredy2012neighbourhood,fu2019investigating}, etc.). In addition, socio-behavioral factors may be concomitant, e.g. changes in individual behavior related to the use of new, distraction-causing technologies \cite{nasar2008mobile}, inside and outside of vehicles, whose prevalence is not likely to diminish in the future. Demographic variables (socio-economic status, race, gender) may play a role as well \cite{mukoko2019examining}. Nonetheless, crashes that involve motor vehicles and pedestrians are understudied and, at the micro level, even more so outside intersections \cite{hu2018dangerous}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.7\columnwidth]{fig1.png} \end{center} \caption{{\bf Accident distribution in Barcelona.} Relative concentration of accidents by type (vehicle-to-pedestrian, vehicle-to-vehicle).} \label{fig1} \end{figure} An enlightening example, built upon real accident data, is shown in Figure~\ref{fig1}. As is clear even to the naked eye, accidents involving vehicles may happen throughout a city.
However, when a distinction is introduced (vehicle-to-vehicle {\it vs.} vehicle-to-pedestrian), the spatial patterns where these accidents occur are mostly non-overlapping, suggesting that the configuration of the public space --the scene where the accident happens-- matters; see as well Figure~S1 in the Supplementary Information (SI). All in all, the strategies for the safe coexistence of pedestrians and vehicles demand a separate and careful examination. The combination of increasingly available street-level imagery sources and city open data portals, together with advances in the field of computer vision and larger training datasets \cite{zhou2014learning,zhou2017places}, has opened up promising new opportunities for facing challenges in urban science. Examples include the quantification of physical change and pattern identification in cities \cite{naik2017pnas,albert2017using,seiferling2017green}, \cb{road safety assessment \cite{song2018farsa}}, the prediction of human-perceived features of street scenes \cite{naik2014streetscore,liu2017machine}, the automated estimation of demographic variables across the United States \cite{gebru2017pnas} and Great Britain \cite{suel2019measuring}, or the beautification of urban images through the generation of prototypes \cite{kauer2018mapping}. Turning to transportation research, however, computer vision has focused mostly on traffic control and surveillance \cite{fadlullah2017state}, and automatic detection and collision prevention \cite{zhang2016faster,zhang2016far} for autonomous vehicles. Outside scene analysis, the Deep Learning paradigm has been exploited mostly on motor traffic \cite{polson2017deep,wu2018hybrid,zhang2018deep,wang2019enhancing,zhang2019multistep}, so far leaving aside its potential to tackle pedestrian safety. Here, we address the complexities of vehicle-to-pedestrian interaction combining the structural (scene elements) and perceptual (scene composition) aspects of the problem. \cb{Overall, the contributions of the present work can be summarized as follows: \begin{enumerate} \item Creating a dataset of urban street-level images labelled according to accidentality, based on open data municipal accident records. \item Developing a deep learning architecture, adapted from Deep Residual Networks (ResNet), for hazard index estimation in urban images, that works for both pedestrian and vehicle accidents, and is capable of producing city-wide hazard level landscapes at an unprecedented resolution of one value every 15-20 meters. \item Proposing a set of interpretability analyses to extract human meaning from the outputs of the classification, through customized implementations of Pyramid Scene Parsing networks (PSPNet), Gradient-weighted class activation mapping (GradCam++), radar plots, and a new measure of scene disorder. \item Designing a greedy heuristic to propose realistic urban interventions, based on scene segmentation, class activation mapping and a $k$-nn algorithm, which constitutes an informed guide for planners towards pedestrian safety improvements. \end{enumerate} Taken together, these points constitute a novel and comprehensive deep learning pipeline for estimating vehicle and pedestrian hazard in urban scenes, and recommending feasible physical improvements to make those same scenes safer.
The building blocks of the pipeline are tailored variants of different state-of-the-art deep learning/machine learning models and techniques (Deep Residual Networks (ResNet), Pyramid Scene Parsing Network (PSPNet), Gradient-weighted class activation mapping (GradCam++)). The remainder of the paper is organized as follows: in Section 2, data (collection, processing techniques and labelling) and methods (pipeline components) are described in detail; then, in Section 3, the results on the hazard index and landscape, its connection to scene composition, and the intervention heuristic are presented and discussed. Finally, Section 4 summarizes the work and discusses possible gaps and lines of development.} \section{Materials and Methods} In this Section we provide the details about the datasets and Deep Learning methods that are used throughout the work. For an introduction to the Deep Learning paradigm, with a focus on transportation systems, we refer to Wang {\it et al.} \cite{wang2019enhancing}. \subsection{Dataset collection and curation} To feed the proposed framework, we use two types of real urban data: historical accident statistics and street-level urban imagery. In the case of Madrid and Barcelona, historical accident records for the years 2010-2018 are available from the open data portals of the respective municipal governments \cite{mad2019acc,bcn2019acc}. For San Francisco, data covering 2015-2017 was filtered from the University of California, Berkeley's Transport Injury Mapping System (TIMS) of California traffic accidents \cite{tims2019}. In total, the Barcelona dataset was made up of 86,414 accidents (10,240 pedestrian, 76,174 vehicle). The Madrid dataset had 76,026 accidents (12,533 pedestrian, 63,492 vehicle). In San Francisco, the dataset was made up of 15,492 accidents (3,331 pedestrian, 12,161 vehicle). All data points are geolocated with their corresponding GPS coordinates. Besides location, and since the triggering causes may differ, we distinguish accidents where a vehicle and a pedestrian were involved (simply `pedestrian', or $P$, onwards) from vehicle-to-vehicle accidents (simply `vehicle', or $V$, onwards). The spatial distribution of empirical accident data for both vehicles and pedestrians can be seen in the SI, Figure~S1. Street-level imagery was extracted from two data sources. The Google StreetView (GSV) \cite{anguelov2010google} API was used for Barcelona and Madrid. In these datasets, images are, on average, 15 meters away from each other. As we wanted to capture the view of the driver, we limited our queries to images facing directly down the direction of traffic of the street. The result of this process was a comprehensive and homogeneous set of images for both cities. For the city of San Francisco, images were provided by Mapillary \cite{mapillary2019}, a crowd-sourced alternative to GSV. With Mapillary, all user-uploaded images are available under the CC-BY-SA license. As images are uploaded by private individuals working with different equipment, different setups, different light conditions, different vehicles, and without central coordination, this dataset presented several distinct challenges. Firstly, for each point provided, usually a single image was available. Occasionally, this image did not fit our criteria of facing down the direction of traffic, and had to be discarded.
Secondly, data was only available from a smaller part of the city, corresponding to the area covered by the Mapillary contributors. The part of San Francisco available in the dataset, consisting mostly of high-traffic streets, is shown in Figure~S2 of the SI. Combining data from different sources (GSV and Mapillary) allows us to test the robustness of our methods when dealing with similar, but not equally distributed, data. All the collected images, both for GSV and Mapillary, contain GPS locations in their metadata, which allows us to assign each street image a binary accident category (``safe'' vs. ``dangerous''). We categorize a point as ``dangerous'' if one or more accidents have occurred within a 50 meter radius of its location. Otherwise, the point is categorized as ``safe''. \cb{More details on the creation of the image dataset can be found in Section S1 of the SI, along with a more extended discussion of the trade-offs of using a radius to assign accidents to images in Section S4.} The large collection of images tagged according to accident category was divided into 6 different datasets, resulting from the combination of the three targeted cities and two accident types ($V$ and $P$). The characteristics of each dataset (number of images per dataset and category) are detailed in Table~\ref{tab:datasets_img}. Notice that the San Francisco datasets are much smaller than the Barcelona and Madrid ones. For the 6 datasets, data was randomly split into train and test sets, containing $90\%$ and $10\%$ of the images respectively. \begin{table*}[th] \centering \begin{tabular}{l|c|c|c|c|c} \hline {} & {} & \multicolumn{2}{c}{{\bf Vehicle} ($V$)} & \multicolumn{2}{c}{{\bf Pedestrian} ($P$)}\\ \hline \hline City & Total & Accident & No accident & Accident & No accident \\ \hline \hline Barcelona & 177645 & 61.8\% & 38.2\% & 48.1\% & 51.9\% \\ Madrid & 704950 & 48.3\% & 51.7\% & 29.1\% & 70.9\% \\ San Francisco & 162530 & 35.7\% & 64.3\% & 17.4\% & 82.6\% \\ \hline \end{tabular} \caption{Image dataset properties, comparing the relative proportion of points with and without accidents across the various cities. In all 3 cities, there is a higher proportion of points with vehicle-to-vehicle accidents than vehicle-to-pedestrian accidents. The relatively lower proportion of accident points in San Francisco reflects the smaller amount of accident data for that city.} \label{tab:datasets_img} \end{table*}
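For concreteness, the 50 meter labelling rule described above amounts to a radius query over a spatial index. The following is a minimal Python sketch (function and variable names are ours, for illustration, and not the code used in this work), assuming scikit-learn is available; the haversine metric expects $[\mathrm{lat}, \mathrm{lon}]$ pairs in radians.
\begin{verbatim}
# Minimal sketch of the binary labelling: an image is tagged "dangerous"
# if at least one accident lies within 50 m of its GPS location.
import numpy as np
from sklearn.neighbors import BallTree

EARTH_RADIUS_M = 6371000.0
RADIUS_M = 50.0

def label_images(image_latlon_deg, accident_latlon_deg):
    # Both arguments: (n, 2) arrays of [lat, lon] in degrees.
    tree = BallTree(np.radians(accident_latlon_deg), metric="haversine")
    counts = tree.query_radius(np.radians(image_latlon_deg),
                               r=RADIUS_M / EARTH_RADIUS_M,
                               count_only=True)
    return np.where(counts > 0, "dangerous", "safe")
\end{verbatim}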
\subsection{Hazard index estimation with Deep Learning} \cb{ A variety of Deep Learning architectures have been shown to be remarkably effective for many computer vision tasks \cite{lecun2015deep,schmidhuber2015deep}. In this work we use a Residual Neural Network (ResNet) \cite{resnet_v2}, a particular architecture of Convolutional Neural Network (CNN), to estimate the {\it hazard index} ($H$) in new, unseen images. The main characteristic of ResNets is the implementation of ``shortcut connections'' that skip blocks of convolutional layers, allowing the network to learn residual mappings between layers that mitigate the vanishing-gradient problem. For this critical step, all of the elements used were created from scratch -- training and test datasets, weight learning stage, etc. -- as is detailed in the following. We define our {\it hazard index} ($H$) as the probability that a target image is classified as `dangerous' by the ResNet. For this objective, we train the ResNet to first classify images between the two defined accident categories: `dangerous' and `safe'. For each street-level image, the classifier delivers a value $H$ in the range $[0,1]$. When $H \approx 1$, the point where the image was taken is considered as dangerous. On the contrary, when $H \approx 0$, the corresponding point is considered as safe. The hazard index is defined as the output of the Softmax activation function (between 0 and 1) of the last layer of the classifier architecture: \begin{equation} H = \frac{e^{z_i}}{\sum_{j=1}^{K}e^{z_j}} \label{hazard} \end{equation} where $z$ is the vector of output logits of the last ResNet layer, $i$ is the index of the `dangerous' class and $K$ is the number of classes. $H$ can be interpreted as the probability that the point related to a given image is hazardous. To successfully train our ResNet architecture for the required classification task, we start with a network pre-trained on the Imagenet dataset~\cite{imagenet}, and then, via ``transfer learning'' techniques, we fine-tune the network using our data. At this stage, we remove the connections from the last layer of the pre-trained ResNet model, replace it with a new layer with two outputs (categories \textsl{dangerous} and \textsl{safe}), and randomly initialize the layer's weights. We re-train (fine-tune) this last layer, leaving the rest of the CNN static. To compensate for class imbalance during the training stage, class weights were adjusted in the objective cross entropy loss function according to inverse class frequency: \begin{equation} w_i = \frac{1}{\ln(c+r_i)} \label{weights} \end{equation} where $w_i$ is the weight assigned to class $i$, $c$ is a parameter controlling the range of the valid values, and $r_i$ is the ratio of the number of samples of class $i$ to the total number of samples. The loss function then reads \begin{equation} Loss = -\frac{1}{N} \sum_{i=1}^{N} w_i \cdot (y_i \cdot \log(\hat{y_i})+(1-y_i) \cdot \log(1-\hat{y_i})) \label{loss} \end{equation} where $N$ is the number of samples, $w_i$ is the weight of the true class of sample $i$, and $y_i$ and $\hat{y_i}$ are the true label and the prediction for sample $i$, respectively. In accordance with the defined accident types ($V$ and $P$), we train our ResNet to estimate two subtypes of hazard index: $H_V$ and $H_P$, corresponding to the hazard indices for vehicle-to-vehicle and vehicle-to-pedestrian accidents, respectively. Therefore, we end up training 6 models in total, two per city.}
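The setup just described can be summarized in a short code sketch. We assume a PyTorch implementation and use torchvision's pre-trained ResNet-50 as a stand-in for the ResNet-v2-50 employed in the experiments; data loading is omitted, the class ratios are placeholders, and all names are illustrative rather than the code used in this work.
\begin{verbatim}
# Sketch of the fine-tuning setup: frozen pre-trained body, new two-way
# head, class-weighted cross entropy, and softmax hazard index.
import math
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)       # stand-in for ResNet-v2-50
for p in model.parameters():                   # keep pre-trained body static
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: [safe, dangerous]

c = 1.0                                        # parameter c of the weights
ratios = [0.62, 0.38]                          # placeholder class ratios r_i
w = torch.tensor([1.0 / math.log(c + r) for r in ratios])
loss_fn = nn.CrossEntropyLoss(weight=w)        # weighted cross entropy
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def hazard_index(batch):
    # H = softmax probability of the `dangerous' class (index 1).
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1)[:, 1]
\end{verbatim}
Note that PyTorch's \texttt{CrossEntropyLoss} applies the log-softmax internally, so the sketch realizes the weighted cross entropy loss without an explicit softmax layer in the model head.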
\subsection{Hazard index interpretability} One of the main shortcomings of Deep Learning techniques is (the lack of) interpretability. Certainly, deep neural networks can provide a high level of discriminative power, but at the cost of introducing many model variables, which eventually hinders the interpretability of their black-box representations \cite{adadi2018peeking}. \cb{This difficulty is especially pertinent in our case: improving pedestrian safety sometimes demands changes in the urban landscape, the question being {\it which} changes are pertinent. Here, we address this by using two different interpretability techniques. The first, scene disorder, is used to assess image complexity, and the second, Class Activation Mapping (CAM), to assess which areas are more informative for the estimation of the hazard index. In particular, CAM methods have recently been shown to be successful for interpretability tasks in several fields \cite{fukui2019attention,wagner2019interpretable,desai2020ablation,patro2019u}, including medicine \cite{wang2017diabetic}.} \subsubsection{Urban scene segmentation and scene disorder} First, in order to identify what objects are in the scene, and where they are positioned, we use urban scene segmentation. The goal of the semantic image segmentation task is to assign a category label to each pixel of an image. Segmentation provides a comprehensive breakdown of the physical elements visible in the scene: it predicts the label, location and mask for each object. For this task, we used the high-performance Pyramid Scene Parsing Network (PSPNet) \cite{pspnet} architecture, pre-trained with the Cityscapes dataset \cite{cityscapes}. PSPNet is a state-of-the-art deep learning model that exploits the capability of both global and local context information aggregation through several pyramid pooling layers. It has shown outstanding performance on several semantic segmentation benchmarks. Cityscapes is a real-world, vehicle-egocentric dataset for semantic urban scene understanding which contains 25K pixel-annotated images taken in different weather conditions. Images in Cityscapes are annotated with 30 urban object categories, but we used a subset of those (19) in our image repository segmentation --those that are common and relevant in driver-perspective scenes (e.g. ``car'', ``road'', ``sidewalk'', ``person'', ``traffic light'', etc.; see right-most labels in Figure~\ref{fig:CamAndSegmentation}). On top of the image segmentation outcome, we propose a measure of scene disorder inspired by the gray-tone spatial-dependence matrix \cite{haralick1973textural}, also known as the Gray-level co-occurrence matrix (GLCM), which captures the amount of transitions between adjacent pixels labelled with different categories. It is known that complex images (related to scene disorder) may cause a division of attention \cite{moray1959attention,kahneman1973attention,alvarez2004capacity,richards2010development} and, as a consequence, reduce attention towards objects that are relevant to urban hazard. Originally, GLCM characterizes the texture of an image by calculating how often pairs of pixels with specific values are adjacent in a specified spatial configuration. In our measure of scene disorder, the frequency of pairs of pixels with different values is calculated over the segmented image, where the value of a pixel corresponds to an urban object category, instead of a gray intensity as in the usual GLCM. We perform the calculation as follows: \begin{equation} SD = \sum_{i=0}^{m}\sum_{j=0}^{n} \big( \delta \left[ I(i,j) \neq I(i+1 , j) \right] + \delta \left[ I(i,j) \neq I(i , j+1) \right] \big) \label{spadisorder} \end{equation} where $\delta[x]$ is the Kronecker delta, valued 1 if the condition $x$ is met, and 0 otherwise; the two terms apply an offset of 1 in each of the two directions (right and below), so as to count the pixel value transitions in both. With this definition, the measure $SD$ is incremented by 1 for every pair of neighboring pixels that have differing values. Examples of scene disorder measures can be seen in Figure~\ref{fig:spatial_disorder}.
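A minimal NumPy sketch of this computation follows (ours, for illustration). We additionally divide by the number of adjacent pixel pairs -- an assumption on our part -- which yields normalized $SD$ values in $[0,1]$ as reported in the figures.
\begin{verbatim}
# Scene disorder of a segmented image: count label transitions between
# horizontally and vertically adjacent pixels; the division by the number
# of pairs (our assumption) normalizes SD to [0, 1].
import numpy as np

def scene_disorder(seg):
    # seg: (h, w) integer array with one object category label per pixel.
    right = seg[:, 1:] != seg[:, :-1]   # transitions to the right neighbor
    below = seg[1:, :] != seg[:-1, :]   # transitions to the neighbor below
    transitions = right.sum() + below.sum()
    return transitions / (right.size + below.size)
\end{verbatim}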
\begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{fig2.png} \end{center} \caption{{\bf Illustrating the concept of scene disorder.} Segmented images with low ($SD = 0.15$, a), mild ($SD = 0.39$, b), and high ($SD = 0.81$, c) scene disorder.} \label{fig:spatial_disorder} \end{figure} \subsubsection{Interpretability through Activation Mapping} Moving on to the second step of our interpretability process, Class Activation Mapping (CAM) \cite{cam} and related techniques (e.g.~gradient-weighted class activation mapping (GradCAM++) \cite{gradcam,gradcamplus}) are used to visually interpret the patterns of an image that are informative of a specific image category \cite{ventura2017interpreting,adadi2018peeking}; in our case, the regions that most influenced the classifier's decision to label an image as `dangerous'. GradCAM++ was used to identify the regions of the image that drive the `dangerous' classification. Given an input image and our trained CNN model, GradCAM++ generates a localization map, using the gradient information of the specific target class `dangerous' to compute the class weights of each feature map of the last convolutional layer of the CNN before the final classification. The final localization map is synthesized by aggregating the feature maps weighted by these target class weights. Generating a GradCAM++ map for the `dangerous' class helps to visually identify the specific patterns and objects learned by the CNN in order to differentiate between `safe' and `dangerous' scenes. Since the images have been fully segmented, we can retrieve the objects that overlap with the dangerous regions. Analyzing frequencies, we can recover which object categories are most relevant in determining $H_V$ or $H_P$. Figure~\ref{fig:CamAndSegmentation} shows one example per city in the first column and visualizations of the described techniques in the other columns. In particular, the second and third columns display $H_P$ and $H_V$, respectively, with the corresponding Class Activation Map. Areas in red are those most relevant to the hazard index, that is, areas that strongly contribute to increasing it. The last column shows the automatic segmentation of the images.
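To make the procedure concrete, the sketch below implements plain Grad-CAM with PyTorch hooks; GradCAM++, used in this work, refines the same scheme by weighting the gradients with higher-order terms, which we omit for brevity. All names are ours and illustrative.
\begin{verbatim}
# Plain Grad-CAM sketch (GradCAM++ adds higher-order gradient weights).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, layer):
    # image: (3, h, w) tensor; layer: e.g. model.layer4 for a ResNet.
    acts = []
    def keep(module, inputs, output):
        output.retain_grad()           # keep the gradient on the activation
        acts.append(output)
    handle = layer.register_forward_hook(keep)
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)
    logits = model(x)
    handle.remove()
    model.zero_grad()
    logits[0, target_class].backward() # e.g. the `dangerous' class
    A = acts[0]                        # activations, shape (1, K, h', w')
    weights = A.grad.mean(dim=(2, 3), keepdim=True)  # pooled gradients
    cam = F.relu((weights * A).sum(dim=1))           # shape (1, h', w')
    return (cam / (cam.max() + 1e-8))[0].detach()
\end{verbatim}
The resulting low-resolution map is then upsampled to the input size and overlaid on the image, as in the central columns of Figure~\ref{fig:CamAndSegmentation}.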
\subsection{A greedy heuristic to improve $H$} \label{sec:greedy} \cb{The combination of the Class Activation Mapping and image segmentation described in the previous section gives us insight into which regions and objects of a scene contribute most to its estimated hazard level. While this information is already relevant, it provides users with no concrete recommendations for structural changes to the scene that might make it safer. Accordingly, as a final step in the pipeline, we propose a strategy to exploit the large pool of images available in order to identify, for each scene, realistic and potentially low-cost physical alterations that would diminish $H_P$ and $H_{V}$ the most.} \begin{figure*}[h!] \begin{center} \includegraphics[width=1\columnwidth]{fig3.png} \end{center} \caption{{\bf Image hazard reduction flowchart.} Processing pipeline to improve the most hazardous parts of a street-level image $i$, comparing the new image with similar partner images $j$, and arriving at a new $H_P$ and $H_{V}$ for the original image.} \label{fig:process} \end{figure*} To this end, we take advantage of the methodologies developed in the previous steps. On the one hand, the segmentation task allows us to identify which objects among $C$ categories are present in a given scene (and to what extent). On the other, CAM provides information regarding which regions of the scene contribute most to the estimated hazard score. With this information at hand, for every image $i$ we build a vector of characteristics $v_i \in \mathbb{R}^{C}$, containing the relative area of each of the $C$ categories in $i$. For the target scene (the one for which we intend to reduce the hazard levels), we construct an additional surrogate vector of characteristics, $\tilde{v}_i$, in which we discard those regions that contribute most to $H_P$, i.e.~we only consider regions of $i$ where the class activation is mild-to-low ($< 0.7$); see the first and second blocks in Figure~\ref{fig:process}. Next, we deploy an exhaustive search to find the five mirror images $j$ for $\tilde{v}_i$, with their respective vectors of characteristics $v_j$, such that their hazard index is lower: \begin{eqnarray} \label{eq_mirror} & \mbox{argmin}_{j} ||\tilde{v}_i - v_j||_{2}\\ & H^{j}_P < H^{i}_P \nonumber \\ & H^{j}_V < H^{i}_V \nonumber \end{eqnarray} In other words, we seek the most similar locations in the city that have smaller $H_P$ and $H_{V}$ than $i$; see Fig.~\ref{fig:process} for a schematic representation of the process. The search for mirror images is limited to structurally similar scenes (compared to the original one), in order to promote simple and feasible interventions. \cb{We emphasize that this strategy is designed to be used in tandem with human users, who will be able to judge which recommendations are realistic. The choice of five images allows for some diversity in the range of interventions recommended.} Finally, we remark that our approach is very similar to the regressive $k$-nearest neighbor ($k$-nn) algorithm \cite{harrington2012machine}, as opposed to a more sophisticated, Deep Learning-based mechanism for image ``safe-fication'' (following the concept of ``beautification'' in ref. \cite{kauer2018mapping}). These techniques lie beyond the scope of the present work.
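In practice, the mirror-image search amounts to a constrained nearest-neighbour query over the characteristic vectors. A minimal brute-force sketch follows (ours, for illustration; variable names are not those of the original implementation).
\begin{verbatim}
# Mirror-image search: among images with strictly lower H_P and H_V than
# image i, return the k whose characteristic vectors are closest to the
# surrogate vector of i.
import numpy as np

def mirror_images(i, v_tilde, V, H_P, H_V, k=5):
    # v_tilde: (C,) surrogate vector of image i (high-CAM regions removed)
    # V: (n_images, C) characteristic vectors; H_P, H_V: (n_images,) hazards
    mask = (H_P < H_P[i]) & (H_V < H_V[i])
    mask[i] = False                       # exclude the target scene itself
    candidates = np.flatnonzero(mask)
    dists = np.linalg.norm(V[candidates] - v_tilde, axis=1)
    return candidates[np.argsort(dists)[:k]]
\end{verbatim}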
\section{Experiments and Results} \cb{ \subsection{Hazard index estimation} We begin the results section by assessing how well our trained ResNet performs the required classification task for the six datasets we have defined, considering the cities of Barcelona, Madrid, and San Francisco. Images belonging to the `dangerous' class are defined as positive, while those belonging to the `safe' class are defined as negative. In the training stage, the parameter $c$ of Eq.~\eqref{weights} was experimentally set to 1. For our results, we focus on the following measures: recall, precision and accuracy; and the indicators FP (False Positives), TP (True Positives), TN (True Negatives) and FN (False Negatives). Recall refers to the fraction of samples detected as dangerous over the total number of dangerous samples in the dataset (TP over TP+FN). Precision is the fraction of the true dangerous points detected over the number of points detected as dangerous by the ResNet (TP over TP+FP). Accuracy measures how good the system is at detecting dangerous points (TP+TN over all the samples). As we can see in Table~\ref{tab:accuracy}, the obtained accuracy is remarkable for all datasets, considering that the CNN training stage relies only on visual information, along with a binary tag indicating the occurrence (or not) of an accident within a 50m radius (sensitivity with respect to radii is discussed in Section~S4.1 and Figure~S7 of the SI). As illustrated examples of hazard index estimation, see the scores in the central columns of Figure~\ref{fig:CamAndSegmentation}. } \begin{table*}[th] \centering \begin{tabular}{l|c|c|c|c|c|c|c} & Recall & Prec. & Acc. & FP & TP & TN & FN \\ \hline \hline Barcelona $P$ & 0.86 & 0.72 & 0.75 & 17.8\% & 45.4\% & 29.8\% & 7\%\\ Barcelona $V$ & 0.77 & 0.84 & 0.82 & 7.1\% & 37.9\% & 44.1\% & 10.9\%\\ \hline \hline Madrid $P$ & 0.76 & 0.75 & 0.75 & 12.4\% & 37.5\% & 38\% & 12.1\%\\ Madrid $V$ & 0.73 & 0.74 & 0.75 & 12\% & 35.2\% & 40.1\% & 12.7\%\\ \hline \hline San Francisco $P$ & 0.63 & 0.81 & 0.76 & 6.6\% & 29\% & 47.7\% & 16.7\%\\ San Francisco $V$ & 0.61 & 0.82 & 0.74 & 6.3\% & 30.1\% & 44.7\% & 18.9\%\\ \end{tabular} \caption{Results of the Deep Learning approach for accident prediction, considering a 50 meter radius. Rows labelled as $P$ and $V$ correspond to the vehicle-to-pedestrian and vehicle-to-vehicle accident datasets, respectively. Results for other radii can be seen in Table~S1 of the SI.} \label{tab:accuracy} \end{table*} \cb{Additionally, we compared the performance of different ResNet and other state-of-the-art architectures on the Barcelona dataset. Metrics like the F1-score, the area under the Precision-Recall (PR) curve, and the area under the Receiver Operating Characteristic (ROC) curve were used for comparison as well. The F1-measure provides a balance between precision and recall in a single score: \begin{equation} F1=2\cdot\frac{\mathrm{precision}\cdot \mathrm{recall}}{\mathrm{precision}+\mathrm{recall}} \end{equation} The PR curve, in turn, represents the balance between precision and recall across different thresholds between 0 and 1. Like the PR curve, the ROC curve is computed across different thresholds, plotting the false positive rate versus the true positive rate. The results presented in Table~\ref{tab:dl_models} show that ResNet-v2-50 offers the highest performance for this particular image classification task. } \cb{Discerning between safe and dangerous locations in a binary fashion might be limiting in several practical scenarios, such as the prioritization of urban interventions to improve pedestrian safety. To assess to what extent we can produce finer results, we have also implemented the method in \cite{frank2001simple} to learn an ordinal regressor. In this case, the Barcelona pedestrian dataset was divided into four rating classes: \textsl{no-danger}, \textsl{mild-danger}, \textsl{danger} and \textsl{high-danger}. Images tagged as `no-danger' correspond to locations where no accidents were observed. Images in the class `mild-danger' have one accident nearby, and images in the class `danger' have between 2 and 5 accidents nearby. Finally, images belonging to the class `high-danger' have more than 5 accidents in their vicinity. The dataset proportions were approximately 85k, 34k, 40k and 17k image samples, respectively. The method in \cite{frank2001simple} relies on several binary classifiers. We used our same ResNet architecture for each of those binary classifiers.
After training, we obtained a balanced accuracy of 0.47 (compared with a dummy-classifier accuracy of 0.25), which is comparable to the performance reported in \cite{song2018farsa} for a similar task. That is, the ResNet architecture can also provide competitive results for a finer assessment of pedestrian safety.} \begin{table*}[t] \centering \begin{tabular}{l|c|c|c|c|c|c} Model & Acc. & Prec. & Recall & F1-Score & PR & ROC \\ \hline \hline VGG16 \cite{simonyan2014very}& 0.61 & 0.58 & 0.96 & 0.72 & 0.78 & 0.59 \\ \hline VGG19 \cite{simonyan2014very}& 0.68 & 0.73 & 0.62 & 0.67 & 0.77 & 0.68 \\ \hline Inception-V3 \cite{szegedy2016rethinking} & 0.70 & 0.70 & 0.75 & 0.72 &0.79 & 0.70 \\ \hline Inception-V4 \cite{szegedy2017inception}& 0.57 & 0.80 & 0.24 & 0.37 & 0.72 & 0.59 \\ \hline Mobilenet \cite{howard2017mobilenets} & 0.62 & 0.77 & 0.39 & 0.52 &0.74 & 0.63 \\ \hline ResNet-v1-50 \cite{resnet} & 0.61 & 0.80 & 0.35 & 0.49 &0.75 & 0.63 \\ \hline ResNet-v1-101 \cite{resnet}& 0.59 & 0.56 & 0.99 & 0.71 &0.78 & 0.57 \\ \hline ResNet-v1-152 \cite{resnet}& 0.67 & 0.71 & 0.62 & 0.66 & 0.76 & 0.67 \\ \hline ResNet-v2-50 \cite{resnet_v2}& \textbf{0.75} & 0.72 & 0.87 & \textbf{0.78} & \textbf{0.82} & \textbf{0.74} \\ \hline ResNet-v2-101 \cite{resnet_v2}& 0.72 & 0.75 & 0.70 & 0.72 & 0.80 & 0.72 \\ \hline ResNet-v2-152 \cite{resnet_v2}& 0.72 & 0.74 & 0.72 & 0.73 & 0.80 & 0.72 \\ \end{tabular} \caption{\cb{Results of the Deep Learning approach for accident prediction, considering different classification architectures.}} \label{tab:dl_models} \end{table*} \begin{figure*}[h!] \begin{center} \includegraphics[width=1\columnwidth]{fig4.pdf} \end{center} \caption{{\bf Deep Learning approach: classification, segmentation and interpretability.} The figure displays image examples from Barcelona, Madrid and San Francisco, one location per row. The first column shows the original street view image. The second and third columns correspond to the obtained CAM for the pedestrian and vehicle datasets, respectively. The last column corresponds to the outcome of the segmentation task. The example in the Barcelona location (top row) is classified as dangerous for pedestrians (note the score in each picture), but safe for vehicles. The second example, corresponding to a Madrid location, is classified as dangerous for vehicles, but safe for pedestrians. Finally, the third example corresponds to a San Francisco location. Notice that, in this last case, the location is dangerous for both pedestrians and vehicles, but the CAM highlights different regions: areas increasing the hazard for pedestrians may not coincide with those increasing hazard for vehicles. Images courtesy of Google, Inc. and Mapillary.} \label{fig:CamAndSegmentation} \end{figure*} \subsection{Urban hazard landscape} The first remarkable outcome of the described methodology (in particular, Section~2.2) is a fine-grained map of hazard indices throughout the cities under study. The Deep Learning approach, together with the short distance intervals between consecutive images, allows us to quantify the safety of all city locations at a microscopic level, i.e. approximately every 15 meters (see Figures~S3 and S4 in the SI), independently of whether accidents have occurred at a given site or not. \begin{figure*} \begin{center} \includegraphics[width=1\columnwidth]{fig5.png} \end{center} \caption{{\bf Spatial distribution of hazard index}. Distribution of high-hazard points for pedestrians and vehicles across all three cities of study.
Points displayed are those for which the hazard is high for pedestrians (vehicles) but not for vehicles (pedestrians).} \label{fig:hazardHistograms} \end{figure*} To give a complete picture of hazard for pedestrians and vehicles, and to highlight their differences, Figure~\ref{fig:hazardHistograms} shows the spatial distribution of points that were identified as very hazardous for pedestrians ($H_P \geq 0.66$), but with low-to-moderate hazard for vehicles ($H_V < 0.66$), and vice-versa. As can be seen, in both Madrid and Barcelona, areas of high hazard for pedestrians alone are highly concentrated in the denser, older city centers. High levels of vehicle hazard tend to be distributed around arterial roads, as well as some distinct neighborhoods (e.g. Sant Mart\'i-Poble Nou, middle right corner in Barcelona). San Francisco presents an interesting case in which the two spatial distributions are nearly homogeneous. This can likely be explained by the bias towards residential, medium-density areas in our image coverage for the city (see Materials and Methods for further discussion). Notably, we lacked image coverage in high-density downtown San Francisco, as well as in peripheral low-density districts. With the inclusion of such zones, it is possible that clearer spatial patterns would emerge, although they might be distinct from those of denser European cities like Barcelona and Madrid \cite{louf2014typology}. Nevertheless, it should be noted that competitive levels of precision and accuracy were still achieved in San Francisco, indicating that our method is robust to relatively homogeneous training data. \cb{Furthermore, it shows that the classifier need not be applied only to comprehensive collections of images from an entire city, but can function well on sufficiently rich, spatially homogeneous samples of images}. Separate visualizations for pedestrian and vehicle hazards are available in the SI, Figure~S3. It is worth highlighting that there has been no previous attempt to associate a given street image with traffic hazard levels --unlike other urban attributes (e.g. beauty \cite{quercia2014shortest,naik2017pnas}, or security \cite{naik2014streetscore}). Here, we do so under the assumption that street-level imagery is a good proxy for both the structural and perceptual complexity of the city landscape. Typically, traffic-related risk is either aggregated at the macro-level (neighborhoods, census tracts, even counties) \cite{huang2010county,ukkusuri2012role,chen2016effects}, or painstakingly micro-tailored to very specific settings (e.g. considering only zebra crossings \cite{olszewski2016pedestrian}). However, initiatives like Vision Zero, involving governments and organizations worldwide, demand new streams of data and methodologies that help address the street safety challenge at the finest level {\it and} at scale. This is achieved here by combining images and accident data. \subsection{Mapping safety to scene composition} The second (segmentation) and third (Class Activation Mapping, CAM) processing steps complete the data analysis pipeline, linking the hazard indices, $H_{P}$ and $H_{V}$, to specific objects found in street-level images. In practice, such a link is established by combining the information in the central and right columns of Figure~\ref{fig:CamAndSegmentation}. Mapping each pixel label (e.g. ``road'', ``sidewalk'', etc.)
to its corresponding activation level (heatmap in the central columns of Figure~\ref{fig:CamAndSegmentation}) provides a quantification of the contribution of that pixel to the overall hazard score of the image. Thus, at the city level, we can obtain a global perspective of the categories that most contribute to the hazard index. \cb{ Figure~\ref{fig:radar} (panels a and b) illustrates this for the central area of Barcelona. These radar plots show the level of object fixation of the CAM model for pedestrians (a) and cars (b). In both cases, the blue line represents safe scenes ($H < 0.33$), while dangerous ones ($H > 0.66$) are shown in red. Specifically, we plot the ratio between the CAM fixation on a given category in safe (respectively, dangerous) scenes and the CAM fixation on that category across all the images of the dataset. Thus, categories with values below 1 in the radar plots are underrepresented, while those above 1 are overrepresented. We would like to highlight that we have restricted the analysis to the city center, to avoid overstating the presence of natural elements (vegetation and sky) in low-accident-risk images. Remarkably, the presence of people in a scene is correlated with a dangerous classification for both vehicle-to-pedestrian and vehicle-to-vehicle predictions. Low buildings and/or wide streets (tantamount to a clear view of the sky) correlate with safer scenes for pedestrians, whereas the presence of buildings implies a safer environment for vehicles. Also, the absence of vegetation, such as trees, could be contributing to a safe classification for vehicles. } Radar plots for Madrid (see SI, Fig.~S5) closely resemble the Barcelona ones, while those for San Francisco (Fig.~S6) show completely different patterns: for pedestrians, the presence of sidewalks --and not people-- is identified as the strongest driver for high $H_{P}$. Again, the distinct layouts and walking habits of European and North American cities may be directly related to these emergent patterns. \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{fig6.png} \end{center} \caption{{\bf Hazard level interpretability.} {\bf Top:} Radar plots showing the level of object fixation of the CAM model for pedestrians (a) and cars (b). For both, the blue area corresponds to images classified as safe ($H < 0.33$), while scenes classified as dangerous ($H > 0.66$) are mapped on the plot as red. To build these radars, each individual image is mapped to the radar categories (a relevant subset of those detected by the segmentation task), and the average of such mappings is shown. {\bf (c)} The plot shows the triple relationship between $H_{P}$, $H_{V}$ and the color-coded level of disorder (adapted from \cite{haralick1973textural}), where warmer colors denote higher disorder. The plot corresponds to Barcelona.} \label{fig:radar} \end{figure} Moving further, we can relate hazard levels to scene complexity. While the radar plots show interesting information, they are blind to the specific composition of urban scenes, i.e. whether categories appear in a clustered or fragmented way. To grasp this information, we quantify scene disorder ($SD$) as defined in Equation~\ref{spadisorder}, see Methods above. Figure~\ref{fig:radar}c shows a hexbin scatter plot of hazard indices ($H_{V}$ against $H_{P}$), with a color-coded third dimension that corresponds to scene disorder, normalized in the range $[0, 1]$.
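For concreteness, the per-category CAM fixation underlying the radar plots above can be computed along the following lines (a minimal sketch, assuming CAMs upsampled to the segmentation resolution; the function and variable names are our own illustrative choices, not part of the released pipeline):
\begin{verbatim}
import numpy as np

def cam_fixation(cam, seg, n_cat):
    # Share of one image's CAM activation falling on each segmentation
    # category; cam and seg are HxW arrays, seg holds integer labels.
    fix = np.array([cam[seg == c].sum() for c in range(n_cat)])
    total = fix.sum()
    return fix / total if total > 0 else fix

def radar_ratios(cams, segs, hazard, n_cat, low=0.33, high=0.66):
    # Mean per-category fixation in safe (H < low) and dangerous
    # (H > high) scenes, divided by the mean over all images;
    # values above 1 mean the category is overrepresented.
    fix = np.array([cam_fixation(c, s, n_cat)
                    for c, s in zip(cams, segs)])
    hazard = np.asarray(hazard)
    mean_all = fix.mean(axis=0)
    overall = np.where(mean_all > 0, mean_all, 1.0)
    return (fix[hazard < low].mean(axis=0) / overall,
            fix[hazard > high].mean(axis=0) / overall)
\end{verbatim}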
A first observation is that $H_{P}$ and $H_{V}$ are positively correlated. More interestingly, it is clear that more complex scenes (warmer colors) correspond to more dangerous ones. In Figure~S5c of the SI, an even clearer trend is shown for Madrid. On the other hand, the level of disorder in San Francisco scenes is high when $H_{P} \approx H_{V} \approx 1$, but not clearly related to either $H_{P}$ or $H_{V}$ for the rest of the values (see Figure~S6c). All in all, the connection between image complexity and hazard (especially for vehicles) suggests that more research is needed in this direction. While certain distractions are very explicit (e.g. attending to the mobile phone), the perils of scene disorder are subtle and implicit (in the sense that they are not obvious on visual inspection). \subsection{An informed guide to pedestrian safety improvements} A hasty reading of Figure~\ref{fig:radar} may suggest unfeasible interventions: replacing built space with larger green areas, reducing building heights, or widening streets would suffice to improve pedestrian safety, but none of these represents a realistic approach. Instead, we resort to the greedy strategy developed in Section~\ref{sec:greedy} to propose interventions conducive to scene alterations that diminish $H_P$ and $H_{V}$ most. Figure~\ref{fig:heuristics}a shows the results of the application of this optimization to the set of images in Barcelona (Figure~S8 in the SI for Madrid and San Francisco). On some occasions the hazard index cannot be reduced (points near the $(1,1)$ coordinate). And yet, many locations present a potential to decrease the hazard levels, with extreme improvements observed in some scenarios (points near the $(0,0)$ coordinate). The grey intensity in Fig.~\ref{fig:heuristics}a reflects the density of observations in that area. \cb{To provide a baseline for comparison, panel b shows alternative results considering a dummy $k$-nn regressor that does not take our hazard index into account. Ratios larger than 1 indicate an increase in $H_{V}$ or $H_{P}$, and ratios lower than 1 indicate a decrease. The average improvement in both dimensions is close to zero, evidencing that, with a dummy regressor, we have no guarantee of reducing either pedestrian or vehicle hazard.} Figure~\ref{fig:heuristics}c shows a selection of two targets and their most similar mirror images, illustrating some common interventions proposed by the heuristic (more examples, for the three cities under study, can be found in Figure~S9 of the SI). Visually, all of them seem to point at simplifications of the original image -- mostly removing objects on sidewalks. \begin{figure*}[h!] \begin{center} \includegraphics[width=0.9\columnwidth]{fig7.png} \end{center} \caption{{\bf Hazard reduction: results. (a)} Expected improvement for pedestrian and vehicle hazards, with respect to their original values. The horizontal axis corresponds to the ratio between the improved and the original pedestrian hazard index, $\tilde{H}_P / H_{P}$; while the vertical axis represents the equivalent ratio for vehicles, $\tilde{H}_V / H_{V}$. Grey intensity represents the density of observations in a given area of the plot. \cb{{\bf (b)} Expected improvement of a dummy $k$-nn algorithm that only considers similarity between images. This can be regarded as a baseline for the results in panel (a).} {\bf (c)} Examples of original and mirror images in Barcelona and Madrid. {\bf (d)} Chord diagram representing an aggregate overview of proposed interventions in Barcelona.
The most notable outcome from the diagram is the propensity to reduce the space allotted to roads and buildings, exchanging it for emptier, greener scenes.} \label{fig:heuristics} \end{figure*} Finally, Figure~\ref{fig:heuristics}d provides a visual overview of the most frequent interventions predicted by our optimization scheme, in the case of Barcelona. The color of the link connecting two categories expresses the source of that link. The most notable changes point --perhaps unsurprisingly-- to the need to reconfigure urban scenes towards greener and wider spaces: indeed, both categories 'road' and 'building' contribute largely to 'nature', while the latter does the same towards 'sky'. Madrid presents an almost identical trend, while San Francisco shows a less clear pattern (although the relevance of 'nature' and 'sky' is still clear). Both diagrams are available in the SI, Figure~S10. Overall, the estimations and insights from the panels in Fig.~\ref{fig:heuristics} can provide urban planners with initial indications of which items could be removed or relocated to achieve potential reductions of a local hazard score. \section{Discussion} \label{sec:discussion} As cities become increasingly populated, interactions between pedestrians and motorized vehicles become ubiquitous. This translates into a growing number of pedestrian-vehicle accidents. Complementary to the efforts by urban planners, public authorities and sensor technology designers, we present here an automated scheme that exploits a wide range of Computer Vision methods (classification, segmentation and interpretability techniques) to reduce traffic-related fatalities. The proposed processing pipeline, conveniently fed with rich sources of open data, renders a holistic characterization of a city's hazard landscape, capturing the physical (scene structure) and perceptual (scene complexity) characteristics from a car driver's point of view. Beyond its informative value, the hazard landscape provides actionable insights to planners. The main strength of our proposal lies in its simplicity, and in its potentially universal applicability given a comprehensive street image collection and a rich accident dataset. Even crowd-sourced imagery, which is unavoidably diverse and often sparse, provides a solid starting point to quantify safety at a below-segment level. A global, automated, data-driven endeavour towards improving pedestrian safety is not out of reach, considering the advances in cities' public data portals, and the wide coverage of proprietary services like Google Street View or open initiatives like Mapillary. Our approach opens a promising line of development. \cb{The hazard landscape is defined at an unprecedented, sub-segment resolution level --roughly a hazard score every 15 meters-- through an automated and scalable classification process}. This is well beyond macroscale approaches (e.g. crash hotspots), and extends the emphasis on intersections \cite{hu2018dangerous}. Such a fine-grained map adds a valuable geoinformation layer to those already in use --traffic and pollution levels \cite{xu2019unraveling}, land and underground transportation systems, crime, etc.-- enabling better route design: safe paths, along with clean, beautiful, or shortest ones. Additionally, segmentation and interpretability methods unveil the relationship between potential danger and specific objects in urban scenes.
What is more, the disposition of those objects is related to the hazard indices, adding a perceptual-attentional link to other possible concomitant variables that affect vehicle and pedestrian safety. Along this line, our work \cb{can be used in conjunction with other similar pipelines, such as \cite{song2018farsa}, which automates road safety assessment in terms of infrastructure and estimates road attributes, or may contribute to more focused analyses, relating what a person pays attention to while driving \cite{palazzi2018predicting}. Additionally, further information such as temporal accident data, or factors known to influence the accident rate (e.g. weather, lighting conditions, distraction, asphalt conditions, road signaling), could be included by using, for instance, a multi-branch convolutional neural network, to obtain a richer prediction model.} On the other hand, the step from descriptive (hazard landscape) to actionable insights paves the way to automated, computer-aided prioritization of urban interventions. The proposed heuristic towards safety improvements can serve as a novel tool for planners and policy makers, and might trigger the development of more sophisticated approaches such as the use of Generative Adversarial Networks to produce virtual, plausible alternatives to target scenes (seeking, for instance, ``safe-fication'' instead of ``beautification'' \cite{kauer2018mapping}). These techniques could be complemented with intervention cost quantification, considering cost-safety trade-offs as well. \section*{Acknowledgements} All authors acknowledge financial support from the Direcci\'on General de Tr\'afico (Spain), Project No. SPIP2017-02263, as well as TIN2015-66951-C2-2-R and RTI2018-095232-B-C22 grants from the Spanish Ministry of Science, Innovation and Universities (FEDER funds). CB and DR acknowledge as well the support of a doctoral grant from the Universitat Oberta de Catalunya (UOC). CB, DM and AL acknowledge the NVIDIA Hardware grant program. Street network data copyright OpenStreetMap contributors, available from https://www.openstreetmap.org.
\section{Introduction}\noindent K\"ahler manifolds are the Hermitian manifolds which possesses the symplectic structure obeying the specific compatibility condition with the Riemann (and/or complex) structure \cite{arnold}. Being highly common objects in almost all areas of theoretical physics, these manifolds usually appear as configuration spaces of the particles and fields. Only in a limited number of physical problems they appear as phase spaces, mostly for the description of various generalizations of tops, Hall effect (including its higher-dimensional generalizatons, see, e.g. \cite{nair} and refs therein), etc. Respectively, the number of the known nontrivial (super)integrable systems with K\"ahler phase spaces is very restricted, and their study does not attract much attention. The widely known integrable model with K\"ahler phase space extensively studying nowadays is compactified Ruijsenaars-Schneider model with excluded center of mass, whose phase space is complex projective space \cite{RS}. On the other hand, there are some indications that K\"ahler phase spaces can be useful for the study of conventional Hamiltonian systems, i.e. for the systems formulated on cotangent bundle of Riemann manifolds. A very simple example of such system is one-dimensional conformal mechanics formulated in terms of Lobachevsky plane (``noncompact complex projective plane") treated as a phase space \cite{lobach}. Such description, being quite elegant, allows immediate construction of $\mathcal{N}=2M$ superconformal extension associated with $su(1,1|M)$ superalgebra. Recently the similar formulation of some higher-dimensional systems was given in terms of $su(1,N)$-symmetric K\"ahler phase space treated as the non-compact version of complex projective space \cite{kns}. In such approach all symmetries of the generic superintegrable conformal-mechanical systems acquire interpretation in terms of the powers of the $su(1,N)$ isometry generators. The maximally superintegrable generalizations of the Euclidean oscillator/Coulomb systems has also been considered, all the symmetries of these superintegrable systems were expressed via $su(1,N)$ isometry generators as well. However, the supersymmetrization aspects of that system was not considered there at all. In the present paper we construct the $\mathcal{N}$-extended superconformal extensions of the systems considered in \cite{kns}, as it was done in \cite{lobach} for one-dimensional case. Namely, we consider the systems with $su(1,N|M)$-symmetric $(N|M)_{\mathbf{C}}$-dimensional K\"ahler phase superspace (in what follow we denote it by $\widetilde{\mathbb{CP}}^{N|M}$) and relate their symmetries with the isometry generators of the super-K\"ahler structure. We construct this superspace reducing the $(N+1|M)_{\mathbf{C}}$-dimensional complex pseudo-Euclidean superspace by the $U(1)$-group action and then identify the reduced phase superspace with noncompact analogue of complex projective superspace constructed in \cite{jmp}. We parameterize this superspace by the complex bosonic variable $w$, $ {\rm Im}\; w < 0$, by the $N-1$ complex bosonic variables $z^\alpha \in [0, \infty),\; {\rm arg }\;z \in [0; 2\pi)$, and by $M$ complex fermionic coordinates $\eta^A$. Thus, it can be considered as the $N$-dimensional extension of the Klein model of Lobachevsky plane \cite{dnf}. 
This allows us to connect the complex coordinate $w$ with the radial coordinate and momentum of the conformal-mechanical system spanned by the $su(1,1)$ subalgebra, and to separate the $su(1,1)$ generators, interpreting them as the Hamiltonian, the generator of conformal boosts and the generator of dilatations. The remaining bosonic coordinates $z^\alpha$ parameterize the angular part of the integrable conformal mechanics with Euclidean configuration spaces\footnote{The convenience of separating the radial coordinate from the angular ones in the study of conformal mechanics and in their supersymmetrization was demonstrated, e.g., in \cite{angular}}. Relating the angular coordinates and momenta with the action-angle variables, we describe all symmetries of the generic superintegrable conformal-mechanical systems in terms of the powers of the $su(1,N)$ isometry generators. An important aspect of the proposed approach is the choice of canonical coordinates in which all fermionic degrees of freedom appear only in the angular part of the Hamiltonian. Furthermore, we construct the super-analogues of the maximally superintegrable generalizations of the Euclidean oscillator/Coulomb systems considered in \cite{kns} as follows: we preserve the form of the Hamiltonian expressed via the generators of the $su(1,1)$ subalgebra but extend the phase space $\widetilde{\mathbb{CP}}^{N}$ to the phase superspace $ \widetilde{\mathbb{CP}}^{N|M}$. As a result, we find that these superextensions preserve all symmetries of the initial bosonic Hamiltonians and possess a maximal set of functionally independent fermionic integrals, i.e. they remain superintegrable in the sense of the super-Liouville theorem. We also find that the constructed oscillator-like systems (in contrast with the Coulomb-like ones) possess a deformed $\mathcal{N}=2M$, $d=1$ Poincar\'e supersymmetry (see \cite{ivanovsidorov}), and we express all the symmetries of these superintegrable systems via the $su(1,N)$ isometry generators as well.\\ The paper is organized as follows.\\ In {\sl Section 2} we present the basic facts on K\"ahler supermanifolds and construct, by the Hamiltonian reduction, the non-compact complex projective superspace $\widetilde{\mathbb{CP}}^{N|M}$ in a parametrization similar to that of the Klein model. In {\sl Section 3} we analyze the symmetry algebra of $\widetilde{\mathbb{CP}}^{N|M}$ and extract from it the $su(1,N|M)$-superconformal systems. In {\sl Section 4} we introduce the canonical coordinates which naturally split the radial and angular parts of the Hamiltonian, and relate the angular part with systems formulated in terms of action-angle variables. In {\sl Section 5} we construct superintegrable supergeneralizations of the oscillator- and Coulomb-like systems. In {\sl Section 6} we represent the K\"ahler structure of the phase superspace in the Fubini-Study-like form. We conclude the paper with an outlook and final remarks in {\sl Section 7}. \section{ Noncompact complex projective superspace}\noindent The (even) $(N|M)$-dimensional K\"ahler supermanifold can be defined as a complex supermanifold with the symplectic structure given by the expression \be \Omega=\imath (-1)^{p_I(p_J+1)}g_{I{\bar J}}dZ^I\wedge d{\bar Z}^J,\quad d\Omega=0, \ee with $Z^I$ denoting $N$ complex bosonic coordinates and $M$ complex fermionic ones. Here $p_I:=p(Z^I)$ is the Grassmann parity of the coordinate: it is equal to zero for a bosonic coordinate and to one for a fermionic one.
Throughout the paper we will use the following conjugation rule: $\overline{Z^I Z^J}=\overline{Z}^I \overline{Z}^J$, $\overline{\overline{Z}^I Z^J}= {Z}^I \overline{Z}^J$, $\overline{\overline{Z}^I \overline{Z}^J}= {Z}^I {Z}^J$, for both bosonic and fermionic variables. The ``metric components'' $g_{I {\bar J}}$ can then be locally represented in the form \be g_{I {\bar J}}= \frac{\partial^L}{\partial Z^I} \frac{\partial^R}{\partial {\bar Z}^J} K (Z,{\bar Z}), \ee where $\partial^{L(R)}/\partial Z^I$ denote left (right) derivatives. The Poisson brackets associated with this K\"ahler structure look as follows: \be \{ f,g\}=\imath\left( \frac{\partial^R f}{\partial \bar Z^I} g^{{\bar I}J} \frac{\partial^L g}{\partial Z^J} -(-1)^{p_Ip_J} \frac{\partial^R f}{\partial Z^I} g^{\bar JI} \frac{\partial^L g }{\partial \bar Z^J} \right) ,\quad {\rm where}\quad g^{{\bar I}J}g_{J{\bar K}}=\delta^{\bar I}_{\bar K}, \quad \overline{g^{\bar I J}} = (-1)^{p_Ip_J}g^{\bar J I}. \ee As in the pure bosonic case, the isometries of K\"ahler manifolds are given by the {\sl holomorphic Hamiltonian vector fields}, \be \mathbf{V_\mu}:=\{h_\mu(Z, \bar Z),\quad\}= V^I(Z)\frac{\partial^L}{\partial Z^I}+ {\bar V}^I({\bar Z})\frac{\partial^L}{\partial {\bar Z}^I}, \ee where $h_\mu(Z, \bar Z)$ are real functions called Killing potentials (see, e.g., \cite{lnp,jmp} for details). Our goal is to study the systems on the K\"ahler phase space with the $su(1,N|M)$ isometry superalgebra. For the construction of such a phase space it is convenient, at first, to present the linear realization of the $u(1,N|M)$ superconformal algebra on the complex pseudo-Euclidean superspace $\mathbb{C}^{1,N|M}$ equipped with the canonical K\"ahler structure (and thus, with the canonical supersymplectic structure), and then to reduce it by the action of the $U(1)$ generator. It is instructive to present this reduction in detail. Let us equip, at first, the $(N+1|M)$-dimensional complex superspace with the canonical symplectic structure \be \Omega_0=\imath \sum_{a,b =0}^{N}\gamma_{a\bar b}dv^a \wedge d{\bar v}^b +\sum_{A=1}^M d\eta^A\wedge d{\bar \eta}^A, \ee with $v^a, \bar{v}^a$ being bosonic variables, and $\eta^A, {\bar \eta}^A$ being fermionic ones, and with the matrix $\gamma_{a\bar b}$ chosen in the form \be \gamma_{a\bar b}= \left( \begin{array}{c|c} \begin{matrix} 0 & -\imath \\ \imath & 0 \end{matrix} & \\ \hline & \begin{matrix} -1 & \\ & \ddots & \\ & & -1 \end{matrix} \end{array} \right), \qquad a,b=N,0,1,...,N-1 . \label{mat0}\ee With this supersymplectic structure we can associate the Poisson brackets given by the relations \be \{v^a,{\bar v}^b\}=-\imath\gamma^{\bar b a}, \quad \{\eta^A, \bar \eta^B\}=\{\bar \eta^B, \eta^A\}=\delta^{A\bar B}, \qquad \gamma^{\bar a b}\gamma_{b \bar c}=\delta^{\bar a}_{\bar c }. \label{pb0}\ee Equivalently, \be \{v^0,\bar v^N \}=1, \qquad \{v^N,\bar v^0 \}=-1, \qquad \{v^\alpha, \bar v^\beta\}=\imath \delta^{\a \bar \b},\qquad \{\eta^A, \bar \eta^B\}=\{\bar \eta^B, \eta^A\}=\delta^{A\bar B}. \ee Here we introduced the indices $\alpha,\beta=1,\ldots, N-1$.
On this superspace we can define the linear Hamiltonian action of the $u(1,N|M)=u(1)\times su(1,N|M)$ superalgebra \begin{align} &\{h_{a\bar b},h_{c\bar d}\}=-\imath\left(h_{a \bar d}\gamma^{\bar c b}- h_{c \bar b}\gamma^{\bar a d}\right),\quad \{\Theta_{A\bar a}, {\bar\Theta}_{\bar B b}\}=h_{b \bar a}\delta^{B \bar A}-R_{A\bar B}\gamma^{\bar b a}, \quad \{\Theta_{A\bar a}, h_{b\bar c}\}=-\imath\Theta_{A\bar c}\gamma^{\bar b a}, &\label{ulnm1} \\ & \{R_{A\bar B},R_{C\bar D}\}=\imath \left(R_{A \bar D}\delta^{B \bar C}-R_{C\bar B}\delta^{D\bar A}\right),\quad \{\Theta_{A\bar a}, R_{C\bar D}\}=-\imath\Theta_{C\bar a}\delta^{D\bar A}, & \label{u1nm2}\end{align} where \be h_{a\bar b}=\bv^a v^b,\qquad \Theta_{A\bar a}=\bar\eta^A v^a,\qquad R_{A\bar B}=\imath\bar\eta^A\eta^B . \ee The $u(1)$ generator defining the center of $u(1,N|M)$ is given by the expression \be J=\gamma_{a\bar b}v^a {\bar v}^b+\imath\eta^A\bar\eta^A\;:\quad \{J, h_{a\bar b}\}=\{J, \Theta_{A\bar a}\}=\{J, R_{A\bar B}\}=0. \label{J}\ee Hence, reducing the system by the action of this generator, we will get the ``non-compact'' projective superspace $\widetilde{\mathbb{CP}}^{N|M}$ (i.e. the supergeneralization of the noncompact projective space $\widetilde{\mathbb{CP}}^{N}$), which is a $(2N|2M)$-(real)dimensional space. To perform the reduction by the action of the generator \eqref{J} we have to choose, at first, $2N$ real ($N$ complex) bosonic and $2M$ real ($M$ complex) fermionic functions commuting with $J$. Then, we have to calculate their Poisson brackets and restrict the latter to the level surface \be J=g. \label{ls}\ee As a result we will get the Poisson brackets on the reduced $(2N|2M)$-(real)dimensional space, with those $U(1)$-invariant functions playing the role of its coordinates. The required functions can easily be found: \be w=\frac{v^N}{v^0}, \quad z^\a =\frac{v^\a}{v^0}, \quad \th^A =\frac{\h^A}{v^0}\;: \qquad \{w, J\}=\{z^\a,J\}=\{\th^A, J\}=0, \qquad {\rm and } \quad c.c. \label{rf}\ee Calculating their Poisson brackets and having in mind the expression following from \eqref{ls}, \be A:=\left.\frac{1}{v^0\bar v^0}\right\rvert_{J=g}= \frac1g\left(\imath(w-\bar w)-\sum_{\g=1}^{N-1} z^\g\bar z^\g + \imath \sum_{C=1}^{M} \th^C \bar \th^C \right), \label{A}\ee we get the reduced Poisson brackets defined by the following non-zero relations (and their complex conjugates) \be \{w, \bar w \}=-A(w-\bar w), \quad \{z^\a, \bar z^\b \}=\imath A \delta^{\a \bar \b}, \quad \{\th^A, \bar \th^B \}=A\delta^{A\bar B},\quad \{w,\bar z^\alpha \}=A\bar z^\alpha, \quad \{w,\bar \th^A \}=A\bar \th^A . \label{pbb}\ee These Poisson brackets are associated with the supersymplectic structure \begin{align} \Omega =\frac{\imath}{g} &\left[\frac{1}{A^2}dw \wedge d\bar w -\frac{\imath z^\a}{A^2}dw \wedge d\bar z^\a- \frac{\th^A}{A^2}dw\wedge d\bar\th^A \right.\nonumber \\ &+\frac{\imath \bar z^\a}{A^2}dz^\a\wedge d\bar w + \left(\frac{g\de_{\a\bar \b}}{A}+\frac{\bar z^\a z^\b}{A^2}\right) dz^\a \wedge d\bar z^\b- \frac{\imath \bar z^\a \th^A}{A^2}dz^\a \wedge d\bar \th^A \nonumber \\ &-\frac{\bar \th^A}{A^2}d\th^A\wedge d\bar w +\frac{\imath \bar \th^A z^\a }{A^2} d\th^A\wedge d\bar z^\a- \left. \left(\frac{\imath g \de_{A\bar B}}{A} +\frac{\bar \th^A \th^B}{A^2}\right)d\th^A\wedge d\bar \th^B \right]. \end{align} It is defined by the K\"ahler potential \be {{\mathcal{K}}}=-g\log (\imath({{w}}-{\bar{w}})- z^\alpha\bz^\alpha +\imath\theta^A\bar\theta^A).
\label{skah}\ee In what follows we will call this space the ``noncompact projective superspace $\widetilde{\mathbb{CP}}^{N|M}$''. The isometry algebra of this space is $su(1,N|M)$; its generators can easily be obtained by restricting the generators \eqref{ulnm1},\eqref{u1nm2} to the level surface \eqref{ls}. It is defined by the following Killing potentials \begin{align} &H:=v^N\bar v^N\vert_{J=g}=\frac{w \bar w}{A}, &\quad &K:=v^0\bar v^0\vert_{J=g} = \frac{1}{A}, &\quad &D:=(v^N \bar v^0 + v^0 \bar v^N)\vert_{J=g}=\frac{w+\bar w}{A},\label{b1}\\ &H_{\a}:=\bar v^\a v^N\vert_{J=g}=\frac{\bar z^\a w}{A}, &\quad &K_\a:=\bar v^\a v^0\vert_{J=g}=\frac{\bar z^\a}{A}, &\quad &h_{\a \bar \b}:=\bar v^\a v^\b\vert_{J=g} = \frac{\bar z^\a z^\b}{A},\label{b2}\\ &Q_A:=\bar \h^A v^N\vert_{J=g}= \frac{\bar \th^A w}{A}, &\quad &S_A:= \bar \h^A v^0\vert_{J=g}= \frac{\bar \th^A}{A}, &\quad &\Theta_{A \bar \a}:=\bar \h^A v^\a\vert_{J=g}=\frac{\bar \th^A z^\a}{A},\label{b3}\\ &R_{A\bar B}:=\imath\bar \h^A \h^B \vert_{J=g} =\imath \frac{\bar \th^A \th^B}{A}.\label{b4} \end{align} The constructed super-K\"ahler structure can be viewed as a higher-dimensional analogue of the Klein model of the Lobachevsky space, the latter being parameterized by the lower half-plane. One can choose, instead of the non-diagonal matrix \eqref{mat0}, the diagonal one, $\gamma_{a\bar b}={\rm diag}(1,-1,\ldots, -1)$. In that case the reduced K\"ahler structure will take the Fubini-Study-like form (see Section 6). The choice \eqref{mat0} is motivated by its convenience for analyzing superconformal mechanics. Indeed, in that case the generators \eqref{b1} define the conformal subalgebra $su(1,1)$ and are separated from the rest of the $su(1,N)$ generators. Thus they can be interpreted as the Hamiltonian of conformal mechanics, the generator of conformal boosts and the generator of dilatations. In the next section we will analyze in detail these superconformal mechanics and their dynamical symmetries defined by the generators \eqref{b1},\eqref{b2},\eqref{b3},\eqref{b4}. \section{$su(1,N|M)$ superconformal algebra} The generators (Killing potentials) \eqref{b1},\eqref{b2},\eqref{b3},\eqref{b4} form the $su(1,N|M)$ superalgebra given by \eqref{ulnm1},\eqref{u1nm2} with $\gamma_{a\bar b}$ defined in \eqref{mat0}. Its explicit expression with the separated $su(1,1)$ subalgebra is presented below. For convenience it is divided into three sectors: the ``bosonic'', ``fermionic'' and ``mixed'' ones. \subsubsection*{Bosonic sector: $su(1,N)\times u(M)$ algebra } The bosonic sector is the direct product of the $su(1,N)$ algebra defined by the generators \eqref{b1},\eqref{b2}, and the $u(M)$ algebra defined by the R-symmetry generators \eqref{b4}.
Explicitly, the $su(1,N)$ algebra is given by the relations \begin{align} &\{ H , K \}=-D, \quad \{H , D \}=-2H, \quad \{K , D \}=2K,&\label{bg1}\\ & \{H , K_\a \}=-H_{\a }, \quad \{H , H_{\a }\}=\{H , h_{\a \bar \b}\}=0, &\\ & \{K , H_{\a }\}=K_\a, \quad\{K , K_\a \}=\{K , h_{\a \bar \b}\}=0, &\\ & \{D, {K}_\a \}=-K_\a, \quad \{D , H_{\a}\}=H_{\a }, \quad \{D , h_{\a \bar \b} \}= 0, &\\ & \{K_\a ,K_\b \}= \{H_{\a},H_{\b }\}=\{K_{\alpha},H_{\beta}\}=0, &\\ & \{K_\a , {\overline K}_{\b}\}= -\imath K\delta_{\a \bar \b},\quad \{H_{\a }, {\overline H}_{\b} \} = -\imath H \delta_{ \a \bar\b}, \quad \{h_{\a \bar \b}, h_{\g \bar \delta}\}=\imath(h_{\a \bar \delta} \delta_{\g \bar\b}-h_{\g \bar \b}\delta_{ \a \bar\delta}), &\\ & \{K_\a , h_{\b \bar \g} \}=-\imath K_\b \delta_{\a \bar \g}, \quad \{H_{\a},h_{\b \bar \g} \} = -\imath H_{\b} \delta_{ \a \bar\g},\qquad \{K_\a , {\overline H}_\b\}= h_{\a \bar \b} +\frac{1}{2}\left(I-\imath D\right)\delta_{ \a \bar\b}, & \end{align} where \be I:=g+\sum_{\gamma=1}^{N-1} h_{\gamma \bar\gamma}+\sum_{C=1}^M R_{C \bar C}. \label{casimir}\ee The R-symmetry generators form the $u(M)$ algebra and commute with all generators of $su(1,N)$: \be \{R_{A\bar B}, R_{C \bar D}\}= \imath(R_{A\bar D}\delta_{C\bar B}-R_{C\bar B}\delta_{A\bar D}) , \quad\{R_{A\bar B},(H;K;D; K_{\a};H_{\a};h_{\a \bar \b})\}=0. \ee It is clear that the generators $H,D,K$ form the conformal algebra $su(1,1)$, the generators $h_{\alpha\bar\beta}$ form the algebra $u(N-1)$, and all together they form the $su(1,1)\times u(N-1)$ algebra. Notice that the generator $I$ in \eqref{casimir} defines the Casimir of the conformal algebra $su(1,1)$: \be \mathcal{I}:=\frac{1}{2}I^2=2HK-\frac{1}{2}D^2 .\label{cas}\ee Hence, choosing $H$ as a Hamiltonian, we get that $H_\alpha$, $h_{\alpha\bar\beta}$, $R_{A\bar B}$ define its constants of motion. Similarly, choosing the generator $K$ as a Hamiltonian, we get that it has the constants of motion $K_\alpha$, $h_{\alpha\bar\beta}$, $R_{A\bar B}$. \subsubsection*{``Fermionic'' sector} The Poisson brackets between the fermionic generators \eqref{b3} have the form \begin{align} & \{S_{A},{\overline S}_{ B}\}=K\delta_{A\bar B}, \quad \{Q_{A}, {\overline Q}_{B}\}=H\delta_{A\bar B}, \quad \{S_{A},{\overline Q}_{ B}\} = -\imath R_{A \bar B} +\frac{\imath}{2}\left(I-\imath D\right)\delta_{A \bar B} , &\label{f1}\\ & \{\Theta_{A \bar \a}, {\overline\Theta}_{B\bar\b}\}= R_{A\bar B}\delta_{\b \bar \a}+h_{\b \bar \a}\delta_{A\bar B}, \quad \{S_{A},{\overline\Theta}_{B\bar \a }\}= K_\a \delta_{A\bar B}, \quad \{Q_{A}, {\overline\Theta}_{B\bar\a}\}= H_{\a}\delta_{A\bar B}, &\label{f2}\\ & \{S_{A}, S_B \}=\{Q_{A }, Q_{B}\}= \{\Theta_{A \bar \a}, \Theta_{B \bar \b}\}= \{S_{A},Q_{B}\}=\{S_{A}, \Theta_{B \bar \a}\}= \{Q_{A}, \Theta_{B \bar \a}\}=0. &\label{f3} \end{align} Hence, the functions $Q_A$ play the role of supercharges for the Hamiltonian $H$, and the functions $S_A$ define the supercharges of the Hamiltonian given by the generator of conformal boosts $K$.
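As an elementary illustration of how these relations arise from the ambient superspace (a consistency check we spell out for the reader's convenience), consider $\{Q_{A},{\overline Q}_{B}\}$. Since $Q_A=\bar\eta^A v^N\vert_{J=g}$ and the functions involved commute with $J$, the reduced bracket can be computed directly with the ambient Poisson brackets \eqref{pb0}: \be \{Q_{A}, {\overline Q}_{B}\}=\left.\{\bar \eta^A v^N,\; \eta^B {\bar v}^N\}\right\rvert_{J=g} =\left.\{\bar \eta^A, \eta^B\}\, v^N {\bar v}^N\right\rvert_{J=g}=H\,\delta_{A\bar B}, \ee where we used that $\{v^N,{\bar v}^N\}=-\imath\gamma^{\bar N N}=0$ for the choice \eqref{mat0}. The remaining relations in \eqref{f1}-\eqref{f3} follow in the same manner.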
\subsubsection*{"Mixed" sector} The mixed sector is given by the relations \begin{align} & \{H,Q_{A}\}=\{H, \Theta_{A\bar \a}\}=0,\quad \{H,S_{A}\}=-Q_{A }, &\\ & \{K,S_{A}\}=\{K,\Theta_{A\bar \a}\}=0, \quad\{K,Q_{A}\}=S_{A}, &\\ & \{D,S_{A}\}=-S_{A}, \quad \{D,Q_{A }\}=Q_{A}, \quad \{D,\Theta_{A\bar \a}\}=0 &\\ & \{Q_{A}, {\overline K}_\a \}=-\Theta_{A\bar \a}, \quad \{Q_{A}, H_{\a}\}=\{Q_{A}, {\bar H}_{\a}\}=\{Q_{A}, {\bar K}_{\a}\}=\{Q_{A},h_{\a \bar \b}\}=0, &\\ & \{S_{A}, {\overline H}_{\a} \}=\Theta_{A\bar \a}, \quad \{S_{A}, {K}_{\a}\}=\{S_{A}, {\bar K}_{\a}\}=\{S_{A}, {H}_{\a}\}=\{S_{A},h_{\a \bar \b}\}=0, &\\ & \{\Theta_{A\bar \a}, K_{\b} \}=\imath S_{A} \delta_{\b \bar \a}, \quad \{\Theta_{A\bar \a},H_{\b}\}=\imath Q_{A}\delta_{\b\bar \a}, \quad \{\Theta_{A\bar\a}, {\bar H}_\alpha\}=\{\Theta_{A\bar\a}, {\bar K}_\alpha\}=0,\quad \{\Theta_{A\bar \a},h_{\b\bar \g}\}=\imath \Theta_{A \bar \g}\delta_{\b \bar \a},\\ & \{S_{A}, R_{B\bar C}\}=-\imath S_B\delta_{A\bar C}, \quad \{Q_{A}, R_{B\bar C}\}=-\imath Q_{B}\delta_{A\bar C},\quad \{\Theta_{A\bar \a}, R_{B\bar C}\}=-\imath \Theta_{B\bar \a}\delta_{A\bar C}.\label{mlast} & \end{align} Looking to the all Poisson bracket relations together we conclude that \begin{itemize} \item The bosonic functions $H_{\alpha}$, $h_{\alpha\bar\beta}$ , and the fermionic functions $Q_A$, $\Theta_{A\bar\alpha}$ commute with the Hamiltonian $H$ and thus, provide it by the superintegrability property \footnote{In accord with super-analogue of Liouville theorem \cite{shander} the system on $(2N.M)$ phase superspace is integrable iff it possess $N$ commuting bosonic integrals (with nonvanishing and functionally independent bosonic parts) and $M$ fermionic ones }; \item The bosonic functions $K_{\alpha}$, $h_{\alpha\bar\beta}$ and the fermionic functions $S_A$,$\Theta_{A\bar\alpha} $ commute with the generator $K$. Hence, the Hamiltonian $K$ defines the superintegrable system as well. \item The triples $(H, H_{\alpha}, Q_A, )$ and $(K, K_{\alpha}, S_A, )$ transform into each other under the discrete transformation \be (w, z^\alpha, \theta^A)\to (-\frac1w,\frac{z^\alpha}{w},\frac{\theta^A}{w} )\quad\Rightarrow D\to -D, \quad\left\{\begin{array}{ccc}(H, H_{\alpha}, Q_A,)&\to &(K, -K_{\alpha}, - S_A),\\ (K, K_{\alpha}, S_A)&\to &(H,H_{\alpha}, Q_A,) \end{array}\right. . \label{duality}\ee \item The functions $h_{\alpha\bar\beta}, \Theta_{A\bar\alpha}$ are invariant under discrete transformation \eqref{duality}. Moreover, they appear to be constants of motion both for $H$ and $K$. Hence, they remain to be constants of motion for any Hamiltonian being the functions of $H,K$. In particular, adding to the Hamiltonian $H$ the appropriate function of $K$, we get the superintegrable oscillator- and Coulomb-like systems with dynamical superconformal symmetry (see Section V) . \item The superalgebra $su(1,N|M)$ admits 5-graded decomposition \cite{5gr,grading} \be\label{5gr0} su(1,N|M)= \mathfrak{f}_{-2} \oplus \mathfrak{f}_{-1} \oplus \mathfrak{f}_{0} \oplus \mathfrak{f}_{+1}\oplus \mathfrak{f}_{+2} \qquad\textrm{with}\qquad \left[ \mathfrak{f}_i, \mathfrak{f}_j\right] \subseteq \mathfrak{f}_{i+j} \quad\textrm{for} \ i,j\in\left\{ -2,-1,0,1,2\right\}, \ee where $\mathfrak{f}_i=0$ for $|i|>2$ is understood. 
The subset $ \mathfrak{f}_{0} $ includes the generators $D, h_{\alpha\bar\beta}, \Theta_{A\bar\alpha},{\overline\Theta}_{A\bar\alpha}, R_{A\bar B}$, the subsets $ \mathfrak{f}_{-2} $ and $ \mathfrak{f}_{+2} $ contain only the generators $H$ and $K$, respectively, while the subsets $ \mathfrak{f}_{-1} $ and $ \mathfrak{f}_{+1} $ contain the generators $H_\alpha, {\bar H}_\alpha, Q_A, {\bar Q}_A$ and $K_\alpha, {\bar K}_\alpha, S_A, {\bar S}_A$, respectively. \end{itemize} Let us conclude this section with the following remark. It is easy to see that the generator \eqref{casimir} commutes with the generators $H,D,K, S_A,Q_A, R_{A\bar B}$. Hence, these generators form the superconformal algebra $su(1, 1|M)$ with central charge $\sqrt{2\mathcal{I}}$ \eqref{cas} (the latter being the Casimir of $su(1,1|M)$ as well): \begin{align} &\{ H , K \}=-D, \quad \{H , D \}=-2H, \quad \{K , D \}=2K,\quad \{S_{A},{\overline S}_{ B}\}=K\delta_{A\bar B}, \quad \{Q_{A}, {\overline Q}_{B}\}=H\delta_{A\bar B},\nonumber &\\& \{S_{A},{\overline Q}_{ B}\} = -\imath R_{A \bar B} +\frac{\imath}{2}\left(\sqrt{2\mathcal{I}}-\imath D\right)\delta_{A \bar B},\nonumber &\\& \{H,S_{A}\}=-Q_{A },\quad \{K,Q_{A}\}=S_{A},\quad\{H,Q_{A}\}=\{K,S_{A}\}=0, \quad \{D,S_{A}\}=-S_{A}, \quad \{D,Q_{A }\}=Q_{A},\nonumber &\\& \{R_{A\bar B}, R_{C \bar D}\}= \imath(R_{A\bar D}\delta_{C\bar B}-R_{C\bar B}\delta_{A\bar D}),\quad \{S_{A}, R_{B\bar C}\}=-\imath S_B\delta_{A\bar C}, \quad \{Q_{A}, R_{B\bar C}\}=-\imath Q_{B}\delta_{A\bar C}. \label{su11N}\end{align} In the next section we will express the presented $su(1, N|M)$ generators in appropriate canonical coordinates, and in this way relate the above formulae with the superextensions of conventional conformal mechanics. \section{Canonical coordinates and action-angle variables}\noindent To define the canonical coordinates, we first write the complex bosonic coordinates $w$, $z^\alpha$ as \be w=x+\imath y , \quad z^\alpha = q_\alpha {\rm e}^{\imath\varphi_\alpha}, \quad{\rm where}\qquad y <0,\quad q_\alpha\geq 0, \quad \varphi_\alpha\in [0, 2\pi ),\quad q^2:=\sum_{\alpha=1}^{N-1} q_\alpha^2 < -2y. \ee Then we redefine the fermionic variables in such a way that the new variables have canonical Poisson brackets. For this purpose we write down the symplectic/K\"ahler one-form and identify it with the canonical one: \be \mathcal{A}= -\frac{g}{2}\frac{dw+d\bar w-\imath (z^\a d \bar z^\a-\bar z^\a dz^\a) +\th^A d\bar \th^A +\bar \th^A d \th^A}{\imath(w-\bar w)-z^\g \bar z^\g + \imath \th^C \bar \th^C} := p_x dx+\pi_\a d\vp_\a + \frac{1}{2}\chi^A d\bar \chi^A + \frac{1}{2}\bar \chi^A d\chi^A . \ee After some calculations and the canonical transformation $(p_x,x) \rightarrow (-\frac{r^2}{2}, \frac{p_r}{r})$, one can obtain \be w=\frac{p_r}{r}-\imath \frac{I}{r^2}, \quad z^\a =\frac{\sqrt{2\pi_\a}}{r}{\rm e}^{\imath \vp_\a}, \quad \th^A = \frac{\sqrt{2}}{r}\chi^A, \ee where $r,p_r, \pi_\alpha, \varphi_\alpha, \chi^A,\bar\chi^A$ are canonical coordinates, \be \{r,p_r\}=1,\quad \{\varphi_\b, \pi_\a\}=\delta_{\a \b},\quad \{\chi^A,\bar\chi^B\}=\delta^{A\bar B},\qquad \pi_\a\geq 0, \quad \varphi_\a\in[0,2\pi ), \quad r>0. \ee They are expressed via the initial ones as follows: \be p_r=\frac{w+\bar w}{2}\sqrt{\frac{2}{A}},\quad r=\sqrt{\frac{2}{A}},\quad \pi_\a=\frac{z^\a \bar z^\a }{A}, \quad \varphi_\a= \arg(z^\a),\quad \chi^A=\frac{\theta^A}{\sqrt{A}}, \quad c.c., \label{cc} \ee where \be I= g+\sum_{\a=1}^{N-1} \pi_\a+ \sum_{A=1}^{M}\imath\bar\chi^A\chi^A\;,\qquad A:= \frac{\imath(w-\bar w)-z^\g\bar z^\g +\imath \th^C \bar \th^C}{g} = \frac{2}{r^2}.
\ee In these canonical coordinates the isometry generators read \begin{align} &H=\frac{p_r^2}{2}+\frac{I^2}{2r^2}, \quad K=\frac{r^2}{2},\quad D= p_r r,&\label{can1} \\ &H_{\a}=\sqrt{\frac{\pi_\a}{2}}{\rm e}^{-\imath \vp_\a}\left({p_r} - \imath \frac{I}{r}\right), \quad K_\a =r\sqrt{\frac{\pi_\a}{2}}{\rm e}^{-\imath \vp_\a},\quad h_{\a \bar \b}=\sqrt{\pi_\a \pi_\b}{\rm e}^{-\imath(\vp_\a-\vp_\b)},&\label{can2} \\ &Q_{A}=\frac{\bar \chi^A}{\sqrt{2}} \left(p_r-\imath\frac{\sqrt{2\mathcal{I}}}{r}\right), \quad S_{A}= \frac{\bar \chi^A}{\sqrt{2}}r,\quad \Theta_{A \bar \a}=\bar \chi^A \sqrt{\pi_\a}{\rm e}^{\imath \vp_\a}, \quad R_{A\bar B}=\imath\bar \chi^A \chi^B.&\label{can3} \end{align} Interpreting $r$ as the radial coordinate and $p_r$ as the radial momentum, we get the superconformal mechanics with the angular Hamiltonian given by \be \mathcal{I}=\frac{I^2}{2}:=\frac12\left(I_0+(\bar\chi\chi)\right)^2, \qquad{\rm with}\qquad I_0:=g+\sum_{\a=1}^{N-1} \pi_\a,\quad (\bar\chi\chi):=\sum_{A=1}^{M}\imath\bar\chi^A\chi^A\;. \label{casimir2}\ee So, the fermionic part of the superconformal Hamiltonian is encoded in its angular part. \\ The explicit dependence of the Hamiltonian $H$ and the supercharges $Q_A$ on the fermions is as follows: \be H=H_0+\frac{I_0(\bar\chi\chi)}{r^2}+\frac{(\bar\chi\chi)^2}{2r^2}, \qquad Q_A=\frac{\bar \chi^A}{\sqrt{2}} \left(p_r-\imath\frac{I_0}{r}-\imath\frac{(\bar\chi\chi)}{r}\right), \ee while the dependence of the bosonic integrals $H_\a$ on the fermions is given by the expression \be H_{\a}= H^0_\alpha - \frac{K_\alpha (\bar\chi\chi)}{2K}, \ee where \be H_0:=\frac{p^2_r}{2}+\frac{I^2_0}{2r^2},\quad H^0_\a=\sqrt{\frac{\pi_\a}{2}}{\rm e}^{-\imath \vp_\a}\left({p_r} - \imath \frac{I_0}{r}\right)\; :\quad\{H^0_\a, H_0\}=0. \ee So, the proposed superconformal Hamiltonian $H$ inherits all symmetries of the initial Hamiltonian $H_0$ (given by $H^0_\alpha, h_{\alpha\bar\beta}$).\\ Looking at the functional dependence of the angular Hamiltonian $\mathcal{I}$ on the angular variables $\varphi_\a, \pi_\a$, one could expect that the set of conformal mechanics admitting the proposed $su(1, N|M)$ superconformal extensions is very restricted. However, this is not the case, since it is not necessary to interpret $\varphi_\a$ as a coordinate of the configuration space and $\pi_\a$ as its canonically conjugate momentum. Instead, since each $\pi_\a$ defines a constant of motion of the bosonic Hamiltonian $H_0$ (and of the respective angular Hamiltonian $\mathcal{I}_0=2H_0K-\frac12 D^2$), we can interpret it as an action variable $I_\a$, and consider $\varphi_\a$ as the respective angle variable $\Phi_\a$. Furthermore, suppose that $\pi_\alpha,\varphi_\alpha$ are related with the action-angle variables $(I_\alpha,\Phi_\alpha)$ of some $(N-1)$-dimensional angular mechanics by the relations \be \pi_\alpha=n_\alpha I_\alpha,\quad \varphi_\alpha=\frac{\Phi_\alpha}{n_\alpha},\qquad {\rm where}\quad n_\alpha\in \mathbb{N}, \qquad \{\Phi_\alpha, I_\beta\}=\delta_{\alpha\beta},\qquad \Phi_\alpha \in[0,2\pi). \label{paa}\ee Upon this identification the bosonic part of the angular Hamiltonian \eqref{casimir2} takes the form \be \widetilde{\mathcal{I}}_0=\frac12\left(g+\sum_{\alpha=1}^{N-1}n_\alpha I_\alpha \right)^2,\qquad{\rm with}\quad n_\alpha\;\in\;\mathbb{N}, \label{angularGen}\ee but the bosonic generators $H_{\alpha}, K_{\alpha}, h_{\alpha\bar\beta}$ become locally defined, $\varphi_\alpha\;\in \;[0, 2\pi/n_\alpha)$, and fail to be constants of motion.
To get globally defined bosonic generators we have to take their relevant powers, \be {\widetilde H}_\alpha :=(H_{\alpha})^{n_\alpha},\quad {\widetilde K}_{\alpha} := (K_{\alpha})^{n_\alpha},\quad {\widetilde h}_{\alpha\bar\beta} :=(h_{\alpha\bar\beta})^{n_\alpha n_\beta}, \ee as well as to replace the fermionic generator $\Theta_{A\bar\alpha}$ by the following one: \be \widetilde{\Theta}_{A\bar\alpha} = (H_{\alpha})^{n_\alpha-1}\Theta_{A\bar\alpha}. \ee As a result, the dynamical (super)symmetry algebra becomes a nonlinear deformation of $su(1,N|M)$. The angular Hamiltonian \eqref{angularGen} defines the class of superintegrable generalizations of conformal mechanics, and of the oscillator- and Coulomb-like systems on $N$-dimensional Euclidean spaces \cite{rapid}. As particular cases, this class of systems includes the ``charge-monopole'' system \cite{monopole} and the Smorodinsky-Winternitz system \cite{sw} (for the explicit expressions of the action-angle variables of these systems see, respectively, \cite{saghatel} and \cite{galajinsky}), as well as the rational Calogero models\footnote{To the best of our knowledge, the action-angle variables for the angular part of the rational Calogero models have not yet been constructed explicitly. However, the spectrum of the angular part of the rational Calogero model is at hand \cite{flp}. Taking its (semi)classical limit, we can conclude that it has the form \eqref{angularGen}; see, e.g., \cite{rapid}}. Thus, the proposed systems can be considered as their $\mathcal{N}=2M$ superconformal extensions. Since the generators $Q_A,S_A, R_{A\bar B}$ remain unchanged upon the above identification (as well as the expression \eqref{cas} of the angular Hamiltonian via the generators $H,K,D$), we conclude that the listed generators form the superconformal algebra $su(1,1|M)$ with central charge \eqref{su11N}.\\ Finally, notice that the nonzero constant $g\neq 0 $ appears in \eqref{angularGen}, and the range of validity of the action variables is fixed to be $I_\alpha\in [0,\infty)$. As a result, the standard free particle and conformal mechanics cannot be included in the proposed description, since these systems correspond to $g=0$. To exclude this constant we should replace the initial generators by the following ones: \be \mathcal{H}:=H-\frac{g(g-2I )}{4K}, \qquad \mathcal{H}_{\alpha}:={H}_{\alpha}+\imath g\frac{K_\alpha}{2K}, \qquad \mathcal{Q}_{A}:= Q_{A}-\imath g\frac{S_{A}}{2K}. \ee This deformation will further ``non-linearize'' the dynamical supersymmetry algebra $su(1,N|M)$. \section{Oscillator- and Coulomb-like Systems} In the previous section we mentioned that the angular Hamiltonian \eqref{angularGen} defines the superintegrable deformations of the $N$-dimensional oscillator and Coulomb systems \cite{rapid}, while in \cite{kns} examples of such systems on the noncompact projective space $\widetilde{\mathbb{CP}}^N$ playing the role of the phase space were constructed. So, one can expect that on the phase superspace $\widetilde{\mathbb{CP}}^{N|M}$ one can construct the super-counterparts of those systems, which presumably possess (deformed) $\mathcal{N}=2M$, $d=1$ Poincar\'e supersymmetry. Below we examine this question and show that this claim is correct in some particular cases. \subsection{Oscillator-like systems} We define the supersymmetric oscillator-like system on the phase space $\widetilde{\mathbb{CP}}^{N|M}$ (equipped with the Poisson brackets \eqref{pbb}) by the Hamiltonian \be H_{osc}=H+\omega^2K, \label{osc} \ee where the generators $H,K$ are given by \eqref{b1}.
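As a simple illustration (a special case we spell out for orientation, using the canonical coordinates \eqref{cc} of the previous section): for $N=1$, $M=0$ one has $I=g$, so that \be H_{osc}=\frac{p_r^2}{2}+\frac{g^2}{2r^2}+\frac{\omega^2 r^2}{2}, \ee i.e. the one-dimensional conformal (isotonic) oscillator on the half-line $r>0$.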
In canonical coordinates \eqref{cc} the Hamiltonian \eqref{osc} reads \be H_{osc} =\frac{p_r^2}{2}+\frac{(g+\sum_{\alpha=1}^{N-1}\pi_\a+ \sum_{A=1}^{M} \imath \bar\chi^A\chi^A )^2}{2r^2}+\frac{\omega^2r^2}{2}. \ee This system possesses the $u(N-1)$ symmetry given by the generators $h_{\alpha\bar\beta}$ defined in \eqref{b2} (among them, the $N-1$ constants of motion $\pi_\alpha$ are functionally independent), the $U(M)$ R-symmetry given by the generators $R_{A\bar B}$ \eqref{b4}, as well as $N-1$ hidden symmetries given by the generators \be M_{\alpha\beta}=(H_{\alpha}+\imath\omega K_\alpha )(H_{\beta}-\imath\omega K_\beta )=\frac{\bar{z}^\alpha \bar{z}^{\beta}}{A^2}(w^2+\omega^2) \;:\quad \{H_{osc}, M_{\alpha\beta}\}=0.\label{M}\ee The generators \eqref{M} and the generators $h_{\alpha\bar \beta}$ form the following symmetry algebra: \be \{h_{\alpha\bar\beta}, M_{\gamma\delta}\}=\imath \left(M_{\alpha \delta}\delta_{\gamma\bar \beta}+M_{\gamma \alpha}\delta_{\delta\bar \beta} \right), \quad \{M_{\alpha\beta}, M_{\gamma\delta}\}=0, \ee \be \{M_{\alpha\beta}, {\overline M}_{\gamma\delta}\}=\imath \left(4 \omega^2 I h_{\a\bar \delta}h_{\b \bar \g} - \frac{M_{\a\b} \bar M_{\g \delta}} { h_{\a\bar \g}} \delta_{\a\bar \g}- \frac{M_{\a\b} \bar M_{\g \delta}} { h_{\a\bar \delta}} \delta_{\a\bar \delta}- \frac{M_{\a\b} \bar M_{\g \delta}} { h_{\b\bar \g}} \delta_{\b\bar \g}- \frac{M_{\a\b} \bar M_{\g \delta}} { h_{\b\bar \delta}} \delta_{\b\bar \delta} \right), \ee with $I$ given by \eqref{casimir}; summation over repeated indices is not assumed here. Besides, this system has the fermionic constants of motion $\Theta_{A\bar \alpha}$ defined in \eqref{b3}. Hence, it is a superintegrable system in the sense of the super-Liouville theorem, i.e. it has $2N-1$ bosonic and $2M$ fermionic functionally independent constants of motion \cite{shander}. Further generalization to the systems with the angular Hamiltonian \eqref{angularGen} is straightforward.\\ Let us show that for even $M=2k$ this system possesses a deformed $\mathcal{N}=2M=4k$ Poincar\'e supersymmetry, in the sense of \cite{ivanovsidorov}. For this purpose we choose the following Ansatz for the supercharges: \be \mathcal{Q}_A=Q_A+\omega C_{A B} {\bar S}_B, \label{tQ}\ee with the constant matrix $C_{A B}$ obeying the conditions \be C_{A B}+C_{BA}=0,\qquad C_{AB}{\overline C}_{BD}=-\delta_{A\bar D}.\label{C} \ee Clearly, the condition \eqref{C} requires $M$ to be an even number, $M=2k$. Calculating the Poisson brackets of the functions \eqref{tQ}, we get \be \{\mathcal{Q}_A,\bar{\mathcal{Q}}_B\}=H_{osc} \delta_{AB},\qquad \{\mathcal{Q}_A,\mathcal{Q}_B\}=-\imath\omega\mathcal{G}_{AB}, \qquad \{ \bar{\mathcal{Q}}_A ,\bar{\mathcal{Q}}_B \}=\imath\omega\bar{\mathcal{G}}_{AB}, \ee where \be \mathcal{G}_{AB}:= C_{A C}R_{B\bar C} +C_{B C}R_{A\bar C} , \qquad \mathcal{G}_{\bar A\bar B}:= \bar{\mathcal{G}}_{AB}= \bar C_{AC}R_{C \bar B}+ \bar C_{BC} R_{C\bar A}, \qquad \bar{\mathcal{G}}_{AB} = \bar C_{AC}\bar C_{DB}\mathcal{G}_{DC}. \ee Then we find that the algebra of the generators $\mathcal{Q}_A$, $H_{osc}$, $\mathcal{G}_{AB}$ is indeed closed: \begin{align} &\{\mathcal{Q}_A,H_{osc}\}=\omega C_{AB} \mathcal{Q}_B, \qquad \{\mathcal{G}_{AB},H_{osc}\}=0, &\\ &\{\mathcal{Q}_A, \mathcal{G}_{BC}\}= \imath(C_{AB}\mathcal{Q}_C+C_{AC}\mathcal{Q}_B), \qquad \{\mathcal{Q}_A,\bar{\mathcal{G}}_{BC}\}= -\imath(\bar C_{BD}\mathcal{Q}_D \delta_{A \bar C}+\bar C_{CD}\mathcal{Q}_D \delta_{A\bar B}).
&\end{align} Hence, for $M=2k$ the above oscillator-like system \eqref{osc} possesses a deformed $\mathcal{N}=4k$ supersymmetry. In the particular case $M=2$ the choice of the matrix $C_{AB}$ is unique (up to an unessential phase factor): $C_{AB}:={\rm e}^{\imath\kappa}\varepsilon_{AB}$. In that case the above relations define the $su(1|2)$ deformation of $\mathcal{N}=4$ Poincar\'e supersymmetric mechanics studied in detail in \cite{ivanovsidorov}. For $k\geq 2$ the choice of the matrices $C_{AB}$ is not unique, and we get a family of deformed $\mathcal{N}=4k$ Poincar\'e supersymmetric mechanics. \\ Let us present other deformed $\mathcal{N}=2M$ Poincar\'e supersymmetric systems whose bosonic part differs from that of \eqref{osc} but nevertheless contains the oscillator potential. For this purpose we choose another Ansatz for the supercharges (in contrast with the previous case, $M$ is not restricted to be an even number): \be \widetilde{\mathcal{Q}}_A=Q_A+\imath \omega S_A. \ee These supercharges generate the $su(1|M)$ superalgebra, and thus generalize the systems considered in \cite{ivanovsidorov} to arbitrary $M$: \begin{align} &\{\widetilde{\mathcal{Q}}_A,\bar{\widetilde{{\mathcal{Q}}}}_B\}= \mathcal{H}_{osc}\delta_{AB}-\omega\mathcal{R}^A_B, \qquad\{\widetilde{\mathcal{Q}}_A,\widetilde{\mathcal{Q}}_B\}=0,\qquad \{\mathcal{R}_A^{\;B},\mathcal{R}_C^{\;D}\}=\imath (\mathcal{R}_A^{\;D}\de^B_C -\mathcal{R}_C^{\;B} \de^D_A) &\label{oscQ1}\\ & \{ \widetilde{\mathcal{Q}}_A,\mathcal{R}^C_B\}=\imath\left(\frac{1}{M}\widetilde{\mathcal{Q}}_A \delta_{B \bar C}+ \widetilde{\mathcal{Q}}_B \delta_{A \bar C}\right) ,\qquad \{ \widetilde{\mathcal{Q}}_A,\mathcal{H}_{osc}\}=\imath\omega\frac{2M-1}{M}\widetilde{\mathcal{Q}}_A, &\label{oscQ2} \end{align} where \be \mathcal{H}_{osc}:= H_{osc} - \omega(I+\frac1M \sum_{C}R_{C\bar C}),\qquad \mathcal{R}_A^{\;B}:=R_{A\bar B}- \frac1M \delta_{A}^{B}\sum_{C}R_{C\bar C}, \label{tho}\ee with $I$ defined by \eqref{casimir}. Hence, the Hamiltonian acquires an additional bosonic term proportional to the square root of the Casimir of the conformal algebra. In canonical coordinates \eqref{cc} it reads \be \mathcal{H}_{osc}=\frac{p_r^2}{2}+\frac{\mathcal{I}}{r^2}+\frac{\omega^2 r^2}{2} -\omega\left( \sqrt{2\mathcal{I}}+\frac{1}{M}(\bar \chi \chi)\right). \ee This Hamiltonian seemingly describes oscillator-like systems in the presence of an external magnetic field. So, choosing $\widetilde{\mathbb{CP}}^{N|M}$ as the phase superspace, we can easily construct superintegrable oscillator-like systems which possess a deformed $\mathcal{N}=2M$, $d=1$ Poincar\'e supersymmetry. \subsection{Coulomb-like systems} Now let us construct, on the phase superspace $\widetilde{\mathbb{CP}}^{N|M}$ with the Poisson bracket relations \eqref{pbb}, the Coulomb-like system given by the Hamiltonian \be H_{Coul}=H+\frac{\gamma}{\sqrt{2 K}}, \ee where the generators $H,K$ are defined by \eqref{b1}. The bosonic constants of motion of this system are given by the $u(N-1)$ symmetry generators $h_{\a\bar\b}$, and by the $N-1$ additional constants of motion \be R_{\alpha}=H_{\alpha}+\imath \gamma\frac{ K_{\alpha }}{I\sqrt{2 K}}\;:\quad\{H_{Coul},R_{\alpha}\}=\{H_{Coul},h_{\alpha\bar \beta}\}=0, \label{Coul}\ee where $H_\alpha, K_\alpha, h_{\alpha\bar\beta}$ are defined by \eqref{b2}.
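Again, as an elementary illustration (a special case spelled out for orientation, with the canonical coordinates \eqref{cc}): for $N=1$, $M=0$ one has $I=g$ and $\sqrt{2K}=r$, so that \be H_{Coul}=\frac{p_r^2}{2}+\frac{g^2}{2r^2}+\frac{\gamma}{r}, \ee i.e. the radial Coulomb-like Hamiltonian on the half-line $r>0$.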
The integrals $R_\alpha$ and $h_{\alpha\bar\beta}$ form the algebra \be \{R_\alpha,\bar R_{\bar\beta}\}=-\imath\delta_{\alpha \bar \beta}\Bigg(H_{Coul}-\frac{\imath\gamma^2}{2I^2}\Bigg)+\frac{\imath\gamma^2 h_{\alpha\bar\beta}}{2I^3},\quad \{h_{\alpha\bar\beta},R_\gamma\}=\imath \delta_{\gamma \bar \beta}R_{\alpha},\quad \{R_\alpha,R_\beta\}=0. \ee Besides, the proposed system has $2M$ fermionic constants of motion given by $\Theta_{A\bar\alpha}$, and the $u(M)$ R-symmetry given by $R_{A\bar B}$. Hence, it is superintegrable in the sense of the super-Liouville theorem \cite{shander}. So, we have constructed the maximally superintegrable Coulomb problem with dynamical $SU(1,N|M)$ superconformal symmetry which inherits all symmetries of the initial bosonic system. \\ One could expect that, in analogy with the oscillator-like system, our Coulomb-like system possesses a (deformed) $\mathcal{N}=2M$ super-Poincar\'e symmetry for $M=2k$ and $\gamma >0$. However, this is not the case. Indeed, let us choose the following Ansatz for the supercharges: \be \mathcal{Q}_A=Q_A+\sqrt{2\gamma} C_{A B}\frac{ {\bar S}_B}{(2K)^{3/4}}, \ee with the constant matrix $C_{A B}$ obeying the conditions \eqref{C}, $M=2k$ and $\gamma>0$. Calculating their Poisson brackets, we find \begin{align} &\{\mathcal{Q}_A,\bar {\mathcal{Q}}_B\}= H_{Coul} \de_{A\bar B}+\frac{3}{2}\frac{\sqrt{2\g}}{(2K)^{7/4}}\left( S_A\bar C_{BD} S_D +\bar S_B C_{AD}\bar S_D \right),&\\ & \{\mathcal{Q}_A,\mathcal{Q}_B\}= -\frac{\imath\sqrt{2\g}}{2(2K)^{3/4}}(C_{BD}\mathcal{R}_{A}^{\;D}+C_{AD}\mathcal{R}_{B}^{\;D}),\quad \{\mathcal{Q}_A, \mathcal{R}_{B}^{\;C}\}=-\imath \mathcal{Q}_{B}\de_{A\bar C} , \end{align} where $\mathcal{R}^A_B$ is defined in \eqref{tho}. Further calculating the Poisson brackets of $\mathcal{Q}_A$ with the generators appearing on the r.h.s. of the above expressions, we find that the superalgebra is not closed. For example, \be \{\mathcal{Q}_A, H_{Coul}\}=\frac{3\g}{(2K)^{3/2}} S_A+\frac{\sqrt{2\g}}{(2K)^{3/4}}C_{AB}\left( \mathcal{\bar Q}_B -\frac{3}{4K} \bar S_B D \right). \ee Hence, the proposed supercharges do not yield a closed deformation of the $\mathcal{N}=2M$ super-Poincar\'e algebra.\\ Let us choose another Ansatz for the supercharges (as above, we assume $\gamma >0$): \be \widetilde{\mathcal{Q}}_A=Q_A+ \imath\sqrt{2\gamma} e^{\imath \frac{\pi}{2}}\frac{S_A}{(2K)^{3/4}}, \ee which yields \be \{\widetilde{\mathcal{Q}}_A,\bar{\widetilde{\mathcal{Q}}}_B\}= \mathcal{H}_{Coul}\de_{A\bar B}+ \frac{\sqrt{2\g}}{2(2K)^{3/4}}\mathcal{R}^B_A , \qquad \{\widetilde{\mathcal{Q}}_A,\widetilde{\mathcal{Q}}_B\}=0,\qquad \{\widetilde{\mathcal{Q}}_A,\mathcal{R}^C_B \} =\imath\left(\frac1M \widetilde{\mathcal{Q}}_A\de_{B \bar C} -\widetilde{\mathcal{Q}}_B\de_{A\bar C}\right), \ee where \be \mathcal{H}_{Coul}=H_{Coul}-\frac{\sqrt{2\g}}{(2K)^{3/4}}\left( I-\frac{1}{2M}\sum_C R_{C\bar C}\right), \ee with $I$ and $\mathcal{R}^A_B$ defined, respectively, in \eqref{casimir2} and \eqref{tho}. In canonical coordinates \eqref{cc} this Hamiltonian reads \be \mathcal{H}_{Coul}=\frac{p_r^2}{2}+\frac{\mathcal{I}}{r^2}+\frac{\gamma}{r}-\frac{\sqrt{2\gamma}}{r^{3/2}} \left(g+\sum_\alpha \pi_\alpha+\frac{2M-1}{2M}(\bar \chi \chi)\right). \ee However, one can easily check that these supercharges do not yield a closed deformation of the Poincar\'e superalgebra either, e.g.
\be \{ \widetilde{\mathcal{Q}}_A, \frac{ \mathcal{R}^C_B }{(2K)^{3/4}}\}= \frac{\imath}{ (2K)^{3/4}}\left(\frac1M \widetilde{\mathcal{Q}}_A\de_{B \bar C} -\widetilde{\mathcal{Q}}_B\de_{A\bar C}\right)+\frac{3 }{2}\frac{S_A}{(2K)^{7/4}}\mathcal{R}_B^C. \ee So, the proposed superextensions of Coulomb-like systems, while well-defined from the viewpoint of superintegrability, possess neither $\mathcal{N}=2M$ supersymmetry nor its deformation. The $su(1,N|M)$ superalgebra plays the role of the dynamical algebra of these systems. \section{Fubini-Study-like K\"ahler structure} The super-K\"ahler structure considered above is obviously the higher-dimensional super-analogue of the Klein model of the Lobachevsky space. On the other hand, the Lobachevsky space admits another common parametrization as well, known as the Poincar\'e disc \cite{dnf}. The higher-dimensional generalization of the Poincar\'e disc parameterizing the noncompact complex projective space is quite similar to the Fubini-Study structure for $\mathbb{CP}^N$. It is defined by the K\"ahler potential \be \mathcal{K}=-g\log( 1-\sum_{a=1}^N z^a\bar z^a). \ee To obtain the super-analogue of this potential from $\mathbb{C}^{1,N|M}$, one should pass from the matrix \eqref{mat0} to the diagonal matrix $\gamma_{a\bar b}=\diag{1,-1,\ldots,-1}$. This can be done by the transformation \be v^0\to\frac{v^0+v^N}{\sqrt{2}},\qquad v^N\to \frac{v^0-v^N}{\imath \sqrt{2}}. \label{K-P} \ee On the reduced phase space, \eqref{K-P} corresponds to the transformation \be w\to\imath\frac{z^N-1}{z^N+1}, \qquad z^\a \to \sqrt{2}\frac{z^\a}{z^N+1},\qquad\th^A\to\sqrt{2}\frac{\th^A}{z^N+1}. \ee Thus we get the Fubini-Study-like K\"ahler potential \be \mathcal{K}=-g \log(1-z^c \bar z^c + \imath \th^C \bar \th^C), \label{PoinPot}\ee which defines the following K\"ahler structure \be \Omega= \frac{\imath}{g}\left[\left(\frac{g \de_{a\bar b}}{\tilde{A}}+ \frac{\bar z^a z^b}{\tilde{A}^2}\right) dz^a \wedge d\bar z^b + \frac{\imath\bar \th^A z^a}{\tilde{A}^2} d\th^A \wedge d\bar z^a -\frac{\imath \bar z^a \th^A}{\tilde{A}^2} dz^a\wedge d\bar \th^A - \left(\frac{g\de_{A\bar B}}{\tilde{A}}+\frac{\bar \th^A \th^B}{\tilde{A}^2}\right)d\th^A \wedge d\bar \th^B \right], \ee where we have used notation similar to \eqref{A}: \be \tilde{A}:=\frac{1-z^c \bar z^c + \imath \th^C \bar \th^C}{g}. \label{Atilde}\ee The respective Poisson brackets read \be \{z^a,\bar z^b\} =\imath {\tilde A}\left( {\de^{a\bar b}-z^a\bar z^b }\right), \qquad \{z^a,\bar \th^A\}=\imath {\tilde A} {z^a \bar \th^A}, \qquad \{\th^A, \bar \th^B\}= {\tilde A}\left({\de^{A\bar B}+\th^A\bar \th^B }\right). \ee Now let us introduce canonical coordinates, this time taking the symplectic/K\"ahler one-form associated with the K\"ahler potential \eqref{PoinPot}, i.e. the one that defines the ``Fubini-Study''-like metric. Then, as before, one needs to identify it with the canonical one-form, and these canonical coordinates will play the role of ``Cartesian'' coordinates instead of the ``spherical'' ones discussed above: \be \tilde{\mathcal{A}}= -\frac{g}{2}\frac{\imath (\bar z^a d{z}^a -{z}^a d \bar {z}^a)+ {\th}^A d\bar {\th}^A +\bar {\th}^A d{\th}^A}{1- {z}^c \bar {z}^c +\imath {\th}^C \bar {\th}^C} := p_ad\varphi_a +\frac{1}{2}{\chi}^A d\bar {\chi}^A + \frac{1}{2}\bar {\chi}^A d{\chi}^A.
\ee This leads to the relations \be {z}^a = \sqrt{\frac{p_a}{g+p-\imath {\chi}^C \bar {\chi}^C}} e^{\imath \varphi_a}, \qquad {\th}^A = \frac{\sqrt{2}}{r}{\chi}^A, \qquad p=\sum_a p_a \;, \label{fsc}\ee or \be p_a=\frac{z^a\bar z^a}{\tilde{A}},\qquad \varphi_a=\arg (z^a),\qquad \chi^A=\frac{\th^A}{\sqrt{\tilde{A}}}, \ee where $\tilde{A}$ is defined by \eqref{Atilde}. These coordinates are related to \eqref{cc} as follows: \be p_\a =\pi_\a,\quad p_N= \frac{1}{4}\left(p^2_r + \left(r-\frac{\sqrt{2\mathcal{I}}}{r}\right)^2\right), \quad \varphi_N = \arctan\left(\frac{2xy}{(x-y)(x+y)}\right), \ee where \be x=1-\frac{p_r^2}{r^2}-\frac{2\mathcal{I}}{r^4}, \quad y=\frac{p_r}{r}, \ee while ${\chi}^A$ and $\varphi_\alpha$ remain unchanged in the transition from one parameterization to the other. Finally, let us draw the reader's attention to the complete similarity of the bosonic part of \eqref{fsc} with the equations mapping the compactified Ruijsenaars-Schneider model with excluded centre of mass to the complex projective (phase) space $\mathbb{CP}^N$. This prompts us, first, to construct the conformally invariant analogue of that model by replacing the complex projective space with its noncompact analogue $\widetilde{\mathbb{CP}}^N$. Then one can try to construct its $su(1,N|M)$-superconformal extension by further replacing $\widetilde{\mathbb{CP}}^N$ with $\widetilde{\mathbb{CP}}^{N|M}$. \section{Concluding remarks} In this paper we suggested constructing $su(1,N|M)$-superconformal mechanics by formulating it on the phase superspace given by the non-compact analogue of the complex projective superspace, $\widetilde{\mathbb{CP}}^{N|M}$. The $su(1,N|M)$ symmetry generators were defined there as Killing potentials of $\widetilde{\mathbb{CP}}^{N|M}$. We parameterized this phase space by specific coordinates which allow one to interpret it as a higher-dimensional super-analogue of the Lobachevsky plane parameterized by the lower half-plane (Klein model). Then we passed to the canonical coordinates corresponding to the known separation of the ``radial'' and ``angular'' parts of (super)conformal mechanics. Relating the ``angular'' coordinates to action-angle variables, we demonstrated that the proposed scheme allows one to construct the $su(1,N|M)$-superconformal extensions of a wide class of superintegrable systems. We also proposed superintegrable oscillator- and Coulomb-like systems with an $su(1,N|M)$ dynamical superalgebra, and found that the oscillator-like systems admit deformed $\mathcal{N}=2M$ Poincar\'e supersymmetry, in contrast with the Coulomb-like ones. In fact, the proposed scheme demonstrates the effectiveness of supersymmetrization via formulating the initial systems on a K\"ahler phase space and then constructing the superextension of the latter. In order to relate the considered systems to conventional ones (with Euclidean configuration spaces), we restricted ourselves to the non-compact complex projective superspace. We are therefore confident that, applying the same approach to conventional (compact) complex projective spaces, one can find many new integrable systems as well and construct their, perhaps unexpected, extended supersymmetric counterparts. The proposed scheme can obviously be extended to systems on complex Grassmannians (and on their noncompact analogues). In particular, we expect to find, in this way, the $\mathcal{N}$-supersymmetric extensions of compactified spin-Ruijsenaars-Schneider models.
Moreover, it seems to be a straightforward task to apply the proposed approach to systems with generic $U(N)$-invariant K\"ahler phase spaces locally defined by a K\"ahler potential $\mathcal{K}\left( z^a\bar z^a\right)$. We expect that this can be done in terms of the K\"ahler phase superspace locally defined by the potential \be \widetilde{\mathcal{K}}=\mathcal{K}\left( z^a\bar z^a+\imath \eta^A\bar\eta^A\right). \ee In this way we expect to construct the $\mathcal{N}=2M$ supersymmetric extensions of systems with curved (Riemannian) configuration spaces as well, in particular, of the so-called $\kappa$-deformations (i.e. spherical/hyperbolic generalizations) of conformal mechanics, oscillator and Coulomb systems \cite{ranada,shmavon}. Finally, notice that the considered phase superspace is not associated with the exterior algebra of the initial bosonic manifold, and thus it is not related to the superfield approach. Hence, it would be interesting to consider systems with $(N|kM)$-dimensional K\"ahler phase superspaces defined by the potentials \be \widetilde{\mathcal{K}}=\mathcal{K} \left( z^a\bar z^a\right)+ F \left(\imath g_{a\bar b}\eta^a_\alpha\bar\eta^b_\alpha \right) ,\qquad F'(0)={\rm const}, \ee and to construct, in this way, the $\mathcal{N}=kN$ supersymmetric mechanics. A very preliminary attempt in this direction was made in \cite{npps}, where the $\mathcal{N}=2$ supersymmetric extensions of the systems with generic K\"ahler phase space were considered. However, this promising direction has not been developed further since then. We plan to consider the listed problems elsewhere. \acknowledgements This work was supported by the Russian Foundation of Basic Research grant 20-52-12003 (S.K., A.N.) and by the Armenian Science Committee projects 20RF-023, 21AG-1C062 (E.Kh., A.N.) and 21AA-1C001 (E.Kh.). The work of E.Kh. was completed within the Regional Doctoral Program on Theoretical and Experimental Particle Physics sponsored by VolkswagenStiftung, and within the ICTP Network Program NT-04.
\section{Introduction} Consider a normal-form game with loss function $\ell^o$. This is the ``original game.'' As an example, the volunteer's dilemma (see Table~\ref{tab:VD}) has each player choose whether or not to volunteer for a cause that benefits all players. It is known that all pure Nash equilibria in this game involve a subset of the players free-riding on the contributions of the remaining players. $M$ players, who initially do not know $\ell^o$, use no-regret algorithms to individually choose their action in each of the $t=1 \ldots T$ rounds. The players receive limited feedback: suppose the chosen action profile in round $t$ is $a^t=(a^t_1, \ldots, a^t_M)$; then the $i$-th player only receives her own loss $\ell^o_i(a^t)$ and does not observe the other players' actions or losses. Game redesign is the following task. A game designer -- not a player -- does not like the solution to $\ell^o$. Instead, the designer wants to incentivize a particular target action profile $a^\dagger$, for example ``every player volunteers''. The designer has the power to redesign the game: before each round $t$ is played, the designer can change $\ell^o$ to some $\ell^t$. The players will receive the new losses $\ell^t_i(a^t)$, but the designer pays a design cost $C(\ell^o, \ell^t, a^t)$ for that round for deviating from $\ell^o$. The designer's goal is to make the players play the target action profile $a^\dagger$ in the vast majority ($T-o(T)$) of rounds, while the designer only pays $o(T)$ cumulative design cost. Game redesign naturally emerges in two opposing contexts: \begin{itemize} \item A benevolent designer wants to redesign the game to improve social welfare, as in the volunteer's dilemma; \item A malicious designer wants to poison the payoffs to force a nefarious target action profile upon the players. This is an extension of reward-poisoning adversarial attacks (previously studied on bandits~\citep{jun2018adversarial,liu2019data,ma2018data,ming2020attack,guan2020robust, garcelon2020adversarial,bogunovic2021stochastic,zuo2020near,lu2021stochastic} and reinforcement learning~\citep{zhang2020adaptive,ma2019policy,rakhsha2020policy,sun2020vulnerability,huang2019deceptive}) to game playing. \end{itemize} For both contexts the mathematical question is the same. Since the design costs are measured by deviations from the original game $\ell^o$, the designer is not totally free in creating new games. Intuitively, the following considerations are sufficient for successful game redesign: \begin{enumerate}[leftmargin=*] \item Do not change the loss of the target action profile, i.e. let $\ell^t(a^\dagger)=\ell^o(a^\dagger), \forall t$. If game redesign is indeed successful, then $a^\dagger$ will be played for $T-o(T)$ rounds. As we will see, $\ell^t(a^\dagger)=\ell^o(a^\dagger)$ means there is no design cost in those rounds under our definition of $C$. The remaining rounds incur at most $o(T)$ cumulative design cost. \item The target action profile $a^\dagger$ forms a strictly dominant strategy equilibrium. This ensures no-regret players will eventually learn to prefer $a^\dagger$ over any other action profile. \end{enumerate} We formalize these intuitions in the rest of the paper. \section{The Game Redesign Problem} We first describe the original game without the designer. There are $M$ players. Let $\mathcal{A}_i$ be the finite action space of player $i$, and let $A_i=|\mathcal{A}_i|$.
The original game is defined by the loss function $\ell^o: \mathcal{A}_1 \times \ldots \times \mathcal{A}_M \mapsto \setfont{R}^M$. The players do not know $\ell^o$. Instead, we assume they play the game for $T$ rounds using no-regret algorithms. This may be the case, for example, if the players are learning an approximate Nash equilibrium in a zero-sum $\ell^o$ or a coarse correlated equilibrium in a general-sum $\ell^o$. In running the no-regret algorithm, the players maintain their own action selection policies $\pi_i^t\in \Delta^{\mathcal{A}_i}$ over time, where $\Delta^{\mathcal{A}_i}$ is the probability simplex over $\mathcal{A}_i$. In each round $t$, every player $i$ samples an action $a_i^t$ according to policy $\pi_i^t$. This forms an action profile $a^t=(a^t_1, \ldots, a^t_M)$. The original game produces the loss vector $\ell^o(a^t)=(\ell_1^o(a^t),..., \ell_M^o(a^t))$. However, player $i$ only observes her own loss value $\ell_i^o(a^t)$, not the other players' losses or their actions. All players then update their policies according to their no-regret algorithms. We now bring in the designer. The designer knows $\ell^o$ and wants players to frequently play an arbitrary but fixed target action profile $a^\dagger$. At the beginning of round $t$, the designer commits to a potentially different loss function $\ell^t$. Note this involves preparing the loss vector $\ell^t(a)$ for all action profiles $a$ (i.e. ``cells'' in the payoff matrix). The players then choose their action profile $a^t$. Importantly, the players receive losses $\ell^t(a^t)$, not $\ell^o(a^t)$. For example, in games involving money such as the volunteer game, the designer may achieve $\ell^t(a^t)$ via taxes or subsidies, and in zero-sum games such as the rock-paper-scissors game, the designer essentially ``makes up'' a new outcome and tells each player whether they win, tie, or lose via $\ell^t_i(a^t)$. The designer incurs a cost $C(\ell^o, \ell^t, a^t)$ for deviating from $\ell^o$. The interaction between the designer and the players is summarized below. \begin{algorithm} {Designer knows $\ell^o$, $a^\dagger$, $M$, $\mathcal{A}_1,\ldots,\mathcal{A}_M$, and the players' no-regret rate $\alpha$} \caption*{\textbf{Protocol}: Game Redesign} \label{prot:protocol} \begin{algorithmic} \FOR{$t=1,2,\ldots, T$} \STATE Designer prepares new loss function $\ell^t$.\label{protocol:design} \STATE Players form action profile $a^t=(a_1^t,...,a_M^t)$, where $a_i^t\sim\pi_i^t, \forall i\in [M]$. \STATE Player $i$ observes the new loss $\ell_i^t(a^t)$ and updates policy $\pi_i^t$. \STATE Designer incurs cost $C(\ell^o, \ell^t, a^t)$.\label{perturbation_implement} \ENDFOR \end{algorithmic} \end{algorithm} The designer has two goals simultaneously: \begin{enumerate}[leftmargin=*] \item To incentivize the players to frequently choose the target action profile $a^\dagger$ (which may not coincide with any solution of $\ell^o$). Let $N^T(a)=\sum_{t=1}^T \ind{a^t=a}$ be the number of times an action profile $a$ is chosen in $T$ rounds; then this goal is to achieve $\E{}{N^T(a^\dagger)}=T-o(T)$. \item To have a small cumulative design cost $C^T := \sum_{t=1}^T C(\ell^o, \ell^t, a^t)$, specifically $\E{}{C^T} = o(T)$. \end{enumerate} The per-round design cost $C(\ell^o, \ell^t, a)$ is application dependent. One plausible cost is to account for the ``proposed changes'' in all action profiles, not just what is actually chosen: an example is $C(\ell^o, \ell^t, a^t)=\sum_a \|\ell^o(a)-\ell^t(a)\|$. Note that it ignores the $a^t$ argument.
In many applications, though, only the chosen action profile costs the designer: an example is $C(\ell^o, \ell^t, a^t)= \|\ell^o(a^t)-\ell^t(a^t)\|$. This paper uses a slight generalization of the latter cost: \begin{assumption} The non-negative designer cost function $C$ satisfies $\forall t, \forall a^t, C(\ell^o, \ell^t, a^t)\le \eta\|\ell^o(a^t)-\ell^t(a^t)\|_p$ for some Lipschitz constant $\eta$ and norm $p\ge 1$. \end{assumption} This implies no design cost if the losses are not modified, i.e., $C(\ell^o, \ell^t, a^t)=0$ whenever $\ell^o(a^t)=\ell^t(a^t)$. \section{Assumptions on the Players: No-Regret Learning} \label{sec:no-regret-learning} The designer assumes that the players are each running a no-regret learning algorithm such as EXP3.P~\citep{bubeck2012regret}. It is well known that for two-player ($M=2$) zero-sum games, no-regret learners can approximate a Nash equilibrium~\citep{blumlearning}. More general results suggest that for multi-player ($M\ge 2$) general-sum games, no-regret learners can approximate a coarse correlated equilibrium~\citep{hart2000simple}. We first define the player's regret. We use $a_{-i}^t$ to denote the actions selected by all players except player $i$ in round $t$. \begin{definition}{(Regret)}\label{def:regret} For any player $i$, the best-in-hindsight regret with respect to a sequence of loss functions $\ell_i^t(\cdot, a_{-i}^t), t\in [T]$, is defined as \begin{equation}\label{eq:regret} R_i^T=\sum_{t=1}^T \ell_i^t(a_i^t, a_{-i}^t) - \min_{a_i\in \mathcal{A}_i} \sum_{t=1}^T \ell_i^t(a_i, a_{-i}^t). \end{equation} The expected regret is defined as $\E{}{R_i^T}$, where the expectation is taken with respect to the randomness in the selection of actions $a^t, t\in [T]$ over all players. \end{definition} \begin{remark} The loss functions $\ell_i^t(\cdot, a_{-i}^t), t\in [T]$ depend on the actions selected by the other players $a_{-i}^t$, while $a_{-i}^t$ further depends on $a^1,...,a^{t-1}$ of all players in the first $t-1$ rounds. Therefore, $\ell_i^t(\cdot, a_{-i}^t)$ depends on $a^1_i,...,a^{t-1}_i$. That means, from player $i$'s perspective, the player is faced with a non-oblivious (adaptive) adversary~\citep{slivkins2019introduction}. \end{remark} \begin{remark} Note that $a_i^* :=\ensuremath{\mbox{argmin}}_{a_i\in \mathcal{A}_i} \sum_{t=1}^T \ell_i^t(a_i, a_{-i}^t)$ in~\eqref{eq:regret} would have meant a baseline in which player $i$ always plays the best-in-hindsight action $a_i^*$ in all rounds $t \in [T]$. Such a baseline action would have caused all other players to change their plays away from $a^1_{-i}, \ldots, a^T_{-i}$. However, we disregard this fact in defining~\eqref{eq:regret}. For this reason,~\eqref{eq:regret} is not fully counterfactual, and is called the best-in-hindsight regret in the literature~\citep{bubeck2012regret}. The same is true when we define expected regret and introduce randomness in players' actions $a^t$. \end{remark} Our key assumption is that the learners achieve sublinear expected regret. This assumption is satisfied by standard bandit algorithms such as EXP3.P~\citep{bubeck2012regret}. \begin{assumption}{(No-regret Learner)} We assume the players apply no-regret learning algorithms that achieve expected regret $\E{}{R_i^T}=O(T^\alpha), \forall i$ for some $\alpha\in[0, 1)$. \end{assumption} \section{Game Redesign Algorithms} There is an important consideration regarding the allowed values of $\ell^t$. The original game $\ell^o$ has a set of ``natural loss values'' $\L$.
For example, in the rock-paper-scissors game $\L=\{-1,0,1\}$ for when the player wins (recall the value is a loss), ties, and loses, respectively; while for games involving money it is often reasonable to take $\L$ to be some interval $[L, U]$. Ideally, $\ell^t$ should take values in $\L$ to match the semantics of the game or to avoid suspicion (in the attack context). Our designer can work with a discrete $\L$ (Section~\ref{sec:attack_discrete_value}); but for exposition we will first allow $\ell^t$ to take real values in $\tilde \L =[L, U]$, where $L=\min_{x\in \L} x$ and $U=\max_{x\in \L} x$. We assume $U$ and $L$ are the same for all players and $U>L$, which is satisfied when $\L$ contains at least two distinct values. \subsection{Algorithm: Interior Design} The name refers to the narrow applicability of Algorithm~\ref{alg:interior_design}: the original game values for the target action profile $\ell^o(a^\dagger)$ must all be in the interior of $\tilde \L$. Formally, we require $\exists \rho\in (0, \frac{1}{2}(U-L)]$, $\forall i, \ell_i^o(a^\dagger)\in[L+\rho, U-\rho]$. In Algorithm~\ref{alg:interior_design}, we present the interior design. The key insight of Algorithm~\ref{alg:interior_design} is to keep $\ell^o(a^\dagger)$ unchanged: if the designer is successful, $a^\dagger$ will be played for $T-o(T)$ rounds. In these rounds, the design cost will be zero. The other $o(T)$ rounds each incur bounded cost. Overall, this will ensure cumulative design cost $C^T = o(T)$. For the attack to be successful, the designer can make $a^\dagger$ a strictly dominant strategy profile in the new game $\ell$. The designer can do this by judiciously increasing or decreasing the loss of other action profiles in $\ell^o$: there is enough room because $\ell^o(a^\dagger)$ is in the interior. In fact, the designer can use a single time-invariant game $\ell^t=\ell$, as Algorithm~\ref{alg:interior_design} shows. \begin{algorithm} \caption{Interior Design} \label{alg:interior_design} \begin{algorithmic} \REQUIRE the target action profile $a^\dagger$; the original game $\ell^o$. \ENSURE a time-invariant game $\ell$ constructed as follows: \STATE \begin{equation}\label{eq:interior_design} \forall i, a, \ell_i(a)=\left\{ \begin{array}{ll} \ell_i^o(a^\dagger)- (1-\frac{d(a)}{M})\rho & \mbox{ if } a_i= a_i^\dagger, \\ \ell_i^o(a^\dagger)+\frac{d(a)}{M}\rho& \mbox{ if } a_i\neq a_i^\dagger, \end{array} \right. \end{equation} where $d(a)=\sum_{j=1}^M \ind{a_j=a_j^\dagger}$. \end{algorithmic} \end{algorithm} \begin{restatable}{lemma}{lemPostAttackCost} \label{lem:post-attack-cost} The redesigned game~\eqref{eq:interior_design} has the following properties. \begin{enumerate}[leftmargin=*] \item $ \forall i, a, \ell_i(a)\in \tilde \L$, thus $\ell$ is valid.\label{lem:post-attack-cost-prop1} \item For every player $i$, the target action $a_i^\dagger$ strictly dominates any other action by $(1-\frac{1}{M})\rho$, i.e., $\ell_i(a_i, a_{-i})= \ell_i(a_i^\dagger, a_{-i})+(1-\frac{1}{M})\rho, \forall i, a_i\neq a_i^\dagger, a_{-i}$.\label{lem:post-attack-cost-prop2} \item $\ell(a^\dagger)=\ell^o(a^\dagger)$.\label{lem:post-attack-cost-prop3} \item If the original loss for the target action profile $\ell^o(a^\dagger)$ is zero-sum, then the redesigned game $\ell$ is also zero-sum.\label{lem:post-attack-cost-prop4} \end{enumerate} \end{restatable} The proof is in the appendix. Our main result is that Algorithm~\ref{alg:interior_design} can achieve $\E{}{N^T(a^\dagger)}=T-O(T^\alpha)$ with a small cumulative design cost $\E{}{C^T} = O(T^\alpha)$.
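For concreteness, the construction~\eqref{eq:interior_design} takes only a few lines of code. The following sketch is a hypothetical illustration (the function and variable names are ours, not from any released implementation); it stores a game as a dictionary mapping each action profile, a tuple of length $M$, to the vector of per-player losses.
\begin{verbatim}
import numpy as np

def interior_design(ell_o, a_dagger, rho):
    # ell_o: dict mapping an action profile (tuple of length M)
    #        to an np.array of M per-player losses
    # a_dagger: target action profile; rho: interior margin
    M = len(a_dagger)
    target_loss = ell_o[a_dagger]
    ell = {}
    for a in ell_o:
        # d(a) = number of players who play their target action
        d = sum(a_j == t_j for a_j, t_j in zip(a, a_dagger))
        loss = np.empty(M)
        for i in range(M):
            if a[i] == a_dagger[i]:   # reward playing the target action
                loss[i] = target_loss[i] - (1 - d / M) * rho
            else:                     # penalize deviating from it
                loss[i] = target_loss[i] + (d / M) * rho
        ell[a] = loss
    return ell
\end{verbatim}
On small examples one can check directly that any unilateral deviation from $a_i^\dagger$ increases player $i$'s loss by exactly $(1-\frac{1}{M})\rho$, in agreement with property~\ref{lem:post-attack-cost-prop2} of~\lemref{lem:post-attack-cost}.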
It is worth noting that even though many entries in the redesigned game $\ell$ can appear to be quite different from the original game $\ell^o$, their contribution to the design cost is small because the design discourages them from being played often. \begin{restatable}{theorem}{thmAttackVersionOne} \label{thm:attack_version_01} A designer that uses Algorithm~\ref{alg:interior_design} can achieve expected number of target plays $\E{}{N^T(a^\dagger)}=T-O(MT^\alpha)$ while incurring expected cumulative design cost $\E{}{C^T}=O(\eta M^{1+\frac{1}{p}}T^\alpha)$. \end{restatable} \begin{proof} Since the designer perturbs $\ell^o(\cdot)$ to $\ell(\cdot)$, the players are equivalently running no-regret algorithms under loss function $\ell$. Note that according to~\lemref{lem:post-attack-cost} property~\ref{lem:post-attack-cost-prop2}, $a_i^\dagger$ is the optimal action for player $i$, and taking a non-target action results in $(1-\frac{1}{M})\rho$ regret regardless of $a_{-i}$; thus the expected regret of player $i$ is \begin{equation} \begin{aligned} &\E{}{R_i^T} = \E{}{\sum_{t=1}^T \ind{a_i^t\neq a_i^\dagger}(1-\frac{1}{M})\rho}\\ &= (1-\frac{1}{M})\rho \left(T-\E{}{ N_i^T(a_i^\dagger)}\right). \end{aligned} \end{equation} Rearranging, we have \begin{equation}\label{eq:bound_Na_01} \forall i, \E{}{N_i^T(a_i^\dagger)}= T-\frac{M}{(M-1)\rho}\E{}{R_i^T}. \end{equation} Applying a union bound over $M$ players, \begin{equation}\label{eq:bound_Na_02} \begin{aligned} &T-\E{}{N^T(a^\dagger)}=\E{}{\sum_{t=1}^T\ind{a^t\neq a^\dagger}}\\ &=\E{}{\sum_{t=1}^T\ind{a_j^t\neq a_j^\dagger \text{ for some } j}}\le \E{}{\sum_{t=1}^T \sum_{j=1}^M \ind{a_j^t\neq a_j^\dagger}}\\ &= \sum_{j=1}^M \E{}{\sum_{t=1}^T\ind{a_j^t\neq a_j^\dagger}}=\sum_{j=1}^M \left(T-\E{}{N_j^T(a_j^\dagger)}\right)\\ &=\sum_{j=1}^M\frac{M}{(M-1)\rho}\E{}{R_j^T}=\frac{M^2}{(M-1)\rho} O(T^\alpha)=O(MT^\alpha), \end{aligned} \end{equation} where the second-to-last equation is due to the no-regret assumption on the learners. Therefore, we have $\E{}{N^T(a^\dagger)}= T-O(M T^{\alpha})$. Next we bound the expected cumulative design cost. Note that by design $\ell^o(a^\dagger)=\ell(a^\dagger)$, thus when $a^t=a^\dagger$ by our assumption on the cost function we have $C(\ell^o, \ell, a^t)=0$. On the other hand, when $a^t\neq a^\dagger$, by the Lipschitz condition on the cost function we have $C(\ell^o, \ell, a^t)\le \eta M^{\frac{1}{p}}(U-L)$. Therefore, the expected cumulative design cost is \begin{equation} \begin{aligned} &\E{}{C^T}=\E{}{\sum_{t=1}^T C(\ell^o, \ell, a^t)}\\ &\le \eta M^{\frac{1}{p}}(U-L)\E{}{\sum_{t=1}^T\ind{a^t\neq a^\dagger}}\\ &=\eta M^{\frac{1}{p}}(U-L)\left(T-\E{}{N^T(a^\dagger)}\right)\\ &=\eta M^{\frac{1}{p}}(U-L)\sum_{j=1}^M\frac{M}{(M-1)\rho}\E{}{R_j^T}=O(\eta M^{1+\frac{1}{p}}T^\alpha), \end{aligned} \end{equation} where the last equality used~\eqref{eq:bound_Na_02}. \end{proof} We have two corollaries from~\thmref{thm:attack_version_01}. First, the standard no-regret algorithm EXP3.P~\citep{bubeck2012regret} achieves $\E{}{R_i^T}=O(T^{\frac{1}{2}})$. Therefore, by plugging $\alpha=\frac{1}{2}$ into~\thmref{thm:attack_version_01} we have: \begin{corollary} If the players use EXP3.P, the designer can achieve expected number of target plays $\E{}{N^T(a^\dagger)}=T-O(MT^\frac{1}{2})$ while incurring expected cumulative design cost $\E{}{C^T}=O(\eta M^{1+\frac{1}{p}}T^{\frac{1}{2}})$.
\end{corollary} If the original game $\ell^o$ is two-player zero-sum, then the designer can also make the players think that $a^\dagger$ is a pure Nash equilibrium. \begin{restatable}{corollary}{ColNE} Assume $M=2$ and the original game $\ell^o$ is zero-sum. Then with the redesigned game $\ell$~\eqref{eq:interior_design}, the expected averaged policy $\E{}{\bar \pi^T_i}=\E{}{\frac{1}{T}\sum_t \pi_i^t}$ converges to a point mass on $a_i^\dagger$. \end{restatable} \begin{proof} The new game $\ell$ is also a two-player zero-sum game. Players applying no-regret algorithms will have their average policies $\E{}{\bar \pi^T}$ converging to an approximate Nash equilibrium. We use $\pi_i^t(a)$ to denote the probability of player $i$ choosing action $a$ at round $t$. Next we compute $\E{}{\bar \pi_i^T(a_i^\dagger)}$. Note that this expectation is with respect to all the randomness during game playing, including the selected actions $a^{1:T}$ and policies $\pi^{1:T}$. For any $t$, when we condition on $\pi^t$, we have $\E{}{\ind{a_i^t=a}\mid \pi^t}=\pi_i^t(a)$. Therefore, we have $\forall i$ \begin{equation} \begin{aligned} &\E{}{\bar \pi^T_i(a_i^\dagger)}=\frac{1}{T}\E{}{ \sum_{t=1}^T\pi_i^t(a_i^\dagger)}\\ &= \frac{1}{T}\E{\pi^{1:T}}{\sum_{t=1}^T\E{a^t}{\ind{a_i^t=a_i^\dagger}\mid \pi^t}}\\ &=\frac{1}{T}\E{\pi^{1:T}}{\E{a^{1:T}}{\sum_{t=1}^T\ind{a_i^t=a_i^\dagger}\mid \pi^{1:T}}}\\ &=\frac{1}{T}\E{\pi^{1:T}}{\E{a^{1:T}}{N_i^T(a_i^\dagger)\mid \pi^{1:T}}}\\ &=\frac{1}{T}\E{}{N_i^T(a_i^\dagger)}=\frac{T-O(T^\alpha)}{T}\rightarrow 1. \end{aligned} \end{equation} Therefore, asymptotically the players believe that $a_i^\dagger, i\in [M]$ form a Nash equilibrium. \end{proof} \subsection{Boundary Design} When the original game has some $\ell^o(a^\dagger)$ values hitting the boundary of $\tilde \L$, the designer cannot apply Algorithm~\ref{alg:interior_design} directly, because the losses of other action profiles cannot be increased or decreased further to make $a^\dagger$ a dominant strategy profile. However, a time-varying design can still ensure $\E{}{N^T(a^\dagger)}=T-o(T)$ and $\E{}{C^T} = o(T)$. In Algorithm~\ref{alg:boundary_design}, we present the boundary design, which is applicable to both boundary and interior $\ell^o(a^\dagger)$ values. \begin{algorithm} \caption{Boundary Design} \label{alg:boundary_design} \begin{algorithmic}[1] \REQUIRE the target action profile $a^\dagger$; a loss vector $v\in \setfont{R}^M$ whose elements are in the interior, i.e., $\forall i, v_i \in [L+\rho, U-\rho]$ for some $\rho>0$; the regret rate $\alpha$; $\epsilon\in (0, 1-\alpha)$; the time step $t$. \ENSURE a time-varying game with loss $\ell^t$. \STATE Use $v$ in place of $\ell^o(a^\dagger)$ in~\eqref{eq:interior_design} and apply the interior design (Algorithm~\ref{alg:interior_design}). Call the resulting time-invariant game the ``source game'' $\underline \ell$. \STATE Define a ``destination game'' $\overline \ell$ where $\overline \ell(a)=\ell^o(a^\dagger), \forall a$. \STATE Interpolate the source and destination games: \begin{equation}\label{eq:boundary_design} \ell^t= w_t \underline\ell + (1-w_t)\overline \ell \end{equation} where \begin{equation}\label{eq:wt} w_t=t^{\alpha+\epsilon-1}. \end{equation} \end{algorithmic} \end{algorithm} The designer can choose an arbitrary loss vector $v$ as long as $v$ lies in the interior of $\tilde \L$. We give two exemplary choices of $v$.
\begin{enumerate}[leftmargin=*] \item Let the average player loss of $a^\dagger$ be $\bar \ell(a^\dagger)=\sum_{i=1}^M\ell_i^o(a^\dagger)/M$. If $\bar \ell(a^\dagger)\in(L, U)$, one could choose $v$ to be the constant vector with value $\bar \ell(a^\dagger)$. The nice property of this choice is that if $\ell^o$ is zero-sum, then $v$ is zero-sum; thus property~\ref{prop4:TVA} is satisfied and the redesigned game is zero-sum. However, note that when $\bar \ell(a^\dagger)$ does hit the boundary, the designer cannot choose this $v$. \item Choose $v$ to be the constant vector with value $(L+U)/2$. This choice is always valid, but may not preserve the zero-sum property of the original game unless $L=-U$. \end{enumerate} The designer applies the interior design on $v$ to obtain a ``source game'' $\underline \ell$. Note that the target action profile $a^\dagger$ strictly dominates in the source game. The designer also creates a ``destination game'' $\overline \ell(a)$ by repeating the $\ell^o(a^\dagger)$ entry everywhere. The boundary algorithm then interpolates between the source and destination games with a decaying weight $w_t$. Note that after interpolation~\eqref{eq:boundary_design}, the target $a^\dagger$ still strictly dominates, with a dominance gap proportional to $w_t$. We design the weight $w_t$ as in~\eqref{eq:wt} so that the cumulative sum of $w_t$ grows at rate $T^{\alpha+\epsilon}$, which is faster than the regret rate $T^\alpha$. This is a critical consideration to enforce frequent play of $a^\dagger$. Also note that asymptotically $\ell^t$ converges toward the destination game. Therefore, in the long run, when $a^\dagger$ is played the designer incurs diminishing cost, resulting in $o(T)$ cumulative design cost. \begin{restatable}{lemma}{lemPostAttackCostVersionTwo} \label{lem:post-attack-cost-version2} The redesigned game~\eqref{eq:boundary_design} has the following properties. \begin{enumerate}[leftmargin=*] \item $\forall i, a, \ell^t_i(a)\in\tilde \L$, thus the loss function is valid.\label{prop1:TVA} \item For every player $i$, the target action $a_i^\dagger$ strictly dominates any other action by $(1-\frac{1}{M})\rho w_t$, i.e., $\ell_i^t(a_i, a_{-i})= \ell_i^t(a_i^\dagger, a_{-i})+(1-\frac{1}{M})\rho w_t, \forall i, t, a_i\neq a_i^\dagger, a_{-i}$.\label{prop2:TVA} \item $\forall t, C(\ell^o, \ell^t, a^\dagger)\le \eta \|\ell^o(a^\dagger)-\ell^t(a^\dagger)\|_p = \eta w_t \|\ell^o(a^\dagger)-\underline\ell(a^\dagger)\|_p \le \eta (U-L)M^{\frac{1}{p}}w_t$.\label{prop3:TVA} \item If the original loss for the target action profile $\ell^o(a^\dagger)$ and the vector $v$ are both zero-sum, then $\forall t, \ell^t$ is zero-sum.\label{prop4:TVA} \end{enumerate} \end{restatable} Given~\lemref{lem:post-attack-cost-version2}, we provide our second main result. \begin{restatable}{theorem}{thmAttackVersionTwo} \label{thm:attack_version_2} $\forall \epsilon\in (0, 1-\alpha]$, a designer that uses Algorithm~\ref{alg:boundary_design} can achieve expected number of target plays $\E{}{N^T(a^\dagger)}=T-O(MT^{1-\epsilon})$ while incurring expected cumulative design cost $\E{}{C^T}=O(M^{1+\frac{1}{p}}T^{1-\epsilon}+M^{\frac{1}{p}}T^{\alpha+\epsilon})$. \end{restatable} \begin{remark} By choosing a larger $\epsilon$ in~\thmref{thm:attack_version_2}, the designer can increase $\E{}{N^T(a^\dagger)}$. However, the cumulative design cost can grow. The design cost attains the minimum order $O\left(M^\frac{1}{p}(1+M)T^{\frac{1+\alpha}{2}}\right)$ when $\epsilon=\frac{1-\alpha}{2}$.
The corresponding number of target action selections is $\E{}{N^T(a^\dagger)}=T-O(M T^{\frac{1+\alpha}{2}})$. \end{remark} \begin{proof} Under game redesign, the players are equivalently running no-regret algorithms over the game sequence $\ell^1, \ldots, \ell^T$ instead of $\ell^o(\cdot)$. By~\lemref{lem:post-attack-cost-version2} property~\ref{prop2:TVA}, $a_i^\dagger$ is always the optimal action for player $i$, and taking a non-target action results in $(1-1/M)\rho w_t$ regret regardless of $a_{-i}$; thus the expected regret of player $i$ is \begin{equation} \begin{aligned} &\E{}{R_i^T} = \E{}{\sum_{t=1}^T \ind{a_i^t\neq a_i^\dagger}(1-\frac{1}{M})\rho w_t}\\ &=(1-\frac{1}{M})\rho \E{}{\sum_{t=1}^T \ind{a_i^t\neq a_i^\dagger} w_t}. \end{aligned} \label{eq:ERiT} \end{equation} Now note that $ w_t=t^{\alpha+\epsilon-1}$ is monotonically decreasing in $t$, thus we have \begin{equation} \begin{aligned} &\sum_{t=1}^T \ind{a_i^t\neq a_i^\dagger} w_t\ge \sum_{t=N_i^T(a_i^\dagger)+1}^T t^{\alpha+\epsilon-1}\\ &=\sum_{t=1}^T t^{\alpha+\epsilon-1}-\sum_{t=1}^{N_i^T(a_i^\dagger)} t^{\alpha+\epsilon-1}. \end{aligned} \end{equation} Next, by examining the area under the curve, we obtain \begin{equation} \begin{aligned} \sum_{t=1}^T t^{\alpha+\epsilon-1}&\ge \int_{1}^{T} t^{\alpha+\epsilon-1} dt=\frac{1}{\alpha+\epsilon} T^{\alpha+\epsilon} - {1 \over \alpha+\epsilon}. \end{aligned} \end{equation} Similarly, we can also derive \begin{equation} \begin{aligned} \sum_{t=1}^{N_i^T(a_i^\dagger)} t^{\alpha+\epsilon-1}&\le \int_{0}^{N_i^T(a_i^\dagger)}t^{\alpha+\epsilon-1} dt=\frac{1}{\alpha+\epsilon}\left(N_i^T(a_i^\dagger)\right)^{\alpha+\epsilon}. \end{aligned} \end{equation} Therefore, we have \begin{equation}\label{eq:bound_ind} \begin{aligned} &\sum_{t=1}^T \ind{a_i^t\neq a_i^\dagger} w_t \ge \frac{1}{\alpha+\epsilon} \left(T^{\alpha+\epsilon}-\left(N_i^T(a_i^\dagger)\right)^{\alpha+\epsilon}\right) - {1 \over \alpha+\epsilon}\\ &=\frac{1}{\alpha+\epsilon}T^{\alpha+\epsilon}\left(1-(1-\frac{T-N_i^T(a_i^\dagger) }{T})^{\alpha+\epsilon}\right) - {1 \over \alpha+\epsilon}\\ &\ge \frac{1}{\alpha+\epsilon}T^{\alpha+\epsilon} \frac{T-N_i^T(a_i^\dagger) }{T}(\alpha+\epsilon)- {1 \over \alpha+\epsilon}\\ &=T^{\alpha+\epsilon}-T^{\alpha+\epsilon-1} N_i^T(a_i^\dagger)- {1 \over \alpha+\epsilon}. \end{aligned} \end{equation} The inequality follows from the fact that $(1-x)^c \le 1-cx$ for $x,c \in (0,1)$. Plugging this back into~\eqref{eq:ERiT}, we have \begin{equation} \begin{aligned} &\E{}{R_i^T}=(1-\frac{1}{M})\rho\E{}{\sum_{t=1}^T \ind{a_i^t\neq a_i^\dagger} w_t}\\ &\ge (1-\frac{1}{M})\rho\E{}{\left(T^{\alpha+\epsilon}-T^{\alpha+\epsilon-1} N_i^T(a_i^\dagger)- {1 \over \alpha+\epsilon}\right)}\\ &=(1-\frac{1}{M})\rho\left(T^{\alpha+\epsilon}-T^{\alpha+\epsilon-1}\E{}{N_i^T(a_i^\dagger)}- {1 \over \alpha+\epsilon}\right). \end{aligned} \end{equation} As a result, we have \begin{equation} \begin{aligned} \forall i, &\E{}{N_i^T(a_i^\dagger)}\ge T-\frac{M}{(M-1)\rho}\E{}{R_i^T} T^{1-\alpha-\epsilon}-\frac{1}{\alpha+\epsilon}T^{1-\alpha-\epsilon}\\ &=T-\frac{M}{(M-1)\rho}O(T^\alpha) T^{1-\alpha-\epsilon}-\frac{1}{\alpha+\epsilon}T^{1-\alpha-\epsilon}\\ &=T-O(T^{1-\epsilon})-O(T^{1-\alpha-\epsilon})\\ &=T-O(T^{1-\epsilon}). \end{aligned} \end{equation} By a union bound similar to~\eqref{eq:bound_Na_02}, we have $\E{}{N^T(a^\dagger)}=T-O(MT^{1-\epsilon})$. We now analyze the cumulative design cost.
Note that by~\lemref{lem:post-attack-cost-version2} property~\ref{prop3:TVA}, when $a^t=a^\dagger$, $C(\ell^o, \ell^t, a^t)\le \eta (U-L)M^{\frac{1}{p}} w_t$. On the other hand, when $a^t\neq a^\dagger$, we have \begin{equation} \begin{aligned} C(\ell^o, \ell^t, a^t)&\le \eta \|\ell^o(a^t)-\ell^t(a^t)\|_p\le \eta(U-L) M^{\frac{1}{p}}. \end{aligned} \end{equation} Therefore, the expected cumulative design cost is \begin{equation} \begin{aligned} &\E{}{C^T} \le \eta(U-L) M^{\frac{1}{p}}\E{}{\sum_{t=1}^T\ind{a^t\neq a^\dagger}}\\ &+\eta (U-L)M^{\frac{1}{p}}\E{}{\sum_{t=1}^T\ind{a^t= a^\dagger} w_t}\\ &\le \eta (U-L)M^{\frac{1}{p}}(T-\E{}{N^T(a^\dagger)})+\eta (U-L)M^{\frac{1}{p}}\sum_{t=1}^T w_t. \end{aligned} \end{equation} We have already proved that $T-\E{}{N^T(a^\dagger)}=O(MT^{1-\epsilon})$. Also note that \begin{equation} \begin{aligned} \sum_{t=1}^T w_t&=\sum_{t=1}^T t^{\alpha+\epsilon-1}\le \int_{0}^T t^{\alpha+\epsilon-1} dt=\frac{1}{\alpha+\epsilon} T^{\alpha+\epsilon}. \end{aligned} \end{equation} Therefore, we have \begin{equation} \begin{aligned} \E{}{C^T}&\le (U-L)\eta M^{\frac{1}{p}}O(MT^{1-\epsilon})+\frac{\eta (U-L)}{\alpha+\epsilon} M^{\frac{1}{p}} T^{\alpha+\epsilon}\\ &=O(M^{1+\frac{1}{p}}T^{1-\epsilon}+M^{\frac{1}{p}}T^{\alpha+\epsilon}). \end{aligned} \end{equation} \end{proof} \begin{corollary} Assume the no-regret learning algorithm is EXP3.P. Then by picking $\epsilon=\frac{1}{4}$ in~\thmref{thm:attack_version_2}, a designer can achieve expected number of target plays $\E{}{N^T(a^\dagger)}=T-O(MT^{\frac{3}{4}})$ while incurring $\E{}{C^T}=O\left(M^\frac{1}{p}(1+M)T^{\frac{3}{4}}\right)$ design cost. \end{corollary} \subsection{Discrete Design} \label{sec:attack_discrete_value} In previous sections, we assumed the games $\ell^t$ can take arbitrary real values in the relaxed loss range $\tilde \L=[L, U]$. However, there are many real-world situations where a continuous loss does not have a natural interpretation. For example, in the rock-paper-scissors game, the loss is interpreted as win, lose, or tie; thus $\ell^t$ should only take values in the original loss value set $\L=\{-1, 0, 1\}$. We now provide a discrete redesign to convert any game $\ell^t$ with values in $\tilde \L$ into a game $\hat\ell^t$ only involving loss values $L$ and $U$, which are both in $\L$. Specifically, the discrete design is illustrated in Algorithm~\ref{alg:discrete_design}. \floatname{algorithm}{Algorithm} \begin{algorithm} \caption{Discrete Design} \label{alg:discrete_design} \begin{algorithmic} \REQUIRE the target action profile $a^\dagger$; a loss vector $v\in \setfont{R}^M$ whose elements are in the interior, i.e., $\forall i, v_i \in [L+\rho, U-\rho]$ for some $\rho>0$; the regret rate $\alpha$; $\epsilon\in (0, 1-\alpha)$; the time step $t$. \ENSURE a time-varying game with loss $\hat \ell^t\in \L$ as below: \STATE \begin{equation}\label{eq:discrete_design} \forall i, a, \hat \ell_i^t(a)=\left\{ \begin{array}{ll} U & \mbox{ with probability $\frac{\ell_i^t(a)-L}{U-L}$ } \\ L & \mbox{ with probability $\frac{U-\ell_i^t(a)}{U-L}$}. \end{array} \right. \end{equation} \end{algorithmic} \end{algorithm} It is easy to verify $\E{}{\hat\ell^t}=\ell^t$. In experiments we show such discrete games also achieve the design goals.
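To make the time-varying constructions concrete, the sketch below (hypothetical names, under the dictionary representation used earlier; \texttt{ell\_src} is the source game $\underline\ell$ obtained by applying the interior design to $v$, and \texttt{ell\_dst} is the constant destination game) combines the interpolation~\eqref{eq:boundary_design} with the randomized rounding~\eqref{eq:discrete_design}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def boundary_loss(t, ell_src, ell_dst, alpha, eps):
    # Interpolation of Eq. (boundary_design) with w_t = t^(alpha+eps-1)
    w_t = t ** (alpha + eps - 1.0)
    return {a: w_t * ell_src[a] + (1 - w_t) * ell_dst[a]
            for a in ell_src}

def discretize(ell_t, L, U):
    # Randomized rounding of Eq. (discrete_design): each entry becomes
    # U w.p. (loss - L)/(U - L) and L otherwise, so E[hat_ell] = ell.
    hat = {}
    for a, loss in ell_t.items():
        p_upper = (loss - L) / (U - L)
        hat[a] = np.where(rng.random(loss.shape) < p_upper, U, L)
    return hat
\end{verbatim}
In round $t$ the designer would announce \texttt{discretize(boundary\_loss(t, ...), L, U)}; averaging the rounded games over many draws recovers $\ell^t$, consistent with $\E{}{\hat\ell^t}=\ell^t$ noted above.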
\subsection{Thresholding the Redesigned Game} For all designs in previous sections, the designer could impose an additional min or max operator to threshold the redesigned loss against the original game loss; e.g., for the interior design, the redesigned game loss after thresholding becomes \begin{equation}\label{eq:interior_design_minmax} \forall i, a, \ell_i(a)=\left\{ \begin{array}{ll} \min\{\ell_i^o(a^\dagger)- (1-\frac{d(a)}{M})\rho, \ell_i^o(a)\} & \mbox{ if } a_i= a_i^\dagger, \\ \max\{\ell_i^o(a^\dagger)+\frac{d(a)}{M}\rho, \ell_i^o(a)\}& \mbox{ if } a_i\neq a_i^\dagger. \end{array} \right. \end{equation} We point out a few differences between~\eqref{eq:interior_design_minmax} and~\eqref{eq:interior_design}. First,~\eqref{eq:interior_design_minmax} guarantees a dominance gap of at least (instead of exactly) $(1-\frac{1}{M})\rho$. As a result, the thresholded game can induce a larger $N^T(a^\dagger)$ because the target action profile $a^\dagger$ is redesigned to stand out even more. Second, one can easily show that~\eqref{eq:interior_design_minmax} incurs less design cost $C^T$ than~\eqref{eq:interior_design} due to thresholding. \thmref{thm:attack_version_01} still holds. However, thresholding no longer preserves the zero-sum property~\ref{lem:post-attack-cost-prop4} in~\lemref{lem:post-attack-cost} and~\lemref{lem:post-attack-cost-version2}. When this property is not required, the designer may prefer~\eqref{eq:interior_design_minmax} to slightly improve the redesign performance. The thresholding also applies to the boundary and discrete designs. \begin{table*}[t] \begin{minipage}{0.47\linewidth} \begin{center} \begin{tabular}{cc|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Other players}\\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{at least one volunteer} & \multicolumn{1}{c}{no volunteers} \\\cline{3-4} \multirow{2}*{Player $i$} & volunteer & $0$ & $ 0$ \\\cline{3-4} & not volunteer & $-1$ & $10$ \\\cline{3-4} \end{tabular}% \vspace{0.3cm} \caption{The loss function $\ell^o_i$ for individual player $i=1 \ldots M$ in the volunteer's dilemma.} \label{tab:VD} \end{center} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \begin{center} \begin{tabular}{cc|c|c|c|} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{Number of other volunteers}\\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2}\\\cline{3-5} \multirow{2}*{Player $i$} & volunteer & $-2/3$ & $-1/3$ & $0$ \\\cline{3-5} & not volunteer &\hspace {3mm} $10$ \hspace {3mm} & \hspace {3mm}$1/3$ \hspace {3mm} & $2/3$ \\\cline{3-5} \end{tabular} \vspace{0.3cm} \caption{The redesigned loss function $\ell_i$ for individual player $i$ in VD.} \label{tab:VD_redesigned} \end{center} \end{minipage} \vspace{-0.3cm} \end{table*} \section{Experiments} We perform empirical evaluations of the game redesign algorithms on four games --- the volunteer's dilemma (VD), tragedy of the commons (TC), prisoner's dilemma (PD) and rock-paper-scissors (RPS). Throughout the experiments, we use EXP3.P~\citep{bubeck2012regret} as the no-regret learner. The concrete form of the regret bound for EXP3.P is given in Appendix~\ref{sec:exact_form}. Based on that, we derive the exact form of our theoretical upper bounds for~\thmref{thm:attack_version_01} and~\thmref{thm:attack_version_2} (see~\eqref{eq:interior_exact_bound_N}-\eqref{eq:boundary_exact_bound_C}), and we show the theoretical values for comparison in our experiments.
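Three of the four games below are redesigned with the thresholded variant~\eqref{eq:interior_design_minmax}. As a hypothetical sketch in the style of the earlier one (again, the names are ours), it differs from the plain interior design only by the clipping against $\ell^o$:
\begin{verbatim}
def thresholded_interior_design(ell_o, a_dagger, rho):
    # Interior design of Eq. (interior_design), clipped so that an
    # entry is left at its original value whenever that value already
    # satisfies the dominance gap.
    M = len(a_dagger)
    target_loss = ell_o[a_dagger]
    ell = {}
    for a, orig in ell_o.items():
        d = sum(a_j == t_j for a_j, t_j in zip(a, a_dagger))
        loss = []
        for i in range(M):
            if a[i] == a_dagger[i]:
                loss.append(min(target_loss[i] - (1 - d / M) * rho,
                                orig[i]))
            else:
                loss.append(max(target_loss[i] + (d / M) * rho,
                                orig[i]))
        ell[a] = loss
    return ell
\end{verbatim}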
We let the designer cost function be $C(\ell^o, \ell^t, a^t)=\|\ell^o(a^t)-\ell^t(a^t)\|_p$ with $p=1$. For VD, TC and PD, the original game is not zero-sum, and we apply the thresholding~\eqref{eq:interior_design_minmax} to slightly improve the redesign performance. For the RPS game, we apply the design without thresholding to preserve the zero-sum property. All plots show averages over 5 trials. \subsection{Volunteer's Dilemma (VD)}\label{sec:VD_game} In the volunteer's dilemma (Table~\ref{tab:VD}) there are $M$ players. Each player has two actions: volunteer or not. When there exists at least one volunteer, those players who do not volunteer gain 1 (i.e. a $-1$ loss). The volunteers, however, receive zero payoff. On the other hand, if no players volunteer, then every player suffers a loss of 10. As mentioned earlier, all pure Nash equilibria involve free-riders. The designer aims at encouraging all players to volunteer, i.e., the target action profile is $a_i^\dagger=$ ``volunteer'' for every player $i$. Note that $\forall i, \ell_i^o(a^\dagger)=0$, which lies in the interior of $\tilde \L=[-1, 10]$. Therefore, the designer could apply the interior design (Algorithm~\ref{alg:interior_design}). The margin parameter is $\rho=1$. We let $M=3$. In Table~\ref{tab:VD_redesigned}, we show the redesigned game $\ell$. Note that when all three players volunteer (i.e., at $a^\dagger$), the loss is unchanged compared to $\ell^o$. Furthermore, regardless of the other players, the action ``volunteer'' strictly dominates the action ``not volunteer'' by at least $(1-\frac{1}{M})\rho=\frac{2}{3}$ for every player. When there are no other volunteers, the dominance gap is $\frac{32}{3}\ge(1-\frac{1}{M})\rho$, which is due to the thresholding in~\eqref{eq:interior_design_minmax}. We simulated play for $T=10^4,10^5,10^6,10^7$ on this redesigned game $\ell$. In Figure~\ref{fig:VD_TIA}\subref{fig:VD_TIA_N}, we show $T-N^T(a^\dagger)$ against $T$. The plot is in log scale. The standard deviation estimated from the 5 trials is less than $3\%$ of the corresponding value and is hard to see in the log-scale plot, so we do not show it. We also plot our theoretical upper bound in dashed lines for comparison. Note that the theoretical value indeed upper bounds our empirical results. In Figure~\ref{fig:VD_TIA}\subref{fig:VD_TIA_cost}, we show $C^T$ against $T$. Again, the theoretical upper bound holds. As our theory predicts, for the four $T$'s the designer increasingly enforces $a^\dagger$ in 60\%, 82\%, 94\%, and 98\% of the rounds, respectively; the per-round design cost $C^T/T$ decreases as 0.98, 0.44, 0.15, and 0.05, respectively. \begin{figure}[t] \begin{subfigure}{0.48\linewidth} \centering \includegraphics[width=1\textwidth]{Fig/VD_TIA_N.png} \caption{Number of rounds with $a^t\neq a^\dagger$ grows sublinearly} \label{fig:VD_TIA_N} \end{subfigure} \hfill \begin{subfigure}{0.48\linewidth} \centering \includegraphics[width=1\textwidth]{Fig/VD_TIA_cost.png} \caption{The cumulative design cost grows sublinearly too} \label{fig:VD_TIA_cost} \end{subfigure} \caption{Interior design (Algorithm~\ref{alg:interior_design}) on VD with $M=3$. The dashed lines show the slope of sublinear rate $\sqrt{T}$ in log-log scale.} \label{fig:VD_TIA} \end{figure} \begin{figure*}[t!] \centering \begin{subfigure}{0.92\textwidth} \begin{subfigure}{0.325\textwidth} \centering \includegraphics[width=1\textwidth]{Fig/TC_TIA_N.png} \caption{Number of rounds with $a^t\neq a^\dagger$.
The dashed line is the theoretical upper bound.} \label{fig:TC_TIA_N} \end{subfigure} \hfill \begin{subfigure}{0.325\textwidth} \centering \includegraphics[width=1\textwidth]{Fig/TC_TIA_cost.png} \caption{The cumulative design cost. The dashed line is the theoretical upper bound.} \label{fig:TC_TIA_cost} \end{subfigure} \hfill \begin{subfigure}{0.305\textwidth} \centering \includegraphics[width=1\textwidth]{Fig/TC_loss_change.png} \caption{Loss change $\ell_1(a)-\ell_1^o(a)$. When $a=a^\dagger=(10,10)$, the loss is unchanged.} \label{fig:TC_loss_change} \end{subfigure} \end{subfigure} \caption{Interior design (Algorithm~\ref{alg:interior_design}) on Tragedy of the Commons.} \label{fig:TC_TIA} \vspace{-0.5cm} \end{figure*} \begin{table*}[htb] \begin{minipage}{0.33\linewidth} \begin{center} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{mum} & \multicolumn{1}{c}{fink} \\\cline{3-4} & mum & $2,2$ & $5,1$ \\ \cline{3-4} & fink & $1,5$ & $4,4$ \\\cline{3-4} \end{tabular} \vspace{0.4cm} \caption{The original loss $\ell^o$ of PD.} \label{tab:PD_original} \end{center} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \begin{center} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{mum} & \multicolumn{1}{c}{fink} \\\cline{3-4} & mum & $2,2$ & $1.5,2.5$ \\ \cline{3-4} & fink & $2.5,1.5$ & $4,4$ \\\cline{3-4} \end{tabular} \vspace{0.4cm} \caption{The redesigned loss $\ell$ of PD.} \label{tab:PD_attacked} \end{center} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \begin{center} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $0,0$ & $1,-1$ & $-1,1$ \\ \cline{3-5} & $P$ & $-1,1$ & $0,0$ & $1,-1$ \\\cline{3-5} & $S$ & $1,-1$ & $-1,1$ & $0,0$ \\\cline{3-5}\ \end{tabular} \caption{The original loss $\ell^o$ of RPS.} \label{tab:RPS} \end{center} \end{minipage} \vspace{-0.4cm} \end{table*} \subsection{Tragedy of the Commons (TC)} Our second example is the tragedy of the commons (TC). There are $M=2$ farmers who share the same pasture to graze sheep. Each farmer $i$ is allowed to graze at most 15 sheep, i.e., the action space is $\mathcal{A}_i=\{0,1,...,15\}$. The more sheep are grazed, the less well fed they are, and thus the lower the price each fetches on the market. We assume the price of each sheep is $p(a)=\sqrt{30-\sum_{i=1}^2 a_i}$, where $a_i$ is the number of sheep that farmer $i$ grazes. The loss function of farmer $i$ is then $\ell_i^o(a)=-p(a)a_i$, i.e., the negated total price of the sheep that farmer $i$ owns. The Nash equilibrium strategy of this game is that every farmer grazes $a_i^*=\frac{60}{2M+1}=12$ sheep, and the resulting price of a sheep is $p(a^*)=\sqrt{6}$. It is well known that this Nash equilibrium is suboptimal. Instead, the designer hopes to maximize the social welfare $$ \sqrt{30-\sum_{i=1}^2 a_i}\left( \sum_{i=1}^2 a_i\right),$$ which is achieved when $\sum_{i=1}^2 a_i=20$. Moreover, to promote equity the designer desires that the two farmers each graze the same number $20/M=10$ of sheep. Thus the designer has the target action profile $a_i^\dagger=10, \forall i$.
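These closed-form claims are easy to verify numerically; the following is a hypothetical sanity check (not the code used to produce the figures):
\begin{verbatim}
import numpy as np

def loss_TC(a):                    # a = (a_1, a_2)
    p = np.sqrt(30 - sum(a))       # market price per sheep
    return [-p * a_i for a_i in a]

# Farmer 1's best response to farmer 2 grazing 12 sheep is 12,
# confirming the Nash equilibrium (12, 12):
print(min(range(16), key=lambda k: loss_TC((k, 12))[0]))  # -> 12

# Social welfare p(a)*(a_1+a_2) is maximized at a_1 + a_2 = 20:
welfare = {s: np.sqrt(30 - s) * s for s in range(31)}
print(max(welfare, key=welfare.get))                      # -> 20
\end{verbatim}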
Note that the original loss function takes values in $[-15\sqrt{15}, 0]$, while the loss of the target profile is $\ell_i^o(a^\dagger)=-10\sqrt{10}$; thus this is the interior design scenario, and the designer could apply Algorithm~\ref{alg:interior_design} to produce a new game $\ell$. Due to the large number of entries, we only visualize the difference $\ell_1(a)-\ell^o_1(a)$ for player 1 in Figure~\ref{fig:TC_loss_change}; the plot for the other player is identical by symmetry. We observe three patterns of loss change. For most $a$'s, e.g., $a_1\le 6$ or $a_2\ge 11$, the original loss $\ell_1^o(a)$ is already sufficiently large and satisfies the dominance gap in~\lemref{lem:post-attack-cost}, thus the designer leaves the loss unchanged. For those $a$'s with $a_1=a_1^\dagger=10$, the designer reduces the loss to make the target action more profitable. For those $a$'s close to the bottom left ($a_1> a_1^\dagger$ and $a_2\le 10$), the designer increases the loss to enforce the dominance gap $(1-\frac{1}{M})\rho$. We simulated play for $T=10^4,10^5, 10^6$ and $10^7$ and show the results in Figure~\ref{fig:TC_TIA}. Again the game redesign is successful: the figures confirm $T- O(\sqrt{T})$ target action play and $O(\sqrt{T})$ cumulative design cost. Numerically, for the four $T$'s the designer enforces $a^\dagger$ in 41\%, 77\%, 92\%, and 98\% of rounds, and the per-round design costs are 9.4, 4.2, 1.4, and 0.5, respectively. \subsection{Prisoner's Dilemma (PD)} Our third example is the prisoner's dilemma (PD). There are two prisoners; each can stay mum or fink. The original loss function $\ell^o$ is given in Table~\ref{tab:PD_original}. The Nash equilibrium strategy of this game is that both prisoners fink. Suppose a mafia designer hopes to force $a^\dagger=$(mum, mum) by sabotaging the losses. Note that $\forall i, \ell_i^o(a^\dagger)=2$, which lies in the interior of the loss range $\tilde \L=[1, 5]$. Therefore, this is again an interior design scenario, and the designer can apply Algorithm~\ref{alg:interior_design}. In Table~\ref{tab:PD_attacked} we show the redesigned game $\ell$. Note that when both prisoners stay mum or both fink, the designer does not change the loss. On the other hand, when one prisoner stays mum and the other finks, the designer reduces the loss for the mum prisoner and increases the loss for the betrayer. We simulated play for $T=10^4, 10^5, 10^6$, and $10^7$, respectively. In Figure~\ref{fig:PD_TIA} we plot the number of non-target action selections $T-N^T(a^\dagger)$ and the cumulative design cost $C^T$. Both grow sublinearly. The designer enforces $a^\dagger$ in 85\%, 94\%, 98\%, and 99\% of rounds, and the per-round design costs are 0.71, 0.28, 0.09, and 0.03, respectively. \begin{figure}[t] \centering \begin{subfigure}{0.48\linewidth} \centering \includegraphics[width=1\textwidth]{Fig/PD_TIA_N.png} \caption{Number of rounds with $a^t\neq a^\dagger$. The dashed line is the theoretical upper bound.} \label{fig:PD_TIA_N} \end{subfigure} \hfill \begin{subfigure}{0.48\linewidth} \centering \includegraphics[width=1\textwidth]{Fig/PD_TIA_cost.png} \caption{The cumulative design cost. The dashed line is the theoretical upper bound.} \label{fig:PD_TIA_cost} \end{subfigure} \caption{Interior design on Prisoner's Dilemma.} \label{fig:PD_TIA} \vspace{-0.4cm} \end{figure} \begin{table*}[t!]
\hspace{-1cm} \centering \begin{minipage}[c]{0.33\textwidth} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $-0.5,0.5$ & $0,0$ & $-0.5,0.5$ \\ \cline{3-5} & $P$ & $0,0$ & $0.5,-0.5$ & $0,0$ \\\cline{3-5} & $S$ & $0,0$ & $0.5,-0.5$ & $0,0$ \\\cline{3-5}\ \end{tabular} \caption*{(a) $\ell^t (t=1)$.} \label{tab:RPS_loss_t1} \end{minipage} \hspace{-0.9cm} \begin{minipage}[c]{0.33\textwidth} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $0.62,-0.62$ & $0.75,-0.75$ & $0.62,-0.62$ \\ \cline{3-5} & $P$ & $0.75,-0.75$ & $0.87,-0.87$ & $0.75,-0.75$ \\\cline{3-5} & $S$ & $0.75,-0.75$ & $0.87,-0.87$ & $0.75,-0.75$ \\\cline{3-5}\ \end{tabular} \caption*{\hspace{0.7cm} (b) $\ell^t (t=10^3)$.} \label{tab:RPS_loss_t2} \end{minipage} \begin{minipage}[c]{0.33\textwidth} \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $0.94,-0.94$ & $0.96,-0.96$ & $0.94,-0.94$ \\ \cline{3-5} & $P$ & $0.96,-0.96$ & $0.98,-0.98$ & $0.96,-0.96$ \\\cline{3-5} & $S$ & $0.96,-0.96$ & $0.98,-0.98$ & $0.96,-0.96$ \\\cline{3-5}\ \end{tabular} \caption*{\hspace{0.9cm}(c) $\ell^t (t=10^7)$.} \label{tab:RPS_loss_tinf} \end{minipage} \caption{The redesigned RPS games $\ell^t$ for selected $t$ (with $\epsilon=0.3$). Note the target entry $a^\dagger=(R,P)$ converges toward $(1,-1)$.} \label{tab:RPS_loss_t} \vspace{-0.8cm} \end{table*} \begin{table*}[t!] \hspace{-1cm} \begin{minipage}[c]{0.33\textwidth} \centering \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $1,1$ & $1,1$ & $-1,1$ \\ \cline{3-5} & $P$ & $-1,-1$ & $1,-1$ & $-1,-1$ \\\cline{3-5} & $S$ & $-1,1$ & $-1,-1$ & $-1,-1$ \\\cline{3-5}\ \end{tabular} \caption*{\hspace{0.5cm}(a) $\hat\ell^t (t=1)$.} \label{tab:RPS_loss_t1_discrete} \end{minipage} \begin{minipage}[c]{0.33\textwidth} \centering \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $1,-1$ & $1,1$ & $-1,-1$ \\ \cline{3-5} & $P$ & $1,-1$ & $1,-1$ & $1,-1$ \\\cline{3-5} & $S$ & $1,-1$ & $1,-1$ & $1,1$ \\\cline{3-5}\ \end{tabular} \caption*{\hspace{1cm}(b) $\hat\ell^t (t=10^3)$.} \label{tab:RPS_loss_t2_discrete} \end{minipage} \begin{minipage}[c]{0.33\textwidth} \centering \begin{tabular}{cc|c|c|c|} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{} \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c}{$P$} & \multicolumn{1}{c}{$S$} \\\cline{3-5} & $R$ & $1,-1$ & $1,-1$ & $1,-1$ \\ \cline{3-5} & $P$ & $1,-1$ & $1,-1$ & $1,-1$ \\\cline{3-5} & $S$ & $1,-1$ & $1,-1$ & $1,-1$ \\\cline{3-5}\ \end{tabular} \caption*{\hspace{1cm}(c) $\hat\ell^t (t=10^7)$.} \label{tab:RPS_loss_tinf_discrete} \end{minipage} \caption{Instantiation of discrete design on the same games as in Table~\ref{tab:RPS_loss_t}. The redesigned loss lies in $\L=\{-1,0,1\}$.} \label{tab:RPS_loss_t_discrete} \end{table*} \begin{figure*}[t!] 
\centering \begin{minipage}{0.48\textwidth} \begin{subfigure}{1\textwidth} \begin{subfigure}{0.48\textwidth} \includegraphics[width=1\textwidth]{Fig/RPS_TVA_N.png} \caption{Number of rounds with $a^t\neq a^\dagger$. The dashed lines are the theoretical upper bounds.} \label{fig:RPS_TVA_N} \end{subfigure} \hfill \begin{subfigure}{0.48\textwidth} \includegraphics[width=1\textwidth]{Fig/RPS_TVA_cost.png} \caption{The cumulative design cost. The dashed lines are the theoretical upper bounds.} \label{fig:RPS_TVA_cost} \end{subfigure} \end{subfigure} \caption{Boundary design on RPS.} \label{fig:RPS_TVA} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \begin{subfigure}{1\textwidth} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{Fig/RPS_TVA_prob_N.png} \caption{Number of rounds $a^t\neq a^\dagger$} \label{fig:RPS_TVA_prob_N} \end{subfigure} \hfill \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{Fig/RPS_TVA_prob_cost.png} \caption{Cumulative design cost $C^T$} \label{fig:RPS_TVA_prob_cost} \end{subfigure} \end{subfigure} \caption{Discrete redesign for $a^\dagger=(R,P)$ with natural loss values in $\L$. The dashed lines are the corresponding boundary design with unnatural loss values.} \label{fig:RPS_TVA_prob} \end{minipage} \end{figure*}

\subsection{Rock-Paper-Scissors (RPS)}

While redesigning RPS does not have a natural motivation, it serves as a clear example of how boundary design and discrete design can be carried out on more socially relevant games. The original game $\ell^o$ is in Table~\ref{tab:RPS}.

\textbf{Boundary Design.} Suppose the designer's target action profile is $a^\dagger=(R, P)$, namely making the row player play Rock while the column player plays Paper. Because $\ell^o(a^\dagger)=(1,-1)$ hits the boundary of the loss range $\tilde \L=[-1,1]$, the designer can use the Boundary Design Algorithm~\ref{alg:boundary_design}. For simplicity we choose $v$ with $v_i=\frac{U+L}{2}, \forall i$. Because in RPS $U=-L=1$, this choice of $v$ also preserves the zero-sum property. Table~\ref{tab:RPS_loss_t} shows the redesigned games at $t=1, 10^3$ and $10^7$ under $\epsilon=0.3$. Note that the designer maintains the zero-sum property of the games. Also note that the redesigned loss function always guarantees strict dominance of $a^\dagger$ for all $t$, but the dominance gap decreases as $t$ grows. Finally, the loss of the target action $a^\dagger=(R, P)$ converges to the original loss $\ell^o(a^\dagger)=(1, -1)$ asymptotically, thus the designer incurs a diminishing design cost. We ran Algorithm~\ref{alg:boundary_design} under four values of $\epsilon=0.1,0.2,0.3,0.4$, resulting in four game sequences $\ell^t$. For each $\epsilon$ we simulated game play for $T=10^4, 10^5, 10^6$ and $10^7$. In Figure~\ref{fig:RPS_TVA}\subref{fig:RPS_TVA_N}, we show $T-N^T(a^\dagger)$ under different $\epsilon$ (solid lines). We also show the theoretical upper bounds of~\thmref{thm:attack_version_2} (dashed lines) for comparison. In Figure~\ref{fig:RPS_TVA}\subref{fig:RPS_TVA_cost}, we show the cumulative design cost $C^T$ under different $\epsilon$. The theoretical values indeed upper bound our empirical results. Furthermore, all non-target action counts and all cumulative design costs grow only sublinearly. As an example, under $\epsilon=0.3$ for the four $T$'s the designer forces $a^\dagger$ in 34\%, 60\%, 76\%, and 88\% of rounds, respectively. The per-round design costs are 1.7, 1.2, 0.73 and 0.40, respectively.
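To make the evaluation pipeline concrete, the following minimal sketch (ours) simulates two no-regret players on a fixed redesigned game and records the two quantities plotted in the figures, $T-N^T(a^\dagger)$ and $C^T$. It is only an illustration: we use Hedge as the no-regret learner, and the redesigned Prisoner's Dilemma matrix below follows the qualitative pattern described earlier, but its off-diagonal entries are our guesses, not the entries of Table~\ref{tab:PD_attacked}.

\begin{footnotesize}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Losses (row = player 1's action, col = player 2's); 0 = mum, 1 = fink.
# Original PD, consistent with the loss range [1, 5] and l(mum, mum) = 2;
# the off-diagonal entries are our guesses:
L1o = np.array([[2.0, 5.0], [1.0, 4.0]])   # player 1's loss
L2o = L1o.T                                # symmetric game
# Guessed interior redesign: diagonal unchanged, mum rewarded, fink
# punished, so that "mum" strictly dominates for both players:
L1 = np.array([[2.0, 1.0], [5.0, 4.0]])
L2 = L1.T

T, eta = 10**5, 0.05
w1, w2 = np.full(2, 0.5), np.full(2, 0.5)  # Hedge weights (normalized)
n_target, cost = 0, 0.0
for t in range(T):
    a1 = rng.choice(2, p=w1)
    a2 = rng.choice(2, p=w2)
    n_target += int(a1 == 0 and a2 == 0)
    cost += abs(L1[a1, a2] - L1o[a1, a2]) + abs(L2[a1, a2] - L2o[a1, a2])
    # full-information Hedge update against the opponent's realized action
    w1 *= np.exp(-eta * L1[:, a2])
    w2 *= np.exp(-eta * L2[a1, :])
    w1, w2 = w1 / w1.sum(), w2 / w2.sum()  # renormalize to avoid underflow

print(T - n_target, cost)  # non-target rounds and cumulative design cost
\end{verbatim}
\end{footnotesize}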
The results are similar for the other $\epsilon$'s. Choosing the best $\epsilon$ and $v$ is left as future work. We note that empirically the cumulative design cost achieves its minimum at some $\epsilon\in(0.3, 0.4)$, while~\thmref{thm:attack_version_2} suggests that the minimum cost is at $\epsilon^*=0.25$ instead. We investigate this inconsistency in Appendix~\ref{sec:optimal_eps}.

\textbf{Discrete Design.} In our second experiment on RPS, we compare the performance of discrete design (Algorithm~\ref{alg:discrete_design}) with the deterministic boundary design (Algorithm~\ref{alg:boundary_design}). Again, the target action profile is $a^\dagger=(R,P)$. Recall that the purpose of discrete design is to use only natural game loss values, in the RPS case $\L=\{-1,0,1\}$ instead of unnatural real values in the relaxed $\tilde\L=[-1,1]$, to make the redesign ``less detectable'' by players. We hope to show that discrete design does not lose much potency even with this value restriction. Figure~\ref{fig:RPS_TVA_prob} shows this is indeed the case: the performance of discrete design nearly matches that of boundary design. For example, when $\epsilon=0.3$, for the four $T$'s discrete design enforces $a^\dagger$ 35\%, 59\%, 75\% and 88\% of the time. The per-round design costs are 1.7, 1.2, 0.79, and 0.41, respectively. Overall, discrete design does not lose much performance and may be preferred by designers. Table~\ref{tab:RPS_loss_t_discrete} shows the redesigned ``random'' games at $t=1, 10^3$ and $10^7$ under $\epsilon=0.3$. Note that the loss lies in the natural range $\L=\{-1,0,1\}$. Also note that the loss function converges to a constant function that takes the target loss value $\ell^o(a^\dagger)$. Finally, we point out that, in general, discrete design does not preserve the zero-sum property.

\section{Conclusion and Future Work}

In this paper, we studied the problem of game redesign where the players apply no-regret algorithms to play the game. We showed that a designer can force all players to play a target action profile in $T-o(T)$ rounds while incurring only $o(T)$ cumulative design cost. We developed redesign algorithms for both the interior and the boundary target loss scenarios. Experiments on four game examples demonstrate the performance of our redesign algorithms. Future work could study defense mechanisms to mitigate the effect of game redesign when the designer is malicious, or settings where the designer is one of the players, has more knowledge than the other players, and is willing to lose intentionally. \newpage
\section{Introduction}

Many IoT applications require knowledge of the various properties of IoT devices that provide some data about the physical world and can act upon the environment. The properties may for instance include the information on: \begin{itemize} \item type and unit of data (e.g., temperature in °C), \item resolution and frequency of data (e.g., $512\times512$ pixels every hour), \item possible actions performed by the device (e.g., switch on), \item raised alarms (e.g., overheating), \item geographic location of the device (e.g., $(+28.61, -80.61)$ WGS84/GPS coordinates), \item logical location of the device (e.g., Room 235 on Floor 14). \end{itemize} Several initiatives aimed at expressing and structuring this kind of IoT and M2M metadata: Sensor Markup Language (SenML)~\cite{senml}, IPSO Alliance Framework~\cite{IPSO}, and oneM2M Base ontology~\cite{AlayaMMD15}. The World Wide Web Consortium (W3C) schemes for the semantic Web, such as RDF,\footnote{\url{http://www.w3.org/RDF}} OWL,\footnote{\url{http://www.w3.org/TR/owl-ref}} and SPARQL,\footnote{\url{http://www.w3.org/TR/sparql11-query/}} also support the understanding and discovery of IoT data. For expressing specific IoT semantics, W3C proposed a Semantic Sensor Network (SSN) ontology\footnote{\url{https://www.w3.org/2005/Incubator/ssn/ssnx/ssn}} that allows the description of sensors and their characteristics, addressing the issue of interoperability of metadata annotations. The Web of Things initiative of W3C\footnote{\url{https://www.w3.org/WoT}} aims at unifying IoT with digital twins for sensors, actuators, and information services exposed to applications as local objects with \emph{properties}, \emph{actions}, and \emph{events}. W3C Thing Description (TD)\footnote{\url{https://www.w3.org/TR/wot-thing-description}} expressed in JSON-LD\footnote{\url{https://www.w3.org/TR/json-ld}} covers the behavior, interaction affordances, data schema, security configuration, and protocol bindings. Thing Description allows for attaching rich semantic metadata to IoT devices; however, the format is oriented towards processing by non-constrained applications, running for example in the Cloud or at the Edge, that become the basis for sophisticated discovery and search services offered on Web servers to IoT users and applications. However, we can notice that discovery and search based on semantic metadata also happen in constrained IoT environments---an IoT device needs to discover other devices and choose the right one for further communication or collaboration. In this case, the semantic metadata of IoT devices need to be encoded in a highly compact way to reduce the overhead in usually bandwidth-limited networks.

In this paper, we propose a scheme for representing semantic metadata of IoT devices in compact identifiers or names to enable simple discovery and search with standard DNS servers. The idea of the scheme is inspired by the Static Context Header Compression (SCHC)\footnote{\url{https://tools.ietf.org/html/rfc8724}} approach to IP header compression. In SCHC, two devices that exchange IP packets compress headers based on pre-established contexts. Instead of a full header, a device inserts the information about the context to use and some short information required to reconstruct the header. In this way, a 40 byte IPv6 header can be compressed down to just a few bytes.
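As a toy illustration of this context-based idea (our own sketch, not the actual SCHC rule format), the shared context carries the static header fields, so a compressed packet needs only a context identifier plus the few dynamic bits:

\begin{footnotesize}
\begin{verbatim}
# Toy sketch of SCHC-style compression: both ends share static "contexts",
# so a packet carries only a context id plus the dynamic fields.
CONTEXTS = {
    1: {"version": 6, "src": "2001:db8::1", "dst": "2001:db8::2", "next": 17},
}

def compress(header):
    for cid, ctx in CONTEXTS.items():
        if all(header.get(k) == v for k, v in ctx.items()):
            return cid, header["hop_limit"]   # two small values, not 40 bytes
    return None, header                       # no matching context

def decompress(cid, hop_limit):
    return {**CONTEXTS[cid], "hop_limit": hop_limit}

print(compress({"version": 6, "src": "2001:db8::1", "dst": "2001:db8::2",
                "next": 17, "hop_limit": 64}))   # -> (1, 64)
\end{verbatim}
\end{footnotesize}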
Our scheme defines a binary identifier as a sequence of bits composed of a Context to use and several fields corresponding to semantic properties specific to the Context. The bit string is then encoded as \ttt{base32} characters and registered in DNS. Thus, the DNS name encodes in a compact form the semantic metadata of an IoT device. We define several Contexts of identifiers expressing different semantic metadata to fit the most popular device characteristics (others can also be defined): \begin{enumerate} \item hierarchical semantic properties, \item logical location of the device, \item geographic location of the device. \end{enumerate} The first one corresponds to the structured representation of the attributes of Thing Description, and the other two cover the geographical information about an IoT device. We instantiate the scheme for encoding geographic location in the case of LoRa networks and show how to construct a 64 bit geo-identifier of LoRa devices.

Furthermore, we use the compact semantic DNS names to offer support for search and discovery. In constrained environments, providing full-fledged database search functionality may be difficult. Instead, we propose to take advantage of the DNS system as the basic functionality for querying and discovering the semantic properties related to IoT devices. Our encoding scheme of semantic metadata structures the DNS names similarly to IP prefixes: a longer prefix represents more specific information and shortening a prefix corresponds to more general information, thus allowing for some range or extended topic queries. For instance, if the name represents a geographical location, a longer name represents a smaller area and a shorter name corresponds to a larger zone that encompasses the smaller area designated by the longer name. Finally, we describe two prototypes supporting DNS queries on geo-identifiers.

Querying DNS based on semantic names can bring interesting features to many IoT applications: finding devices corresponding to a given property, placement on a map of all sensors belonging to a given application, sending commands to the devices in a chosen region, or gathering data from chosen devices based on their geographical location. The paper makes the following contributions: \begin{enumerate} \item we define a scheme based on Contexts for compact encoding of different types of metadata in DNS names, \item we take advantage of geohashes to instantiate the scheme for encoding geographic location, \item we propose a means for simple and minimal discovery of IoT devices and searching for their characteristics based on standard DNS functions, \item we explore the idea of using DNS to store and publish IoT data, \item we validate the proposed schemes with preliminary prototypes supporting DNS queries on geo-identifiers. \end{enumerate}

\section{Related Work} \label{sec:related}

We briefly review previous work related to expressing semantic properties of IoT devices and compact encoding of geographical location.

\subsection{Semantic Properties of IoT Devices}

As mentioned in the introduction, several initiatives considered the problem of expressing metadata of IoT devices and M2M communications: Sensor Markup Language (SenML)~\cite{senml}, IPSO Alliance Framework~\cite{IPSO}, and oneM2M Base ontology~\cite{AlayaMMD15}. Kovacs et al. proposed a system architecture for achieving worldwide semantic interoperability with oneM2M~\cite{7785888}.
The Semantic Sensor Network (SSN) ontology allows the description of sensors and their characteristics~\cite{HallerJCLTPLGAS19}. An important initiative of W3C aimed at creating the semantic Web of Things~\cite{PfistererRBKMTHKPHKLPR11} to enable unambiguous exchange of IoT data with shared meaning. Novo and Di Francesco~\cite{NovoF20} discussed solutions that extend the Web of Things architecture to achieve a higher level of semantic interoperability for the Internet of Things. Nevertheless, many proposed approaches do not address the constraints of IoT devices, which cannot handle the size and form of the semantic descriptions usually developed in the traditional W3C setting. For instance, Novo and Di Francesco~\cite{NovoF20} reported performance results coming from a testbed composed of two computers connected to an 802.11 network. We proposed DINAS~\cite{AmorettiAFRD17}, a scheme based on Bloom filters for creating compact names from node descriptions and a service discovery protocol for short-range IoT networks running RPL. Other work emphasizes the importance of DNS for IoT~\cite{9133283}.

\subsection{WGS84 aka GPS}

WGS84 is a common format for encoding geographical coordinates used in GPS, composed of two numbers in degrees of the form \ttt{ddd.ddddddd}, where \ttt{d} stands for a degree digit. Degrees are expressed as a number between $-180$ and $+180$ for longitude, and a number between $-90$ and $+90$ for latitude (locations to the west and to the south are negative), e.g., $(+28.61, -80.61)$ corresponds to the location of the Cape Canaveral Space Center. Expressing a given geographical location is always done with a given precision, and when decoding a position, all methods return the center of the square representing all possible positions. For example, if we decode $(28^\circ\mathrm{N}, 80^\circ\mathrm{W})$, we know the position is in the square between $(28^\circ\mathrm{N}, 80^\circ\mathrm{W})$ and $(29^\circ\mathrm{N}, 81^\circ\mathrm{W})$, so we return $(28.5^\circ\mathrm{N}, 80.5^\circ\mathrm{W})$ to minimize the error. Table~\ref{tab:size} shows the longitudinal resolution at the equator and at a latitude of $45^\circ\mathrm{N}/\mathrm{S}$ with an increasing number of decimal figures and the corresponding number of bits to represent them. The idea is to relate the size of a region to the number of bits used for representing a given geographical coordinate, and thus relate the size of a region to the size of an identifier. We can observe that 8 decimal figures encoded on 26 bits are sufficient to represent the location with a precision of around 1\,m.

\begin{table}[t] \centering \caption{Longitudinal decimal degree precision} \begin{tabular}{ccr@{.}lr@{.}l} \toprule \bf \# of figures & \bf \# of bits & \multicolumn{2}{c}{\bf Equator} & \multicolumn{2}{c}{\bf $45^\circ \mathrm{N}/\mathrm{S}$} \\ \midrule 3 & 9 & 111 & 3200 km & 78 & 710 km \\ 4 & 12 & 11 & 1320 km & 7 & 871 km \\ 5 & 16 & 1 & 1132 km & 787 & 100 m \\ 6 & 19 & 111 & 3200 m & 78 & 710 m \\ 7 & 22 & 11 & 1320 m & 7 & 871 m \\ 8 & 26 & 1 & 1132 m & 787 & 100 mm \\ \bottomrule \end{tabular}\label{tab:size} \end{table}

\subsection{Geoprefixes, Geohashes, Plus Codes}

In previous work, we defined the notion of a {\em geoprefix} for IPv6 networks~\cite{brunisholzDataTweetUsercentricGeocentric2016}: the location of each device is encoded in its IPv6 multicast address and an application can send a packet to all devices corresponding to a given prefix representing a geographic area (a geocast).
Niemeyer~\cite{geohash} proposed {\em geohash}, an encoding of WGS84 coordinates based on Morton codes~\cite{morton1966computer}: it computes a 1-dimensional value from the 2-dimensional GPS coordinates by interleaving the binary representations of the coordinates; the value is then represented as ASCII characters. In this method, to encode a given location, we proceed by dichotomy. Starting with the full interval (${[-180; +180]}$ for longitude, ${[-90; +90]}$ for latitude), we split the interval in two (${[-90; 0]}$ and ${[0; +90]}$ for latitude); if the location is in the higher half, we add bit \ttt{1} to the encoding of the coordinate, otherwise we add bit \ttt{0}, and we repeat the operation with the new interval, building the encoding bit by bit until we reach the desired precision. When decoding, once the last bit is reached, the decoded location is at the center of the remaining interval (for latitude, if we have only one bit with value \ttt{1}, we would decode that the latitude is ${+45}$). With this method, each additional bit halves the size of the interval. Once both latitude and longitude are represented this way, their binary codes are intermingled to produce a unique value: odd bits represent latitude and even bits represent longitude, as presented in Table~\ref{fig:geohash_mix}. For example, the resulting encoding of latitude \ttt{1011 1100 1001} and longitude \ttt{0111 1100 0000 0} is \ttt{0110 1111 1111 0000 0100 0001 0}.

\begin{table}[t] \centering \caption{Combining latitude and longitude encoded in a unique binary value} \begin{tabular}{rl} Long. & \ttt{0.1.1.1.1.1.0.0.0.0.0.0.0} \\ Lat. & \ttt{ .1.0.1.1.1.1.0.0.1.0.0.1.} \\ \midrule Result & \ttt{0110111111110000010000010} \\ \end{tabular}\label{fig:geohash_mix} \end{table}

\begin{table}[t] \centering \caption{Geohash length, number of encoded bits, and precision} \begin{tabular}{p{0.7cm}p{0.7cm}p{0.7cm}rrr} \toprule length & lat bits & lng bits & lat error & lng error & error\\ \midrule 1 & 2& 3 & $\pm$ 23° & $\pm$ 23°& $\pm$ 2500 km\\ 2 & 5& 5 & $\pm$ 2.8° & $\pm$ 5.6°& $\pm$ 630 km \\ 3 & 7& 8 & $\pm$ 0.70° & $\pm$ 0.70°& $\pm$ 78 km \\ 4 & 10 & 10 & $\pm$ 0.087° & $\pm$ 0.18°& $\pm$ 20 km \\ 5 & 12 & 13 & $\pm$ 0.022° & $\pm$ 0.022°& $\pm$ 2.4 km\\ 6 & 15 & 15 & $\pm$ 0.0027° & $\pm$ 0.0055°& $\pm$ 610 m\\ 7 & 17 & 18 & $\pm$ 0.00068° & $\pm$ 0.00068°& $\pm$ 76 m \\ 8 & 20 & 20 & $\pm$ 0.000085° & $\pm$ 0.00017°& $\pm$ 19 m \\ 9 & 22 & 23 && & \\ 10 & 25 & 25 && & $\pm$ 59 cm \\ 11 & 27 & 28 && & \\ 12 & 30 & 30 && & $\pm$ 1.84 cm\\ \bottomrule \end{tabular}\label{tab:size2} \end{table}

{\em Geohash-36},\footnote{\url{http://en.wikipedia.org/wiki/Geohash-36}} originally developed for compression of world coordinate data, divides the area into 36 squares and generates a full character describing which sub-square contains the position. Google Maps uses {\em Plus Codes}~\cite{pluscodes,pluscodes2} made up of a sequence of digits chosen from a set of $20$. The digits in the code alternate between latitude and longitude. The first four digits describe a one degree latitude by one degree longitude area, aligned on degrees. A Plus Code is 10 characters long with a plus sign before the last two: \begin{enumerate} \item The first four characters are the area code describing a region of roughly 100 $\times$ 100 kilometers. \item The last six characters are the local code, describing the neighborhood and the building, an area of roughly 14 $\times$ 14 meters.
\end{enumerate} As an example, let us consider the Parliament Buildings in Nairobi, Kenya, located at the Plus Code \ttt{6GCRPR6C+24}: \ttt{6GCR} is the area from \ttt{2S 36E} to \ttt{1S 37E}, and \ttt{PR6C+24} is a 14-meter wide by 14-meter high area within \ttt{6GCR}. The \ttt{+} character is used after eight digits to break the code up into two parts and to distinguish codes from postal codes.

\section{Compact Encoding of IoT Metadata}\label{sec:encoding}

The main objective of this paper is to design a scheme for encoding semantic properties in DNS names so that IoT devices can discover relevant nodes using DNS name resolution. Figure~\ref{fig:data} gives an example of how it can be done in the context of LoRa devices. Note that DNSSEC guarantees the integrity of the information. \begin{figure}[t!] \begin{centering} \includegraphics[width=\linewidth]{data-v3-crop} \caption{General scheme for identifiers and names.} \label{fig:data} \end{centering} \end{figure} We propose to assign \emph{self-certifying names} to IoT devices: the name derives from a public key to enable secure establishment of the identity of a device without relying on an external PKI infrastructure. The self-certifying name is constructed as a hash of the public key $K_p$, similarly to Bitcoin addresses: \[A = ripemd160(sha256(K_p))\] then $A$ is encoded with \ttt{base32} (20 characters) giving the DNS name $N$. \ttt{base32} encoding represents a binary string with \ttt{0-9} digits and some lower-case letters (excluding characters that are hard to distinguish, like \ttt{i}, \ttt{l}, and \ttt{o}). We cannot use \ttt{base58check} as in Bitcoin because \ttt{base58check} uses both capital and lower-case letters (DNS names do not distinguish between capital and lower-case letters). A nice feature of this scheme is that a device can check whether the public key from the TLSA DNS record corresponds to the name and, if authentication is enforced (a signature with the private key $K_s$), be sure that it communicates with the right peer. Then, we can derive an 8 byte \ttt{EUI64} identifier from $A$ with SHA-3($A$). \ttt{EUI64} identifiers are required in some networks like LoRa---we can obtain the LoRa \ttt{DevEUI} identifier derived from $K_p$ and then use it to construct an IPv6 address. We will also show below that the \ttt{DevEUI} of a LoRa device can represent its geographical location. In addition to the self-certifying name, we define other names (DNS aliases) that represent device properties encoded in a compact way. Moreover, we want the encoding scheme to take advantage of some discovery functionalities of DNS by requiring that a name is structured like an IP prefix---a shorter prefix means a more general query.

\subsection{Encoding Hierarchical Semantic Properties} \label{sec:properties}

\begin{figure}[t!] \begin{centering} \includegraphics[width=\linewidth]{structure} \caption{Structure of a binary semantic identifier (fields of 5 bits or a multiple of 5 bits).} \label{fig:struct} \end{centering} \end{figure} Figure~\ref{fig:struct} presents the structure of an identifier encoding a semantic tree with several levels, as presented below. A Context defines how to interpret the encoded semantic properties. The first type of Contexts we define is a hierarchical representation of properties in the form of a semantic tree with leaves corresponding to properties (see the example in Figure~\ref{fig:tree}). The identifier encoding the semantic properties is the binary code generated when traversing the tree.
To be able to express the binary identifier of a level with one or more \ttt{base32} letters, each field is 5 bits or a multiple of 5 bits (e.g., the Level 3 field is composed of 10 bits). In this way, we avoid problems of dealing with padding if the size of the binary identifier is not a multiple of 5 bits. Note that the structure of a given identifier (the size of each field) is defined by a given Context. \begin{figure}[t!] \begin{centering} \includegraphics[width=\linewidth]{tree} \caption{Semantic attributes encoded as a quadtree.} \label{fig:tree} \end{centering} \end{figure} Figure~\ref{fig:tree} presents an example of a semantic quadtree of degree $n=4$ for simplicity. In fact, we use $n=32$ for 5 bits assigned to each level of the tree, which gives the possibility of representing $32^2=1024$ attributes at Level 2 or more, e.g., $32,768$ at Level 3 with the field of 10 bits. We may have trees with different numbers of nodes at each level depending on the need for representing more or fewer properties (e.g., $n=1024$, if we assign 10 bits to a level), the only limitation being that the level degree should be a power of $2$. Circles correspond to non-terminal nodes and squares to leaves. The position in the tree determines the code of a property; for instance, the property at leaf $11$ has the code \ttt{1101}, corresponding to the traversal of the \ttt{11} branch and then the \ttt{01} one. One identifier (so one DNS name) corresponds to one leaf in the tree. The encoding scheme structures the DNS names similarly to IP prefixes: a longer prefix represents more specific information and shortening a prefix corresponds to more general information, thus allowing for some range or property queries, e.g., the shorter code \ttt{11} represents all properties with identifiers \ttt{1100}, \ttt{1101}, \ttt{1110}, \ttt{1111}. Below, we present an example of creating a semantic name for a temperature sensor with metadata expressed in the following W3C Thing Description:\footnote{\url{https://www.w3.org/TR/wot-thing-description}} \begin{footnotesize} \begin{alltt} \textbf{"@context": [} \textbf{ "https://www.w3.org/2019/wot/td/v1",} \textbf{...],} \textbf{"@type": "saref:TemperatureSensor",} \textbf{...} \textbf{"properties": \textbraceleft} \textbf{ "temperature": \textbraceleft} \textbf{ "description": "Weather Station Temperature",} \textbf{ "type": "number",} \textbf{ "minimum": -32.5,} \textbf{ "maximum": 55.2,} \textbf{ "unit": "om:degree_Celsius",} \textbf{ "forms": [...]} \textbf{ \textbraceright,} \textbf{ ...} \textbf{\textbraceright,} \textbf{...} \end{alltt} \end{footnotesize} \noindent where some attributes refer to external vocabularies such as SAREF (Smart Appliances REFerence Ontology)\footnote{\url{https://ontology.tno.nl/saref/}} and OM (Ontology of Units of Measure)\footnote{\url{http://www.ontology-of-units-of-measure.org/page/om-2}}.
The encoded binary identifier is composed of the following fields (\ttt{base32} encoding in parentheses\footnote{In all our encoding examples, we use the \textbf{\texttt{\footnotesize base32}} variant defined for geohashes that encode geographical coordinates, instead of the version described by RFC 4648, \url{https://tools.ietf.org/html/rfc4648}.}): \begin{itemize} \item \ttt{00001} - Context-1 (\ttt{1}) \item \ttt{01100} - properties (\ttt{d}) \item \ttt{00001} - temperature (\ttt{1}) \item \ttt{00101} - unit (\ttt{5}) \item \ttt{00010} - degree\_Celsius (\ttt{2}) \end{itemize} We assume that Context-1 corresponds to the semantic tree generated according to the TD context with 5 bits per level, \ttt{properties} is the 12th attribute, \ttt{temperature} the 1st one, and the value of the unit \ttt{degree\_Celsius} is the 2nd possible value. The binary identifier \ttt{00001 01100 00001 00101 00010} results in the \ttt{1d152} DNS name.

\subsection{Encoding Logical Location} \label{sec:logical}

In many use cases, an IoT application may benefit from metadata about localization in a logical form. For instance, when defining group communication for the Constrained Application Protocol (CoAP), RFC~7390\footnote{\url{https://tools.ietf.org/html/rfc7390}} considered a building control application that wants to send packets to a group of nodes represented by the following name: \ttt{all.bu036.floor1.west.bldg6.example.com}. \noindent Logically, the group corresponds to \emph{"all nodes in office bu036, floor 1, west wing, building 6"}. Such hierarchical groups of fully qualified domain naming (and scoping) provide a logical description of places that may complement the more precise geographical information that we consider in the next section. We can observe that there is an inclusion relationship between the elements of the description: office \ttt{bu036} is on \ttt{floor 1}, in the \ttt{west} wing of \ttt{building 6}. A specific Context can represent the inclusion relationship in a semantic tree, so we can express a given logical location with a binary identifier and an encoded DNS name. Let us assume that we have up to 32 buildings, 32 floors, and 1024 rooms. To encode the location of Room 376 on Floor 19 in Building 7, we define the binary identifier composed of the following fields (\ttt{base32} encoding in parentheses): \begin{itemize} \item \ttt{00010} - Context-2 (\ttt{2}) \item \ttt{00111} - Building 7 (\ttt{7}) \item \ttt{10011} - Floor 19 (\ttt{m}) \item \ttt{01011} - Room 376 - first part (\ttt{c}) \item \ttt{11000} - Room 376 - second part (\ttt{s}) \end{itemize} We assume that Context-2 corresponds to the semantic tree with 5 bits at the 1st and 2nd levels, and 10 bits at the 3rd level as in Figure~\ref{fig:struct}. The binary identifier \ttt{00010 00111 10011 01011 11000} results in the \ttt{27mcs} DNS name.

\section{Encoding Geographic Location} \label{sec:geo-encoding}

Many IoT applications require precise information on the geographical location of IoT devices---when a sensor provides some measurement data, one of the most important pieces of additional information is the location of the data source, usually stored as metadata. We can use GPS for localization; however, adding GPS to an IoT device increases its cost and energy consumption, which may be prohibitive for many large-scale IoT applications. We take the example of LoRa networks to consider the problem of representing geographical locations in identifiers and DNS names.
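As a side note, the two name constructions of the previous section reduce to a few lines of code. The following sketch (ours) packs the 5-bit field values with the geohash \ttt{base32} alphabet and reproduces the \ttt{1d152} and \ttt{27mcs} names:

\begin{footnotesize}
\begin{verbatim}
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash base32 alphabet

def encode_fields(values):
    """Pack a sequence of 5-bit field values into a base32 DNS name."""
    assert all(0 <= v < 32 for v in values)
    return "".join(BASE32[v] for v in values)

# Context-1 / properties / temperature / unit / degree_Celsius
print(encode_fields([1, 12, 1, 5, 2]))     # -> 1d152
# Context-2 / Building 7 / Floor 19 / Room 376 (the 10-bit field 0101111000
# is split into two 5-bit halves: 01011 -> 11, 11000 -> 24)
print(encode_fields([2, 7, 19, 11, 24]))   # -> 27mcs
\end{verbatim}
\end{footnotesize}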
We propose a scheme to define the \emph{geo-identifier} of a LoRaWAN device in a way that encodes its geographical location. Figure~\ref{fig:struct2} presents its structure with two fields: 5 bits for the Context and 59 bits for encoding geographical coordinates of different forms. The Context gives the information about the type of encoding. \begin{figure}[t!] \begin{centering} \includegraphics[width=0.95\linewidth]{deveui} \caption{Structure of a geo-identifier on 64 bits.} \label{fig:struct2} \end{centering} \end{figure} LoRaWAN defines \ttt{DevEUI}, a unique 64 bit identifier in the IEEE EUI-64 format~\cite{EUI-64} configured on a device. In the activation process of the device, it obtains \ttt{DevAddr}, a 32 bit identifier in the current network allocated by the Network Server. We propose to use a geo-identifier as \ttt{DevEUI}, store \ttt{DevEUI} as a DNS name, and provide a lookup service based on DNS service discovery that returns the names corresponding to a geographical region. Then, we can encode a binary string in the geohash variant of \ttt{base32}, giving us a name. For instance, the binary value of \ttt{01101 11111 11000 00100 00010} (25 bits) results in the \ttt{ezs42} geohash composed of 5 characters, the representation having a location error of $\pm$ 2.4 km. With 59 bits for encoding the latitude and the longitude, a geoprefix or a geohash results in a resolution of a few cm. Table~\ref{tab:size2} presents the size of the \ttt{base32} encoded geohash, the number of bits representing longitude and latitude, and the precision of the decoded value. Geohashes offer interesting features: i) similar geohashes represent nearby positions and ii) a longer geohash represents a smaller area, and shortening it reduces the precision of both its coordinates to represent a larger region.

\begin{table}[t] \centering \caption{Practical example of a geohash} \begin{tabular}{lll} geohash & Latitude & Longitude \\ \midrule \texttt{dr5r7p4rx6kz} & 40.689167 & -74.044444 \\ \texttt{\textbf{dr5r}7p4} & 40.69 & -74.04 \\ \texttt{\textbf{dr5r}111} & 40.61 & -74.13 \\ \end{tabular}\label{tab:pref_example} \end{table}

Table~\ref{tab:pref_example} presents a practical example of the prefix property of a geohash. In the table, the second geohash is a prefix of the first one, so the first one is more precise and is contained in the region of the second. The second and the third geohashes have a common prefix, so they are in the same region (easily computable with the common prefix) but do not overlap. Plus Codes can be shortened relative to a reference location, which reduces the number of digits to use, and we can define the reference location in a given Context. They have similar properties to geohashes and geoprefixes---they represent areas and the size of the area depends on the code length. We can store a string version of the geohash and Plus Code in DNS as the names of an IoT device and enable some geographical/proximity searches using the \ttt{geohash.org} site or Google Maps (Plus Code required). The only constraint of using geo-identifiers for \ttt{DevEUI} is the fact that \ttt{DevEUI} does not have the EUI-64 format anymore, which may be an obstacle for some applications. On the other hand, we gain the possibility of linking the device location with its identifier.

\section{Device Discovery with DNS Queries}\label{sec:impl}

In this section, we discuss how we can query the DNS system to discover some properties of IoT devices and find relevant devices corresponding to those properties.
We propose to use the defined semantic identifiers and DNS names to offer support for search and discovery. In constrained environments, providing full-fledged database search functionality may be difficult. Instead, we propose to take advantage of the DNS system as the basic functionality for querying and discovering the semantic properties related to IoT devices.

\subsection{DNS Service Discovery}\label{sec:dns-sd}

DNS-Based Service Discovery (DNS-SD)~\cite{dnssd} is a functionality of DNS to discover services in a network. Information about a given service is stored in the DNS database as an \ttt{SRV} record of the form: \begin{footnotesize} \begin{alltt} \textbf{<Instance>.<Service>.<Domain> IN SRV <data>} \end{alltt} \end{footnotesize} \noindent and gives the target host and the preassigned port at which the service instance can be reached. The \ttt{TXT} record for the same name gives additional information about this instance in a structured form using key/value pairs. A DNS client can discover the list of available instances of a given service type using a query for a DNS \ttt{PTR} record with a name of the form \ttt{<Service>.<Domain>}, which returns a set of zero or more \ttt{PTR} records giving the \ttt{<Instance>} names of the services that match the queried \ttt{<Service>}. Each \ttt{PTR} record is structured as follows: \begin{footnotesize} \begin{alltt} \textbf{<Service>.<Dom> IN PTR <Instance>.<Service>.<Dom>} \end{alltt} \end{footnotesize} The \ttt{<Instance>} portion of the \ttt{PTR} data is a user-friendly name consisting of UTF-8 characters, so rich-text service names and subdomains are allowed and encouraged, for instance: \begin{footnotesize} \begin{alltt} \textbf{LoRa temp sensor.Room 7.\_iot.\_udp.example.com.} \end{alltt} \end{footnotesize} The \ttt{<Service>} portion of the query consists of a pair of DNS labels, following the convention already established for \ttt{SRV} records; for instance, the \ttt{PTR} entry for the name \ttt{\_http.\_tcp.local.}: \begin{footnotesize} \begin{alltt} \textbf{\_http.\_tcp.local. PTR web-page.\_http.\_tcp.local.} \end{alltt} \end{footnotesize} \noindent advertises a ``web-page'' accessible over HTTP/TCP. We propose to use this mechanism for querying DNS to find devices relevant to properties or locations expressed as our semantic names. For example, the following query: \begin{footnotesize} \begin{alltt} \textbf{\_dr5r7p4r.\_iot.\_udp.iot.org IN PTR} \end{alltt} \end{footnotesize} \noindent would look for IoT devices near the Statue of Liberty.

\subsection{Structuring Queries as Subdomains}\label{sec:subdomains}

The DNS system stores resource records in a hierarchical tree in which servers can delegate the management of subdomains. For example, DNS Server A, authoritative for \ttt{example.fr}, can delegate the management of the records for \ttt{data.example.fr} to DNS Server B. In a similar way, we can delegate the management of a given geographical region to a specific server whose region is included in the encompassing region of the delegating domain. For instance, if we want to delegate the management of the New York area to a city-managed DNS server, we need to configure a ``New York area'' subdomain and delegate it.
The \ttt{in-addr.arpa} domain, in charge of reverse DNS lookups, already uses this kind of method to delegate the management of an IPv4 address to the owner: when making a reverse DNS query on \ttt{1.2.3.4}, we query \ttt{4.3.2.1.in-addr.arpa}, which corresponds to the chain of delegations starting with the \ttt{1.in-addr.arpa} subdomain for \ttt{1.0.0.0/8}, and so on. We can use a similar method to split semantic names into multiple subdomains to easily delegate some properties or locations to other servers. Here is an example for geo-identifiers: instead of having to encode all possible geo-identifiers under the \ttt{\_iot.\_udp.iot.org} domain, we create the \ttt{dr.\_iot.\_udp.iot.org} subdomain and let it handle all areas with the \ttt{dr} geoprefix (encompassing the east coast of the USA). This subdomain can delegate subdomains to other servers if needed. Similarly, the server in charge of the \ttt{dr} prefix (east coast) can delegate the \ttt{dr5r} area (encompassing New York) to another server (a city-managed server, for example). Then, the administrator of this server can choose to handle the \ttt{7p.5r.dr} subdomain, as it represents an area of 600 meters around the Statue of Liberty. As a result, instead of querying \ttt{dr5r7p.\_iot.\_udp.iot.org}, we can query for instance \ttt{7p.5r.dr.\_iot.\_udp.iot.org} and let each subdomain administrator choose if she wants to delegate some areas to other servers.

There are several ways to split a given semantic name into multiple subdomains, so the user has to know the number of characters in a given subdomain to use it in a query. The number of characters in each subdomain also influences the kind of queries a user can make. For example, setting $2$ characters per subdomain, like in \ttt{34.12.\_iot.\_udp.iot.org}, makes it impossible to query directly for devices with the \ttt{123} prefix, so the user has to either query the whole prefix \ttt{12.\_iot.\_udp.iot.org} and then filter the relevant results, or query all \ttt{3[0-f].12.\_iot.\_udp.iot.org} domains ($16$ queries). Thus, we need to choose the subdomain size carefully. We propose three schemes for splitting geo-identifiers: a static subdomain length, a dynamic subdomain length, and multiple subdomain lengths.

\textbf{Static subdomain length.} We set a size $S$ for all subdomains. In this way, the user can split the geo-identifier into several groups of size $S$ (rounded up or down, depending on whether the user prefers a query on the encompassing zone followed by filtering, or multiple sub-queries) without any additional knowledge. The drawback is the lack of flexibility and the arbitrary choice of $S$, which may be suitable for a given area but not for another one.

\textbf{Dynamic subdomain length.} Each domain has a \ttt{TXT} record that gives the size of the subdomains related to a given area. For example, \ttt{12.\_iot.\_udp.iot.org IN TXT "len=3"} informs the user that under the \ttt{12} subdomain, each subdomain has length $3$, so one can query \ttt{345.12.\_iot.\_udp.iot.org}. This scheme supports the right subdomain length for each region: in a dense area where we need multiple precise subdomain delegations, we can set a small length to obtain precise subdivisions, and in sparse areas where we do not need small subdivisions (seas, fields), we can use a larger length (3 or 4). The scheme supports multiple subdomain lengths in the same query, like in \ttt{6.345.12.\_iot.\_udp.iot.org}, as each subdomain can set its size, as illustrated by the sketch below.
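A client could implement this resolution with the \ttt{dnspython} library as follows (a sketch; the apex domain and the default label length are our assumptions):

\begin{footnotesize}
\begin{verbatim}
import dns.resolver

resolver = dns.resolver.Resolver()

def split_dynamic(geohash, apex="_iot._udp.example.org"):
    """Build the query name for a geohash, asking each zone for its label
    length via the TXT "len=..." convention of the dynamic-length scheme."""
    name = apex
    while geohash:
        try:
            rdata = next(iter(resolver.resolve(name, "TXT")))
            txt = rdata.strings[0].decode()
            size = int(txt.split("=", 1)[1]) if txt.startswith("len=") else 2
        except Exception:
            size = 2                      # assumed default label length
        name = geohash[:size] + "." + name
        geohash = geohash[size:]
    return name

# With "len=2" everywhere:
# split_dynamic("12345") -> "5.34.12._iot._udp.example.org"
\end{verbatim}
\end{footnotesize}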
The drawback is the need for recursively querying different subdomains for their \ttt{TXT} records to know the length of each field before splitting the query the right way.

\textbf{Multiple subdomain lengths.} There are multiple ways to get to a given subdomain, so multiple ways of splitting the geo-identifier are possible and valid. For example, both \ttt{345.12.\_iot.\_udp.iot.org} and \ttt{5.34.12.\_iot.\_udp.iot.org} are valid and point to the same area. In this way, the users do not have to query for \ttt{TXT} records and can split their queries as they want. However, it may be hard to encode all the ways of splitting the geo-identifier into subdomains in a resource record. We can simplify this method with \ttt{CNAME} records, in the same way the \ttt{in-addr.arpa} domain handles the delegation of subnetworks of arbitrary size\footnote{\url{https://tools.ietf.org/html/rfc2317}} by defining multiple \ttt{CNAME} records. For example, if two different servers need to handle the prefixes \ttt{12a} and \ttt{12b} but the \ttt{12.\_iot.\_udp.iot.org} domain only defines subdomains of length $2$, we can insert the following records: \begin{footnotesize} \begin{alltt} \textbf{a NS server.handling.a.12.area} \textbf{a0 CNAME 0.a.12.\_iot.\_udp.iot.org} \textbf{a1 CNAME 1.a.12.\_iot.\_udp.iot.org} \textbf{a2 CNAME 2.a.12.\_iot.\_udp.iot.org} \textbf{\ldots} \textbf{af CNAME f.a.12.\_iot.\_udp.iot.org} \end{alltt} \end{footnotesize} We can apply the same approach to all $16$ \ttt{bX.12} records. Once the \ttt{CNAME} records are created, a user querying \ttt{a2.12.\_iot.\_udp.iot.org} will be redirected to \ttt{2.a.12.\_iot.\_udp.iot.org}, so she will try to resolve the \ttt{a.12} part and will receive an \ttt{NS} entry pointing to the server in charge of the \ttt{12a} area. Therefore, with these records, the user does not have to know how the delegation in the \ttt{12} area works; the query does not change from her point of view, yet with \ttt{CNAME} and \ttt{NS} records we can transparently delegate parts of the subdomain. Moreover, this method allows for easy modification of the server in charge of (authoritative for) \ttt{a.12} because changing the \ttt{NS} entry is easy and the \ttt{CNAME} records remain the same. The method may generate many \ttt{CNAME} entries, but they are simple to generate automatically and do not need to change often.

We can use the subdomain splitting for different Contexts, such as logical locations. In this particular case, we can easily encode the properties in different subdomains because they are naturally ordered (a room on a given floor in a given building). For example, if the Context for Logical Localization is \ttt{2}, the position of a device in Building 1 on Floor 5 in Room 56 is as follows (with the geohash \ttt{base32} encoding in parentheses): \begin{itemize} \item Context-2: $2$ - (\ttt{2}) \item Building: $1$ - (\ttt{1}) \item Floor: $5$ - (\ttt{5}) \item Room: $56$ - (\ttt{1s}) \end{itemize} So, to get the sensors in this room, we send the following query: \begin{footnotesize} \begin{alltt} \textbf{\_1s.\_5.\_1.\_2.\_iot.\_udp.iot.org IN PTR} \end{alltt} \end{footnotesize}

\subsection{Use of \ttt{AXFR} for the Result Set}

Another way of obtaining the result set from a DNS server is to use the DNS Zone Transfer Protocol (\ttt{AXFR})~\cite{axfr}, which returns all records in a zone. When a client sends an \ttt{AXFR} query message to an authoritative server, the server answers with all resource records stored in the zone.
This feature can be used to return the results of a query on subdomains describing a property or a geographical area of interest. For instance, to get all devices and the corresponding data stored in the zone of \ttt{123456}, the user can use the following command: \begin{footnotesize} \begin{alltt} \textbf{dig @server AXFR 56.34.12.\_iot.\_udp.iot.fr} \end{alltt} \end{footnotesize}

\section{Prototype Implementation of Semantic Discovery} \label{sec:proto}

We have implemented two prototypes for geo-identifiers of LoRa devices. Their extension to other types of semantic names is ongoing. The prototypes for geo-identifiers are available to the public to encourage reproducibility. We show below some examples of their utilization with commands using the \ttt{dig} tool. These prototypes only consider geohashes encoded in the domain name, without the use of the Context described in Section~\ref{sec:encoding}. The first prototype,\footnote{\url{https://github.com/dsg-unipr/geo-dns}} which takes advantage of the Node-based \emph{dns2} module~\cite{dns2} and the Redis in-memory database,\footnote{\url{https://redis.io}} allowed us to quickly deploy and test the concepts based on hardcoded data.

\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{geo-lora}\\[-1ex] \caption{Prototype for LoRa geo-identifiers based on DNS-SD.} \label{fig:proto} \vspace{-2ex} \end{figure}

The second prototype\footnote{\url{https://github.com/fabrizior/coredns}} uses the CoreDNS DNS server written in Go.\footnote{\url{https://coredns.io}} CoreDNS is highly flexible thanks to plugins that perform different functions: DNS, Kubernetes service discovery, Prometheus metrics, rewriting queries, and many more. We have modified the \textit{file} plugin that enables serving zone data from an RFC 1035-style master file. In our prototypes, applications or Network Servers that want to discover the location of LoRa devices can query a DNS server to find the devices matching some criteria based on their location. In other types of networks, devices themselves can directly query a DNS server. In a LoRa network with geo-identifiers, when registering a device, the Network or Join Server registers several records in the DNS database: first, an \ttt{SRV} record giving the domain and ports where the Network Server managing a given device can be queried; then, \ttt{PTR} records that allow finding the device based on its geo-identifier or name: \begin{footnotesize} \begin{alltt} \textbf{<name>.\_iot.\_udp.<Domain> IN SRV <port> <domain>} \textbf{\_<geo-i>.\_iot.\_udp.<Domain> IN PTR <name>} \end{alltt} \end{footnotesize} \ttt{name} being the semantic name as described in Section~\ref{sec:encoding}. This name is unique within the given domain and describes the properties of the device. \ttt{geo-i} is the geo-identifier of the device encoded in multiple subdomains as described in Section~\ref{sec:subdomains}. When an application needs to find all devices in a given area, it can query DNS for all devices in the matching subdomain by sending a query like: \begin{footnotesize} \begin{alltt} \textbf{\_<geo-i>.\_iot.\_udp.<Domain> IN PTR}, \end{alltt} \end{footnotesize} where \ttt{geo-i} can be split into multiple subdomains if needed. The DNS server answers with the list of all \ttt{PTR} records in the queried subdomain, and therefore, in the represented area. Each \ttt{PTR} record gives the semantic name of a device in the area.
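For instance, with the \ttt{dnspython} library, this discovery step could be scripted as follows (a sketch; the server address and the \ttt{example.org} zone are placeholders):

\begin{footnotesize}
\begin{verbatim}
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["127.0.0.1"]     # the prototype DNS server

# PTR discovery of all devices under the "dr" geoprefix
answer = resolver.resolve("_dr._iot._udp.example.org", "PTR")
names = [str(rdata.target) for rdata in answer]
print(names)  # e.g. ['temperature.dr56._iot._udp.example.org.', ...]
\end{verbatim}
\end{footnotesize}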
Once the application knows the names of the devices in the area, it can query the DNS server for an \ttt{SRV} record for each semantic name and get the Network Server managing the corresponding device. We have implemented this method in our prototypes, and both of them can be queried with the \ttt{dig} tool.\footnote{The address 127.0.0.1 used in the following examples should be replaced with the actual IP address of the DNS server.} Upon receiving a \ttt{PTR} query for a specific \ttt{<Service>}, the server returns all instances of that service type in the subdomain: \begin{scriptsize} \begin{alltt} \textbf{\# dig @127.0.0.1 -p 53 \_dr.\_iot.\_udp -t PTR} \textbf{;; QUESTION SECTION:} \textbf{;\_dr.\_iot.\_udp. IN PTR} \textbf{;; ANSWER SECTION:} \textbf{\_dr.\_iot.\_udp. 100 IN PTR humidity.dr12.\_iot.\_udp.} \textbf{\_dr.\_iot.\_udp. 100 IN PTR temperature.dr34.\_iot.\_udp.} \textbf{\_dr.\_iot.\_udp. 100 IN PTR temperature.dr56.\_iot.\_udp.} \end{alltt} \end{scriptsize} Then, once the application has obtained the semantic name of the device (for example, \ttt{temperature.dr56}), it can query the server for an \ttt{SRV} record with this name, which will contain the domain and ports at which to access the device. It can also ask for \ttt{TXT} records to get additional data about the device. For example, still using \ttt{dig}: \begin{scriptsize} \begin{alltt} \textbf{\# dig @127.0.0.1 -p 53 temperature.dr56.\_iot.\_udp -t ALL} \textbf{;; QUESTION SECTION:} \textbf{;temperature.dr56.\_iot.\_udp. IN ALL} \textbf{;; ANSWER SECTION:} \textbf{temperature.dr56.\_iot.\_udp. 100 IN SRV 10 20 8080 dr56.unipr.it.} \textbf{temperature.dr56.\_iot.\_udp. 100 IN TXT "temperature=14"} \end{alltt} \end{scriptsize} Finally, when an A query for the \ttt{<Domain>} managing a device is received, the server returns the IP address of the Network Server the LoRa device is associated with. For example, if the following \ttt{dig} command is executed: \begin{scriptsize} \begin{alltt} \textbf{\# dig @127.0.0.1 -p 53 dr56.unipr.it -t A} \textbf{;; QUESTION SECTION:} \textbf{;dr56.unipr.it. IN A} \end{alltt} \end{scriptsize} we obtain the following answer: \begin{scriptsize} \begin{alltt} \textbf{;; ANSWER SECTION:} \textbf{dr56.unipr.it. 100 IN A 160.78.28.203} \end{alltt} \end{scriptsize}

\section{DNS as a Source of IoT Data}\label{sec:dns-data}

In the previous sections, we have presented the schemes for encoding device properties in domain names to discover devices by querying the DNS infrastructure. Once the user discovers some relevant devices, she still needs to contact them with different protocols to obtain data, or to set up a data delivery process with, for instance, the CoAP Observe option. We can also take advantage of the DNS infrastructure as a public store for IoT data, in a similar way to the Cloud. Many IoT applications store data in the Cloud for further processing and access by clients. The idea of DNS as a source of IoT data is to use a \ttt{TXT} record associated with a name of an IoT device to store its data so that a large number of users can access the data in DNS instead of getting them directly from the device. As a \ttt{TXT} record linked to a domain is usually already filled with human-readable data related to the domain, we can add dynamically created records.
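Continuing the sketch above, a client could retrieve such a record and parse its \ttt{<key>=<value>} payload as follows (names are again placeholders):

\begin{footnotesize}
\begin{verbatim}
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["127.0.0.1"]

answer = resolver.resolve("temperature.dr56._iot._udp.example.org", "TXT")
data = {}
for rdata in answer:
    for raw in rdata.strings:      # a TXT rdata is a tuple of byte strings
        key, _, value = raw.decode().partition("=")
        data[key] = value
print(data)                        # e.g. {'temperature': '14'}
\end{verbatim}
\end{footnotesize}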
Once the IoT data is stored in the \ttt{TXT} record, users will benefit from the efficient dissemination offered by the DNS caching infrastructure: recursive resolvers will cache its content and keep the data until the time-to-live (\ttt{ttl}) of the record expires. Then, the recursive resolvers will query the authoritative server to get the new record and the updated data from the device. With this method, the end users do not need to know what kind of protocol should be used to contact the device, as the data are stored in a standard \ttt{TXT} record and no direct contact between the user and the device is required.

\subsection{Encoding Data in \ttt{TXT} Records}

RFC 6950\footnote{\url{https://tools.ietf.org/html/rfc6950}} describes under what conditions an application can use DNS to store data and provides several recommendations and warnings indicated by other RFCs. RFC 1464\footnote{\url{https://tools.ietf.org/html/rfc1464}} formalized the \ttt{<key>=<value>} format for storing data in \ttt{TXT} records, so, for the example of a temperature sensor, the DNS entry should be \ttt{<domain> IN <ttl> TXT "temperature=14"}. Not all types of data should be placed in DNS: records with a large size can be used by attackers as an amplifier to generate a lot of traffic~\cite{Rossow14amplificationhell} (this is why \ttt{.com} records are limited to 1460 bytes). Therefore, this solution may not be suitable for all kinds of sensors. For example, a device taking periodic $512 \times 512$ pictures would generate data that should not be put in DNS; instead, the user will have to find a way to contact the device directly, or its Network Server, to get the data from a suitable source.

\subsection{Updating Data in \ttt{TXT} Records}

To keep the data in the DNS record updated, there should be a process or an entity that gets the data from the device and updates the corresponding \ttt{TXT} record. For non-constrained devices, an IoT device could update its own record, but for most constrained devices, this kind of operation may be too costly, so another entity can update the data. For LoRa networks, all data from the devices go through the Network Server. As this server is not constrained, it can update the \ttt{TXT} record on behalf of the device using, for example, the secure Dynamic DNS Update protocol extension~\cite{zone} and a standard Unix \ttt{nsupdate} command to insert the new values in the zone file of the authoritative DNS server. Because the data are not updated in real time, it is important to choose a suitable \ttt{ttl} value for the \ttt{TXT} record, so that the data are ``marked'' as out of date when new values are available. The \ttt{ttl} value must take into account the frequency at which the Network Server retrieves the new data from the device and updates the corresponding DNS record. For example, if the Network Server retrieves the temperature data and dynamically updates \ttt{TXT} records every hour, then the \ttt{ttl} value should be ``synchronized'' and also set to one hour so that the information stored in the caches of local DNS resolvers, which request the data on behalf of local clients, is also up to date. We can also use the Incremental Transfer mechanism (\ttt{IXFR})\footnote{\url{https://tools.ietf.org/html/rfc1995}} designed to transfer only a modified part of a zone, for example, the updated \ttt{TXT} records with the changed temperature. Each time the zone is dynamically updated by, for example, the Network Server, the serial number of the zone is increased.
Therefore, after the initial \ttt{AXFR} transfer, the client should keep a record of the Start of Authority (\ttt{SOA}) serial number of the transferred zone. Next, the client can send an \ttt{IXFR} request with the registered version number so that the authoritative name server responds only with the resource records deleted and added between the version known by the \ttt{IXFR} client and the current version of the zone stored by the authoritative server. For example, to get new data related to the \ttt{123456} location, the client can use the following command: \begin{footnotesize} \begin{alltt} \textbf{dig @server IXFR=[old-ser] 56.34.12.\_iot.\_udp.iot.fr} \end{alltt} \end{footnotesize}

\section{Conclusion}\label{sec:conclusion}

In this paper, we have proposed a scheme for representing semantic metadata of IoT devices in compact identifiers and DNS names to enable simple discovery and search with standard DNS servers. Our scheme defines a binary identifier as a sequence of bits composed of a Context and several fields encoding semantic properties specific to the Context. The bit string is then encoded as a character string stored in DNS. In this way, we may take advantage of the DNS system as the basic functionality for querying and discovery of semantic properties related to IoT devices. We have defined specific Contexts for hierarchical properties as well as logical and geographic locations. For this last part, we have developed two prototypes that manage geo-identifiers in LoRa networks to show that the proposed scheme can take advantage of the standard DNS infrastructure. Regarding future work, we will thoroughly assess the proposed approach using deployed LoRaWAN devices. Furthermore, we plan to further investigate the idea of DNS as a source of IoT data, with particular attention to the problem of getting data from the devices and updating the \ttt{TXT} records.

\section*{Acknowledgments}

This work has been partially supported by the French Ministry of Research projects PERSYVAL-Lab under contract ANR-11-LABX-0025-01 and DiNS under contract ANR-19-CE25-0009-01.

\bibliographystyle{IEEEtran}
\section{Introduction}

The study of transport through conductors coupled by the Coulomb interaction has been a promising research field since the late 70's, when Pogrebinskii\cite{pogrebinskii} proposed an alternative way of measuring inner properties of solids which involved two electrically isolated 2D conductors (or layers) placed close together. The measurement protocol was based on the mutual friction (i.e., Coulomb-mediated scattering processes) felt by charge carriers belonging to different conductors due to long-range interactions. In these scattering processes, momentum and energy can be transferred between the layers even though they are electrically isolated from each other. A spectacular effect of this mutual friction is the Coulomb drag\cite{pogrebinskii,drag2,reviewdrag}, in which a charge current is induced in an unbiased conductor (known as the ``passive" conductor) simply by applying a bias to the other (the ``active" conductor). This effect has been extensively studied in a wide variety of systems, from layered conductors\cite{bilayer1,bilayer2,bilayer3} to smaller dimensional systems such as coupled quantum wires\cite{wiredot,wireexp1,wireexp2} or even double quantum dot structures\cite{qdots1,qdots3,qdots4}. In particular, experimental investigations in systems composed of Coulomb coupled quantum dots are reported in Refs.\cite{qdots2,add1,add2,add3,add4,add5}. Even more interestingly, not only charge currents but also heat and energy flows can be induced in the unbiased conductor thanks to the energy transfer between the Coulomb coupled objects. In recent years, this phenomenon has rekindled the interest of the theoretical and experimental communities in Coulomb-coupled devices, especially in their thermodynamics, due to the possibility of using such energy transfer to develop novel nanotechnologies. Some examples are the implementation of a single-electron heat diode\cite{diode}, a self-contained quantum refrigerator \cite{qrefrigerador}, and the realization in a laboratory of a Szilard engine\cite{szilard}, an energy harvester \cite{add4,add5}, as well as an autonomous Maxwell's demon\cite{demon} that converts thermal energy into work by the use of information. Among the most recent works addressing the study of heat transport and entropy production in Coulomb coupled systems, we find Refs.\cite{heatqd1, heatqd3,heatqd2,heatqd4} for double quantum dot circuits, and Ref.\cite{tdrag} in the case of coupled Coulomb-blockaded metallic islands and quantum wires.

In this work we focus on several phenomena that boil down to one main question: what would happen if, instead of a bias voltage or a thermal gradient, the active conductor were driven by time-dependent gates? The natural questions concern the effects of the friction induced by the Coulomb coupling on quantum pumping at low temperatures, on the energy dissipation, and on the efficiency of the energy transfer between the Coulomb-coupled dots. Here we take a first step towards answering these questions, which, to the best of our knowledge, have not been discussed before. We believe that our results can shed light on the art of manipulating charge and energy fluxes, which is a crucial task for the development of new technologies. To achieve our goal, we focused on the study of time-dependent charge and energy transport in a basic setup, which could be experimentally realized and is shown in Fig.~\ref{fig1}.
It is composed of two Coulomb-coupled quantum dots, namely the active dot and the passive dot, which are coupled in series with two electron reservoirs at the same temperature and chemical potential. Only the active quantum dot is driven, by the application of an adiabatic time-periodic local gate that moves its level around the Fermi energy of the reservoir. From the theoretical perspective, Coulomb-coupled quantum dot systems driven by bias voltages or thermal gradients were previously studied mostly by recourse to the master-equation approach, valid in the regime where the hybridization with the reservoirs $\Gamma$ is negligible compared to the temperature and the Coulomb interaction $U$ \cite{qdots1,qdots3,qdots4,diode,qrefrigerador,heatqd1,tdrag}. Results were also presented by using the non-equilibrium non-crossing approximation (NCA) for rather high temperatures \cite{addnca}, since this approach fails at very low temperatures, below the characteristic Kondo temperature. On the other hand, the use of master equations in similar systems and in the presence of adiabatic time-dependent drivings was addressed in Refs. \cite{addtdmeq1,addtdmeq2}. At low temperatures and for a small interaction $U$, $U\ll\Gamma$, Ref.\cite{heatqd2} showed that the renormalized perturbation theory (RPT) in $U/\Gamma$ offers the most reliable description. However, in this work we focus on a different interesting regime, in which the temperature is very low and the interaction $U$ is larger than the hybridization. In order to describe the adiabatically driven interacting system in this latter regime, we use the mean-field slave-spin 1 approach of Ref.\cite{slaveapin}, which efficiently captures the main effects of the Coulomb coupling as well as the dynamical nature of the driving. As we discuss below, the slave-spin method realizes a mean field which is suited to treat strong electron-electron interactions. \begin{figure} \includegraphics[width=0.41\textwidth]{fig1.pdf} \caption{Scheme of the theoretical model considered in this work. It consists of two single-level quantum dots, which are coupled in series with two non-interacting electron reservoirs with tunneling amplitudes $w_{a/p}$. The dots are coupled to each other through a Coulomb interaction of magnitude $U$, so that charge transfer between them is forbidden. The dot that is called the ``active dot" is driven out of equilibrium by a time-periodic gate $\varepsilon_a(t)$, while the other quantum dot (the ``passive dot") remains undriven, with a constant energy level $\varepsilon_p$. Both reservoirs are at the same temperature $T$ and have the same chemical potential $\mu$. }\label{fig1} \end{figure} The paper is organized as follows. In Sec. \ref{model} we introduce the model and the time-dependent slave-spin 1 approach within the adiabatic regime. Then, in Sec. \ref{charge and energy}, we compute the charge and energy fluxes in the system. Sec. \ref{results} presents the results for an illustrative example. Finally, Section \ref{conclusions} is devoted to the summary and conclusions. \section{Model and formalism}\label{model} We consider the setup in Fig.\ref{fig1}, where we assume the quantum dots to be single-level with spinless electrons. This is a simplification that has also been assumed before in the literature, see for instance Refs. \cite{qdots4,diode,heatqd2,heatqd4}, and that could be experimentally realized by the application of a magnetic field able to completely polarize the dots.
The prototypical device can be thought of as composed of two subsystems, namely the ``active" and the ``passive" one, each containing a quantum dot and the corresponding reservoir. Accordingly, the active subsystem holds the active dot, which is driven by the time-periodic local gate $\varepsilon_a(t)$. The passive subsystem remains undriven, with the quantum dot at a constant energy level $\varepsilon_p$. These two subsystems interact only through a Coulomb repulsion of magnitude $U$ between inter-dot electrons. Thus, we describe the full system by the Hamiltonian $H_{Full}(t)={\mathcal{H}}_a(t)+{\mathcal{H}}_p+Un_an_p$, where \begin{equation} {\mathcal{H}}_\alpha={\mathcal{H}}^{dot}_\alpha+{\mathcal{H}}^{Res}_\alpha\;\;\;\text{for $\alpha=a,p$} \end{equation} represents the uncoupled active and passive subsystems, with \begin{equation}\label{hamdot} {\mathcal{H}}^{dot}_\alpha=\varepsilon_\alpha n_\alpha+\sum_{\mathclap{k_\alpha}}w_\alpha(c^\dagger_{k_\alpha}d_\alpha +d_\alpha^\dagger c_{k_\alpha}), \end{equation} for the quantum dots along with the tunneling couplings with the reservoirs, and \begin{equation}\label{ham} {\mathcal{H}}^{Res}_\alpha=\sum_{\mathclap{k_\alpha}}\varepsilon_{{k_\alpha}}c^\dagger_{k_\alpha}c_{k_\alpha}, \end{equation} for the non-interacting reservoirs, which are assumed to be at equilibrium. The third term in $H_{Full}$ describes the Coulomb interaction, where the occupation operator reads ${n}_{a/p}=d^\dagger_{a/p}d_{a/p}$. On the other hand, the operator $c_{k_{a}}$ ($c_{k_{p}}$) and its conjugate belong to the reservoir lying in the active (passive) subsystem, and $\varepsilon_{k_a}$ ($\varepsilon_{k_p}$) is the corresponding energy band. The dot-reservoir tunneling amplitudes are $w_{a/p}$. The Coulomb coupling between the two quantum dots does not allow for an exchange of electrons between them. Therefore, any energy transfer between the active and passive subsystems will not be accompanied by a charge transfer between the two subsystems. Nevertheless, both quantum dots can certainly exchange particles with the reservoir they are attached to. In fact, as we are going to show later in this section, energy transfer processes are accompanied (or mediated) by an induced time variation of the charge in the passive dot. The resulting charge current between the passive dot and the reservoir on the right has zero net value when averaged over one oscillation period of the driving. Fig. \ref{fig2} shows in a more intuitive way how the energy exchange processes take place. As an illustration, but without loss of generality, we consider an example in which both reservoirs are at the same temperature $T=0$. The energy level of the active dot evolves as $\varepsilon_a(t)=\varepsilon_0\cos(\omega t)+\mu$ with $\varepsilon_0>\mu$, while $\varepsilon_p$ is constant and $\varepsilon_p\ll\mu$. In this way, the initial configuration (see sketch 1) corresponds to the active level being above the Fermi sea while the passive dot lies below. Thus, the initial occupancy is $(n_a,n_p)=(0,1)$. During the first half period, in the second sketch, the active level fills up as it goes below $\mu$. Then, the Coulomb repulsion between electrons from different dots starts to be felt, and this opposes the filling of the active dot. Therefore, in order for the active dot to be filled up, it has to pay an energy cost which is extracted from the external time-dependent driving sources.
Part of the energy that the active dot receives from the sources is then delivered to the passive dot so that the electron there can tunnel above the Fermi level of the right reservoir (see 3), leaving the dot empty. Finally, during the second half period of the driving, the emptying of the active dot takes place and the passive dot can therefore be occupied again. Thus, as shown in step 4, an electron of energy $\varepsilon_p$ from the right reservoir tunnels into the passive dot, generating a hole deep inside the Fermi sea. An electron with an energy higher than $\mu$ then decays into the hole, releasing an amount of energy that is dissipated as heat in the reservoir but that could eventually be reused (or transformed) in a modified setup. \begin{figure} \includegraphics[width=0.5\textwidth]{fig2.pdf} \caption{Scheme of the energy transfer mechanisms. The active dot evolves as $\varepsilon_a(t)=\varepsilon_0\cos(\omega t)+\mu$ with $\varepsilon_0>\mu$, while the level of the passive dot remains constant and deep below $\mu$. 1) Initially, the active level is empty but the passive dot is filled. 2) During the 1st half of the oscillation period, the active level goes below $\mu$ and, in order to be filled, it has to pay the energy cost of the Coulomb repulsion. 3) Energy is transmitted from the active to the passive dot, so that the electron lying in the passive dot can tunnel above $\mu$. 4) During the 2nd half of the period, the active dot gets empty again so that the passive dot can return to being filled, dissipating heat during the process. }\label{fig2} \end{figure} Thus, we see that the driving of the active dot induces a time-dependent charge current between the passive dot and the reservoir on the right, which is neutral on average over a complete driving period. The passive dot has to receive some energy from the active subsystem in order to allow for these charge variations. This feature is properly captured by the time-dependent slave-spin 1 approach we previously introduced in Ref. \cite{slaveapin}, which is presented in the following section. \subsection{Slave-spin 1 approach and the adiabatic regime} We choose the slave-spin 1 (S-S1) mean-field approach for finite $U$ introduced in Ref. \cite{slaveapin} as a minimal theoretical framework that captures the main effect of the electronic correlations in two-level systems, while also allowing for an analytical treatment that can be combined with a linear-response expansion for slow driving frequencies. Within the S-S1 approach, the original interacting Hamiltonian $H_{Full}(t)$ is represented in an enlarged Hilbert space that contains an auxiliary $S=1$ spin together with two pseudofermions. The slave spin is in correspondence with the total fermionic number of the four possible electronic configurations of the double-dot system, $(n_a,n_p)=\{(0,0); (1,0); (0,1); (1,1)\}$. Individual electrons are represented within this framework in terms of a pseudofermionic operator $d^*_{a/p}$ together with the $S=1$ spin, so that operators belonging to the quantum dots are equivalently represented under the transformations $d_{a/p}\rightarrow d^*_{a/p}S^-/(\hbar\sqrt{2})$ and $n_{a/p}\rightarrow n^*_{a/p}$, while the Coulomb interaction $n_an_p\rightarrow S_z(S_z+\hbar)/(2\hbar^2)$ can be rewritten in terms of the spin solely. The four physical electronic configurations are ensured by enforcing the following constraint on the total number of electrons \begin{equation}\label{constraint} n^*_a+n^*_p=\frac{S_z}{\hbar}+1.
\end{equation} Plugging the above transformations for the operators into the full Hamiltonian $H_{Full}(t)$, the S-S1 Hamiltonian of the full system can be written as \begin{eqnarray}\label{hamfull2} H_{Full}^*(t)&=&\sum_{\alpha=a,p}{\mathcal{H}}^{{dot}^*}_{\alpha}(t)+{\mathcal{H}}^{Res}_\alpha\\ &&+\left(\frac{U}{2\hbar}S_z-\lambda(t)\right)\left(\frac{S_z}{\hbar}+1\right),\nonumber \end{eqnarray} with \begin{eqnarray}\label{ham2} \mathcal{H}^{{dot}^*}_\alpha(t)&=&\varepsilon^*_\alpha(t){n}^*_\alpha+\sum_{\mathclap{k_{\alpha}}}\frac{w_{\alpha}}{\hbar\sqrt{2}}\left(S^-c^\dagger_{k_{\alpha}}d^*_\alpha+\text{H.c.}\right), \end{eqnarray} where $\lambda(t)$ is the Lagrange multiplier imposing the constraint on the occupancy in Eq. (\ref{constraint}) at every time, and $\varepsilon^*_\alpha(t)=\varepsilon_\alpha(t)+\lambda(t)$ are the renormalized energy levels of the dots. In Eq. (\ref{hamfull2}), the Hamiltonians of the reservoirs $H^{Res}_\alpha$ remain the same since they are non-interacting, so that the S-S1 representation does not apply to them. Now, as is customary in other slave-particle methods, we treat the problem within the mean-field approximation (MF), which consists of decoupling the fermionic and spin degrees of freedom and treating the constraint on average. These assumptions are justified for $U\gg \{\Gamma_{a/p}, \dot{\varepsilon}_a\}$, where $\Gamma_{a/p}$ are the hybridizations with the active and passive reservoirs. Thereby, fluctuations of the spin with respect to its mean values can be neglected even under the action of the time-dependent driving, as long as the adiabatic condition $\dot{\varepsilon}_a\ll \Gamma_{a/p}$ is satisfied. Then, we replace the components of the slave spin, $S_z$ and $S^{\pm}=S_x \pm i S_y$, by their expectation values and neglect their fluctuations. Consequently, the interacting Hamiltonian $H_{Full}(t)$ is mapped into a non-interacting one \begin{equation}\label{hammf} \tilde{H}_{full}(t)=\tilde{\mathcal{H}}_a(t)+\tilde{\mathcal{H}}_p(t)+\beta(t), \end{equation} where $\tilde{\mathcal{H}}_\alpha(t)$ is the effective MF Hamiltonian for the subsystem $\alpha=\{a,p\}$, with $\tilde{\mathcal{H}}_\alpha(t)=\tilde{\mathcal{H}}_\alpha^{dot}(t)+{\mathcal{H}}_{\alpha}^{Res}$, \begin{equation}\label{hamdotmf} \tilde{\mathcal{H}}_{\alpha}^{dot}(t)=\varepsilon^*_\alpha(t){n}^*_\alpha+\sum_{\mathclap{k_{\alpha}}}\left({w^*_{\alpha}(t)}c^\dagger_{k_{\alpha}}d^*_\alpha+\mbox{H.c.}\right), \end{equation} and \begin{equation}\label{betaoriginal} \beta=\frac{U}{2\hbar^2}\left(\langle S^2_z\rangle+\hbar\langle S_z\rangle\right)-\frac{\lambda}{\hbar}\left(\langle S_z\rangle+\hbar\right), \end{equation} with the renormalized tunneling factors being $w_\alpha^*(t)=w_\alpha\langle S^-\rangle(t)/(\hbar\sqrt{2})$. We can thus see that within the mean-field S-S1 framework the ``active" and ``passive" subsystems are described as effectively uncoupled from each other, even though they actually interact via the Coulomb repulsion. In fact, all the information about the interaction between the subsystems is contained in the effective Hamiltonian parameters $\varepsilon_{a/p}^*(t)$ and $w_{a/p}^*(t)$, which are all time-periodic functions due to the periodicity of the driving. Although in the real setup the driving is applied merely to the active dot, under the mean-field S-S1 both dots turn out to be driven in time.
This is because the Coulomb interaction is portrayed as extra time-dependent driving sources, acting locally on the levels of the dots as well as on the contacts with the leads. This feature of the model is advantageous for describing the energy transfer mechanisms sketched in Fig. \ref{fig2}, since the induction of transport in the passive subsystem due to the interaction is simply explained in terms of extra time-dependent driving sources acting on that part of the device. To find the effective Hamiltonian parameters $\varepsilon^*_{a/p}(t)$ and $w^*_{a/p}(t)$, and the function $\beta(t)$, the coupled problem of the fermionic and spin dynamics must be solved. In this work we consider the driving to be within the adiabatic regime ($ad$), namely a slow evolution in time of the driving parameter, $\dot{\varepsilon}_a\rightarrow 0$. In this regime, as explained in detail in Ref. \cite{slaveapin}, the spin dynamics is simplified since all the spin components turn out to depend solely on $\langle S_z \rangle$, which is the only independent component. In particular, for the Hamiltonian parameters we have \begin{eqnarray}\label{relationsS} &&\vert\langle S^-\rangle^{ad}\vert^2=\hbar^2-\langle {S_z\rangle^{ad}}^2,\\ &&\langle S_z^2\rangle^{ad}=({\langle S_z\rangle^{ad}}^2+\hbar^2)/2.\nonumber \end{eqnarray} In this way, the vertical component $\langle S_z\rangle^{ad}(t)$ together with $\lambda^{ad}(t)$ constitute the full set of variables describing the Coulomb interaction, and their dynamics is obtained by solving the following set of slave-spin equations \cite{slaveapin}, composed of the equation of motion for $S_z$, \begin{eqnarray}\label{consteqad} 0&=&\left(\lambda^{ad}-\frac{U}{2}\langle n^*\rangle\right)\left(1-\frac{{ \langle S_z\rangle^{ad}}^2}{\hbar^2}\right) \nonumber\\ &&+\frac{ \langle S_z\rangle^{ad}}{\hbar}\sum_{\mathclap{\alpha=a,p}}2\mbox{Re}\left\{w^*_\alpha\langle c^\dagger_{k_\alpha}d^*_\alpha\rangle\right\}, \end{eqnarray} with $n^*=n^*_a+n^*_p$, and the constraint on average \begin{equation}\label{eq12eqad} \frac{\langle S_z\rangle^{ad}}{\hbar}+1=\langle n^*\rangle=\sum_{\alpha=a,p}\langle {d^*_\alpha}^\dagger d^*_\alpha\rangle. \end{equation} In the above equations, the expectation values for the pseudofermions, $\langle n^*\rangle$ and $\langle c^\dagger_{k_\alpha} d^*_\alpha\rangle$, should be consistently evaluated within the adiabatic regime, namely in linear response in the small variations of the effective parameters, $\dot{\varepsilon}^*_\alpha$ and $\dot{w}_\alpha^*$, which involve the time derivatives of the variables, $\dot{\lambda}^{ad}$ and ${\dot{\langle S_z\rangle}}^{ad}$ (for further details see Ref. \cite{slaveapin}). It is then important to notice that even though Eqs. (\ref{consteqad}) and (\ref{eq12eqad}) appear to be stationary, they actually constitute a system of ordinary differential equations due to the presence of the time derivatives of the variables inside the electronic expectation values.
Although the above set of equations could be difficult to solve, the quasi-static evolution of the system makes it possible to approximate the solutions as small variations around the static (or frozen) solutions at every time $t$: \begin{eqnarray}\label{solap} \langle S_z\rangle^{ad}(t)&\sim & S_z^{t}+\delta S_z\\ \lambda^{ad}(t)&\sim &\lambda^{t}+\delta \lambda,\nonumber \end{eqnarray} where the index $t$ means that these are static values, in the sense that the dependence on time is purely parametric, as in a series of snapshots of the system in equilibrium with frozen parameters. The first-order corrections taking into account the effect of the slow driving are $\delta S_z$ and $\delta\lambda$, which depend on the frozen solutions and are $\propto \dot{\varepsilon}_a$. The expansion in Eq. (\ref{solap}) has the advantage of offering a practical description of the dynamics in terms of the frozen static values $S_z^t$ and $\lambda^t$ solely, for which Eqs. (\ref{consteqad}) and (\ref{eq12eqad}) reduce to a stationary system of non-linear equations, as presented in Ref.\cite{slaveapin}. When evaluated for the system model of this work, the latter set of stationary equations reads \begin{eqnarray}\label{consteq} 0&=&\left[\lambda^t-\frac{U}{2}\left(1+\frac{ S_z^t}{\hbar}\right)\right]\left(1-\frac{{ S_z^t}^2}{\hbar^2}\right)\\ &&+\frac{ S_z^t}{\hbar}\sum_{\mathclap{\alpha=a,p}}\;\int\frac{d\varepsilon}{\pi}{\rho}^t_\alpha(\varepsilon)(\varepsilon-{\varepsilon^t_\alpha})f(\varepsilon),\nonumber \end{eqnarray} and \begin{equation}\label{eq12eq} \frac{S_z^t}{\hbar}+1=\sum_{\alpha=a,p}\int\frac{d\varepsilon}{2\pi}{\rho}_\alpha^t(\varepsilon)f(\varepsilon). \end{equation} Here ${\varepsilon^t_\alpha}=\varepsilon_\alpha(t)+\lambda^t$, and $\rho_\alpha^t(\varepsilon)=\Gamma_\alpha^t/[(\varepsilon-\varepsilon^t_\alpha)^2+(\Gamma_\alpha^t/2)^2]$ is the density of states of the quantum dot in subsystem $\alpha$, with $\Gamma_\alpha^t=\Gamma_\alpha(\hbar^2-{S_z^t}^2)/2\hbar^2$ the effective hybridization. We are considering the wide-band limit, where the bare hybridization reads $\Gamma_\alpha=w_\alpha^2{\theta}$, with $\theta$ the energy-independent density of states of the $\alpha$-lead. On the other hand, $f(\varepsilon)$ is the Fermi-Dirac distribution, which in this work is the same for both reservoirs. The linear-response terms in Eqs. (\ref{consteqad}) and (\ref{eq12eqad}) are taken into account when computing the corrections $\delta S_z$ and $\delta\lambda$, which, as mentioned above, depend on the frozen solutions and also on the spectral properties of the system. The latter can be found by solving a simple system of linear equations but, since they are not crucial for this work, we refer the reader to Ref. \cite{slaveapin} for details on how to compute them. \section{Charge and energy fluxes}\label{charge and energy} \subsection{Pumping charge} In this work we focus on the effect of the Coulomb coupling on charge and energy transfer, with a particular interest in the passive subsystem, where the transport is exclusively induced by the action of $U$. Regarding the transport of charge, electronic pumping currents flow in the contacts with the reservoirs in response to the time-periodic driving, while there cannot be any exchange of particles between the two dots.
Then, the charge current entering reservoir $\alpha$, $I_\alpha(t)$, obeys the charge conservation law \begin{equation} I_\alpha=-e\frac{d\langle n^*_\alpha\rangle}{dt}, \end{equation} which establishes a relation between the currents entering the reservoirs and the charge variations in the dots. The lowest-order contribution to the above current, $I^{(1)}_\alpha \equiv I_\alpha^{pump}$, is of first order in the variation of the Hamiltonian parameters, $\dot{\varepsilon}^*_\alpha(t)$ and $\dot{w}_\alpha^*(t)$, which are $\propto\hbar\omega$. In Ref.\cite{motor}, we have already provided an expression for the first-order (or linear-response) current in the case of a single non-interacting quantum dot driven by time-dependent couplings to the leads as well as a time-dependent energy level. Those calculations are applicable to the present problem within the mean-field approximation, for which the active and passive dots appear to be non-interacting and uncoupled from each other. The current must then be evaluated with the effective parameters given by the S-S1 mean-field approach, which contain all the information about the finite-$U$ coupling. Consequently, the pumping current reads \begin{eqnarray}\label{curr} \frac{I_\alpha^{pump}}{e}&&=\sum_{k_\alpha}\frac{i}{\hbar}\langle\left[\tilde{H}_{full},c_{k_\alpha}^\dagger c_{k_\alpha}\right]\rangle\nonumber\\ &&=\int\frac{d\varepsilon}{2\pi}\frac{df}{d\varepsilon}\rho_\alpha^t\Gamma_\alpha^t\partial_t\left(\frac{\varepsilon-\varepsilon_\alpha^t}{\Gamma_\alpha^t}\right). \end{eqnarray} \subsection{Power and energy transfer} The Coulomb interaction does not allow for an exchange of electrons between the two quantum dots. Yet, it allows for a net energy transfer, as sketched in Fig. \ref{fig2}. In order to study these energy transfer mechanisms between the active and passive subsystems, we should first analyze the variation of the energy in the full system and then the way it is distributed between the two subsystems. In contrast to the pumped charge, which is conserved for the full system, the corresponding rate of change of the total energy is equal to the power developed by the external ac sources, \begin{equation}\label{power1} P^{ac}(t)=\langle \partial_t H_{full}\rangle=\dot{\varepsilon}_a(t)\langle n_a\rangle=\dot{\varepsilon}_a(t)\langle n_a^*\rangle. \end{equation} At this point, although $\langle n^*_a\rangle(t)$ could simply be computed from the time integral of the current in Eq. (\ref{curr}) for $\alpha=a$, it turns out to be more convenient to work with the effective Hamiltonian, as $P^{ac}=\langle \partial_t \tilde{H}_{full}\rangle$, because this allows us to easily identify how the energy is distributed between the two subsystems during the driving. However, the fact that the mean-field S-S1 introduces extra time-periodic parameters for describing the interaction implies that some care should be taken before replacing the original Hamiltonian $H_{full}$ by the approximate $\tilde{H}_{full}$ in the definition of the power, as is usually done for computing the charge current{\footnote{We stress here that transforming the Hamiltonian under the S-S1 framework, $H_{full}\rightarrow H_{full}^*$, is an exact representation, while approximations are imposed only when treating the problem within mean field with the Hamiltonian $\tilde{H}_{full}$.}}.
However, in our case the substitution is justified since, as we show in the following, \begin{equation}\label{eqH} \langle \partial_t H_{full}\rangle^{ad}=\langle \partial_t \tilde{H}_{full}\rangle^{ad}, \end{equation} at least within the adiabatic regime. In order to prove this equality, we start by comparing Eq. (\ref{power1}) with the time derivative of the MF Hamiltonian in Eqs. (\ref{hammf}) and (\ref{hamdotmf}), \begin{eqnarray} \dot{\varepsilon}_a(t)\langle n^*_a\rangle&=&\sum_{\alpha=a,p}\dot{\varepsilon}_\alpha^*(t)\langle n^*_\alpha\rangle +2\mbox{Re}\left\{\dot{w}_\alpha^*(t) \langle c^\dagger_{k_\alpha} d^*_\alpha\rangle\right\}\nonumber\\ &&+\dot{\beta}(t), \end{eqnarray} and since $\dot{\varepsilon}^*_a(t)=\dot{\varepsilon}_a(t)+\dot{\lambda}(t)$, $\dot{\varepsilon}^*_p(t)=\dot{\lambda}(t)$ and $\dot{w}_\alpha^*(t)={w_\alpha}d_t\langle S^-\rangle/{\hbar\sqrt{2}}$, the previous equation reduces to \begin{equation}\label{betadot} \dot{\beta}(t)=-\dot{\lambda}(t)\langle n^*\rangle\!-\!\!\!\sum_{\alpha=a,p}\!\!\frac{\sqrt{2} w_\alpha }{\hbar}\mbox{Re}\left\{\dot{\langle S^-\rangle}\langle c^\dagger_{k_\alpha} d^*_\alpha\rangle\right\}. \end{equation} Now, taking Eq. (\ref{betaoriginal}) and using the relations between the components of the spin in Eq. (\ref{relationsS}), we can express $\beta$ in the adiabatic regime as \begin{equation}\label{bad} \dot{\beta}^{ad}=\left(\frac{U}{2}\langle n^*\rangle-\lambda^{ad}\right)\dot{\langle S_z\rangle}^{ad}-\dot{\lambda}^{ad}\langle n^*\rangle, \end{equation} and, on the other hand, we also know from the time derivative of Eq. (\ref{relationsS}) that \begin{equation}\label{ders} {\dot{\langle S^-\rangle}}^{ad}=-\frac{\hbar\langle S_z\rangle^{ad}}{\langle S^-\rangle^{ad}}{\dot{\langle S_z\rangle}^{ad}}. \end{equation} Now, plugging Eqs. (\ref{bad}) and (\ref{ders}) into Eq. (\ref{betadot}), we obtain \begin{equation} \left(\frac{U}{2}\langle n^*\rangle^{ad}-\lambda^{ad}\right)=\frac{\hbar\langle S_z\rangle^{ad}}{\vert \langle S^-\rangle^{ad}\vert^2}\!\sum_{\alpha=a,p}\!2\mbox{Re}\left\{\! w_\alpha^*\langle c^\dagger_{k_\alpha} d_\alpha^*\rangle\right\}, \end{equation} which is the same as the slave-spin equation (\ref{consteqad}), showing that Eq. (\ref{eqH}) is satisfied. \subsubsection{Energy distribution} Now that we have shown the validity of Eq. (\ref{eqH}), so that the MF preserves the definition of the power, $P^{ac}=\langle\partial_t\tilde{H}_{full}\rangle^{ad}$, we can analyze the energy distribution in the system. From Eq. (\ref{hammf}) we know that \begin{equation}\label{power} P^{ac}=\frac{d}{dt}\langle\tilde{\mathcal{H}}_a\rangle +\frac{d}{dt}\langle\tilde{\mathcal{H}}_p\rangle+\dot{\beta}, \end{equation} where the first two terms tell us that a portion of the energy delivered by the external ac sources is temporarily stored in the active and passive subsystems, while there is also a third contribution from the temporal variation of the function $\beta$. This function, which is introduced exclusively by the MF, is constant for stationary systems and is therefore generally discarded. Nonetheless, it cannot be dismissed when studying time-resolved energy transfer, because its dynamical nature impacts the power.
Due to the periodicity of the MF parameters $\lambda$ and $\langle S_z\rangle$ in Eq. (\ref{betaoriginal}), $\beta$ turns out to be a time-periodic function as well and, as such, its time derivative vanishes when averaged over one driving period $\tau=2\pi/\omega$: $\overline{\dot{\beta}}\tau=\int_0^\tau \dot{\beta}dt=\beta(\tau)-\beta(0)=0$. Thus, we identify $\dot{\beta}$ as a conservative term in the power, inasmuch as it does not give a net contribution to the rate of change of the energy. On the other hand, the energy rates $d {\langle {\tilde{\mathcal{H}}}_{\alpha}\rangle}/dt$ with $\alpha=\{a,p\}$ are exactly the power $P_\alpha$ developed by the effective potentials in subsystem $\alpha$, \begin{equation}\label{powersub} P_\alpha=\frac{d}{dt}\langle\tilde{\mathcal{H}}_\alpha\rangle=\dot{\varepsilon}_\alpha^*(t)\langle n^*_\alpha\rangle +2\mbox{Re}\left\{\dot{w}_\alpha^*(t) \langle c^\dagger_{k_\alpha} d^*_\alpha\rangle\right\}. \end{equation} This power has a conservative contribution, $P^{cons}_\alpha$, and a dissipative one, $P_\alpha^{diss}$, so that $P_\alpha=P^{cons}_\alpha+P^{diss}_\alpha$. As explained in Ref. {\cite{singledot}} for a single-dot device, the conservative component $P^{cons}_\alpha$ corresponds to an amount of energy that is temporarily stored in the quantum dot, with zero net value $\overline{P^{cons}_\alpha}=0$, and it is not related to the energy transferred to the reservoirs, which is purely dissipative and therefore contained in $P_\alpha^{diss}$. In this work we will focus in particular on the dissipative components of the power in Eq. (\ref{power}), which are those giving a net transfer of energy from the active to the passive subsystem. For evaluating the corresponding dissipative component of the power in Eq. (\ref{powersub}), we again follow the procedure of Ref. \cite{motor}, to which we refer the reader for further details. Thus, \begin{equation}\label{powerdiss} P^{diss}_\alpha=\hbar\int\frac{d\varepsilon}{4\pi}\frac{df}{d\varepsilon}\left(\rho_\alpha^t\Gamma_\alpha^t\partial_t\left(\frac{\varepsilon-\varepsilon_\alpha^t}{\Gamma_\alpha^t}\right)\right)^2. \end{equation} The dissipation of energy in each subsystem takes the form of heat deep inside reservoir $\alpha$, so that $P^{diss}_\alpha=\dot{Q}_\alpha$. However, due to the arrangement of the quantum dots in the device, the passive subsystem can get energy only from the active subsystem, so that all the heat dissipated in the passive reservoir must correspond to energy coming from the active subsystem. Consequently, we may identify the heat $\dot{Q}_p$ in the passive reservoir with the flux of energy exchanged between the subsystems, $\dot{E}^{a\rightarrow p}\equiv\dot{Q}_p$, that goes from the active to the passive subsystem. In this way, the equation for the net energy distribution in the full system reads \begin{equation}\label{energydistribution} \overline{P^{ac}}=\overline{P^{ac}_{diss}}=\overline{P_a^{diss}}+\overline{P_p^{diss}}=\overline{\dot{Q}_a}+\overline{\dot{E}^{a\rightarrow p}}, \end{equation} and shows that the energy flux $\overline{{P}^{ac}}$ injected in the active subsystem is partly dissipated as heat $\overline{\dot{Q}_a}$ in reservoir $a$, while the rest, $\overline{\dot{E}^{a\rightarrow p}}$, is transferred to the passive subsystem. In our setup, this energy exchanged between the subsystems is then dissipated as heat, but it could eventually be transformed into useful work in a properly designed device.
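To make the above procedure concrete, the following minimal Python sketch (not the code actually used for the figures of this paper) solves the frozen equations (\ref{consteq}) and (\ref{eq12eq}) at $T=0$ and then evaluates the adiabatic observables of Eqs. (\ref{curr}) and (\ref{powerdiss}). It assumes units $\hbar=e=1$ with all energies in units of $\Gamma$, a sharp band cutoff $D$ regularizing the logarithmically divergent energy integral of Eq. (\ref{consteq}) (an extra assumption on top of the wide-band limit used above), and a finite-difference evaluation of the time derivatives:
\begin{verbatim}
# Minimal sketch: frozen slave-spin-1 equations at T = 0 and the
# adiabatic pumping current / dissipated power of Eqs. (curr), (powerdiss).
# Units: hbar = e = 1; energies in units of Gamma; D is an assumed band cutoff.
import numpy as np
from scipy.optimize import fsolve

Gamma, U, eps0, mu = 1.0, 4.0, 3.0, 0.0
Gam = {'a': Gamma / 2, 'p': Gamma / 2}        # balanced hybridizations
eps_p, D = -2.5, 50.0
omega = 1e-3
ts = np.linspace(0.0, 2 * np.pi / omega, 400)

def occupation(e, G):
    # T = 0 occupation of a Lorentzian level centered at e with width G
    return 0.5 + np.arctan(2 * (mu - e) / G) / np.pi

def kin(e, G):
    # T = 0 value of int deps/pi rho(eps)(eps - e) f(eps), cut off at -D
    c2 = (G / 2) ** 2
    return G / (2 * np.pi) * np.log(((mu - e) ** 2 + c2) / ((D + e) ** 2 + c2))

def frozen_eqs(y, eps_a):
    x, lam = y                                # x = <S_z>^t / hbar
    x = np.clip(x, -0.999, 0.999)
    Geff = {k: g * (1 - x ** 2) / 2 for k, g in Gam.items()}
    eps = {'a': eps_a + lam, 'p': eps_p + lam}
    eq1 = (lam - U / 2 * (1 + x)) * (1 - x ** 2) \
        + x * sum(kin(eps[k], Geff[k]) for k in Gam)              # Eq. (consteq)
    eq2 = x + 1 - sum(occupation(eps[k], Geff[k]) for k in Gam)   # Eq. (eq12eq)
    return [eq1, eq2]

# Continuation in time: the solution at t is the initial guess at t + dt.
sol, y = [], [0.0, U / 4]
for t in ts:
    y = fsolve(frozen_eqs, y, args=(eps0 * np.cos(omega * t) + mu,))
    sol.append(y)
x_t, lam_t = np.array(sol).T

# T = 0 observables: df/deps -> -delta(eps - mu) in Eqs. (curr), (powerdiss)
for k, eps_bare in (('a', eps0 * np.cos(omega * ts) + mu), ('p', eps_p)):
    Geff = Gam[k] * (1 - x_t ** 2) / 2
    eps_eff = eps_bare + lam_t
    rho_mu = Geff / ((mu - eps_eff) ** 2 + (Geff / 2) ** 2)
    Lam = rho_mu * Geff * np.gradient((mu - eps_eff) / Geff, ts)
    I_pump = -Lam / (2 * np.pi)               # Eq. (curr) at T = 0
    P_diss = Lam ** 2 / (4 * np.pi)           # Eq. (powerdiss); equals pi * I_pump**2
    print(k, 'max |I^pump|/omega =', np.abs(I_pump).max() / omega)
\end{verbatim}
Note that, at $T=0$, the same frozen quantity controls both $I^{pump}_\alpha$ and $P^{diss}_\alpha$ in the last lines of the sketch, which anticipates the Joule-type relation discussed in the next section.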
\section{Results}\label{results} As an illustrative example of the energy transfer mechanisms in Fig. \ref{fig2}, we consider the case $\varepsilon_a(t)=\varepsilon_0\cos(\omega t)+\mu$ within the adiabatic regime, so that $\hbar\omega\ll\Gamma_\alpha$, while $\varepsilon_p$ is kept constant. In particular, we focus on the situation in which the system is at temperature $T= 0$ and with balanced hybridizations with the reservoirs, $\Gamma_a=\Gamma_p=\Gamma/2$. In what follows, we study the dynamics of the charge and energy fluxes for different intensities of the Coulomb interaction $U$ and various values of the energy level of the passive dot $\varepsilon_p$. As shown in the previous section, the pumping charge currents $I_\alpha^{pump}$ in Eq. (\ref{curr}), as well as the dissipative components of the power $P^{diss}_\alpha$ in Eq. (\ref{powerdiss}), are evaluated only at the frozen parameters $\lambda^t$ and $S_z^t$. The latter can be found by solving the non-linear system of stationary equations, Eqs. (\ref{consteq}) and (\ref{eq12eq}), at every instant of time $t$. We emphasize that the frozen picture used here considers the system to be at equilibrium at every time $t$, as in a sequence of snapshots, so that the time variable is treated as a parameter. In Fig. \ref{fig6} we show the average values over a single driving period of the frozen effective level of the passive dot, $\overline{\varepsilon_p^t}=\varepsilon_p+\overline{\lambda^t}$, together with the renormalizing factor of the hybridizations, $\overline{\Gamma^t}/\Gamma=\overline{\Gamma^t_\alpha}/\Gamma_\alpha=(\hbar^2-\overline{{S_z^t}^2})/2\hbar^2$, which is the same for each of the reservoirs $\alpha=\{a,p\}$ as for the total $\Gamma^t=\Gamma^t_a+\Gamma^t_p$. Here, we vary $\varepsilon_p$ within a range of energies in which the passive quantum dot is always occupied: from $\varepsilon_p\ll\mu$, where the average occupation $\overline{\langle n_p^*\rangle}\rightarrow 1$, to the limit $\varepsilon_p\rightarrow \Gamma$, in which the passive dot is almost empty, $\overline{\langle n_p^*\rangle} \sim 0.05$, but still occupied. Beyond this range of energies, for $\varepsilon_p>\Gamma$, the undriven dot is empty, the Coulomb interaction becomes ineffective, and the effect we want to study disappears. \begin{figure} \includegraphics[width=0.5\textwidth]{fig6.pdf} \caption{Averaged values over a driving period $\tau$ of the effective energy level of the passive dot $\overline{\varepsilon_p^t}$ (top panel) and the total effective hybridization $\overline{\Gamma^t}$ (bottom panel) as functions of the energy level of the passive dot $\varepsilon_p$, for different values of the interaction $U$. The parameters are: $\varepsilon_0=3\Gamma$, $\mu=0$, $\hbar\omega=10^{-3}\Gamma$, and $T=0$. All the energies, including the values of $U$ inside the legend, are expressed in units of $\Gamma$. }\label{fig6} \end{figure} As is already known, the interaction $U$ has the effect of moving the resonance up from its non-interacting value $\mu$ (we set $\mu=0$), and this upward shift is represented in our model by the Lagrange multiplier $\lambda^t$, which is always a positive number \cite{slaveapin}. This effect can be seen in the top panel of Fig. \ref{fig6} from the fact that the effective level of the passive dot $\overline{\varepsilon_p^t}$ presents a change of sign (i.e.
it crosses the Fermi level $\mu=0$) at a critical energy $\varepsilon_p^c(U)$, such that $\overline{\varepsilon_p^t}(\varepsilon_p^c)=0$, which is smaller than the chemical potential, $\varepsilon_p^c(U)<\mu$. The critical energy depends on the strength of the Coulomb interaction $U$, so that the crossing occurs earlier for larger values of $U$ than for lower values of the interaction. As expected, we notice that the shift in the energy levels $\overline{\lambda^t}$ increases as the strength of the Coulomb interaction rises, so the curves for higher values of $U$ are above those corresponding to a lower $U$. Moreover, a slope change in the curves of $\overline{\varepsilon^t_p}$ can be perceived around the critical values $\varepsilon_p^c(U)$. When the effective level of the passive dot is deep below the Fermi energy, $\varepsilon_p\ll\varepsilon_p^c$, the slope is $\gamma=1+d\overline{\lambda^t}/d\varepsilon_p\sim 1$, which means that the Lagrange multiplier is approximately a constant function of $\varepsilon_p$. On the contrary, $0<\gamma<1$, and then $d\overline{\lambda^t}/d\varepsilon_p<0$, as the effective level moves further above the resonance, $\varepsilon_p\gg\varepsilon_p^c$, which tells us that the average energy shift $\overline{\lambda^t}$ decreases as the occupation of the passive dot diminishes. We now turn to the behaviour of the mean value of the total frozen hybridization $\overline{\Gamma^t}$, which provides information about the electronic configuration of the double-dot system. As explained in the previous sections, the component of the spin $S_z$ is in correspondence with the total occupation number $n_a^*+n_p^*$ through the constraint in Eq. (\ref{constraint}). In this way, we know that the two-level system is empty when $S^t_z=-\hbar$, doubly occupied for $S^t_z=\hbar$, and filled with a single electron when $S_z^t=0$. Therefore, the effective hybridization is $\Gamma^t=0$ when the system is either doubly occupied or empty, while it attains its maximum value, $\Gamma^t=\Gamma/2$, in the singly occupied state. Results are shown in the bottom panel of Fig. \ref{fig6}. We can see that when the level of the passive dot is deep below the Fermi sea, $\varepsilon_p\ll\mu$, the double-dot system approaches the singly occupied state, $\overline{\Gamma^t}\rightarrow \Gamma/2$, as the intensity of $U$ increases. This is simply because in this limit $\overline{\langle n_p^*\rangle}\rightarrow 1$ and the repulsive Coulomb interaction prevents double occupancy of the system, so that for high values of $U$ the active dot is almost empty on average, $\overline{\langle n_a^*\rangle}\sim 0$. When the intensity of $U$ decreases, by contrast, both dots can be simultaneously occupied for some time interval within the oscillation period, and this leads $\overline{\Gamma^t}$ to take smaller values. On the other hand, we observe a significant change in the behaviour of $\overline{\Gamma^t}$ as the effective passive level overcomes the Fermi energy, for $\varepsilon_p>\varepsilon_p^c(U)$. Now, smaller values of $U$ lead to higher values of the hybridization. Moreover, $\overline{\Gamma^t}$ starts to decrease until it reaches half its maximum value, $\overline{\Gamma^t}\sim 0.25\Gamma$ for $U=5\Gamma$ when $\varepsilon_p\rightarrow \Gamma$, which means that the system is singly occupied for half the oscillation period while it is empty for the rest of the time. The latter features can be understood as follows.
As the effective level of the passive dot moves further above $\mu$, its average occupancy decreases, and so does the average $\overline{\lambda^t}$. In the limit $\varepsilon_p\rightarrow \Gamma$ and for large values of $U$, the passive dot is almost empty, while the level of the active dot can oscillate between being occupied and empty, $\overline{\langle n_a^*\rangle}\sim 0.45$. As the strength of the interaction $U$ is reduced, the upward shift in the energies $\overline{\lambda^t}$ gets smaller, thus the effective levels are closer to $\mu$ and are therefore more occupied on average. This increases the total occupation of the double-dot system and therefore $\overline{\Gamma^t}$ rises, which explains the fact that the curves for higher values of $U$ are below those corresponding to a less intense interaction. \subsection{Charge current} \begin{figure} \includegraphics[width=0.5\textwidth]{fig3.pdf} \caption{Linear-response coefficients of the charge currents $I_a^{pump}/\hbar\omega$ and $I_p^{pump}/\hbar\omega$, which are $\omega$-independent, as a function of time. Top panel: non-interacting limit, $U=0$. Bottom panel: results for $U=4\Gamma$ and $\varepsilon_p=-2.5\Gamma$. Other parameters are the same as in Fig. \ref{fig6}. }\label{fig3} \end{figure} We now turn to analyze the linear-response charge current in Eq. (\ref{curr}) that is pumped into the reservoirs. As an example, Fig. \ref{fig3} shows $I^{pump}_a$ and $I^{pump}_p$ as functions of time for $U=4\Gamma$ and $\varepsilon_p<\mu$, as well as for $U=0$, for which the quantum dots are disconnected from each other. We see that, as explained before, currents in the passive dot are induced solely by the finite interaction $U$, thus $I^{pump}_p\vert_{U=0}=0$. Naturally, the peaks in the charge currents $I^{pump}_\alpha$ occur within a time interval in which $\vert\varepsilon^t_\alpha-\mu\vert\lesssim\Gamma_{\alpha}^t$, but interestingly we note that the charge fluctuations in the two subsystems are completely synchronized (i.e., opposite in sign): during the first half period, an electron leaves the passive dot and enters the right reservoir (positive peak) at the same time as an electron from the reservoir on the left enters the active dot (negative peak). The process is then reversed during the second half period. Moreover, we find that the induced currents in the undriven passive dot are smaller than, but of the same order as (first order in the variation of the driving $\dot{\varepsilon}_a$), those generated in the active dot. This is different from what was observed in Coulomb drag devices, for which drag currents are in general second order in the applied perturbation, i.e., bias voltage or thermal gradient \cite{qdots2,qdots4,diode,heatqd2,tdrag}. Remarkably, currents are induced in the passive dot even at zero temperature, which evidences that, unlike drag currents that scale as $\sim T^2$, the induction of transport in the passive part cannot be interpreted as a rectification of thermal fluctuations \cite{cdrag}. On the other hand, we also observe a reduction in the amplitude of $I^{pump}_a$ at finite $U$ with respect to the non-interacting case, which is due to the mutual friction between inter-dot electrons (i.e., Coulomb-mediated scattering processes) that opposes the filling of the active dot when the passive dot is occupied. In what follows we analyze the behaviour of the maximum values of the pumped charge currents, $I^{max}_\alpha\equiv\mbox{max}\{\vert I^{pump}_\alpha(t)\vert\}$. We can see from Figs.
\ref{fig4} and \ref{fig5} that, as mentioned above, $I_p^{max}<I_a^{max}$ for any energy $\varepsilon_p$. Both maximum values, $I^{max}_a$ and $I^{max}_p$, exhibit a broad peak centered around the critical energy $\varepsilon_p^c(U)$, whose width extends over an energy range where the passive dot is able to exchange electrons with the right reservoir, since $\vert{\varepsilon_p^t}-\mu\vert\lesssim {\Gamma_p^t}$ for some time intervals within the oscillation period. On the contrary, when the undriven dot is deep below the Fermi energy, $\varepsilon_p\ll\mu$, and in the limit $\varepsilon_p\rightarrow \Gamma$, the current in the passive dot is suppressed, since $\vert{\varepsilon_p^t}-\mu\vert> {\Gamma_p^t}$ at all times, while $I_a^{max}$ is still finite but much smaller than the maximum value at $U=0$ (dashed line in Fig. \ref{fig4}). As explained before, the reduction of the current $I_a^{max}$ in the active reservoir is due to the mutual friction between inter-dot electrons. Thus, in order to increase the current entering the active reservoir, the passive dot should be able to empty. This explains why both currents exhibit the broad peak within the same range of energies. \begin{figure} \includegraphics[width=0.5\textwidth]{fig4.pdf} \caption{Maximum value of the linear-response coefficient of the charge current entering the active reservoir, $I^{max}_a/\hbar\omega$, as a function of $\varepsilon_p$ and for different values of the Coulomb interaction $U$. The black dashed line corresponds to the maximum value at $U=0$, when the quantum dots are decoupled from each other. Other parameters are the same as in Fig. \ref{fig6}. }\label{fig4} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{fig5.pdf} \caption{Maximum value of the linear-response coefficient of the charge current entering the passive reservoir, $I^{max}_p/\hbar\omega$, as a function of $\varepsilon_p$ and for different values of the Coulomb interaction $U$. Other parameters are the same as in Fig. \ref{fig6}. }\label{fig5} \end{figure} Going into more detail, we identify three different regimes (or regions) in the behaviour of the maximum currents: \subsubsection{Region I: for ${\varepsilon_p}\leq-\varepsilon_0$} This is the light gray region in Figs. \ref{fig4} and \ref{fig5}, which is characterized by off-peak currents. Here, $I_p^{max}\rightarrow 0$ is practically suppressed and $I_a^{max}$ is quite reduced with respect to its non-interacting value. Lower values of $U$ favor the generation of a charge current entering the active reservoir and make $I_p^{max}$ vanish, which is reasonable since charge variations in the passive dot are induced solely by the Coulomb coupling. The upper limit of the region was defined as the energy where there is a change in the behaviour of $I_a^{max}$ such that the curves for a more intense interaction start to lie above those corresponding to lower values of $U$, precisely at the intersection $I^{max}_a(U=2\Gamma)=I^{max}_a(U=5\Gamma)$. Within this range of energies, the level of the passive dot is deep below the Fermi energy and therefore the dot is filled, $\overline{\langle n_p^*\rangle}\rightarrow 1$. Then, to allow the electron in the passive dot to tunnel into the reservoir on the right, the dot has to receive some energy from the active dot, which is just a portion of the power developed by the time-dependent source $\varepsilon_a(t)$ (see Eq. (\ref{energydistribution})).
However, the fact that $\vert\varepsilon_p\vert>\varepsilon_0$ renders the external source $\varepsilon_a(t)$ unable to develop enough power to empty the passive dot, which is why the current $I_p^{max}$ vanishes and $I_a^{max}$ flattens. As $\varepsilon_p\rightarrow -\varepsilon_0$ and the Coulomb coupling becomes more intense with increasing $U$, an exchange of energy between the two dots becomes possible, and $I_p^{max}$ starts to increase, and so does $I_a^{max}$. In this region, the ordering of the curves as $U$ is varied can be explained in terms of the mean energy shift $\overline{\lambda^t}$ analyzed previously. As $U$ increases, the Lagrange multiplier $\overline{\lambda^t}$ increases. Therefore, the effective energy of the passive dot gets closer to $\mu$ (see Fig. \ref{fig6}), which makes $I_p^{max}$ increase. On the contrary, the effective active level is shifted farther away from $\mu$ as $U$ gets larger (from Fig. \ref{fig6}, notice that $\overline{\varepsilon_a^t}=\overline{\varepsilon_p^t}-\varepsilon_p=\overline{\lambda^t}>0$), so that the level is out of resonance, which consequently reduces the current entering the active reservoir. \subsubsection{Region II: for $-\varepsilon_0<{\varepsilon_p}\leq \varepsilon_p^c$} This is the intermediate region in Figs. \ref{fig4} and \ref{fig5}. Its upper limit is approximate and was defined as the average critical energy $\langle\varepsilon_p^c\rangle$ over all the values of $U$ we take, which coincides with the intersection $I_p^{max}(U=2\Gamma)=I_p^{max}(U=5\Gamma)$. Within this region, the effective passive level lies on average below the Fermi level of the reservoirs, $\overline{\varepsilon_p^t}\leq 0$, but it is in resonance for some time intervals during the oscillation period, $\vert \varepsilon_p^t-\mu\vert\lesssim\Gamma_p^t$, so that electrons can be exchanged between the passive dot and its reservoir. The time-dependent Lagrange multiplier $\lambda^t$ makes the passive energy level oscillate around a mean value that gets closer to $\mu$ as the interaction $U$ increases. This arrangement enhances the current entering the passive reservoir, and therefore in Fig. \ref{fig5} we can see a growth of $I_p^{max}$ as $U$ rises. Regarding the current in the active reservoir, Fig. \ref{fig4} shows that $I_a^{max}$ also increases with $U$. Here, the Lagrange multiplier moves the active level up as well, but in this case its mean value moves away from the Fermi level as the repulsion $U$ increases. In this sense, we would expect weaker interactions to enhance the current $I_a^{max}$, since the mean effective active level $\overline{\varepsilon_a^t}=\overline{\lambda^t}$ would be closer to the resonance $\mu$. Nevertheless, there is another important factor that prevails and determines the behaviour of $I_a^{max}$ within this region: for an electron to be pumped into the active dot, the passive dot should empty, since the Coulomb repulsion hinders (more or less strongly, depending on the value of $U$) the double occupancy of the double-dot system. To achieve this, the connection between the two dots should be reinforced by increasing the Coulomb coupling $U$, so that they can exchange more energy, which induces the pumping of charge in the passive dot.
\subsubsection{Region III: for ${\varepsilon_p}> \varepsilon_p^c$} The last region is distinguished by the fact that both effective levels oscillate in time around mean values that are above the Fermi level $\mu=0$, so $\overline{\varepsilon_a^t}\geq 0$ and $\overline{\varepsilon_p^t}\geq 0$ (see Fig. \ref{fig6}). Therefore, for the level of the active dot as well as for the passive one, larger values of $U$ imply lying further above the Fermi level of the reservoirs, so that, as the levels move away from the resonance, both currents $I_a^{max}$ and $I_p^{max}$ decrease. Hence, in Figs. \ref{fig4} and \ref{fig5}, the curves for lower values of $U$ are above those corresponding to larger interactions. As the mean effective passive level goes further above the chemical potential, the current $I_p^{max}$ decreases and so does $I_a^{max}$, since charge transport in the active subsystem strongly depends on the possibility of pumping electrons in the passive subsystem. In the limit $\varepsilon_p\rightarrow\Gamma$ the effective passive level is off resonance at all times, since $\vert\varepsilon_p^t-\mu\vert>\Gamma_p^t$, and consequently the current $I_p^{max}\rightarrow 0$ almost vanishes. As mentioned before, the mean occupancy of the passive dot in this limit is not completely zero, so that there is still a mutual friction between inter-dot electrons that reduces the maximum current entering the active reservoir with respect to the $U=0$ case. Naturally, the Coulomb coupling between the dots has less impact on the active current as the passive level rises above $\mu$ and empties. That is why the active current $I_a^{max}$ seems to stabilize in this limit at higher values than when $\varepsilon_p\ll\mu$. On the other hand, we can see that when $U<\varepsilon_0$ the peak in the active current $I_a^{max}$ exceeds the non-interacting value. This aspect has to do with the power supply from the external driving source, and it will be discussed in the following section. \subsection{Energy fluxes} In this section we study the energy that is dissipated deep inside the reservoirs of the subsystems, $\dot{Q}_a$ and $\dot{E}^{a\rightarrow p}$, whose expressions were presented in Eq. (\ref{powerdiss}). When evaluating Eqs. (\ref{curr}) and (\ref{powerdiss}) at zero temperature, an instantaneous Joule law relation emerges, \begin{equation}\label{joule} P_\alpha^{diss}(t)=R_q (I_\alpha^{pump}(t))^2 \;\;\;\text{at $T=0$,} \end{equation} with a universal resistance $R_q=h/2{e^2}$, which is the charge relaxation resistance found in Refs. \cite{singledot,res1} and also observed in Ref. \cite{res2}. This relation follows from the fact that, at $T=0$, $df/d\varepsilon\rightarrow -\delta(\varepsilon-\mu)$, so that both integrals in Eqs. (\ref{curr}) and (\ref{powerdiss}) are controlled by the same frozen quantity evaluated at the Fermi level, and their ratio yields the universal value $R_q$. In particular, for the passive subsystem, $\alpha=p$, the above equation reads $\dot{E}^{a\rightarrow p}=R_q(I_p^{pump})^2$ and shows how the transfer of energy between the subsystems (from the active subsystem to the passive one) is accompanied by the induction of charge pumping in the undriven dot. Similarly, when $\alpha=a$, Eq. (\ref{joule}) relates the heat $\dot{Q}_a$ that is dissipated in the active reservoir to the pumping in the driven dot. However, unlike in the passive dot, in the active subsystem not all the energy that the active dot receives from the external ac source is then dissipated as heat in the active reservoir. Here, as shown in Eq. (\ref{energydistribution}), dissipation in the active reservoir corresponds to just a portion of the power $P^{ac}_{diss}$ delivered by the external source, while the rest, $\dot{E}^{a\rightarrow p}$, is transferred to the passive subsystem in order to overcome the Coulomb friction.
This does not happen when the two quantum dots are uncoupled, i.e., for $U=0$, since in that case the dissipated heat $\dot{Q}_a\vert_{U=0}=R_q (I_a^{pump})^2\vert_{U=0}=P^{ac}_{diss}\vert_{U=0}$ equals the ac power, because there is no flow of energy to the passive dot. Thus, to generate a certain charge current in the active subsystem at finite $U$, the source has to inject a larger amount of energy in order to overcome the friction. This fact should be reflected in the relation between $P_{diss}^{ac}$ and $I_a^{pump}$ through an effective resistance for the active dot, which should be larger than the non-interacting value $R_q$. To analyze this, we insert Eq. (\ref{joule}) into the total power, so that \begin{equation}\label{joule2} P_{diss}^{ac}(t)=P_a^{diss}(t)+P_p^{diss}(t)=R(t)[I_a^{pump}(t)]^2, \end{equation} where we define \begin{equation} R(t)\equiv R_q\left(1+\left[\frac{I_p^{pump}(t)}{I_a^{pump}(t)}\right]^2\right) \end{equation} as the effective resistance, which is a manifestly positive quantity at all times. Therefore, we can view Eq. (\ref{joule2}) as a Joule law with an instantaneous effective resistance $R(t)$ for the total energy dissipation due to pumping in the active dot. We can also notice from Eqs. (\ref{joule}) and (\ref{joule2}) that the maximum current exceeding the non-interacting value in Fig. \ref{fig4} for $\varepsilon_0>U$ occurs simply because the source injects more energy than in the $U=0$ case. As an example, we show in Fig. \ref{fig8} the behaviour of the effective resistance $R$ for different values of the interaction $U$ when the passive level is filled, $\varepsilon_p=-2.5\Gamma$. We can see that the effective resistance fulfills the relation $R(t)\geq R_q$ at all times, and that it is not universal, since it depends on the interaction $U$ and on the spectral properties of the dots. As expected, the effective resistance in the active dot due to the presence of an electron in the passive dot gets larger as $U$ increases. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{fig8.pdf} \caption{Effective resistance $R(t)$ in units of $R_q=h/2{e^2}$ as a function of time. Parameters are: $\varepsilon_p=-2.5\Gamma$, $\mu=0$, $T=0$, $\varepsilon_0=3\Gamma$. Inset: mean value of the resistance $\overline{R}$ over one driving period $\tau$ as a function of the interaction $U$. All the energies are in units of $\Gamma$. }\label{fig8} \end{figure} Finally, we study the efficiency of the energy transfer between the two dots over a cycle of the external driving source. This quantity measures the amount of heat leaking from the active to the passive system. Knowledge of its behavior can be crucial to guide the design of experimental setups. A first natural use is to monitor this quantity in order to prevent an excessive heating of the passive part or, on the other hand, to use the leaking heat as an energy input (work), for example at positions where an external source cannot be easily connected. To this end, we define the efficiency as \begin{equation} \eta_{a\rightarrow p}=\frac{\overline{\dot{E}^{a\rightarrow p}}}{\overline{P^{ac}}}\times 100, \end{equation} which is the percentage of the averaged injected power $\overline{P^{ac}}$ that is transmitted to the passive quantum dot. Fig. \ref{fig7} shows the results as a function of the level of the passive dot $\varepsilon_p$, again for $\mu=0$ and $T=0$.
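As a complement, the following minimal sketch (not part of the paper's original material) assembles the instantaneous effective resistance of Eq. (\ref{joule2}) and the efficiency $\eta_{a\rightarrow p}$ from sampled time series of $I_a^{pump}$ and $I_p^{pump}$, such as those produced by the sketch at the end of Sec. \ref{charge and energy}; the units are again $\hbar=e=1$, so that $R_q=\pi$:
\begin{verbatim}
import numpy as np

def resistance_and_efficiency(I_a, I_p, ts):
    """Effective resistance R(t) of Eq. (joule2) and efficiency
    eta_{a->p}, from sampled pumping currents (units hbar = e = 1)."""
    R_q = np.pi                   # R_q = h/(2 e^2) with hbar = e = 1
    P_a = R_q * I_a ** 2          # Joule law, Eq. (joule): heat in reservoir a
    P_p = R_q * I_p ** 2          # idem for alpha = p: equals Edot^{a->p}
    # Eq. (energydistribution): averaged injected power = total dissipation
    eta = 100.0 * np.trapz(P_p, ts) / np.trapz(P_a + P_p, ts)
    safe = np.where(np.abs(I_a) > 1e-14, I_a, np.nan)  # R(t) diverges as I_a -> 0
    R_t = R_q * (1.0 + (I_p / safe) ** 2)
    return R_t, eta
\end{verbatim}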
Naturally, the broad peak in $\eta_{a\rightarrow p}$ occurs within the same range of energies as the peak of the charge pumping in the passive reservoir. We can see that the efficiency also follows the behaviour of $I_p^{pump}$ as $U$ is varied, in the sense that larger interactions improve the energy transmission when the effective passive level is filled on average, $\varepsilon_p<\varepsilon_p^c$, while they worsen it when the level is above the Fermi sea, $\varepsilon_p>\varepsilon_p^c$. Surprisingly, we find that a maximum of around $\sim 43\%$ of the energy delivered by the ac source (the maximum of $\eta_{a\rightarrow p}$ when $U=5\Gamma$) is transmitted to the passive dot, which is quite high. As mentioned before, this transmitted energy is then delivered to the passive reservoir and dissipated there as heat. However, this energy could eventually be transformed into useful work in a properly designed setup. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{fig7.pdf} \caption{Efficiency of the energy transfer from the active quantum dot to the passive one, $\eta_{a\rightarrow p}=100\times{\overline{\dot{E}^{a\rightarrow p}}}/\overline{P^{ac}} $, as a function of $\varepsilon_p$. Parameters are the same as in Fig. \ref{fig6}. }\label{fig7} \end{figure} \section{Conclusions}\label{conclusions} In this work we have studied the transport of charge and energy in two Coulomb-coupled quantum dots. We considered the setup in Fig. \ref{fig1}, in which only one of the two dots (the active dot) is adiabatically driven by a time-periodic gate, while the other quantum dot remains undriven (the passive dot). We have shown that, although the Coulomb coupling does not allow for electron transfer between the two quantum dots, it enables a transfer of energy between them (sketched in Fig. \ref{fig2}) that eventually induces a pumping of charge also in the undriven dot, even at zero temperature. In order to treat the effects of the Coulomb interaction at low temperatures, we used the time-dependent slave-spin 1 formulation within mean field presented in Ref. \cite{slaveapin}, which turns out to be advantageous for describing the induction of transport in the passive subsystem due to the Coulomb repulsion, since the latter is simply explained in terms of effective time-dependent driving fields acting on the passive quantum dot. We have found that the pumping currents induced in the passive dot by the mutual friction are of the same order as, even if always smaller than, those generated in the driven dot. Moreover, we identified three different regimes in the behavior of the charge fluxes as a function of the energy level of the passive dot. As far as energy transport is concerned, we have found that the dissipation in both reservoirs due to pumping at zero temperature is given by a Joule law controlled by the universal charge relaxation resistance $R_q$. In addition, we also derived a Joule law for the total energy dissipation due to pumping in the active dot, which is instead controlled by an instantaneous non-universal resistance $R(t)$. We have shown that $R(t)\geq R_q$ at all times and that it increases with the interaction $U$. Finally, we analyzed the efficiency of the energy transfer from the active dot to the passive one, and our results showed that for suitable parameters we can reach a regime where up to $\sim 43\%$ of the energy delivered by the ac source to the active dot is transmitted to the passive dot.
Our results represent a significant advance toward a full understanding of drag effects of charge and energy in Coulomb-coupled quantum dot systems, which is expected to have implications for nanoelectronics. The development of low-dimensional electronic devices leads to set-ups where the components are located very close to each other. This inevitably enhances the Coulomb interaction, which can have undesired consequences. For example, currents can be induced in regions of the device that do not contribute to its functionality, and heat can accumulate or be dissipated in inappropriate locations, thus degrading or even destroying the material. Our results provide a novel piece of information about the conditions favoring such undesired phenomena, which can help to avoid or reduce their impact. While the current calculations refer to a very simplified set-up, they open the path to calculations for more realistic descriptions of the devices, owing to the simplicity and reliability of our theoretical approach. \section{ACKNOWLEDGEMENTS} We acknowledge support from the Italian Ministry of University and Research (MIUR) through the PRIN2017 project CEnTral (Protocol Number 20172H2SC4). We also thank Fabio Caleffi for useful discussions.
\section{Introduction} Observations such as Type Ia supernovae \cite{1RAGFAVCA98,1PSAGGGKRA99,1BPAPARBJJBJR00,1PSAGARAPBG03}, the large scale structure \cite{1CMDGMSSWNPCS01,1TMSMABMRAKDSSHW04,1CSPWJPRANP05,1SVFCSWSMD06}, and the cosmic microwave background (CMB) \cite{1HSAPBABJBA00,1NCBAPARBJJBJR02,1SDNVLPHVKE03} indicate that the universe is undergoing an accelerated expansion. The source of this phenomenon is suspected to be an exotic fluid that violates the strong energy condition and possesses a large negative pressure, dubbed dark energy. According to Planck's observational data \cite{1AUSVSAA04}, about 68.3 percent of the total cosmic budget is occupied by dark energy, while about 26.8 percent is dark matter and about 4.9 percent is usual baryonic matter. There has been a prolonged attempt to uncover the physical nature of dark energy. An overwhelming flood of dynamical dark energy models such as quintessence, tachyon \cite{1SA02}, ghost \cite{1MMNTPF15}, k-essence \cite{1CTOTYM00}, fermionic field \cite{1MR10,1MSDU13,1MSDU14}, phantom \cite{1NSOSD03}, Chaplygin gas \cite{1PVMUKAYMUPV01}, holographic dark energy (HDE) \cite{1WS98,1CEJSMTS06,1PT03}, and new agegraphic dark energy (NADE) \cite{1WHCRG081,1MSDU16}, as well as modified gravity models such as f(R) gravity \cite{1CSCSTA,1CSMDVTMTMS04}, f(T) gravity \cite{1FRFF07,1BGRFR09,1LEV10}, and Ho$\check{r}$ava-Lifshitz gravity \cite{Horava1, Horava2, Horava3,MSRP181}, have been proposed in the literature.\\ Like the Chaplygin gas family, another single-component fluid, known as the Van der Waals fluid \cite{Cap}, has attracted much attention for the unification of different cosmic fluids. The main feature of this model is that it reproduces the accelerated and matter-dominated phases with a single component, since the perfect fluid equation of state is not realistic in describing all the phases of the evolution of the universe. The properties of the Van der Waals fluid have been analyzed in \cite{1JRCSCMHB16}. It is indeed a holographic description of dark energy, so information about the Van der Waals fluid yields knowledge about the accelerating expansion of the universe. In \cite{1KGM03,1KGM04}, a mixture of two fluids, taking the Van der Waals fluid as dark energy and the perfect gas equation of state for the matter, was considered to capture the whole dynamics of the universe. In Ref. \cite{1KMPBCE13}, a toy model of the universe with generalized ghost dark energy, a Van der Waals fluid and some modified fluids is considered, and the unusual connection among the different fluids is studied. Although many works \cite{1CVFTCTACS06,1SB06,1KMPBKEO14} have discussed various aspects of the Van der Waals fluid in the standard cosmological model in light of observations, a full thermodynamic analysis of the Van der Waals fluid has not yet been carried out. Thermodynamics of the Chaplygin gas model has been studied by Myung \cite{Myung}. Santos et al. \cite{Santos1,Santos2} have studied the thermodynamic stability of the generalized and modified Chaplygin gas models. Thermodynamics of the modified Chaplygin gas and the tachyonic field has been analyzed in Ref. \cite{Bhatta}. Thermodynamic stability of the generalized cosmic Chaplygin gas has been studied by Sharif et al. \cite{Shar}. Motivated by these works, here we examine the thermodynamic stability of the Van der Waals fluid in the background of a flat FRW universe.\\ The paper is organized as follows.
In section 2, we study the behavior of the pressure, the EoS parameter and the deceleration parameter, and analyze the classical stability using the sign of the squared speed of sound. In section 3, we study the thermodynamic stability of the Van der Waals fluid. The last section is devoted to a summary of the results. \section{Physical Features of Van der Waals Fluid}\label{sec2} Here we assume the flat Friedmann-Robertson-Walker (FRW) model of the universe, represented by the following line element: \begin{eqnarray} ds^{2}=-dt^{2}+a^{2}(t)(dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}), \end{eqnarray} where $a(t)$ is the scale factor. Now we assume the equation of state for the Van der Waals fluid in the form \cite{Cap} \begin{eqnarray} P=\frac{\gamma\rho}{1-\beta\rho}-\alpha\rho^{2},\quad 0\leq\gamma<1,\quad \alpha=3p_{c}\rho_{c}^{-2},\quad \beta=(3\rho_{c})^{-1}, \end{eqnarray} where $\rho$ and $P$ represent the energy density and pressure of the Van der Waals fluid. Here $\rho_{c}$ and $p_{c}$ represent the critical density and the critical pressure of the Van der Waals fluid at the critical point. The above equation reduces to the perfect fluid case in the limit $\alpha,\beta\rightarrow 0$. The energy density of the fluid can be written in the form \begin{eqnarray} \rho=\frac{U}{V}, \end{eqnarray} where $U$ is the internal energy and $V$ is the volume. From classical thermodynamics, the relation between $U$, $V$ and $P$ can be written in the form \cite{Shar,Land} \begin{eqnarray} \frac{dU}{dV}=-P. \end{eqnarray} From equations (2) - (4), we get the following first-order ordinary differential equation \begin{eqnarray} \frac{dU}{dV}+\frac{\gamma\frac{U}{V}}{1-\beta\frac{U}{V}}=\alpha\left(\frac{U}{V}\right)^{2}. \end{eqnarray} Using the binomial expansion up to first order, we obtain the approximate solution \begin{eqnarray} U\approx V\left[\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}} {2\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}\right], \end{eqnarray} where $k$ is a non-zero integration constant which is either a universal constant or a function of the entropy $S$. The above solution yields the energy density \begin{eqnarray} \rho=\left[\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}} {2\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}\right]. \end{eqnarray} For small volumes ($V\approx 0$), the energy density of the Van der Waals fluid behaves like \begin{eqnarray} \rho=\left[\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\alpha\beta}} {2\alpha\beta}\right], \end{eqnarray} subject to the condition $(\alpha+\beta)^{2} \ge 4 (1+\gamma)\alpha\beta$, and hence the minimum value of $\rho$, attained when this bound is saturated, is $\rho_{min}=\sqrt{\frac{1+\gamma}{\alpha\beta}}$. Now we will discuss the different physical parameters of the model.
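Since the closed-form expressions above and in the next subsections are somewhat unwieldy, a short numerical sketch can be useful for exploring them. The following Python fragment, written with the illustrative parameter values used in the figures ($\gamma=0.7$, $\beta=1$, $\alpha=20$ and $k=5$; the evaluation volumes are arbitrary), evaluates the energy density of equation (7), composes it with the equation of state (2), and checks the limiting behaviours $\omega\rightarrow -1$ for $V\ll k$ and $\omega\rightarrow\gamma$ for $V\gg k$ quoted in the next subsections.
\begin{verbatim}
import numpy as np

# Illustrative parameters, matching the values used in the figures.
gamma, beta, alpha, k = 0.7, 1.0, 20.0, 5.0

def rho_of_V(V):
    """Energy density of Eq. (7); note the pole where
    (V/k)^(2(1+gamma)) = alpha*beta (V ~ 12.07 for these values)."""
    X = alpha * beta - (V / k) ** (2.0 * (1.0 + gamma))
    Y = (alpha + beta) ** 2 - 4.0 * (1.0 + gamma) * X
    return ((alpha + beta) + np.sqrt(Y)) / (2.0 * X)

def eos(rho):
    """Van der Waals equation of state, Eq. (2)."""
    return gamma * rho / (1.0 - beta * rho) - alpha * rho ** 2

for V in (0.1, 2000.0):          # V << k and V >> k
    rho = rho_of_V(V)
    w = eos(rho) / rho           # EoS parameter
    q = 0.5 + 1.5 * w            # deceleration parameter
    print(f"V = {V:7.1f}: w = {w:+.3f}, q = {q:+.3f}")
# Expected: w ~ -1 (q ~ -1) for V << k and w ~ gamma
# (q ~ (1+3*gamma)/2) for V >> k.
\end{verbatim}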
\subsection{Pressure} \begin{figure} ~~~~~~~~~~~~~~~~~~~~~~~~~~\includegraphics[height=1.8in]{1.eps}~~~~~~~~~~~~~~~~~\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.1~~~~~~~~~~~~~~~~~~~~~~~~\\ \textit{\textbf{Figure 1:} Plots of P versus V for $\gamma=0.7, \beta=1, \alpha=20$ and $k=5$ (blue curve), $k=10$ (red curve) and $k=20$ (green curve).}\vspace{2mm}\\ \end{figure} From equations (2) and (7), we obtain the expression for the pressure in terms of $V$ in the following form: \begin{eqnarray} P&=&\frac{\gamma[(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}]} {\alpha\beta-[\beta^{2}+2(\frac{V}{k})^{2(1+\gamma)} +\beta\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}]}\nonumber\\ &&-\alpha\left[\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2} -4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}} {2[\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}]}\right]^{2} \end{eqnarray} The trajectory of the pressure given by the above equation against the volume is drawn in figure 1 for different values of $k=5,10,20$. The figure shows both positive and negative behavior of the pressure. It is observed that the universe, which accelerates at small volume, tends to a decelerated phase at large volume. \subsection{EoS Parameter} From equations (2) and (7), we obtain the equation of state parameter in terms of $V$ in the following form: \begin{eqnarray} \omega=\frac{P}{\rho}&=&\frac{2\gamma[\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}]} {\alpha\beta-[\beta^{2}+2(\frac{V}{k})^{2(1+\gamma)}+\beta\sqrt{(\alpha+\beta)^{2} -4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}]}\nonumber\\ &&-\alpha\left[\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}} {2[\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}]}\right]\nonumber\\ &=& \begin{cases} \gamma, & V \gg k\\ -1, & \ V\ll k . \end{cases} \end{eqnarray} \begin{figure} ~~~~~~~~~~~~~~~~~~~~~~~~~~\includegraphics[height=1.8in]{2.eps}~~~~~~~~~~~~~~~~~\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.2~~~~~~~~~~~~~~~~~~~~~~~~\\ \textit{\textbf{Figure 2:} Plots of $\omega$ versus V for $\gamma=0.7, \beta=1, \alpha=20$ and $k=5$ (blue curve), $k=10$ (red curve) and $k=20$ (green curve).}\vspace{2mm}\\ \end{figure} The EoS parameter $\omega$ is drawn in figure 2 for different values of $k=5,10,20$. The EoS parameter transits from $-1$ to positive values as the volume increases. That means the fluid behaves as a cosmological constant at small volume, then passes through the quintessence region and finally enters the positive region (tending to $\gamma$). \subsection{Deceleration Parameter} The deceleration parameter is given by \begin{eqnarray} q=\frac{1}{2}+\frac{3P}{2\rho}&=&\frac{1}{2}+\frac{3\gamma[\alpha\beta -(\frac{V}{k})^{2(1+\gamma)}]}{\alpha\beta-[\beta^{2}+2(\frac{V}{k}) ^{2(1+\gamma)}+\beta\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}]}\nonumber\\ &&-\alpha\left[\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}} {2[\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}]}\right]\nonumber\\ &=& \begin{cases} \frac{1}{2}+\frac{3}{2}\gamma, & V \gg k\\ -1, & \ V\ll k .
\end{cases} \end{eqnarray} \begin{figure} ~~~~~~~~~~~~~~~~~~~~~~~~~~\includegraphics[height=1.8in]{3.eps}~~~~~~~~~~~~~~~~~\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.3~~~~~~~~~~~~~~~~~~~~~~~~\\ \textit{\textbf{Figure 3:}Plots of $q$ versus V for $\gamma=0.7, \beta=1, \alpha=20$ $k=5$ (blue curve), $k=10$ (red curve) and $k=20$ (green curve).}\vspace{2mm} \end{figure} Figure 3 presents the trajectories of the deceleration parameter $q$ against $V$ for different values of $k$. From the graph we can see that $q$ increases from $-1$ to the positive value (tends to $\frac{1+3\gamma}{2})$. It describes acceleration at small volumes whereas at large volume, it exhibits decelerating behavior for different values of $k$. So, a transition from accelerating to decelerating universe is observed. It shows $q\rightarrow -1$ with decreasing $V$ i.e it yields cosmological constant model as $V\rightarrow 1$. \subsection{Square Speed of Sound} To discuss the classical stability of the model, we need to obtain the square speed of sound which is given by \begin{eqnarray} V_{s}^{2}=\left(\frac{\partial P}{\partial \rho} \right)_{S} &=&\frac{4\gamma\left[\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\right]^{2}} {\left[\alpha\beta-\{\beta^{2}+2(\frac{V}{k})^{2(1+\gamma)} +\beta\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}\}\right]^{2}}\nonumber\\ &&-2\alpha\frac{(\alpha+\beta)+\sqrt{(\alpha+\beta)^{2}-4 (1+\gamma)\{\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}\}}} {2[\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}]}\nonumber\\ &=& \begin{cases} \gamma, & V \gg k\\\\ 4\left[\frac{\{\alpha(1+\gamma)-\beta\}\sqrt{(\alpha-\beta)^{2}-4\alpha\beta\gamma}+ 3\alpha\beta\gamma-(\alpha-\beta)^{2}}{\{\sqrt{(\alpha-\beta)^{2}-4\alpha\beta\gamma} -\alpha+\beta\}^{2}}\right], & \ V\ll k . \end{cases} \end{eqnarray} In figure 4 the squared speed of sound is plotted against $V$ for different values of $k$. We observe that for $0\le V\lesssim 9$, graph shows $V_{s}^{2}>0$ while for $V\gtrsim 9$, graph shows $V_{s}^{2}<0$. So the model is classically stable for small volume and for large volume it shows unstable behavior. \begin{figure} ~~~~~~~~~~~~~~~~~~~~~~~~~~\includegraphics[height=1.8in]{4.eps}~~~~~~~~~~~~~~~~~\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.4~~~~~~~~~~~~~~~~~~~~~~~~\\ \textit{\textbf{Figure 4:}Plots of $V_{s}^{2}$ versus V for $\gamma=0.7, \beta=1, \alpha=20 $ $k=5$ (blue curve), $k=10$ (red curve) and $k=20$ (green curve).}\vspace{2mm}\\ \end{figure} \section{Thermodynamic Stability of the Van der Waals Fluid} Now we will discuss the behaviour of temperature and the thermodynamic stability of the Van der Waals Fluid. To analyze the thermodynamic stability conditions and the evolution for the Van der Waals fluid, it is necessary to determine (i) if the pressure reduces through an adiabatic expansion i.e., $\left(\frac{\partial P}{\partial V}\right)_{S}<0$, (ii) if the pressure reduces to an expansion at constant temperature $T$ i.e., $\left(\frac{\partial P}{\partial V}\right)_{T}<0$ and (iii) if the heat capacity at constant volume i.e., $C_{V}>0$.\\ Differentiating equation (9) w.r.t. 
$V$, we obtain \begin{eqnarray} \left(\frac{\partial P}{\partial V}\right)_{S}&=& \frac{(1+\gamma)(\frac{V}{k})^{2(1+\gamma)}}{V X^{2}\sqrt{Y}} \left[(\alpha+\beta)\sqrt{Y}+2(1+\gamma)(\frac{V}{k})^{2(1+\gamma)} -2\gamma\alpha\beta+\beta^{2}+\alpha^{2}\right]\nonumber\\ &&\times \left[4\gamma X^{2}\left(\alpha\beta-(\beta^{2}+2(\frac{V}{k})^{2(1+\gamma)}+\beta\sqrt{Y})\right)^{-2}-\alpha X^{-1}(\alpha+\beta+\sqrt{Y})\right], \end{eqnarray} where $X=\alpha\beta-(\frac{V}{k})^{2(1+\gamma)}$ and $Y=(\alpha+\beta)^{2}-4(1+\gamma)X$. When the volume is very small, the above expression reduces to zero, while for large volumes we have \begin{eqnarray} \left(\frac{\partial P}{\partial V}\right)_{S}&=&\frac{-4(1+\gamma)\{(\beta-\alpha(1+\gamma))\sqrt{(\alpha-\beta)^{2}-4\alpha\beta\gamma} +(\alpha-\beta)^{2}-3\alpha\beta\gamma\}} {V \alpha^{2} \beta^{2}\{-\alpha+\beta+\sqrt{(\alpha-\beta)^{2} -4\alpha\beta\gamma}\}\sqrt{(\alpha-\beta)^{2}-4\alpha\beta\gamma}}\nonumber\\ &&\times \left[(\alpha+\beta)\sqrt{(\alpha-\beta)^{2}-4\alpha\beta\gamma} +\alpha^{2}+\beta^{2}-2\alpha\beta\gamma\right]. \end{eqnarray} Figure 5(a) shows that $\left(\frac{\partial P}{\partial V}\right)_{S}<0$ at large volumes, while at small volumes it is positive and tends to zero as $V\rightarrow 0$. So, the adiabatic condition is satisfied for all the considered values of $k$.\\ The specific heat capacity at constant volume is defined by \begin{eqnarray} C_{V}=T\left(\frac{\partial S}{\partial T}\right)_{V}, \end{eqnarray} where the temperature can be obtained from the relation \cite{Shar} \begin{eqnarray} T=\frac{\partial U}{\partial S}=\left(\frac{\partial U}{\partial k}\right) \left(\frac{\partial k}{\partial S}\right). \end{eqnarray} Differentiating (6) with respect to $k$, we get \begin{eqnarray} \frac{\partial U}{\partial k}=\frac{2(1+\gamma)(\frac{V}{k})^{2(1+\gamma)}}{k X\sqrt{Y}}[V(1+\gamma)-U \sqrt{Y}]. \end{eqnarray} From equations (16) and (17), we obtain \begin{eqnarray} T=\frac{2(1+\gamma)(\frac{V}{k})^{2(1+\gamma)}}{k X\sqrt{Y}}[V(1+\gamma) -U \sqrt{Y}]\left(\frac{\partial k}{\partial S}\right). \end{eqnarray} From equation (6), we can write (in the sense of dimensional analysis) \cite{Shar} \begin{eqnarray} [k]^{(1+\gamma)}=[U][V]^{\gamma}. \end{eqnarray} Using the relation $[U]=[T][S]$, we obtain \begin{eqnarray} [k]=[T]^{\frac{1}{(1+\gamma)}}[S]^{\frac{1}{(1+\gamma)}}[V]^{\frac{\gamma}{(1+\gamma)}}. \end{eqnarray} From this result, we get \begin{eqnarray} k=(\tau \nu^{\gamma}S)^{\frac{1}{(1+\gamma)}}, \end{eqnarray} where $\tau$ and $\nu$ are constants having the dimensions of temperature and volume respectively. Differentiating (21), we obtain \begin{eqnarray} \frac{\partial k}{\partial S}=\frac{1}{(1+\gamma)} \left(\frac{\tau \nu^{\gamma}}{S^{\gamma}}\right)^{\frac{1}{(1+\gamma)}}. \end{eqnarray} Using (18) and (22), we have \begin{eqnarray} T=\frac{2B^{\frac{1}{1+\gamma}}S^{-\frac{\gamma}{1+\gamma}}(\frac{V}{k}) ^{2(1+\gamma)}}{k X\sqrt{Y}}[V(1+\gamma)-U \sqrt{Y}], \end{eqnarray} where $B=\tau \nu^{\gamma}$. Using (6) and (21), equation (23) becomes \begin{eqnarray} T=-\frac{2BV^{3+2\gamma}}{X'^{2}\sqrt{Y'}}[(1+\gamma)X'+((\alpha+\beta)BS+\sqrt{Y'})\sqrt{Y'}], \end{eqnarray} where $X'=B^{2}S^{2}X$ and $Y'=B^{2}S^{2}Y$. When $T=0$, the entropy is $S=0$, which implies that the third law of thermodynamics is satisfied for our Van der Waals fluid model.
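As a quick numerical cross-check of stability condition (i), the adiabatic derivative can also be obtained by finite differences: holding $k$ fixed in equation (7) amounts to holding the entropy fixed, by equation (21). A minimal Python sketch, reusing the illustrative parameters $\gamma=0.7$, $\beta=1$, $\alpha=20$ and $k=5$, is given below; for these values the derivative comes out positive at small volumes and negative at large volumes, in line with figure 5(a).
\begin{verbatim}
import numpy as np

gamma, beta, alpha, k = 0.7, 1.0, 20.0, 5.0

def pressure(V):
    """P(V) at fixed k (i.e., fixed entropy), composing Eqs. (2) and (7)."""
    X = alpha * beta - (V / k) ** (2.0 * (1.0 + gamma))
    Y = (alpha + beta) ** 2 - 4.0 * (1.0 + gamma) * X
    rho = ((alpha + beta) + np.sqrt(Y)) / (2.0 * X)
    return gamma * rho / (1.0 - beta * rho) - alpha * rho ** 2

# Central finite difference for (dP/dV)_S, on points avoiding the pole of
# Eq. (7) at (V/k)^(2(1+gamma)) = alpha*beta, i.e. V ~ 12.07 here.
for V in (0.5, 2.0, 30.0, 60.0):
    h = 1e-5 * V
    dPdV = (pressure(V + h) - pressure(V - h)) / (2.0 * h)
    print(f"V = {V:5.1f}:  (dP/dV)_S = {dPdV:+.4e}")
\end{verbatim}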
Differentiating eq. (24) with respect to $S$, we obtain \begin{eqnarray} &&\frac{\partial T}{\partial S}=\frac{2V^{3+2\gamma}B^{2}}{X'^{3}Y'^{\frac{3}{2}}} [2BS\alpha\beta\{3X'Y'(1+\gamma)-2X'^{2}(1+\gamma)^{2}+2Y'^{2}\}+\nonumber\\ &&Y'^{\frac{3}{2}}(\alpha+\beta)\{4\alpha\beta B^{2}S^{2}-X'\}+BSX'(\alpha+\beta)^{2}\{(1+\gamma)X'+2Y'\}]. \end{eqnarray} Now from equation (15), we have the expression for the specific heat capacity in the following form: \begin{eqnarray} C_{V}=-[X'^{2}Y(1+\gamma)+X'Y'^{\frac{3}{2}}\{SB(\alpha+\beta)+\sqrt{Y}\}] \nonumber\\ \times \Big{(}B[2BS\alpha\beta\{3X'Y'(1+\gamma)-2X'^{2}(1+\gamma)^{2}+2Y'^{2}\} \nonumber\\ +Y'^{\frac{3}{2}}(\alpha+\beta)\{4\alpha\beta B^{2}S^{2}-X'\}+BSX'(\alpha+\beta)^{2}\{(1+\gamma)X'+2Y'\}]\Big{)}^{-1}. \end{eqnarray} The specific heat $C_{V}$ is plotted as a function of the volume $V$ in figure 5(b) for three different values of $k$. The specific heat is positive for all the considered values of $k$. It should be noted that when the temperature $T$ is zero, $C_{V}$ vanishes, which assures the validity of the third law of thermodynamics.\\ \begin{figure} \includegraphics[height=1.8in]{5.eps}~~~\includegraphics[height=1.8in]{6.eps}~\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.5(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Fig.5(b)~~~~~~~~~~~~~~~~\\ \textit{\textbf{Figure 5:} Plots of $\left(\frac{\partial P}{\partial V}\right)_{S}$ and $C_{V}$ respectively against V for $\gamma=0.7, \beta=1, \alpha=20$ and $k=5$ (blue curve), $k=10$ (red curve) and $k=20$ (green curve).}\vspace{2mm} \end{figure} \section{Discussions} In this work, we have studied the thermodynamic properties of a cosmological fluid described by the Van der Waals equation of state in the framework of a flat FRW universe. The phenomenon of the late-time accelerated expansion of the universe is studied through different physical parameters, namely the pressure, the effective EoS parameter, the deceleration parameter and the squared speed of sound. Figure 1 shows the positive and negative behavior of the pressure $P$. We have also observed that at small volume the universe is accelerating, and it evolves to a decelerated phase at large volume. From figure 2, we have observed that the EoS parameter $\omega$ transits from $-1$ to positive values as the volume increases. That means it yields the cosmological constant model for small volume, then it enters the quintessence region and finally reaches the positive region (tending to $\gamma$). From figure 3, we have seen that the deceleration parameter $q$ increases from $-1$ to positive values (tending to $\frac{1+3\gamma}{2}$). It describes acceleration at small volumes, whereas at large volumes it exhibits decelerating behavior. So, a transition from an accelerating to a decelerating universe occurs: $q$ yields the cosmological constant model for small volume, with $q\rightarrow -1$ as $V\rightarrow 0$, while for larger volumes it crosses the quintessence region. For the stability analysis of the model, the squared speed of sound $V_{s}^{2}$ has been drawn in figure 4. We have observed that for $0\le V\lesssim 9$ the graph shows $V_{s}^{2}>0$, while for $V\gtrsim 9$ it shows $V_{s}^{2}<0$. So the model is classically stable for small volume, while for large volume it shows unstable behavior. Finally, we have examined the thermodynamic stability of the considered fluid using the adiabatic, isothermal and specific heat conditions.
From figure 5(a), we have seen that $\left(\frac{\partial P}{\partial V}\right)_{S}<0$ at large volumes, while at small volumes it is positive and tends to zero as $V\rightarrow 0$. So, the adiabatic condition is satisfied. From figure 5(b), we have observed that the specific heat $C_{V}$ is always positive, and that it vanishes when the temperature $T$ is zero; hence the third law of thermodynamics is obeyed for the Van der Waals fluid. In all the figures, we have used the values $k=5,10,20$.
\section*{Acknowledgments} \bibliographystyle{unsrtnat} \section{Conclusion}\label{lbl:conclusion} In this paper, considering the difficulty of cross-device deployment of federated learning, we proposed WebFed, a browser-based federated learning framework with privacy preservation. By leveraging the cross-device nature of web browsers, WebFed is more conducive to practical federated learning research and real-world deployment. Moreover, in order to strengthen privacy protection, we apply a local differential privacy mechanism to add artificial noise to the local updates before global aggregation. Finally, we conduct experiments on heterogeneous devices (smartphones and PCs) to evaluate the performance of WebFed. The experimental results show that our browser-based framework can achieve a training effect close to that of a conventional framework in terms of model accuracy. We also found that an appropriate privacy budget setting can enhance privacy protection with only a subtle impact on training performance. \section{Framework Design}\label{lbl:framework} In this section, we first describe the architecture design of WebFed and explain the Workers and their relationships, and then we introduce the details of the WebFedAVG algorithm. \begin{figure}[tbp] \centering \includegraphics[width=0.5\linewidth]{figures_v2/structure.png} \caption{Schematic illustration of the WebFed architecture. Icons in this figure are from \textit{https://www.flaticon.com/}. } \label{lbl:tech} \end{figure} \subsection{Overall Design of WebFed} Under WebFed, the browser-based federated learning framework, a shared deep neural network (DNN) model is jointly trained in browsers among $N$ clients without gathering their private data. We assume there is a parameter server that is in charge of aggregating the clients' updates (the weights of their local models) by conventional methods (e.g., FedAvg, FedSGD) and broadcasting the new global model. Parameter Servers (PS) are \textit{Node.js} applications that control the communication between devices and host ML projects. As depicted in Fig. \ref{lbl:tech}, a client in WebFed can be viewed as being composed of four Workers: \begin{itemize} \item \textbf{UI Worker:} When the corresponding web page is opened, the UI Worker is initialized. Clients can perform intuitive operations through the UI Worker, such as connecting to the server, selecting training tasks, and so on. The UI Worker is mainly responsible for the interaction with the user and the specific training configuration on the browser side. \item \textbf{Data Worker:} The Data Worker participates in training mainly by managing the local datasets. It can optimize the entire training process by pre-loading and pre-processing the training data. \item \textbf{LDP Worker:} The LDP Worker is in charge of adding artificial noise to the local training results before global aggregation. It accepts the model from the Training Worker and returns the perturbed model weights. \item \textbf{Training Worker:} The Training Worker is responsible for performing the specific training tasks and interacting with the server. First, it receives the global model broadcast by the server and then updates the maintained local model. It accepts the training data provided by the Data Worker and then conducts the in-browser training process. After the local training, it interacts with the LDP Worker as Fig.
\ref{lbl:tech} illustrates to obtain the model weights with artificial noise, and uploads them to the parameter server for global aggregation. \end{itemize} \subsection{Algorithm Design} \subsubsection{Federated learning tasks} Let $D_i$ denote the local training dataset of client $i$, with $i \in \{1,\dots,N\}$; the union of the local datasets is $\bigcup_{i} D_{i}$, and the total number of training samples is $D_{N}=\sum_{i}|D_{i}|$. For a classification problem, the $j$-th training sample of client $i$ consists of input data $x_{j}$ and the corresponding label $y_{j}$. The full parameters of the training model are denoted as $\bm{\omega}$, and the loss of the $j$-th data sample is denoted as $l_j( \bm{\omega};(x_{j},y_{j}))$, shortened to $l_j(\bm{\omega})$. The training goal is to minimize the loss $L(\bm{\omega})$ defined through Eq. (\ref{fwi}) and Eq. (\ref{fw}). Assuming that each client has its own training data $D_i$, the local loss is expressed as: \begin{equation}\label{fwi} L_i(\bm{\omega})=\frac{1}{|D_i|}\sum_{j=1}^{|D_i|}l_j(\bm{\omega}). \end{equation} Therefore, the global loss is calculated as: \begin{equation}\label{fw} L(\bm{\omega})=\frac{\sum_{i=1}^{N}{|D_i|}L_i(\bm{\omega}) }{D_{N}}, \end{equation} where $N$ denotes the total number of clients participating in the training process. For updating the model parameters under such a distributed architecture, we utilize classic gradient-descent methods (e.g., Adam, ASGD), mathematically expressed below: \begin{equation} \bm{\omega}_i(t)=\bm{\omega}_i(t-1)-\eta\nabla L_i(\bm{\omega}_i(t-1)), \end{equation} where $t$ denotes the index of the training epoch, and $\eta$ denotes the step size of the gradient descent method. \begin{algorithm}[t] \small \caption{WebFedAVG} \label{alg1} \begin{algorithmic}[1] \State{\textbf{Input}: $\bm{\omega}_0$, $\epsilon$, m, T, $D_{i}$, and $\eta$ } \State{\textbf{Output}: $\bm{\omega}$} \State Register all available clients on the server side \State Initialize all clients with model $\bm{\omega}_0$ \For{round $t=1, 2, ..., T$} \State Server randomly selects $m$ clients \For{each client $i=1,2,...,m$} \State \textbf{In-browser Training:} \State $\bm{\omega}_i(t) \leftarrow \bm{\omega}_i(t-1)-\eta\nabla L_i(\bm{\omega}_i(t-1))$ \State \textbf{LDP Process:} \State $\tilde{\bm{\omega}}_{i}(t)=\bm{\omega}_{i}(t) + Lap^{i}(\Delta s/\epsilon)$ \State Upload $\tilde{\bm{\omega}}_{i}(t)$ to server \EndFor \State{The server receives all updates and does} \State \textbf{Global Aggregation:} \State $\bm{\omega}(t)\leftarrow \sum_{i} \frac{|D_{i}| {\tilde{\bm{\omega}}}_{i}(t)}{\sum_{i} |D_{i}|}$ \State{The server sends new global model $\bm{\omega}(t) $ to all clients} \For{each client $i=1,2,...,n$} \State{$\bm{\omega}_i(t) \leftarrow \bm{\omega}(t)$. } \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsubsection{Local Differential Privacy} In this subsection, we first introduce the definition of $\epsilon$-LDP and then demonstrate how to apply it in our framework. WebFed guarantees a certain degree of privacy, as the clients never send their private raw data to the parameter server. However, private information could still be inferred from the shared local model updates; therefore, leveraging further privacy-preserving mechanisms is necessary. $\epsilon$-LDP inherits the features of centralized DP \cite{9069945} and is able to resist privacy attacks from an untrusted server, providing a strong criterion for preserving the private information of distributed clients.
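As a concrete illustration of lines 11 and 16 of Algorithm \ref{alg1}, the following Python sketch perturbs a client's model weights with Laplace noise of scale $\Delta s/\epsilon$ and then aggregates the noisy updates weighted by the local dataset sizes $|D_i|$. The sensitivity $\Delta s$, the weight shapes and the dataset sizes are hypothetical placeholders chosen for illustration; WebFed itself performs these steps in the browser, while this sketch uses NumPy for brevity.
\begin{verbatim}
import numpy as np

def ldp_perturb(weights, sensitivity, epsilon, rng):
    """LDP step (Alg. 1, line 11): add Laplace noise with scale
    Delta_s / epsilon to every model weight."""
    scale = sensitivity / epsilon
    return [w + rng.laplace(0.0, scale, size=w.shape) for w in weights]

def fedavg(client_weights, client_sizes):
    """Global aggregation (Alg. 1, line 16): average the (noisy)
    client models weighted by their local dataset sizes |D_i|."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(s / total * cw[l] for cw, s in zip(client_weights, client_sizes))
        for l in range(n_layers)
    ]

rng = np.random.default_rng(0)
# Hypothetical two-layer model and three clients with different |D_i|.
global_model = [rng.normal(size=(4, 3)), rng.normal(size=(3,))]
sizes = [100, 250, 150]
noisy_updates = [
    ldp_perturb(global_model, sensitivity=1.0, epsilon=3.0, rng=rng)
    for _ in sizes  # stand-in for each client's locally trained weights
]
new_global = fedavg(noisy_updates, sizes)
print([w.shape for w in new_global])
\end{verbatim}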
Now, we formally define LDP as follows. \begin{definition}[Local Differential Privacy] A randomized mechanism $\mathcal{M}$ satisfies $\epsilon$-local differential privacy ($\epsilon$-LDP), where $\epsilon \geq 0$, if and only if for any inputs $v$ and $v^{\prime}$, and any output $y$ of $\mathcal{M}$, we have \begin{equation} \forall y \in \operatorname{Range}(\mathcal{M}): \operatorname{Pr}[\mathcal{M}(v) \in y] \leq e^{\epsilon} \operatorname{Pr}\left[\mathcal{M}\left(v^{\prime}\right) \in y \right], \label{lbl:ldpTheroty} \end{equation} \end{definition} where $\operatorname{Range}(\mathcal{M})$ denotes the set of all possible outputs of the algorithm $\mathcal{M}$, and $\operatorname{Pr}[\cdot]$ denotes probability. $\epsilon$ denotes the distinguishable bound on the probability distributions of neighboring inputs, required so that any neighboring inputs yield almost identical output distributions; it is also called the privacy budget. Basically, LDP provides a stronger level of protection compared to the centralized DP setting, because each user only sends a perturbed report. Even in the case of a malicious parameter aggregator, the privacy of each client is still protected. According to the above definition, no matter what background knowledge the aggregator holds, it cannot distinguish with high confidence whether the real tuple is $v$ or $v^{\prime}$ after receiving the perturbed report under LDP protection. Therefore, deploying LDP in our WebFed framework further ensures that the model weights are protected from attacks by an aggregator with background knowledge. \subsubsection{WebFedAVG overall workflow} Based on the above details, we briefly summarize the overall workflow of WebFedAVG. As shown in Algorithm \ref{alg1}, the parameter server initializes the training model and broadcasts it to all clients before the collaborative training process, and it randomly selects several clients as participants in each round of training. When the clients receive the initial global model, they start their in-browser training with their own local data. To address privacy concerns, they add artificial noise to their training results, as indicated in line 11 of Algorithm \ref{alg1}, and then upload them to the parameter server. The server then performs weighted averaging to generate a new global model and broadcasts it to all clients. At the same time, the clients selected for the next round receive a request along with the new global model. The above procedure repeats until the prescribed number of epochs or the accuracy requirement is reached. \section{Introduction} Over the past ten years, breakthroughs in machine learning (ML) technology, driven by the development of computing power and big data, have yielded encouraging results, showcasing Artificial Intelligence (AI) systems able to assist life, education, and work in a variety of scenarios. However, the conventional training mode encounters issues such as privacy leakage of user data and massive data transmission, since it needs to gather all the raw training data. Hence, the privacy-preserving issue has attracted more and more attention \cite{house2012consumer}, and it has promoted a novel machine learning technology called federated learning (FL), which leaves the personal training data distributed on the clients' local devices and enables them to learn a shared global model by aggregating their local training results.
FL can be seen as one instance of a more general approach to addressing the fundamental problems of privacy, ownership, and isolated data islands \cite{bonawitz2019towards}. The emergence of FL has promoted the development of machine learning applications in many fields, since it resolves the contradiction between model training and user data privacy: collecting more user data is conducive to the training effect but may lead to user privacy leakage. \begin{figure}[tbp] \centering \includegraphics[width=0.8\linewidth]{figures_v2/webfed.png} \caption{WebFed: Browser-based Federated Learning framework} \label{lbl:systemModel} \end{figure} Nevertheless, most existing federated learning frameworks require participants with heterogeneous operating systems (Windows, macOS, iOS, and Android) to install related software and configure a complicated learning environment, which seriously hinders their application. Therefore, it is necessary to develop solutions other than standard methods for large-scale machine learning, distributed optimization, and other scenarios \cite{li2020federated}. That is why we develop a browser-based federated learning framework. As an engine for developing distributed machine learning models and algorithms, the browser has the following advantages: (1) Browsers with cross-platform features and web programming languages make the software compatibility of machine learning models and algorithms straightforward: nearly all computing devices can participate in machine learning training by contributing their computing resources to the entire learning process without any additional software installation, and can, with the same code, use the resulting predictive models on the same devices. (2) WebGL and related technologies can make better use of integrated graphics cards to accelerate deep learning (DL) tasks, without the complex driver configuration of discrete graphics cards (e.g., NVIDIA) that native DL frameworks require \cite{ma2019moving}. (3) Browsers have great potential, are inexpensive and deployed at large scale, and can bring complex ML learning and prediction to the general public \cite{meeds2015mlitb}. Moreover, even though FL avoids gathering users' data, it still encounters privacy challenges: the clients' information can be leaked by analyzing their uploaded parameters, such as the trained weights of deep neural networks (DNNs) \cite{9069945}. A comprehensive privacy analysis of deep learning models was first performed in \cite{nasr2019comprehensive}. The authors designed white-box inference attacks and tested the privacy leakage through the fully trained model's parameters and the updates during the whole training process. They also investigated the reasons why deep learning models leak information about their training data. To address the aforementioned information leakage issues, differential privacy (DP) is used as a formal formulation of privacy in probabilistic terms, preventing leakage of the information contained in an algorithm's inputs by adding noise at a centralized server \cite{fioretto2020differential}. The above studies motivate us to develop a browser-based cross-platform FL framework with local differential privacy that is capable of performing large-scale collaborative training on heterogeneous devices and enhances privacy protection by adding artificial noise.
To our knowledge, this is the first exploration of a privacy-enhanced browser-based FL framework. The contributions of this paper are threefold: \begin{enumerate} \item We propose WebFed, a browser-based cross-platform federated learning framework in which machine learning models are trained locally in web browsers with the clients' own local data. \item To strengthen privacy protection, we apply a local differential privacy mechanism in WebFed. Before uploading the training results, each client adds artificial noise to its local model's weights. Doing so can counter inference attacks without significantly degrading performance. \item Experiments on heterogeneous devices are conducted to evaluate the proposed WebFed framework. It is very easy to deploy because of its cross-platform nature, and the results demonstrate that WebFed achieves good training results even though it runs in web browsers. \end{enumerate} The rest of this paper is organized as follows: Sec. \ref{lbl:relatedWork} summarizes the related work. Sec. \ref{lbl:framework} details the design of WebFed. Sec. \ref{lbl:experiment} evaluates the browser-based training performance and discusses the experiment results. In Sec. \ref{lbl:conclusion}, we give the conclusion of our current work, and the future direction is discussed in Sec. \ref{lbl:futureWork}. \section{Experiment and Analysis}\label{lbl:experiment} In this section, we conduct experiments to evaluate the performance of the WebFed framework and analyze the experiment results. \addtolength{\topmargin}{0.01in} \subsection{Experiment Setup} \begin{comment} \begin{table}[bp] \caption{Model Structure } \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Layer Name} & \textbf{Output Shape}& \textbf{$\#$ Parameters}& \textbf{Kernel Size}\\ \hline $\textit{Conv2d\_1}$& $\left[batch,24,24,8\right]$& 208 & $\left[5 \times 5 \right]\times 8$ \\ $\textit{MaxPooling2d\_1}$& $\left[batch,12,12,8\right]$& 0& $\left[2\times 2\right]$\\ $\textit{Conv2d\_2}$& $\left[batch,8,8,16\right]$& 3216& $\left[5 \times 5 \right]\times 16$\\ $\textit{MaxPooling2d\_2}$& $\left[batch,4,4,16 \right]$& 0&$\left[2\times 2\right]$ \\ $\textit{Flatten\_1}$& $\left[batch,256\right]$& 0& $\textit{None}$ \\ $\textit{Dense\_1}$& $\left[batch,10\right]$&2570&$\textit{None}$ \\ \hline \end{tabular} \label{table:cnn} \end{center} \end{table} \end{comment} Here we choose a lightweight deep learning network, LeNet, with two convolutional layers (kernel size 5$\times$5) and two max pooling layers (kernel size 2$\times$2), as the collaborative training model. Our evaluation is conducted on MNIST, one of the most popular benchmark datasets for machine learning models. \begin{table}[tbp] \caption{Training devices} \label{lbl:mobileDevice} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Type} & \textbf{Name}& \textbf{Operating system}& \textbf{Hardware}&\textbf{Browser}\\ \hline Smartphone& iPhone 12& iOS 15 & A14 Bionic& Chrome \\ Smartphone& HUAWEI & Harmony&KIRIN 980& Chrome \\ PC& Dell &Ubuntu 18.04& i7 CPU & FireFox \\ PC& Dell &Ubuntu 18.04& i7 CPU & FireFox \\ PC &MAC &BigSur& M1& Safari \\ \hline \end{tabular} \end{center} \end{table} We deploy the parameter server on a PC. For the clients, we select two smartphones and three PCs to test the WebFed framework; the specific information of the devices is shown in Table \ref{lbl:mobileDevice}.
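For reference, the collaborative model described above (whose layer-by-layer structure is recorded in the commented-out model-structure table) can be reconstructed as in the sketch below. The layer widths (8 and 16 filters) and output shapes follow the parameter counts in that table, while the ReLU/softmax activations and the optimizer choice are assumptions, since the table does not record them; WebFed builds the equivalent model with TensorFlow.js in the browser, whereas this sketch uses Python with tf.keras for compactness.
\begin{verbatim}
import tensorflow as tf

# LeNet-style model matching the structure table: two 5x5 conv layers
# (8 and 16 filters), two 2x2 max-pooling layers, and a dense output
# over the 10 MNIST classes.
def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(8, 5, activation="relu"),   # -> 24x24x8, 208 params
        tf.keras.layers.MaxPooling2D(2),                   # -> 12x12x8
        tf.keras.layers.Conv2D(16, 5, activation="relu"),  # -> 8x8x16, 3216 params
        tf.keras.layers.MaxPooling2D(2),                   # -> 4x4x16
        tf.keras.layers.Flatten(),                         # -> 256
        tf.keras.layers.Dense(10, activation="softmax"),   # 2570 params
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # parameter counts should match the commented table
\end{verbatim}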
To simplify the experiment, we stipulate that all devices participate in the whole training process. \subsection{Results and Analysis} \subsubsection{Comparison between WebFed and FL} With these experiment settings, we obtain the learning trends of both the accuracy and loss values for the WebFed framework. Moreover, we also test a conventional federated learning framework that does not run in browsers, called FL for short in the following. In order to control the experimental variables, we apply local differential privacy to both, with a privacy budget of 3. The experiment results are shown in Fig. \ref{lbl:WebFedAccuracy} and Fig. \ref{lbl:WebFedloss}; both frameworks achieve training convergence after a fixed number of epochs. In Fig. \ref{lbl:WebFedAccuracy}, we compare the accuracy between FL and WebFed. It illustrates that, for the same training time, the accuracy of the traditional federated learning framework is slightly higher than that of WebFed. We can also see that the WebFed curve fluctuates more than the FL one: when the training time is about 300 seconds, the accuracy difference between FL and WebFed is evident, with a gap of about 13\%. But as training continues, the gap between the two frameworks becomes smaller and smaller. We believe this might result from the interaction of multiple factors (e.g., the client selection approach, the heterogeneity of clients, the quality of the dataset). Indeed, the performance of WebFed is close to that of FL during training, although the gap still exists. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures_v2/acc.png} \caption{The accuracy of WebFed versus FL on MNIST.} \label{lbl:WebFedAccuracy} \end{figure} \subsubsection{Comparison under different privacy budgets} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figures_v2/arxivnoiseacc.png} \caption{The accuracy of WebFed versus epsilon on MNIST.} \label{lbl:noiseAccuracy} \end{figure} To explore the impact of adding artificial noise on the training process, experiments are conducted with different privacy budgets. We consider three cases, namely $\epsilon=3$, $\epsilon=6$, and noise-free (no noise added). We train the shared model for 200 epochs, and Fig. \ref{lbl:noiseAccuracy} shows the impact of the different privacy budgets on the global model accuracy. We can see that the accuracy of the model decreases as the privacy budget $\epsilon$ decreases. In the early and middle stages of training, the gap among the three cases is the largest. After 50 rounds of training, for $\epsilon=6$ and the noise-free case, the accuracy is about 0.8; for $\epsilon=3$, the accuracy is the lowest, around 0.72, a gap of about 10\% compared with the other two cases. After 200 epochs of training, for $\epsilon=3$ and $\epsilon=6$ the accuracy is about 87\% and 90\%, respectively, while the noise-free case reaches the highest accuracy, about 92\%. This is because a smaller privacy budget imposes stronger privacy protection requirements and thus affects the training performance more. Our experiments on different privacy budgets show that an appropriate privacy budget can strengthen privacy protection while limiting the impact on training performance.
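To relate these privacy budgets to concrete noise magnitudes, recall that the Laplace mechanism of Algorithm \ref{alg1} adds noise with standard deviation $\sqrt{2}\,\Delta s/\epsilon$ per weight, so halving $\epsilon$ doubles the perturbation. A tiny sketch, with an assumed sensitivity $\Delta s=1$ chosen purely for illustration (the value used in the experiments is not restated here):
\begin{verbatim}
import numpy as np

sensitivity = 1.0  # assumed Delta_s; illustrative only
for eps in (3.0, 6.0):
    scale = sensitivity / eps      # Laplace scale b = Delta_s / eps
    std = np.sqrt(2.0) * scale     # std of Laplace(0, b) is sqrt(2)*b
    print(f"eps = {eps:.0f}: scale b = {scale:.3f}, per-weight std = {std:.3f}")
print("noise-free: std = 0")
\end{verbatim}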
\section{Preliminaries}\label{lbl:relatedWork} \subsection{Machine Learning in The Browser} TensorFlow.js is a library developed by Google to build and execute machine learning algorithms in JavaScript, which means the training process can be realized in the browser or in a Node.js environment \cite{smilkov2019tensorflow}. It not only brings deep learning to JavaScript developers but also enables deep learning applications in web browsers that support WebGL \cite{kletz2021open}. Recently, a browser-based ML application development tool was proposed to reduce the programming burden on researchers \cite{ozan2021novel}. WebSocket is a communication protocol that provides full-duplex communication channels over a single TCP connection. It can reduce communication overhead and offer stateful, efficient communication between web servers and clients, and is thus a good approach for providing real-time information \cite{ogundeyi2019websocket}. With breakthroughs in hardware and communication technology, training on mobile devices has become possible. Recent advancements in the mobile web have enabled features previously found only in natively developed applications \cite{biorn2017progressive}. Hence, we strongly believe that more and more browser-based mobile machine learning applications will appear soon. \subsection{Privacy Issues in Federated Learning } Federated learning allows training a shared global model with a central server without gathering the clients' private data. By designing a privacy-preserving and bandwidth-efficient federated learning system, an application for the prediction of in-hospital mortality was developed in \cite{kerkouche2021privacy}. Moreover, FFD (Federated learning for Fraud Detection) was proposed to use behavior features distributed in the banks' local databases to train a fraud detection model, instead of the traditional fraud detection system that needs centralized data \cite{yang2019ffd}. In order to realize distributed machine learning with privacy protection, federated learning has shown great potential, as it avoids direct raw data sharing \cite{li2019asynchronous}. However, it is still possible to leak privacy, defined as the information an adversary can learn from the model about the users, even though the adversary does not gather their private data \cite{fredrikson2015model, hitaj2017deep}. For example, an adversary with black-box or white-box access to the target model aims to determine whether a given target data point belongs to the private training data of the target model \cite{nasr2019comprehensive}. Membership inference attacks are especially effective against deep neural networks \cite{nasr2019comprehensive}, since such models can better memorize their training data with their immense capacity. \subsection{Local Differential Privacy} Differential privacy is a formal formulation of privacy in probabilistic terms. It provides strict guarantees for an algorithm, preventing leakage of private information contained in its inputs in the centralized setting \cite{fioretto2020differential}. Local differential privacy (LDP) allows the clients to perturb their information locally, providing plausible deniability of their information without the need for a trusted party \cite{8731512}. In \cite{8731512}, the authors exploited novel LDP mechanisms to collect a numeric attribute; in terms of worst-case noise variance, their approach usually achieves better accuracy than existing solutions.
In \cite{kairouz2014extremal}, the authors focus on the fundamental trade-off between local differential privacy and information-theoretic utility functions, since data providers and analysts want to maximize the utility of statistical inferences performed on the released data. In \cite{8640266}, the authors propose to apply LDP to protect the clients' data via distribution estimation. In our WebFed, we enhance privacy protection by applying local differential privacy to add artificial noise to the training results before uploading them to the parameter server. \subsection{Towards WebFed} Significantly different from existing works, we are the first to develop a browser-based federated learning framework. Based on the restrictions on the development and deployment of federated learning mentioned above, the emergence of browser-based machine learning libraries, and the cross-platform characteristics of browsers, we propose WebFed, a browser-based federated learning framework with local differential privacy, to facilitate future research related to federated learning as well as the deployment and implementation of real-world applications. \section{Future direction}\label{lbl:futureWork} In the future, to further simplify deployment and increase the flexibility of federated learning, we plan to integrate the server into the browser. In this way, participants and servers (which can also be regarded as participants) only need to select their roles in the browser and form a group for federated learning, rather than deploying servers separately. Furthermore, we will optimize the corresponding mechanisms for clients' joining and leaving, and conduct more thorough experiments to evaluate the performance of WebFed. Machine learning in the browser is still in its infancy. We will further study the limitations of in-browser training, improve the existing work, and then publish WebFed as an open-source framework.
} \label{lbl:tech} \end{figure} \subsection{Overall Design of WebFed} Under WebFed, the Browser-based federated learning framework, a shared deep neural network (DNN) model is jointly trained in browsers among $N$ clients without gathering their private data. We assume there is a parameter server that is in charge of aggregating the clients' updates (weights of their local models) by convolutional methods (e.g., FedAvg, FedSGD) and broadcasting the new global model. Parameter Servers (PS) are \textit{Node.js} applications that control the communication between devices and hosts ML projects. As depicted in Fig. \ref{lbl:tech}, the clients in WebFed could be viewed as the composing of four Workers, that is: \begin{itemize} \item \textbf{UI Worker:} When the corresponding web page is opened, UI Worker is initialized. Clients can perform some intuitive operations through UI Worker, such as connecting to the server, selecting training tasks, and so on. UI Worker is mainly responsible for the interaction with the user and the specific training configuration on the browser side. \item \textbf{Data Worker:} Data Worker participates in training mainly by managing local datasets. It could optimize the entire training process by pre-loading and pre-processing the training data. \item \textbf{LDP Worker:} LDP Worker is in charge of adding artificial noise to the local training results before global aggregation. It accepts the model from the Training Worker and returns the perturbed model weights. \item \textbf{Training Worker:} Training Worker is responsible for performing specific training tasks and interacting with the server. First, it will receive the global model broadcasted by the server and then update the maintained local model. It accepts the training data provided by Data Worker and then conducts the in-browser training process. After the local training, it will then interact with the LDP Worker as Fig. \ref{lbl:tech} illustrates to obtain the model weights with artificial noise and upload them to the parameter server for global aggregation. \end{itemize} \subsection{Algorithm Design} \subsubsection{Federated learning tasks} Let $D_i$ denote the local training dataset of client $i$, where $i \in N$ and we consider the number of all training data samples is denoted as $D_{N}$, and $\bigcup_{i} D_{i}=D_{N}$. For the $j$-th training sample of client $i$ in a classification problem, it includes input training data and corresponding labels. The full parameters of the training model are denoted as $\bm{\omega}$ and the loss of $j$-th data sample is denoted as $l_j( \bm{\omega};(x_{j},y_{j}))$, shorted as $l_j(\bm{\omega})$. The training goal is to minimize the loss $L(\bm{\omega} )$ that is derived from Eq. (\ref{fwi}) and Eq. (\ref{fw}). Assuming that each client has its own training data $D_i$, then the local loss is expressed as: \begin{equation}\label{fwi} L_i(\bm{\omega})=\frac{\sum_{j=1}^{|D_i|}l_j(\bm{\omega}) }{|D_i|},\ {\rm for}\ j \in |D_i|. \end{equation} Therefore, the global loss is then calculated as: \begin{equation}\label{fw} L(\bm{\omega})=\frac{\sum_{i=1}^{N}{|D_i|}L_i(\bm{\omega}) }{D_{N}}. \end{equation} where $n$ to denote the total number of clients participating in the training process. For updating model parameters under such a distributed architecture, we utilize the classic gradient-descent methods (e.g., Adam, ASGD) which is mathematically expressed below: \begin{equation} \bm{\omega}_i(t)=\bm{\omega}_i(t-1)-\eta\nabla L_i(\bm{\omega}_i(t-1)). 
\end{equation} where $t$ denotes the index of training epoch, and $\eta$ denotes a step size in the gradient descent method. \begin{algorithm}[t] \small \caption{WebFedAVG} \label{alg1} \begin{algorithmic}[1] \State{\textbf{Input}: $\bm{\omega}_0$, $\epsilon$, m, T, $D_{i}$, and $\eta$ } \State{\textbf{Output}: $\bm{\omega}$} \State Register all available clients on the server side \State Initialize all clients with model $\bm{\omega}_0$ \For{round $t=1, 2, ..., T$} \State Server randomly selects $m$ clients \For{each client $i=1,2,...,m$} \State \textbf{In-browser Training:} \State $\bm{\omega}_i(t) \leftarrow \bm{\omega}_i(t-1)-\eta\nabla L_i(\bm{\omega}_i(t-1))$ \State \textbf{LDP Process:} \State $\tilde{\bm{\omega}}_{i}(t)=\bm{\omega}_{i}(t) + Lap^{i}(\Delta s/\epsilon)$ \State Upload $\tilde{\bm{\omega}}_{i}(t)$ to server \EndFor \State{The server receives all updates and does} \State \textbf{Global Aggregation:} \State $\bm{\omega}(t)\leftarrow \sum_{i} \frac{|D_{i}| {\tilde{\bm{\omega}}}_{i}(t)}{\sum_{i} |D_{i}|}$ \State{The server sends new global model $\bm{\omega}(t) $ to all clients} \For{each client $i=1,2,...,n$} \State{$\bm{\omega}_i(t) \leftarrow \bm{\omega}(t)$. } \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsubsection{Local Differential Privacy} In this subsection, we firstly introduce the related definition on $\epsilon-$LDP and then we demonstrate how to apply it into our framework. WebFed guarantees a certain degree of privacy as the clients never send their private raw data to the parameter server publicly. However, a little parameter value could still be inferred from the shared updates in the local model, thereby further leveraging privacy-preserving mechanisms is necessary. $\epsilon-$LDP inherits the features of centralized DP \cite{9069945} and is able to resist privacy attacks from the untrusted server, provides a strong criterion for privacy information preservation of distributed clients. Now, we formally define LDP as follows. \begin{definition}[Local Differential Privacy] An randomized mechanism $\mathcal{M}$ satisfies $\epsilon -$local differential privacy ($\epsilon-$LDP), where $\epsilon \geq 0$, if and only if for any input $v$ and $v^{\prime}$, and any output $y$ of $M$, we have \begin{equation} \forall y \in \operatorname{Range}(\mathcal{M}): \operatorname{Pr}[\mathcal{M}(v) \in y] \leq e^{\epsilon} \operatorname{Pr}\left[\mathcal{M}\left(v^{\prime}\right) \in y \right] \label{lbl:ldpTheroty} \end{equation} \end{definition} where, $\operatorname{Range}(\mathcal{M})$ denotes the set of all possible outputs of the algorithms $\mathcal{M}$. The notation $Pr[\cdot]$ means probability. $\epsilon$ denotes the distinguishable bound of the probability distributions of neighboring inputs to achieve almost identical output results for any neighboring inputs, which is also called the privacy budget. Basically, LDP provides a stronger level of protection compared to the centralized DP setting, because each user only sends the perturbed report. Even in the case of a malicious parameter aggregator, the privacy of each client is still protected. According to the above definition, no matter what extent of background knowledge the aggregator holds, it cannot distinguish with high confidence whether the real tuple is $v$ or $v^{\prime}$ after receiving the perturbed report under LDP protection. Therefore, deploying LDP in our WebFed framework could further ensure that model weights are protected from attacks by an aggregator with background knowledge. 
\subsubsection{WebFedAVG overall workflow} Based on the above details, we briefly summarize the overall workflow of WebFedAVG. As shown in Algorithm \ref{alg1}, the parameter server will initialize the training model and broadcast it to all clients before the collaboratively training process, and it will randomly select several clients as participants in each round of training. When clients receive the initial global model, they will start their in-browser training with their own local data. For privacy concerns, they will add artificial noise to their training results as indicated in line 11 of Algorithm \ref{alg1}, and then upload them to the parameter server. The server will then do weighted averaging to generate a new global model and broadcast it to all clients. At the same time, the clients selected for the next round will receive a request along with the new global model. The above procedure repeats until reaching the number of epochs or the accuracy requirement. \section{Introduction} During the past ten years, the breakthrough of machine learning (ML) technology has recently yielded encouraging results due to the development of computing power and big data, showcasing Artificial Intelligence (AI) systems able to assist life, education, and work in a variety of scenarios. However, the conventional training mode encounters the issues, such as privacy leakage of user data and massive data transmission since it needs to gather all the raw training data. Hence, the privacy-preserving issue has attracted more and more attention \cite{house2012consumer}, and it has promoted a novel machine learning technology, which is called federated learning (FL). It leaves the personal training data distributed on their local devices and enables the clients to learn a shared global model by aggregating their local training results. It is seemed to be one instance of the more general approach to address the fundamental problems of privacy, ownership, and data isolated islands \cite{bonawitz2019towards}. The emergence of FL has promoted the development of machine learning applications in many fields since it could solve the problem caused by the contradiction between model training and user data privacy, that is, collecting more user data is conducive to the training effect but may lead to user privacy leakage. \begin{figure}[tbp] \centering \includegraphics[width=0.8\linewidth]{figures_v2/webfed.png} \caption{WebFed: Browser-based Federated Learning framework} \label{lbl:systemModel} \end{figure} Nevertheless, the most existing federated learning frameworks require the participants who have the heterogeneous operating systems (Windows, macOS, iOS, and Android) to install related software and configure complicated learning environment which seriously blocks their applications. Therefore, it is necessary to develop solutions other than standard methods for large-scale machine learning, distributed optimization, and other scenarios\cite{li2020federated}. That's is why we will try to develop a browser-based federated learning framework. 
As an engine for developing distributed machine learning models and algorithms, the browser has the following advantages: (1) Browsers' cross-platform nature and web programming languages give machine learning models and algorithms broad software compatibility: nearly all computing devices can participate in training by contributing their computing resources, without installing any additional software, and the same code can later serve the trained predictive models on the same devices. (2) WebGL and related technologies make better use of integrated graphics cards to accelerate deep learning (DL) tasks, without the complex driver configuration that native DL frameworks require for discrete graphics cards such as NVIDIA's \cite{ma2019moving}. (3) Browsers have great potential: they are inexpensive, deployed at a large scale, and can bring complex ML training and prediction to the general public \cite{meeds2015mlitb}. Moreover, even though FL avoids gathering users' data, it still faces privacy challenges: clients' information can be leaked by analyzing the parameters they upload, such as the trained weights of deep neural networks (DNNs) \cite{9069945}. A comprehensive privacy analysis of deep learning models was first performed in \cite{nasr2019comprehensive}. The authors designed white-box inference attacks and tested the privacy leakage through the fully trained model's parameters as well as the updates during the whole training process. They also investigated why deep learning models leak information about their training data. To address the aforementioned information leakage, differential privacy (DP) provides a probabilistic formulation of privacy that prevents leakage of the information contained in an algorithm's inputs by adding noise at a centralized server \cite{fioretto2020differential}. The above studies motivate us to develop a browser-based cross-platform FL framework with local differential privacy that is capable of performing large-scale collaborative training on heterogeneous devices and enhances privacy protection by adding artificial noise. To our knowledge, this is the first exploration of a privacy-enhanced browser-based FL framework. The contributions of this paper are threefold: \begin{enumerate} \item We propose WebFed, a browser-based cross-platform federated learning framework in which machine learning models are trained locally in web browsers with the clients' own local data. \item To strengthen privacy protection, we apply a local differential privacy mechanism in WebFed. Before uploading the training results, each client adds artificial noise to its local model's weights. Doing so counters inference attacks without significantly degrading performance. \item We conduct experiments on heterogeneous devices to evaluate the proposed WebFed framework. Thanks to its cross-platform nature it is very easy to deploy, and the results demonstrate that WebFed achieves good training results even when running in web browsers. \end{enumerate} The rest of this paper is organized as follows: Sec. \ref{lbl:relatedWork} summarizes the related work. Sec. \ref{lbl:framework} details the design of WebFed. Sec. \ref{lbl:experiment} evaluates the browser-based training performance and discusses the experiment results. Sec. \ref{lbl:conclusion} concludes our current work, and future directions are discussed in Sec. \ref{lbl:futureWork}.
\section{Experiments and Analysis}\label{lbl:experiment} In this section, we conduct experiments to evaluate the performance of the WebFed framework and analyze the results. \addtolength{\topmargin}{0.01in} \subsection{Experiment Setup} \begin{comment} \begin{table}[bp] \caption{Model Structure } \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Layer Name} & \textbf{Output Shape}& \textbf{$\#$ Parameters}& \textbf{Kernel Size}\\ \hline $\textit{Conv2d\_1}$& $\left[batch,24,24,8\right]$& 208 & $\left[5 \times 5 \right]\times 8$ \\ $\textit{MaxPooling2d\_1}$& $\left[batch,12,12,8\right]$& 0& $\left[2\times 2\right]$\\ $\textit{Conv2d\_2}$& $\left[batch,8,8,16\right]$& 3216& $\left[5 \times 5 \right]\times 16$\\ $\textit{MaxPooling2d\_2}$& $\left[batch,4,4,16 \right]$& 0&$\left[2\times 2\right]$ \\ $\textit{Flatten\_1}$& $\left[batch,256\right]$& 0& $\textit{None}$ \\ $\textit{Dense\_1}$& $\left[batch,10\right]$&2570&$\textit{None}$ \\ \hline \end{tabular} \label{table:cnn} \end{center} \end{table} \end{comment} Here we choose a lightweight deep learning network, LeNet with two convolutional layers (kernel size 5$\times$5) and two max pooling layers (kernel size 2$\times$2), as the collaborative training model. Our evaluation is conducted on MNIST, one of the most popular benchmark datasets for machine learning models. \begin{table}[tbp] \caption{Training devices} \label{lbl:mobileDevice} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Type} & \textbf{Name}& \textbf{Operating system}& \textbf{Hardware}&\textbf{Browser}\\ \hline Smartphone& iPhone 12& iOS 15 & A14 Bionic& Chrome \\ Smartphone& HUAWEI & HarmonyOS& Kirin 980& Chrome \\ PC& Dell &Ubuntu 18.04& i7 CPU & Firefox \\ PC& Dell &Ubuntu 18.04& i7 CPU & Firefox \\ PC &Mac &Big Sur& M1& Safari \\ \hline \end{tabular} \end{center} \end{table} We deploy the parameter server on a PC. For the clients, we select two smartphones and three PCs to test the WebFed framework; the specific information of the devices is shown in Table \ref{lbl:mobileDevice}. To simplify the experiment, we stipulate that all devices participate in the whole training process. \subsection{Results and Analysis} \subsubsection{Comparison between WebFed and FL} With these experiment settings, we obtain the learning curves of both accuracy and loss for the WebFed framework. Moreover, we also test a conventional federated learning framework that does not run in browsers, called FL for short in the following. In order to control the experimental variables, we apply local differential privacy to both, with a privacy budget of 3. The experiment results are shown in Fig. \ref{lbl:WebFedAccuracy} and Fig. \ref{lbl:WebFedloss}; both frameworks converge after a fixed number of epochs. Fig. \ref{lbl:WebFedAccuracy} compares the accuracy of FL and WebFed and shows that, after the same training time, the accuracy of the traditional federated learning framework is slightly higher than that of WebFed. We also find that WebFed's curve fluctuates more than FL's: when the training time is about 300 seconds, the difference in accuracy between FL and WebFed is obvious, with a gap of about 13\%. As the training continues, however, the gap between the two frameworks keeps shrinking.
We believe this behavior results from the interaction of multiple factors (e.g., the client selection approach, the heterogeneity of the clients, and the quality of the dataset). Indeed, the performance of WebFed stays close to that of FL during training, although a gap remains. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures_v2/acc.png} \caption{The accuracy of WebFed versus FL on MNIST.} \label{lbl:WebFedAccuracy} \end{figure} \subsubsection{Comparison under different privacy budgets} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figures_v2/arxivnoiseacc.png} \caption{The accuracy of WebFed versus epsilon on MNIST.} \label{lbl:noiseAccuracy} \end{figure} To explore the impact of adding artificial noise on the training process, we conduct experiments with different privacy budgets. We consider three cases: $\epsilon = 3$, $\epsilon = 6$, and noise-free (no noise is added). We train the shared model for 200 epochs, and Fig. \ref{lbl:noiseAccuracy} shows the impact of the different privacy budgets on global model accuracy. We find that model accuracy decreases as the privacy budget $\epsilon$ decreases. In the early and middle stages of training, the gap among the three cases is the largest. After 50 rounds of training, the accuracy is about 0.8 for both $\epsilon = 6$ and the noise-free case. With $\epsilon = 3$, the accuracy is the lowest, around 0.72, a gap of about 10\% compared with the other two cases. After 200 epochs of training, the accuracy is about 87\% and 90\% for $\epsilon = 3$ and $\epsilon = 6$, respectively, while the noise-free case achieves the highest accuracy, about 92\%. This is because a smaller privacy budget imposes stronger privacy protection and therefore affects the training effect more. Our experiments on different privacy budgets show that an appropriate privacy budget can strengthen privacy protection while limiting the impact on training performance. \section{Preliminaries}\label{lbl:relatedWork} \subsection{Machine Learning in The Browser} TensorFlow.js is a library developed by Google to build and execute machine learning algorithms in JavaScript, which means the training process can run in the browser or in a Node.js environment \cite{smilkov2019tensorflow}. It not only brings deep learning to JavaScript developers but also enables deep learning applications in web browsers that support WebGL \cite{kletz2021open}. Recently, a browser-based ML application development tool was proposed to reduce the programming burden on researchers \cite{ozan2021novel}. WebSocket is a communication protocol that provides full-duplex communication channels over a single TCP connection. It can reduce communication overhead and offer stateful, efficient communication between web servers and clients, and is thus a good approach for providing real-time information \cite{ogundeyi2019websocket}. With breakthroughs in hardware and communication technology, training on mobile devices has become possible. Recent advancements in the mobile web have enabled features previously found only in natively developed applications \cite{biorn2017progressive}. Hence, we strongly believe that more and more browser-based mobile machine learning applications will appear soon.
\subsection{Privacy Issues in Federated Learning} Federated learning allows training a shared global model with a central server without gathering the clients' private data. By designing a privacy-preserving and bandwidth-efficient federated learning system, an application for the prediction of in-hospital mortality was developed in \cite{kerkouche2021privacy}. Moreover, FFD (Federated learning for Fraud Detection) was proposed to train a fraud detection model on behavior features distributed across banks' local databases, instead of the traditional fraud detection systems that need centralized data \cite{yang2019ffd}. Federated learning has shown great potential for realizing distributed machine learning with privacy protection, as it avoids direct sharing of raw data \cite{li2019asynchronous}. However, privacy can still leak, in the sense of the information an adversary can learn about the users from the model, even though their private data is never gathered \cite{fredrikson2015model, hitaj2017deep}. For example, an adversary with black-box or white-box access to the target model may aim to determine whether a given target data point belongs to the private training data of the target model \cite{nasr2019comprehensive}. Such membership inference attacks are particularly effective against deep neural networks \cite{nasr2019comprehensive}, since models with immense capacity can better memorize their training data. \subsection{Local Differential Privacy} Differential privacy is a probabilistic formulation of privacy. In the centralized architecture, it provides strict guarantees that an algorithm does not leak the private information contained in its inputs \cite{fioretto2020differential}. Local differential privacy (LDP) allows clients to perturb their information locally, providing plausible deniability without the need for a trusted party \cite{8731512}. In \cite{8731512}, the authors exploited novel LDP mechanisms to collect numeric attributes; in terms of worst-case noise variance, these usually achieve better accuracy than existing solutions. The authors of \cite{kairouz2014extremal} focus on the fundamental trade-off between local differential privacy and information-theoretic utility functions, since data providers and analysts want to maximize the utility of statistical inferences performed on the released data. In \cite{8640266}, the authors propose to apply LDP to protect clients' data in distribution estimation. In our WebFed, we enhance privacy protection by applying local differential privacy to add artificial noise to the training results before they are uploaded to the parameter server. \subsection{Towards WebFed} Significantly different from existing works, we are the first to develop a browser-based federated learning framework. Based on the restrictions on the development and deployment of federated learning mentioned above, the emergence of browser-based machine learning libraries, and the cross-platform characteristics of browsers, we propose WebFed, a browser-based federated learning framework with local differential privacy, to facilitate future research related to federated learning as well as the deployment and implementation of real-world applications. \section{Future direction}\label{lbl:futureWork} In the future, to further simplify deployment and increase the flexibility of federated learning, we plan to integrate the server into the browser.
In this way, participants and the server (which could itself be regarded as a participant) only need to select their roles in the browser and form a group for federated learning, rather than deploying a server separately. Furthermore, we will optimize the corresponding mechanisms for clients joining and leaving, and conduct more thorough experiments to evaluate the performance of WebFed. Machine learning in the browser is still in its infancy. We will further study the limitations of in-browser training, improve the existing work, and then publish WebFed as an open-source framework.
\section{Introduction} Modern convolutional neural networks (CNNs) require the optimization of millions of parameters by learning from a vast amount of training examples \cite{imagenet}. While this training data might be readily available for a given source domain, annotated data for the actual domain of interest -- the target domain -- might be nonexistent or very hard to obtain. For example, synthetic images could be generated en masse and used for training, while classifying unannotated real world data (\emph{e.g}\bmvaOneDot medical images) is the actual objective. Unfortunately, the domain shift between the source and target data results in a severely degraded performance when utilizing current classification approaches. \begin{figure}[t!] \vspace{0.25cm} \begin{tabular}{c} \bmvaHangBox{% \includegraphics[width=0.9\textwidth]{images/overview1_helv.pdf}% } \end{tabular} \vspace{0.25cm} \caption{Overview of our proposed resampling strategy. Before each adaptation cycle, the current state of the network is frozen and used to extract class-wise uncertainty distributions quantified by Monte Carlo dropout. Based on these uncertainty distributions, class scores are resampled and converted into a pseudo-label. The instances are then grouped together with similar instances into bins, which are later used for sampling mixed mini-batches containing both source and target examples.} \label{fig:overview_fig} \end{figure} Unsupervised domain adaptation (UDA) seeks to address the domain shift problem under the assumption that no annotated data for the target domain is available. Prior work in this area approached the problem from several different angles: Image and pixel-level methods were proposed for learning a direct image-to-image mapping between the different domains, thereby enabling the transfer of target data into the source domain (or vice versa), where straightforward training is then feasible~\cite{hoffman2017cycada,atapour2018real}. At the feature-level, UDA methods often rely on minimizing common divergence measures between the source and target distributions, such as the maximum mean discrepancy (MMD) \cite{can}, Kullback-Leibler divergence~\cite{meng2018adversarial} or by enforcing features from both domains to be indistinguishable with the help of adversarial training~\cite{hoffman2017cycada,simnet}. However, these approaches often require complicated training setups and additional stages such as domain discriminators \cite{simnet}, gradient reversal layers~\cite{ganin2014unsupervised} or other domain-specific building blocks~\cite{dsbn}. This adds both millions of parameters to be optimized and also further hyperparameters that have to be tuned. In this paper, we instead rely on a model's inherent prediction uncertainty for the unsupervised domain adaptation task, which we quantify by Monte Carlo dropout~\cite{mcdropout}. Our proposed method leverages the extracted uncertainty for dynamic resampling, assignment and reweighting of pseudo-labels and can be directly applied to any off-the-shelf neural network architecture without any modification. Furthermore, we propose domain specific smoothing (DSS) -- a label smoothing based improvement to the training pipeline for UDA setups. To summarize, our contributions are as follows: (i) We propose UBR$^2$S -- the uncertainty-based resampling and reweighting strategy utilizing a model's prediction uncertainty under Monte Carlo dropout. (ii) We introduce DSS -- the domain specific smoothing operation for UDA tasks. 
(iii) We evaluate our method on multiple common UDA benchmarks and achieve state-of-the-art results on both single and multi-source adaptation tasks. (iv) We show that UBR$^2$S works with a plethora of network architectures and outperforms recent methods while using only a fraction of their parameters. \section{Related Work} Unsupervised domain adaptation has been the subject of many prior works in recent years and was addressed in multiple different ways. The authors of~\cite{atapour2018real} leverage GAN-based style transfer in order to translate synthetic data into the real domain and achieve domain adaptation for a monocular depth estimation task. Similarly, Deng ~\emph{et al}\bmvaOneDot~\cite{deng2018image} apply unsupervised image-to-image translation for their person re-identification task and consider the self-similarity and domain-dissimilarity of source, target and translated images. Instead of aligning domains at the image-level, prior methods have also considered alignment at the feature-level: One of the first works in this direction was the gradient reversal layer (\textit{RevGrad}) proposed by Ganin~\emph{et al}\bmvaOneDot~\cite{ganin2014unsupervised}. They achieve domain-invariant feature representations by forcing the feature distributions from the source and target domain to be as indistinguishable as possible with a domain classifier. Related to this, Pinheiro~\cite{simnet} proposes a similarity-based classifier that combines categorical prototypes with domain-adversarial learning. Park~\emph{et al}\bmvaOneDot~\cite{park2018adversarial} show that training with their proposed adversarial dropout can also help to improve generalization. In place of adversarial training, distribution divergence measures have also been applied successfully in order to align the source and target domain at the feature-level: Long~\emph{et al}\bmvaOneDot~\cite{long2013transfer} use the maximum mean discrepancy (MMD) measure for alignment, which recently was extended by Kang~\emph{et al}\bmvaOneDot~\cite{can} for their proposed CDD loss and also used in the regularizer proposed by Gholami~\emph{et al}\bmvaOneDot~\cite{gholami2017punda}. In a similar way, Meng~\emph{et al}\bmvaOneDot~\cite{meng2018adversarial} utilize the Kullback-Leibler divergence as another distribution difference measure. Recently, Hoffman~\emph{et al}\bmvaOneDot~\cite{hoffman2017cycada} have combined both feature- and image-level approaches in their proposed CyCADA framework, which adapts feature representations by enforcing local and global structural consistency and also employs cycle-consistent pixel transformations. In terms of pseudo-labeling target instances, Saito~\emph{et al}\bmvaOneDot~\cite{saito2017asymmetric} propose an asymmetric training method that consists of three separate networks where two networks act as pseudo-labelers and the third network is trained on said labels to obtain discriminative representations for the target domain samples. Related to this, Zhang~\emph{et al}\bmvaOneDot~\cite{zhang2018collaborative} propose an iterative pseudo-labeling and sample selection approach based on an image and domain classifier. This idea is also picked up by Chen~\emph{et al}\bmvaOneDot~\cite{chen2019progressive}, who employ their progressive feature alignment network and an easy-to-hard transfer strategy for iterative training. 
Another direction was pursued by Chang~\emph{et al}\bmvaOneDot~\cite{dsbn}, who propose the use of domain-specific batch-normalization layers in order to deal with the distribution shift between domains. This concept was also employed in the contrastive adaptation network (CAN) proposed by~\cite{can} and proved to capture the domain specific image distributions. Most related to our work in this paper, Long~\emph{et al}\bmvaOneDot~\cite{long2018conditional} have explored the use of uncertainty for their proposed CDAN architecture and achieve domain adaptation by controlling the classifier uncertainty to guarantee transferability between domains. Han~\emph{et al}\bmvaOneDot~\cite{han2019unsupervised} quantify model uncertainty under a general Rényi entropy regularization framework and utilize it for calibration of the prediction uncertainties between the source and target domain. Gholami~\emph{et al}\bmvaOneDot~\cite{gholami2017punda} consider the Shannon entropy of probability vectors in order to minimize a classifier's uncertainty on unlabeled target domain instances. Similar to the approaches above, Manders~\emph{et al}\bmvaOneDot~\cite{manders2018adversarial} propose an adversarial training setup forcing prediction uncertainties to be indistinguishable between domains. In this work, we explore the usage of prediction uncertainties quantified under the Monte Carlo dropout~\cite{mcdropout} approximation of Bayesian inference. Unlike prior work, our proposed UBR$^2$S method leverages a model's prediction uncertainty for dynamic resampling, assignment and reweighting of pseudo-labels. Our method does not require any image- or feature-level adjustments and can thus be applied to any off-the-shelf neural network. Nevertheless, it still achieves state-of-the-art results and is also competitive when using smaller feature extractors with a fraction of the usual parameters. \section{Methodology} Unsupervised domain adaptation (UDA) seeks to address the domain shift between a source and target domain in order to maximize a model's generalization performance on the target domain while only given annotated data for the source domain. Formally, the annotated source dataset $\mathcal{D}_\mathcal{S}$ consists of input-label pairs $\{x_i^s, y_i^s\} \in \mathcal{D}_\mathcal{S}$ while the target dataset $\mathcal{D}_\mathcal{T}$ only contains unlabeled inputs $\{x_i^t\}$. Labels $y_i^s$ are elements of class set $\mathcal{C}=\{1, 2, \cdots, N\}$ with $N$ classes. Given this definition, the objective of UDA tasks is to produce accurate predictions $y_i^t$ for every input $x_i^t$ of the target domain dataset. The method discussed in this paper is presented in the context of deep neural networks that consist of a convolutional neural network (CNN) feature extractor $f(\cdot)$ followed by a classifier $g(\cdot)$ that projects $f$'s output into a probability distribution over the class set $\mathcal{C}$. A key part of our proposal is the concept of uncertainty quantified by Monte Carlo dropout (MCD)~\cite{mcdropout}, which we will now formally introduce. Let $\mathcal{M}$ be a set of size $\lvert \mathcal{M}\rvert$ containing binary masks $m_{1..\lvert \mathcal{M}\rvert}$ sampled from a Bernoulli distribution according to the Monte Carlo dropout rate, where $\lvert \mathcal{M}\rvert$ represents the number of MCD iterations.
We then evaluate all dropout-masked classifiers $g_{m}$, $m \in \mathcal{M}$, for a given target domain sample $x_i^t$ and quantify its class-wise uncertainties by the mean $\mu$ and standard deviation $\sigma$ for every class $c \in \mathcal{C}$ as follows: \begin{equation} \mu_c(x_i^t) = \frac{1}{\lvert \mathcal{M}\rvert} \sum_{m \in \mathcal{M}} \left[ g_{m}\left(f\left(x_i^t\right)\right)\right]_c,\quad \sigma_c(x_i^t) = \sqrt{\frac{1}{\lvert \mathcal{M}\rvert-1} \sum_{m \in \mathcal{M}} \left( \left[g_{m}\left(f\left(x_i^t\right)\right)\right]_c - \mu_c(x_i^t) \right)^2} \label{eq:mcd_mustd} \end{equation} \newline For the sake of clarity, $\mu_c(x_i^t)$ will be shortened as $\mu_{i,c}$ in the following paragraphs ($\sigma_{i,c}$ likewise). With these prerequisites, we will now introduce our proposed uncertainty-based resampling and reweighting strategy (UBR$^2$S). \subsection{Resampling Strategy}\label{section:resampling_strat} For unsupervised domain adaptation tasks, annotations are only available for the source domain. These are oftentimes used for model initialization via supervised pretraining and allow for initial pseudo-label estimates on the unannotated target domain. Due to the domain shift between source and target instances, these initial estimates are inherently noisy. Despite their noisy nature, recent research in this area \cite{dsbn,zhang2018collaborative} often relies on the class predicted with maximum probability score in order to generate a pseudo-label, thereby neglecting the possibility of other classes. For UDA tasks, however, a model's maximum probability prediction for target domain data often does not correspond with the ground truth class after the \textit{source-only} pretraining stage. Our resampling strategy will thus consider predictions other than the maximum for assignment of a pseudo-label. Given $f$ and $g$ after supervised pretraining on the source domain dataset, we start by extracting uncertainty measures $\mu_{i,c}$ and $\sigma_{i,c}$ for the c-th class of the i-th target domain sample (see Equation~\ref{eq:mcd_mustd}). As this is a continuous distribution, we first resample the i-th target instance's probability scores as $\Tilde{p}_{i,c} \sim \mathcal{N}\left(\mu_{i,c}, \sigma_{i,c}\right)$ in order to obtain discrete values. Subsequently, $\Tilde{p}_i$ is re-normalized and then used for the assignment of a pseudo-label $\psi(\frac{\Tilde{p}_i}{\sum_j \Tilde{p}_{i,j}}) = \Tilde{y}_i$ where $\psi: \mathbb{R}^{\lvert \mathcal{C} \rvert} \rightarrow \mathcal{C}$ is the weighted random sample function. This thus enables the usage of classes with non-maximum prediction scores for pseudo-labels based on the model's own predictive uncertainty. The resampling step ends by assigning the i-th target sample to the bin $\mathcal{B}^\mathcal{T}_{\nu_i}$ based on $\nu_i = \mathrm{argmax}_{c \in \mathcal{C}}\, \mu_{i,c}$. Here, bins are groups of similar samples that are later used for construction of mini-batches (see Section~\ref{section:training_loop}). The above resampling process of our proposed method is also visualized in Figure~\ref{fig:overview_fig}.
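To illustrate, the following is a minimal NumPy sketch of this resampling step for a single target sample. The function name is ours, the MCD softmax outputs (one row per dropout mask, computed as in Equation~\ref{eq:mcd_mustd}) are assumed given, and the clipping of resampled scores to positive values is our assumption so that re-normalization yields a valid distribution.

\begin{verbatim}
import numpy as np

def resample_pseudo_label(mcd_probs, rng=None):
    # mcd_probs: shape (|M|, N), softmax outputs of the |M| MCD passes.
    rng = rng or np.random.default_rng()

    # Class-wise uncertainty statistics mu_c and sigma_c.
    mu = mcd_probs.mean(axis=0)
    sigma = mcd_probs.std(axis=0, ddof=1)     # 1/(|M|-1) estimator

    # Resample discrete scores from N(mu_c, sigma_c) and re-normalize.
    p_tilde = rng.normal(mu, sigma)
    p_tilde = np.clip(p_tilde, 1e-12, None)   # assumption: keep scores positive
    p_tilde /= p_tilde.sum()

    # psi: weighted random sampling of the pseudo-label.
    y_tilde = int(rng.choice(len(p_tilde), p=p_tilde))

    # Bin assignment via the argmax of the mean prediction.
    nu = int(np.argmax(mu))
    return y_tilde, nu, p_tilde
\end{verbatim}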
\subsection{Reweighting Strategy}\label{sec:reweighting_strat} Resampling a pseudo-label as described above allows for the consideration of non-maximum predictions. However, this is accompanied by the inherent risk of sampling the wrong class. After all, the maximum prediction is a reasonable estimate for some instances. We thus need to compensate for the potential decision error when choosing one class over another and also consider the likelihood of the current resampled value. We calculate the sample likelihood (SL) as $\lambda_{\mathrm{SL}}^i = 1 - \left. \frac{\lvert \Tilde{p}_{i,\Tilde{y}_i} - \mu_{i,\Tilde{y}_i} \rvert}{2 \sigma_{i,\Tilde{y}_i}} \right\vert_0^1$ where $\cdot \big\vert_0^1$ clamps the value into range $[0, 1]$. Intuitively, this reflects the likelihood of the resampling step by measuring the deviation from the mean. However, it does not consider the risk involved with choosing the wrong class in the first place. For this reason, we determine this decision error based on the classes' uncertainty distributions by calculating the inverse of the cumulative probability $\Phi$ \wrt $\Tilde{p}_{i,\Tilde{y}_i}$ as per Equation~\ref{eq:basic}: \begin{equation} \varphi(i, c) = 1 - \Phi(\Tilde{p}_{i, \Tilde{y}_i}, \mu_{i,c}, \sigma_{i,c}), \,\, \text{with}\;\; \Phi(x, \mu, \sigma) = \frac{1}{2} \left[ 1 + \mathrm{erf}\left(\frac{x-\mu}{\sigma \sqrt{2}}\right) \right] \label{eq:basic} \end{equation} \newline Here, $\mathrm{erf}$ denotes the Gauss error function. The final decision error $\lambda_{\mathrm{DE}}$ is then given by Equation~\ref{eq:final_de} and calculated \wrt every class besides the current label estimation $\Tilde{y}_i$. The graphical interpretation of this procedure is visualized in Figure~\ref{fig:decision_error}a. \begin{equation} \lambda_{\mathrm{DE}}^i = 1 - \mathrm{max}\left(\{\varphi \left(i, c\right) |\; \forall c \in \mathcal{C}\setminus \Tilde{y}_i\} \right) \label{eq:final_de} \end{equation} \newline Finally, we normalize the product of $\lambda_{\mathrm{SL}}$ and $\lambda_{\mathrm{DE}}$ to a distribution with its center point at 1 and use it to dynamically reweigh the element-wise loss while training on target domain samples. Therefore, the loss contribution of a given sample depends on the certainty of the currently chosen pseudo-label. An in-depth description of this procedure will be given in Section~\ref{section:training_loop}. \subsection{Domain Specific Smoothing} \label{section:dss} This section will now introduce our proposed Domain Specific Smoothing (DSS) for UDA training. DSS is based on label smoothing, which was first proposed by Szegedy \emph{et al}\bmvaOneDot \cite{labelsmoothing} and is commonly used to curb overfitting and overconfident predictions. In a normal $N$ class training setup, a label encoding function $v: \mathcal{C} \rightarrow \mathbb{R}^N$ would construct a discrete one-hot probability distribution so that one class is assigned 100\% with all other $N-1$ classes being at 0\%. With label smoothing, $v$ constructs a smoothed label vector by mapping a ground truth label (or estimated pseudo-label) $c \in \mathcal{C}$ into probability space according to Equation \ref{eq:labelsmoothing}. \begin{align} v(c)_i = \begin{cases} 1 - \varepsilon, & c = i \\ \frac{\varepsilon}{\lvert \mathcal{C} \rvert - 1}, & c \neq i \end{cases} \label{eq:labelsmoothing} \end{align} \begin{figure}[t!]
\renewcommand{\arraystretch}{2.0} \begin{tabular}{cc} \bmvaHangBox{\includegraphics[width=0.47\textwidth]{images/error_smaller_helv.pdf}} & \bmvaHangBox{% \begin{minipage}{.575\textwidth} \centering \begin{adjustbox}{width=.45\textwidth} \hspace{-3cm} \setlength{\interspacetitleruled}{0pt}% \setlength{\algotitleheightrule}{0pt}% \begin{algorithm}[H] \SetAlgoLined {\small \textbf{Input.} Source-trained $f$ and $g$ with weights $\theta$\;} \For{$j=1$; $j \le T_\mathrm{cycles}$}{ $\mu, \sigma \leftarrow \mathrm{extractUncertainty}(\mathcal{D}_\mathcal{T}, f, g, \theta)$\; \For{$\text{step}=0$; $\text{step} < T_\mathrm{steps}$}{ \If{$\text{step}\, \mathrm{mod}\, 10 = 0$}{ $\Tilde{p} \leftarrow \mathrm{resample}(\mu, \sigma)$\; $\Tilde{y} \leftarrow \psi(\frac{\Tilde{p}_i}{\sum_n \Tilde{p}_{i,n}})$\; $\forall i: \nu_i \leftarrow \mathrm{argmax}_{c \in \mathcal{C}}\, \mu_{i,c}$\; $\forall i: \mathcal{B}^\mathcal{T}_{\nu_i} \leftarrow \mathcal{B}^\mathcal{T}_{\nu_i} \cup \{x_i^t\} \in \mathcal{D}_\mathcal{T}$\; } $c \leftarrow \mathrm{sampleClasses}(\mathcal{C}, \beta)$\; $b \leftarrow \mathrm{sampleBatch}(\mathcal{B}^\mathcal{S}, \mathcal{B}^\mathcal{T}, c)$\; $\lambda_\mathrm{SL}, \lambda_\mathrm{DE} \leftarrow \mathrm{calcError(\mu, \sigma, \Tilde{p}, \Tilde{y})}$\; $\theta \leftarrow \mathrm{train}(\theta, b, \lambda_\mathrm{SL}, \lambda_\mathrm{DE})$\; } } \end{algorithm} \end{adjustbox} \end{minipage}}\\ (a) Graphical interpretation & \hspace{-2.5cm} (b) Training loop \\ \end{tabular} \caption{(a) Graphical interpretation of our proposed reweighting step. Using resampling provides the opportunity to consider a non-maximum prediction at the cost of possibly sampling a wrong pseudo-label. Our loss reweighting step assesses this risk based on the class-wise uncertainty distributions by calculating the decision error based on $\varphi(i, c)$ and the sample likelihood $\lambda_\text{SL}^i$ for the i-th target instance in an N-class classification task. (b) Training loop of UBR$^2$S.} \label{fig:decision_error} \end{figure} Here, $\varepsilon$ is the smoothing factor. When training with a cross entropy loss, probabilities are needed for the loss calculation. However, neural networks usually output unnormalized logits. Softmax normalization is thus applied to convert the logits into probabilities: $\vartheta(\ell)_i = \frac{e^{\ell_i}}{\sum_j e^{\ell_j}}$ where $\ell$ is the logit vector. The cross entropy loss is then given as $-\sum_i y_i\, \mathrm{log}\, \vartheta(\ell)_i$. Let output logit vector $\ell = [l_0, l_1]$ and training target $y = [1.0, 0.0]$ be subject of a training step with 2 classes. The objective of training with cross entropy loss is the assignment of target class $c_0$ to 100\% and $c_1$ to 0\%. Because of softmax normalization, this can only be the case when logit $\ell_0\rightarrow\infty$ or $\ell_1\rightarrow-\infty$. Due to limited floating point precision, this happens before approaching infinity in practice: $\ell=[19, 0]$ already results in a $\vartheta(\ell)\approx [1.0, 0.0]$ assignment. However, when using the label smoothed target $\overline{y}=[0.8, 0.2]$, logits $\ell=[2\ \mathrm{log}\ 2, 0.0]$ are already enough to match the target probabilities of $\overline{y}$ (with $\varepsilon=0.2$). As softmax is invariant to constant addition, $\ell=[q+2\ \mathrm{log}\ 2, q]$ leads to the same result for any choice of $q \in \mathbb{R}$.
The absolute difference needed between $\ell_0$ and $\ell_1$ is therefore multiple times smaller for the smoothed target ($\frac{2\ \mathrm{log}\ 2}{19} \approx 0.07$). For training with target $y$, this has the consequence of rapidly growing weights prior to the output layer in order to boost the logit values and thereby minimize the cross entropy loss. This, however, is a prime example of overfitting as stated by Krogh~\emph{et al}\bmvaOneDot~\cite{krogh1992simple} and is also in conflict with Lawrence~\emph{et al}\bmvaOneDot~\cite{lawrence2000overfitting}, who found that smaller weights tend to generalize better. Thus, label smoothing can be seen as a regularization method that diminishes this adverse influence on training. Similar to prior UDA setups~\cite{zhang2018collaborative,can}, our method constructs mini-batches using instances from both the source and target domain. While the presence of ground truth source instances can diminish the effect of wrong target pseudo-labels, constant training on source data will make the model overly focus on this domain even though the adaptation to the target domain is the actual objective. This is in conflict with prior research, which indicates that good transferability and generalization require a non-saturated source classifier~\cite{chen2019progressive}. We extend this idea to domain adaptation tasks and propose domain specific smoothing (DSS): with DSS, label smoothing is only applied to source instances, even when training with a mixed batch. This leads to a mixture of one-hot pseudo-labels for the target domain instances and smoothed ground truth labels for the source instances. As later shown in our experiments, this helps to improve the generalization performance after the pretraining on source domain data and also improves the domain adaptation capabilities of the final model.
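As an illustration, the following is a minimal PyTorch-style sketch of DSS when building the training targets for a mixed mini-batch; the helper names are ours and the default smoothing factor is our arbitrary choice.

\begin{verbatim}
import torch

def smooth_one_hot(labels, num_classes, eps):
    # Label smoothing: 1-eps on the target class and eps/(C-1)
    # spread uniformly over the remaining classes.
    target = torch.full((labels.size(0), num_classes),
                        eps / (num_classes - 1))
    target.scatter_(1, labels.long().unsqueeze(1), 1.0 - eps)
    return target

def dss_targets(src_labels, tgt_pseudo_labels, num_classes, eps=0.1):
    # DSS: smooth only the source ground truth labels; keep plain
    # one-hot targets for the target-domain pseudo-labels.
    src = smooth_one_hot(src_labels, num_classes, eps)
    tgt = smooth_one_hot(tgt_pseudo_labels, num_classes, eps=0.0)
    return torch.cat([src, tgt], dim=0)
\end{verbatim}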
\begin{figure}[t!] \vspace{0.25cm} \renewcommand{\arraystretch}{2.0} \begin{tabular}{cccc} \bmvaHangBox{ \begin{minipage}[c][1\width]{0.21\textwidth} \centering \includegraphics[width=0.5\textwidth]{images/VISDA/src_3_101__44_236_150.png}% \includegraphics[width=0.5\textwidth]{images/VISDA/bicycle_124728.jpg}\\ \includegraphics[width=0.5\textwidth]{images/VISDA/src_3_107__129_10_165.png}% \includegraphics[width=0.5\textwidth]{images/VISDA/bicycle_124785.jpg} \end{minipage}} & \bmvaHangBox{ \begin{minipage}[c][1\width]{0.21\textwidth} \centering \includegraphics[width=0.5\textwidth]{images/OC/headphones/amazon.jpg}% \includegraphics[width=0.5\textwidth]{images/OC/headphones/caltech.jpg}\\ \includegraphics[width=0.5\textwidth]{images/OC/headphones/dslr.jpg}% \includegraphics[width=0.5\textwidth]{images/OC/headphones/webcam.jpg}\\ \end{minipage}} & \bmvaHangBox{ \begin{minipage}[c][1\width]{0.21\textwidth} \centering \includegraphics[width=0.5\textwidth]{images/Office31/amazon/frame_0004.jpg}% \includegraphics[width=0.5\textwidth]{images/Office31/dslr/frame_0001.jpg}\\ \includegraphics[width=0.5\textwidth]{images/Office31/amazon/frame_0035.jpg}% \includegraphics[width=0.5\textwidth]{images/Office31/webcam/frame_0019.jpg} \end{minipage}} & \bmvaHangBox{ \begin{minipage}[c][1\width]{0.21\textwidth} \centering \includegraphics[width=0.5\textwidth]{images/OH/00003.jpg}% \includegraphics[width=0.5\textwidth]{images/OH/00004.jpg}\\ \includegraphics[width=0.5\textwidth]{images/OH/00017.jpg}% \includegraphics[width=0.5\textwidth]{images/OH/00019.jpg}\\ \end{minipage}}\\ (a) VisDA 2017 & (b) Office-Caltech & (c) Office-31 & (d) Office-Home\\ \end{tabular} \caption{From left to right: VisDA 2017~\cite{visda2017} with domains synthetic (train set) and real (validation and test set), Office-Caltech~\cite{officecaltech} with domains Amazon, Caltech, DSLR and Webcam, Office-31~\cite{office31} with domains Amazon, DSLR and Webcam and Office-Home~\cite{officehome} with domains Art, Clipart, Product and Real World.} \label{fig:datasets} \end{figure} \subsection{Training Loop} \label{section:training_loop} With the major parts of our training pipeline described above, we provide an overview of the UBR$^2$S training process in Figure~\ref{fig:decision_error}b. After the uncertainty extraction and resampling process, $\beta$ classes are sampled from class set $\mathcal{C}$. For every class $c$, $\frac{\lvert b \rvert}{2 \beta}$ samples are randomly drawn from the source and target bins $\mathcal{B}_c^{\mathcal{S}}$ and $\mathcal{B}_c^{\mathcal{T}}$ where $\lvert b \rvert$ is the batch size. The source bins are constructed based on the available ground truth labels while the target bins are reconstructed based on the label estimation described in Section~\ref{section:resampling_strat}. Finally, we calculate the sample likelihood and decision error for all target instances in a mini-batch and use them to reweigh their element-wise loss during training. Given the k-th target example $x_k$ in a mini-batch with a total of $K$ target instances, we compute our proposed reweighted cross entropy loss as Equation~\ref{eq:target_loss} with weight $\omega$, where $v(\cdot)$ represents the label smoothing function from Section~\ref{section:dss} when using DSS and the one-hot encoding function otherwise. \begin{equation} \mathcal{L}(x_k, \Tilde{y}_k)= -\omega_k \sum_{c \in \mathcal{C}} v(\Tilde{y}_k)_c\, \mathrm{log}\left[ g\left(f\left(x_k\right)\right)_c \right],\; \mathrm{with}\;\; \omega_k = \frac{\lambda_\mathrm{DE}^k\lambda_\mathrm{SL}^k}{\frac{1}{K} \sum_j^K \lambda_\mathrm{DE}^j\lambda_\mathrm{SL}^j} \label{eq:target_loss} \end{equation} \newline For source domain instances, weight $\omega$ is set to 1. Parameters $\theta$ of $f$ and $g$ are then updated by backpropagation according to this loss. Subsequent iterations use the updated weights $\theta$ for the uncertainty extraction and training process. Therefore, only a single neural network is needed during the complete training and adaptation phase.
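A minimal NumPy/SciPy sketch of this reweighting computation is given below; the function names are ours, and the per-sample quantities (resampled score of the chosen class, pseudo-label, and the class-wise $\mu$, $\sigma$) are assumed to come from the resampling step.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def sample_likelihood(p_tilde_y, mu_y, sigma_y):
    # lambda_SL: 1 - |p~ - mu| / (2 sigma), clamped to [0, 1].
    return 1.0 - np.clip(np.abs(p_tilde_y - mu_y) / (2.0 * sigma_y), 0.0, 1.0)

def decision_error(p_tilde_y, mu, sigma, y_tilde):
    # lambda_DE: varphi(i, c) = 1 - Phi(p~; mu_c, sigma_c) for every
    # class except the chosen pseudo-label, then 1 - max(varphi).
    phi = 1.0 - norm.cdf(p_tilde_y, loc=mu, scale=sigma)
    phi = np.delete(phi, y_tilde)
    return 1.0 - phi.max()

def batch_weights(lam_de, lam_sl):
    # Normalize lambda_DE * lambda_SL over the K target samples so the
    # resulting weights omega_k have mean 1.
    prod = lam_de * lam_sl
    return prod / prod.mean()
\end{verbatim}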
\section{Experiments} \textbf{Datasets}. We evaluate our proposed UBR$^2$S method on four public benchmark datasets: \textit{VisDA 2017} (also known as Syn2Real-C) \cite{visda2017} is a large scale dataset for the synthetic to real UDA task. It contains 12 classes in three domains: train (152,397 synthetic 3D renderings), validation (55,388 real world images from MS COCO~\cite{mscoco}) and test (72,372 real world images from YouTube Bounding-Boxes~\cite{real2017youtube}). For comparison to state-of-the-art methods, we calculate the mean class accuracy according to the challenge evaluation protocol unless otherwise noted. \textit{Office-31}~\cite{office31} is one of the most used UDA datasets and contains 4,110 images of 31 classes in a generic office setting. The images come from the three domains Amazon (product images), DSLR and Webcam. The \textit{Office-Caltech}~\cite{officecaltech} dataset is constructed from the 10 overlapping classes between the Caltech-256~\cite{caltech} and Office-31~\cite{office31} datasets for a total of four domains: Amazon (958), Caltech (1,123), DSLR (157) and Webcam (295). The \textit{Office-Home}~\cite{officehome} dataset offers 65 challenging classes from everyday office life. Its four domains Art, Clipart, Product and Real World contain a total of 15,588 images. Example images from all datasets are shown in Figure~\ref{fig:datasets}. \newline \textbf{Setup}. For our experiments, we follow the standard unsupervised domain adaptation setup (see \cite{can,dsbn}) and use all labeled source domain and all unlabeled target domain images for training. For a detailed description of the employed setup and training procedure, please refer to Appendix~\ref{sec:implementation}.
\begin{table}[t] \centering \resizebox{0.7\textwidth}{!}{ \begin{tabular}{cccrrrrrrrr} \toprule \rotatebox{0}{DSS\textsubscript{Pre}} & \rotatebox{0}{DSS\textsubscript{Ada}} & \rotatebox{0}{\small Reweigh} & \rotatebox{0}{S$\underset{\text{Pre}}{\rightarrow}$R\textsubscript{test}} & \rotatebox{0}{S$\underset{\text{}}{\rightarrow}$R\textsubscript{test}} & \rotatebox{0}{Ar$\underset{\text{Pre}}{\rightarrow}$Cl} &\rotatebox{0}{Ar$\underset{\text{}}{\rightarrow}$Cl} & \rotatebox{0}{Pr$\underset{\text{Pre}}{\rightarrow}$Ar} & \rotatebox{0}{Pr$\underset{\text{}}{\rightarrow}$Ar}\\\toprule $\times$ & $\times$ & $\times$ & 49.4 & 78.7 & 44.0 & 52.5 & 51.1 & 58.1\\ $\mathcal{S}$ & $\times$ & $\times$ & \textbf{51.3} & 82.5 & \textbf{46.0} & 54.1 & \textbf{54.1} & 58.8 \\ $\mathcal{S}$ & $\mathcal{S}$ & $\times$ & \textbf{51.3} & 83.8 & \textbf{46.0} & 54.6 & \textbf{54.1} & 61.8\\ $\mathcal{S}$ & $\mathcal{T}$ & $\times$ & \textbf{51.3} & 67.0 & \textbf{46.0} & 38.6 & \textbf{54.1} & 47.5\\ $\mathcal{S}$ & $\mathcal{S},\mathcal{T}$ & $\times$ & \textbf{51.3} & 67.8 & \textbf{46.0} & 47.4 & \textbf{54.1} & 53.6\\ $\mathcal{S}$ & $\mathcal{S}$ & SL & \textbf{51.3} & 85.4 & \textbf{46.0} & 56.7 & \textbf{54.1} & 64.1\\ $\mathcal{S}$ & $\mathcal{S}$ & DE & \textbf{51.3} & 89.5 & \textbf{46.0} & 57.4 & \textbf{54.1} & 65.1\\\hline\Tstrut $\mathcal{S}$ & $\mathcal{S}$ & DE+SL & \textbf{51.3} & \textbf{89.8} &\textbf{ 46.0} & \textbf{58.3} & \textbf{54.1} & \textbf{67.0}\\ \toprule \end{tabular}} \caption{Ablation study on the VisDA 2017 S$\rightarrow$R\textsubscript{test} (mean class accuracy) and Office-Home Ar$\rightarrow$Cl, Pr$\rightarrow$Ar tasks (accuracy). Transfer tasks marked with \raisebox{0.4em}{$\underset{\text{Pre}}{\rightarrow}$} indicate results after the source only pretraining and before the adaptation step. The last table row represents our full UBR$^2$S method. \label{table:ablation}} \end{table} \subsection{Results} \textbf{Ablation Study}. We first conduct an ablation study covering every part of our proposed UBR$^2$S method. For this, we use ResNet-101 on the VisDA 2017 S$\rightarrow$R (test set) task as well as ResNet-50 on the Ar$\rightarrow$Cl and Pr$\rightarrow$Ar transfer tasks from Office-Home. Results are reported in Table~\ref{table:ablation}. First, we examine our proposed domain specific smoothing (DSS) method. Our baseline does not use DSS during pretraining (DSS$_\text{Pre}^\times$) or during the adaptation phase (DSS$_\text{Ada}^\times$). Expectedly, this baseline performs the worst for all three transfer tasks due to overfitting on the source domain. Applying DSS to the source samples during the pretraining phase (DSS$_\text{Pre}^\mathcal{S}$) already improves pretrained results by almost 2\% for VisDA (49.4\% to 51.3\%) and final results by almost 4\% (78.7\% to 82.5\%). Similar trends can be observed for the Office-Home transfer tasks. Concerning DSS$_\text{Ada}$, we evaluate all possible combinations $\{\times, \mathcal{S}, \mathcal{T}, \mathcal{S}+\mathcal{T}\}$. We find that applying label smoothing to the target domain (DSS$_\text{Ada}^\mathcal{T}$ and DSS$_\text{Ada}^{\mathcal{S},\mathcal{T}}$) has a negative impact on the model's accuracy. This is consistent for all three transfer tasks and can reduce accuracy by up to 16.8\%.
Conversely, adding label smoothing to the source domain always improves performance, considering both the DSS$_\text{Ada}^\times$ to DSS$_\text{Ada}^\mathcal{S}$ and DSS$_\text{Ada}^\mathcal{T}$ to DSS$_\text{Ada}^{\mathcal{S},\mathcal{T}}$ transitions. Overall, DSS$_\text{Pre}^\mathcal{S}$ with DSS$_\text{Ada}^\mathcal{S}$ consistently achieved the best results. This confirms our hypothesis from Section~\ref{section:dss} and implies that a non-saturated source classifier is needed for good transferability and generalization to new domains in UDA tasks. Further ablation studies are shown in the appendices. We continue by examining the remaining parts of our UBR$^2$S method in Table~\ref{table:ablation} using the best performing DSS$_\text{Pre}^\mathcal{S}$ with DSS$_\text{Ada}^\mathcal{S}$ setup as baseline. As mentioned in Section~\ref{sec:reweighting_strat}, the resampling process in UBR$^2$S comes with an inherent risk and the possibility for errors. This risk can be partially measured with the help of our proposed sample likelihood (SL) and used for reweighting, which already improves results by 1.6\% for VisDA and up to 2.3\% for the Office-Home tasks. It is also important to assess the current label estimation and how the chosen pseudo-label compares to other potential candidates. This is covered by our proposed decision error (DE), which -- when solely used for reweighting -- can improve results by 5.7\% for VisDA and up to 3.3\% for the Office-Home tasks. Collectively, the combination of SL and DE can further improve results by 6.0\% for VisDA and up to 5.2\% for Office-Home over the respective baselines and constitutes our full UBR$^2$S method. \begin{table*}[t] \centering \resizebox{1.0\textwidth}{!}{ \begin{tabular}{l@{\extracolsep{4pt}}rrrrrrrrrrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{3}{c}{A} & \multicolumn{3}{c}{C} & \multicolumn{3}{c}{D} & \multicolumn{3}{c}{W} & \multirow{2}{*}{Avg.}\\\cline{2-4}\cline{5-7}\cline{8-10}\cline{11-13} & C & D & W & A & D & W & A & C & W & A & C & D & \\\toprule CORAL \cite{sun2016return,rwa} & 89.2 & 92.2 & 91.9 & 94.1 & 92.0 & 92.1 & 94.3 & 87.7 & 98.0 & 92.8 & 86.7 & \textbf{100.0} & 92.6\\ GTDA+LR \cite{vascon2019unsupervised} & 91.5 & 98.7 & 94.2 & 95.4 & 98.7 & 89.8 & 95.2 & 89.0 & 99.3 & 95.2 & 90.4 & {\textbf{100.0}} & 94.8\\ RWA \cite{rwa} & 93.8 & {98.9} & {97.8} & 95.3 & \underline{99.4} & 95.9 & 95.8 & 93.1 & 98.4 & 95.3 & 92.4 & \underline{99.2} & 96.3\\ PrDA \cite{hua2020unsupervised} & 92.1 & 99.0 & \underline{99.3} & \textbf{97.2} & \underline{99.4} & 98.3 & 94.7 & 91.0 & \underline{99.7} & 95.6 & 93.4 & \textbf{100.0} & 96.6 \\ Rakshit \emph{et al}\bmvaOneDot \cite{ccduda}$\ast$ & 92.8 & {98.9} & 97.0 & {96.0} & {99.0} & 97.0 & \underline{96.5} & {\textbf{97.0}} & 99.5 & 95.5 & 91.5 & {\textbf{100.0}} & {96.8}\\ ACDA \cite{zhang2021adversarial} & \underline{93.9} & \textbf{100.0} & \textbf{100.0} & 96.2 & \textbf{100.0} & \textbf{100.0} & \textbf{96.7} & \underline{93.9} & \textbf{100.0} & \textbf{96.6} & 93.9 & \textbf{100.0} & \textbf{97.6} \\\hline\Tstrut \textbf{UBR$^2$S (ours)} & \textbf{95.5} & \underline{99.4} & \underline{99.3} & \underline{96.6} & 94.9 & \underline{99.7} & 96.2 & \underline{95.3} & \textbf{100.0} & \underline{96.2} & \textbf{95.4} & \textbf{100.0} & \underline{97.4} \\ \toprule \end{tabular}} \caption{\label{table:officecaltech_comparison}Classification accuracy (in \%) for different methods using ResNet-50 on the \textbf{Office-Caltech} dataset with domains Amazon, Caltech, DSLR and 
Webcam. The method marked with $\ast$ uses an ensemble setup with multiple classifiers.} \end{table*} \begin{table*}[t] \centering \begin{adjustbox}{width=1.\textwidth} \begin{tabular}{lrrrrrrrrrrrrr} \toprule Method & {aero} & {bicyc} & {bus} & {car} & {horse} & {knife} & {motor} & {person} & {plant} & {skate} & {train} & {truck} & {Avg.}\\\toprule Source only & 52.8 & 13.8 & 66.9 & 96.3 & 58.4 & 14.0 & 63.4 & 34.5 & 86.0 & 24.5 & 87.3 & 17.9 & 51.3\\ BUPT \cite{visda2017} & \underline{95.7} & 67.0 & {93.4} & \textbf{97.2} & {90.6} & 86.9 & \textbf{92.0} & 74.2 & \underline{96.3} & 66.9 & \textbf{95.2} & 69.2 & 85.4\\ CAN \cite{can} & --- & --- & --- & --- & --- & --- & --- & --- & --- & --- & --- & --- & 87.4 \\ SDAN~\cite{sdan} & 94.3 & {86.5} & {86.9} & {95.1} & {91.1} & {90.0} & \underline{82.1} & {77.9} & {96.4} & {77.2} & 86.6 & \underline{88.0} & {87.7}\\ UFAL~\cite{ringwald2020unsupervised} & 94.9 & \underline{87.0} & \underline{87.0} & \underline{96.5} & \underline{91.8} & \textbf{95.1} & 76.8 &\textbf{78.9} & \textbf{96.5} & \underline{80.7} & \underline{93.6}& 86.5 & \underline{88.8}\\\hline\Tstrut \textbf{UBR$^2$S (ours)} &\textbf{ 96.6} &\textbf{ 90.8} & \textbf{87.9} & 94.6 & \textbf{92.2} & \underline{92.8} & 77.8 & \underline{78.8} & 95.3 &\textbf{89.2} & 92.6 & \textbf{88.9} &\textbf{89.8}\\ \toprule \end{tabular} \end{adjustbox} \caption{\label{table:visda_test_sota}Per class accuracy (in \%) for different methods on the \textbf{VisDA 2017 test set} as per challenge evaluation protocol. Results are obtained using the common ResNet-101 backbone.} \end{table*} \textbf{Comparison to state-of-the-art}. We continue by comparing UBR$^2$S to other recently proposed approaches. Results for the Office-Caltech dataset are shown in Table~\ref{table:officecaltech_comparison}. Evidently, UBR$^2$S can achieve domain adaptation even for Office-Caltech's 12 diverse transfer tasks. Our proposed method achieves the best or second best result in 10 out of 12 transfer tasks (such as A$\rightarrow$C) and is also on par with ACDA~\cite{zhang2021adversarial} for the overall average. Notably, UBR$^2$S even manages to surpass the ensemble-based setup of Rakshit~\emph{et al}\bmvaOneDot~\cite{ccduda} by 0.6\%. Additionally, we report results for the VisDA 2017 test set in Table~\ref{table:visda_test_sota} and calculate the class accuracies as per challenge evaluation protocol. Our results indicate that UBR$^2$S can also achieve domain adaptation for VisDA's difficult synthetic to real transfer task and outperforms recently proposed methods. With 89.8\% mean class accuracy, UBR$^2$S also outperforms the VisDA challenge submissions SDAN~\cite{sdan} and BUPT~\cite{visda2017}. Given the current VisDA 2017 challenge leaderboard~\cite{visdaleaderboard}, UBR$^2$S would rank second place -- only behind SE~\cite{french2017self}, a 5$\times$ResNet-152 ensemble with results averaged over 16 test time augmentation runs. This, however, is not a fair comparison to our single ResNet-101 model, but still demonstrates UBR$^2$S' competitive UDA capabilities. \textbf{Visualization} Finally, we visualize the embeddings learned by UBR$^2$S in Figure~\ref{fig:tsneplot}. For this, we extract target domain features for the VisDA 2017 test set (real domain) before and after the adaptation phase and project them into 2D space via t-SNE~\cite{maaten2008visualizing}. After the source-only pretraining phase, the model clearly has not learned discriminative representations for target domain samples. 
Features of all classes are accumulated in one big cluster with no clear separation. After UBR$^2$S' unsupervised domain adaptation phase, visually distinct clusters for each of VisDA's 12 classes can be observed. This indicates that UBR$^2$S is also able to learn discriminative target domain representations even in the absence of target domain annotations. We provide further results in the appendices. This includes additional ablation studies, full results on the Office-Home~\cite{officehome} dataset, multi-source UDA results on the Office-31~\cite{office31} dataset, an evaluation with different network backbones and an analysis of training stability. \section{Conclusion} In this paper, we propose UBR$^2$S, the uncertainty-based resampling and reweighting strategy. UBR$^2$S' resampling phase is based on a model's prediction uncertainty quantified by Monte Carlo dropout. As resampling introduces the possibility of sampling a wrong pseudo-label, a dynamic reweighting stage is added to assess and incorporate this risk in the loss calculation. The efficacy of UBR$^2$S is shown on multiple UDA benchmark datasets such as VisDA 2017, Office-Caltech, Office-31 and Office-Home in single and multi-source domain adaptation setups, in which UBR$^2$S outperforms recently proposed methods and achieves state-of-the-art results. Furthermore, we show that UBR$^2$S can be applied to any off-the-shelf CNN and works even with very small networks (such as MobileNetV2) with extremely low parameter counts. Our code is made available on the project website for reproduction of our results and to encourage further research in the area of unsupervised domain adaptation. \begin{figure} \centering \includegraphics[width=0.48\columnwidth]{images/visda_test_source.png}\hfill \includegraphics[width=0.48\columnwidth]{images/visda_test_adapted.png} \caption{Visualizations of the VisDA 2017 test set features using t-SNE. Left: After the source-only pretraining phase. Right: After the adaptation phase using UBR$^2$S.} \label{fig:tsneplot} \end{figure}
\section{Introduction} Semantic segmentation, \emph{i.e.}, assigning a semantic class to each pixel of an image, is a critical task for scene comprehension. It is fraught with challenges, and the state-of-the-art models proposed to tackle them usually have a huge number of parameters. The complexity of these models not only translates to long training and inference times, but also makes them impractical to deploy in real-world scenarios due to the large amount of resources demanded. Moreover, semantic segmentation is often required to work in real time, particularly for robotics applications such as geo-sensing, precision agriculture, and, most notably, autonomous driving. Besides the complexity of the models, the process of collecting and annotating real-world data \cite{Cordts2016Cityscapes} is time-consuming and costly. A successful solution to this issue is to use synthetic data generated by virtual world simulators \cite{Richter2016GTA}, \cite{Ros2016Synthia}, \cite{tavera2020idda}. Despite the much lower cost of collecting and annotating synthetic data, this technique has one major drawback: the domain shift between the virtual and the real world is substantial. Several unsupervised domain adaptation techniques have been proposed to address the domain gap between the synthetic (\textit{source}) and real (\textit{target}) domains; however, because they are not designed to be used in a real-world scenario and rely on a huge number of parameters, they remain vulnerable to resource and training-time limits. To fully solve the real-time domain adaptation problem in semantic segmentation, we require a completely lightweight model with few parameters that can be deployed in practical situations with limited resources. To this end, we redesigned the BiSeNet \cite{bisenetv1} model, tailoring it to the Domain Adaptation challenge and including a novel lighter and thinner fully convolutional domain discriminator (Light\&Thin). To summarize: \begin{itemize} \item we propose a network for real-time domain adaptation in semantic segmentation, using a new lightweight and thin domain discriminator. \item we present an ablation study comparing our Light\&Thin discriminator to a standard domain discriminator and its lightweight variant. \item we test our architecture on two synthetic-to-real scenarios, GTA$\to$Cityscapes and Synthia$\to$Cityscapes, demonstrating the efficacy of our solution. \end{itemize} \section{Related Works} \subsection{Semantic Segmentation and real-time application} Thanks to the use of deep learning techniques, Semantic Segmentation has exploded in popularity in recent years. The current state-of-the-art methods are characterized by the approach employed to exploit semantic information, such as fully convolutional networks \cite{long2015fully}, encoder-decoder architectures \cite{noh2015learning}, \cite{unet}, dilated convolutions \cite{chen2017deeplab}, \cite{chen2018encoder}, \cite{chen2017rethinking} or multi-scale and pyramid networks \cite{zhao2017pyramid}. Because the number of parameters in semantic segmentation networks is on the order of $10^9$ and their real-world application is rising in popularity, several researchers have investigated the feasibility of more lightweight architectures.
The majority of architectures can be divided into two macro-categories: (i) encoder-decoder architectures \cite{mehta2018espnet}, \cite{erfnet}, \cite{Zhang2018ShuffleNetAE}, which cost less at inference time than dilated-convolution methods, and (ii) two-pathway architectures, which address the loss of semantic information during the encoder-decoder mechanism's downsampling and upsampling operations. The BiSeNet family \cite{bisenetv1}, \cite{bisenetv2} is an example of this type of architecture. \subsection{Domain Adaptation} The task of bridging the gap between two different distributions is referred to as domain adaptation. The original answer to this problem is to employ a distance-minimization algorithm, such as MMD \cite{geng2011daml}, although alternative methods that use generative models \cite{li2019bidirectional}, \cite{kim2020learning} to condition one domain into the other have also been employed. The most noteworthy solution is the adversarial training technique \cite{chang2019all}, \cite{vu2019advent}, which consists of a min-max game between the segmentation network and a discriminator, in which the former attempts to trick the latter by making the distributions of the two domains identical. In any case, none of these prior solutions is applicable to real-world scenarios. \section{Method} The proposed algorithm re-imagines BiSeNet, tailoring it to the Unsupervised Domain Adaptation (UDA) task (\ref{sec:setting}). We introduce a novel real-time adversarial domain adaptation framework (\ref{sec:training}) comprising the BiSeNet semantic segmentation model and a unique lightweight and thin discriminator (Fig. \ref{fig:method}) that increases domain alignment and adaptation performance. \subsection{Setting} \label{sec:setting} The set of RGB images composed of $\mathcal{I}$ pixels is denoted by $\mathcal{X}$, and the set of semantic labels linking each pixel $i \in \mathcal{I}$ with a class $c$ from a set of semantic classes $\mathcal{C}$ is denoted by $\mathcal{Y}$. We have two datasets to work with during training: the source $X_{s} = \{(x^{s}, y^{s})\}$, which consists of $|X_{s}|$ semantically annotated images, and the target $X_{t} = \{(x^{t})\}$, which consists of $|X_{t}|$ unlabeled images. The source and target annotation masks belonging to the set of semantic labels $\mathcal{Y}$ are denoted by $y^{s}$ and $y^{t}$. The goal of UDA is to use both the source and target datasets $X_{s}$ and $X_{t}$ to learn a function $f$ that takes as input an image $x$ and outputs a $C$-dimensional segmentation map $P_{h,w,c}(x)$. \subsection{Training} \label{sec:training} Due to the lack of semantic information for the target distribution, we align the features derived from the source and target domains in an adversarial fashion. To do this, as well as to meet our goal of making the network smaller, portable, and deployable on limited-resource devices, we require a different domain discriminator. This is why we developed and tested two different types of lighter discriminators: a less expensive version ($D_{Light}$) of the widely used fully convolutional discriminator \cite{radford2016unsupervised} and a shallow version ($D_{Light\&Thin}$) of the latter. Both discriminators $D$ employ depthwise separable convolutions instead of conventional convolutions and are trained to discriminate between the source and target domains using the following loss: \begin{equation} L_{D}(x^s, x^t) = - \sum_{h,w} \left[ \log D(P^s_{h,w,c}) + \log\left(1-D(P^t_{h,w,c})\right) \right].
\label{eq:loss_discriminator} \end{equation} More details on these two lightweight discriminators are presented in Sec.~\ref{sec:experiments}. The adversarial training is carried out using the features extracted by the semantic segmentation model and the domain prediction coming from a discriminator model. Both models engage in a min-max game in which the discriminator guesses the domain to which a feature belongs and the segmentation network attempts to mislead the discriminator by making features from both domains similar. To achieve this, an adversarial loss $L_{adv}$ is used as follows: \begin{equation} L_{\text{adv}}(x^t) = - \frac{1}{|X_t|} \sum_{h,w} \log D(P^t_{h,w,c}). \label{eq:adversarial_loss} \end{equation} \input{Tables/gta_experiments} \input{Tables/synthia_experiments} We jointly optimize the supervised segmentation loss $L_{seg}$ on source samples and the unsupervised adversarial loss $L_{adv}$ on target samples while training the BiSeNet semantic segmentation model. The total loss function is defined as follows: \begin{equation} \frac{1}{|X_s|} \sum_{(x^s,y^s) \in X_s} L_{\text{seg}}(x^s, y^s) + \sum_{x^t \in X_t} \lambda_{adv} L_{\text{adv}}(x^t), \label{eq:total_loss} \end{equation} where $L_{seg}$ is the standard cross-entropy loss defined as: \begin{equation} L_{\text{seg}}(x^s, y^s) = - \frac{1}{|X_s|} \sum_{h,w} \sum_c y_{h,w,c}^s \log(P^s_{h,w,c}). \label{eq:crossentropy_loss} \end{equation} \section{Experiments}\label{sec:experiments} \subsection{Datasets} We test our model on the two standard synthetic-to-real benchmarks in domain adaptation for semantic segmentation: GTA5$\to$Cityscapes and SYNTHIA$\to$Cityscapes. GTA5 \cite{Richter2016GTA} is made up of 24966 annotated images from the video game of the same name. The standard set of 19 classes, shared with Cityscapes, is used for training and evaluation. SYNTHIA \cite{Ros2016Synthia} is made up of 9400 annotated images from a virtual world, belonging to the RAND-CITYSCAPES subset. The usual 19 classes shared with Cityscapes are utilized for training, whereas the evaluation is performed on 16 classes following the protocol of \cite{vu2019advent}. Cityscapes \cite{Cordts2016Cityscapes} is made up of 2975 real-world images gathered from various German cities. To test our network, we use the entire validation set of 500 images at the original 2048x1024 resolution. \subsection{Implementation details} The segmentation model of our method is BiSeNet \cite{bisenetv1} with the Context Path (see Section 3.2 of \cite{bisenetv1}) initialized with a ResNet-101 \cite{he2016deep} pretrained on ImageNet. The standard discriminator used for the comparison is a common Fully Convolutional Discriminator (FCD) with 5 convolution layers with kernel size $4\times4$, channel numbers $\{64, 128, 256, 512, 1\}$, padding 2 and stride 1. Its lightweight variant (FCD-Light) is obtained by substituting each convolution operation with a depthwise separable convolution \cite{xception}, comprising a depthwise convolution performed independently over each input channel, followed by a pointwise convolution with kernel size $1\times1$. Our thinner version (FCD-Light\&Thin) has only 3 depthwise separable convolution layers with channel numbers $\{64, 128, 1\}$. Each convolution or depthwise separable convolution layer is followed by a Leaky ReLU with negative slope 0.2. PyTorch is used to implement our technique.
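As an illustration, the following is a minimal PyTorch sketch of the FCD-Light\&Thin discriminator and of the two losses above. Some details are our own assumptions rather than a specification of the original implementation: the kernel size, stride, and padding of the thin variant are assumed to be inherited from the FCD ($4\times4$, stride 1, padding 2), the input channel count equals the number of classes $C$, and no activation is applied after the last layer so that it outputs raw domain logits suitable for a binary cross-entropy with logits.
\begin{verbatim}
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise convolution over each input channel followed by a
    # 1x1 pointwise convolution, as in Xception.
    def __init__(self, in_ch, out_ch, kernel_size=4, stride=1, padding=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   stride=stride, padding=padding,
                                   groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class FCDLightThin(nn.Module):
    # Three depthwise separable conv layers with channels {64, 128, 1};
    # hidden layers are followed by LeakyReLU(0.2), while the last layer
    # outputs a per-location domain logit.
    def __init__(self, num_classes=19):
        super().__init__()
        self.model = nn.Sequential(
            DepthwiseSeparableConv(num_classes, 64),
            nn.LeakyReLU(0.2, inplace=True),
            DepthwiseSeparableConv(64, 128),
            nn.LeakyReLU(0.2, inplace=True),
            DepthwiseSeparableConv(128, 1))

    def forward(self, p):
        # p: (B, C, H, W) softmax segmentation map P_{h,w,c}
        return self.model(p)

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(D, p_src, p_tgt):
    # Discriminator loss L_D: source labeled 1, target labeled 0; the
    # segmentation outputs are detached so that only D is updated.
    d_s, d_t = D(p_src.detach()), D(p_tgt.detach())
    return bce(d_s, torch.ones_like(d_s)) + bce(d_t, torch.zeros_like(d_t))

def adversarial_loss(D, p_tgt):
    # Adversarial loss L_adv: push target predictions towards the
    # "source" label to fool the discriminator.
    d_t = D(p_tgt)
    return bce(d_t, torch.ones_like(d_t))
\end{verbatim}
Note that the binary cross-entropy with logits averages over spatial locations, whereas the losses above are written as sums; up to this normalization constant, the sketch mirrors the training objective.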
The segmentation model is trained with batch size $4$ and SGD with an initial learning rate of $2.5\times10^{-4}$, which is then adjusted at each iteration with a ``poly'' learning rate decay with power $0.9$, momentum $0.9$, and weight decay $0.0005$. Adam is used to train all of the discriminators, with momentum $(0.9, 0.99)$, learning rate $10^{-5}$, and the same scheduler as the segmentation model. The model is trained for $30k$ iterations. The value of $\lambda_{adv}$ is set to $0.01$. The training images are resized to $(1024,512)$, whereas the evaluation is done at the original $(2048,1024)$ image resolution. We use the standard Intersection over Union metric to measure the performance of our experiments. \begin{figure*}[ht] \begin{center} \includegraphics[width=1.0\textwidth]{Images/qualitatives.pdf} \end{center} \vspace{-10pt} \caption{ Qualitative results for the GTA$\to$Cityscapes experiment. Starting from the left: RGB, FCD, FCD-Light, FCD-Light\&Thin, Ground Truth. } \vspace{-10pt} \label{fig:qualitative} \end{figure*} \subsection{Results} Table \ref{table:gta_exp} and Table \ref{table:synthia_exp} show the results on GTA$\to$Cityscapes and SYNTHIA$\to$Cityscapes, respectively. By looking at Table \ref{table:gta_exp}, it is clear that using a typical Fully Convolutional discriminator (FCD) we get performances that are approximately half of what we would achieve if we trained directly on the target. When each convolution in this discriminator is replaced with its lightweight counterpart (FCD-Light), we get comparable results with a $+0.29\%$ gain in accuracy. However, as seen in Table \ref{table:parameters}, the number of parameters and FLOPS decreases significantly, as do training and inference time. When the input resolution is 1024x512, the difference in parameters is 2.59 million, while the FLOPS drop from 30.883G to barely 2.14G. When we use our light and shallow discriminator (Light\&Thin), the reduction in parameters and FLOPS is accompanied by an enhancement in accuracy; indeed, our solution improves performance by $+1.33\%$ over the typical FCD. Since the task is classification, a shallow domain discriminator like ours takes fewer epochs to attain a local optimum than a conventional DCGAN discriminator, which would require more epochs and a longer training time to converge. We would like to emphasize that all of these results were collected while training on two TESLA V100 GPUs rather than on commercial hardware such as a Jetson Xavier. The SYNTHIA$\to$Cityscapes experiment reported in Table \ref{table:synthia_exp} shows a similar pattern. Replacing common convolutions with depthwise separable convolutions results in a small $+0.57\%$ improvement, but when utilizing our Light\&Thin discriminator, an average boost of $+2.21\%$ is attained. Figure \ref{fig:qualitative} confirms this tendency; as can be seen, our Light\&Thin model allows for better segmentation, even for small classes like pedestrians, poles or traffic signs. It should be noted that these results come from models that were trained on synthetic data with a distribution that is substantially different from the real-world test set. There is still work to be done to improve performance, both to bridge the gap between the two domains and to close the gap with the existing state-of-the-art, but non-real-time, domain adaptation models. \input{Tables/parameters} \section{Conclusion} In this paper, we look at real-time domain adaptation in semantic segmentation.
The primary goal is to minimize model parameters as well as training and inference time in order to make the model feasible for real-world applications. We present a fully lightweight framework that includes a novel light and shallow discriminator. We evaluated our approach using the two common synthetic-to-real protocols. The results indicate that there is still work to be done on this task; future research will focus on applying our discriminator to more complex and powerful lightweight semantic segmentation models, as well as on enhancing the entire framework.
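For reference, the ``poly'' learning-rate schedule used in our implementation details can be sketched in a few lines of PyTorch. We assume the common form $lr = lr_0 \cdot (1 - iter/max\_iter)^{power}$, which the text above does not spell out explicitly; the stand-in network is hypothetical.
\begin{verbatim}
import torch

def poly_lr(optimizer, base_lr, cur_iter, max_iter, power=0.9):
    # "Poly" decay: lr = base_lr * (1 - cur_iter / max_iter) ** power.
    lr = base_lr * (1.0 - cur_iter / max_iter) ** power
    for group in optimizer.param_groups:
        group["lr"] = lr
    return lr

# Usage with the settings reported above (SGD, lr 2.5e-4, 30k iterations);
# the Conv2d module is a stand-in for the segmentation network.
net = torch.nn.Conv2d(3, 19, kernel_size=1)
opt = torch.optim.SGD(net.parameters(), lr=2.5e-4,
                      momentum=0.9, weight_decay=5e-4)
for it in range(30000):
    poly_lr(opt, base_lr=2.5e-4, cur_iter=it, max_iter=30000, power=0.9)
    # ... forward pass, backward pass, opt.step() ...
\end{verbatim}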
\section{Introduction} \begin{figure}[t!] \centering \includegraphics[width=0.95\linewidth]{teaser_v4.pdf}\vspace*{-0.4cm} \caption{{\bf 3D texture-mapped human synthesis.} Our method receives a frame of a person, extracts her/his mesh (left side) and outputs refined 3D shape and appearance representations of the person (right side).} \label{fig_method2} \vspace{-0.3cm} \end{figure} The research in the computer vision and graphics communities has made great strides forward in the last two decades, with milestones ranging from semantic segmentation~\cite{minaee2020image} and three-dimensional reconstruction~\cite{natsume2019siclope,saito2019pifu,saito2020pifuhd} to the synthesis of human motion~\cite{ferreira2020cag,NEURIPS2019_dancing2music}, realistic images~\cite{CycleGAN2017} and videos~\cite{wang2018vid2vid}. As more vision and graphics methods are integrated into new approaches such as differentiable rendering, more systems will be able to achieve high accuracy and quality in different tasks, in particular, generative methods for synthesizing videos with plausible human reenactment. Over the past few years, the remarkable performance of Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} has shed new light on the problem of synthesizing faithful real-world images of humans. Although GAN-based approaches have been achieving high-quality results in generating videos~\cite{chan2018dance,wayne2018reenactgan,liu2019neural} and images~\cite{karras2018progressive,pix2pix2017,CycleGAN2017,wang2018vid2vid,Esser_2018_CVPR} of people, in general, they suffer from high variability in the poses and from viewpoints not present in the training data, and are often limited to reasoning directly on the projected 2D image domain. Image-based rendering techniques~\cite{Gomes_2020_WACV,gomes2020arxiv}, in their turn, are effective solutions to create 3D texture-mapped models of people, being capable of synthesizing images from any arbitrary viewpoint without using a large number of images. On the other hand, image-based rendering methods require a painstaking design of the algorithm and are not capable of improving the visual quality of the synthesized images by using more data when available. In this paper, we take a step towards combining learning and image-based rendering approaches in a new end-to-end architecture that synthesizes human avatars capturing both body geometry and texture details. The proposed architecture comprises a graph convolutional network (GCN) operating over the mesh with differentiable rendering to achieve high-quality results in the image generation of humans. Our approach receives as input a frame of a person, estimates a generic mesh in the desired pose, and outputs a representation of the person's shape and appearance, as shown in Figure~\ref{fig_method2}. The method provides a final 3D representation that is compatible with traditional graphics pipelines. Specifically, our architecture estimates a refined mesh and a detailed texture map to properly represent the person's shape and appearance for a given input frame. While 2D image-to-image translation methods are capable of generating plausible images, many applications such as Virtual Reality (VR) and Augmented Reality (AR)~\cite{Gallala_2019,Chen,minaee2020image} require a fully 3D representation of the person. Additionally, the view-dependence hinders the creation of new configurations of scenes where the avatar can be included.
Although view interpolation can be applied to estimate a transition between a pair of camera poses, it may create artifacts and unrealistic results when adding the virtual avatar into the new scene using unseen camera poses. Video-based rendering systems benefit greatly from the use of realistic 3D texture-mapped models, which make possible the inclusion of virtual avatars using unrestricted camera poses and the automatic modification and re-arrangement of video footage. In addition to being able to render human avatars from different viewpoints, the 3D shape also allows synthesizing new images under different illumination conditions. We experimentally show the capabilities of our new architecture to estimate realistic 3D texture-mapped models of humans. Experiments on a variety of videos show that our method outperforms several state-of-the-art methods, achieving the best values for appearance in terms of Structural Similarity (SSIM), Learned Perceptual Image Patch Similarity (LPIPS), Mean Squared Error (MSE), and Fréchet Video Distance (FVD). \vspace*{-0.35cm} \paragraph{Contributions.} The main contributions of this paper are threefold: i) a novel formulation for transferring appearance and reenacting human actors that produces a fully 3D representation of the person; ii) a graph convolutional architecture for mesh generation that leverages the human body structure information and keeps vertex consistency, which results in a refined human mesh model; iii) a new architecture that takes advantage of both differentiable rendering and the 3D parametric model and provides a fully controllable human model, \ie, the user can control the human pose and rendering parameters. \section{Related Work} \paragraph{Human animation by neural rendering.} Recently, we have witnessed an explosion of neural rendering approaches to animate and synthesize images of people in unseen poses~\cite{wang2018vid2vid,Esser_2018_CVPR,liu2019neural}. According to Tewari~\etal~\cite{tewari2020cgf}, neural rendering is a deep image or video generation approach that enables explicit or implicit control of scene properties. The main difference among these approaches is how the control signal is provided to the network. Lassner~\etal~\cite{Lassner_GeneratingPeople} proposed a GAN called ClothNet, which produces random people with similar pose and shape in different clothing styles, using a silhouette image as the control signal. In a similar context, Esser~\etal~\cite{Esser_2018_CVPR} used a conditional U-Net to synthesize new 2D human views based on estimated edges and body joint locations. Chan~\etal~\cite{chan2018dance} applied adversarial training to map a 2D source pose to the appearance of a target subject. Wang~\etal~\cite{wang2018vid2vid} presented a general video-to-video synthesis framework based on conditional GANs, where a 2D dense UV mapping to the body surface (DensePose~\cite{Guler2018DensePose}) is used as the control signal. Similarly, Neverova~\etal~\cite{NeverovaGK18} and Sarkar~\etal~\cite{sarkar2020neural} rely on DensePose as input to synthesize new views. These image-to-image translation approaches only allow for implicit control by way of representative samples, \ie, they can copy the scene parameters from a reference image/video, but cannot manipulate these parameters explicitly. To enable control of the position, rotation, and body pose of a person in a target image/video, Liu~\etal~\cite{liu2019neural} proposed to use a medium-quality controllable 3D template model of people.
In the same line, Liu~\etal~\cite{lwb2019} proposed a 3D body mesh recovery module based on the parametric statistical human body model SMPL~\cite{Loper_2015}, which disentangles the human body into joint rotations and shape. Wu~\etal~\cite{wu2020multi} produced photorealistic free-view video from multi-view dynamic human captures. Although these methods are capable of generating plausible 2D images, they cannot generate controllable 3D models of people, unlike our approach; this is a desired feature in many tasks, such as rendering engines and games or virtual and augmented reality contexts~\cite{Gallala_2019,Chen,minaee2020image}. \vspace*{-0.4cm}\paragraph{3D human pose and mesh reconstruction.} Substantial advances have been made in recent years in human pose and 3D model estimation from still images. Bogo~\etal~\cite{Bogo_2016} proposed the SMPLify method, a fully automated approach for estimating 3D body shape and pose from 2D joints in images. SMPLify uses a CNN to estimate 2D joint locations and then optimizes the re-projection errors of an SMPL body model~\cite{Loper_2015} to these 2D joints. Similarly, Kanazawa~\etal~\cite{kanazawaHMR18} used unpaired 2D keypoint annotations and 3D scans to train an end-to-end network to regress the 3D mesh parameters and the camera pose. Kolotouros~\etal~\cite{kolotouros2019spin} proposed SPIN, a hybrid approach combining the ideas of the optimization-based method of~\cite{Bogo_2016} and the regression deep network of~\cite{kanazawaHMR18} to design an efficient method that is less sensitive to the optimization initialization, while keeping the accuracy of the optimization-based method. Despite remarkable results in pose estimation, these methods are limited to estimating coarse-quality generic meshes and textureless characters. Gomes~\etal~\cite{Gomes_2020_WACV,gomes2020arxiv} proposed an image-based rendering technique to create 3D textured models of people synthesized from arbitrary viewpoints. However, their method is not end-to-end and is not capable of improving the visual quality of the synthesized images by using the information of all available data in the training step. Lazova~\etal~\cite{lazova2019} automatically predict a full 3D textured avatar, including geometry and a 3D segmentation layout for further generation control; however, their method cannot predict fine details and complex texture patterns. Methods like PiFu~\cite{saito2019pifu,saito2020pifuhd}, in their turn, are limited to static 3D models per frame. Their mesh represents a rigid body and users cannot drive the human body to new poses, \ie, it is not possible to make animations where the model raises his/her hands, since the arms and hands are not distinguished from the whole mesh. \vspace*{-0.4cm} \paragraph{Graph networks (GCN) and adversarial learning.} GCNs have recently emerged as a powerful representation for learning from data lying on manifolds beyond $n$-dimensional Euclidean vector spaces. They have been widely adopted to represent 3D point clouds, as in PointNet~\cite{qi2017pointnet} or Mesh R-CNN~\cite{Gkioxari_2019_ICCV}, and, notably, to model the human body structure with state-of-the-art results in tasks such as human action recognition~\cite{yan2018spatial}, pose estimation~\cite{zhao2019semantic,wang2020motion} and human motion synthesis~\cite{yan2019convolutional,ferreira2020cag,ren_mm_dance}. Often these GCNs have been combined and trained in adversarial learning schemes, as in human motion~\cite{ferreira2020cag} and pose estimation~\cite{kanazawaHMR18}.
Our work leverages these features of GCNs and adversarial training to estimate 3D texture-mapped human models. \vspace*{-0.35cm} \paragraph{Differentiable rendering.} Differentiable renderers (DR) are operators allowing the gradients of 3D objects to be calculated and propagated through images while training neural networks. As stated in~\cite{kato2020differentiable}, DR connects 2D and 3D processing methods and allows neural networks to optimize 3D entities while operating on 2D projections. Loper and Black~\cite{loper2014opendr} introduced an approximate differentiable renderer which generates derivatives from projected pixels to the 3D parameters. Kato~\etal~\cite{kato2018renderer} approximated the backward gradient of rasterization with a hand-crafted function. Liu~\etal~\cite{liu2019softras} proposed a formulation of the rendering process as an aggregation function fusing the probabilistic contributions of all mesh triangles with respect to the rendered pixels. Niemeyer~\etal~\cite{Niemeyer_2020_CVPR} represented surfaces as 3D occupancy fields and used a numerical method to find the surface intersection for each ray, then calculated the gradients using implicit differentiation. Mildenhall~\etal~\cite{mildenhall2020nerf} encoded a 3D point and the associated view direction on a ray using periodic activation functions, and then applied classic volume rendering techniques to project the output colors and densities into an image, which is naturally differentiable. More recently, techniques~\cite{neural-actor,pumarola2020d} based on neural radiance field (NeRF) learning~\cite{nerf} have been proposed to synthesize novel views of human geometry and appearance. While these methods achieve high-quality results, they generally require multi-view data collected with calibrated cameras and have a high computational cost, notably during inference/test time. In this paper, we propose a carefully designed architecture for human neural rendering, leveraging the new possibilities offered by differentiable rendering techniques~\cite{loper2014opendr,liu2019softras,ravi2020pytorch3d,zhang2020image}. We present an end-to-end learning method that i) does not require data captured with calibrated systems, ii) is computationally efficient at test time, and iii) leverages DR and adversarial training to improve the estimation capabilities of fully controllable, realistic 3D texture-mapped models of humans. \section{Methodology} \begin{figure*}[t!] \centering \includegraphics[width=0.9\linewidth]{overview_v3.pdf} \caption{\textbf{Human synthesis architecture.} Our architecture has three main networks: \textbf{a)} the \textit{Human Mesh Estimation} that comprises a three-stage GCN and learns to fit and deform the mesh based on rendered silhouettes and shape regularizers; \textbf{b)} the \textit{Texture Network}, a CNN that is trained conditioned on the visibility map generated from the deformed mesh to create a coarse texture by rendering and optimizing on the $l_{1}$ norm; \textbf{c)} the \textit{Texture Refinement Network}, a second CNN that is conditioned on the visibility map and the coarse texture to generate the detailed texture map of the person.} \label{fig:method}\vspace*{-0.2cm} \end{figure*} In order to generate deformable 3D texture-mapped human models, our end-to-end architecture has three main components that are trained through the differentiable rendering. The first component models local deformations of the human 3D body shape extracted from the images using a three-stage GCN.
In the second component, a CNN is trained to estimate the human appearance map. Similar to the GCN, the CNN is trained in a self-supervised regime using the gradient signals from the differentiable renderer. Finally, the third component comprises an adversarial regularization in the human appearance texture domain to ensure the reconstruction of photo-realistic images of people. All frames of a person are used to train the mesh and texture networks. The frames provide different viewpoints of the person to build his/her texture map and to estimate the mesh deformation according to the motion. We apply a stage-wise training strategy. First, we train the mesh model using the silhouettes, which produces a texture-less human model. Subsequently, we freeze the mesh network and train a global texture model. Then, we freeze both the mesh model and the global texture model and learn the parameters of the texture refinement generator and the discriminators. At inference time, we feed our architecture with generic meshes parameterized by the SMPL model, and then create a refined mesh and a detailed texture map to properly represent the person's shape and appearance. Figure~\ref{fig:method} outlines these components and their relations during the training phase, and Figure~\ref{fig_method2} shows these components during the inference phase. \subsection{Human Mesh Estimation} \paragraph{Shape and pose representation.} We adopt a shape representation that explores the global and local information of the human body. We encode the global information using the SMPL model parametrization~\cite{Loper_2015}, which is composed of a learned human shape distribution $\mathcal{M}$, $24$ 3D joint angles $\boldsymbol{\theta} \in \mathbb{R}^{72}$ (defining 3D rotations of the skeleton joint tree), and shape coefficients $\boldsymbol{\beta} \in \mathbb{R}^{10}$ that model the proportions and dimensions of the human body. We use SPIN~\cite{kolotouros2019spin} to regress the SMPL parameters due to its trade-off between accuracy and efficiency. While the global information provided by the SMPL parametrization enables global control of human poses, it does not encode the fine details of the human body geometry (local information). We then characterize the local information by adding a set of offsets, predicted by the GCN mesh network, to the mesh produced by the SMPL parametrization. \vspace*{-0.35cm} \paragraph{Mesh Refinement Network.} After computing the global human shape and pose information $P = \text{SMPL}(\boldsymbol{\theta},\boldsymbol{\beta})$, we model local deformations on the mesh with the GCN Mesh Network $G_m$. Since the SMPL parametrization only provides a coarse 3D shape and cannot accurately model fine structures like clothes, we feed the mesh network with the initial SMPL mesh to refine its vertex positions with a sequence of refinement stages. The network is designed to learn the set of offsets to the SMPL mesh to increase the realism of the generated views. Drawing inspiration from the network architecture of~\cite{Gkioxari_2019_ICCV}, our refinement network is composed of three blocks with six graph convolutional layers with intermediate features of size $128$. Each block in our mesh network performs four operations (a minimal code sketch of these operations is given below): \vspace*{-0.2cm} \begin{itemize} \item {\it Vertex normal computation.} This operation computes the normal surface vector for each vertex as the weighted sum of the normals of the faces containing the vertex, where the face areas are used as the weights.
The resulting normal is assigned as the node feature ${f_i}$ for the vertex $v_i$ in the GCN.\vspace*{-0.2cm} \item {\it Graph convolution.} This convolutional layer propagates information along mesh edges using an aggregation strategy. Similar to~\cite{Gkioxari_2019_ICCV}, given the input vertex feature ${f_i}$, the layer updates the feature as ${f'_i = \text{ReLU}(W_0f_i + \sum_{j \in \mathcal{N}(i)}W_1f_j)}$, where $W_0$ and $W_1$ are learned weight matrices, and $\mathcal{N}(i)$ gives the $i$-th vertex's neighbors in the mesh. \vspace*{-0.2cm} \item {\it Vertex refinement.} To improve the quality of the vertex position estimation, this operation computes vertex offsets as $u_i = \tanh(W[f'_i ; f_i ])$, where $W$ is a learned weight matrix. \vspace*{-0.2cm} \item {\it Vertex refinement clamping.} To avoid strong deformations (large $||u_i||_2$), we constrain and bound the position update of each vertex $v_i$ as ${v'_i = v_i + \min(\max(u_i,-K(v_i)),K(v_i))}$, where $K(v)$ is the 3D update bound allowed to the vertex $v$, depending on the body part it belongs to. Each body part, \eg, face, footprints, hands, head, torso, arms, feet, \etc, has a predefined bound threshold. This operation ensures that the offsets do not exceed the threshold defined for that body part, and that the refinement of the mesh geometry does not affect the body's topology. \end{itemize} \vspace*{-0.7cm} \paragraph{Loss function.} For learning the mesh refinement, our model exploits two differentiable renderers that emulate the image formation process. Techniques such as~\cite{liu2019softras,ravi2020pytorch3d} enable us to invert such renderers and take advantage of the learning paradigm to infer shape and texture information from the 2D images. During training, the designed loss minimizes the differences between the image silhouette extracted from the input real image $I_s$ and the image silhouette $\hat{I}_s$ of the human body obtained by rendering the deformed 3D human model $M$ into the image with the \textit{SoftRenderer}, a differentiable renderer $\Pi_s(M)$ that synthesizes the silhouette of the actor. We can define the loss of the Mesh Network $G_m$ as: \begin{equation} \mathcal{L}_m = \lambda_{gl}\mathcal{L}_{gl} + \lambda_{gn}\mathcal{L}_{gn} + \mathcal{L}_s, \end{equation} \noindent where $\mathcal{L}_{gl}$ and $\mathcal{L}_{gn}$ regularize the Laplacian and the normal consistency of the mesh, respectively~\cite{ravi2020pytorch3d}, $\lambda_{gl}$ and $\lambda_{gn}$ are the weights for the geometrical regularizers, and \begin{equation} \mathcal{L}_s = 1 - \frac{\left \| \hat{I}_s\otimes I_s \right \|_1}{\left \| (\hat{I}_s + I_s) - \hat{I}_s\otimes I_s \right \|_1}, \end{equation} \noindent is the silhouette loss proposed by Liu~\etal~\cite{liu2019softras}, where ${\hat{I}_s = \Pi_s(M)}$, $M = G_m(P)$ is the refined body mesh model, and $\otimes$ is the element-wise product. \subsection{Human Texture Generation} We represent the appearance of the human model as a texture map in a fixed UV space that is applied to the refined mesh, in our case, the SMPL mesh with offsets. Our full pipeline for texture generation is depicted in Figure \ref{fig:method}-b-c and consists of two stages: a coarse texture generation followed by a texture refinement. In the initial stage, given the refined 3D meshes of the actor $M$, the Texture Network $G_{TN}$ learns to render the appearance of the actor in the image.
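Before detailing the texture networks, we give the minimal PyTorch sketch of the refinement-block operations announced above (graph convolution, vertex refinement, and clamping). The edge-based neighbor aggregation, the number of graph convolutions per block, and the per-vertex bound lookup are simplifications we assume for illustration, not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    # f'_i = ReLU(W0 f_i + sum_{j in N(i)} W1 f_j), with neighborhoods
    # given by the undirected mesh edges.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w0 = nn.Linear(in_dim, out_dim, bias=False)
        self.w1 = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, feats, edges):
        # feats: (V, in_dim); edges: (E, 2) long tensor of vertex pairs.
        msg = self.w1(feats)
        agg = torch.zeros(feats.shape[0], msg.shape[1], device=feats.device)
        agg = agg.index_add(0, edges[:, 0], msg[edges[:, 1]])
        agg = agg.index_add(0, edges[:, 1], msg[edges[:, 0]])
        return torch.relu(self.w0(feats) + agg)

class RefinementBlock(nn.Module):
    # One block: graph convolutions over per-vertex normal features,
    # then offsets u_i = tanh(W [f'_i ; f_i]) clamped by per-part bounds.
    def __init__(self, feat_dim=3, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(feat_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.offset = nn.Linear(hidden + feat_dim, 3)

    def forward(self, verts, normals, edges, bounds):
        # verts, normals: (V, 3); bounds: (V, 1) thresholds K(v_i).
        f = self.gc2(self.gc1(normals, edges), edges)
        u = torch.tanh(self.offset(torch.cat([f, normals], dim=1)))
        u = torch.max(torch.min(u, bounds), -bounds)  # v'_i = v_i + clamped u_i
        return verts + u
\end{verbatim}
In this sketch, the area-weighted vertex normals serve as the input node features, the bounds tensor is broadcast from the body-part thresholds listed in the implementation details, and stacking three such blocks mirrors the three-stage design described above.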
The coarse texture produced by $G_{TN}$ is further used to condition and regularize the Texture Refinement Network $G_{RN}$ in the second stage. \vspace*{-0.35cm} \paragraph{Texture Network.} We start by estimating a coarse texture map with a U-Net architecture~\cite{pix2pix2017}. The input of the network is a visibility map $x_v$ and it outputs the texture map $x_p$. The visibility map indicates which parts of the texture map must be generated to produce a realistic appearance for the refined mesh. The visibility map is extracted by a parametric function $x_v = U(M)$ that maps points of the refined mesh $M$ with a positive dot product between the normal vector and the camera direction vector to a point $x_v$ in the texture space. We implement $U$ as a rendering of the 2D UV-map considering only faces with a positive dot product between the normal vector and the camera direction vector. Thus, the network can learn to synthesize texture maps on demand, focusing on the important parts for each viewpoint. Figure~\ref{fig:method}-b shows a schematic representation of the Texture Network training. The \textit{HardRenderer} $\Pi_c(M,x_p)$ represents the colored differentiable renderer that computes the output coarse textured image $\hat{I}$ from the model $M$ and texture map $x_p$; in our case, $\hat{I} = \Pi_c(M,G_{TN}(U(M)))$. Conversely to the \textit{SoftRenderer}, the differentiable \textit{HardRenderer} is used to propagate the detailed human texture appearance information (color). Specifically, we train the Texture Network to learn to generate a coarse texture by imposing the $l_1$ norm on the person's body region of the color rendered image as: \begin{equation}\label{eq:global_text_loss} \mathcal{L}_{pt} = {\left \| (\hat{I} - I )\otimes B \right \|_1}/{\left \| B \right \|_1}, \end{equation} \noindent where $I$ is the real image in the training set and $B$ is the union of the visibility masks and real body regions. \vspace*{-0.35cm} \paragraph{Texture refinement.} We further improve the coarse texture to represent finer details. For that, we design the Texture Refinement Network $G_{RN}$ to condition the generation of a new texture map on the coarse texture, \ie, on a coherent output of the Texture Network $G_{TN}$, and on the visibility map. In our adversarial training, the Texture Refinement Network acts as the generator network $G_{RN}$ and engages in a minimax game against two task-specific discriminators: the Face Discriminator $D_1$ and the Image Discriminator $D_2$. The generator is trained to synthesize texture maps in order to fool the discriminators, which must discern between ``real'' images and ``fake'' images, where ``fake'' images are produced by the neural renderer using the 3D texture-mapped model estimated by the Mesh and Texture Networks. While the discriminator $D_1$ sees only the face region, the Image Discriminator $D_2$ sees the whole image. These three networks are trained simultaneously and drive each other to improve their inference capabilities, as illustrated in Figure~\ref{fig:method}-c. The Texture Refinement Network learns to synthesize a more detailed texture map to deceive the discriminators, which in turn learn the differences between generated outputs and ground truth data.
The total loss function for the generator and discriminators is then composed of three terms: \begin{equation} \begin{aligned} \min\limits_{G_{RN}} ( \max\limits_{D_1}\mathcal{L}_{GAN}(G_{RN},D_1) + \\ \max\limits_{D_2}\mathcal{L}_{GAN}(G_{RN},D_2) + \mathcal{L}_{r}(G_{RN}) ), \end{aligned} \end{equation} \noindent where both $\mathcal{L}_{GAN}$ terms address the discriminators and $\mathcal{L}_{r}$ is a regularization loss to reduce the effects of outlier poses. Each adversarial loss is designed as follows: \begin{equation} \begin{aligned} \mathcal{L}_{GAN}(G,D) = \mathbb{E}_{y}[\log D(y)] + \\ \mathbb{E}_{x_v,x_p}[\log (1 - D(\Pi_c(M,G(x_v,x_p))))], \end{aligned} \end{equation} \noindent where $x_v$ is the visibility map, $x_p$ is the output of the Texture Network, $M$ is the refined mesh, and $y$ is the corresponding segmented real image $I \otimes B$. Finally, to reduce the effects of wrong poses, which cause mismatches between the rendered actor silhouette and the silhouette of the real actor, we also add a regularization loss to prevent the GAN from applying the color of the background to the human texture. The first term of the regularization loss acts as a reconstruction of the pixels by imposing the $l_1$ norm on the person's body region, and the second term enforces eventual misaligned regions to stay close to the coarse texture: \begin{equation} \begin{aligned} \mathcal{L}_r = \alpha_1{\left \| (I - \hat{I}^{RN})\otimes B \right\|_1}/{\left \| B \right \|_1} + \\ \alpha_2{\left \| (\hat{I}^{TN} - \hat{I}^{RN})\otimes C\right \|_1 }/{\left \| C \right \|_1}, \end{aligned} \end{equation} \noindent where $\alpha_1$ and $\alpha_2$ are the weights, $\hat{I}^{TN}$ is the rendered image using the coarse texture, $\hat{I}^{RN}$ is the rendered image using the refined texture, and $C$ is the set of misaligned regions without the face region, \ie, the image region where the predicted silhouette and the estimated silhouette differ. \section{Experiments and Results} \paragraph{Datasets and baselines.} For the training of both texture models and the mesh human deformation network, we considered the four-minute videos provided by~\cite{gomes2020arxiv}, where the actors perform random movements, allowing the model to get different views of the person in the scene. We use the SMPL model parameters calculated by SPIN~\cite{kolotouros2019spin} and the silhouette image segmented by MODNet~\cite{MODNet} for each frame of the video. At evaluation/test time of our 3D human rendering approach, and to provide comparisons to related methods, we conducted experiments with the publicly available videos used by Chan~\etal~\cite{chan2018dance}, Liu~\etal~\cite{lwb2019}, and Gomes~\etal~\cite{gomes2020arxiv} as the evaluation sequences. We compare our method against five recent methods, including V-Unet~\cite{Esser_2018_CVPR}, Vid2Vid~\cite{wang2018vid2vid}, EBDN~\cite{chan2018dance}, Retarget~\cite{gomes2020arxiv} and Impersonator~\cite{lwb2019}. V-Unet is a well-known image-to-image translation method that uses conditional variational autoencoders to generate images based on a 2D skeleton and an image of the target actor. Retarget is an image-based rendering method designed to perform human retargeting. Impersonator, Vid2Vid, and EBDN are generative adversarial models trained to perform human neural rendering.
\vspace*{-0.35cm} \paragraph{Metrics and evaluation protocol.} We adopted complementary metrics to evaluate the quality of the approaches, assessing different aspects of the generated images such as structural coherence, luminance, contrast, perceptual similarity~\cite{zhang2018perceptual}, and temporal and spatial coherence. The metrics used to perform the quantitative analysis are SSIM~\cite{Wang04imagequality}, LPIPS~\cite{zhang2018perceptual}, Mean Squared Error (MSE), and Fréchet Video Distance (FVD)~\cite{unterthiner2019accurate}. Following the protocol proposed by~\cite{gomes2020arxiv}, we executed all the methods on the motion sequences and transferred them to the same background. This protocol allows us to generate comparisons with the ground truth and compute the metrics for all the generated images against their respective real peers. Then, we group the values generated by the metrics in two ways: by motion types and by actors. In the first, for each metric, we calculate the average values over all actors performing a motion sequence (\eg, ``spinning''), while in the second, we calculate the average values over all movements performed by a given actor. This grouping allows us to analyze the capability of the methods to render the same actor performing various movements with different levels of difficulty and also to compare their capacity to render actors with different morphologies performing the same movement. \vspace*{-0.35cm} \paragraph{Implementation details.} We trained our body mesh refinement network for $20$ epochs with a batch size of $4$. We used AdamW~\cite{AdamW} with parameters $\beta_{1} = 0.5$, $\beta_{2} = 0.999$, weight decay $= 1\times10^{-2}$, and a learning rate of $1\times10^{-4}$ with a linear decay routine to zero starting from the middle of the training. We empirically set the regularizer weights $\lambda_{gl}$ and $\lambda_{gn}$ to $1.0$ and $0.5$, respectively. In the Vertex refinement clamping component, we defined the set of thresholds as follows: $K \in$ \{face = $0.0$; footprints = $0.0$; hands = $0.0$; head = $0.04$; torso = $0.06$; arms = $0.02$; forearms = $0.04$; thighs = $0.04$; calves = $0.03$; feet = $0.02$\} meters. All the training and inference were performed on a single Titan XP ($12$ GB), where the GCN mesh model and the human texture networks took around $6$ and $20$ hours per actor, respectively. The inference takes $92$ ms per frame ($90$ ms for the GCN model deformation and $2$ ms for the texture networks). Due to the remarkable performance of pix2pix~\cite{pix2pix2017} in synthesizing photo-realistic images, we build our Texture Network upon its architecture. The optimizers for the texture models were configured in the same way as for the Mesh Network, except for the learning rates. The learning rates for the whole-body and face discriminators, the global texture generator, and the refinement texture generator were set to $2\times10^{-5}$, $2\times10^{-5}$, $2\times10^{-3}$, and $2\times10^{-4}$, respectively. The parameter of the texture reconstruction was set to $\alpha_{1} = 100$ and that of the regularization to $\alpha_{2} = 100$. We observed that smaller values led to inconsistency in the final texture. For the training regime, we used $40$ epochs with batch size $8$. The global texture model was trained separately from the other models for $2{,}000$ steps; then we froze it, and the texture refinement generator and the discriminators were trained. \begin{figure}[t!]
\includegraphics[width=1\linewidth]{ablation_v2.pdf}\vspace*{-0.4cm} \caption{\textbf{Ablation study.} a) Results of the texture training without the refinement stage; b) Model without the Vertex Refinement Clamping layer. We observe an excessive growth of the mesh without update thresholds; the produced texture lacks details and could not even preserve the actor's face; c) results for our complete model.} \label{fig:ablation_figure} \end{figure} \begin{table}[t!] \centering \caption{{\bf Ablation study}. SSIM, LPIPS, MSE, and FVD comparison by motion types. Best in bold.} \label{table:ablation_result} \resizebox{\columnwidth}{!}{% \begin{tabular}{clrrrr} \toprule \multirow{3}{*}{\bf Method} & \multicolumn{1}{c}{} & \multicolumn{4}{c}{{\bf Metrics}}\\ \cmidrule{3-6} & \multicolumn{1}{c}{} & {\centering SSIM$^1$} & {\centering LPIPS$^2$} & {\centering MSE$^2$} & {\centering FVD$^2$} \\ \midrule & \hspace*{-1.9cm} w/o Texture Refinement & $\textbf{0.869}$ & $0.136$ & $262.39$ & $795.15$ \\ & \hspace*{-1.9cm} w/o Vertex Refinement Clamping & $0.866$ & $0.142$ & $288.04$ & $829.60$\\ & \hspace*{-1.9cm} Complete Model & $0.868$ & $\textbf{0.134}$ & $\textbf{259.79}$ & $\textbf{769.54}$ \\ & & \multicolumn{2}{c}{\scriptsize{$^1$\textit{Higher is better}}} & \multicolumn{2}{c}{\scriptsize{$^2$\textit{Lower is better}}} \\ \bottomrule \vspace{-0.8cm} \end{tabular} } \end{table} \vspace*{-0.35cm} \paragraph{Ablation Study.} We evaluate the contributions of different parts of the method to the overall view synthesis performance. We investigated the benefits of the Vertex refinement clamping component in the Mesh Refinement Network (MRN) and of the use of adversarial training in the texture generation. For the first experiment, we removed the vertex refinement thresholds, letting the mesh grow loosely. All other steps of the texture training were maintained. Table~\ref{table:ablation_result} shows that the performance dropped drastically when compared to our original model. A qualitative analysis of the results in Figure~\ref{fig:ablation_figure}-b demonstrates that removing the Vertex refinement clamping component led to severe erroneous deformations in the hands and feet, \ie, the regions with higher pose estimation errors. In the adversarial training analysis, we maintained the original Mesh Refinement Network and removed the Texture Refinement Network and its discriminators, training only the Global Texture Network using Equation~\ref{eq:global_text_loss}. Figure~\ref{fig:ablation_figure}-a shows the texture quality of the models trained with and without the adversarial regime. After removing the GAN, the model could not generate textures with fine details, producing blurry results. This is also reported in the metrics of Table~\ref{table:ablation_result}, where we show the average values calculated over all motion sequences in the test data, in which the model without the GAN is outperformed in all metrics besides SSIM. This result is coherent, since SSIM is based on low-level image features, and blurred textures can lead to higher SSIM scores. \begin{table*}[t!] \centering \caption{{\bf Comparison with state-of-the-art human neural rendering}. SSIM, LPIPS, MSE, and FVD comparison by motion and actor types.
Best in bold.} \label{table:metrics_result} \resizebox{0.92\linewidth}{!}{% \begin{tabular}{@{}clrrrrrrrrrrrrr@{}} \toprule \multirow{3}{*}{\bf Metric} & \multirow{3}{*}{\bf Method} & \multicolumn{8}{c}{{\bf Motion type}} & \phantom{a} & \multicolumn{4}{c}{{\bf Actor type}}\\ \cmidrule{3-10} \cmidrule{12-15} & \multicolumn{1}{r}{} & {\centering jump} & {\centering walk} & {\centering spinning} & {\centering shake hands} & {\centering cone} & {\centering fusion dance} & {\centering pull down} & {\centering box} & \phantom{a} & {\centering S1} & {\centering S2} & {\centering S3} & {\centering S4} \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1.5cm}{\centering SSIM$^1$}}} & EBDN~\cite{chan2018dance} & $0.878$ & $0.880$ & $0.855$ & $0.859$ & $0.878$ & $0.820$ & $0.857$ & $0.858$ & \phantom{a} & $0.867$ & $0.898$ & $0.844$ & $0.834$ \\ & Imper~\cite{lwb2019} & $0.877$ & $0.880$ & $0.852$ & $0.859$ & $0.877$ & $0.816$ & $0.855$ & $0.856$ & \phantom{a} & $0.867$ & $0.896$ & $0.842$ & $0.831$\\ & Retarget~\cite{gomes2020arxiv} & $0.881$ & $0.885$ & $0.855$ & $0.860$ & $0.879$ & $0.820$ & $0.861$ & $0.869$ & \phantom{a} & $0.872$ & $0.902$ & $0.846$ & $0.834$ \\ & Vid2Vid~\cite{wang2018vid2vid} & $0.880$ & $0.884$ & $0.856$ & $0.858$ & $0.878$ & $0.821$ & $0.859$ & $0.866$ & \phantom{a} & $0.868$ & $0.901$ & $0.848$ & $0.835$ \\ & V-Unet~\cite{Esser_2018_CVPR} & $0.870$ & $0.871$ & $0.843$ & $0.847$ & $0.862$ & $0.797$ & $0.847$ & $0.857$ & \phantom{a} & $0.855$ & $0.886$ & $0.830$ & $0.826$ \\ & Ours & $\textbf{0.884}$ & $\textbf{0.890}$ & $\textbf{0.860}$ & $\textbf{0.865}$ & $\textbf{0.885}$ & $\textbf{0.824}$ & $\textbf{0.866}$ & $\textbf{0.873}$ & \phantom{a} & $\textbf{0.876}$ & $\textbf{0.908}$ & $\textbf{0.852}$ & $\textbf{0.838}$ \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1.5cm}{\centering LPIPS$^2$}}} & EBDN~\cite{chan2018dance} & $0.141$ & $0.122$ & $0.139$ & $0.138$ & $0.143$ & $0.215$ & $0.151$ & $0.170$ & \phantom{a} & $0.159$ & $0.145$ & $0.147$ & $0.159$ \\ & Imper~\cite{lwb2019} & $0.151$ & $0.134$ & $0.151$ & $0.151$ & $0.155$ & $0.239$ & $0.168$ & $0.184$ & \phantom{a} & $0.161$ & $0.165$ & $0.170$ & $0.171$ \\ & Retarget~\cite{gomes2020arxiv} & $\textbf{0.125}$ & $0.099$ & $\textbf{0.130}$ & $0.131$ & $0.128$ & $\textbf{0.206}$ & $\textbf{0.131}$ & $\textbf{0.127}$ & \phantom{a} & $\textbf{0.133}$ & $0.129$ & $\textbf{0.133}$ & $\textbf{0.143}$ \\ & Vid2Vid~\cite{wang2018vid2vid} & $0.131$ & $0.105$ & $0.126$ & $0.136$ & $0.133$ & $0.203$ & $0.142$ & $0.133$ & \phantom{a} & $0.148$ & $0.131$ & $0.129$ & $0.147$ \\ & V-Unet~\cite{Esser_2018_CVPR} & $0.147$ & $0.132$ & $0.157$ & $0.161$ & $0.174$ & $0.243$ & $0.166$ & $0.158$ & \phantom{a} & $0.184$ & $0.160$ & $0.166$ & $0.158$ \\ & Ours & $0.127$ & $\textbf{0.097}$ & $\textbf{0.130}$ & $\textbf{0.130}$ & $\textbf{0.124}$ & $\textbf{0.206}$ & $0.132$ & $\textbf{0.127}$ & \phantom{a} & $0.136$ & $\textbf{0.124}$ & $0.134$ & $\textbf{0.143}$ \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1.5cm}{\centering MSE$^2$}}} & EBDN~\cite{chan2018dance} & $306.92$ & $269.58$ & $312.69$ & $312.12$ & $266.17$ & $463.79$ & $384.23$ & $361.57$ & \phantom{a} & $324.40$ & $314.10$ & $331.83$ & $368.19$ \\ & Imper~\cite{lwb2019} & $313.43$ & $275.22$ & $344.94$ & $314.92$ & $267.39$ & $504.00$ & $404.79$ & $377.16$ & \phantom{a} & $277.98$ & $328.07$ & $358.30$ & $436.57$\\ & Retarget~\cite{gomes2020arxiv} & $237.16$ & $178.57$ & $286.86$ & $270.25$ & $237.86$ & $434.64$ & $301.66$ & $245.86$ 
& \phantom{a} & $\textbf{243.95}$ & $260.14$ & $294.88$ & $297.46$ \\ & Vid2Vid~\cite{wang2018vid2vid} & $257.33$ & $206.42$ & $286.18$ & $332.09$ & $253.04$ & $452.77$ & $349.28$ & $288.54$ & \phantom{a} & $312.40$ & $274.80$ & $269.81$ & $356.26$ \\ & V-Unet~\cite{Esser_2018_CVPR} & $295.04$ & $269.59$ & $354.33$ & $377.02$ & $328.68$ & $559.75$ & $417.00$ & $346.13$ & \phantom{a} & $381.77$ & $344.71$ & $362.63$ & $384.66$\\ & Ours & $\textbf{231.20}$ & $\textbf{153.23}$ & $\textbf{278.75}$ & $\textbf{254.38}$ & $\textbf{218.76}$ & $\textbf{418.58}$ & $\textbf{286.02}$ & $\textbf{237.42}$ & \phantom{a} & $247.10$ & $\textbf{236.49}$ & $\textbf{276.17}$ & $\textbf{279.42}$ \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1.5cm}{\centering FVD$^2$}}} & EBDN~\cite{chan2018dance} & $887.56$ & $273.00$ & $918.94$ & $\textbf{423.08}$ & $725.49$ & $952.22$ & $1,113.46$ & $853.26$ & \phantom{a} & $791.98$ & $751.45$ & $\textbf{560.27}$ & $826.71$\\ & Imper~\cite{lwb2019} & $1,770.31$ & $656.07$ & $1,531.64$ & $1,266.14$ & $1,051.42$ & $1,322.72$ & $1,440.94$ & $1,719.55$ & \phantom{a} & $1,270.41$ & $1,092.82$ & $1,395.81$ & $1,214.64$ \\ & Retarget~\cite{gomes2020arxiv} & $1,119.50$ & $330.91$ & $\textbf{674.99}$ & $478.93$ & $767.68$ & $791.01$ & $\textbf{988.35}$ & $\textbf{760.33}$ & \phantom{a} & $\textbf{715.00}$ & $\textbf{653.30}$ & $720.49$ & $\textbf{515.61}$ \\ & Vid2Vid~\cite{wang2018vid2vid} & $\textbf{879.94}$ & $266.60$ & $1,085.49$ & $396.31$ & $790.79$ & $997.42$ & $997.96$ & $1,069.85$ & \phantom{a} & $778.53$ & $719.80$ & $762.46$ & $574.08$ \\ & V-Unet~\cite{Esser_2018_CVPR} & $1,491.63$ & $845.44$ & $1,721.81$ & $1,257.20$ & $1,415.24$ & $1,712.93$ & $2,437.98$ & $1,816.94$ & \phantom{a} & $2,239.14$ & $1,352.10$ & $1,856.78$ & $1,108.34$ \\ & Ours & $1,114.43$ & $\textbf{233.81}$ & $1,019.83$ & $542.24$ & $\textbf{614.88}$ & $\textbf{746.22}$ & $1,010.24$ & $874.69$ & \phantom{a} & $881.49$ & $697.68$ & $718.21$ & $551.06$ \\ & & \multicolumn{6}{c}{\scriptsize{$^1$\textit{Higher is better}}} & \multicolumn{6}{c}{\scriptsize{$^2$\textit{Lower is better}}} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Results} \paragraph{Quantitative comparison with state of the art.} We performed the neural rendering for actors with different body shapes, gender, clothing styles, and sizes for all considered video sequences. The video sequences used in the actors' animations contained motions with different levels of difficulty, aiming to test the generalization capabilities of the methods on unseen data. Table~\ref{table:metrics_result} shows the performance of each method considering all motion and actor types in the dataset. We can see that our method achieves superior performance compared to the other methods in most of the motion sequences and actor types considering the SSIM, LPIPS, MSE, and FVD metrics. These results indicate that our method is capable of deforming the mesh according to the shape of the given actor and then rendering a texture optimized to fit the morphology of the person in the scene. Furthermore, our training methodology, which considers multiple views of the actor and the shape parameters, allows the generation of consistent renderings with fewer visual artifacts when the character is performing challenging movements, such as bending or rotating. \vspace*{-0.35cm} \paragraph{Qualitative visual analysis.} The visual inspection of the synthesized actors also concurs with the quantitative analysis.
Figure \ref{fig:results_dataset} shows the best frames for each movement using four actors in the dataset. Our model and Retarget~\cite{gomes2020arxiv} are the only approaches capable of keeping the body scale of the actors across all scenes, while the other methods failed, in particular in the movements \textit{shake hands} and \textit{walk}. Besides generating coherent poses, our technique also generated more realistic textures in comparison to the other methods. Comparing the results of the movements \textit{jump} and \textit{spinning}, one can visualize details such as the shadow of the actor's shirt sleeve and the shirt collar, respectively. Figure~\ref{fig:bruno-tom} illustrates the task of transferring appearance in videos with different scenes, camera-to-actor translations, and camera intrinsics. These results also demonstrate our capability of generating detailed face and body textures, producing a good fit to synthesize views of actors observed from different camera setups. \begin{figure}[t!] \vspace*{-0.3cm} \centering \includegraphics[width=.95\linewidth]{results_6m2.pdf}\vspace*{-0.2cm} \caption{\textbf{Qualitative comparison.} Movements with four different actors are represented in the rows. The results for each competitor are represented in the columns.} \label{fig:results_dataset}\vspace*{-0.3cm} \end{figure} \begin{figure}[t!] \vspace*{-0.3cm} \includegraphics[width=0.95\linewidth]{bruno_and_tom2.pdf}\vspace*{-0.2cm} \caption{\textbf{Human appearance transfer and animation with different scenes and camera setups}. The first line of each scene illustrates the original frames of the source video. On the second line is the transferred appearance of the animated virtual actor using our proposed method. The red squares highlight the face generation quality.} \label{fig:bruno-tom} \vspace*{-0.3cm} \end{figure} \vspace*{-0.2cm} \section{Conclusions} In this paper we proposed a method that produces a fully 3D controllable representation of people from images. Our key idea is to leverage differentiable rendering, GCNs, and adversarial training to improve the capability of estimating realistic 3D texture-mapped models of humans. Taking advantage of both differentiable rendering and the 3D parametric model, our method is fully controllable, allowing control of the human pose and rendering parameters. Furthermore, we have introduced a graph convolutional architecture for mesh generation that refines the human body structure information, which results in a more accurate human mesh model. The experiments show that the proposed method has superior quality compared to recent neural rendering techniques in different scenarios, besides having several advantages in terms of control and the ability to generalize to farther views. \vspace{-0.3cm} { \paragraph*{Acknowledgments.} The authors would like to thank CAPES, CNPq, FAPEMIG, and Petrobras for funding different parts of this work. R. Martins was also supported by the French National Research Agency through grants ANR MOBIDEEP (ANR-17-CE33-0011), ANR CLARA (ANR-18-CE33-0004) and by the French Conseil Régional de Bourgogne-Franche-Comté. We also thank NVIDIA Corporation for the donation of a Titan XP GPU used in this research.} \clearpage \balance {\small \bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:introduction} The implied volatility of an option is defined as the value of the volatility of the underlying asset which, when used as input in a given pricing model, returns a theoretical value matching the current market price of the option considered. In this article we propose a methodology that allows one to compute risk-neutral implied volatilities of European options.\footnote{Here options are always assumed to have European-style exercise. Thus, the exercise type will often be omitted, for brevity.} This is accomplished without relying on any mid-quote approximations. Instead, our approach can be applied starting from bid and ask quotes directly, and we outline how to use our technique under both Black-Scholes and Bachelier modeling settings. The concept of implied volatility is relevant for different reasons. First of all, Black-Scholes (but also Bachelier) implied volatilities are important \emph{quoting conventions} in financial markets. They are therefore useful as benchmarks for the calibration of option pricing models. Nonetheless, several applications of the notion of implied volatility have been investigated outside the valuation framework and, more precisely, in forecasting analysis: there are studies investigating the use of implied volatilities to predict, amongst other things, realized volatilities \cite{evidence}, asset returns \cite{an, fu} and financial market bubbles \cite{bubbles}. Moreover, implied volatility spreads, i.e., the differences between call and put implied volatilities, have been used to forecast option returns \cite{doran} and equity premia \cite{cao}. Thus, the more accurately one can calculate implied volatilities from option prices, especially far from the at-the-money point where liquidity is lower, the better. In practical applications and analyses, implied volatilities are often calculated starting from mid option prices, as for instance in \cite{midVols}. It is a known fact that the risk-neutral price of a contract lies within an interval with lower and upper bounds given by the bid and the ask prices, respectively. However, the risk-neutral price, in general, does not coincide with the mid, although the latter is usually employed as a proxy for the former. To be able to calculate the correct risk-neutral implied volatility of an option without relying on mid-market approximations, one needs to model option prices in a two-price economy. One possible approach to do so is that of conic finance, introduced in \cite{cherny}. This makes it possible to evaluate bid and ask prices of contingent claims by recognizing that, in an economy, risk cannot be fully eliminated. Therefore, markets should quote based on the notions of (static) \emph{index of acceptability} and \emph{coherent risk measure}, consistently with the risk-neutral paradigm. By characterizing the structure of the contingent claims that are considered acceptable by the market, computing bid and ask prices can be performed by means of Choquet expectations \cite{choquet1953} of the relevant terminal payoffs with respect to distorted versions of the risk-neutral distribution of the underlying asset. This static approach to conic finance has found a wide range of practical applications.
These applications range from exotic and structured products \cite{exotic1,structured} to contingent convertibles \cite{conicCoconuts}, from capital calculations \cite{capital1,capital2} to credit valuation adjustments \cite{cva1,cva2}, and again from hedging insurance risk \cite{insurance} to implied liquidity \cite{impliedLiquidity}. For a broader overview of the applications of conic finance, see \cite{bookConicFinance}. We observe that the approach to conic finance based on static indices of acceptability can be extended to a time-dependent framework, as in \cite{dynamicConicFinance}, via the notion of \emph{dynamic index of acceptability} \cite{dynamicAccIndex}. However, as our aim is to extract market information from the currently-available market data (i.e., European option prices), a static approach to conic finance already suits our needs. Our approach to implying risk-neutral volatilities without relying on any mid quote approximation is based on the conic finance theory of \cite{cherny} and, in particular, on \cite{michielon}, where a methodology to imply risk-neutral default probability distributions from bid and ask credit default swaps (CDSs) is outlined. Note that our methodology can be used to compute risk-neutral market-implied quantities from quoted bid and ask prices of any type of contingent claim, provided that some basic assumptions are satisfied. In particular, in the specific case of European options, our methodology requires some technical conditions to be fulfilled concerning the liquidity level of the market and the infima and suprema of the option prices with respect to changes in the volatility parameter. We observe that the fact that European option prices are strictly increasing with respect to the volatility parameter is essential for our technique to be applied (therefore, for other products, a monotonicity condition with respect to the model parameter to be implied needs to be satisfied; see Theorem \ref{th:general} for the technical details). Within the conic finance framework, the concept of conic implied volatility has been introduced in \cite[Sec. 5.4.3]{bookConicFinance}. Therein it is illustrated how, given market bid and ask quotes, a distortion function, and a liquidity level, one can compute the implied volatilities that price back the observed bid and ask prices, named \emph{conic implied volatilities}. This approach is different from that of calculating \emph{bid} and \emph{ask implied volatilities}, that is, the implied volatilities that match quoted bid and ask option prices under risk-neutral settings. Bid implied volatilities are lower than ask implied volatilities, given that bid prices are below their ask counterparts. By contrast, it is shown in \cite[Sec. 5.4.3]{bookConicFinance} that this ordering does not need to hold when conic implied volatilities are calculated. Note that the technique proposed in \cite[Sec. 5.4.3]{bookConicFinance} is outlined in the specific case of options with European exercise features under Black-Scholes specifications for underlyings paying continuous dividends. However, it can also be applied, for instance, in the case of Bachelier specifications, and it is not restricted to the equity asset class only. Our idea is fundamentally different from both of the above, as in our approach we imply a single volatility given a bid and an ask, and not an implied volatility per quote (i.e., one for the bid and one for the ask) as done in the standard case and in \cite[Sec. 5.4.3]{bookConicFinance}.
This is because we are interested in computing implied volatilities which can be interpreted as risk-neutral ones. Further, our approach still makes it possible to compute implied volatility spreads, as the methodology can be followed for calls and puts separately. Our method guarantees that implied risk-neutral volatilities and liquidity levels, the latter in the spirit of \cite{impliedLiquidity}, can be uniquely determined. Further, the methodology outlined here is also simple from a computational perspective, as it only requires solving a (constrained) non-linear system with two equations and two unknowns. We highlight here that it is not our intention to advocate the use of Black-Scholes (or Bachelier) settings in financial modeling. The reason why we provide a method to strip \emph{liquidity-free} implied risk-neutral volatilities is that option prices are often quoted in terms of Black-Scholes (or Bachelier) volatilities. In addition, both Black-Scholes and Bachelier settings can be seen as option price interpolators, and the implied volatilities they generate are often benchmark inputs in several pricing models. Therefore, their accurate calculation, which we show can be performed without relying on mid quote approximations, is of key importance in financial modeling. \medskip This paper is organized as follows. Section \ref{sec:bidAsk} provides a brief introduction to the theory of conic finance. Section \ref{sec:liquidityFreeVols} outlines how to compute risk-neutral implied volatilities starting from bid and ask option prices using the conic finance theory. In particular, it highlights how to do so in the case where distortions are modeled as Wang transforms \cite{wang}, by recalling the conic Black-Scholes formulae of \cite[Sec. 5.4]{bookConicFinance} and by providing conic Bachelier option pricing formulae. Section \ref{sec:example} provides an illustration of the methodology outlined in this article, while Section \ref{sec:conclusion} concludes. The proof of Theorem \ref{th:general} can be found in Appendix \ref{sec:proof}. For completeness, the derivation of risk-neutral Bachelier option pricing formulae is available in Appendix \ref{sec:derivation}, while a remark on a property of the Wang transform is provided in Appendix \ref{sec:remark}. \section{Pricing in a two-price economy}\label{sec:bidAsk} The theory of conic finance introduced in \cite{cherny2009} is based on the idea that, in financial markets, risks cannot be fully hedged. Therefore, positions are taken after having weighed the possible risks and rewards connected to the instruments traded in the market. Hence, financial markets are modeled as abstract counterparties that allow trades to take place after they have passed some sort of ``quality assessment''. To perform this appraisal, financial markets need some machinery; in \cite{cherny2009} this is based on the concept of index of acceptability. Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, an index of acceptability is a functional $\alpha: L^\infty(\Omega, \mathcal{F}, \mathbb{P}) \to [0,+\infty]$ which assigns higher (lower) values to random variables that are expected to perform better (worse). \cite{cherny2009} impose some technical conditions on the notion of index of acceptability, as otherwise this class would be too wide to be of practical use.
In particular, if two random cashflows are acceptable at a given level $\gamma$ (i.e., if their level of acceptability is at least $\gamma$), then the same applies to any convex combination of them. Moreover, indices of acceptability are assumed to be monotonic: if a random cashflow always outperforms a second one, then the former is ranked no lower than the latter. Indices of acceptability are also supposed to be scale-invariant, i.e., the expected performance of a cashflow $X$ is the same as that of $\lambda X$, for every $\lambda>0$. Finally, the technical Fatou property needs to be satisfied: if $(X_n)_n$ is a sequence of random cashflows such that $|X_n|\leq 1$ and $\alpha(X_n)\geq \gamma$ for every $n$, and $(X_n)_n$ converges in probability to a random cashflow $X$, then also $\alpha(X)\geq \gamma$. \cite{cherny2009} prove that, given an index of acceptability $\alpha$, for every $x\geq0$ there exists a set $\mathfrak{Q}_x$ of probability measures absolutely continuous with respect to $\mathbb{P}$ such that $x\leq x'$ implies $\mathfrak{Q}_x\subseteq\mathfrak{Q}_{x'}$, and that \begin{equation}\nonumber \alpha(X)=\sup\left\{x\geq0:\inf_{\mathbb{Q}\in\mathfrak{Q}_x}\mathbb{E}^{\mathbb{Q}}(X)\geq0\right\}. \end{equation} The concept of index of acceptability can then be linked to that of coherent risk measure, i.e., a map $\rho:L^\infty(\Omega, \mathcal{F}, \mathbb{P}) \to [0,+\infty]$ that is translation invariant, sub-additive, positively homogeneous and monotonic, see \cite[Sec. 4.1]{bookConicFinance}. \cite{delbaen2009} show that a coherent risk measure can be identified with a functional of the form $X\mapsto\sup_{\mathbb{Q}\in\mathfrak{Q}}\mathbb{E}^{\mathbb{Q}}(X)$, where the set $\mathfrak{Q}$ contains measures that are absolutely continuous with respect to $\mathbb{P}$. The concepts of index of acceptability and of coherent risk measure can be tied together via the relationship \begin{equation}\label{eq:ia} \alpha(X)=\sup\left\{ x\geq0:\rho_x(-X)\leq0 \right\}, \end{equation} where $(\rho_x)_{x\geq0}$ is a family of coherent risk measures such that $\rho_x(-X)\leq\rho_{x'}(-X)$ whenever $x\leq x'$. From this it follows that $\alpha(X)\geq\gamma$ is equivalent to $\rho_\gamma(-X)\leq0$. For this reason, indices of acceptability with the aforementioned properties are often called \emph{coherent indices of acceptability}. We now recall the definition of the (asymmetric) Choquet integral \cite{choquet1953} (we will, from here onwards, always omit the word ``asymmetric'', as symmetric Choquet integrals are not relevant in this framework, and we refer the interested reader to \cite[Sec. 7]{denneberg}). For a \emph{non-additive probability} $\mu$ and a random variable $X$, the Choquet integral is defined as \begin{equation}\label{eq:choquetGeneral} (\text{C}) \int_\Omega X \, d\mu \coloneqq \int_{-\infty}^0 \left[\mu(X\geq t)-1\right]dt + \int_0^{+\infty} \mu(X\geq t)\,dt. \end{equation} In \eqref{eq:choquetGeneral}, the integrals on the right-hand side should be interpreted as improper Riemann integrals. Therefore, they both exist given that their arguments are monotonic functions, which guarantees that the sets of their discontinuities have Lebesgue measure zero. Note, however, that their sum does not necessarily exist\footnote{If $X$ is non-negative (positive), then \eqref{eq:choquetGeneral} is guaranteed to be well-defined.}: see \cite[Sec. 5]{denneberg} for a detailed treatment of Choquet integrals.
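To make the definition concrete, the following minimal sketch (our own illustration, not taken from the references; the distribution of $X$ and the distortion parameter are hypothetical, and the Wang transform, anticipated from Section \ref{sec:formulae}, is used purely as an example of a concave distortion) evaluates \eqref{eq:choquetGeneral} for a non-negative discrete random variable, for which the Choquet integral reduces to a finite sum of distorted tail probabilities:
\begin{verbatim}
# Minimal sketch: for a non-negative discrete X with sorted values
# 0 <= v_1 < ... < v_n, (eq:choquetGeneral) reduces to
#   sum_i (v_i - v_{i-1}) * psi(Q(X >= v_i)),  with v_0 := 0.
import numpy as np
from scipy.stats import norm

def wang(u, gamma):
    # Wang transform, used here only as an example of a concave distortion
    return norm.cdf(norm.ppf(u) + gamma)

def choquet_nonneg(values, probs, gamma):
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    p = np.asarray(probs, float)[order]
    tails = 1.0 - np.concatenate(([0.0], np.cumsum(p)[:-1]))  # Q(X >= v_i)
    steps = np.diff(np.concatenate(([0.0], v)))                # v_i - v_{i-1}
    return float(np.sum(steps * wang(tails, gamma)))

v, p = [0.0, 1.0, 3.0], [0.2, 0.5, 0.3]                        # hypothetical X
print(choquet_nonneg(v, p, gamma=0.0))   # 1.4 = E[X] (identity distortion)
print(choquet_nonneg(v, p, gamma=0.5))   # > E[X]: concave distortion inflates tails
\end{verbatim}
For $\gamma=0$ the distortion is the identity and the Choquet integral coincides with the ordinary expectation, while for $\gamma>0$ the concave distortion inflates tail probabilities and the integral dominates the expectation.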
Let $\mathbb{Q}\in\bigcap_{x\geq0}\mathfrak{Q}_x$ be a risk-neutral measure. A \emph{distortion function} is a function from $[0,1]$ to $[0,1]$ that maps 0 to 0 and 1 to 1. For a concave distortion function $\psi(\,\cdot\,)$ we denote by $\psi(\mathbb{Q})$ the (potentially non-additive) set function that assigns to each measurable set $A$ the mass $\psi(\mathbb{Q}(A))$. Given an increasing family $\left(\psi_x\right)_{x\geq 0}$ of concave distortion functions, the map $\rho_x: X\mapsto (\text{C}) \int_\Omega X \, d\psi_x({\mathbb{Q}})$ defines a coherent risk measure. Hence, as per \cite{cherny2009}, functionals of this form can be employed to describe indices of acceptability by setting \begin{equation}\label{eq:oia} \alpha(X)\coloneqq\sup\left\{x\geq0: (\text{C})\int_\Omega -X \, d\psi_x({\mathbb{Q}})\leq0\right\}. \end{equation} The tools just introduced can now be used to characterize direction-dependent pricing in financial markets. Assume that a threshold of at least $\gamma$ has been set by the market for a given contingent claim to be considered acceptable and, thus, tradable. We assume a constant risk-free rate $r$\footnote{Note that the constant risk-free rate assumption has been made only for consistency with the fact that, in this article, we consider Black-Scholes and Bachelier models. However, in the case of a time-dependent risk-free rate, all the steps outlined from here onwards would still hold.} and consider a contingent claim $X$ with a terminal payoff at time $T$. The market is then willing to buy $X$ at a price $b$ if and only if $\alpha(X-e^{-rT} b)\geq\gamma$. Given the assumption that the market evaluates the performance of contingent claims by means of Choquet integrals, this is equivalent to the condition $ b\leq-e^{-rT}(\mathrm{C})\int_\Omega -X\,d\psi_\gamma(\mathbb{Q})$. Thus, the bid price of $X$, $\mathrm{bid}_\gamma(X)$, equals \begin{equation}\label{eq:bid} \mathrm{bid}_\gamma(X)=-e^{-rT}(\mathrm{C})\int_\Omega -X\,d\psi_\gamma(\mathbb{Q}). \end{equation} As the ask price of $X$, $\mathrm{ask}_\gamma(X)$, equals $-\mathrm{bid}_\gamma(-X)$, from \eqref{eq:bid} it immediately follows that \begin{equation}\label{eq:ask} \askGamma{X}=e^{-rT}(\mathrm{C})\int_\Omega X\,d\psi_\gamma(\mathbb{Q}). \end{equation} Note that, should more than one risk-neutral measure exist, one would need to choose which one to use within formulae \eqref{eq:bid} and \eqref{eq:ask}. Further, we observe that, in the formulae provided above, the choice of the distortion function provides the modeler with a degree of freedom to describe the liquidity dynamics of the market. In addition, different values of the distortion parameter $\gamma$ correspond to different market liquidity specifications.
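As a numerical cross-check of \eqref{eq:bid} and \eqref{eq:ask} (again a sketch of our own, with hypothetical parameters), bid and ask prices of a non-negative payoff can be obtained by direct quadrature of distorted tail probabilities; for $X\geq 0$ the Choquet integrals in \eqref{eq:bid} and \eqref{eq:ask} reduce to the one-dimensional integrals shown in the comments:
\begin{verbatim}
# Sketch: bid/ask of X = (S_T - K)^+ with S_T lognormal under Q and the Wang
# transform as distortion. For X >= 0 one has, from (eq:choquetGeneral),
#   ask = e^{-rT} int_0^inf  psi_gamma(Q(X > t)) dt,
#   bid = e^{-rT} int_0^inf [1 - psi_gamma(1 - Q(X > t))] dt.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

S0, K, T, r, sigma, gamma = 100.0, 100.0, 1.0, 0.01, 0.2, 0.25  # hypothetical

def survival(t):
    # Q(X > t) = Q(S_T > K + t) under lognormal risk-neutral dynamics
    m = np.log(S0) + (r - 0.5 * sigma ** 2) * T
    return norm.sf((np.log(K + t) - m) / (sigma * np.sqrt(T)))

def wang(u):
    return norm.cdf(norm.ppf(u) + gamma)

ask = np.exp(-r * T) * quad(lambda t: wang(survival(t)), 0.0, np.inf)[0]
bid = np.exp(-r * T) * quad(lambda t: 1.0 - wang(1.0 - survival(t)), 0.0, np.inf)[0]
print(bid, ask)  # bid <= risk-neutral price <= ask; gamma = 0 gives equality
\end{verbatim}
Setting $\gamma=0$ recovers the risk-neutral price, while the bid (ask) decreases (increases) in $\gamma$, consistently with \eqref{eq:bid} and \eqref{eq:ask}.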
This framework is reminiscent of modeling preferences towards risk by means of utility functions. Although utility theory characterizes agents' behavior from a micro-economic perspective (i.e., the individual preferences of each agent) while conic finance describes risk attitudes of financial markets, there are some similarities between the two approaches worthy of attention. In particular, in utility theory the modeler has to choose the functional form of the utility function to be used. This is similar to the conic finance case, where a choice related to the distortion function also has to be made. Further, in utility theory one has to choose the parameter(s) of the utility function in order to describe the level of risk aversion (or risk tolerance) of an agent. Similarly, on the conic finance side, the behavior of the market is further described by the distortion parameter $\gamma$. In addition, we also point out that Choquet integrals are common tools in decision theory. In particular, we recall the results of \cite{schmeidler89}, which characterize choices under uncertainty, the latter in the sense of \cite{knight}, in terms of Choquet integrals (for a representation result concerning Choquet integrals, see \cite{schmeidler86}, on which \cite{schmeidler89} is based). We further highlight that Choquet integrals can also be applied to option pricing problems under uncertainty by means of Choquet Brownian motions, introduced in \cite{choquetBrownian}, as done in \cite{choquetOptions}. \section{Liquidity-free option implied volatilities}\label{sec:liquidityFreeVols} From here onwards we consider European options only, and we assume to be either within the Black-Scholes or the Bachelier framework. This is because we are interested in backing out either log-normal or normal implied volatilities. Given a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in[0,T]}, \mathbb{P})$, we denote by $(X_t)_{t\in[0,T]}$ the process representing a ``generic'' underlying. In particular, by introducing an adjusted risk-neutral drift $r-\alpha$, one can define Black-Scholes dynamics via the stochastic differential equation (SDE) given, under the risk-neutral measure $\mathbb{Q}$ equivalent to $\mathbb{P}$ on $\mathcal{F}$, by \begin{equation}\label{eq:bs} dX = (r-\alpha) X dt + \sigma X dW. \end{equation} In \eqref{eq:bs}, $\sigma$ denotes the volatility term, and $(W_t)_{t\in[0,T]}$ a Brownian motion adapted to the filtration $(\mathcal{F}_t)_{t\in[0,T]}$. The parameter $\alpha$ can be defined according to the asset class considered. For instance, setting $\alpha=0$ outlines the standard Black-Scholes framework on a non-dividend paying underlying, setting $\alpha=q$ with $q$ denoting the continuous dividend yield corresponds to the Black-Scholes framework for an underlying paying continuous dividends, while setting $\alpha=r$ corresponds to the Black model for futures options; see \cite[Sec. 1.1.6]{optionFormulae} for further possible specifications. To take into account the possibility that the underlying asset can reach negative values, e.g., in the case of rates and oil prices\footnote{See \url{https://www.cmegroup.com/content/dam/cmegroup/notices/clearing/2020/04/Chadv20-152.pdf} for a note of the Chicago Mercantile Exchange concerning the possible use of the Bachelier formula due to the negative oil prices observed in 2020.}, option prices can also be quoted in terms of Bachelier (i.e., normal) implied volatilities. Therefore, in a similar manner as per \eqref{eq:bs} and using the same notation conventions, one can define the Bachelier SDE as \begin{equation}\label{eq:b} dX = (r-\alpha) X dt + \sigma dW. \end{equation} Via \eqref{eq:b} we have chosen to describe a generalized variant, in the sense of \cite[Sec. 1.1.6]{optionFormulae}, of the ``contemporary'' version of the Bachelier model as in \cite[Sec. 3.3]{musiela}. Note, however, that in the literature the SDE corresponding to the Bachelier model sometimes differs slightly from that outlined in \eqref{eq:b}, for instance by not considering the drift term (see \cite[Sec. 1.3.1]{optionFormulae}). In any case, independently of the exact specification of the Bachelier SDE considered, all the Bachelier-related calculations available in this article can be performed in the same manner, up to minor rearrangements.
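To visualize the difference between the two specifications (a sketch of our own; all parameter values are hypothetical), one can simulate one Euler path of \eqref{eq:bs} and of \eqref{eq:b} driven by the same Brownian increments; the drift term is common to both equations, only the diffusion coefficient differs, and the Bachelier path may become negative:
\begin{verbatim}
# Euler discretization of (eq:bs) and (eq:b) with shared Brownian increments.
import numpy as np

rng = np.random.default_rng(0)
T, n, r, alpha, X0 = 1.0, 252, 0.01, 0.0, 100.0
sigma_bs, sigma_b = 0.2, 20.0            # lognormal vs. normal volatility
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)

x_bs, x_b = np.empty(n + 1), np.empty(n + 1)
x_bs[0] = x_b[0] = X0
for k in range(n):
    x_bs[k + 1] = x_bs[k] + (r - alpha) * x_bs[k] * dt + sigma_bs * x_bs[k] * dW[k]
    x_b[k + 1] = x_b[k] + (r - alpha) * x_b[k] * dt + sigma_b * dW[k]
print(x_bs[-1], x_b[-1])  # the Bachelier path x_b is not sign-constrained
\end{verbatim}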
For a given strike $K$ and maturity $T$, we denote by $\mathcal{C}$ and $\mathcal{P}$ the prices of a call and a put option written on $X$ with such strike and maturity. However, when considering the Black-Scholes model \eqref{eq:bs} (Bachelier model \eqref{eq:b}), we will use $\callBS$ ($\callB$) and $\puttBS$ ($\puttB$) instead. Note that, depending on the context, we will make option prices explicitly depend on specific parameters only, as will be clear in the next sections. This is to keep the notation as light as possible. In any case, the dependency on both strike and maturity will always be omitted, as redundant in our context. In practice, implied volatilities, either normal or log-normal, are backed out from call and put options separately. More precisely, implied volatilities for call options are computed starting from the mid prices of these options and, similarly, the same applies to put implied volatilities. However, in principle, one would like to compute the real risk-neutral implied volatilities, without approximating risk-neutral prices by their mid counterparts. A similar problem has been analyzed in \cite{michielon}. Therein it is shown that, in the case of CDSs, under mild assumptions concerning the liquidity level of the market and the characteristics of the default time process, it is possible to strip risk-neutral default probabilities from bid and ask CDS quotes directly in a unique manner. The considerations available in \cite{michielon} are now extended to a more general setup. In particular, the methodology we highlight here is quite general: it can be applied to any contingent claim whose risk-neutral price depends on a single unknown parameter, provided that the former is strictly increasing with respect to the latter, and as soon as two basic additional conditions are satisfied. That is, the range of theoretical prices obtainable by changing the free parameter, as well as the liquidity level of the market, should be ``wide enough'', as we will explain more technically in Theorem~\ref{th:general}. Let $Y$ be a contingent claim, and denote by $\PV{Y(\lambda)}$ its risk-neutral price, assumed dependent on an unknown parameter $\lambda$. Further, let $b$ and $a$ denote its quoted bid and ask prices, respectively. The main result available in \cite{michielon} is recalled in Theorem \ref{th:general} in a more general fashion. The proof of Theorem \ref{th:general} follows from Lemmas 1, 2, 3 and Theorem 1 in \cite{michielon}, as the steps outlined therein can be followed in the same manner. For completeness, we have provided the aforementioned results, adapted to the more general context considered in the present article, in Appendix \ref{sec:proof}. \begin{theorem}\label{th:general} Let $Y$ be a contingent claim whose price depends on a parameter $\lambda>0$ such that the risk-neutral price of $Y$ is strictly increasing with respect to $\lambda$. Assume that $\inf_{\lambda>0}\PV{Y(\lambda)}<b$ and that $\sup_{\lambda>0}\PV{Y(\lambda)}>a$. This uniquely identifies an interval $[\lambda_b, \lambda_a]$ such that $\lambda\in[\lambda_b, \lambda_a]$ if and only if $b\leq\PV{Y(\lambda)}\leq a$.
Moreover, assume that for every $\lambda\in[\lambda_b, \lambda_a]$ there exists $\gamma>0$ such that $\ask{Y(\lambda),\gamma}-\bid{Y(\lambda),\gamma}=a-b$. Then, the constrained non-linear system \begin{equation}\label{eq:system} \begin{cases} \bid{Y(\lambda),\gamma} =b \\ \ask{Y(\lambda),\gamma} =a \end{cases} \end{equation} with \begin{equation}\nonumber b<\PV{Y(\lambda)}<a \end{equation} admits a solution, which is also unique. \end{theorem} We observe that both call and put option prices are strictly increasing with respect to the volatility of the underlying.\footnote{Note that the conditions $\inf_{\lambda>0}\PV{Y(\lambda)}<b$ and $\sup_{\lambda>0}\PV{Y(\lambda)}>a$ can be made more explicit in the case of European options. In particular, for a call option it results that $\inf_{\sigma>0}\call{\sigma}= e^{-rT}(e^{(r-\alpha) T}X_0-K)^+$, while $\sup_{\sigma>0}\call{\sigma}=e^{-\alpha T}X_0$. Similarly, for a put option it results that $\inf_{\sigma>0}\putt{\sigma}= e^{-rT}(K-e^{(r-\alpha) T}X_0)^+$, and that $\sup_{\sigma>0}\putt{\sigma}=e^{-rT}K$.} Denote by $b_\mathcal{C}$ ($b_\mathcal{P}$) and $a_\mathcal{C}$ ($a_\mathcal{P}$) the quoted bid and ask prices of the call (put), respectively. Ideally, one would aim to find an implied risk-neutral volatility $\sigma$ and two distortion parameters $\gamma_\mathcal{C}$ and $\gamma_\mathcal{P}$ such that the equalities \begin{equation}\label{eq:system4equations} \begin{cases} \bid{\call{\sigma},\gamma_\mathcal{C}} =\bidcall \\ \ask{\call{\sigma},\gamma_\mathcal{C}} =\askcall \\ \bid{\putt{\sigma},\gamma_\mathcal{P}} =\bidput \\ \ask{\putt{\sigma},\gamma_\mathcal{P}} =\askput \end{cases} \end{equation} with the constraints \begin{equation}\nonumber \begin{cases} \bidcall<\PV{\call{\sigma}}<\askcall\\ \bidput<\PV{\putt{\sigma}}<\askput \end{cases} \end{equation} are satisfied. However, it is in general not possible to solve \eqref{eq:system4equations} due to the obvious lack of degrees of freedom. Therefore, separate volatility and liquidity parameters should be used for calls and puts, as done in practice to compute call-put volatility spreads. In particular, one can solve \begin{equation}\nonumber \begin{cases} \bid{\call{\sigma},\gamma} =b_\mathcal{C} \\ \ask{\call{\sigma},\gamma} =a_\mathcal{C} \end{cases} \end{equation} with \begin{equation}\nonumber b_\mathcal{C}<\PV{\mathcal{C}(\sigma)}<a_\mathcal{C}, \end{equation} and obtain, by virtue of Theorem \ref{th:general}, that a unique solution exists, denoted by $(\sigma_\mathcal{C}, \gamma_\mathcal{C})$. Similarly, one can solve \begin{equation}\nonumber \begin{cases} \bid{\putt{\sigma},\gamma} =b_\mathcal{P} \\ \ask{\putt{\sigma},\gamma} =a_\mathcal{P} \end{cases} \end{equation} with \begin{equation}\nonumber b_\mathcal{P}<\PV{\mathcal{P}(\sigma)}<a_\mathcal{P} \end{equation} separately, and again obtain a unique solution, due to Theorem \ref{th:general}, denoted by $(\sigma_\mathcal{P}, \gamma_\mathcal{P})$. We call $\sigmacall$ and $\sigmaput$ the liquidity-free call and put implied volatilities, respectively. The quantities $\gamma_\mathcal{C}$ and $\gamma_\mathcal{P}$ denote the implied liquidity levels of the market for calls and puts, respectively. The approach proposed here differs from that of \cite[Sec. 5.4.3]{bookConicFinance}. Therein, the notion of conic (Black-Scholes) implied volatility is introduced. In particular, for a fixed (and known) distortion parameter, one can then imply a volatility for the bid and one for the ask.
However, our approach makes it possible to imply both the distortion parameter and the volatility simultaneously, without relying on an initial estimation procedure for the distortion itself. This is because our goal is to compute implied volatilities that can be interpreted as risk-neutral ones. In time series analysis, call-put volatility spreads, as outlined in Section \ref{sec:introduction}, can be used as regression variables for forecasting purposes. With the approach outlined here, one could not only introduce liquidity-free call-put volatility spreads in the regression model considered, but also include the implied distortion (i.e., liquidity) parameters as regression variables. Potentially, this could enhance the explanatory power of the regression models used for prediction purposes (see \cite{cva2} for an illustration of the explanatory power of the distortion parameter as far as liquidity is concerned). \subsection{Implied volatilities with the Wang transform}\label{sec:formulae} The choice of the distortion function to be used in the bid-ask calibration problem is arbitrary, provided that it is concave. Consequently, different possibilities are available, see \cite{cherny2009} and \cite[Sec. 4.7]{bookConicFinance}. However, for distributions of normal or log-normal random variables, which are often employed in financial applications, the Wang transform \cite{wang}, which is defined as \begin{equation}\label{eq:wang} \psi_\gamma(x)\coloneqq \Phi(\Phi^{-1}(x)+\gamma) \end{equation} with $\Phi(\,\cdot\,)$ denoting the cumulative distribution function of a standard normal random variable, is a convenient choice. This is because \eqref{eq:wang} still yields closed-form solutions for call and put option prices, see \cite[Sec. 5.4]{bookConicFinance}.\footnote{See Appendix \ref{sec:remark} for a remark concerning how the Wang transform can be a useful tool as soon as the distribution of a normal random variable is transformed via a non-decreasing and left-continuous function.} Therefore, under both Black-Scholes \eqref{eq:bs} and Bachelier \eqref{eq:b} settings, exact formulae can be used to calculate bid and ask option prices via the Wang transform.\footnote{Some other cases where the Wang transform produces analytical option price formulae are those of the Sprenkle, Boness and Samuelson models (see \cite[Sec. 1.31, 1.32 and 1.33]{optionFormulae}). However, note that computing the Wang transform is computationally expensive, as this requires the evaluation of both the cumulative distribution and quantile functions of a standard normal random variable. Therefore, for large datasets and when the Wang transform does not guarantee analytical formulae to exist, other choices for the distortion function might be more convenient (see \cite[Sec. 4.7]{bookConicFinance} for an overview).} Thus, our procedure to back out implied volatilities (and, consequently, implied distortion parameters) can be easily implemented, with the advantage that it does not require computing the integrals \eqref{eq:bid} and \eqref{eq:ask} numerically when the Wang transform is used.
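The following sketch (our own illustration, with hypothetical quotes and parameters; not code from \cite{michielon} or \cite{bookConicFinance}) solves the constrained two-by-two system of Theorem \ref{th:general} for a call, using the conic Black-Scholes closed forms recalled in the next paragraph, and compares the result with the usual mid-quote implied volatility:
\begin{verbatim}
# Solve bid(sigma, gamma) = b and ask(sigma, gamma) = a, where, under the
# Wang transform, bid/ask are Black-Scholes prices with the carry parameter
# shifted by +/- gamma*sigma/sqrt(T) (formulae recalled below in the text).
import numpy as np
from scipy.optimize import brentq, root
from scipy.stats import norm

def bs_call(X0, K, T, r, alpha, sigma):
    dp = (np.log(X0 / K) + (r - alpha + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return np.exp(-alpha * T) * X0 * norm.cdf(dp) \
        - np.exp(-r * T) * K * norm.cdf(dp - sigma * np.sqrt(T))

def conic_bid_ask(X0, K, T, r, alpha, sigma, gamma):
    shift = gamma * sigma / np.sqrt(T)
    return (bs_call(X0, K, T, r, alpha + shift, sigma),   # bid
            bs_call(X0, K, T, r, alpha - shift, sigma))   # ask

def liquidity_free_vol(b, a, X0, K, T, r, alpha, guess=(0.2, 0.1)):
    def residuals(z):
        bid, ask = conic_bid_ask(X0, K, T, r, alpha, *z)
        return [bid - b, ask - a]
    sigma, gamma = root(residuals, guess).x
    # the constraint b < PV < a should be verified on the returned solution
    return sigma, gamma

b, a, X0, K, T, r, alpha = 9.6, 10.4, 100.0, 100.0, 1.0, 0.01, 0.0  # hypothetical
sigma_rn, gamma = liquidity_free_vol(b, a, X0, K, T, r, alpha)
sigma_mid = brentq(lambda s: bs_call(X0, K, T, r, alpha, s) - 0.5 * (b + a), 1e-6, 5.0)
print(sigma_rn, gamma, sigma_mid)  # liquidity-free vol vs. the mid-quote proxy
\end{verbatim}
For at-the-money quotes the two volatilities are typically close, while away from the at-the-money point they may differ noticeably, as the examples in Section \ref{sec:example} illustrate.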
In the case of the Black-Scholes framework, one obtains that the risk-neutral price of a call option is given by \begin{equation}\nonumber \callBSarg{\alpha}=e^{-\alpha T}X_0\Phi(d_+)-e^{-rT}K\Phi(d_-), \end{equation} where \begin{equation}\nonumber d_+\coloneqq\frac{\ln\left(\frac{X_0}{K}\right)+(r-\alpha+\frac{1}{2}\sigma^2) T}{\sigma\sqrt{T}}, \end{equation} and with \begin{equation}\nonumber d_-\coloneqq d_+-\sigma\sqrt{T}=\frac{\ln\left(\frac{X_0}{K}\right)+(r-\alpha-\frac{1}{2}\sigma^2) T}{\sigma\sqrt{T}}. \end{equation} Further, \begin{equation}\nonumber \puttBSarg{\alpha}=e^{-rT}K\Phi(-d_-) -e^{-\alpha T}X_0\Phi(-d_+). \end{equation} \cite[Sec. 5.4.2]{bookConicFinance} obtain that, by considering the Wang transform under Black-Scholes settings, bid and ask prices for European calls and puts can be computed as $\bidGamma{\callBSarg{\alpha}}=\callBSarg{\alpha + \frac{\gamma\sigma}{\sqrt{T}}}$, $\askGamma{\callBSarg{\alpha}}=\callBSarg{\alpha - \frac{\gamma\sigma}{\sqrt{T}}}$, $\bidGamma{\puttBSarg{\alpha}}=\puttBSarg{\alpha - \frac{\gamma\sigma}{\sqrt{T}}}$ and, finally, $\askGamma{\puttBSarg{\alpha}}=\puttBSarg{\alpha + \frac{\gamma\sigma}{\sqrt{T}}}$. We now provide similar relationships in the case where the Bachelier model \eqref{eq:b} is considered.\footnote{Risk-neutral call and put option pricing formulae for the Bachelier model are available in Appendix \ref{sec:derivation}, for completeness.} Let $\cdf{X_T}{\,\cdot\,}$ denote the time-$T$ risk-neutral distribution of the underlying asset. First of all, we recall that for European vanilla options, if the underlying can reach negative values, then, in line with \cite[Sec. 5.5]{bookConicFinance}, the following formulae can be used to calculate bid and ask European option prices: \begin{equation}\label{eq:bidCall} \bidGamma{\mathcal{C}}=e^{-rT}\int_{K}^{\infty} (x-K)\,d\distortion{\cdf{X_T}{x}}, \end{equation} \begin{equation}\label{eq:askCall} \askGamma{\mathcal{C}}=e^{-rT}\int_{K}^{\infty} (K-x)\,d\distortion{1-\cdf{X_T}{x}}, \end{equation} \begin{equation}\label{eq:bidPut} \bidGamma{\mathcal{P}}=e^{-rT}\int_{-\infty}^{K} (x-K)\,d\distortion{1-\cdf{X_T}{x}}, \end{equation} and \begin{equation}\label{eq:askPut} \askGamma{\mathcal{P}}=e^{-rT}\int_{-\infty}^{K} (K-x)\,d\distortion{\cdf{X_T}{x}}. \end{equation} Observe that under both the Black-Scholes and Bachelier specifications \eqref{eq:bs} and \eqref{eq:b}, continuous probability density functions for the terminal risk-neutral distribution of the underlying asset are available. Therefore, the relationships \eqref{eq:bidCall}, \eqref{eq:askCall}, \eqref{eq:bidPut} and \eqref{eq:askPut} can be interpreted as both Riemann-Stieltjes and Lebesgue-Stieltjes integrals. Under the Bachelier dynamics \eqref{eq:b} the risk-neutral distribution of the underlying, at time $T$, is normal with mean $\bar{\mu}$ and variance $\bar{\sigma}^2$ as per \eqref{e:mean} and \eqref{e:variance} in Appendix \ref{sec:derivation}. If we consider a Wang transform with distortion parameter $\gamma$, we obtain that, at time $T$ and under the distorted measure, the underlying $X_T$ is still normally distributed with the same variance $\bar{\sigma}^2$, but this time with mean given by $\bar{\mu}_-\coloneqq\bar{\mu}-\gamma\bar{\sigma}$, see \cite{wang}.
Therefore, we can apply relationship \eqref{eq:bidCall} and obtain that \begin{align} \bidGamma{\callB}&=e^{-rT}\int_{K}^{\infty} (x-K)\,d\distortion{\cdf{X_T}{x}}\nonumber\\ &=e^{-rT}\int_{K}^{\infty} \frac{x-K}{\sigmabar\sqrt{2\pi}}e^{-\frac{1}{2}\left( \frac{x-\bar{\mu}_-}{\bar{\sigma}} \right)^2}\,dx\nonumber\\ &=e^{-rT}\int_{\frac{K-\bar{\mu}_-}{\sigmabar}}^{\infty} \frac{\bar{\mu}_-+\sigmabar x -K}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx\nonumber\\ &=e^{-rT}\left[(\bar{\mu}_--K)\int_{\frac{K-\bar{\mu}_-}{\sigmabar}}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx-\sigmabar\int_{\frac{K-\bar{\mu}_-}{\sigmabar}}^{\infty} \frac{-x}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx\right]\nonumber\\ &=e^{-rT}\left[(\bar{\mu}_--K)\Phi\left(\frac{\bar{\mu}_--K}{\sigmabar}\right)+\sigmabar\phi\left(\frac{\bar{\mu}_--K}{\sigmabar}\right)\right].\nonumber \end{align} We can now calculate the call ask price via \eqref{eq:askCall}. First we observe, see \cite{wang}, that \begin{equation}\label{eq:helper} \distortion{1-\cdf{X_T}{x}}=\distortion{1-\Phi\left(\frac{x-\mubar}{\sigmabar}\right)}=\distortion{\Phi\left(\frac{\mubar-x}{\sigmabar}\right)}=\Phi\left(\frac{\mubar-x+\gamma\sigmabar}{\sigmabar}\right). \end{equation} By setting $\bar{\mu}_+\coloneqq\bar{\mu}+\gamma\bar{\sigma}$ we obtain that \begin{align} \askGamma{\callB}&=e^{-rT}\int_{K}^{\infty} (K-x)\,d\distortion{1-\cdf{X_T}{x}}\nonumber\\ &=e^{-rT}\int_{K}^{\infty}\frac{x-K}{\sigmabar\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mubar_+}{\bar{\sigma}}\right)^2}\,dx\nonumber\\ &=e^{-rT}\left[(\mubar_+-K)\Phi\left(\frac{\mubar_+-K}{\sigmabar}\right)+\sigmabar\phi\left(\frac{\mubar_+-K}{\sigmabar}\right)\right].\nonumber \end{align} We now calculate the ask price of a European put option via \eqref{eq:askPut}. It results that \begin{align} \askGamma{\puttB}&=e^{-rT}\int_{-\infty}^{K} (K-x)\,d\distortion{\cdf{X_T}{x}}\nonumber\\ &=e^{-rT}\int_{-\infty}^K \frac{K-x}{\sigmabar\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mubar_-}{\sigmabar}\right)^2}\,dx\nonumber\\ &=e^{-rT}\int_{-\infty}^\frac{K-\mubar_-}{\sigmabar} \frac{K-\mubar_--\sigmabar x}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx\nonumber\\ &=e^{-rT}\left[(K-\mubar_-)\int_{-\infty}^\frac{K-\mubar_-}{\sigmabar}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx+\sigmabar\int_{-\infty}^\frac{K-\mubar_-}{\sigmabar}\frac{-x}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}\,dx\right]\nonumber\\ &=e^{-rT}\left[(K-\mubar_-)\Phi\left(\frac{K-\mubar_-}{\sigmabar}\right)+\sigmabar\phi\left(\frac{K-\mubar_-}{\sigmabar}\right)\right].\nonumber \end{align} Recalling \eqref{eq:helper}, the bid price of the put can be calculated using \eqref{eq:bidPut}, from which it follows that \begin{align} \bidGamma{\puttB}&=e^{-rT}\int_{-\infty}^{K} (x-K)\,d\distortion{1-\cdf{X_T}{x}}\nonumber\\ &=e^{-rT}\int_{-\infty}^{K}\frac{K-x}{\sigmabar\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mubar_+}{\sigmabar}\right)^2}\,dx\nonumber\\ &=e^{-rT}\left[(K-\mubar_+)\Phi\left(\frac{K-\mubar_+}{\sigmabar}\right)+\sigmabar\phi\left(\frac{K-\mubar_+}{\sigmabar}\right)\right].\nonumber \end{align} To summarize (see the notation in Appendix \ref{sec:derivation}), one obtains that $\bidGamma{\callB}=\callBarg{\mubar_-}$, $\askGamma{\callB}=\callBarg{\mubar_+}$, $\bidGamma{\puttB}=\puttBarg{\mubar_+}$, while $\askGamma{\puttB}=\puttBarg{\mubar_-}$.
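The formulae just obtained can be transcribed directly. The sketch below is our own illustration with hypothetical inputs; for the mean and variance of $X_T$ we assume the expressions $\bar\mu = X_0e^{(r-\alpha)T}$ and $\bar\sigma^2=\sigma^2\big(e^{2(r-\alpha)T}-1\big)/\big(2(r-\alpha)\big)$ (with the limit $\sigma^2 T$ when $r=\alpha$), which follow from solving the linear SDE \eqref{eq:b}; the authoritative expressions are \eqref{e:mean} and \eqref{e:variance} in Appendix \ref{sec:derivation}.
\begin{verbatim}
# Conic Bachelier call: bid = C_B(mu_bar - gamma*sigma_bar),
#                       ask = C_B(mu_bar + gamma*sigma_bar).
import numpy as np
from scipy.stats import norm

def bachelier_call(mu_bar, sigma_bar, K, T, r):
    d = (mu_bar - K) / sigma_bar
    return np.exp(-r * T) * ((mu_bar - K) * norm.cdf(d) + sigma_bar * norm.pdf(d))

def conic_bachelier_call(X0, K, T, r, alpha, sigma, gamma):
    drift = r - alpha
    mu_bar = X0 * np.exp(drift * T)
    # assumed variance of X_T under (eq:b); sigma^2 * T in the driftless limit
    var = sigma**2 * (np.expm1(2.0 * drift * T) / (2.0 * drift) if drift != 0.0 else T)
    sigma_bar = np.sqrt(var)
    bid = bachelier_call(mu_bar - gamma * sigma_bar, sigma_bar, K, T, r)
    ask = bachelier_call(mu_bar + gamma * sigma_bar, sigma_bar, K, T, r)
    return bid, ask

# hypothetical inputs
print(conic_bachelier_call(X0=100.0, K=100.0, T=1.0, r=0.01, alpha=0.0,
                           sigma=15.0, gamma=0.2))
\end{verbatim}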
\section{Example}\label{sec:example} Here we show how liquidity-free implied volatilities can be extracted from bid and ask prices. In particular, we consider European options on four different underlyings, i.e., European call options on the S\&P 500 index, European put options on the FTSE MIB index, European call options on UBS shares, and European put options on Deutsche Telekom shares. For each of the cases considered we report, for a given maturity (which is not the same for all the underlyings), bid and ask prices, and we compute risk-neutral and mid prices, absolute and relative liquidity spreads, implied risk-neutral and mid volatilities, as well as implied distortion parameters. All the aforementioned calculations have been performed for all the quoted options available for which both bid and ask prices could be retrieved.\footnote{In this section plots have been constructed with respect to moneyness, defined here as the ratio between a given strike price and the value of the underlying.} The Wang transform has been chosen as distortion in all the cases analyzed. We start by considering European call options on the S\&P 500 index, for which a wide range of strikes is available. These options are very liquid, as illustrated by Figures \ref{fig:SPX_bid_ask_call} and \ref{fig:SPX_absolute_spread_call} (note that the relative bid-ask spreads for deep out-of-the-money options in Figure \ref{fig:SPX_relative_spread_call} are large due to those options having small market value). This results in risk-neutral and mid prices that are very close to each other, as shown in Figure \ref{fig:SPX_price_call}. The risk-neutral and mid implied volatility smiles (see Figure \ref{fig:SPX_implied_volatility_call}) essentially overlap, as expected. The implied distortion parameters, illustrated in Figure \ref{fig:SPX_implied_distortion_call}, closely follow the trend of the relative bid-ask spreads of Figure \ref{fig:SPX_relative_spread_call}.
\begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{SPX_bid_ask_call} \caption{}\label{fig:SPX_bid_ask_call} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{SPX_price_call} \caption{}\label{fig:SPX_price_call} \end{subfigure} \end{center} \caption{Bid and ask prices for European call options on the S\&P 500 index expiring in 886 days (Options Price Reporting Authority), panel (a), and their corresponding risk-neutral and mid counterparts, panel (b).\label{fig:SPX}} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{SPX_absolute_spread_call} \caption{}\label{fig:SPX_absolute_spread_call} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{SPX_relative_spread_call} \caption{}\label{fig:SPX_relative_spread_call} \end{subfigure} \end{center} \caption{Absolute bid-ask spreads for the options considered in Figure \ref{fig:SPX}, panel (a), and their relative counterparts (calculated with respect to mid prices), panel (b).} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{SPX_implied_volatility_call} \caption{}\label{fig:SPX_implied_volatility_call} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{SPX_implied_distortion_call} \caption{}\label{fig:SPX_implied_distortion_call} \end{subfigure} \end{center} \caption{Risk-neutral and mid implied volatilities, panel (a), and implied liquidity levels, panel (b), for the options considered in Figure \ref{fig:SPX}.} \end{figure} We now consider European put options on the FTSE MIB index. In this case fewer strikes are traded compared to the S\&P 500 case. However, as Figure \ref{fig:FTSE_bid_ask_put} illustrates, these options are still very liquid; see also Figures \ref{fig:FTSE_absolute_spread_put} and \ref{fig:FTSE_relative_spread_put}. This is further confirmed by the low levels of the implied liquidity parameter of Figure \ref{fig:FTSE_implied_distortion_put}. We therefore still obtain risk-neutral implied volatilities and prices that are closely approximated by their mid counterparts; see Figures \ref{fig:FTSE_implied_volatility_put} and \ref{fig:FTSE_price_put}, respectively.
\begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{FTSE_bid_ask_put} \caption{}\label{fig:FTSE_bid_ask_put} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{FTSE_price_put} \caption{}\label{fig:FTSE_price_put} \end{subfigure} \end{center} \caption{Bid and ask prices for European put options on the FTSE MIB (Milan Stock Exchange) expiring in 249 days, panel (a), and their corresponding risk-neutral and mid counterparts, panel (b).\label{fig:FTSE}} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{FTSE_absolute_spread_put} \caption{}\label{fig:FTSE_absolute_spread_put} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{FTSE_relative_spread_put} \caption{}\label{fig:FTSE_relative_spread_put} \end{subfigure} \end{center} \caption{Absolute bid-ask spreads for the options considered in Figure \ref{fig:FTSE}, panel (a), and their relative counterparts (calculated with respect to mid prices), panel (b).} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{FTSE_implied_volatility_put} \caption{}\label{fig:FTSE_implied_volatility_put} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{FTSE_implied_distortion_put} \caption{}\label{fig:FTSE_implied_distortion_put} \end{subfigure} \end{center} \caption{Risk-neutral and mid implied volatilities, panel (a), and implied liquidity levels, panel (b), for the options considered in Figure \ref{fig:FTSE}.} \end{figure} We now consider European call options on UBS. As is clear from Figures \ref{fig:UBS_bid_ask_call}, \ref{fig:UBS_absolute_spread_call}, \ref{fig:UBS_relative_spread_call} and \ref{fig:UBS_implied_distortion_call}, these options are less liquid than those considered in the two cases above (i.e., those on the S\&P 500 and the FTSE MIB indices, respectively). This results in risk-neutral and mid implied volatility smiles that, for deep out-of-the-money, but especially for deep in-the-money options, exhibit non-negligible differences, with mid implied volatilities overestimating their risk-neutral counterparts by up to 2-3\% in the former case, and by up to 9-10\% in the latter case; see Figure \ref{fig:UBS_implied_volatility_call}. Note that deep in-the-money and out-of-the-money options have small vegas, which leads to risk-neutral and mid option prices being close to each other; see Figure \ref{fig:UBS_price_call}.
\begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{UBS_bid_ask_call} \caption{}\label{fig:UBS_bid_ask_call} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{UBS_price_call} \caption{}\label{fig:UBS_price_call} \end{subfigure} \end{center} \caption{Bid and ask prices for European call options on UBS (Eurex) expiring in 345 days, panel (a), and their corresponding risk-neutral and mid counterparts, panel (b).\label{fig:UBS}} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{UBS_absolute_spread_call} \caption{}\label{fig:UBS_absolute_spread_call} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{UBS_relative_spread_call} \caption{}\label{fig:UBS_relative_spread_call} \end{subfigure} \end{center} \caption{Absolute bid-ask spreads for the options considered in Figure \ref{fig:UBS}, panel (a), and their relative counterparts (calculated with respect to mid prices), panel (b).} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{UBS_implied_volatility_call} \caption{}\label{fig:UBS_implied_volatility_call} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{UBS_implied_distortion_call} \caption{}\label{fig:UBS_implied_distortion_call} \end{subfigure} \end{center} \caption{Risk-neutral and mid implied volatilities, panel (a), and implied liquidity levels, panel (b), for the options considered in Figure \ref{fig:UBS}.} \end{figure} As the last case we consider that of European put options on Deutsche Telekom; see Figure \ref{fig:DTEG_bid_ask_put}. Also in this case liquidity is not as high as for the options on the S\&P 500 and FTSE MIB indices. This is illustrated by the high levels of bid-ask spreads displayed in Figures \ref{fig:DTEG_absolute_spread_put} and \ref{fig:DTEG_relative_spread_put}, and reiterated by the high implied liquidity levels of Figure \ref{fig:DTEG_implied_distortion_put}. Due to the low liquidity for both in-the-money and out-of-the-money options for this particular underlying, differences between risk-neutral and mid implied volatilities are considerable; see Figure \ref{fig:DTEG_implied_volatility_put}. In the former case we observe mid implied volatilities underestimating their risk-neutral counterparts by up to 9-10\%, while in the latter case mid implied volatilities overestimate risk-neutral ones, with differences of up to 14-15\%. Also in this case mid prices are good proxies for their risk-neutral counterparts; see Figure \ref{fig:DTEG_price_put}. This is again due to the fact that, close to the at-the-money point, liquidity is high, while far from it, even if liquidity decreases, options are not very sensitive to volatility changes.
\begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{DTEG_bid_ask_put} \caption{}\label{fig:DTEG_bid_ask_put} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{DTEG_price_put} \caption{}\label{fig:DTEG_price_put} \end{subfigure} \end{center} \caption{Bid and ask prices for European put options on Deutsche Telekom (Eurex) expiring in 345 days, panel (a), and their corresponding risk-neutral and mid counterparts, panel (b).\label{fig:DTEG}} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{DTEG_absolute_spread_put} \caption{}\label{fig:DTEG_absolute_spread_put} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{DTEG_relative_spread_put} \caption{}\label{fig:DTEG_relative_spread_put} \end{subfigure} \end{center} \caption{Absolute bid-ask spreads for the options considered in Figure \ref{fig:DTEG}, panel (a), and their relative counterparts (calculated with respect to mid prices), panel (b).\label{fig:DTEG_spreads}} \end{figure} \begin{figure}[H] \begin{center} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{DTEG_implied_volatility_put} \caption{}\label{fig:DTEG_implied_volatility_put} \end{subfigure}\begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=0.5]{DTEG_implied_distortion_put} \caption{}\label{fig:DTEG_implied_distortion_put} \end{subfigure} \end{center} \caption{Risk-neutral and mid implied volatilities, panel (a), and implied liquidity levels, panel (b), for the options considered in Figure \ref{fig:DTEG}.} \end{figure} Overall, we see that for very liquid instruments mid implied volatilities are very well approximated by their risk-neutral counterparts, as expected. When liquidity decreases, however, implied risk-neutral volatilities for in-the-money and out-of-the-money options might differ noticeably from mid volatilities. Predictably, these differences do not have a considerable impact on option prices. This is because close to the at-the-money point liquidity is in general high, making risk-neutral and mid implied volatilities close to each other. On the other hand, for in-the-money and out-of-the-money options liquidity can considerably affect volatilities. However, these options have low vegas, which makes their prices not very sensitive to changes in the volatility of the underlying. Nonetheless, whether risk-neutral and mid prices are close to each other is beside the point: for both risk-neutral and mid prices there is no liquidity in the market, so from a trading perspective only bid and ask prices matter. What we are interested in is assessing whether implied volatilities can be extracted from traded option quotes in a manner consistent with the risk-neutral framework. In particular, what we believe is important is to assess how considering bid and ask prices as a starting point, instead of their mid counterparts, can affect the shape of the volatility smile. As we have seen in the examples considered, computing implied volatilities from bid and ask prices instead of from mid prices can in some cases have a large impact on the implied volatility figures, and this can have implications in different contexts.
As an example, if a smile model is calibrated by means of a least-squares approach to the available implied volatilities, then differences such as those observed in Figures \ref{fig:UBS_implied_volatility_call} and \ref{fig:DTEG_implied_volatility_put} would result in risk-neutral and mid volatility smiles with different shapes, as in-the-money and out-of-the-money implied volatilities would affect the calibration as a whole. Furthermore, when simulation models for over-the-counter derivatives calibrated to implied volatilities are used, as for instance credit models, choosing to input risk-neutral rather than mid implied volatilities might have a non-marginal impact on running contracts which, through time, due to market movements, ended up in-the-money or out-of-the-money. Therefore, the examples considered, as well as the theoretical consistency of the methodology outlined in this article with the risk-neutral paradigm (a paradigm that is not satisfied when mid prices are considered), make liquidity-free implied volatilities a potentially useful tool in financial modeling. \section{Conclusion}\label{sec:conclusion} In this article we have considered the problem of computing implied volatilities from bid and ask European option prices directly, i.e., without relying on mid price approximations. The methodology we have outlined relies on the conic finance framework of \cite{cherny2009}. Based on the results of \cite{michielon}, it is possible, given the bid and ask prices of an option, to imply both the risk-neutral volatility and the liquidity level of the market at the same time. In particular, in the case of Black-Scholes and Bachelier specifications, this procedure is particularly efficient when the Wang transform is used, as the latter allows bid and ask option prices to be computed analytically. In the case of the Bachelier model, these analytical formulae have been provided. The methodology outlined in this article relies on some intuitive and simple assumptions concerning the liquidity level of the market and the width of the range of option prices with respect to changes in the volatility parameter. A potential application of the technique we propose is the construction of liquidity-free implied volatility surfaces (and, consequently, corresponding implied liquidity surfaces at the same time). These liquidity-free implied volatility surfaces could be used as calibration inputs for different models under risk-neutral settings in a consistent manner. \section*{Disclaimer} The opinions expressed in this paper are solely those of the authors and do not necessarily reflect those of their current and past employers. \clearpage \bibliographystyle{apalike}
\section{Introduction} Let ${\mathbb R}_+^d=\{x=(\widetilde{x},x_d):\, x_d>0\}$ be the upper half-space in the $d$-dimensional Euclidean space ${\mathbb R}^d$. In this paper we study the Dirichlet form $({\mathcal E}, {\mathcal F})$ on $L^2({\mathbb R}^d_+, dx)$ defined by \begin{equation}\label{e:form} {\mathcal E}(u,v):=\frac12 \int_{{\mathbb R}^d_+}\int_{{\mathbb R}^d_+} (u(x)-u(y))(v(x)-v(y))J(x,y)\, dy\, dx, \end{equation} where ${\mathcal F}$ is the closure of $C_c^{\infty}({{\mathbb R}^d_+})$ under ${\mathcal E}_1:={\mathcal E}+(\cdot, \cdot)_{L^2({{\mathbb R}^d_+},dx)}$. Our main assumption is on the jump kernel $J(x,y)$: we assume that $J(x,y)=|x-y|^{-d-\alpha}\sB(x,y)$, $\alpha\in (0,2)$, where $(x,y)\mapsto \sB(x,y)$ is a symmetric function satisfying certain H\"older-type and scaling conditions and, most importantly, is comparable to the function \begin{eqnarray}\label{e:B(x,y)} \widetilde{B}(x,y)&:=& \Big(\frac{x_d\wedge y_d}{|x-y|}\wedge 1\Big)^{\beta_1}\Big(\frac{x_d\vee y_d}{|x-y|}\wedge 1\Big)^{\beta_2} \left[ \log\Big(1+\frac{(x_d\vee y_d)\wedge |x-y|}{x_d\wedge y_d\wedge |x-y|}\Big)\right]^{\beta_3}\nonumber \\ & & \times \left[\log \Big(1+\frac{|x-y|}{(x_d\vee y_d)\wedge |x-y|}\Big)\right]^{\beta_4}. \end{eqnarray} Here $\beta_1, \beta_2, \beta_3, \beta_4$ are non-negative parameters such that $\beta_1>0$ if $\beta_3>0$, and $\beta_2>0$ if $\beta_4>0$. Here and below, $a\wedge b:=\min \{a, b\}$ and $a\vee b:=\max\{a, b\}$. The precise assumptions on $\sB(x,y)$ are given in Section \ref{s:SP}. Although we allow $\sB(x,y)\equiv 1$, our focus is on the case when $\beta_1 \vee \beta_2>0$. In such a case, the function $\sB(x,y)$ vanishes at the boundary of ${\mathbb R}^d_+$, and we call the corresponding Dirichlet form degenerate at the boundary. We refer to $\sB(x,y)$ as the boundary part of the jump kernel $J(x,y)$. This setting was introduced in \cite[Section 5]{KSV}. The Hunt process associated with the Dirichlet form $({\mathcal E}, {\mathcal F})$ will be denoted by $Y=(Y_t, {\mathbb P}_x)$ and its lifetime by $\zeta$. Our motivation to study the form \eqref{e:form} and the corresponding process $Y$ comes from two sources. Firstly, note that in the case $J(x,y)=|x-y|^{-d-\alpha}$ (i.e., $\sB(x,y)\equiv 1$), the process $Y$ is the censored $\alpha$-stable process in the half-space ${\mathbb R}^d_+$, which was introduced and studied in \cite{BBC} (also for more general state spaces than ${\mathbb R}^d_+$). Two main results of \cite{BBC} can be roughly described as follows: (1) There is a dichotomy between the cases $\alpha\in (1,2)$ and $\alpha\in (0,1]$. In the former case the process $Y$ has finite lifetime $\zeta$ and approaches the boundary of the state space at $\zeta$, while in the latter, $Y$ is conservative and never approaches the boundary; (2) When the state space $D$ is a $C^{1,1}$ open set and $\alpha\in (1,2)$, the boundary Harnack principle holds with the exact decay rate $\delta_D(x)^{\alpha-1}$ (here $\delta_D(x)$ denotes the distance from the point $x$ to the boundary of $D$). Shortly after, in the case of a bounded $C^{1,1}$ open set and $\alpha\in (1,2)$, sharp two-sided Green function estimates were established in \cite{CK02}. Secondly, a Dirichlet form related to \eqref{e:form} was introduced in \cite{KSV} and further studied in \cite{KSV21}.
There we considered the form $({\mathcal E}^{\kappa}, {\mathcal F}^{\kappa})$, where \begin{equation}\label{e:form-killing} {\mathcal E}^{\kappa}(u,v)={\mathcal E}(u,v)+\int_{{\mathbb R}^d_+}u(x)v(x)\kappa(x)\, dx\, , \end{equation} and ${\mathcal F}^{\kappa}={\mathcal F}\cap L^2({\mathbb R}^d_+, \kappa(x)dx)$. The killing function is given by $\kappa(x)=C(\alpha, p, \sB)x_d^{-\alpha}$, where $C(\alpha, p, \sB)$ is a semi-explicit, strictly positive and finite constant depending on $\alpha$, $\sB$ and a parameter $p\in ((\alpha-1)_+, \alpha+\beta_1)$. The investigation of the form \eqref{e:form-killing} was initiated in \cite{KSV} and completed in \cite{KSV21} with two main results: sharp two-sided Green function estimates for all admissible values of the parameters involved in $\widetilde{B}(x,y)$, cf.~\cite[Theorem 1.1]{KSV21}, and a full identification of the parameters for which the boundary Harnack principle holds true, cf.~\cite[Theorem 1.2 and Theorem 1.3]{KSV21}. In proving those results, the strict positivity of the killing function was used in an essential way in several places. This includes the proofs of the finiteness of the lifetime, of the Carleson estimate, and of the decay of the Green function at the boundary. The goal of this paper is to extend the main results of \cite{KSV21} to the Dirichlet form \eqref{e:form} (which has no killing) in the case $\alpha\in (1,2)$. Due to the fact that $\lim_{p\downarrow (\alpha-1)_+}C(\alpha, p, \sB)=0$, this can be considered a limiting case of the setting in \cite{KSV21}. Theorem \ref{t:BHP} below can be viewed as a generalization of the corresponding result in \cite{BBC}, in the case of the state space ${\mathbb R}^d_+$ and $\alpha\in (1,2)$, to jump kernels degenerate at the boundary, while Theorem \ref{t:Green} is related to the main result of \cite{CK02}. The following two theorems are the main contribution of this paper. For $a,b>0$ let $D_{\widetilde{w}}(a,b):=\{x=(\widetilde{x}, x_d)\in {\mathbb R}^d:\, |\widetilde{x}-\widetilde{w}|<a, 0<x_d<b\}$. Assumptions \textbf{(A1)}-\textbf{(A4)} are given in Section \ref{s:SP}. \begin{thm}\label{t:BHP} Suppose that $\alpha\in (1,2)$ and that $\sB$ satisfies \textbf{(A1)}-\textbf{(A4)}. There exists $C \ge 1$ such that for all $r>0$, all $\widetilde{w} \in {\mathbb R}^{d-1}$, and any non-negative function $f$ in ${\mathbb R}^d_+$ which is harmonic in $D_{\widetilde{w}}(2r, 2r)$ with respect to $Y$ and vanishes continuously on $B(({\widetilde{w}},0), 2r)\cap \partial {\mathbb R}^d_+$, we have \begin{equation}\label{e:TAMSe1.8new} \frac{f(x)}{x_d^{ \alpha-1}}\le C\frac{f(y)}{y_d^{\alpha-1}}, \quad x, y\in D_{\widetilde{w}}(r/2, r/2) . \end{equation} \end{thm} We would like to mention here that, even though the rate of boundary decay of harmonic functions is different from the one in \cite{KSV}, we still use the main result of \cite{KSV} to prove Theorem \ref{t:BHP}. More specifically, to bound the exit distributions in Lemma \ref{e:POTAe7.14} from below, we use the lower bound for the corresponding exit distributions in \cite{KSV}. The latter is the most complicated and technical part of \cite{KSV}. We point out in passing the following observation: in order to establish boundary theory for non-local operators with singular kernels (with no critical killing), the corresponding non-local operators with critical killing can play an important role. Let $G(x,y)$, $x,y\in {\mathbb R}^d_+$, denote the Green function of the process $Y$ (see Section \ref{s:EGP} for the existence of the Green function).
\begin{thm}\label{t:Green} Suppose that $\alpha\in (1,2)$ and $d > (\alpha+\beta_1+\beta_2)\wedge 2$. Assume that $\sB$ satisfies \textbf{(A1)}-\textbf{(A4)}. Then there exists $C>1$ such that for all $x,y\in {\mathbb R}^d_+$, \begin{eqnarray}\label{e:Green} \lefteqn{C^{-1} \left(\frac{x_d}{|x-y|} \wedge 1 \right)^{\alpha-1}\left(\frac{y_d}{|x-y|} \wedge 1 \right)^{\alpha-1} \frac{1}{|x-y|^{d-\alpha}} \le G (x,y)} \nonumber \\ & \le & C \left(\frac{x_d}{|x-y|} \wedge 1 \right)^{\alpha-1}\left(\frac{y_d}{|x-y|} \wedge 1 \right)^{\alpha-1} \frac{1}{|x-y|^{d-\alpha}}\, . \end{eqnarray} \end{thm} Note that in both results we have assumed that $\alpha\in (1,2)$. The case $\alpha\in (0,1]$ is qualitatively different and new methods are needed to analyze it. We leave this case for future research. Now we explain the content of the paper, our strategy of proving the results and differences to the methods used in \cite{BBC} and \cite{KSV, KSV21}. In Section \ref{s:SP} we precisely introduce the setup and assumptions on the boundary function $\sB(x,y)$, and recall some of the relevant results from \cite{KSV}. The goal of Section \ref{s:H} is to prove that in case $\alpha\in (1,2)$, the process $Y$ has finite lifetime and is therefore transient. The proof is new and relies on a Hardy-type inequality, see Proposition \ref{t:hardy}. This inequality implies that ${\mathcal F}\neq \overline{{\mathcal F}}$, where $\overline{{\mathcal F}}$ is the closure of $C_c^{\infty}({\overline {\mathbb R}^d_+})$ under ${\mathcal E}_1={\mathcal E}+(\cdot, \cdot)_{L^2({{\mathbb R}^d_+},dx)}$. This implies that $Y$ is a (proper) subprocess of $\overline{Y}$ -- the Hunt process associated with $({\mathcal E}, \overline{{\mathcal F}})$, hence the lifetime of $Y$ is finite. A consequence of finite lifetime is Corollary \ref{c:lemma4-1} which has two parts: The first one shows that the process $Y$ approaches the boundary at the lifetime, while the second part \color{black} replaces \cite[Lemma 4.1]{KSV} in the standard proof of the Carleson inequality, see Theorem \ref{t:carleson}. Section \ref{s:D} is devoted to proving Dynkin's formula for some non-compactly supported and non-smooth functions. Let \begin{equation}\label{e:operator} L_{\alpha}^\sB f(x):=\textrm{p.v.}\int_{{\mathbb R}^d_+}(f(y)-f(x))J(x,y)\, dy\ \end{equation} be the operator corresponding to the form $({\mathcal E}, {\mathcal F})$, defined for all $f:{\mathbb R}^d_+\to {\mathbb R}$ for which the principal value integral makes sense. It is straightforward to see that $L_{\alpha}^\sB x_d^{\alpha-1}=0$, which can be understood as $x\mapsto x_d^{\alpha-1}$ being harmonic in the analytic sense. See Lemma \ref{l:LB-on-g} below. In order to use probabilistic methods, it is crucial to show that this function is harmonic in the probabilistic sense. The proofs of \cite[Lemmas 3.3 and 5.1]{BBC} rely on using the isotropic stable process and its part process in ${\mathbb R}^d_+$. Since these two processes are of no help to us in the present setting, we use instead Dynkin's formula for barriers, cf.~Proposition \ref{l:dynkin-hp}. The proof of this formula is a slight modification of the arguments in \cite[Section 9]{KSV}. Section \ref{s:BHP} is devoted to the proof of Theorem \ref{t:BHP}. We first argue that the proofs of some results from \cite{KSV} in the case $p>\alpha-1$ are easily modified to the case $p=\alpha-1$. 
Then we show that for any function $f$ as in the statement of Theorem \ref{t:BHP}, it holds that $$ \frac{f(x)}{f(y)}\asymp \frac{ {\mathbb P}_x\big(Y_{\tau_{D_{\widetilde{w}}(r/2, r/2)}}\in D(1, 1)\big)} { {\mathbb P}_y\big(Y_{\tau_{D_{\widetilde{w}}(r/2, r/2)}}\in D(1, 1)\big)}, \quad x,y\in D_{\widetilde{w}}(r/2, r/2). $$ Since $x\mapsto x_d^{\alpha-1}$ satisfies the conditions in Theorem \ref{t:BHP}, the assertion of Theorem \ref{t:BHP} is valid. In the first part of Section \ref{s:EGP} we present the proof of Theorem \ref{t:Green}. The proof uses some results from \cite{KSV21}, scaling and the boundary Harnack principle. In the second part we give sharp estimates of the Green potential of $x_d^\gamma$ for $\gamma>-\alpha$. Again, we argue that, using the boundary Harnack principle, proofs of some lower bounds of the killed Green function obtained in \cite{KSV21} for $p>\alpha-1$ are valid without any change for the case $p=\alpha-1$. Having these Green function estimates one can apply results from \cite{AGV} to get the estimates of the Green potentials. We end the introduction with an explanation of the connection between the process $Y$ and the process $Y^{\kappa}$ associated with the Dirichlet form $({\mathcal E}^{\kappa}, {\mathcal F}^{\kappa})$. This connection is analogous to the one between the censored stable process and the killed stable process, cf.~\cite[Theorem 2.1]{BBC}. Namely, the process $Y$ can be obtained from $Y^{\kappa}$ through either the Ikeda-Nagasawa-Watanabe piecing together procedure, or through the Feynman-Kac transform via $\exp\int_0^t \kappa(Y^{\kappa}_t)dt$. The case $\sB\equiv 1$ and $\kappa(x)=C(\alpha/2, \alpha, \sB)x_d^{-\alpha}$ corresponds exactly to the isotropic $\alpha$-stable process killed upon exiting ${\mathbb R}^d_+$. Throughout this paper, the positive constants $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$, $\theta$ , $r_0$, $n_0$ will remain the same. We will use the following convention: Lower case letters $c, c_i, i=1,2, \dots$ are used to denote constants in the proofs and the labeling of these constants starts anew in each proof. The notation $c_i=c_i(a,b,c,\ldots)$, $i=0,1,2, \dots$ indicates constants depending on $a, b, c, \ldots$. We will not specify the dependency on $d$. We will use ``$:=$" to denote a definition, which is read as ``is defined to be". For any $x\in {\mathbb R}^d$ and $r>0$, we use $B(x, r)$ to denote the open ball of radius $r$ centered at $x$. \bigskip \section{Setup and Preliminary}\label{s:SP} In this section we precisely describe the setup and recall some preliminary results from earlier works. Let $d \ge 1$, $\alpha\in (0,2)$, $j(|x-y|)=|x-y|^{-\alpha-d}$ and $J(x,y)=j(|x-y|)\sB(x,y)$. We first give the assumptions on the boundary function $\sB(x,y)$. \color{black} \noindent \textbf{(A1)} $\sB(x,y)=\sB(y,x)$ for all $x,y\in {\mathbb R}^d_+$. \medskip \noindent \textbf{(A2)} If $\alpha \ge1$, \color{black} there exist $\theta>\alpha-1$ and $C>0$ such that $$ |\sB(x, x)-\sB(x,y)|\le C\left(\frac{|x-y|}{x_d\wedge y_d}\right)^{\theta}\,. $$ \medskip \noindent \textbf{(A3)} There exist $C\ge 1$ and parameters $\beta_1, \beta_2, \beta_3, \beta_4 \color{black} \ge 0$, with $\beta_1>0$ if $\beta_3 >0$, and $\beta_2>0$ if $\beta_4>0$, such that \begin{equation}\label{e:B7} C^{-1}\widetilde{B}(x,y)\le \sB(x,y)\le C \widetilde{B}(x,y)\, ,\qquad x,y\in {\mathbb R}^d_+\, , \end{equation} where $\widetilde{B}(x,y)$ is defined in \eqref{e:B(x,y)}. 
\medskip \noindent \textbf{(A4)} For all $x,y\in {\mathbb R}^d_+$ and $a>0$, $\sB(ax,ay)=\sB(x,y)$. In case $d\ge 2$, for all $x,y\in {\mathbb R}^d_+$ and $\widetilde{z}\in {\mathbb R}^{d-1}$, $\sB(x+(\widetilde{z},0), y+(\widetilde{z},0))=\sB(x, y)$. For examples of functions $\sB$ satisfying \textbf{(A1)}-\textbf{(A4)}, see \cite{KSV, KSV21}. Assumption \color{black} \textbf{(A3)} implies that $\sB(x, y)$ is bounded. Note that \textbf{(A4)} implies that $x\mapsto \sB(x, x)$ is a constant on ${\mathbb R}^d_+$. Without loss of generality, we will assume that $\sB(x, x)=1$. We observe that if $\beta_4>0$, then, for any $\varepsilon\in (0, \beta_2)$, there exists $c_\varepsilon>0$ such that \begin{align}\label{e:B7_2} &(\log 2)^{-\beta_4}\widetilde{B}_{\beta_1, \beta_2, \beta_3, 0}(x,y) \le \widetilde{B}_{\beta_1, \beta_2, \beta_3, \beta_4}(x,y) \le c_{\varepsilon} \widetilde{B}_{\beta_1, \beta_2-\varepsilon, \beta_3, 0}(x,y). \end{align} {\it Throughout the paper we always assume that} \begin{align*} J(x,y)&=j(|x-y|) \sB(x,y) \text{ on } {\mathbb R}_+^d\times {\mathbb R}_+^d \text{ with } \sB \text{ satisfying } \textbf{(A1)}-\textbf{(A4)} \text{ and } \sB(x, x)=1. \end{align*} Let $\overline {\mathbb R}_+^d=\{x=(\widetilde{x},x_d):\, x_d \ge 0\}$. Define \begin{align*} {\mathcal E}(u,v)&:=\frac12 \int_{{\mathbb R}^d_+}\int_{{\mathbb R}^d_+} (u(x)-u(y))(v(x)-v(y))J(x,y)\, dy\, dx\\ &= \frac12 \int_{\overline {\mathbb R}^d_+}\int_{\overline {\mathbb R}^d_+} (u(x)-u(y))(v(x)-v(y))J(x,y)\, dy\, dx. \end{align*} By Fatou's lemma, $({\mathcal E}, C_c^{\infty}({{\mathbb R}^d_+}))$ and $({\mathcal E}, C_c^{\infty}({\overline {\mathbb R}^d_+}))$ are closable in $L^2({{\mathbb R}^d_+}, dx)(=L^2({\overline {\mathbb R}^d_+}, dx))$. Let ${\mathcal F}$ be the closure of $C_c^{\infty}({{\mathbb R}^d_+})$ under ${\mathcal E}_1:={\mathcal E}+(\cdot, \cdot)_{L^2({{\mathbb R}^d_+},dx)}$ and let $\overline {\mathcal F}$ be the closure of $C_c^{\infty}({\overline {\mathbb R}^d_+})$ under ${\mathcal E}_1={\mathcal E}+(\cdot, \cdot)_{L^2( {\mathbb R}^d_+, dx)}$. Then $({\mathcal E}, {\mathcal F})$ and $({\mathcal E}, \overline {\mathcal F})$ are regular Dirichlet forms. Let $((Y_t)_{t\ge 0}, ({\mathbb P}_x)_{x\in {{\mathbb R}^d_+}\setminus {\mathcal N}})$ be the Hunt process associated with $({\mathcal E}, {\mathcal F})$ whose lifetime is $\zeta$. By \cite[Proposition 3.2]{KSV}, the exceptional set ${\mathcal N}$ can be taken to be the empty set. We add a cemetery point $\partial$ to the state space ${{\mathbb R}^d_+}$ and define $Y_t=\partial$ for $t\ge \zeta$. Let $((\overline Y_t)_{t\ge 0}, ({\mathbb P}_x)_{x\in {\overline {\mathbb R}^d_+}\setminus {\mathcal N}_0})$ be the Hunt process associated with $({\mathcal E}, \overline {\mathcal F})$ where ${\mathcal N}_0$ is an exceptional set. Let $({\mathcal E}, \overline{{\mathcal F}}_{{\mathbb R}^d_+})$ be the part form of $({\mathcal E}, \overline{{\mathcal F}})$ on ${\mathbb R}^d_+$. i.e., the form corresponding to the process $\overline{Y}$ killed at the exit time $\tau_{{\mathbb R}^d_+}:=\inf\{t>0:\, \overline{Y}_t\notin {\mathbb R}^d_+\}$. It follows from \cite[Theorem 4.4.3(i)]{FOT} that $({\mathcal E}, \overline{{\mathcal F}}_{{\mathbb R}^d_+})$ is a regular Dirichlet form on $L^2({{\mathbb R}^d_+},dx)$ and that $C^{\infty}_c({\mathbb R}^d_+)$ is its core. Hence $\overline{{\mathcal F}}_{{\mathbb R}^d_+}={\mathcal F}$ implying that $\overline{Y}$ killed upon exiting ${\mathbb R}^d_+$ is equal to $Y$. 
Thus we conclude that $Y$ is a subprocess of $\overline{Y}$, that the exceptional set ${\mathcal N}_0$ can be taken to be a subset of $\partial {\mathbb R}^d_+,$ and that the lifetime of $Y$ can be identified with $\tau_{{\mathbb R}^d_+}$. Suppose that for all $x\in {\mathbb R}^d_+$ it holds that ${\mathbb P}_x(\tau_{{\mathbb R}^d_+}=\infty)=1$. Then $(Y_t, {\mathbb P}_x, x\in {\mathbb R}^d_+)\stackrel{d}{=} (\overline{Y}_t, {\mathbb P}_x, x\in {\mathbb R}^d_+)$ implying that ${\mathcal F}=\overline{{\mathcal F}}_{{\mathbb R}^d_+}=\overline{{\mathcal F}}$. For any $r>0$, define a process $Y^{(r)}$ by $Y^{(r)}_t:=r Y_{r^{-\alpha} t}$. By the proof of \cite[Lemma 5.1]{KSV}, $Y$ has the following scaling property. \begin{lemma}\label{l:scaling-of-Y} $(Y^{(r)}, {\mathbb P}_{x/r})\color{black} $ has the same law as $(Y, {\mathbb P}_x)$. \end{lemma} For any open subset $V$ of ${\mathbb R}^d_+$ and for $r>0$, we define $rV:=\{rx:\, x\in V\}$ and $\tau_V=\inf\{t>0:\, Y_t\notin V\}$. A consequence of Lemma \ref{l:scaling-of-Y} is that \begin{equation}\label{e:exit-time-scaling} {\mathbb E}_{rx}\tau_{rV}=r^{\alpha}{\mathbb E}_x \tau_V\, , \qquad x\in V\, . \end{equation} \begin{defn}\label{D:1.1} \rm A non-negative Borel function defined on ${\mathbb R}_+^d$ is said to be {harmonic} in an open set $V\subset {\mathbb R}_+^d$ with respect to $Y$ if for every bounded open set $U\subset\overline{U}\subset V$, \begin{equation}\label{e:har} f(x)= {\mathbb E}_x \left[ f(Y_{\tau_{U}})\right] \qquad \hbox{for all } x\in U. \end{equation} A non-negative Borel function $f$ defined on ${\mathbb R}_+^d$ is said to be \emph{regular harmonic} in an open set $V\subset {\mathbb R}_+^d$ if $$ f(x)= {\mathbb E}_x \left[ f(Y_{\tau_{V}})\right] \qquad \hbox{for all } x\in V. $$ \end{defn} The following result is taken form \cite{KSV, KSV21}. \begin{thm}[Harnack inequality, {\cite[Theorem 1.1]{KSV}} \& {\cite[Theorem 1.4]{KSV21}}] \label{t:uhp} \begin{itemize} \item[(a)] There exists a constant $C_1>0$ such that for any $r>0$, any $B(x_0, r) \subset {\mathbb R}^d_+$ and any non-negative function $f$ in ${\mathbb R}^d_+$ which is harmonic in $B(x_0, r)$ with respect to $Y$, we have $$ f(x)\le C_1 f(y), \qquad \text{ for all } x, y\in B(x_0, r/2). $$ \item[(b)] There exists a constant $C_2>0$ such that for any $L>0$, any $r>0$, any $x_1,x_2 \in {\mathbb R}^d_+$ with $|x_1-x_2|<Lr$ and $B(x_1,r)\cup B(x_2,r) \subset {\mathbb R}^d_+$ and any non-negative function $f$ in ${\mathbb R}^d_+$ which is harmonic in $B(x_1,r)\cup B(x_2,r)$ with respect to $Y$, we have $$ f(x_2)\le C_2 (L+1)^{\beta_1+\beta_2+d+\alpha} f(x_1)\, . $$ \end{itemize} \end{thm} \section{Hardy inequality and the finite lifetime}\label{s:H} For a given $\sB$, we define $C(\alpha, p, \sB)$ for $\alpha \in (0,2)$ and $p\in (-1, \alpha+\beta_1)$ by \begin{equation}\label{e:explicit-C} C(\alpha, p, \sB)= \int_{{\mathbb R}^{d-1}}\frac{1}{(|\widetilde{u}|^2+1)^{(d+\alpha)/2}} \int_0^1 \frac{(s^p-1)(1-s^{\alpha-p-1})}{(1-s)^{1+\alpha}} \sB\big((1-s)\widetilde{u}, 1), s\mathbf{e}_d \big)\, ds d\widetilde{u}\, , \end{equation} where $\mathbf{e}_d=(\tilde{0}, 1)$. In case $d=1$, $ C(\alpha, p, \sB)$ is defined by $$ C(\alpha, p, \sB)=\int_0^1 \frac{(s^p-1)(1-s^{\alpha-p-1})}{(1-s)^{1+\alpha}} \sB\big(1, s \big)\, ds, $$ but we will only give the statement of the result for $d\ge 2$. The statement in the $d=1$ case is similar and simpler. 
We first note that $p\mapsto C(\alpha, p, \sB)$ is strictly increasing for $p\in ((\alpha-1)_+, \alpha+\beta_1)$ (see \cite[Lemma 5.4 and Remark 5.5]{KSV}) and $\lim_{p\uparrow \alpha+\beta_1}C(\alpha, p, \sB)=\infty$. Moreover, \begin{align} \label{e:C} C(\alpha, p, \sB) \begin{cases} \in (0, \infty) & \text{ for } p\in ((\alpha-1)_+, \alpha+\beta_1);\\ =0 & \text{ for } p=0, \alpha-1;\\ \in (-\infty, 0) & \text{ for } p\in (\alpha-1, 0) \cup (0, \alpha-1). \end{cases} \end{align} Let $$ C_c^2({\mathbb R}^d_+; {\mathbb R}^d)=\{f:{\mathbb R}^d_+\to {\mathbb R} : \text{ there exists } u \in C_c^2({\mathbb R}^d) \text{ such that } u=f \text{ on } {\mathbb R}^d_+\} $$ be the space of functions on ${\mathbb R}^d_+$ that are restrictions of $C_c^2({\mathbb R}^d)$ functions. Clearly, if $f\in C_c^2({\mathbb R}^d_+; {\mathbb R}^d)$ then $f\in C^2_b({\mathbb R}^d_+) \cap L^2({\mathbb R}^d_+)$. For $\varepsilon>0$, let \begin{align}\label{e:defn-LBDe} L_{\alpha, \varepsilon}^\sB f(x):=\int_{{\mathbb R}^d_+, |y-x|> \varepsilon}(f(y)-f(x))J(x,y)\, dy, \quad x\in {\mathbb R}^d_+, \end{align} so that \begin{align}\label{e:defn-LBD} L_{\alpha}^\sB f(x)=\textrm{p.v.}\int_{{\mathbb R}^d_+}(f(y)-f(x))J(x,y)\, dy= \lim_{\varepsilon \to 0} L_{\alpha, \varepsilon}^\sB f(x), \end{align} which is defined for all functions $f:{\mathbb R}^d_+\to {\mathbb R}$ for which the principal value integral makes sense. We have shown in \cite[Proposition 3.4]{KSV} that this is the case when $f\in C_c^2({\mathbb R}^d_+; {\mathbb R}^d)$. For $p\in {\mathbb R}$, let $g_{p}(x)=x_d^p$, $x \in {\mathbb R}^d_+$. \begin{lemma}\label{l:LB-on-g} For $p\in (-1, \alpha+\beta_1)$, it holds that \begin{align}\label{e:Lp} L_{\alpha}^\sB g_{p}(x)= C(\alpha,p, \sB)x_d^{p-\alpha}, \quad x\in {\mathbb R}^d_+. \end{align} In particular, \begin{align}\label{e:Lal} L_{\alpha}^\sB g_{\alpha-1}(x)=0, \quad x\in {\mathbb R}^d_+. \end{align} Moreover, there exists $\widehat C=\widehat C(\alpha,p, \sB)>0$ such that \begin{align}\label{e:Lpe} |L_{\alpha, \varepsilon}^\sB g_{p}(x)| \le \widehat C \,x_d^{p-\alpha} \quad \text{for all }x\in {\mathbb R}^d_+ \text{ and } \varepsilon \in (0, x_d/2]. \end{align} \end{lemma} \noindent{\bf Proof.} The equality \eqref{e:Lp} is proved in \cite[Lemma 5.4]{KSV} for $p\in ((\alpha-1)_+, \alpha+\beta_1)$. It is easy to see that the proof in fact works for $p\in (-1, \alpha+\beta_1)$. We now follow the proof in \cite[Lemma 5.4]{KSV} to show \eqref{e:Lpe}. Fix $x=(\widetilde{0}, x_d) \in {\mathbb R}^d_+$ and $\varepsilon \in (0, x_d/2]$. By the change of variables $y=x_d z$, and by using \textbf{(A4)}, we have \begin{align*} L_{\alpha}^\sB g_{p, \varepsilon}(x) &=x_d^{p-\alpha} \int_{{\mathbb R}^d_+, |\widetilde{z}|^2+| z_d-1|^2> (\varepsilon/x_d)^2}\frac{z_d^p-1}{| (\widetilde{z}, z_d)-\mathbf{e}_d|^{d+\alpha}}\sB(\mathbf{e}_d, (\widetilde{z}, z_d))\, dz_d d \widetilde{z}\\ & =:x_d^{p-\alpha} I_1(\varepsilon)\, . 
\end{align*} Using the change of variables $\widetilde{z}=|z_d-1|\widetilde{u}$, we get \begin{align*} I_1(\varepsilon)&= \int_{{\mathbb R}^d_+, | z_d-1|^2 |\widetilde{u}|^2+| z_d-1|^2> (\varepsilon/x_d)^2} {(|\widetilde{u}|^2+1)^{-(d+\alpha)/2}}\frac{z_d^p-1}{|z_d-1|^{1+\alpha}} \sB\big(\mathbf{e}_d, (|z_d-1|\widetilde{u}, z_d)\big)dz_d \, d\widetilde{u}\\ &= \int_{{\mathbb R}^{d-1}}{(|\widetilde{u}|^2+1)^{-(d+\alpha)/2}} I_2(\varepsilon, \widetilde{u}) \, d\widetilde{u}, \end{align*} where $$I_2(\varepsilon, \widetilde{u})= \left(\int_0^{1-(\varepsilon/x_d)(|\widetilde{u}|^2+1)^{-1/2} }+\int_{1+(\varepsilon/x_d)(|\widetilde{u}|^2+1)^{-1/2}}^{\infty}\right) \frac{z_d^p-1}{|z_d-1|^{1+\alpha}} \sB\big(\mathbf{e}_d, (|z_d-1|\widetilde{u}, z_d)\big)dz_d . $$ Fix $\widetilde{u}$ and let $\epsilon_0=(\varepsilon/x_d)(|\widetilde{u}|^2+1)^{-1/2} \le 1/2$. By the same argument as that in the proof of \cite[Lemma 5.4]{KSV}, we have that $ I_2(\varepsilon, \widetilde{u})=I_{21}(\varepsilon, \widetilde{u})+I_{22}(\varepsilon, \widetilde{u}) $ where \begin{align*} I_{21}(\varepsilon, \widetilde{u})&:=\int_0^{1-\epsilon_0}\frac{(s^p-1)+(s^{\alpha-1-p}-s^{\alpha-1})}{(1-s)^{1+\alpha}}\sB\big(((1-s)\widetilde{u}, 1), s\mathbf{e}_d \big)\, ds, \\ I_{22}(\varepsilon, \widetilde{u}) &:=\int_{1-\epsilon_0}^{\frac{1}{1+\epsilon_0}}\frac{s^{\alpha-1-p}-s^{\alpha-1}}{(1-s)^{1+\alpha}}\sB\big(((1-s)\widetilde{u}, 1), s\mathbf{e}_d \big)\, ds, \end{align*} and there exists a constant $c_2>0$ independent of $\widetilde{u}\in {\mathbb R}^{d-1}$ such that \begin{align} \label{e:I21n} |I_{21}(\varepsilon, \widetilde{u})| \le \int_0^1 \frac{|(s^p-1)(1-s^{\alpha-p-1})|}{(1-s)^{1+\alpha}}\sB\big(((1-s)\widetilde{u}, 1), s\mathbf{e}_d \big)\, ds<c_1 <\infty\, . \end{align} Moreover, by \cite[p.121]{BBC} and the fact that $\epsilon_0 \le 1/2$, $$ \left|\int_{1-\epsilon_0}^{\frac{1}{1+\epsilon_0}}\frac{s^{\alpha-1-p}-s^{\alpha-1}}{(1-s)^{1+\alpha}} \sB\big((1-s)\widetilde{u}, 1), s\mathbf{e}_d \big)ds\right| \le c_2 \epsilon_0^{2-\alpha} \le c_2. $$ Therefore, \begin{align*} \sup_{\varepsilon \in (0, x_d/2]}|I_1(\varepsilon)|& \le c_3\int_{{\mathbb R}^{d-1}}\frac{d\widetilde{u}}{(|\widetilde{u}|^2+1)^{(d+\alpha)/2}} =c_4<\infty\, . \end{align*} {\hfill $\Box$ \bigskip} We now show that the following Hardy inequality holds when $\alpha \not= 1$. \color{black} \begin{prop}\label{t:hardy} Suppose $\alpha \not= 1$. Then there exists $C=C(\alpha) \in (0, \infty)$ such that for all $u \in {\mathcal F},$ \begin{align} \label{e:hardy} {\mathcal E}(u,u) \ge C \int_{{\mathbb R}^d_+} \frac{u(x)^2}{x_d^\alpha}dx. \end{align} \end{prop} \noindent{\bf Proof.} Since ${\mathcal F}$ is the closure of $C_c^{\infty}({{\mathbb R}^d_+})$ under ${\mathcal E}_1$, it suffices to prove \eqref{e:hardy} for $u \in C_c^{\infty}({{\mathbb R}^d_+})$. Fix $u \in C_c^{\infty}({{\mathbb R}^d_+})$, choose a $p\in (\alpha-1, 0) \cup (0, \alpha-1)$ and let $v(x)=u(x)/g_p(x)$. Recall from \eqref{e:C} that $C(\alpha, p, \sB) \in (-\infty, 0)$. 
Using the elementary identity $ (ab-cd)^2=a^2b(b-d)+c^2d(d-b)+bd(a-c)^2$ and the symmetry of $J$, we have that, for all $ \varepsilon >0$, \begin{align*} & \int_{{\mathbb R}^d_+ \times {\mathbb R}^d_+, |x-y|>\varepsilon} (v(y)g_p (y)-v(x)g_p (x))^2J(x,y)\, dy\, dx\\ =& \int_{{\mathbb R}^d_+ \times {\mathbb R}^d_+, |x-y|>\varepsilon} v(y)^2g_p (y) (g_p (y)-g_p (x)) + v(x)^2g_p (x) (g_p (x)-g_p (y)) J(x,y)\, dy\, dx\\&+ \int_{{\mathbb R}^d_+ \times {\mathbb R}^d_+, |x-y|>\varepsilon} g_p (x)g_p (y)(v(y)-v(x))^2 J(x,y)\, dy\, dx\\ \ge& -2 \int_{{\mathbb R}^d_+ } v(x)^2g_p (x)\left(\int_{ {\mathbb R}^d_+, |x-y|>\varepsilon} (g_p (y)-g_p (x)) J(x,y)\, dy\right)\, dx\\ =& -2\int_{\text{supp}(u) } \frac{u(x)^2}{g_p (x)} L_{\alpha, \varepsilon}^\sB g_{p}(x) dx. \end{align*} Let $a_0:=$dist$({\mathbb R}^d_{-},$supp$(u))/2>0$. By \eqref{e:Lpe}, the functions $\{ \frac{u(x)^2}{g_p (x)} L_{\alpha, \varepsilon}^\sB g_{p}(x): \varepsilon\in (0, a_0) \}$ are uniformly bounded on \text{supp}$(u)$. Thus, by the bounded convergence theorem, \eqref{e:C} and \eqref{e:Lp}, \begin{align*} &{\mathcal E}(u,u) =\lim_{\varepsilon \downarrow 0} \frac{1}2\int_{{\mathbb R}^d_+ \times {\mathbb R}^d_+, |x-y|>\varepsilon} (v(y)g_p (y)-v(x)g_p (x))^2J(x,y)\, dy\, dx\\ &\ge -\lim_{\varepsilon \downarrow 0} \int_{\text{supp}(u) } \frac{u(x)^2}{g_p (x)} L_{\alpha, \varepsilon}^\sB g_{p}(x) dx= c \int_{{\mathbb R}^d_+ } \frac{u(x)^2}{g_p (x)}x_d^{p-\alpha} dx=c \int_{{\mathbb R}^d_+ } \frac{u(x)^2}{x_d^\alpha}dx, \end{align*} where $c=- C(\alpha, p, \sB) \in (0, \infty)$. {\hfill $\Box$ \bigskip} Recall that $\zeta$ is the lifetime of $Y$. Using the above Hardy inequality, we now show that $\zeta$ is finite when $ \alpha>1$. \begin{prop} \label{p:finitelife} Suppose $ \alpha>1$. Then ${\mathcal F} \not=\overline {\mathcal F}$ and ${\mathbb P}_x(\zeta<\infty)=1$ for all $x \in {\mathbb R}^d_+$. \end{prop} \noindent{\bf Proof.} Take a $u\in C_{ c }^{\infty}(\overline {\mathbb R}^d_+)$ such that $u \ge 1$ on $B(0, 1) \cap {\mathbb R}^d_+$, then $u \notin {\mathcal F}$. In fact, if $u \in {\mathcal F}$, then by Proposition \ref{t:hardy}, $$ \infty > {\mathcal E}(u,u) \ge c \int_{{\mathbb R}^d_+ } \frac{u(x)^2}{x_d^\alpha}dx \ge c \int_{B(0, 1) \cap {\mathbb R}^d_+ } |x|^{-\alpha}dx=\infty, $$ which gives a contradiction. The fact that ${\mathcal F} \not=\overline {\mathcal F}$ implies that there is a point $x_0 \in {\mathbb R}^d_+$ such that ${\mathbb P}_{x_0}(\zeta<\infty)>0$. Then by the scaling property of $Y$ in Lemma \ref{l:scaling-of-Y}, we have that ${\mathbb P}_x(\zeta<\infty)={\mathbb P}_{x_0}(\zeta<\infty)>0$ for all $x \in {\mathbb R}^d_+$. Now, by the same argument as in the proof of \cite[Proposition 4.2]{BBC}, we have that ${\mathbb P}_x(\zeta<\infty)=1$ for all $x \in {\mathbb R}^d_+$. {\hfill $\Box$ \bigskip} The fact that the lifetime of $Y$ is finite has two important consequences. \begin{corollary}\label{c:lemma4-1} \begin{itemize} \item[(a)] For all $x\in {\mathbb R}^d_+$, ${\mathbb P}_x(Y_{\zeta-}\in \partial {\mathbb R}^d_+)=1$. \item[(b)] There exists a constant $n_0 \ge 2$ such that for all $x\in {\mathbb R}^d_+$, ${\mathbb P}_x\left( \tau_{B(x,n_0 x_d)}=\zeta \right)> 1/2$. \end{itemize} \end{corollary} \noindent{\bf Proof.} Using Lemma \ref{l:scaling-of-Y}, we see that $$ {\mathbb P}_x\big( \tau_{B(x,n x_d)}=\zeta\big)= {\mathbb P}_{(\widetilde 0, 1)}\big( \tau_{B((\widetilde 0, 1) ,n )} =\zeta \big), \quad x \in {\mathbb R}^d_+. 
$$ The sequence of events $(\{ \tau_{B((\widetilde 0, 1) ,n )}=\zeta \})_{n\ge 1}$ is increasing in $n$ and \begin{align}\label{e:zetaf} \cup_{n=1}^\infty\big\{ \tau_{B((\widetilde 0, 1) ,n )}=\zeta \big \}=\big\{\zeta<\infty\big\}. \end{align} Thus, by Proposition \ref{p:finitelife} we have \begin{equation}\label{e:c-lemma4-1} \lim_{n \to \infty} {\mathbb P}_{(\widetilde 0, 1)}\big( \tau_{B((\widetilde 0, 1) ,n )}=\zeta \big)= {\mathbb P}_{(\widetilde 0, 1)}\big(\zeta<\infty\big)=1. \end{equation} Moreover, since there is no killing inside ${\mathbb R}^d_+$, it holds that $\{ \tau_{B((\widetilde 0, 1) ,n )}=\zeta\}\subset \{Y_{\zeta-}\in \partial {\mathbb R}^d_+\}$ for each $n\ge 1$. Thus it follows from \eqref{e:zetaf} and \eqref{e:c-lemma4-1} that ${\mathbb P}_{(\widetilde 0, 1)}(Y_{\zeta-}\in \partial {\mathbb R}^d_+)=1$. The claim (a) now follows by scaling. To see (b), note that by \eqref{e:c-lemma4-1} there exists a $n_0 \ge 2$ such that ${\mathbb P}_{(\widetilde 0, 1)}\big( \tau_{B((\widetilde 0, 1) ,n_0 )} =\zeta \big)>1/2$. Therefore, $$ {\mathbb P}_x\big( \tau_{B(x,n_0 x_d)}=\zeta \big)={\mathbb P}_{(\widetilde 0, 1)}\big( \tau_{B((\widetilde 0, 1) ,n_0 )} =\zeta \big)>1/2, \quad x \in {\mathbb R}^d_+. $$ {\hfill $\Box$ \bigskip} \section{Dynkin's formula for barriers}\label{s:D} In this section we always assume that $\alpha>1$. Recall that \color{black} for $a,b>0$ and $\widetilde{w} \in {\mathbb R}^{d-1}$, $$ D_{\widetilde{w}}(a,b):=\{x=(\widetilde{x}, x_d)\in {\mathbb R}^d:\, |\widetilde{x}-\widetilde{w}|<a, 0<x_d<b\}. $$ Without loss of generality, we will mostly deal with the case $\widetilde w=\widetilde{0}$. We will write $D(a,b)$ for $D_{\widetilde{0}}(a,b)$ and $U(r)=D_{\widetilde{0}}(\frac{r}2, \frac{r}2).$ Further we use $U$ for $U(1)$. For any $a>0$, set $D(a):=\{x=(\widetilde{x}, x_d)\in {\mathbb R}^d: x_d>a\}$ and $U_a(r):= \{y\in U(r): \delta_{U(r)}>a\}$. We write $U_a$ for $U_a(1)$. Let $v\in C^{\infty}_c({\mathbb R}^d)$ be a non-negative smooth radial function such that $v(y)=0$ for $|y|\ge 1$ and $\int_{{\mathbb R}^d}v(y)\, dy=1$. For $b\ge 10$ and $k\ge 1$, set $v_k(y):=b^{kd} v(b^ky)$. Next we define $g_k:=v_k\ast(g{\bf 1}_{D(5^{-k})})$ for a bounded, compactly supported function $g$ vanishing on ${\mathbb R}^d\setminus {\mathbb R}^d_+$. Since $b^{-k}<5^{-k}$, we have $g_k\in C^\infty_c({\mathbb R}^d_+)$ and hence $L_{\alpha}^\sB g_k$ is defined everywhere. Also note that $v_k\ast g \in C^{\infty}_c({\mathbb R}^d_+; {\mathbb R}^d)$ and thus $L_{\alpha}^\sB (v_k \ast g)$ is well defined (cf. \cite[Subsection 3.2]{KSV}). Let $(a_k)_{k\ge 1}$ be a decreasing sequence of positive numbers such that $\lim_{k\to \infty}a_k=0$ and $$ a_k\ge 2^{-k(\beta_1/2+1)/(1+\alpha+3\beta_1/2)}\ge 2^{-k}. $$ \begin{lemma}\label{l:Lvk} Let $R, M \ge 1$ and $g:{\mathbb R}^d \to [0,M]$ be a bounded, compactly supported function vanishing on ${\mathbb R}^d\setminus {\mathbb R}^d_+$. For any $z\in U(R)$, it holds that \begin{equation}\label{e:Lvk-1} \lim_{k\to \infty}L_{\alpha}^\sB(v_k\ast g -g_k)(z)=0\, . \end{equation} Moreover, there exists $C>0$ independent of $R, M \ge 1$ and $g$ such that for all $k \ge 2$ and $z\in U_{a_k}(R)$, \begin{equation}\label{e:Lvk-2} 0\le L_{\alpha}^\sB(v_k\ast g -g_k)(z) \le CM (2/3)^{k(\beta_1/2+1)} z_d^{\beta_1}\, . \end{equation} \end{lemma} \noindent{\bf Proof.} Let $z\in U_{a_k}(R)$. 
We first estimate the difference \begin{eqnarray*} L_{\alpha}^\sB(v_k\ast g -g_k)(z)&=&\lim_{\epsilon \to 0}\int_{{\mathbb R}^d_+, |y-z|>\epsilon}\frac{((v_k\ast g)(y)-g_k(y))-((v_k\ast g)(z)-g_k(z))}{|y-z|^{d+\alpha}}\sB(y,z)\, dy. \end{eqnarray*} Note that for $k\ge 2$, $u\in B(0, b^{-k})$ and $y\in {\mathbb R}^d_+$ with $y_d>3^{-k}$, it holds that $y_d-u_d>3^{-k}-10^{-k}>5^{-k}$. Therefore \begin{align}\label{e:3.11} 1-{\bf 1}_{D(5^{-k})}(y-u)=0. \end{align} Since $v_k$ is supported in $B(0, b^{-k})$, for all $k\ge 2$ and $z\in {\mathbb R}^d_+$ with $z_d> a_k>2^{-k}$, $$ \int_{{\mathbb R}^d}(1-{\bf 1}_{D(5^{-k})}(z-u))g(z-u)v_k(u)du=0. $$ Thus $(v_k\ast g -g_k)(z)=0$. Due to the same reason we have that for $z\in U_{a_k}(R)$, \begin{align*} &\int_{{\mathbb R}^d_+, |y-z|>\epsilon}\frac{\big((v_k\ast g)(y)-g_k(y)\big)-\big((v_k\ast g)(z)-g_k(z)\big)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy\\ & =\int_{{\mathbb R}^d_+, |y-z|>\epsilon, y_d\le 3^{-k}}\int_{{\mathbb R}^d} v_k(u) \frac{(1-{\bf 1}_{D(5^{-k})})(y-u)g(y-u)}{|y-z|^{d+\alpha}}\, du\sB(y,z)\, dy \\ &\le M \int_{{\mathbb R}^d}v_k(u)\, du \int_{{\mathbb R}^d, y_d\le 3^{-k}}\frac{\sB(y,z)}{|y-z|^{d+\alpha}}\, dy\\ &\le c_1M\int_{y_d\le 3^{-k}}\frac{1}{|y-z|^{d+\alpha}}\left(\frac{y_d}{|y-z|}\right)^{\beta_1/2}\, dy\\ &\le c_2M (3^{-k})^{\beta_1/2+1}\int_0^{\infty}\frac{t^{d-2}}{(t^2+cz_d^2)^{(d+\alpha+\beta_1/2)/2}}\, dt \\ &= c_3M (3^{-k})^{\beta_1/2+1} z_d^{-1-\alpha-\beta_1/2} \int_0^{\infty}\frac{s^{d-2}}{(s^2+1)^{(d+\alpha+\beta_1/2)/2}}\, ds \\ &\le c_4M (2/3)^{k(\beta_1/2+1)} z_d^{\beta_1}\, . \end{align*} In the third line we used that $0\le g\le M$, in the fourth the fact that (together with \eqref{e:B7_2}) $$ \left(\frac{y_d\wedge z_d}{|y-z|}\wedge 1\right)^{\beta_1}\log\left(1+\frac{(y_d\vee z_d)\wedge|y-z|}{y_d\wedge z_d \wedge|y-z|}\right)^{\beta_3} \le c \left(\frac{y_d}{|y-z|}\wedge 1\right)^{\beta_1/2}, $$ in the fifth integration in polar coordinates in ${\mathbb R}^{d-1}$, in the sixth the change of variables $t=c^{1/2}z_d s$, and in the last line the fact that $2^{-k(\beta_1/2+1)}z_d^{-1-\alpha-3\beta_1/2}\le 1$ which follows from $z_d\ge a_k$ and the choice of $a_k$. Note also that it is clear from the second line that the first line is non-negative. Thus by letting $\epsilon \to 0$ we get for $z\in U_{a_k}(R)$, $$ 0\le L_{\alpha}^\sB(v_k\ast g -g_k)(z) \le c_4 M (2/3)^{k(\beta_1/2+1)} z_d^{\beta_1}\, . $$ Now take $z\in U(R)$. Then there exists $k_0\ge 1$ such that $z\in U_{a_k}(R)$ for all $k\ge k_0$, and it follows from above that $$ \lim_{k\to \infty}L_{\alpha}^\sB(v_k\ast g -g_k)(z)=0\, . $$ {\hfill $\Box$ \bigskip} \begin{lemma}\label{l:vLg} Assume that $R, M \ge 1$ and $g:{\mathbb R}^d_+\to [0,M]$ is a function which is $C^2$ on $D(R,R)$. For any $k\ge 2$, $z\in U_{a_k}(R)$ and $|u|<b^{-k}$, \begin{equation}\label{e:vLg-0} \mathrm{p.v.} \int_{{\mathbb R}^d_+} \frac{g(y-u)-g(z-u)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy \end{equation} is well defined. Moreover, for $z\in U_{a_k}(R)$, \begin{equation}\label{e:vLg-1} L_{\alpha}^\sB(v_k\ast g)(z)=\int_{{\mathbb R}^d}v_k(u)\left( \mathrm{p.v.} \int_{{\mathbb R}^d_+} \frac{g(y-u)-g(z-u)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy\right)\, du\, , \end{equation} and there exists $C(z)=C(z, g, M, R)>0$ such that $|L_{\alpha}^\sB (v_k\ast g)(z)|\le C(z)$ for all $k\ge 2$. \end{lemma} \noindent{\bf Proof.} Let $z\in U_{a_k}(R)$ and $|u|<b^{-k}$. Let $G(y, z, u):=(g(y-u)-g(z-u))|y-z|^{-d-\alpha}$. 
For $0<\epsilon <\eta <z_d/10$, consider \begin{align*} &\int_{{\mathbb R}^d_+, \epsilon < |y-z| }G(y, z, u)\sB(y,z)\, dy-\int_{{\mathbb R}^d_+, \eta < |y-z| }G(y, z, u)\sB(y,z)\, dy\\ &=\int_{{\mathbb R}^d_+, \epsilon < |y-z| < \eta}G(y, z, u)\sB(y,z)\, dy\\ &= \int_{ \epsilon < |y-z| <\eta}G(y, z, u)\, dy+ \int_{ \epsilon < |y-z| <\eta}G(y, z, u)(\sB(y,z)-1)\, dy\\ &=: I+II\, . \end{align*} Since $g$ is $C^2$ on $D(R,R)$ and $y-u, z-u\in D(R,R)$, we see that \begin{align*} &| I |\le \int_{\epsilon < |(y-u)-(z-u)|<\eta} \frac{|g(y-u)-g(z-u)-\nabla g(z-u){\bf 1}_{(|(y-u)-(z-u)|<1)}\cdot (y-z)|}{|(y-u)-(z-u)|^{d+\alpha}}\, dy\\ &\le c_1 \sup_{w\in B(z, z_d/5)}|\partial^2 g (w)| \int_{ \epsilon < |y-z| <\eta} |y-z|^{-d-\alpha+2}\, dy=c_2(z) (\eta^{2-\alpha}-\epsilon^{2-\alpha})\, . \end{align*} Further, by using the mean value theorem in the first line and \textbf{(A2)} in the second, we get \begin{align*} &| II |\le \sup_{w\in B(z, z_d/5)}|\nabla g (w)| \int_{\epsilon < |y-z| <\eta} \frac{|\sB(y,z)-1|}{|y-z|^{d+\alpha-1}}\, dy\\ &\le c_3(z) \int_{\epsilon < |y-z| <\eta} |y-z|^{-d-\alpha}\left(\frac{|y-z|}{y_d\wedge z_d}\right)^{\theta}\, dy\\ &\le c_4 c_3(z) z_d^{-\theta} \int_{\epsilon < |y-z| <\eta} |y-z|^{-d-\alpha+\theta}\, dy=c_5(z) (\eta^{\theta-\alpha+1}-\epsilon^{\theta-\alpha+1})\, . \end{align*} The estimates for $I$ and $II$ imply that the principal value integral in \eqref{e:vLg-0} is well defined. Let $z\in U_{a_k}(R)$. For $\epsilon < z_d/10$ and $|u|<b^{-k}$, we have \begin{align*} &\left| \int_{{\mathbb R}^d_+, |y-z|>\epsilon} G(y, z, u)\sB(y,z)\, dy\right|\\ &\le \left|\int_{{\mathbb R}^d_+, |y-z|\ge z_d/10} G(y, z, u)\sB(y,z)\, dy\right| +\left|\int_{{\mathbb R}^d_+, \epsilon < |y-z| < z_d/10} G(y, z, u)\sB(y,z)\, dy\right| \\ &=:III+IV\, . \end{align*} Estimating $g$ by $M$, we get that $$ III\le 2M \int_{|y-z|\ge z_d/10}|y-z|^{-d-\alpha}dy \le c_7 z_d^{-\alpha}=c_8(z) \, . $$ The integral in $IV$ is estimated in $I$ and $II$ with $\eta=z_d/10$, so we have $$ IV\le c_2(z)(z_d/10)^{2-\alpha}+ c_5(z)(z_d/10)^{\theta-\alpha +1}= c_9(z)\, . $$ Thus we have that \begin{equation}\label{e:vLg-2} \left| \int_{{\mathbb R}^d_+, |y-z|>\epsilon} \frac{g(y-u)-g(z-u)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy\right| \le c_{10}(z)\, . \end{equation} Hence we can use the dominated convergence theorem to conclude that \begin{align*} &L_{\alpha}^\sB (v_k\ast g)(z) =\lim_{\epsilon \to 0}\int_{{\mathbb R}^d_+, |y-z|>\epsilon}\frac{(v_k\ast g)(y)-(v_k\ast g)(z)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy\\ &=\lim_{\epsilon\to 0} \int_{|u|<b^{-k}}v_k(u) \int_{{\mathbb R}^d_+, |y-z|>\epsilon} \frac{g(y-u)-g(z-u)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy \, du\\ &=\int_{|u|<b^{-k}} v_k(u)\left(\lim_{\epsilon \to 0} \int_{{\mathbb R}^d_+, |y-z|>\epsilon} \frac{g(y-u)-g(z-u)}{|y-z|^{d+\alpha}}\sB(y,z)\, dy\right)\, du\, , \end{align*} which is \eqref{e:vLg-1}. The last statement follows from \eqref{e:vLg-2} . {\hfill $\Box$ \bigskip} We note also that if $g$ is continuous in $D(R, R)$, then $\lim_{k\to \infty}(v_k\ast g)(z)=g(z) $ for all $z\in D(R, R)$. Let $h_{p, R}(x)=x_d^p {\bf 1}_{D(R,R)}(x)$ and $h_{p, \infty}(x)=g_{p}(x)=x_d^p$ for $x \in {\mathbb R}^d_+$. We also let $h_{p}(x)=h_{p, 1}(x)$. \begin{lemma}\label{l:hq} Assume that $R \ge 1$. \color{black} Let $p\in [\alpha-1, \alpha+\beta_1)$ and set $b=10\vee 2^{4(p-2)_-+3}$ and \color{black} $$ a_k:=2^{-k(p+1+\frac12\beta_1)/(\alpha+1+\frac32\beta_1)}\vee 2^{-k(2+\beta_1)/(1+\alpha+\frac32\beta_1-p)}. 
$$ There exists a constant $C=C(R)>0$ such that for any $k\ge 1$ and $z\in U_{a_k}(R)$, \begin{equation}\label{e:hq-1} |L_{\alpha}^\sB(v_k \ast h_{p, R})(z)-L_{\alpha}^\sB h_{p, R}(z)|\le C \left(\frac{4}{5}\right)^k \, . \end{equation} In particular, the functions $z\mapsto |L_{\alpha}^\sB(v_k \ast h_{p, R})(z)-L_{\alpha}^\sB h_{p, R}(z)|$ are all bounded by the constant $C$on $U(R)$, and for any $z\in U(R)$, $ \lim_{k\to \infty} \left| L_{\alpha}^\sB(v_k \ast h_{p, R})(z)-L_{\alpha}^\sB h_{p, R}(z)\right| =0. $ \end{lemma} \noindent{\bf Proof.} First note that $a_k\ge 2^{-k}$ since the first term in its definition is larger than $2^{-k}$. Fix $ z \ \in U_{a_k}( R \color{black})$. By using Lemma \ref{l:vLg} with $g=h_{p, R}$ in the second line below we see that \begin{align*} &L_{\alpha}^\sB(v_k\ast h_{p, R})(z)\\ &=\int_{{\mathbb R}^d}v_k(u)\left(\mathrm{p.v.}\int_{{\mathbb R}^d_+}\frac{h_{p, R}(y-u)- h_{p, R}(z-u)}{|y-z|^{d+\alpha}}\sB(y, z)dy\right)du\\ &=\int_{{\mathbb R}^d}v_k(u)\left(\mathrm{p.v.}\int_{{\mathbb R}^d_+}\frac{h_{p, R}(y-u)- h_{p, R}(z-u)-(h_{p, R}(y)-h_{p, R}(z))}{|y-z|^{d+\alpha}}\sB(y, z)dy\right)du\\ &\quad+ L_{\alpha}^\sB h_{p, R}(z). \end{align*} Set $b=10\vee 2^{4(p-2)_-+3}$. Now we write, for $u\in B(0, b^{-k})$, \begin{align*} & \mathrm{p.v.} \color{black} \int_{{\mathbb R}^d_+}\frac{h_{p, R}(y-u)- h_{p, R}(z-u)-(h_{p, R}(y)-h_{p, R}(z))}{|y-z|^{d+\alpha}}\sB(y, z)dy\\ &=\int_{D(R+b^{-k}, R+b^{-k})\setminus U(R), y_d>5^{-k}}+\int_{D(R+b^{-k}, R+b^{-k}), y_d<5^{-k}}\\ &\quad + \int_{U(R), y_d>5^{-k}, |y-z|>2^{-1}z_d}+\ \mathrm{p.v.} \int_{U(R) \cap B(z, z_d/2)}\color{black}=:I+II+III+IV. \end{align*} We deal with $I$ first. For $u\in B(0, b^{-k})$, \begin{align*} I&=\int_{D(R+b^{-k}, R+b^{-k})\setminus D(R-b^{-k}, R-b^{-k}), y_d>5^{-k}}+ \int_{D(R-b^{-k}, R-b^{-k})\setminus U(R), y_d>5^{-k}}=:I_1+I_2. \end{align*} Obviously, we have $ |I_1|\le c_1(R) b^{-k}. $ Let $A_k:=(D(R-b^{-k}, R-b^{-k})\setminus U(R))\cap\{y:y_d>5^{-k}\}$ and $$F_1(y_d, z_d, u_d):=p(y_d-z_d)\cdot \int^1_0\left((z_d-u_d+t(y_d-z_d))^{p-1}-(z_d+t(y_d-z_d))^{p-1}\right)dt.$$ \color{black} Then, we have \begin{align*} &|I_2|=\big|\int_{A_k} \frac{(y_d-u_d)^p-y_d^p-((z_d-u_d)^p-z_d^p)}{|y-z|^{d+\alpha}} \sB(y, z)dy\big|\\ & =\big|\int_{A_k}\frac{ F_1(y_d, z_d, u_d) }{|y-z|^{d+\alpha}} \sB(y, z)dy\big| \le c|u_d| \int_{A_k}\frac{(z_d \wedge y_d)^{-(p-2)_-} |y_d-z_d|}{|y-z|^{d+\alpha}}dy\\ & \le c b^{-k}5^{k(p-2)_-}\int_{2R>|y-z|>a_k}\frac{dy } {|y-z|^{d+\alpha-1}} dy\\ & \le c_2(R) b^{-k}2^{3k(p-2)_-+1}\le c_2 2^{-k((p-2)_{-}+2)}, \end{align*} where in the first inequality we used the mean value theorem for the difference inside the integral in the numerator, in the second last inequality, the fact $a_k\ge 2^{-k}$ and in the last inequality, the fact $b\ge 2^{4(p-2)_- +3}$.\color{black} For $II$, we have $$ |II| \le c_3 \int_{D(R+b^{-k}, R+b^{-k}), y_d<5^{-k}}\frac{y_d^p+z_d^p+b^{-kp}}{|y-z|^{d+\alpha}}\sB(y, z)dy. 
$$ Similarly as in the proof of Lemma \ref{l:Lvk} we first estimate \begin{align*} &\int_{D(R+b^{-k}, R+b^{-k}), y_d<5^{-k}}\frac{y_d^p+b^{-kp}}{|y-z|^{d+\alpha}}\sB(y, z)dy\\ &\le c_4\int_{{\mathbb R}^{d-1}}\int^{5^{-k}}_0\frac{y_d^{p+\beta_1/2}+10^{-kp}y_d^{\beta_1/2}}{|y-z|^{d+\alpha+\frac12\beta_1}}dy_dd{\widetilde y}\\ &\le c_5 5^{-k(p+1+\frac12\beta_1)}\int^\infty_0\frac{t^{d-2}}{(t^{2}+cz_d^2)^{(d+\alpha+\frac12\beta_1)/2}}dt \\ &= c_6 5^{-k(p+1+\frac12\beta_1)}z_d^{-1-\alpha-\frac12\beta_1}\int^\infty_0\frac{s^{d-2}}{(s^{2}+1)^{(d+\alpha+\frac12\beta_1)/2}}ds\\ & \le c_7 (2/5)^{k(p+1+\frac12\beta_1)}z_d^{\beta_1}. \end{align*} For the remaining part, we use a similar argument: \begin{align*} &\int_{D(R+b^{-k}, R+b^{-k}), y_d<5^{-k}}\frac{z_d^p}{|y-z|^{d+\alpha}}\sB(y, z)dy\\ &\le c_8 z_d^p\int_{{\mathbb R}^{d-1}}\int^{5^{-k}}_0\frac{y_d^{\beta_1/2}}{|y-z|^{d+\alpha+\frac12\beta_1}}dy_dd\widetilde{y}\\ &\le c_9 z_d^p 5^{-k(1+\frac12\beta_1)}\int^\infty_0\frac{t^{d-2}}{(t^2+cz^2_d)^{(d+\alpha+\frac12\beta_1)/2}}dt\\ &=c_{10} z_d^p 5^{-k(1+\frac12\beta_1)}z_d^{-1-\alpha-\frac12\beta_1}\int^\infty_0\frac{s^{d-2}}{(s^2+1)^{(d+\alpha+\frac12\beta_1)/2}}ds \\ &\le c_{11} (4/5)^{k(1+\frac12\beta_1)}2^{-k(2+\beta_1)}z_d^{p-1-\alpha-\frac12\beta_1}\le c_{12} (4/5)^{k(1+\frac12\beta_1)}z_d^{\beta_1}. \end{align*} Thus $$ |II|\le c_{13} \big((2/5)^{k(p+1+\frac12 \beta_1)}+(4/5)^{k(1+\frac12\beta_1)}\big)z_d^{\beta_1}. $$ Let $B_k:=U(R)\cap \{y_d>5^{-k}\}\cap \{y:|y-z|>2^{-1}z_d\}$. Then, we have \begin{align*} |III|&=\left|\int_{B_k}\frac{(y_d-u_d)^p-y_d^p-((z_d-u_d)^p-z_d^p)}{|y-z|^{d+\alpha}} \sB(y, z)dy\right|\\ &=\left|\int_{B_k}\frac{ F_1(y_d, z_d, u_d) \color{black}}{|y-z|^{d+\alpha}} \sB(y, z)dy\right|\\ &\le c_{14}b^{-k}2^{k3(p-2)_-}\int_{U(R), |y-z|>2^{-1}z_d}\frac{1}{|y-z|^{d+\alpha-1}}dy\\ &\le c_{15}(4/5)^{k}2^{-3k}(z_d^{1-\alpha}\vee \log \frac1{z_d}) \le c_{16} (4/5)^{k}z_d^3(z_d^{1-\alpha}\vee \log \frac1{z_d}), \end{align*} where in the first inequality we use the mean value theorem inside the integral in the numerator and the fact the derivative of the integrand is bounded above by $c(5^{-k}-b^{-k})^{-(p-2)_-}\le c2^{3k(p-2)_-}$. Let $F_2(y_d, z_d, u_d):=p(p-1)\int^1_0\left((z_d-u_d+t(y_d-z_d))^{p-2}-(z_d+t(y_d-z_d))^{p-2}\right)(1-t)dt$. For $t\in [0, 1]$, $u\in B(0, b^{-k})$ and $y\in B(z, \frac12 z_d)$, $z_d-u_d+t(y_d-z_d)$ and $z_d+t(y_d-z_d)$ are both comparable with $z_d$. Thus, for $IV$, we have that, for large $k$, \begin{align*} &|IV| \le \left| \mathrm{p.v.} \int_{U(R)\cap B(z, 2^{-1}z_d)}\frac{(y_d-u_d)^p-y_d^p-((z_d-u_d)^p-z_d^p)}{|y-z|^{d+\alpha}} (\sB(y, z)-1)dy\right|\\ &+\left|\int_{U(R)\cap B(z, 2^{-1}z_d)} \left[(y_d-u_d)^p-y_d^p-((z_d-u_d)^p-z_d^p) \right. \right.\\ &\qquad\qquad\qquad \left.\left. 
-p((z_d-u_d)^{p-1}-z_d^{p-1}) {\bf 1}_{B(z, a_k)} (y) (y_d-z_d) \right]{|y-z|^{-d-\alpha}} dy\right|\\ &\le \int_{U(R)\cap B(z, 2^{-1}z_d)} \frac{\left|F_1(y_d, z_d, u_d)\right|}{z_d^{ \theta} |y-z|^{d+\alpha- \theta }} + \int_{U(R)\cap (B(z, 2^{-1}z_d) \setminus B(z, a_k) )} \frac{ \left|F_1(y_d, z_d, u_d)\right|}{ |y-z|^{d+\alpha}} dy\\ & \quad +\int_{B(z, a_k)}\frac{\left| F_2(y_d, z_d, u_d) \right|}{|y-z|^{d+\alpha-2}} dy\\ &\le c_{17} \left( \int_{B(z, 2^{-1}z_d)} \frac{ |u_d|z_d^{p-2} dy} {z_d^\theta |y-z|^{d+\alpha-1-\theta}} + \int_{B(z, 2^{-1}z_d) \setminus B(z, a_k) } \frac{ |u_d|z_d^{p-2}dy }{ |y-z|^{d+\alpha-1}} +\int_{B(z, a_k)}\frac{ |u_d|z_d^{p-3}dy}{|y-z|^{d+\alpha-2}} \right)\\ &\le c_{ 18 }b^{-k} \left ( z_d^{p-1-\alpha} +z_d^{p-2 } a_k^{1-\alpha} + z_d^{p-1-\alpha} \right) \le c_{19} (2/b)^k z_d^{p-1-\alpha } \le c_{ 20 } (4/5)^k z_d^{p+1-\alpha }, \end{align*} where in the second inequality we used \textbf{(A2)} and in the third inequality the mean value theorem for the difference inside the integral in the definitions of $F_1$ and $F_2$. Combining the estimates for $I$, $II$, $III$ and $IV$, we arrive at the desired assertion. {\hfill $\Box$ \bigskip} \begin{lemma}\label{l:estimate-of-L-hat-B} \begin{itemize} \item [(a)] There exists $C_1>0$ such that for every $R \ge 1$ and $z\in U(R)$, $$ 0\ge L_{\alpha}^\sB h_{\alpha-1, R}(z) \ge -C_1 z_d^{\beta_1}(|\log z_d|^{\beta_3}\vee1) \int_{|y|\ge R \color{black} }|y|^{-\beta_1-d-1}\big(1+{\bf 1}_{|y|\ge1}(\log|y|)^{\beta_3}\big)\, dy\, . $$ \item [(b)] Let $\alpha-1 <p <\alpha+\beta_1$. There exist $r_0\in (0,1/2]$ and $C_2>0$ and $C_3>0$ such that for every $z\in D(\frac12, r_0)$, $$ C_{2} z_d^{p-\alpha}\le L_{\alpha}^\sB h_p(z) \le C_{3} z_d^{p-\alpha}. $$ \end{itemize} \end{lemma} \noindent{\bf Proof.} (a) Let $z\in U (R)$. Then by \eqref{e:Lal}, we see that \begin{align*} L_{\alpha}^\sB h_{\alpha-1, R}(z)=-\int_{D(R,R)^c\cap {\mathbb R}^d_+}\frac{y_d^{ \alpha -1} }{|y-z|^{d+\alpha}}\sB (z,y)\, dy\, , \end{align*} which is negative. Further, if $y\in D(R,R)^c$ and $z\in U(R)=D(R/2,R/2)$, then $|y|\ge R>|z|$, $|y-z|\ge z_d$ and $|y-z| \asymp |y|$. Thus it follows from \cite[Lemma 5.2]{KSV} that \begin{align*} &|L_{\alpha}^\sB h_{\alpha-1, R}(z)|\le \int_{D(R,R)^c\cap {\mathbb R}^d_+} \frac{|y|^{\alpha-1}}{|y-z|^{d+\alpha}}\sB (z,y)\, dy \\ &\le c_1\int_{y \in {\mathbb R}^d_+, |y|\ge R, |y-z|\ge z_d} |y|^{-d-1}\sB (z,y)\, dy \\ &\le c_2 z_d^{\beta_1}(|\log z_d|^{\beta_3}\vee1) \int_{|y|\ge R} |y|^{-\beta_1-d-1}\big(1+{\bf 1}_{|y|\ge1}(\log|y|)^{\beta_3}\big)\, dy \, . \end{align*} \noindent (b) Let $z\in U$. Then by using Lemma \ref{l:LB-on-g}, \begin{eqnarray*} L_{\alpha}^\sB h_p(z) =C(\alpha, p, \sB )z_d^{p-\alpha}-\int_{D(1,1)^c\cap {\mathbb R}^d_+}\frac{y_d^p}{|y-z|^{d+\alpha}}\sB (z, y)\, dy\, . \end{eqnarray*} Since the second term is non-negative, by removing it we obtain the upper bound. In the same way as before (this uses $p<\alpha+\beta_1$), $$ \left| \int_{D(1,1)^c\cap {\mathbb R}^d_+}\frac{y_d^p}{|y-z|^{d+\alpha}}\sB (z, y)\, dy \right| \le c_2 z_d^{\beta_1}|\log z_d|^{\beta_3}\, . $$ Thus, for any $z\in U$, $$ L_{\alpha}^\sB h_p(z) \ge C(\alpha, p, \sB )z_d^{p-\alpha}-c_2 z_d^{\beta_1}|\log z_d|^{\beta_3}\, . $$ Since $p-\alpha<\beta_1$ and $C(\alpha, p, \sB )>0$, we can find $r_0\in (0,1/2]$ such that the function $t\mapsto C(\alpha, p, \sB )) t^{p-\alpha}-c_2 t^{\beta_1}|\log t|^{\beta_3} $ is bounded from below by $c_3 t^{p-\alpha}$ with a positive constant $c_3>0$ for all $t\in (0,r_0)$. 
This concludes the proof of the lower bound. {\hfill $\Box$ \bigskip} In the remainder of this paper, $r_0$ always stands for the constant in the lemma above. \begin{lemma}\label{l:exit-time estimate-U} There exists a constant $C>0$ such that \begin{equation}\label{e:exit-time-estimate-U} {\mathbb E}_x \tau_U \le C x_d^{\alpha-1}\,, \quad x\in U. \end{equation} \end{lemma} \noindent{\bf Proof.} Choose $q\in (\alpha-1, \alpha)$ and let $\eta(x):=h_{\alpha-1}(x)-h_q(x)$, $x\in {\mathbb R}^d_+$. For $x\notin D(1,1)$, $\eta(x)=0$, while if $x\in D(1,1)$ we have $\eta(x)=x_d^{\alpha-1}-x_d^q>0$. By Lemma \ref{l:estimate-of-L-hat-B}, for all $x\in U(r_0)$ we have that $L^\sB h_{\alpha-1}(x)\le 0$ and $L^\sB h_q(x)\ge c_1 x^{q-\alpha}$. Thus we can find $r_1\in (0, r_0]$ such that \begin{equation}\label{e:exit-time-estimate-U-3} L_{\alpha}^\sB \eta(x)=L_{\alpha}^\sB h_{\alpha-1}(x)-L_{\alpha}^\sB h_q(x)\le -c_1 x_d^{q-\alpha}\le -1, \qquad x\in U(r_1). \end{equation} Let $g_k=v_k\ast (\eta {\bf 1}_{D(5^{-k}})$. It follows from Lemmas \ref{l:Lvk} and \ref{l:hq} applied to $h_q$ and $h_{\alpha-1}$ that $L_{\alpha}^\sB g_k\to L_{\alpha}^\sB \eta$ on $U$ and the sequence of functions $|L_{\alpha}^\sB g_k-L_{\alpha}^\sB \eta|$ is bounded by some constant $c_2>0$. In particular, \begin{equation}\label{e:exit-time-estimate-U-2} -L_{\alpha}^\sB g_k(z) \ge -L_{\alpha}^\sB \eta(z)-c_2 \ge 1-c_2\, , \qquad z\in U(r), r\le r_1\, . \end{equation} It follows from \cite[Lemma 3.6]{KSV} that for all $t\ge 0$, $$ {\mathbb E}_x [g_k(Y_{t\wedge \tau_{U_{a_k}(r)}})]-g_k(x)={\mathbb E}_x\int_0^t {\bf 1}_{s<\tau_{U_{a_k}(r)}} L_{\alpha}^\sB g_k (Y_s)\, ds \, , \qquad x\in U(r), r\le r_1. $$ As $k\to \infty$, the left-hand side converges to ${\mathbb E}_x [\eta(Y_{t\wedge \tau_{U(r)}})]-\eta(x)$. For the right-hand side we can use Fatou's lemma (justified because of \eqref{e:exit-time-estimate-U-2}) to conclude that for $x\in U(r)$ with $ r\le r_1$, \begin{align*} \limsup_{k\to \infty}{\mathbb E}_x\int_0^t {\bf 1}_{s<\tau_{U_{a_k}(r)}} L_{\alpha}^\sB g_k(Y_s)\, ds \le {\mathbb E}_x\int_0^t {\bf 1}_{s<\tau_{U(r)}} L_{\alpha}^\sB \eta(Y_s)\, ds \le -{\mathbb E}_x (t\wedge \tau_{U(r)})\, . \end{align*} Thus we get that ${\mathbb E}_x [\eta(Y_{t\wedge \tau_{U(r)}})]-\eta(x) \le -{\mathbb E}_x (t\wedge \tau_{U(r)})$, and by letting $t\to \infty$, $$ -\eta(x)\le {\mathbb E}_x [\eta(Y_{\tau_{U(r)}})]-\eta(x) \le -{\mathbb E}_x \tau_{U(r)}\, \, , \qquad x\in U(r), r\le r_1. $$ Thus we get ${\mathbb E}_x \tau_{U(r)}\le \eta(x)\le x_d^{\alpha-1}$. By using that $U(r_1)=r_1 U$ and \eqref{e:exit-time-scaling}, for any $x\in U$, $$ {\mathbb E}_x \tau_U=r_1^{-\alpha} {\mathbb E}_{r_1x}\tau_{r_1U}\le r_1^{-\alpha}(r_1x_d)^{\alpha-1}= r_1^{-1} x_d^{\alpha-1}\, . $$ We have proved the claim of the lemma with $C=r_1^{-1} $. {\hfill $\Box$ \bigskip} \color{black} \begin{prop}\label{l:dynkin-hp} Let $p\in [\alpha-1, \alpha+\beta_1)$, $R \ge 1$ and $r\le R$. For every $x\in U(r)$ it holds that \begin{equation}\label{e:dynkin-hp} {\mathbb E}_x [h_{p, R}(Y_{ \tau_{U(r)}})]=h_{p, R}(x)+{\mathbb E}_x \int_0^{\tau_{U(r)}} L_{\alpha}^\sB h_{p, R}(Y_s)\, ds \, . \end{equation} \end{prop} \noindent{\bf Proof.} Set $g_k:=v_k\ast (h_{p, R} {\bf 1}_{D(5^{-k})})$. Let $x\in U(r)$, $r\le R$. There is $k_0\ge 1$ such that $x\in U_{a_k}(r)$ for all $k\ge k_0$. 
Note that since $g_k\in C_c^{\infty}({\mathbb R}^d_+)$, it follows from \cite[Lemma 3.6]{KSV} that for all $t\ge 0$, $$ {\mathbb E}_x [g_k(Y_{t\wedge \tau_{U_{a_k}(r)}})]=g_k(x)+{\mathbb E}_x\int_0^t {\bf 1}_{s<\tau_{U_{a_k}(r)}} L_{\alpha}^\sB g_k (Y_s)\, ds \, . $$ Clearly, $\lim_{k\to \infty}\tau_{U_{a_k}(r)}=\tau_{U(r)}$. Since $g_k\to h_{p, R}$ as $k\to \infty$, we get that the left-hand side above converges to ${\mathbb E}_x [h_{p, R}(Y_{t\wedge \tau_{U(r)}})]$. On the other hand, by combining Lemmas \ref{l:Lvk} and \ref{l:hq}, we see that for every $z\in U(r)$ with $r\le R$, $$ \lim_{k\to \infty}|L_{\alpha}^\sB g_k (z)- L_{\alpha}^\sB h_{p, R}(z)|=0 $$ and $|L_{\alpha}^\sB g_k (z)- L_{\alpha}^\sB h_{p, R}(z)|$ is bounded. Thus, we can use the bounded convergence theorem and get $$ \lim_{k\to \infty} {\mathbb E}_x\int_0^t {\bf 1}_{s<\tau_{U_{a_k}(r)}} L_{\alpha}^\sB g_k (Y_s)\, ds = {\mathbb E}_x \int_0^t {\bf 1}_{s<\tau_{U(r)}} L_{\alpha}^\sB h_{p, R}(Y_s)\, ds \, . $$ Therefore, $$ {\mathbb E}_x [h_{p, R}(Y_{t\wedge \tau_{U_{a_k}(r)}})]=h_{p, R}(x)+{\mathbb E}_x\int_0^{t \wedge \tau_{U(r)}} L_{\alpha}^\sB h_{p, R} (Y_s)\, ds \, . $$ By letting $t\to \infty$ we get that the left-hand side above converges to ${\mathbb E}_x [h_{p, R}(Y_{\tau_{U(r)}})]$. When $p=\alpha-1$, by Lemma \ref{l:estimate-of-L-hat-B} (a), $ L_{\alpha}^\sB h_{p, R}(z)\le 0$. Thus we can use use the monotone convergence theorem and obtain \eqref{e:dynkin-hp}. When $p\in (\alpha-1, \alpha+\beta_1)$, by Lemma \ref{l:estimate-of-L-hat-B} (b) and scaling, we have $ L_{\alpha}^\sB h_{p, R}(z)> 0$ on $D(R/2, r_0R)\supset D(r/2, rr_0/2)$. Thus, we can use the monotone convergence theorem and get $$ \lim_{t\to \infty}{\mathbb E}_x\int_0^{t \wedge \tau_{U(r)}} {\bf 1}_{Y_s \in D(r/2, rr_0/2)} L_{\alpha}^\sB h_{p, R} (Y_s) \, ds ={\mathbb E}_x \int_0^{\tau_{U(r)}} {\bf 1}_{Y_s \in D(r/2, rr_0/2)} L_{\alpha}^\sB h_{p, R}(Y_s)\, ds. $$ On the other hand, since $$ L_{\alpha}^\sB h_{p, R}(z)=C(\alpha, p, \sB)z_d^{p-\alpha}-\int_{D(R, R)}\frac{y^p_d}{|y-z|^{d+\alpha}}\sB(y, z)dy, \quad z\in D(R, R), $$ we know that $L_{\alpha}^\sB h_{p, R}(z)$ is bounded on $U(r) \setminus D(r/2, rr_0/2)$. Thus, using Lemma \ref{l:exit-time estimate-U} and the bounded convergence theorem, we get $$ \lim_{t\to \infty}{\mathbb E}_x\int_0^{t \wedge \tau_{U(r)}} {\bf 1}_{Y_s \in U(r) \setminus D(r/2, rr_0/2)} L_{\alpha}^\sB h_{p, R} (Y_s)\, ds ={\mathbb E}_x \int_0^{\tau_{U(r)}} {\bf 1}_{Y_s \in U(r) \setminus D(r/2, rr_0/2)} L_{\alpha}^\sB h_{p, R}(Y_s)\, ds. $$ Combining these, we obtain \eqref{e:dynkin-hp} for $p\in (\alpha-1, \alpha+\beta_1)$ too. {\hfill $\Box$ \bigskip} \section{Boundary Harnack principle} \label{s:BHP} In this section we always assume that $\alpha>1$. The next two results are applications of Proposition \ref{l:dynkin-hp}. \begin{lemma}\label{l:gharmonic} For all $r>0$ it holds that \begin{equation}\label{e:gharmonic} {\mathbb E}_x [h_{\alpha-1, \infty}(Y_{ \tau_{U(r)}})]=h_{\alpha-1, \infty}(x), \quad \text{for all } x \in U(r). \end{equation} In particular, the function $h_{\alpha-1, \infty}(x)=g_{\alpha-1}(x)=x_d^{\alpha-1}$ is harmonic in ${\mathbb R}^d_+$ with respect to $Y$. \end{lemma} \noindent{\bf Proof.} Fix $r>0$ and $x \in U(r)$. Let $R \ge 1 \vee r$. By Proposition \ref{l:dynkin-hp}, \begin{equation}\label{e:ha} {\mathbb E}_x [h_{\alpha-1, R}(Y_{ \tau_{U(r)}})]=x_d^{\alpha-1}+{\mathbb E}_x \int_0^{\tau_{U(r)}} L_{\alpha}^\sB h_{\alpha-1, R}(Y_s)\, ds \, . 
\end{equation} By the monotone convergence theorem, $$\lim_{R \to \infty} {\mathbb E}_x [h_{\alpha-1, R}(Y_{ \tau_{U(r)}})]={\mathbb E}_x [h_{\alpha-1, \infty}(Y_{ \tau_{U(r)}})].$$ Using Lemmas \ref{l:estimate-of-L-hat-B} (a) and \ref{l:exit-time estimate-U}, we see that \begin{align*} 0 &\ge {\mathbb E}_x \int_0^{\tau_{U(r)}} L_{\alpha}^\sB h_{\alpha-1, R}(Y_s)\, ds\\ &\ge -c_1(r){\mathbb E}_x {\tau_{U(r)}} \int_{|y|\ge R} |y|^{-\beta_1-d-1}\big(1+{\bf 1}_{|y|\ge1}(\log|y|)^{\beta_3}\big)\, dy\\ &\ge -c_2(r) \int_{|y|\ge R} |y|^{-\beta_1-d-1}\big(1+{\bf 1}_{|y|\ge1}(\log|y|)^{\beta_3}\big)\, dy. \end{align*} Thus $$ \lim_{R \to \infty}{\mathbb E}_x \int_0^{\tau_{U(r)}} L_{\alpha}^\sB h_{\alpha-1, R}(Y_s)\, ds=0. $$ The proof is now complete. {\hfill $\Box$ \bigskip} Recall that $r_0$ in the constant in Lemma \ref{l:estimate-of-L-hat-B}(b). \begin{lemma}\label{l:exltlow} There exists $C>0$ such that $$ {\mathbb E}_x\tau_{U(r_0)} \le C {\mathbb P}_x\left(Y_{\tau_{U(r_0)}}\in D(1, 1) \right) \quad \text{for all } x \in U(r_0). $$ \end{lemma} \noindent{\bf Proof.} Choose a $p\in (\alpha-1, \alpha)$. By Proposition \ref{l:dynkin-hp}, for every $x\in U(r_0)$, \begin{align*} {\mathbb E}_x [h_{p}(Y_{ \tau_{U(r_0)}})]=h_{p}(x)+{\mathbb E}_x \int_0^{\tau_{U(r_0)}} L_{\alpha}^\sB h_{p}(Y_s)\, ds \, . \end{align*} Thus, using Lemma \ref{l:estimate-of-L-hat-B} (b) and that $h_{p}$ is bounded by 1 and supported on $D(1, 1)$, we get $$ {\mathbb P}_x\left(Y_{\tau_{U(r_0)}}\in D(1, 1)\right) \ge {\mathbb E}_x [h_{p}(Y_{ \tau_{U(r_0)}})] \ge {\mathbb E}_x \int_0^{\tau_{U(r_0)}} L_{\alpha}^\sB h_{p}(Y_s)\, ds \ge c {\mathbb E}_x\tau_{U(r_0)}. $$ {\hfill $\Box$ \bigskip} Since $\int_{D(1,1)}y_d^{\alpha-1} |y|^{-d-\alpha-\beta_1}\, dy =+\infty$, by the same argument as that leading to \cite[(5.10)]{KSV}, we also have that there exists an $r_1 \in (0, r_0)$ small enough so that for all $r\in (0, r_1]$ and $ R \ge 1$, \begin{equation}\label{e:integral-upper-g} {\mathbb E}_x \int_0^{\tau_{U(r)}} (Y_t^d)^{\beta_1} |\log Y_t^d|^{\beta_3} \, dt \le {\mathbb E}_x[h_{\alpha-1}\color{black}(Y_{\tau_{U(r)}})] \le {\mathbb E}_x[h_{ \alpha-1, R}(Y_{\tau_{U(r)}})]\, ,\color{black} \quad x\in U(r) \, . \end{equation} Using \eqref{e:exit-time-scaling}, \eqref{e:integral-upper-g}, Proposition \ref{l:dynkin-hp} and Lemma \ref{l:estimate-of-L-hat-B} (a), repeating the proof of \cite[Lemma 5.7]{KSV}, we get the following upper bound. \begin{lemma}\label{l:upper-bound-for-integral} There exists a constant $C>0$ such that for all $x\in U$, \begin{equation}\label{e:upper-bound-for-integral} {\mathbb E}_x \int_0^{\tau_{U}} (Y_t^d)^{\beta_1} |\log Y_t^d|^{\beta_3}\, dt \le C x_d^{\alpha-1}\, . \end{equation} \end{lemma} Using the above lemma, \cite[Lemma 5.2(b)]{KSV} and the scaling property of $Y$, the proof of the next lemma is the same as that of \cite[Lemma 6.1)]{KSV}, we omit the proof. \begin{lemma}\label{e:POTAe7.14} There exists $C>0$ such that for all $0<4r\le R\le 1$ and $w\in D(r,r)$, $$ {\mathbb P}_w\Big(Y_{\tau_{B(w, r)\cap {\mathbb R}^d_+}}\in A(w, R, 4)\cap {\mathbb R}^d_+\Big)\le C \frac{r^{\alpha+\beta_1}}{R^{\alpha+\beta_1}}\frac{w_d^{\alpha-1}}{r^{\alpha-1}}. $$ \end{lemma} Recall that $r_0$ in the constant in Lemma \ref{l:estimate-of-L-hat-B}(b). \begin{lemma}\label{l:POTAl7.4} There exists $C>0$ such that for any $x \in U(2^{-4}r_0)$, $$ {\mathbb P}_x\left(Y_{\tau_{U(r_0)}}\in D(1, 1)\right)\le C {\mathbb P}_x\left(Y_{\tau_{U(r_0)}}\in D(1/2, 1)\setminus D(1/2, 3/4)\right). 
$$ \end{lemma} \noindent{\bf Proof.} Let $V=U(r_0)$ and $$ H_2:=\{Y_{\tau_{U}}\in D(1, 1)\}, \quad H_1:=\{Y_{\tau_{U}}\in D(1/2, 1)\setminus D(1/2, 3/4)\}. $$ Choose a $p \in (\alpha-1, \alpha)$ and let $ \kappa(x) =C(\alpha, p, \sB)x_d^{-\alpha}. $ Let $Y^{\kappa}$ be the subprocess of $Y$ with killing potential $\kappa$ so that the corresponding Dirichlet form is $ {\mathcal E}(u,v)+\int_{{\mathbb R}^d_+} u(x)v(x)\kappa(x) dx . $ Either by repeating the proof of \cite[(5.17)]{KSV} or using \cite[Theorem 1.3]{KSV}, we get that $$ {\mathbb P}_w\left(Y^{\kappa}_{\tau_{V}}\in D(r_0/4, 1)\setminus D(r_0/4, 3/4)\right) \ge c_1 w^p_d, \quad w\in U(r_0/2). $$ Thus, \begin{equation}\label{e:POTAe7.15} {\mathbb P}_w(H_1)\ge {\mathbb P}_w\left(Y^{\kappa}_{\tau_{V}}\in D(1/2, 1)\setminus D(1/2, 3/4)\right) \ge c_1 w^p_d, \quad w\in U(r_0/2). \end{equation} For $i\ge 1$, set $$ s_0=s_1, \quad s_i=\frac{r_0}8\Big(\frac12-\frac1{50}\sum^i_{j=1}\frac1{j^2}\Big)\quad \text{ and } \quad J_i=D(s_i, 2^{-i-3}r_0)\setminus D(s_i, 2^{-i-4}r_0) . $$ Note that $r_0/(20)<s_i<r_0/(16)$. Define for $i\ge 1$, \begin{equation}\label{e:POTAe7.16} d_i=\sup_{z\in J_i}\frac{{\mathbb P}_z(H_2)}{{\mathbb P}_z(H_1)}, \quad \widetilde{J}_i= D(s_{i-1}, 2^{-i-3}r_0), \quad \tau_i=\tau_{\widetilde{J}_i}. \end{equation} Repeating the argument leading to \cite[(6.29)]{KSV19}, we get that for $z\in J_i$ and $i\ge 2$, \begin{equation}\label{e:POTAe7.17} {\mathbb P}_z(H_2)\le \Big(\sup_{1\le k\le i-1}d_k\Big){\mathbb P}_z(H_1) +{\mathbb P}_z\left(Y_{\tau_i}\in D(1, 1) \setminus \cup^{i-1}_{k=1}J_k\right). \end{equation} Recall that $n_0$ in the constant in Corollary \ref{c:lemma4-1} (b). For $i\ge 2$, define $\sigma_{i,0}=0, \sigma_{i,1}=\inf\{t>0: |Y_t-Y_0|\geq n_0 2^{-i-2}r_0\} $ and $\sigma_{i,m+1}=\sigma_{i,m}+\sigma_{i,1}\circ\theta_{\sigma_{i,m}}$ for $m\geq 1$. By Corollary \ref{c:lemma4-1} (b), we have that \begin{align}\label{e:POTAe7.18} {\mathbb P}_{w}(Y_{\sigma_{i,1}}\in \widetilde{J}_i) \le 1- {\mathbb P}_{w}( \sigma_{i,1}=\zeta ) \le 1-{\mathbb P}_{w}( \tau_{B(w,n_0 w_d)} =\zeta ) <2^{-1},\ \ \ w\in \widetilde{J}_i. \end{align} For the purpose of further estimates, we now choose a positive integer $l$ such that $l \ge \alpha+\beta_1$. Next we choose $i_0 \ge 2$ large enough so that $n_02^{-i+1}<1/(200 l i^3)$ for all $i\ge i_0$. Now we assume $i\ge i_0$. Using \eqref{e:POTAe7.18} and the strong Markov property we have that for $z\in J_i$, \begin{align}\label{e:POTAe7.19} &{\mathbb P}_z( \tau_{i}>\sigma_{i,li})\leq {\mathbb P}_z(Y_{\sigma_{i,k}}\in \widetilde{J}_i, 1\leq k\leq li )\nonumber\\ &= {\mathbb E}_z \left[ {\mathbb P}_{Y_{\sigma_{i,li-1}}} (Y_{\sigma_{i,1}}\in \widetilde{J}_i) : Y_{\sigma_{i,li-1}} \in \widetilde{J}_i, Y_{\sigma_{i,k}}\in \widetilde{J}_i, 1\leq k\leq li-2 \right] \nonumber\\ &\leq {\mathbb P}_z\left(Y_{\sigma_{i,k}}\in \widetilde{J}_i, 1\leq k\leq li-1 \right)2^{-1}\leq 2^{-li}. \end{align} Note that if $z\in J_i$ and $y\in D(1, 1) \setminus[ \widetilde{J}_i \cup(\cup_{k=1}^{i-1}J_k)]$, then $|y-z|\ge (s_{i-1}-s_i) \wedge (2^{-4}r_0-2^{-i-3}r_0) = r_0/(400 i^2)$. Furthermore, since $2^{-i-2}n_0 < 1/(400 i^2)$ (recall that $i\ge i_0$), if $Y_{\tau_i}(\omega)\in D(1, 1) \setminus \cup_{k=1}^{i-1}J_k$ and $\tau_i(\omega)\le \sigma_{i,li}(\omega)$, then $\tau_i(\omega)=\sigma_{i,k}(\omega)$ for some $k=k(\omega)\le li$. Dependence of $k$ on $\omega$ will be omitted in the next few lines. 
Hence on $\{Y_{\tau_{i}}\in D (1 , 1) \setminus \cup_{k=1}^{i-1}J_k,\ \ \tau_{i}\leq \sigma_{i,li}\}$ with $Y_0=z\in J_i$, we have $|Y_{\sigma_{i,k}}-Y_{\sigma_{i,0}}|=|Y_{\tau_i}-Y_0|> \frac{r_0}{400i^2}$ for some $1\leq k\leq li$. Thus for some $1\leq k\leq li$, $ \sum_{j=0}^k|Y_{\sigma_{i,j}}-Y_{\sigma_{i,j-1}}|> r_0(400i^2)^{-1} $ which implies for some $1\leq j\leq k\le li$, $ |Y_{\sigma_{i,j}}-Y_{\sigma_{i,j-1}}|\geq r_0({k400i^2})^{-1}\ge r_0(li)^{-1} (400i^2)^{-1}\, . $ Thus, we have \begin{align*} & \{Y_{\tau_{i}}\in D(1, 1) \setminus \cup_{k=1}^{i-1}J_k,\ \ \tau_{i}\leq \sigma_{i,li}\}\\ \subset & \cup_{j=1}^{li}\{|Y_{\sigma_{i,j}}- Y_{\sigma_{i,j-1}}|\geq r_0/(800li^3), Y_{\sigma_{i,j}}\in D(1, 1), Y_{\sigma_{i,j-1}}\in \widetilde{J}_{i} \}. \end{align*} Now, using Lemma \ref{e:POTAe7.14} (with $r=2^{-i-2}n_0r_0$ and $R=r_0/(800 l i^3)$) (noting that $4 \cdot 2^{-i-1}n_0<1/(400 l i^3)$ for all $i\ge i_0$), and repeating the argument leading to \cite[(6.34)]{KSV19}, we get that for $z\in J_i$, \begin{align*} &{\mathbb P}_z \left(Y_{\tau_{i}}\in D(1, 1) \setminus \cup_{k=1}^{i-1}J_k,\ \ \tau_{i}\leq \sigma_{i,li} \right) \leq li \sup_{w\in \widetilde{J}_{i}} {\mathbb P}_w\left(|Y_{\sigma_{i,1}}-w |\geq r_0(800li^3)^{-1}, Y_{\sigma_{i,1}}\in D(1, 1) \right)\\ &\le li \sup_{w\in \widetilde{J}_{i}} {\mathbb P}_w\left(4>|Y_{\sigma_{i,1}}-w |\geq r_0(800li^3)^{-1}\right) \le c_{12}li \left( \frac{800li^3}{2^{i+3}}\right)^{\alpha+\beta_1}. \end{align*} By this and (\ref{e:POTAe7.19}), we have for $z\in J_i$, $i\ge i_0$, \begin{align}\label{e:POTAe7.22} &{\mathbb P}_z\left( Y_{\tau_{i}}\in D(1, 1) \setminus \cup_{k=1}^{i-1}J_k \right) \leq 2^{-li} +c_{2} li \left( \frac{800li^3 }{2^{i+3}}\right)^{\alpha+\beta_1}. \end{align} By our choice of $l$, we have \begin{align}\label{e:POTAe7.23} &li \left( \frac{800li^3}{2^{i+3}}\right)^{\alpha+\beta_1} = 100^{\alpha+\beta_1} l^{1+\alpha+\beta_1} i^{1+3(\alpha+\beta_1)} \left(2^{-(\alpha+\beta_1)}\right)^i \ge \left(2^{-(\alpha+\beta_1)}\right)^i \ge (2^{-l})^{i}. \end{align} Thus combining \eqref{e:POTAe7.23} with \eqref{e:POTAe7.22}, and then using \eqref{e:POTAe7.15}, we get that for $z\in J_i$, $i\ge i_0$, \begin{align}\label{e:POTAe7.24} &\frac{{\mathbb P}_z( Y_{\tau_i}\in D(1, 1) \setminus \cup_{k=1}^{i-1}J_k)}{{\mathbb P}_z(H_1)} \le c_{3} li 2^{ip} \left( \frac{800li^3 }{2^{i+3}}\right)^{\alpha+\beta_1} \le c_{4} i ^{1+3(\alpha+\beta_1)}2^{(p-\alpha-\beta_1)i}. \end{align} By this and (\ref{e:POTAe7.17}), for $z\in J_i$, $i\ge i_0$,for all $i\ge i_0$ \begin{align*} &\frac{{\mathbb P}_z( H_2)}{{\mathbb P}_z(H_1)} \leq \sup_{1\leq k\leq i-1} d_k +\frac{{\mathbb P}_z( Y_{\tau_i}\in D(1 , 1) \setminus \cup_{k=1}^{i-1}J_k)}{{\mathbb P}_z(H_1)}\le \sup_{1\leq k\leq i-1}d_k+ c_{4} \frac{i ^{1+3(\alpha+\beta_1)}}{2^{(\alpha+\beta_1-p)i}}. \end{align*} This implies that for all $i\ge1$ \begin{align} d_i& \leq \sup_{1\leq k\leq i_0-1} d_k +c_{4}\sum_{k=1}^i\frac{i ^{1+3(\alpha+\beta_1)}}{2^{(\alpha+\beta_1-p)i}} \leq \sup_{1\leq k\leq i_0-1} d_k +c_{4}\sum_{k=1}^\infty\frac{i ^{1+3(\alpha+\beta_1)}}{2^{(\alpha+\beta_1-p)i}} =:c_{5} <\infty.\nonumber \end{align} Since $U(2^{-4}r_0) \subset \cup_{k=1}^\infty J_k$, the proof is now complete. {\hfill $\Box$ \bigskip} Using Corollary \ref{c:lemma4-1} (b), we can prove the following Carleson estimate. 
\begin{thm}[Carleson estimate]\label{t:carleson} There exists a constant $C>0$ such that for any $w \in\partial {\mathbb R}^d_+$, $r>0$, and any non-negative function $f$ in ${\mathbb R}^d_+$ that is harmonic in ${\mathbb R}^d_+ \cap B(w, r)$ with respect to $Y$ and vanishes continuously on $ \partial {\mathbb R}^d_+ \cap B(w, r)$, we have \begin{equation}\label{e:carleson} f(x)\le C f(\widehat{x}) \qquad \hbox{for all } x\in {\mathbb R}^d_+\cap B(w,r/2), \end{equation} where $\widehat{x}\in {\mathbb R}^d_+\cap B(w,r)$ with $\widehat{x}_d\ge r/4$. \end{thm} \noindent{\bf Proof.} Recall that $n_0$ is the constant in Corollary \ref{c:lemma4-1} (b). By using $B_0(x)=B(x,n_0x_d)$ instead of $B_0(x)=B(x,x_d/2)$ in the proof of \cite[Theorem 1.2]{KSV} and applying our Corollary \ref{c:lemma4-1} (b), the proof of the theorem is almost identical to that of \cite[Theorem 1.2]{KSV}. We omit the details. {\hfill $\Box$ \bigskip} \noindent {\bf Proof of Theorem \ref{t:BHP}.} Recall that $r_0$ is the constant in Lemma \ref{l:estimate-of-L-hat-B}(b). By scaling, it suffices to deal with the case $r=1$. Moreover, by Theorem \ref{t:uhp} (b), it suffices to prove \eqref{e:TAMSe1.8new} for $x, y\in D_{\widetilde w}(2^{-8}r_0, 2^{-8}r_0)$. Since $f$ is harmonic in $D_{\widetilde w}(2, 2)$ and vanishes continuously on $B(\widetilde w, 2)\cap \partial {\mathbb R}^d_+$, it is regular harmonic in $D_{\widetilde w}(7/4, 7/4)$ and vanishes continuously on $B(\widetilde w, 7/4)\cap \partial {\mathbb R}^d_+$ (see \cite[Lemma 5.1]{KSV19} and its proof). Throughout the remainder of this proof, we assume that $x\in D_{\widetilde w}(2^{-8}r_0, 2^{-8}r_0)$. Without loss of generality we take $\widetilde{w}=0$. Define $x_0=(\widetilde{x}, 1/(16))$ and $V=U(r_0)$. By the Harnack inequality and Lemma \ref{l:POTAl7.4}, we have \begin{align}\label{e:TAMSe6.37} f(x)&={\mathbb E}_x[f(Y_{\tau_{V}})]\ge {\mathbb E}_x[f(Y_{\tau_{V}}); Y_{\tau_{V}}\in D(1/2, 1)\setminus D(1/2, 3/4)] \nonumber\\ &\ge c_0f(x_0){\mathbb P}_x(Y_{\tau_{V}}\in D(1/2, 1)\setminus D(1/2, 3/4)) \ge c_1f(x_0){\mathbb P}_x(Y_{\tau_{V}}\in D(1, 1)). \end{align} Set $w_0=(\widetilde{0}, 2^{-7})$. Then, using \cite[Proposition 3.11 (a)]{KSV}, we also have \cite[(6.13)-(6.14)]{KSV}, that is, \begin{align}\label{e:POTAe7.27} f(w_0)&\ge c_2 \int_{{\mathbb R}^d_+\setminus D(1, 1)} J({w_0}, y)f(y)dy, \end{align} and \begin{equation}\label{e:new-estimate-for-J} J(z,y)\le c_{3} J(w_0,y), \quad \text{ for any $z\in U$ and $y\in {\mathbb R}^d_+\setminus D(1, 1)$}. \end{equation} Combining \eqref{e:new-estimate-for-J} with \eqref{e:POTAe7.27} we now have \begin{align}\label{e:POTAe7.29} &{\mathbb E}_x\left[f(Y_{\tau_{V}}); Y_{\tau_{V}}\notin D(1, 1) \right]={\mathbb E}_x\int^{\tau_{V}}_0\int_{{\mathbb R}^d_+\setminus D(1, 1)} J(Y_t, y)f(y)dydt\nonumber\\ &\le c_{3} {\mathbb E}_x\tau_{V}\int_{{\mathbb R}^d_+\setminus D(1, 1)} J(w_0, y)f(y)dy\le c_{4} f(w_0){\mathbb E}_x\tau_{V}. \end{align} On the other hand, by the Harnack inequality (Theorem \ref{t:uhp}) and Carleson's estimate (Theorem \ref{t:carleson}), we have \begin{align}\label{e:POTAe7.30} {\mathbb E}_x\left[f(Y_{\tau_{V}}); Y_{\tau_{V}}\in D(1, 1) \right]&\le c_{5}f(x_0){\mathbb P}_x\left(Y_{\tau_{V}}\in D(1, 1) \right).
\end{align} Combining \eqref{e:POTAe7.29} and \eqref{e:POTAe7.30}, and using the Harnack inequality, we get \begin{align*} f(x)&={\mathbb E}_x\left[f(Y_{\tau_{V}}); Y_{\tau_{V}}\in D(1, 1) \right]+ {\mathbb E}_x\left[f(Y_{\tau_{V}}); Y_{\tau_{V}}\notin D(1, 1) \right]\nonumber\\ &\le c_4f(w_0){\mathbb E}_x\tau_{V}+c_{5}f(x_0){\mathbb P}_x\left(Y_{\tau_{V}}\in D(1, 1) \right) \le c_{6} f(x_0)\left({\mathbb E}_x\tau_{V}+{\mathbb P}_x\left(Y_{\tau_{V}}\in D(1, 1) \right) \right). \end{align*} This with \eqref{e:POTAe7.27} and Lemma \ref{l:exltlow} implies that $ f(x) \asymp f(x_0) {\mathbb P}_x\left(Y_{\tau_{V}}\in D(1, 1) \right). $ For any $y\in D(2^{-8}r_0, 2^{-8}r_0)$, we have the same estimate with $f(y_0)$ instead of $f(x_0)$, where $y_0=(\widetilde{y}, 1/(16))$. By the Harnack inequality, we have $f(x_0)\asymp f(y_0)$. Thus, $$ \frac{f(x)}{f(y)}\asymp \frac{ {\mathbb P}_x\left(Y_{\tau_{V}}\in D(1, 1) \right)} { {\mathbb P}_y\left(Y_{\tau_{V}}\in D(1, 1) \right)}. $$ We now apply this with $g_{\alpha-1}$ (which is harmonic by Lemma \ref{l:gharmonic}) to conclude that $$ \frac{x_d^{\alpha-1}}{y_d^{\alpha-1}}\asymp \frac{ {\mathbb P}_x\left(Y_{\tau_{V}}\in D(1, 1) \right)} { {\mathbb P}_y\left(Y_{\tau_{V}}\in D(1, 1) \right)}\asymp \frac{f(x)}{f(y)}. $$ {\hfill $\Box$ \bigskip} We now show that any non-negative function which is regular harmonic near a portion of the boundary vanishes continuously on that portion of the boundary, cf.~\cite[Remark 6.2]{BBC} and \cite[Lemma 3.2]{CK02}. Thus, the above boundary Harnack principle also holds for regular harmonic functions. \begin{lemma}\label{l:bh-decay} There exists a constant $C>0$ such that for every bounded function $f:{\mathbb R}^d_+\to [0,\infty)$ which is regular harmonic in $U=D(1/2,1/2)$, it holds that $$ f(x)\le C \|f\|_{\infty} x_d^{\alpha-1}, \quad x\in D(2^{-5}, 2^{-5}). $$ \end{lemma} \noindent{\bf Proof.} Let $f:{\mathbb R}^d_+\to [0,\infty)$ be a bounded function which is regular harmonic in $U=D(1/2,1/2)$. Then for every $x\in D(2^{-5}, 2^{-5})$, \begin{equation}\label{e:bh-decay-1} f(x)={\mathbb E}_x [f(Y_{\tau_U}), Y_{\tau_U}\in D(1,1)]+{\mathbb E}_x [f(Y_{\tau_U}), Y_{\tau_U}\notin D(1,1)]. \end{equation} In the first term we use $f(Y_{\tau_U})\le \|f\|_{\infty}$ and then apply Theorem \ref{t:BHP} to get \begin{equation}\label{e:bh.decay-2} {\mathbb E}_x [f(Y_{\tau_U}), Y_{\tau_U}\in D(1,1)]\le \|f\|_{\infty} {\mathbb P}_x(Y_{\tau_U}\in D(1,1))\le c_1 \|f\|_{\infty}x_d^{\alpha-1}. \end{equation} Now we estimate the second term. For $z\in U$ and $w\in {\mathbb R}^d_+\setminus D(1, 1)$, we have $|w-z|\asymp |w|$. Thus, by using \cite[Lemma 5.2(a)]{KSV}, \begin{align*} &\int_{{\mathbb R}^d_+\setminus D(1,1)} f(w) \sB(z,w)|z-w|^{-d-\alpha} dw\nonumber\\ &\le c_2\|f\|_{\infty} z_d^{\beta_1}(|\log z_d|^{\beta_3}\vee 1)\int_{{\mathbb R}^d_+\setminus D(1,1)}\frac{1}{|w|^{d+\alpha+\beta_1}} \big(1+{\bf 1}_{|w|\ge1}(\log|w|)^{\beta_3}\big) dw\\ &\le c_2 \|f\|_{\infty}z_d^{\beta_1} |\log z_d|^{\beta_3}\int_{{\mathbb R}^d_+\setminus D(1,1)}\frac{ \big(1+{\bf 1}_{|w|\ge1}(\log|w|)^{\beta_3}\big)}{|w|^{d+\alpha+\beta_1}} dw.
\end{align*} Hence, by using the L\'evy system formula and Lemma \ref{l:upper-bound-for-integral}, for $x\in D(2^{-5}, 2^{-5})$, \begin{align}\label{e:bh-decay-3} &{\mathbb E}_x\left[f(Y_{\tau_{U}}); Y_{\tau_{U}}\notin D(1,1)\right] \nonumber \\ & = {\mathbb E}_x \int_0^{\tau_U} \int_{{\mathbb R}^d_+\setminus D(1,1)} f(w) J(Y_t,w)\, dw\, dt \nonumber\\ &\le c_3 \|f\|_{\infty} x_d^{\alpha-1} \int_{{\mathbb R}^d_+\setminus D(1,1 )}\frac{ \big(1+{\bf 1}_{|w|\ge1}(\log|w|)^{\beta_3}\big)}{|w|^{d+\alpha+\beta_1}} dw\nonumber \\ &\le c_4 \|f\|_{\infty} x_d^{\alpha-1}. \end{align} The claim of the lemma follows by using \eqref{e:bh.decay-2} and \eqref{e:bh-decay-3} in \eqref{e:bh-decay-1}. {\hfill $\Box$ \bigskip} \begin{prop}\label{p:h-decay} There exists $C>0$ such that for every $f:{\mathbb R}^d_+\to [0,\infty)$ which is regular harmonic in $U=D(1/2,1/2)$, it holds that $$ f(x)\le C \left(\frac{f(w)}{w_d^{\alpha-1}}\right) x_d^{\alpha-1}, \quad w, x\in D(2^{-3}, 2^{-3}). $$ In particular, $f$ vanishes continuously on $\{x=(\widetilde{x},0)\in \partial {\mathbb R}^d_+: |\widetilde{x}|< 2^{-3}\}$. \end{prop} \noindent{\bf Proof.} For any $k\in {\mathbb N}$ define $$ f_k(x):={\mathbb E}_x[f(Y_{\tau_U})\wedge k]={\mathbb E}_x[(f\wedge k)(Y_{\tau_U})] . $$ Then $f_k$ is regular harmonic in $U$ and bounded in ${\mathbb R}^d_+$. By Lemma \ref{l:bh-decay}, $f_k(x)\le c_1 \|f\wedge k\|_{\infty} x_d^{\alpha-1}$. Thus $f_k$ vanishes continuously at the boundary. By the boundary Harnack principle, Theorem \ref{t:BHP}, there is $C>0$ such that for every $k\in {\mathbb N}$ $$ \frac{f_k(x)}{f_k(w)}\le C \frac{x_d^{\alpha-1}}{w_d^{\alpha-1}}, \qquad x, w\in D(2^{-3}, 2^{-3}). $$ By letting $k\to \infty$ we get that $$ \frac{f(x)}{f(w)}\le C \frac{x_d^{\alpha-1}}{w_d^{\alpha-1}}, \qquad x, w\in D(2^{-3}, 2^{-3}). $$ {\hfill $\Box$ \bigskip} \section{Estimates on Green functions and potentials}\label{s:EGP} \subsection{Green function estimates} By following \cite[Section 2]{KSV21} we see that there exists a symmetric function $ G(x,y)$, $x,y\in {\mathbb R}^d_+$, such that for all Borel functions $f:{\mathbb R}^d_+\to [0,\infty)$ we have that $$ {\mathbb E}_x\int_0^{\zeta}f(Y_t)\, dt =\int_{{\mathbb R}^d_+}G(x,y)f(y)\, dy. $$ Moreover, since $Y$ is transient, $G(x,y)$ is not identically infinite, and as in \cite[Proposition 2.2]{KSV21}, we can conclude that $G(\cdot, \cdot)$ is lower-semicontinuous in each variable and finite outside the diagonal. Further, for every $x\in {\mathbb R}^d_+$, $G(x, \cdot)$ is harmonic with respect to $Y$ in ${\mathbb R}^d_+\setminus\{x\}$ and regular harmonic in ${\mathbb R}^d_+\setminus B(x, \epsilon)$ for every $\epsilon >0$. The function $G$ enjoys the following scaling property (proved as in \cite[Proposition 2.4]{KSV21}): For all $x,y\in {\mathbb R}^d_+$, $x\neq y,$ \begin{equation}\label{e:green-scaling} G(x,y)=G\left(\frac{x}{|x-y|}, \frac{y}{|x-y|}\right)|x-y|^{\alpha-d}\, . \end{equation} Choose a $p \in (\alpha-1, \alpha)$ and let $ \kappa(x) =C(\alpha, p, \sB)x_d^{-\alpha}. $ Let $Y^{\kappa}$ be the subprocess of $Y$ with killing potential $\kappa$ so that the corresponding Dirichlet form is $ {\mathcal E}^{\kappa}(u,v)= {\mathcal E}(u,v)+\int_{{\mathbb R}^d_+} u(x)v(x)\kappa(x) dx$. Let $G(x, y)$ and $G^{\kappa}(x, y)$ be the Green functions of $Y$ and $Y^{\kappa}$ respectively. Then $G(x, y)\ge G^{\kappa}(x, y)$. Now the following result follows immediately from \cite[Proposition 4.1]{KSV21}.
\begin{prop}\label{p:green-lower-bound} For any $C_1>0$, there exists a constant $C_2>0$ such that for all $x,y\in {\mathbb R}^d_+$ satisfying $|x-y|\le C_1(x_d\wedge y_d)$, it holds that $$ G(x,y)\ge C_2|x-y|^{-d+\alpha}. $$ \end{prop} From now on we assume $d > (\alpha+\beta_1+\beta_2)\wedge 2$. In \cite[Section 4.2]{KSV21}, the killing function plays no role. Repeating the argument leading to \cite[Proposition 4.6]{KSV}, we get \begin{prop}\label{p:green-upper-bound} There exists a constant $C>0$ such that for all $x,y\in {\mathbb R}^d_+$ satisfying $|x-y|\le x_d\wedge y_d$, it holds that $$ G(x,y) \le C |x-y|^{-d+\alpha}. $$ \end{prop} Using Proposition \ref{p:h-decay}, we can combine Proposition \ref{p:green-upper-bound} with Theorem \ref{t:carleson} to get the following result, which is key for us to get sharp two-sided Green function estimates. \begin{prop}\label{p:gfcnub} There exists a constant $C>0$ such that for all $x, y\in {\mathbb R}^d_+$, \begin{align} \label{e:upper} G(x,y) \le C |x-y|^{-d+\alpha}. \end{align} \end{prop} \noindent{\bf Proof.} It follows from Proposition \ref{p:green-upper-bound} that there exists $c_1>0$ such that $G(x, y)\le c_1$ for all $x, y\in {\mathbb R}^d_+$ with $|x-y|=1$ and $x_d\wedge y_d>1$. By Theorem \ref{t:uhp}, for any $c_2>0$, there exists $c_3>0$ such that $G(x, y)\le c_3$ for all $x, y\in {\mathbb R}^d_+$ with $|x-y|=1$ and $x_d\wedge y_d>c_2$. Now by Theorem \ref{t:carleson}, we see that there exists $c_4>0$ such that $G(x,y) \le c_4$ for all $x, y\in {\mathbb R}^d_+$ with $|x-y|=1$. Therefore, by \eqref{e:green-scaling}, we have $$ G(x,y) \le C |x-y|^{-d+\alpha}, \quad x, y\in {\mathbb R}^d_+. $$ {\hfill $\Box$ \bigskip} Now we prove the two-sided Green function estimates. \noindent {\bf Proof of Theorem \ref{t:Green}.} The scaling property and the invariance of the half space under scaling imply that in order to prove Theorem \ref{t:Green} we only need to show that for all $x,y\in {\mathbb R}^d_+$ satisfying $|x-y|=1$, \begin{equation}\label{e:Green2} C^{-1} \left(x_d \wedge 1 \right)^{\alpha-1}\left({y_d} \wedge 1 \right)^{\alpha-1} \le G (x,y) \le C \left(x_d \wedge 1 \right)^{\alpha-1}\left({y_d} \wedge 1 \right)^{\alpha-1}. \end{equation} Indeed, given \eqref{e:Green2}, for arbitrary $x\neq y$ one can apply \eqref{e:green-scaling} to the rescaled points $x'=x/|x-y|$ and $y'=y/|x-y|$, which satisfy $|x'-y'|=1$, $x'_d=x_d/|x-y|$ and $y'_d=y_d/|x-y|$, and recover the general two-sided estimate. By scaling, Theorem \ref{t:uhp}, and Propositions \ref{p:green-lower-bound} and \ref{p:gfcnub}, we only need to show \eqref{e:Green2} for $x_d \wedge y_d \le 2^{-3}$ and $|x-y|=1$. We now assume that $|x-y|=1$ and let $x_0=(\widetilde x, 2^{-3})$ and $y_0=(\widetilde y, 2^{-3})$. We first note that Proposition \ref{p:h-decay}, together with scaling, clearly implies that for all $x\in {\mathbb R}^d_+$, $y\mapsto G(x,y)$ vanishes at the boundary of ${\mathbb R}^d_+$. Suppose that $y_d \ge 2^{-3}$. Then, by Theorem \ref{t:uhp} and Propositions \ref{p:green-lower-bound} and \ref{p:gfcnub}, we have $G(x_0,y) \asymp 1$. Thus by Theorem \ref{t:BHP}, \begin{align} \label{e:Gs1} G(x,y) \asymp G(x_0,y) (x_d/2^{-3})^{\alpha-1} \asymp x_d^{\alpha-1}. \end{align} Suppose that $y_d < 2^{-3}$. Then by Theorem \ref{t:BHP} and \eqref{e:Gs1}, \begin{align} \label{e:Gs2} G(x,y) \asymp G(x,y_0) (y_d/2^{-3})^{\alpha-1} \asymp x_d^{\alpha-1} y_d^{\alpha-1}. \end{align} \eqref{e:Gs1} and \eqref{e:Gs2} imply that \eqref{e:Green2} holds for $x_d \wedge y_d \le 2^{-3}$ and $|x-y|=1$. {\hfill $\Box$ \bigskip} \subsection{Estimates on potentials}\label{ss:lb} Recall that we have assumed $d > (\alpha+\beta_1+\beta_2)\wedge 2$.
Let $G^{B(w,R)\cap {\mathbb R}^d_+}(x,y)$ be the Green function of the process $Y$ killed upon exiting $B(w,R)\cap {\mathbb R}^d_+$, $w\in \partial {\mathbb R}^d_+$. For any $a>0$, let $B^+_a:=B(0, a)\cap {\mathbb R}^d_+$. Using Theorem \ref{t:Green} and the formula $$ G^{B^+_1}(y, z)=G(y, z)-{\mathbb E}_y[G(Y_{\tau_{B^+_1}}, z)], $$ the proof of the next result is standard; see, for example, \cite[Lemma 5.1]{KSV21}. \begin{lemma}\label{l:GB_1} For any $\varepsilon \in (0, 1)$ and $M>1$, there exists a constant $C>0$ such that for all $y,z \in B^+_{1-\varepsilon}$ with $|y-z| \le M(y_d\wedge z_d)$, $$ G^{B^+_1}(y, z)\ge C |y-z|^{-d+\alpha}. $$ \end{lemma} By Theorems \ref{t:BHP} and \ref{t:uhp} we have for any $r>0$ and $x\in {\mathbb R}^d_+$ with $x_d<r/2$, $$ {\mathbb P}_x(Y_{\tau_{D_{\widetilde x}(r,r)}} \in D_{\widetilde x}(r, 4r) \setminus D_{\widetilde x}(r, 3r)) \ge c x_d^{\alpha-1}. $$ Using this and Lemma \ref{l:GB_1}, the proofs of the next two lemmas are identical to those of \cite[Lemmas 5.2 and 5.3]{KSV21}. \begin{lemma}\label{p:GB_1} For every $\varepsilon \in (0, 1/4)$ and $M, N>1$, there exists a constant $C>0$ such that for all $x,z \in B^+_{1-\varepsilon}$ with $x_d \le z_d$ satisfying $x_d/N \le |x-z|\le M z_d $, it holds that $$ G^{B^+_1}(x, z)\ge Cx^{\alpha-1}_d|x-z|^{-d+1}. $$ \end{lemma} \begin{lemma}\label{l:GB_4} For every $\varepsilon \in (0, 1/4)$ and $M \ge 40/\varepsilon$, there exists a constant $C>0$ such that for all $x,z \in B^+_{1-\varepsilon}$ with $x_d \le z_d$ satisfying $|x-z|\ge M z_d $, it holds that $$ G^{B^+_1}(x, z)\ge C x^{\alpha-1}_dz^{\alpha-1}_d|x-z|^{-d-\alpha+2}. $$ \end{lemma} Combining the above results with scaling, we get \begin{thm}\label{t:GB} For any $\varepsilon \in (0, 1/4)$, there exists a constant $C>0$ such that for all $w \in \partial {\mathbb R}^d_+$, $R>0$ and $x,y \in B(w, (1-\varepsilon)R)\cap {\mathbb R}^d_+$, it holds that $$ G^{B(w, R)\cap {\mathbb R}^d_+}(x, y)\ge C \left(\frac{x_d}{|x-y|} \wedge 1 \right)^{\alpha-1}\left(\frac{y_d}{|x-y|} \wedge 1 \right)^{\alpha-1} \frac{1}{|x-y|^{d-\alpha}}. $$ \end{thm} \begin{prop}\label{p:bound-for-integral-new} For any $\widetilde{w} \in {\mathbb R}^{d-1}$, any Borel set $D$ satisfying $D_{\widetilde{w}}(R/2,R/2) \subset D \subset D_{\widetilde{w}}(R,R)$ and any $x=(\widetilde{w}, x_d)$ with $x_d \le R/10$, \begin{equation}\label{e::bound-for-integral-new} {\mathbb E}_x \int_0^{\tau_D}(Y_t^d)^{\gamma}\, dt = \int_D G^D(x,y)y_d^{\gamma}\, dy \asymp \begin{cases} R^{\gamma+1}x_d^{\alpha-1}, & \gamma>-1,\\ x_d^{\alpha-1}\log(R/x_d), & \gamma =-1, \\ x_d^{\alpha+\gamma}, &-{\alpha}< \gamma <-1, \end{cases} \end{equation} where the comparison constant is independent of $\widetilde{w} \in {\mathbb R}^{d-1}$, $D$, $R$ and $x$. \end{prop} \noindent{\bf Proof.} Without loss of generality we take $\widetilde{w}=\widetilde{0}$, and let $D$ be a Borel set satisfying $D(R/2,R/2) \subset D \subset D(R,R)$.
By Theorems \ref{t:Green} and \ref{t:GB}, we have that for all $x \in D$, \begin{align} &\int_D G^D(x,y)y_d^{\gamma}\, dy \le \int_D G(x,y)y_d^{\gamma}\, dy\nonumber\\ & \le c_1 \int_{D(R, R)} \left(\frac{x_d}{|x-y|} \wedge 1 \right)^{\alpha-1}\left(\frac{y_d}{|x-y|} \wedge 1 \right)^{\alpha-1} \frac{ y_d^{\gamma}dy}{|x-y|^{d-\alpha}} , \label{e:e:pu1} \end{align} and for $x=(\widetilde{0}, x_d)$ with $x_d \le R/10$, \begin{align} & \int_D G^D(x,y)y_d^{\gamma}\, dy \ge \int_{B^+_{R/2}} y_d^{\gamma} G^{B^+_{R/2}}(x, y)\, dy \ge c_2\int_{D(R/5, R/5)} y_d^{\gamma} G^{B^+_{R/2}}(x, y)\, dy\nonumber\\ &\ge c_3\int_{D(R/5, R/5)} \left(\frac{x_d}{|x-y|} \wedge 1 \right)^{\alpha-1}\left(\frac{y_d}{|x-y|} \wedge 1 \right)^{\alpha-1} \frac{ y_d^{\gamma}dy}{|x-y|^{d-\alpha}}. \label{e:e:pd1} \end{align} We now apply \cite[Lemma 3.3 and Theorem 3.4]{AGV} and their proofs to \eqref{e:e:pu1} and \eqref{e:e:pd1} and get \eqref{e::bound-for-integral-new}. {\hfill $\Box$ \bigskip} \begin{corollary} For all $x \in {\mathbb R}^d_+$, $$ {\mathbb E}_x\int_0^{\zeta} (Y_t^d)^{\gamma}\, dt = \int_{{\mathbb R}^d_+}G(x,y)y_d^{\gamma} dy \asymp \begin{cases} \infty, & \gamma \ge -1 \text{ or } \gamma \le -{\alpha},\\ x_d^{\alpha+\gamma}, & -\alpha< \gamma<-1. \end{cases} $$ In particular, for all $x \in {\mathbb R}^d_+$, ${\mathbb E}_x[\zeta]= \infty$. \end{corollary} \noindent{\bf Proof.} When $\gamma>-\alpha$, the result follows by letting $R \to \infty$ in Proposition \ref{p:bound-for-integral-new}. If $\gamma \le -\alpha$, then by Theorem \ref{t:Green}, for all $x \in {\mathbb R}^d_+$, \begin{align*} & \int_{{\mathbb R}^d_+}G(x,y)y_d^{\gamma} dy \ge c_1 \int_{D_{\widetilde{x}}(x_d, x_d/2)} \left( \frac{x_d}{|x-y|} \right)^{\alpha-1}\left(\frac{y_d}{|x-y|} \right)^{\alpha-1} \frac{ y_d^{\gamma}dy}{|x-y|^{d-\alpha}}\\ &= c_1x_d^{\alpha-1} \int_{D_{\widetilde{x}}(x_d, x_d/2)} \frac{ y_d^{\gamma+\alpha-1}dy}{|x-y|^{d+\alpha-2}} \ge c_2x_d^{1-d} \int_{D_{\widetilde{x}}(x_d, x_d/2)} y_d^{\gamma+\alpha-1}dy=\infty, \end{align*} where the final integral diverges because $\gamma+\alpha-1\le -1$, so that $y_d^{\gamma+\alpha-1}$ is not integrable near $y_d=0$. {\hfill $\Box$ \bigskip}
\section{Introduction} \label{sec:intro} Cosmological-perturbation theory (CPT) is a pillar of our modern understanding of cosmology. It consists in describing small deviations from highly-symmetric background space-times by means of perturbative techniques, while accounting for the fundamental invariance of general relativity under changes of coordinates. When CPT is developed around a homogeneous and isotropic background, an important simplification may occur at large scales (\textsl{i.e.~} on distances larger than the length scale associated with the universe expansion -- or contraction -- rate) if the universe can be described by an ensemble of independent, locally homogeneous and isotropic patches. This picture is called the separate-universe approach~\cite{Salopek:1990jq,Sasaki:1995aw,Wands:2000dp,PhysRevD.68.103515,PhysRevD.68.123518,Lyth:2005fi, Tanaka:2021dww} and is also known as the quasi-isotropic picture \cite{Lifshitz:1960,Starobinsky:1982mr,PhysRevD.49.2759,Khalatnikov_2002}. When applicable, it implies that studying the large-scale cosmological perturbations boils down to solving the homogeneous and isotropic problem with different initial conditions, which allows one to track only a subset of the relevant degrees of freedom. This represents a very substantial technical simplification. Another simplification that occurs on large scales is when the background evolution features a dynamical phase-space attractor. This is for instance the case in inflating backgrounds, if inflation proceeds in the so-called slow-roll regime. In that case, if the separate-universe approach can be used, perturbations are subject to the same attractor, which makes them collapse on a phase-space subset (the dimension of which equals the number of matter fields), removing the dependence on initial field velocities. This further reduction of the effective phase-space makes the use of the Lagrangian framework convenient, which explains why most analyses of the separate-universe approach and of the conditions for its validity have been carried out in the Lagrangian framework. However, there are situations in which the background dynamics is not endowed with such an attractor, hence the full phase-space structure must be considered. For instance, this is the case if inflation proceeds in the so-called ultra-slow-roll regime (which may or may not be stable, see \Refc{Pattison:2018bct}, but which always retains dependence on the initial field velocities), or for some contracting cosmologies (see \textsl{e.g.~} \Refs{Miranda:2019ara, Grain:2020wro}). In particular, bouncing cosmologies, where the classical expansion is preceded by a contracting phase and a regular bounce, are typical examples of alternatives to inflation~\cite{Wands:1998yp,Finelli:2001sr} arising in various contexts such as string theory or loop quantum gravity, see \textsl{e.g.~} \Refs{Khoury:2001wf,Barrau:2013ula,Brandenberger:2016vhg,Agullo:2016tjh}. In such situations it is important to be able to describe and compare CPT~and the separate-universe approach in the phase space, \textsl{i.e.~} using the Hamiltonian formalism. While CPT~has already been investigated in this framework, see \textsl{e.g.~} \Refs{Langlois:1994ec, Domenech:2017ems}, the aim of the present work is to discuss the Hamiltonian version of the separate-universe approach. 
Our goal is both to construct the separate-universe formalism in the Hamiltonian picture, and to establish the conditions under which it properly describes the full phase-space properties of cosmological perturbations. Note that a phase-space formulation of cosmological perturbations (either in CPT or in the separate-universe approach) is also crucial when it comes to describing them at the quantum-mechanical level. For instance, as shown in \Refc{Grain:2019vnq}, the choice of an initial vacuum state is intimately related to the choice of a phase-space parametrisation. In slow-roll inflation, a large class of parametrisations leads to the same vacuum state, namely the Bunch-Davies vacuum, but the situation is less clear in general and makes a phase-space formulation appropriate. Let us mention that the present work lays the groundwork for upcoming articles in which we will further investigate the gauge formalism (\textsl{i.e.~} transformation under changes of coordinates, the gauge-fixing procedure, and the construction of a gauge-invariant parametrisation of the phase space) in the Hamiltonian framework, both in full CPT and in the separate-universe approach. In this article, we will discuss the gauge-fixing procedure in the most commonly-used gauges, since it plays an important role in comparing the separate-universe formalism with CPT, but one should bear in mind that this discussion will be complemented by a more systematic analysis in follow-up publications. Another motivation behind this analysis is the so-called stochastic-inflation formalism~\cite{Starobinsky:1982ee,Starobinsky:1986fx}, which heavily relies on the separate-universe framework. In this approach, quantum cosmological fluctuations act as a stochastic noise on the large-scale evolution as they exit the Hubble radius (either during inflation or during a slowly contracting era). The stochastic formalism has been extensively used in the context of slow-roll inflation, where it has been shown to be in very good agreement with predictions from quantum-field-theoretic calculations (see \textsl{e.g.~} \Refs{Starobinsky:1994bd,Tsamis:2005hd,Finelli:2010sh,Garbrecht:2013coa,Onemli:2015pma,Burgess:2015ajz,Kamenshchik:2021tjh}) and to preserve the attractor nature of the slow-roll regime~\cite{Grain:2017dqa}. Combined with the $\delta N$ formalism, where curvature perturbations on large scales are related to the fluctuations in the local amount of expansion, it gives rise to the stochastic-$\delta N$ formalism~\cite{Vennin:2015hra, Fujita:2013cna}, which allows one to incorporate quantum backreaction in the calculation of the density field of the universe. This plays an important role notably in the analysis of primordial black hole production, which usually requires a phase of strong stochastic effects~\cite{Pattison:2017mbe, Biagetti:2018pjj, Ezquiaga:2019ftu}. Given that primordial black holes form in models where deviations from the slow-roll attractor are also observed (in particular along the ultra-slow-roll regime), the stochastic-$\delta N$ formalism has been recently extended beyond the slow-roll setup~\cite{Nakao:1988yi, PhysRevD.46.2408, Rigopoulos:2005xx, Tolley:2008na, Weenink:2011dd, Grain:2017dqa, Firouzjahi:2018vet, Ezquiaga:2018gbw, Pattison:2019hef, Pattison:2021oen}, where the full phase-space structure of the fields needs to be resolved. The present analysis will therefore confirm the validity of this approach by studying the separate-universe approach in the Hamiltonian framework.
In practice, \Refc{Pattison:2019hef} pointed out that the derivation of the Langevin equations of stochastic inflation requires performing a gauge transformation, from the spatially-flat gauge where the free scalar-field correlators are computed, to the uniform-expansion gauge where the stochastic noise needs to be expressed. Our goal is also to derive the tools required to perform such a transformation on generic grounds, in the full phase space of the separate-universe system. As mentioned above, another situation where a dynamical attractor is not always available is the case of slowly contracting cosmologies, so the present work can be seen as a prerequisite for the development of a ``stochastic-contraction'' formalism~\cite{Miranda:2019ara, Grain:2020wro}, which will be the topic of future works. The paper is organised as follows. In \Sec{sec:CosmoHam}, we briefly review the basics of the phase-space (or Hamiltonian) formulation of general relativity, and apply it to the case of homogeneous and isotropic cosmologies. We then incorporate cosmological perturbations in the formalism, using CPT~in \Sec{sec:cosmopert} and with the separate-universe setup in \Sec{sec:SepUniv}. We compare the two approaches in \Sec{sec:suvspert}, both at the level of the phase-space dynamics and of the gauge-fixing procedure. Our results are summarised and further discussed in \Sec{sec:conclusion}, and we end the paper with four appendices to which various technical aspects of the calculations presented in the main text are deferred. \section{Cosmology in the Hamiltonian formalism} \label{sec:CosmoHam} \subsection{Hamiltonian description of general relativity} \label{sec:Hamiltonian:GR} Let us start by reviewing the basics of the Hamiltonian formulation of general relativity (see \textsl{e.g.~} \Refc{thieman_book} for a detailed mathematical introduction and \Refc{Langlois:1994ec} for an application to the cosmological context). Since our work takes place in the context of primordial cosmology, for explicitness, we consider the case where the matter content of the universe is given by a single scalar field, $\phi$, minimally coupled to gravity in a four-dimensional curved space-time with metric $g_{\mu\nu}$. The total action then reads \begin{eqnarray} \label{eq:action} S=\displaystyle\int\mathrm{d}^4x\sqrt{-g}\left[\frac{M_\usssPl^2}{2}R-\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi-V(\phi)\right], \end{eqnarray} where $g$ is the determinant of $g_{\mu\nu}$, $R$ is the four-dimensional Ricci scalar, $V(\phi)$ is the scalar field potential, and $M_\usssPl$ is the reduced Planck mass. The Hamiltonian formulation is obtained by foliating the four-dimensional space-time into a set of three-dimensional space-like hypersurfaces, $\Sigma_\tau$, where the foliation is defined by the lapse function, $N(\tau,\vec{x})$, and the shift vector $N^i(\tau,\vec{x})$. Here, $\tau$ stands for the time variable and $\vec{x}$ for the spatial coordinates on the hypersurfaces. The line element is then expressed in the ADM form~\cite{Arnowitt:1962hi} \begin{eqnarray} \label{eq:metric:ADM} \mathrm{d} s^2=-N^2(\tau,\vec{x})\mathrm{d}\tau^2+\gamma_{ij}(\tau,\vec{x})\left[\mathrm{d} x^i+N^i(\tau,\vec{x})\mathrm{d}\tau\right]\left[\mathrm{d} x^j+N^j(\tau,\vec{x})\mathrm{d}\tau\right]. \end{eqnarray} In the above expression, $\gamma_{ij}$ is the induced metric on the spatial hypersurfaces $\Sigma_\tau$. Indices are lowered by $\gamma_{ij}$ and raised by its inverse $\gamma^{ij}$.
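Before moving on, it is worth recording the standard identity $\sqrt{-g}=N\sqrt{\gamma}$, which relates the four-dimensional volume element appearing in \Eq{eq:action} to the ADM variables of \Eq{eq:metric:ADM}. As a cross-check, the following minimal \texttt{sympy} sketch (our own illustration; the symbol names are ours and carry no other meaning) assembles $g_{\mu\nu}$ from $(N,N^i,\gamma_{ij})$ and verifies $\det g_{\mu\nu}=-N^2\det\gamma_{ij}$:
\begin{verbatim}
import sympy as sp

# ADM variables: lapse N, shift N^i, induced metric gamma_ij
N = sp.symbols('N', positive=True)
N1, N2, N3 = sp.symbols('N1 N2 N3')
Ni = sp.Matrix([N1, N2, N3])
g11, g12, g13, g22, g23, g33 = sp.symbols('g11 g12 g13 g22 g23 g33')
gam = sp.Matrix([[g11, g12, g13],
                 [g12, g22, g23],
                 [g13, g23, g33]])

# Line element: g_00 = -N^2 + N_k N^k, g_0i = N_i, g_ij = gamma_ij
Nd = gam * Ni                       # N_i = gamma_ij N^j
g = sp.zeros(4, 4)
g[0, 0] = -N**2 + (Ni.T * gam * Ni)[0]
for i in range(3):
    g[0, i + 1] = g[i + 1, 0] = Nd[i]
    for j in range(3):
        g[i + 1, j + 1] = gam[i, j]

# Standard ADM identity: det g = -N^2 det gamma, i.e. sqrt(-g) = N sqrt(gamma)
print(sp.expand(g.det() + N**2 * gam.det()))    # -> 0
\end{verbatim}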
The canonical variables for the scalar field are $\phi$ and its conjugate momentum $\pi_\phi:=\delta S/\delta \dot\phi$, where we have introduced the notation $\dot{f}:=\mathrm{d} f/\mathrm{d}\tau$ for the time derivative, and $\delta S/\delta\dot\phi$ is the functional derivative. The associated Poisson bracket is $\left\{\phi(\tau,\vec{x}),\pi_\phi(\tau,\vec{y})\right\}=\delta^3(\vec{x}-\vec{y})$. Similarly for the gravitational sector, the canonical variables are the induced metric, $\gamma_{ij}$, and its associated momentum $\pi^{ij}:=\delta S/\delta\dot\gamma_{ij}$, with the Poisson bracket reading $\left\{\gamma_{ij}(\tau,\vec{x}),\pi^{mn}(\tau,\vec{y})\right\}=\frac{1}{2}\left(\delta^m_i\delta^n_j+\delta^n_i\delta^m_j\right)\delta^3(\vec{x}-\vec{y})$. Here, $\delta^3(\vec{x})$ stands for the Dirac distribution while $\delta^i_j$ is the Kronecker symbol. Since the time derivatives of the lapse function and of the shift vector do not appear in the action~\eqref{eq:action}, they are Lagrange multipliers, corresponding to the freedom in the choice of the coordinate system. The dynamics of the gravitational and scalar-field degrees of freedom is then derived from the following total Hamiltonian \begin{eqnarray} \label{eq:full:Hamiltonian} C\left[N,N^i\right]=\displaystyle\int\mathrm{d}^3\vec{x}\left[N\left(\mathcal{S}^{(\mathrm{G})}+\mathcal{S}^{(\phi)}\right)+N^i\left(\mathcal{D}^{(\mathrm{G})}_i+\mathcal{D}^{(\phi)}_i\right)\right], \end{eqnarray} which is obtained from \Eq{eq:action} by performing a Legendre transform, and where $\mathcal{S}:=\mathcal{S}^{(\mathrm{G})}+\mathcal{S}^{(\phi)}$ is the scalar (or energy) constraint and $\mathcal{D}_i:=\mathcal{D}^{(\mathrm{G})}_i+\mathcal{D}^{(\phi)}_i$ is the vector (or momentum/diffeomorphism) constraint, which both receive contributions from the gravitational sector and from the scalar-field sector.\footnote{The term ``smeared constraint" often appears in the literature and refers to the spatial integral of the corresponding constraint and its associated Lagrange multiplier (for instance, $\int\mathrm{d}^3 \vec{x} N \mathcal{S}$ is the smeared scalar constraint).\label{footnote:smeared:constraints}} As functions of the canonical variables, these constraints are given by \begin{eqnarray} \mathcal{S}^{(\mathrm{G})} & = &\frac{2}{M_\usssPl^2\sqrt{\gamma}}\left(\pi^{ij}\pi_{ij}-\frac{\pi^2}{2}\right)-\frac{M_\usssPl^2\sqrt{\gamma}}{2}\,\mathcal{R}(\gamma_{ij}), \label{eq:scgg} \\ \mathcal{D}^{(\mathrm{G})}_i & = & -2\partial_m\left(\gamma_{ij}\pi^{jm}\right)+\pi^{mn}\partial_i\gamma_{mn}\, ,\label{eq:dcgg} \end{eqnarray} where $\gamma$ stands for the determinant of $\gamma_{ij}$, $\pi:=\gamma_{ij}\pi^{ij}$ is the trace of the gravitational momentum, and \begin{eqnarray} \mathcal{S}^{(\phi)} & = & \frac{1}{2\sqrt{\gamma}}\pi^2_\phi+\frac{\sqrt{\gamma}}{2}\gamma^{ij}\partial_i\phi\,\partial_j\phi+\sqrt{\gamma}V(\phi), \label{eq:scsg} \\ \mathcal{D}^{(\phi)}_i& =& \pi_\phi\partial_i\phi\, .\label{eq:dcsg} \end{eqnarray} The ``gravitational potential term'' is given by the three-dimensional Ricci scalar, $\mathcal{R}(\gamma_{ij})$, associated to the induced metric on the spatial hypersurfaces $\Sigma_\tau$. 
The equation of motion for any function $F$ of the phase-space variables is thus obtained using the full Poisson bracket \begin{eqnarray} \label{eq:PoissonBracket} \left\{F,G\right\}=\displaystyle\int\mathrm{d}^3x\left[\left(\frac{\delta F}{\delta\gamma_{ij}}\frac{\delta G}{\delta \pi^{ij}}-\frac{\delta G}{\delta\gamma_{ij}}\frac{\delta F}{\delta \pi^{ij}}\right)+\left(\frac{\delta F}{\delta\phi}\frac{\delta G}{\delta \pi_\phi}-\frac{\delta G}{\delta\phi}\frac{\delta F}{\delta \pi_\phi}\right)\right] \end{eqnarray} and the above total Hamiltonian, \textsl{i.e.~} \begin{eqnarray} \dot{F}(\phi,\pi_\phi;\gamma_{ij},\pi^{mn})=\left\{F(\phi,\pi_\phi;\gamma_{ij},\pi^{mn}),C\left[N,N^i\right]\right\}. \label{eq:HamEqGen} \end{eqnarray} In addition, the dynamics is constrained to lie on the phase-space surface where both the scalar and the diffeomorphism constraints vanish, \textsl{i.e.~} $\mathcal{S}^{(\mathrm{G})}+\mathcal{S}^{(\phi)}=0$ and $\mathcal{D}^{(\mathrm{G})}_i+\mathcal{D}^{(\phi)}_i=0$. This is so because extremisation of the action has to hold for any arbitrary choice of the lapse function and the shift vector, appearing as Lagrange multipliers in the Hamiltonian. Furthermore, one can show that the Poisson bracket between constraints yields only combinations of the same constraints, \textsl{i.e.~} these are ``first-class'' constraints in Dirac's terminology. As a consequence, the constrained surface in the phase space is preserved through the dynamical evolution generated by the total Hamiltonian.\footnote{This holds by virtue of the contracted Bianchi identities. Indeed, one can rewrite the scalar and diffeomorphism constraints in terms of the Einstein tensor $G_{\mu\nu}$ and the energy momentum tensor $T_{\mu\nu}$ as \cite{Arnowitt:1962hi}: \begin{eqnarray} \mathcal{S}^{(G)} + \mathcal{S}^{(\phi)} & = & G^0_{0} - \frac{T^0_{\,0}}{M_\usssPl^2} , \\ \mathcal{D}^{(G)}_i + \mathcal{D}^{(\phi)}_i & = & G^0_{i} - \frac{T^0_{\,i}}{M_\usssPl^2} \,. \end{eqnarray} In terms of the covariant derivative $\nabla$, the contracted Bianchi identities read $\nabla_\mu (G^\mu_\nu - T^\mu_\nu/M_\usssPl^2)=0$. Under the conditions that the spatial part of the Einstein equations holds $(G_{ij}- T_{ij}/M_\usssPl^2=0)$ and that the diffeomorphism constraints are initially satisfied $(G^0_{\,i} - T^0_{\,i}/M_\usssPl^2=0)$, the Bianchi identities reduce to $\nabla_0 (G^0_{\, \nu}- T^0_{\,\nu}/M_\usssPl^2)=0$. Thus they impose the time invariance of the constrained surface (see part 3.1 of \Refc{Bojowald:2010qpa} for further details).} The full dynamics in the Hamiltonian framework is thus given by four dynamical equations, obtained by applying \Eq{eq:HamEqGen} to the phase-space variables $(\phi,\pi_\phi;\gamma_{ij},\pi^{mn})$, plus four constraint equations (one from the scalar constraint and three from the diffeomorphism constraint). The dynamical equations for the gravitational sector are rather involved (in particular due to the complexity of $\mathcal{R}$ as a function of the induced metric components) and we will not report them here; see \Refc{thieman_book} for explicit expressions [and \Eq{eq:gammaijdot} for $\dot{\gamma}_{ij}$]. For the scalar-field sector, they take the simple form \begin{eqnarray} \dot\phi & =& \frac{N}{\sqrt{\gamma}}\pi_\phi+N^i\partial_i\phi, \\ \dot\pi_\phi & =& -N\sqrt{\gamma}V_{,\phi}+\partial_i\left(N\sqrt{\gamma}\gamma^{ij}\partial_j\phi\right)+\partial_i\left(N^i \pi_\phi\right), \end{eqnarray} where $V_{,\phi}:=\partial V/\partial\phi$.
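The functional derivatives behind these two equations are easily mechanised. As an illustration, the following \texttt{sympy} sketch (our own; for simplicity it is restricted to a flat unit induced metric $\gamma_{ij}=\delta_{ij}$, so that $\sqrt{\gamma}=1$ and $\gamma^{ij}=\delta^{ij}$) implements the Euler-Lagrange form of the functional derivative, $\delta C/\delta f=\partial\mathcal{H}/\partial f-\partial_i\left[\partial\mathcal{H}/\partial(\partial_i f)\right]$, for the Hamiltonian density $\mathcal{H}=N\mathcal{S}^{(\phi)}+N^i\mathcal{D}^{(\phi)}_i$ of \Eqs{eq:scsg} and~\eqref{eq:dcsg}, and recovers the right-hand sides of the scalar-field equations above:
\begin{verbatim}
import sympy as sp

X = sp.symbols('x1 x2 x3')
phi  = sp.Function('phi')(*X)                 # scalar field
piph = sp.Function('pi_phi')(*X)              # conjugate momentum
N    = sp.Function('N')(*X)                   # lapse
Ni   = [sp.Function('N%d' % i)(*X) for i in (1, 2, 3)]   # shift
V    = sp.Function('V')

# Hamiltonian density for gamma_ij = delta_ij (so sqrt(gamma) = 1)
grad = [sp.diff(phi, xi) for xi in X]
calS = piph**2 / 2 + sum(gi**2 for gi in grad) / 2 + V(phi)
calD = sum(Ni[i] * piph * grad[i] for i in range(3))
H = N * calS + calD

def func_deriv(H, f):
    """delta H / delta f, for H depending on f and its first derivatives."""
    res = sp.diff(H, f)
    for xi in X:
        res -= sp.diff(sp.diff(H, sp.diff(f, xi)), xi)
    return sp.expand(res)

print(func_deriv(H, piph))   # phi_dot    =  N pi_phi + N^i d_i phi
print(-func_deriv(H, phi))   # pi_phi_dot = -N V' + d_i(N d_i phi) + d_i(N^i pi_phi)
\end{verbatim}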
\subsection{Homogeneous and isotropic cosmologies} \label{ssec:bck} We now apply the Hamiltonian formalism to homogeneous and isotropic cosmologies. More precisely, we consider Friedmann-Lema\^itre-Robertson-Walker (FLRW) space-times, where the metric reduces to \begin{eqnarray} \mathrm{d} s^2=-N^2(\tau)\mathrm{d}\tau^2+p(\tau)\widetilde{\gamma}_{ij}\mathrm{d} x^i\mathrm{d} x^j\, . \end{eqnarray} Here, the lapse function, $N$, and $p:=a^2$, where $a$ is the scale factor, depend on time only, and we have introduced the three-dimensional time-independent metric $\widetilde{\gamma}_{ij}$ and its inverse $\widetilde{\gamma}^{ij}$, such that $\widetilde{\gamma}^{ij}\widetilde{\gamma}_{jm}=\delta^i_m$. In this setting, a given choice for the lapse function corresponds to a given choice for the time coordinate. For instance, setting $N=1$ is equivalent to working with the cosmic time (denoted $t$ hereafter), $N=a$ corresponds to conformal time (denoted $\eta$ hereafter), and $N=1/H$ (where $H=\mathrm{d} \ln(a)/\mathrm{d} t$ is the Hubble parameter) means working with the number of e-folds, $\ln(a)$ (denoted $\mathcal{N}$ in the following), as the time coordinate. When the time coordinate is left unspecified (\textsl{i.e.~} when the lapse function is left free), we will use the generic notation $\tau$. Note that homogeneity imposes that the shift vector $N^i$ depends on time only, and since a uniform vector field provides a preferred direction unless it vanishes, isotropy further imposes that $N^i=0$. For FLRW space-times, the canonical variables for the gravitational sector can be reduced to a single scalar, $p(\tau)$, and its conjugate momentum, $\pi_p(\tau)$. Since $\gamma_{ij}(\tau)=p(\tau)\widetilde{\gamma}_{ij}$, the link between $\pi^{ij}$ and $\pi_p$ follows from noticing that the action $\tilde{S}$ for the homogeneous and isotropic problem can be obtained via the substitution $\gamma_{ij}=p(\tau)\widetilde{\gamma}_{ij}$, \textsl{i.e.~} \begin{eqnarray} \tilde{S}[\phi,\dot{\phi},p,\dot{p}] =\left. S\left[\phi, \dot{\phi},\gamma_{ij},\dot{\gamma}_{ij}\right]\right\vert_{\gamma_{ij}=p(\tau)\widetilde{\gamma}_{ij}} \,. \end{eqnarray} The momentum conjugate to $p$ is thus given by \begin{eqnarray} \label{eq:pi_p:pi_ij} \pi_p= \frac{\delta \tilde{S}}{\delta\dot{p}}=\left. \frac{\delta\dot{\gamma}_{ij}}{\delta\dot{p}} \frac{\delta S}{\delta\dot{\gamma}_{ij}}\right\vert_{\gamma_{ij}=p(\tau)\widetilde{\gamma}_{ij}} = \widetilde{\gamma}_{ij} \left. \pi^{ij} \right\vert_{\gamma_{ij}=p(\tau)\widetilde{\gamma}_{ij}}\, , \end{eqnarray} which can be inverted as \begin{eqnarray} \label{eq:pi_ij:pi_p} \left. \pi^{ij} \right\vert_{\gamma_{ij}=p(\tau)\widetilde{\gamma}_{ij}} = \frac{\pi_p}{3}\widetilde{\gamma}^{ij}\,, \end{eqnarray} where we use that $\pi^{ij}$ is proportional to $\widetilde{\gamma}^{ij}$ because of isotropy.
Since $(p,\pi_p)$ forms a set of canonically conjugate variables, one can introduce a new Poisson bracket with respect to these variables, which will be denoted with the same brackets for the sake of simplicity.\footnote{The canonical nature of the couple $(p,\pi_p)$ can be further checked by noticing that $\gamma_{ij}=p\widetilde{\gamma}_{ij}$ can be inverted as $p=\gamma_{ij}\widetilde{\gamma}^{ij}/3$, so together with \Eq{eq:pi_p:pi_ij}, \Eq{eq:PoissonBracket} gives rise to $\{ p,\pi_p\} =\widetilde{\gamma}^{ij}\widetilde{\gamma}_{ij}/3=1 $.} For the matter sector, homogeneity imposes that $\phi$ and $\pi_\phi$ depend on time only, so phase space can be parametrised by the time-dependent variables $(\phi,\pi_\phi;p,\pi_p)$, in terms of which the scalar constraints reduce to \begin{eqnarray} \mathcal{S}^{(\mathrm{G})} & = &-\frac{\pi_p^2\sqrt{p}}{3M_\usssPl^2}\, , \\ \mathcal{S}^{(\phi)} & = & \frac{\pi^2_\phi}{2p^{3/2}}+p^{3/2}V(\phi) \, . \end{eqnarray} Let us note that thanks to homogeneity, the two diffeomorphism constraints $\mathcal{D}_i^{(\mathrm{G})}$ and $\mathcal{D}_i^{(\phi)}$ identically vanish on this reduced phase space. An alternative description of the gravitational sector is through the set of canonical variables \begin{eqnarray} v& := &p^{3/2}\, , \\ \theta &:= &\frac{2\pi_p}{3\sqrt{p}}\, , \label{eq:theta:def} \end{eqnarray} where $v=a^3$ is the volume variable and $\theta$ is related to the expansion rate of the hypersurfaces $\Sigma_\tau$ (see \App{app:expansion}). It is straightforward to check that $\left\{v,\theta\right\}=1$, and the scalar constraints are now given by \begin{eqnarray} \mathcal{S}^{(\mathrm{G})} & = &-\frac{3v\theta^2}{4M_\usssPl^2}\, , \label{eq:BckgConsGrav}\\ \mathcal{S}^{(\phi)} & = & \frac{\pi^2_\phi}{2v}+vV(\phi)\, . \label{eq:BckgConsPhi} \end{eqnarray} The main advantage of these variables is to remove all the $\sqrt{p}$ dependence. In terms of $v$ and $\theta$, the induced metric and its canonical momentum are $\gamma_{ij}(\tau)=v^{2/3}(\tau)\widetilde{\gamma}_{ij}$ and $\pi^{ij}(\tau)=\frac{1}{2}v^{1/3}(\tau)\theta(\tau)\widetilde{\gamma}^{ij}$. Let us now derive the constraint and dynamical equations using the variables $(\phi,\pi_\phi;v,\theta)$. The scalar constraint equation $\mathcal{S}=0$, known as the Friedmann equation, reads \begin{eqnarray} \theta^2=\frac{4M_\usssPl^2}{3}\left[\frac{\pi_\phi^2}{2v^2}+V(\phi)\right]. \label{eq:ConstHom} \end{eqnarray} The equations of motion for the gravitational sector are then given by \begin{eqnarray} \dot{v} &= &-\frac{3N}{2M_\usssPl^2} v\theta, \label{eq:Dotv} \\ \dot{\theta} & =& N\left[\frac{3\theta^2}{4M_\usssPl^2}+\frac{\pi^2_\phi}{2v^2}-V(\phi)\right]. \label{eq:Dot:theta} \end{eqnarray} The first of these dynamical equations exhibits the relation between $\theta$ and the expansion rate given by $\dot{v}/v$. The second equation, known as the Raychaudhuri equation, can be further simplified using the constraint equation~\eqref{eq:ConstHom}, and one obtains \begin{eqnarray} \label{eq:Raychaudhuri} \dot\theta=N\left(\frac{\pi_\phi}{v}\right)^2. \label{eq:ThetaPi} \end{eqnarray} For the scalar-field sector, the dynamics reads \begin{eqnarray} \dot\phi & =& N \frac{\pi_\phi}{v} \, , \label{eq:DotPhiPi} \\ \dot\pi_\phi & = & -NvV_{,\phi}\, . 
\label{eq:DotPi} \end{eqnarray} Let us note that combining \Eqs{eq:Raychaudhuri} and~\eqref{eq:DotPhiPi} leads to $(\dot\phi)^2=N\dot\theta$, which can be viewed as the second of the Hamilton-Jacobi equations as introduced \textsl{e.g.~} in \Refs{Salopek:1990jq,Liddle:1994dx}. Finally, let us see how the usual form of the Friedmann and Raychaudhuri equations can be recovered. Recalling that $v=a^3$, the Hubble parameter is given by $H_N=\dot{v}/(3v)=-N\theta/(2M_\usssPl^2)$, where we have generalised its definition to an arbitrary lapse function $N$, and where the second expression comes from \Eq{eq:Dotv}. Upon introducing $\rho=\dot{\phi}^2/(2N^2)+V(\phi)$ and $P=\dot{\phi}^2/(2N^2)-V(\phi)$, the energy density and the pressure associated with the scalar field, respectively, the Friedmann equation~\eqref{eq:ConstHom} takes the usual form \begin{eqnarray} \left(\frac{H_N}{N}\right)^2=\frac{\rho}{3M_\usssPl^2}\, , \label{eq:Friedmann:usual} \end{eqnarray} where we have used \Eq{eq:DotPhiPi} to relate $\dot{\phi}$ and $\pi_\phi$. For the Raychaudhuri equation, by combining the second Hamilton-Jacobi equation $(\dot{\phi})^2=N\dot{\theta}$ and the relation $H_N=-N\theta/(2M_\usssPl^2)$ derived above, one obtains the usual form, generalised to an arbitrary lapse, \begin{eqnarray} \label{eq:Raychaudhury} \frac{1}{N}\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\frac{H_N}{N}\right) = -\frac{\rho+P}{2M_\usssPl^2} \, , \end{eqnarray} which reduces to $\dot{H}_N/N^2=-(\rho+P)/(2M_\usssPl^2)$ when the lapse is constant. The Klein-Gordon equation for the scalar field can also be obtained by differentiating \Eq{eq:DotPhiPi} with respect to time, and further using \Eqs{eq:Dotv},~\eqref{eq:DotPi} and the relation $H_N=-N\theta/(2M_\usssPl^2)$, leading to \begin{eqnarray} \ddot{\phi}+\left(3H_N - \frac{\dot{N}}{N} \right)\dot{\phi}+N^2 V_{,\phi}=0\, . \end{eqnarray} \section{Cosmological perturbations} \label{sec:cosmopert} Let us now study cosmological perturbations evolving on the homogeneous and isotropic background described in \Sec{sec:CosmoHam}. This can be either done in the Lagrangian formalism, as in \Refs{Mukhanov:1990me,Malik:2008im}, or in the Hamiltonian formalism, as in \Refc{Langlois:1994ec}. In both approaches, working in Fourier space is convenient since, owing to the background invariance under spatial translations, different Fourier modes decouple at leading order in perturbation theory. In practice, any tensor field $T_{i\cdots j}(\tau,\vec{x})$ can be Fourier transformed on the spatial hypersurfaces $\Sigma_\tau$ according to \begin{eqnarray} T_{i\cdots j}(\tau,\vec{k})=\displaystyle\int\frac{\mathrm{d}^3 \vec{x}}{(2\pi)^{3/2}}\,T_{i\cdots j}(\tau,\vec{x})\,e^{-i\vec{k}\cdot\vec{x}}\, , \label{eq:Fourier} \end{eqnarray} where $\vec{k}\cdot\vec{x}$ is the scalar product $k_ix^i$, and the inverse transform is given by \begin{eqnarray} T_{i\cdots j}(\tau,\vec{x})=\displaystyle\int\frac{\mathrm{d}^3\vec{k}}{(2\pi)^{3/2}}\,T_{i\cdots j}(\tau,\vec{k})\,e^{i\vec{k}\cdot\vec{x}}\, . \end{eqnarray} Note that the wavevector $\vec{k}$ is defined with respect to the flat metric on spatial hypersurfaces $\Sigma_\tau$, \textsl{i.e.~} it is a {\it comoving} wavevector. As a consequence, its indices are raised and lowered with the metric $\widetilde{\gamma}_{ij}$, so for instance $k^2=k_i k^i =\widetilde{\gamma}^{ij}k_ik_j=\widetilde{\gamma}_{ij}k^ik^j$. In practice, we will be considering real-valued tensor fields, for which the Fourier coefficients must satisfy \begin{eqnarray} T^\star_{i\cdots j}(\tau,\vec{k})=T_{i\cdots j}(\tau,-\vec{k})\, , \label{eq:reality:condition} \end{eqnarray} where a star denotes the complex conjugate.
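As an aside, the background dynamics derived in \Sec{ssec:bck} lends itself to a quick computer-algebra cross-check, which we record here before turning to the perturbations themselves. The following minimal \texttt{sympy} sketch (our own illustration; the variable names are ours) reproduces \Eqs{eq:Dotv}, \eqref{eq:Dot:theta}, \eqref{eq:DotPhiPi} and~\eqref{eq:DotPi} by bracketing the canonical variables with the smeared constraint $N(\mathcal{S}^{(\mathrm{G})}+\mathcal{S}^{(\phi)})$ built from \Eqs{eq:BckgConsGrav} and~\eqref{eq:BckgConsPhi}:
\begin{verbatim}
import sympy as sp

v, theta, phi, piph = sp.symbols('v theta phi pi_phi')
N, Mpl = sp.symbols('N M_Pl', positive=True)
V = sp.Function('V')(phi)

S_G   = -sp.Rational(3, 4) * v * theta**2 / Mpl**2   # gravitational constraint
S_phi = piph**2 / (2 * v) + v * V                    # scalar-field constraint
C = N * (S_G + S_phi)                                # smeared scalar constraint

def pb(F, G):
    # Poisson bracket for the canonical pairs (v, theta) and (phi, pi_phi)
    return sum(sp.diff(F, q) * sp.diff(G, p) - sp.diff(G, q) * sp.diff(F, p)
               for q, p in ((v, theta), (phi, piph)))

print(sp.simplify(pb(v, C)))      # -> -3*N*v*theta/(2*M_Pl**2)           [v dot]
print(sp.simplify(pb(theta, C)))  # -> N*(3*theta**2/(4*M_Pl**2)
                                  #      + pi_phi**2/(2*v**2) - V(phi))   [theta dot]
print(sp.simplify(pb(phi, C)))    # -> N*pi_phi/v                         [phi dot]
print(sp.simplify(pb(piph, C)))   # -> -N*v*Derivative(V(phi), phi)       [pi_phi dot]
\end{verbatim}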
Hereafter, the condition~\eqref{eq:reality:condition} will be referred to as the reality condition. \subsection{Scalar degrees of freedom} \label{ssec:dofpert} In general, cosmological perturbations can be expanded into scalar, vector and tensor degrees of freedom (this is the so-called SVT decomposition~\cite{Lifshitz:1945du}). In the following we will focus on scalar perturbations, since they are the main focus of the separate-universe approach, and given that vector and tensor perturbations can be dealt with in a similar way. The lapse function $N$ and the variables describing the scalar field sector, $\phi$ and $\pi_\phi$, are scalar quantities, and so are their perturbations. They can be written as \begin{eqnarray} \delta N(\tau,\vec{x})&:= &N(\tau,\vec{x}) - N(\tau) \, , \label{eq:DefPerturbations1}\\ \delta\phi(\tau,\vec{x}) &:= & \phi(\tau,\vec{x}) -\phi(\tau) \, , \label{eq:DefPerturbations2} \\ \delta\pi_\phi(\tau,\vec{x})&:=&\pi_\phi(\tau,\vec{x}) - \pi_\phi(\tau) \, , \label{eq:DefPerturbations3} \end{eqnarray} where the functions $N(\tau)$, $\phi(\tau)$ and $\pi_\phi(\tau)$ are solutions to the homogeneous and isotropic problem studied in \Sec{sec:CosmoHam} [hereafter, quantities solving the homogeneous and isotropic problem will always be denoted with the argument ``$(\tau)$'']. The perturbations of the shift vector can be written in a similar way, \begin{eqnarray} \delta N^i(\tau,\vec{x}) := N^i(\tau,\vec{x})- N^i(\tau) \, , \label{eq:DefPerturbations4} \end{eqnarray} where $N^i(\tau)=0$ since the shift vector vanishes in the homogeneous and isotropic setup. According to the SVT decomposition, $\delta N^i$ can be expanded into the gradient of a scalar and a divergence-free vector, namely $\delta N_i = \partial_i (\delta N_1) + (\delta N_2)_i $, where $\delta N_1$ is a scalar and $\delta N_2$ is a vector such that $\partial_i (\delta N_2)^i=0$. As explained above, we focus on scalar perturbations and thus set $\delta N_2=0$. In Fourier space, one has \begin{eqnarray} \label{eq:deltaNi:deltaN1} \delta N^i(\tau,\vec{k})= i \frac{k^i}{k} \delta N_1(\tau,\vec{k})\, , \end{eqnarray} where $\delta N_1(\tau,\vec{k})$ has been rescaled by an overall $k$ factor for later convenience. Note that the reality condition ${\delta N^i}^\star(\tau,\vec{k})=\delta N^i(\tau,-\vec{k})$, see \Eq{eq:reality:condition}, translates into the same condition for $\delta N_1$, namely $\delta N_1^\star(\tau,\vec{k})=\delta N_1(\tau,-\vec{k})$, since $i\vec{k}$ is invariant under complex conjugation and sign flipping of the wavevector. The induced metric and its conjugate momentum are perturbed as \begin{eqnarray} \label{eq:DefPerturbations5} \delta\gamma_{ij}(\tau,\vec{x}) &:=& \gamma_{ij}(\tau,\vec{x})- \gamma_{ij}(\tau)\, , \\ \delta\pi^{ij}(\tau,\vec{x}) &:=& \pi^{ij}(\tau,\vec{x})-\pi^{ij}(\tau) \, . \label{eq:DefPerturbations6} \end{eqnarray} These are tensors on the spatial hypersurfaces $\Sigma_\tau$, and still according to the SVT decomposition they can be expanded as $h_{ij} = h_1 \gamma_{ij} + \partial_i \partial_j h_2 + \partial_i (h_3)_j + \partial_j (h_3)_i + (h_4)_{ij}$, where $h_{ij}$ denotes a generic tensor form, $h_1$ and $h_2$ are scalars, $h_3$ is a divergence-free vector, and $h_4$ is a transverse and traceless tensor in the sense that $\partial_i h_4^{ij} = 0$ and $(h_4)_i^{\ i}=0$. Keeping only scalar perturbations amounts to setting $h_3=h_4=0$. In Fourier space, the $h_1$ contribution is proportional to $\gamma_{ij}$ while the $h_2$ contribution is proportional to $k_i k_j$.
For this reason, we introduce the two basis matrices \begin{eqnarray} \label{eq:Mij:def} M^1_{ij}:=\frac{1}{\sqrt{3}}\widetilde{\gamma}_{ij} & ~~~\mathrm{and}~~~ & M^2_{ij}:=\sqrt{\frac{3}{2}}\left(\frac{k_ik_j}{k^2}-\frac{\widetilde{\gamma}_{ij}}{3}\right)\, , \end{eqnarray} which are indeed linear combinations of $\gamma_{ij}$ and $k_i k_j$, and whose indices are raised and lowered using the metric $\widetilde{\gamma}_{ij}$ since $\vec{k}$ is a comoving wavevector. Note that $M^1$ captures the purely isotropic part of the perturbations. Our choice of normalisation (which slightly differs from the one in \cite{Langlois:1994ec}\footnote{As a consequence, our gravitational variables slightly differ from the ones in \Refc{Langlois:1994ec}. Denoting $(\delta\gamma_A^\mathrm{L},\delta\pi_A^{\mathrm{L}})$ the variables used in \Refc{Langlois:1994ec}, the link between the two sets of variables is given by: \begin{eqnarray} \delta\gamma_1^{\mathrm{L}}=\frac{\delta\gamma_1}{v^{2/3}\sqrt{3}}\, ,\qquad \delta\pi_1^{\mathrm{L}}=\sqrt{3}v^{2/3}\delta\pi_1\, ,\qquad \delta\gamma_2^{\mathrm{L}}=\sqrt{\frac{{3}}{2}}\frac{\delta\gamma_2}{v^{2/3}}\, ,\qquad \delta\pi_2^{\mathrm{L}}=\sqrt{\frac{2}{3}}v^{2/3}\delta\pi_2. \end{eqnarray} Both sets of variables are related by a diagonal canonical transformation, which thus corresponds to a pure squeezing~\cite{Colas:2021llj}.}) is such that these two matrices form an orthonormal basis, \textsl{i.e.~} $M^{ij}_AM^{A'}_{ij}=\delta_{A,A'}$, where $A$ and $A'$ run over 1 and 2. In Fourier space, the scalar perturbations in the induced metric and its momentum can thus be expanded as \begin{eqnarray} \label{eq:delta:gamma:Mbasis} \delta\gamma_{ij}(\tau,\vec{k})&=&\delta\gamma_1(\tau,\vec{k}) M^1_{ij}+\delta\gamma_2(\tau,\vec{k}) M^2_{ij}(\vec{k})\, , \\ \delta\pi^{ij}(\tau,\vec{k})&=&\delta\pi_1(\tau,\vec{k}) M^{ij}_1+\delta\pi_2(\tau,\vec{k}) M^{ij}_2(\vec{k})\, . \label{eq:delta:pi:Mbasis} \end{eqnarray} The two scalar degrees of freedom for the gravitational sector are then described by $(\delta\gamma_1,\delta\pi_1)$ and $(\delta\gamma_2,\delta\pi_2)$. They are related to the original induced metric and conjugate momentum through \begin{eqnarray} \label{eq:delta:gamma:A:delta:gamma:ij} \delta\gamma_A=M^{ij}_A\delta\gamma_{ij} \quad\text{and}\quad \delta\pi_A=M^A_{ij}\delta\pi^{ij}\, . \end{eqnarray} Let us finally stress that, although the indices in $M_A^{ij}$ are lowered and raised with $\widetilde{\gamma}_{ij}$, \Eqs{eq:delta:gamma:Mbasis} and~\eqref{eq:delta:pi:Mbasis} should not be interpreted as leading to similar rules for $\delta\gamma_{ij}$ and $\delta\pi^{ij}$. Indeed, the indices of $\gamma_{ij}$ and $\pi^{ij}$ are lowered and raised with the full induced metric $\gamma_{ij}$. For instance, at linear order in perturbation theory, this leads to $\delta\gamma^{ij}(\tau,\vec{x})=-\gamma^{im}(\tau)\gamma^{jn}(\tau)\delta\gamma_{mn}(\tau,\vec{x})$, hence \begin{eqnarray} \label{eq:delta:gamma:ij:UP} \delta\gamma^{ij}(\tau,\vec{k})&=&-\frac{\delta\gamma_1(\tau,\vec{k})}{v^{4/3}}M_1^{ij}-\frac{\delta\gamma_2(\tau,\vec{k})}{v^{4/3}}M_2^{ij}(\vec{k})\, , \end{eqnarray} where we have used that $\gamma^{ij}(\tau)=v^{-2/3}\widetilde{\gamma}^{ij}$. For the conjugate momentum, one obtains, still at leading order in perturbation theory, $\delta\pi_{ij}(\tau,\vec{x})=\gamma_{im}(\tau)\gamma_{jn}(\tau)\delta\pi^{mn}(\tau,\vec{x})+\delta\gamma_{im}(\tau,\vec{x})\gamma_{jn}(\tau)\pi^{mn}(\tau)+\delta\gamma_{jm}(\tau,\vec{x})\gamma_{in}(\tau)\pi^{mn}(\tau)$. 
In Fourier space, this leads to \begin{eqnarray} \label{eq:delta:pi:ij:DOWN} \delta\pi_{ij}(\tau,\vec{k}) =\left[v^{4/3}\delta\pi_1(\tau,\vec{k})+v\theta\delta\gamma_1(\tau,\vec{k})\right] M^1_{ij}+\left[v^{4/3}\delta\pi_2(\tau,\vec{k})+v\theta\delta\gamma_2(\tau,\vec{k})\right] M^2_{ij}(\vec{k})\, ,\quad \end{eqnarray} where $\pi^{mn}(\tau)$ has been related to $\theta$ by combining \Eqs{eq:pi_ij:pi_p} and~\eqref{eq:theta:def}. We note that the configuration variables $\delta\gamma_1$ and $\delta\gamma_2$ also contribute to $\delta\pi_{ij}$. For expressions of $\delta\gamma_{ij}(\tau,\vec{k})$ and $\delta\pi ^{ij}(\tau,\vec{k})$ valid at second order, see \Eqs{eq:delta:gamma:ij:UP:2ndOrder} and~\eqref{eq:delta:pi:ij:DOWN:2ndOrder}. We have thus identified the relevant scalar degrees of freedom at the perturbative level (we note that the lapse function and the shift vector have no associated momenta, and the same is true of their perturbations). For completeness, the relationship between the perturbative degrees of freedom in the Hamiltonian framework and those defined in the Lagrangian approach is given in \App{app:lag}. \subsection{Dynamics of the perturbations} \label{sec:Dynamics:Scalar:Perturbations} Let us now study the dynamics of the perturbation variables introduced in the previous section. Our starting point is to view \Eqs{eq:DefPerturbations2}, \eqref{eq:DefPerturbations3}, \eqref{eq:DefPerturbations5} and \eqref{eq:DefPerturbations6} as defining a canonical transformation, which simply consists in subtracting fixed, time-dependent functions from the phase-space variables. Such a transformation, which is a mere translation in phase space, obviously preserves the Poisson brackets, hence it is indeed canonical. Our first task is to derive the Hamiltonian for this new set of canonical variables. In practice, let us formally arrange the configuration variables into a vector $\vec{q}(\tau,\vec{x})$, with conjugate momentum $\vec{p}(\tau,\vec{x})$. The perturbation variables are defined according to $\delta\vec{q}(\tau,\vec{x}) = \vec{q}(\tau,\vec{x})- \vec{q}(\tau)$ and $\delta\vec{p}(\tau,\vec{x}) = \vec{p}(\tau,\vec{x})- \vec{p}(\tau)$, where $\vec{q}(\tau)$ and $ \vec{p}(\tau)$ solve the homogeneous and isotropic problem described in \Sec{ssec:bck}. When evaluated on the fields $\vec{q}(\tau,\vec{x})$ and $\vec{p}(\tau,\vec{x})$, the scalar constraint can be Taylor expanded in $\delta\vec{q}$ and $\delta\vec{p}$ (for the moment to infinite order, so the analysis remains exact at this stage) according to \begin{eqnarray} & &\kern-3em \mathcal{S}\left[\vec{q}(\tau,\vec{x}),\vec{p}(\tau,\vec{x})\right] =\mathcal{S}\left[\vec{q}(\tau),\vec{p}(\tau)\right] +\underbrace{\delta q_\mu \frac{\partial\mathcal{S}}{\partial q_\mu}\left[\vec{q}(\tau),\vec{p}(\tau)\right] +\delta p^\mu \frac{\partial\mathcal{S}}{\partial p^\mu}\left[\vec{q}(\tau),\vec{p}(\tau)\right] }_{\mathcal{S}^{(1)}\left[\delta\vec{q},\delta\vec{p}\right]} \nonumber \\ & &\kern1em +\underbrace{\frac{1}{2}\sum_{\{n,m ; n+m=2\} }\left(\delta q_\mu\right)^n \left(\delta p^\nu\right)^m \frac{\partial^2\mathcal{S}}{\left(\partial q_\mu\right)^n \left(\partial p^\nu\right)^m}\left[\vec{q}(\tau),\vec{p}(\tau)\right]}_{\mathcal{S}^{(2)}\left[\delta\vec{q},\delta\vec{p}\right]} +\sum_{n\geq 3} \mathcal{S}^{(n)}\left[\delta\vec{q},\delta\vec{p}\right]\, .
\label{eq:S:Taylor} \end{eqnarray} In this expression, the first term vanishes, $\mathcal{S}[\vec{q}(\tau),\vec{p}(\tau)] =0 $, since, by definition, $\vec{q}(\tau)$ and $\vec{p}(\tau)$ satisfy the homogeneous and isotropic scalar constraint. The other terms are organised in powers of the perturbation variables: $\mathcal{S}^{(1)}$ contains linear combinations of the perturbation variables, $\mathcal{S}^{(2)}$ contains quadratic combinations, \textsl{etc.~} A similar expression can be written down for the diffeomorphism constraint $\mathcal{D}_i$. The dynamics of $\vec{q}(\tau,\vec{x})$ and $\vec{p}(\tau,\vec{x})$ is given by the Hamiltonian~\eqref{eq:full:Hamiltonian}, whose Hamilton equations read \begin{eqnarray} \label{eq:eom:qmu} \dot{q}_\mu(\tau,\vec{x}) &=& N(\tau,\vec{x}) \frac{\partial \mathcal{S}}{\partial p^\mu}\left[\vec{q}(\tau,\vec{x}),\vec{p}(\tau,\vec{x})\right] + N^i (\tau,\vec{x})\frac{\partial \mathcal{D}_i}{\partial p^\mu}\left[\vec{q}(\tau,\vec{x}),\vec{p}(\tau,\vec{x})\right] \, ,\\ \dot{p}^\mu(\tau,\vec{x}) &=& - N(\tau,\vec{x}) \frac{\partial \mathcal{S}}{\partial q_\mu}\left[\vec{q}(\tau,\vec{x}),\vec{p}(\tau,\vec{x})\right] - N^i(\tau,\vec{x}) \frac{\partial \mathcal{D}_i}{\partial q_\mu}\left[\vec{q}(\tau,\vec{x}),\vec{p}(\tau,\vec{x})\right]\, . \label{eq:eom:pmu} \end{eqnarray} Upon plugging the Taylor series~\eqref{eq:S:Taylor} (and the analogous formula for the diffeomorphism constraint) into \Eq{eq:eom:qmu}, where $\dot{q}_\mu(\tau,\vec{x}) = \dot{q}_\mu(\tau)+\delta \dot{q}_\mu(\tau,\vec{x})$, one obtains \begin{eqnarray} \delta\dot{q}_\mu& =& N(\tau) \frac{\partial \mathcal{S}}{\partial p^\mu} \left[\vec{q}(\tau),\vec{p}(\tau) \right] - \dot{q}_\mu(\tau) +\delta N(\tau,\vec{x})\frac{\partial}{\partial\left(\delta p^\mu\right)} \mathcal{S}^{(1)}\left[\delta\vec{q},\delta\vec{p}\right] \nonumber \\ & & + N (\tau,\vec{x})\frac{\partial}{\partial\left(\delta p^\mu\right)} \sum_{n\geq 2} \mathcal{S}^{(n)}\left[\delta\vec{q},\delta\vec{p}\right] + N^i(\tau,\vec{x}) \frac{\partial}{\partial\left(\delta p^\mu\right)} \sum_{n\geq 1} \mathcal{D}_i^{(n)}\left[\delta\vec{q},\delta\vec{p}\right] \, . \end{eqnarray} In this expression, the first two terms on the right-hand side cancel each other out since, by definition, $\vec{q}(\tau)$ obeys the first Hamilton equation of the homogeneous and isotropic problem. Only the last three terms remain, which shows that the equation of motion of $\delta\vec{q}$ has the form of a first Hamilton equation with a Hamiltonian density given by $ \delta N \mathcal{S}^{(1)}\left[\delta\vec{q},\delta\vec{p}\right] +N \sum_{n\geq 2} \mathcal{S}^{(n)}\left[\delta\vec{q},\delta\vec{p}\right] + N^i \sum_{n\geq 1} \mathcal{D}_i^{(n)}\left[\delta\vec{q},\delta\vec{p}\right]$. The same conclusion can be drawn from plugging the Taylor series of the constraints into \Eq{eq:eom:pmu} and deriving the equation of motion for $\delta\vec{p}$, which can be cast into a second Hamilton equation with the same Hamiltonian, namely \begin{eqnarray} \label{eq:Expanded:Hamiltonian} C\left[\delta\vec{q}, \delta\vec{p} \right] = \int \mathrm{d}^3 \vec{x} & &\Bigg[N(\tau) \mathcal{S}^{(2)} + \delta N \mathcal{S}^{(1)} + \delta N^i \mathcal{D}_i^{(1)} \nonumber \\ & & + N(\tau) \sum_{n=3}^\infty \mathcal{S}^{(n)}+ \delta N \sum_{n=2}^\infty \mathcal{S}^{(n)}+\sum_{n=2}^\infty\delta N^i \mathcal{D}_i^{(n)} \Bigg] \, . \end{eqnarray} In this expression, the quadratic terms have been singled out for later convenience.
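To make the bookkeeping encoded in \Eq{eq:Expanded:Hamiltonian} concrete, one can verify on a toy model that the linear part of the full Hamilton equations is generated by the quadratic piece of the expansion. The following \texttt{sympy} sketch (our own illustration; the one-dimensional ``constraint'' $\mathcal{S}(q,p)=p^2/(2q)+q^3$ is an arbitrary smooth choice) checks that the ${\cal O}(\delta)$ part of $\partial\mathcal{S}/\partial p$, which drives $\delta\dot{q}$, coincides with $\partial\mathcal{S}^{(2)}/\partial(\delta p)$:
\begin{verbatim}
import sympy as sp

q0, p0, dq, dp, eps = sp.symbols('q0 p0 dq dp epsilon')

def S(q, p):
    return p**2 / (2 * q) + q**3    # toy one-dof "scalar constraint"

# Taylor expansion around the background (q0, p0):
# S = S^(0) + eps S^(1) + eps^2 S^(2) + O(eps^3)
expansion = sp.series(S(q0 + eps * dq, p0 + eps * dp), eps, 0, 3).removeO()
S2 = expansion.coeff(eps, 2)        # quadratic piece S^(2)(dq, dp)

# dS/dp on the perturbed point, expanded to first order in eps
Sp = sp.diff(S(q0 + eps * dq, p0 + eps * dp), p0)
Sp_lin = sp.series(Sp, eps, 0, 2).removeO().coeff(eps, 1)

# The linear part of dS/dp equals d(S^(2))/d(dp), as claimed in the text
print(sp.simplify(Sp_lin - sp.diff(S2, dp)))    # -> 0
\end{verbatim}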
Although we have shown that this Hamiltonian gives the correct equations of motion, one must ensure that the correct constraints are also recovered. This is the case since the perturbed lapse function multiplies $\sum_{n\geq 1}\mathcal{S}^{(n)}$ in \Eq{eq:Expanded:Hamiltonian}, namely the full scalar constraint minus $\mathcal{S}[\vec{q}(\tau),\vec{p}(\tau)]$, which itself vanishes as already mentioned. One can also check that the perturbed shift vector multiplies $\sum_{n\geq 1}\mathcal{D}_i^{(n)}$, which is nothing but the full diffeomorphism constraint. Even though the above Hamiltonian provides an exact description of the perturbation variables, in practice, tractable calculations can only be performed by truncating the expansion at a finite order. At leading order in perturbation theory, only the quadratic terms in \Eq{eq:Expanded:Hamiltonian} remain (\textsl{i.e.~} those in the first line). Variations with respect to the perturbed lapse and the perturbed shift give the linear scalar constraint equation $\mathcal{S}^{(1)}=0$ and the linear diffeomorphism constraint equation $\mathcal{D}_i^{(1)}=0$ respectively, while the dynamics of the other phase-space coordinates is generated by $\mathcal{S}^{(2)}$ [given that $N(\tau)$ is already determined by the background solution]. An important remark is that, although $\mathcal{S}^{(2)}$ vanishes in the full theory (the full scalar constraint vanishes so it must vanish order by order), this is not guaranteed at linear order in perturbation theory (where only the linear constraints are satisfied). The reason is that, at that order, linear relationships are imposed between the phase-space variables, where all quadratic and higher-order contributions are neglected. Since $\mathcal{S}^{(2)}$ is a quadratic quantity, this explains why it is not satisfied. It may still be referred to as the quadratic ``constraint'' in what follows, although one must recall that this constraint is not satisfied at linear order. Finally, let us note that if one wanted to study higher orders, one could iterate the same procedure, and perform the canonical transformation which consists in subtracting from the perturbation variables the solutions to the linear problem that we will now derive. One would then find that, at order $n$, the scalar and diffeomorphism constraints are given by $\mathcal{S}^{(n)}=0$ and $\mathcal{D}_i^{(n)}=0$, while the dynamics is generated by $\mathcal{S}^{(n+1)}$.\\ As mentioned at the beginning of this section, \Eqs{eq:DefPerturbations2}, \eqref{eq:DefPerturbations3}, \eqref{eq:DefPerturbations5} and \eqref{eq:DefPerturbations6} can be seen as a canonical transformation, which preserves the Poisson brackets, hence $(\delta\phi, \delta\pi_\phi, \delta\gamma_{ij},\delta\pi^{ij})$ share the same Poisson brackets as $(\phi, \pi_\phi, \gamma_{ij},\pi^{ij})$, which are given below \Eq{eq:metric:ADM}. In \Sec{ssec:dofpert}, the scalar degrees of freedom were identified, and for the gravitational sector they are given by $\delta\gamma_1$, $\delta\gamma_2$, $\delta\pi_1$ and $\delta\pi_2$.
Their Poisson brackets can be obtained from \Eq{eq:delta:gamma:A:delta:gamma:ij}, which leads to $\{ \delta\gamma_A(\vec{x}), \delta\gamma_{A'}(\vec{y})\} =\{ \delta\pi_A(\vec{x}), \delta\pi_{A'}(\vec{y})\} =0$ and $\{ \delta\gamma_A(\vec{x}), \delta\pi_{A'}(\vec{y})\} = M_A^{ij} M^{A'}_{\ell m} \{ \delta\gamma_{ij}(\vec{x}), \delta\pi^{\ell m}(\vec{y}) \} = \delta(\vec{x}-\vec{y}) (M_A^{ij} M^{A'}_{ij} + M_A^{ij} M^{A'}_{ji} )/2 = \delta(\vec{x}-\vec{y}) \delta_{A,A'}$ where we have used that the $M$ matrices form an orthonormal basis. As a consequence, arranging the scalar perturbations into a vector $\delta\vec{ \phi}:=(\delta\phi,\delta \gamma_1,\delta \gamma_2)$ for convenience, the conjugate momentum to $\delta\vec{\phi}$ is given by $\delta\vec{ \pi}_\phi:=(\delta\pi_\phi,\delta \pi_1,\delta \pi_2)$. In real space, the Poisson brackets thus read \begin{eqnarray} \left\{\delta\vec{\phi}(\tau,\vec{x}),\delta\vec{\pi}_\phi(\tau,\vec{y})\right\}= \delta^3(\vec{x}-\vec{y})\boldsymbol{I}\, , \end{eqnarray} where $\boldsymbol{I}$ is the identity matrix (here in 3 dimensions), while in Fourier space they are given by \begin{eqnarray} \left\{\delta\vec{\phi}(\tau,\vec{k}),\delta\vec{\pi}_\phi^\star(\tau,\vec{k}')\right\}= \delta^3(\vec{k}-\vec{k}')\boldsymbol{I}\, . \end{eqnarray} Note that since the two matrices $M^A_{ij}$ satisfy $M^A_{ij}(-\vec{k})=M^A_{ij}(\vec{k})$, the reality condition~\eqref{eq:reality:condition} applies for $\delta\gamma_A(t,\vec{k})$ and $\delta\pi_A(t,\vec{k})$ [in addition to holding for $\delta N(t,\vec{k})$, $\delta\phi(t,\vec{k})$, and $\delta \pi_\phi(t,\vec{k})$ as already mentioned]. The expansion of the constraints at first and second order in the perturbation variables is performed in \App{app:gloss} in the case of a flat FLRW background, and below we only quote the results. The linear diffeomorphism constraint is given in \Eq{eq:D1:app:final} and reads \begin{eqnarray} \label{eq:D1i:D1} \mathcal{D}^{(1)}_i(t,\vec{k})=i\,k_i\,\mathcal{D}^{(1)}(t,\vec{k}), \end{eqnarray} where $\mathcal{D}^{(1)}$ is a scalar given by \begin{eqnarray} \mathcal{D}^{(1)}=\pi_\phi\,\delta\phi+\frac{1}{\sqrt{3}}v^{1/3}\theta\left(\frac{1}{2}\delta\gamma_1-\sqrt{2}\delta\gamma_2\right)-\frac{2}{\sqrt{3}}v^{2/3}\left(\delta\pi_1+\sqrt{2}\delta\pi_2\right)\, . \label{eq:diff1gen} \end{eqnarray} Note that \Eq{eq:D1i:D1} implies that the reality condition~\eqref{eq:reality:condition}, ${\mathcal{D}^{(1)}_i}^\star(t,\vec{k})=\mathcal{D}^{(1)}_i(t,-\vec{k})$, also holds for $\mathcal{D}^{(1)}$, namely ${\mathcal{D}^{(1)}}^\star(t,\vec{k})=\mathcal{D}^{(1)}(t,-\vec{k})$. In general, the diffeomorphism constraint is a vector and leads to three constraint equations. In the present case, it reduces to a single constraint equation, $\mathcal{D}^{(1)}(t,\vec{k})=0$, since only scalar perturbations are considered. The linear scalar constraint is obtained by combining the contributions in \Eqs{eq:calT1:app:final}, \eqref{eq:calW1:app:final}, \eqref{eq:T1:app:final}, \eqref{eq:W1:app:final}, and reads \begin{eqnarray} \mathcal{S}^{(1)}(t,\vec{k})&=&-\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,\delta\pi_1-\frac{v^{1/3}}{\sqrt{3}}\left(\frac{\pi_\phi^2}{v^2}-V+{M_\usssPl^2}\frac{k^2}{v^{2/3}}\right)\,\delta\gamma_1+\frac{M_\usssPl^2}{\sqrt{6}}\frac{k^2}{v^{1/3}}\delta\gamma_2 \nonumber\\ & & +\frac{\pi_\phi}{v}\,\delta\pi_\phi+vV_{,\phi}\delta\phi\, , \label{eq:scal1gen:simp} \end{eqnarray} where the background scalar constraint~\eqref{eq:ConstHom} has been used. 
This constraint also satisfies the reality condition~\eqref{eq:reality:condition}, $\mathcal{S}^{(1)\star}(t,\vec{k})=\mathcal{S}^{(1)}(t,-\vec{k})$, since it is a linear combination of the perturbation variables (each satisfying the reality condition) with coefficients given by real-valued functions of the background and of $k^2$. Let us note that, when expressing the smeared constraints (see footnote~\ref{footnote:smeared:constraints}) in Fourier space, one should make sure to avoid double counting of the degrees of freedom and take into account the reality condition~\eqref{eq:reality:condition}. In practice, the integration over Fourier modes can be split into two parts, $\mathbb{R}^{3+}:=\mathbb{R}^2\times\mathbb{R}^+$ and $\mathbb{R}^{3-}:=\mathbb{R}^2\times\mathbb{R}^-$. The integration over $\mathbb{R}^{3-}$ can then be written as an integral over $\mathbb{R}^{3+}$ using the fact that $\delta N(t,\vec{k})$, $\delta N_1(t,\vec{k})$, $\mathcal{D}^{(1)}_i(t,\vec{k})$ and $\mathcal{S}^{(1)}(t,\vec{k})$ all satisfy the reality condition. This gives for the smeared constraints \begin{eqnarray} \label{eq:D1:calD1} D^{(1)}[\delta N^i]&=&\displaystyle\int_{\mathbb{R}^{3+}}k\,\mathrm{d}^3k\,\left[\delta N_1 \mathcal{D}^{(1)\star}+\delta N^\star_1 \mathcal{D}^{(1)}\right], \\ S^{(1)}[\delta N]&=&\displaystyle\int_{\mathbb{R}^{3+}}\mathrm{d}^3k\left[\delta N\mathcal{S}^{(1)\star}+\delta N^\star\mathcal{S}^{(1)}\right]\, , \label{eq:S1:calS1} \end{eqnarray} where the extra $k$ in the smeared diffeomorphism constraint comes from \Eq{eq:D1i:D1}. As explained below \Eq{eq:Expanded:Hamiltonian}, at quadratic order, only the perturbed scalar constraint, $\mathcal{S}^{(2)}$, is needed. Expressing the smeared constraint as an integral over $\mathbb{R}^{3+}$ in order to avoid double counting again, \begin{eqnarray} \label{eq:S2.calS2} S^{(2)}\left[N\right]=2\displaystyle\int_{\mathbb{R}^{3+}} \mathrm{d}^3k\,N(\tau)\,\mathcal{S}^{(2)}(\tau,\vec{k}), \end{eqnarray} in \App{app:gloss} we show that [see \Eqs{eq:calT2:app:final}, \eqref{eq:calW2:app:final}, \eqref{eq:T2:app:final} and~\eqref{eq:W2:app:final}] \begin{eqnarray} \label{eq:scalconst2} \mathcal{S}^{(2)}&=&\frac{v^{1/3}}{M_\usssPl^2}\left(2\left|\delta\pi_2\right|^2-\left|\delta\pi_1\right|^2\right)+\frac{1}{2v}\left|\delta\pi_\phi\right|^2+\frac{v}{2}\left(\frac{k^2}{v^{2/3}}+V_{,\phi,\phi}\right)\left|\delta\phi\right|^2 \nonumber\\ &&+\frac{1}{3v^{1/3}}\left(\frac{\pi^2_\phi}{v^2}+\frac{V}{2} - \frac{M_\usssPl^2 k^2}{4 v^{2/3}}\right)\left|\delta\gamma_1\right|^2 + \frac{1}{3v^{1/3}}\left(\frac{\pi^2_\phi}{v^2}+\frac{V}{2} - \frac{M_\usssPl^2 k^2}{8 v^{2/3}}\right) \left|\delta\gamma_2\right|^2 \nonumber \\ &&-\frac{\theta}{4M_\usssPl^2}\left(\delta\pi_1\delta\gamma^\star_1+\mathrm{c.c.}\right)+\frac{\theta}{2M_\usssPl^2}\left(\delta\pi_2 \delta\gamma^\star_2+\mathrm{c.c.}\right) +\frac{\sqrt{2}M_\usssPl^2}{24v}k^2\left(\delta\gamma_1\delta\gamma_2^\star+\mathrm{c.c.}\right) \nonumber \\ &&-\frac{\sqrt{3}}{4}v^{1/3}\left[\left(\frac{\pi_\phi}{v^2}\delta\pi_\phi-V_{,\phi}\delta\phi\right)\delta\gamma_1^\star+\mathrm{c.c.}\right], \label{eq:S2:full} \end{eqnarray} where ``c.c.'' means the complex conjugate of the previous term, and where the background scalar constraint~\eqref{eq:ConstHom} has been used to further simplify the expression. 
The first two lines correspond to diagonal terms, the third line features cross terms within the gravitational sector (we notice that there is no cross term in the scalar-field sector), while the fourth line stands for couplings between the two sectors. In particular, one can see that the scalar field perturbations couple only to the isotropic gravitational configuration. There is however no coupling between the scalar field and the isotropic gravitational momentum $\delta\pi_1$. The absence of coupling between $\delta\phi$ or $\delta\pi_\phi$ and $\delta\gamma_2$ or $\delta\pi_2$ is due to the fact that a scalar field can only generate isotropic perturbations. The two gravitational degrees of freedom are however coupled to each other, through a term of the form $k^2(\delta\gamma_1)(\delta\gamma_2)$, that is to say via gradient interactions. We are now in a position where we can derive the equations of motion for the perturbations, \begin{eqnarray} &&\left\{\begin{array}{l} \displaystyle\dot{{\delta\gamma_1}}=- \frac{2}{\sqrt{3}} v^{2/3} k \delta N_1 -\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,{\delta N}-\frac{N}{M_\usssPl^2}\left({2v^{1/3}}\,{\delta\pi_1}+\frac{\theta}{2}\,{\delta\gamma_1}\right), \\ \displaystyle\dot{{\delta\pi_1}}= - \frac{v^{1/3}\theta}{2\sqrt{3}} k \delta N_1 + \frac{v^{1/3}}{\sqrt{3}}\left(\frac{\pi^2_\phi}{v^2}-V + M_\usssPl^2 \frac{k^2}{v^{2/3}}\right)\,{\delta N} \\ ~~~~~~~~\displaystyle+N\bigg[-\frac{2}{3v^{1/3}}\left(\frac{\pi^2_\phi}{v^2}+\frac{V}{2} - \frac{M_\usssPl^2 k^2}{4 v^{2/3}} \right)\,{\delta\gamma_1}+\frac{\theta}{2M_\usssPl^2}\,{\delta\pi_1} \\ ~~~~~~~~\displaystyle+\frac{\sqrt{3}}{2}v^{1/3}\left(\frac{\pi_\phi}{v^2}\,{\delta\pi_\phi}-V_{,\phi}\,{\delta\phi}\right) - \frac{\sqrt{2}}{12 v} M_\usssPl^2 k^2 \delta\gamma_2 \bigg], \end{array}\right. \nonumber\\ &&\nonumber \\ &&\left\{\begin{array}{l} \displaystyle\dot{{\delta\gamma_2}}= -2\sqrt{\frac{2}{3}} v^{2/3} k \delta N_1 + N \left( \frac{4 v^{1/3}}{M_\usssPl^2} \delta\pi_2 + \frac{\theta}{M_\usssPl^2}\delta\gamma_2 \right) , \\ \label{eq:eom:gen} \displaystyle\dot{{\delta\pi_2}}= \sqrt{\frac{2}{3}} v^{1/3} \theta k \delta N_1 - \frac{M_\usssPl^2 k^2}{\sqrt{6} v^{1/3}}\delta N \\ ~~~~~~~~\displaystyle + N \bigg[ - \frac{\theta}{M_\usssPl^2} \delta \pi_2 - \frac{\sqrt{2} M_\usssPl^2}{12 v} k^2 \delta \gamma_1 - \frac{2}{3 v^{1/3}} \left( \frac{\pi_\phi^2}{v^2} + \frac{V}{2} - \frac{M_\usssPl^2 k^2}{8 v^{2/3}} \right) \delta\gamma_2 \bigg] \,, \end{array}\right. \label{eq:EOM}\\ &&\nonumber \\ &&\left\{\begin{array}{l} \displaystyle\dot{{\delta\phi}}=\frac{\pi_\phi}{v}\,{\delta N}+N\left(\frac{1}{v}\,{\delta\pi_\phi}-\frac{\sqrt{3}}{2}\frac{\pi_\phi}{v^{5/3}}\,{\delta\gamma_1}\right), \\ \displaystyle\dot{{\delta\pi_\phi}}=-\pi_\phi k \delta N_1 -vV_{,\phi}\,{\delta N}-N\left[v\left(\frac{k^2}{v^{2/3}} + V_{,\phi,\phi}\right)\,{\delta\phi}+\frac{\sqrt{3}}{2}v^{1/3}V_{,\phi}\,{\delta\gamma_1}\right]. \end{array}\right. \nonumber \end{eqnarray} \subsection{Fixing the gauge} \label{ssec:gaugefix} Since the theory is independent of a specific choice of space-time coordinates, some combinations of the perturbation variables can be set to zero, which amounts to working in a specific gauge. Changes of coordinates bear four degrees of freedom (one per coordinate), made of two scalars and one vector.\footnote{In general, a change of coordinates can be written as $x^\mu\to x^\mu+\xi^\mu$, where $\xi_i=\partial_i f + f_i$, with $\partial_i f^i=0$.
The two scalar degrees of freedom correspond to $\xi^0$ and $f$ while $f^i$ contains the vector degrees of freedom.\label{footnote:ChangeOfCoordinates}} In practice, they correspond to the Lagrange multipliers of the theory (one scalar in the lapse, one scalar and one vector in the shift). Since we are dealing with scalar perturbations only, this implies that two combinations of scalar perturbations can be set to zero. The vanishing of their respective equations of motion leads to two additional vanishing combinations. Together with the two linear constraint equations, this allows one to freeze six out of the eight variables (namely $\delta N$, $\delta N_1$, $\delta\phi$, $\delta\pi_\phi$, $\delta\gamma_1$, $\delta\pi_1$, $\delta\gamma_2$ and $\delta\pi_2$), such that only two variables (hence a single physical degree of freedom) remain. This single remaining physical degree of freedom can be parametrised in a gauge-invariant way, \textsl{e.g.~} using the so-called Mukhanov-Sasaki combination~\cite{Mukhanov:1981xt,Kodama:1984ziu}\footnote{Note that, to match conventions usually adopted in the literature, we use a different normalisation than in previous versions of this article.} \begin{eqnarray} \label{eq:MS:def} Q_{{}_\mathrm{MS}} := \sqrt{\frac{v}{N}} \delta\phi + \frac{M_\usssPl^2\pi_\phi}{\sqrt{6N}\theta v^{7/6}}\left(\sqrt{2}\delta\gamma_1-\delta\gamma_2\right) . \end{eqnarray} A detailed discussion of gauge transformations in the Hamiltonian formalism, and of the systematic construction of gauge-invariant combinations, will be presented separately in a forthcoming article. The above constraint and dynamical equations allow one to derive an autonomous equation of motion for the Mukhanov-Sasaki variable, namely \begin{eqnarray} \label{eq:MS:CPT} \ddot{Q}_{{}_\mathrm{MS}} + \left(k^2-\frac{\ddot{z}}{z}\right)Q_{{}_\mathrm{MS}} =0\, , \end{eqnarray} which is written in conformal time $\eta$ (so that overdots in that equation denote derivatives with respect to $\eta$), and where $z\equiv v^{1/3}\sqrt{2\epsilon_1}M_\usssPl $ with $\epsilon_1\equiv 2 M_\usssPl^2 \dot{\theta}/(N\theta^2)$ the first Hubble-flow parameter. An alternative approach is to fix the gauge in which the calculation is performed. Although a gauge-invariant approach is more elegant in general, gauge fixing may be required in some problems (for instance in numerical approaches, see \textsl{e.g.~} \Refc{Fidler:2015npa}, or in the stochastic-$\delta N$ formalism as explained below in \Sec{sec:Uniform:Expansion:Gauge:def}). We thus end this section by considering a few different gauge choices that are commonly used in the literature, namely the spatially-flat gauge, the Newtonian gauge, the (generalised) synchronous gauges and the uniform-expansion gauges. These gauges will also be of particular interest to discuss how to properly match the separate-universe approach to CPT, which is why they are introduced before \Sec{sec:SepUniv}. The connection with the (more often discussed) definition of these gauges in the Lagrangian framework is given in \App{app:lag}. \subsubsection{Spatially-flat gauge} \label{sec:Spatially:Flat:def} Let us start with the spatially-flat gauge, in which one sets $\delta\gamma_{ij}=0$. This implies that $\delta\gamma_1=\delta\gamma_2=0$. In that gauge, phase-space reduction proceeds as follows.
For $\delta\gamma_1$ and $\delta\gamma_2$ to remain zero, their equations of motion should vanish too (\textsl{i.e.~} $\delta\dot{\gamma}_1=\delta\dot{\gamma}_2=0$), which from \Eq{eq:eom:gen} gives two constraint equations, namely \begin{eqnarray} & &\frac{2}{\sqrt{3}}kv^{2/3}\,\delta N_1+\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,\delta N+\frac{2}{M_\usssPl^2}Nv^{1/3}\,\delta\pi_1 =0\, , \label{eq:const1flat} \\ & &2\sqrt{\frac{2}{3}}kv^{2/3}\,\delta N_1-\frac{4}{M_\usssPl^2}Nv^{1/3}\,\delta\pi_2=0\, . \label{eq:const2flat} \end{eqnarray} Moreover, when $\delta\gamma_A=0$, the linear constraints are given by \begin{eqnarray} \mathcal{D}^{(1)}&=&\pi_\phi\,\delta\phi-\frac{2}{\sqrt{3}}v^{2/3}\left(\delta\pi_1+\sqrt{2}\delta\pi_2\right)=0, \label{eq:diffeoflat} \\ \mathcal{S}^{(1)}&=&\frac{\pi_\phi}{v}\,\delta\pi_\phi+vV_{,\phi}\,\delta\phi-\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,\delta\pi_1=0\, .\label{eq:scalarflat} \end{eqnarray} We thus have four constraint equations, which allow us to express $\delta N$, $\delta N_1$, $\delta\pi_1$ and $\delta\pi_2$ in terms of the other phase-space variables (namely $\delta\phi$ and $\delta\pi_\phi$), and one obtains \begin{eqnarray} \label{eq:deltaNphi:flat} \frac{\delta N}{N} & =& -\frac{\pi_\phi}{v\theta}\,\delta\phi\, ,\\ \label{eq:deltaN1phi:flat} k\frac{\delta N_1}{N}&=&\left(\frac{3}{2M_\usssPl^2}\frac{\pi_\phi}{v}-\frac{V_{,\phi}}{\theta}\right)\delta\phi-\frac{\pi_\phi}{v^2\theta}\,\delta\pi_\phi\, ,\\ \label{eq:pi1phi:flat} \delta\pi_1&=&\frac{M_\usssPl^2}{\sqrt{3}v^{2/3}\theta}\left(\frac{\pi_\phi}{v}\,\delta\pi_\phi+vV_{,\phi}\,\delta\phi\right)\, , \\ \label{eq:pi2phi} \delta\pi_2&=&-\frac{M_\usssPl^2}{\sqrt{6}}\frac{\pi_\phi}{v^{5/3}\theta}\,\delta\pi_\phi+\frac{M_\usssPl^2}{\sqrt{6}}\left(\frac{3}{2M_\usssPl^2}\frac{\pi_\phi}{v^{2/3}}-\frac{v^{1/3}V_{,\phi}}{\theta}\right)\delta\phi\, . \end{eqnarray} One thus has a single physical scalar degree of freedom, described by $\delta\phi$ and $\delta\pi_\phi$, the dynamics of which is given by the last two equations of \Eq{eq:eom:gen} where the above replacements are made. One can also check that, still with the above replacements, the equations of motion for $\delta\pi_1$ and $\delta\pi_2$, \textsl{i.e.~} the second and the fourth entries of \Eq{eq:eom:gen}, are automatically satisfied. \subsubsection{Newtonian gauge} \label{sec:Newtonian:def} Let us now consider the Newtonian gauge, which corresponds to setting $\delta\gamma_2=\delta N_1=0$. For $\delta\gamma_2$ to remain zero, the third entry of \Eq{eq:eom:gen} has to vanish, which leads to $\delta\pi_2=0$. The gravitational anisotropic degree of freedom is therefore entirely frozen. Similarly, for $\delta\pi_2$ to remain zero, the fourth entry of \Eq{eq:eom:gen} has to vanish, which leads to \begin{eqnarray} \label{eq:deltaN:deltagamma1:Newtonian} \frac{k^2}{\sqrt{6}v^{1/3}}\,\delta N +\frac{N}{6\sqrt{2}v}k^2\delta\gamma_1=0\, . \end{eqnarray} Moreover, the two linear constraints read \begin{eqnarray} \label{eq:D1:Newtonian} \mathcal{D}^{(1)}&=&\pi_\phi\,\delta\phi+\frac{1}{2\sqrt{3}}v^{1/3}\theta\,\delta\gamma_1-\frac{2}{\sqrt{3}}v^{2/3}\delta\pi_1=0, \\ \mathcal{S}^{(1)}&=&-\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,\delta\pi_1-\frac{v^{1/3}}{\sqrt{3}}\left(\frac{\pi_\phi^2}{v^2}-V+M_\usssPl^2\frac{k^2}{v^{2/3}}\right)\delta\gamma_1+\frac{\pi_\phi}{v}\,\delta\pi_\phi+vV_{,\phi}\delta\phi=0.
\nonumber \\ \end{eqnarray} With the above three constraint equations, one can either fix $\delta N$, $\delta\gamma_1$ and $\delta\pi_1$, and work with $(\delta\phi,\delta\pi_\phi)$ as describing the remaining dynamical variable; or fix $\delta N$, $\delta\phi$ and $\delta\pi_\phi$, and work with $(\delta\gamma_1,\delta\pi_1)$ as describing the remaining dynamical variable; or any other combination. Let us mention that an alternative definition of the Newtonian gauge is to start from the conditions $\delta\gamma_2=\delta\pi_2=0$, since the third entry of \Eq{eq:eom:gen} then implies that $\delta N_1=0$. \subsubsection{Generalised synchronous gauge} \label{sec:Generalised:Synchronous:def} The generalised synchronous gauges are such that neither the lapse function nor the shift vector are perturbed, $\delta N = \delta N_1=0$. They can be viewed as gauges in which the global time and space coordinates of the perturbed FLRW background exactly coincide with the time and space coordinates of the (strictly homogeneous and isotropic) FLRW space-time, irrespective of the initial choice of the background lapse function. Choosing for instance the background time to be cosmic time [so $N(\tau)=1$], this boils down to the standard synchronous gauge. Let us note that the conditions $\delta N = \delta N_1=0$ do not impose further constraints from the equations of motion~\eqref{eq:eom:gen}, contrary to what was obtained in the spatially-flat and Newtonian gauges. As a consequence, the generalised synchronous gauges are not entirely fixed and still contain two spurious gauge modes. \subsubsection{Uniform-expansion gauge} \label{sec:Uniform:Expansion:Gauge:def} As explained in \Refc{Pattison:2019hef}, the stochastic-$\delta N$ formalism~\cite{Vennin:2015hra, Fujita:2013cna} is formulated in the uniform-expansion gauge, in which the perturbation of the integrated expansion \begin{eqnarray} \mathcal{N}_{\mathrm{int}} := \frac{1}{3} \int \nabla_\mu n^\mu\, n_\nu \mathrm{d} x^{\nu} \end{eqnarray} is set to zero. In this expression, $\nabla$ denotes the covariant derivative, and $n^{\mu}$ is the unit vector such that the form $n_\mu$ is orthogonal to the spatial hypersurfaces $\Sigma_\tau$ (see \App{app:expansion} for an explicit calculation of the expansion rate $ \nabla_\mu n^\mu$, in particular \Eq{eq:ExpRateGen}, and of the integrated expansion $\mathcal{N}_{\mathrm{int}}$). The reason for setting $\delta \mathcal{N}_{\mathrm{int}}=0$ is that, in order to relate the large-scale curvature perturbation with the fluctuation in the number of $e$-folds~ $\mathcal{N}$, as implied by the $\delta N$ formalism, the Langevin equations of stochastic inflation have to be solved with the number of $e$-folds~ as the time variable, and this amounts to fixing $\mathcal{N}_{\mathrm{int}}$ across the different patches of the universe. In \App{app:expansion}, it is shown that at the background level, $\mathcal{N}_{\mathrm{int}} = \ln(v)/3$, see \Eq{eq:NintHom}, \textsl{i.e.~} the integrated expansion is nothing but the number of $e$-folds~ $\mathcal{N}$. At first order in the perturbation variables, one obtains $\delta \mathcal{N}_{\mathrm{int}} = \delta\gamma_1/(2\sqrt{3} v^{2/3}) + k \int \delta N_1\mathrm{d}\tau/3$, see \Eq{eq:delta:N:int:CPT}. The uniform-expansion gauge thus corresponds to setting \begin{eqnarray} \delta\gamma_1=\delta N_1=0\, .
\end{eqnarray} Note that the vanishing of $\delta\dot{\gamma}_1$ in \Eq{eq:eom:gen} leads to an additional constraint equation, so, together with the two first-order constraint equations, one can fix five out of the eight variables. As a consequence, the uniform-expansion gauge is not entirely fixed and still contains one spurious gauge mode. This may seem a priori problematic~\cite{Figueroa:2021zah}, but as we will show below in \Sec{sec:Uniform:Expansion:Gauge:SU:def}, in the separate-universe framework (where stochastic inflation is formulated), the gauge becomes unequivocally defined. \section{Separate universe} \label{sec:SepUniv} Let us now describe the separate-universe approach~\cite{Salopek:1990jq, Sasaki:1995aw, Wands:2000dp, Lyth:2003im, Rigopoulos:2003ak, Lyth:2005fi} (also known as the quasi-isotropic approach~\cite{Lifshitz:1960, Starobinsky:1982mr, Comer:1994np, Khalatnikov:2002kn}), which consists in introducing local perturbations to the homogeneous and isotropic problem described in \Sec{ssec:bck}, as a proxy for the full perturbative problem studied in \Sec{sec:cosmopert}. Our goal is to establish this formalism (and the corresponding validity conditions) in the Hamiltonian framework, complementing analyses performed in the Lagrangian approach such as \Refc{Pattison:2019hef}. \subsection{Homogeneous and isotropic perturbations} \label{sec:SepUniv:variables} The starting point of the separate-universe approach is to perturb the homogeneous and isotropic background variables introduced in \Sec{ssec:bck}, namely $N\to N(\tau)+\overline{\delta N}$, $(v,\theta)\to[v(\tau)+\overline{\delta v},\theta(\tau)+\overline{\delta\theta}]$, and $(\phi,\pi_\phi)\to[\phi(\tau)+\overline{\delta\phi},\pi_\phi(\tau)+\overline{\delta\pi_\phi}]$. Hereafter, an overbar denotes perturbations of the background variables, which a priori differ from the perturbation variables used in the full treatment of \Sec{sec:cosmopert}. As we will show, they however succeed in capturing their behaviour above the Hubble radius, \textsl{i.e.~} when $k\ll aH$. In order to make this statement explicit, one must first determine to which background perturbations the variables introduced in \Sec{sec:cosmopert} correspond, \textsl{i.e.~} one must establish a ``dictionary'' between CPT~and the separate universe approach. For obvious reasons, $\delta N$, $\delta\phi$ and $\delta\pi_\phi$ correspond to $\overline{\delta N}$, $\overline{\delta\phi}$ and $\overline{\delta\pi_\phi}$ respectively. Since the shift $N^i$ vanishes at the background level, there is no perturbed shift in the separate-universe approach, \textsl{i.e.~} $\overline{\delta N_1}=0$. For the gravitational sector, since $\gamma_{ij}(\tau)=v^{2/3}\widetilde{\gamma}_{ij}$ at the background level, one has $\gamma_{ij}\to (v+\overline{\delta v})^{2/3}\widetilde{\gamma}_{ij}$ in the separate universe, which leads to $\overline{\delta\gamma_{ij}}=[(v+\ovl{\delta v})^{2/3}-v^{2/3}] \widetilde{\gamma}_{ij}$. Making use of \Eq{eq:delta:gamma:A:delta:gamma:ij}, this gives rise to $\overline{\delta\gamma_1} = \sqrt{3}[(v+\ovl{\delta v})^{2/3}-v^{2/3}] $ and $\overline{\delta\gamma_2}=0$. Similarly, combining \Eqs{eq:pi_ij:pi_p} and~\eqref{eq:theta:def}, at the background level one has $\pi^{ij}(\tau)=v^{1/3}\theta\widetilde{\gamma}^{ij}/2$, which leads to $\overline{\delta\pi^{ij}}=[(v+\ovl{\delta v})^{1/3}(\theta+\ovl{\delta\theta})-v^{1/3}\theta]\widetilde{\gamma}^{ij}/2$.
Making use of \Eq{eq:delta:gamma:A:delta:gamma:ij} again, this gives $\overline{\delta\pi_1}=\sqrt{3}[(v+\ovl{\delta v})^{1/3}(\theta+\ovl{\delta\theta})-v^{1/3}\theta]/2$ and $\overline{\delta\pi_2}=0$. These formulae are summarised in Table~\ref{table:correspondence}. \begin{table}[h!] \centering \begin{tabular}{ |c|c| } \hline CPT~& separate-universe approach \\ \hline \hline $\delta N$ & $\overline{\delta N}$ \\ \hline $\delta N_1$ & 0 \\ \hline $\delta\gamma_1$ & $\overline{\delta\gamma_1}=\sqrt{3}[(v+\ovl{\delta v})^{2/3}-v^{2/3}] $\\ \hline $\delta\pi_1$ & $\overline{\delta\pi_1}=\frac{\sqrt{3}}{2}[(v+\ovl{\delta v})^{1/3}(\theta+\ovl{\delta\theta})-v^{1/3}\theta]$\\ \hline $\delta\gamma_2$ & $0$\\ \hline $\delta\pi_2$ & $0$\\ \hline $\delta\phi$ & $\overline{\delta\phi}$\\ \hline $\delta\pi_\phi$ & $\overline{\delta\pi_\phi}$\\ \hline \end{tabular} \caption{Correspondence between variables in CPT~and in the separate-universe approach.} \label{table:correspondence} \end{table} One can see that the anisotropic degrees of freedom, $\delta\gamma_2$ and $\delta\pi_2$, as well as the shift, are simply absent in the separate-universe approach. One can also check that the transformation from $(\overline{\delta\gamma_1},\overline{\delta\pi_1})$ to $(\overline{\delta v},\overline{\delta\theta})$ is canonical, as it should. \subsection{Dynamics of the background perturbations} \label{sssec:sepunivgen} The dynamics of the perturbations in the separate-universe approach can be obtained by plugging the replacement rules derived in \Sec{sec:SepUniv:variables} into the Hamiltonian~\eqref{eq:full:Hamiltonian}, whose contributions are given in \Eqs{eq:scgg}-\eqref{eq:dcsg}. This gives rise to \begin{eqnarray} \mathcal{C}&=&\frac{-3}{4M_\usssPl^2}v\theta^2\left(1+\frac{\ovl{\delta\gamma_1}}{\sqrt{3} v^{2/3}}\right)^{1/2}\left(1+\frac{2}{\sqrt{3}}\frac{\ovl{\delta\pi_1}}{v^{1/3}\theta}\right)^2\nonumber \\ &&+\frac{\pi_\phi^2}{2v}\left(1+\frac{\ovl{\delta\gamma_1}}{\sqrt{3}v^{2/3}}\right)^{-3/2}\left(1+\frac{\ovl{\delta\pi_\phi}}{\pi_\phi}\right)^2+v\left(1+\frac{\ovl{\delta\gamma_1}}{\sqrt{3}v^{2/3}}\right)^{3/2}V\left(\phi+\ovl{\delta\phi}\right), \label{eq:Hamiltonian:SU:full} \end{eqnarray} where the smeared constraint is $C=\int \mathrm{d}^3 \vec{x}\, N\mathcal{C}$, and where $\theta^2$ can be expressed using the background constraint equation~\eqref{eq:ConstHom}. Here we have parametrised the gravitational perturbations with $\ovl{\delta\gamma_1}$ and $\ovl{\delta\pi_1}$ instead of $\ovl{\delta v}$ and $\ovl{\delta\theta}$, to allow for a more direct comparison with CPT. The two sets of variables are however simply related through the formulae given in Table~\ref{table:correspondence}, and below we will also provide the result in terms of $\ovl{\delta v}$ and $\ovl{\delta\theta}$, since they have the advantage of providing a simple interpretation as perturbations of the volume and of the expansion rate. Note that we have also dropped the term proportional to $\partial_i\phi \partial_j \phi$ since $\ovl{\delta\phi}$ is a homogeneous degree of freedom. Similarly, since only homogeneous and isotropic perturbations are included in the induced metric, its Ricci scalar vanishes, \textsl{i.e.~} $\mathcal{R}(\tau)=\ovl{\delta\mathcal{R}}=0$, which explains why the term proportional to $\mathcal{R}$ in \Eq{eq:scgg} is absent too. The next step is to expand \Eq{eq:Hamiltonian:SU:full} to the quadratic order in perturbations, see the discussion above \Eq{eq:Expanded:Hamiltonian}.
It gives rise to the following Hamiltonian \begin{eqnarray} \ovl{C}\left[N+\overline{\delta N}\right]=N\,\mathcal{S}^{(0)}+\left(\overline{\delta N}\,\ovl{\mathcal{S}^{(1)}}+N\,\ovl{\mathcal{S}^{(2)}}\right) , \end{eqnarray} where $\mathcal{S}^{(0)}$ is given by \Eqs{eq:BckgConsGrav} and~\eqref{eq:BckgConsPhi}, and the perturbed scalar constraint at linear and quadratic order is \begin{eqnarray} \ovl{\mathcal{S}^{(1)}}&=&-\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,\ovl{\delta\pi_1}-\frac{v^{1/3}}{\sqrt{3}}\left(\frac{\pi_\phi^2}{v^2}-V\right)\,\ovl{\delta\gamma_1}+\frac{\pi_\phi}{v}\,\ovl{\delta\pi_\phi}+vV_{,\phi}\,\ovl{\delta\phi}\, , \label{eq:s1su} \\ \ovl{\mathcal{S}^{(2)}}&=&-\frac{v^{1/3}}{M_\usssPl^2}\left(\ovl{\delta\pi_1}\right)^2+\frac{1}{3v^{1/3}}\left(\frac{\pi_\phi^2}{v^2}+\frac{V}{2}\right)\left(\ovl{\delta\gamma_1}\right)^2+\frac{1}{2v}\left(\ovl{\delta\pi_\phi}\right)^2+\frac{v}{2}V_{,\phi,\phi}\left(\ovl{\delta\phi}\right)^2 \nonumber \\ &&-\frac{\theta}{2M_\usssPl^2}\left(\ovl{\delta\pi_1}\right)\left(\ovl{\delta\gamma_1}\right)-\frac{\sqrt{3}}{2}v^{1/3}\left(\frac{\pi_\phi}{v^2}\,\ovl{\delta\pi_\phi}-V_{,\phi}\,\ovl{\delta\phi}\right)\,\ovl{\delta\gamma_1}\, .\label{eq:s2su} \end{eqnarray} The separate-universe variables have to lie on the constraint $\ovl{\mathcal{S}^{(1)}}=0$, while $\ovl{\mathcal{S}^{(2)}}$ contributes to their dynamics, whose Hamilton equations are given by \begin{eqnarray} &&\left\{\begin{array}{l} \displaystyle\dot{\overline{\delta\gamma_1}}=-\frac{\sqrt{3}}{M_\usssPl^2}v^{2/3}\theta\,\overline{\delta N}-\frac{N}{M_\usssPl^2}\left({2v^{1/3}}\,\ovl{\delta\pi_1}+\frac{\theta}{2}\,\ovl{\delta\gamma_1}\right), \\ \displaystyle\dot{\overline{\delta\pi_1}}=\frac{v^{1/3}}{\sqrt{3}}\left(\frac{\pi^2_\phi}{v^2}-V\right)\ovl{\delta N}+N\left[-\frac{2}{3v^{1/3}}\left(\frac{\pi^2_\phi}{v^2}+\frac{V}{2}\right)\,\ovl{\delta\gamma_1}+\frac{\theta}{2M_\usssPl^2}\,\ovl{\delta\pi_1}\right] \\~~~~~~~~\displaystyle +N\frac{\sqrt{3}}{2}v^{1/3}\left(\frac{\pi_\phi}{v^2}\,\ovl{\delta\pi_\phi}-V_{,\phi}\,\ovl{\delta\phi}\right), \end{array}\right. \label{eq:SepUnivFull}\\ && \\ &&\left\{\begin{array}{l} \displaystyle\dot{\overline{\delta\phi}}=\frac{\pi_\phi}{v}\,\overline{\delta N}+N\left(\frac{1}{v}\,\overline{\delta\pi_\phi}-\frac{\sqrt{3}}{2}\frac{\pi_\phi}{v^{5/3}}\,\ovl{\delta\gamma_1}\right), \\ \displaystyle\dot{\overline{\delta\pi_\phi}}=-vV_{,\phi}\,\overline{\delta N}-N\left(vV_{,\phi,\phi}\,\ovl{\delta\phi}+\frac{\sqrt{3}}{2}v^{1/3}V_{,\phi}\,\ovl{\delta\gamma_1}\right). \end{array}\right. \nonumber \end{eqnarray} By comparing those equations of motion with their CPT~counterpart, \Eqs{eq:EOM}, one notices that the contribution involving the diffeomorphism constraint is absent in the separate universe (SU). This is because $\mathcal{D}_i^{(1)}=i\, k_i\, \mathcal{D}^{(1)}$ is proportional to $k$, see \Eq{eq:D1i:D1}, so it indeed disappears at large scales. However, the constraint equation itself leads to a relationship between the perturbation variables that does not involve $k$, and which therefore contains non-trivial information even at large scales. The fact that it is lost in the separate-universe approach may therefore seem problematic a priori, and the consequences of this loss will be further analysed below.
At this stage, let us simply notice that an SU version of the diffeomorphism constraint can still be defined using the correspondence of Table~\ref{table:correspondence}: \begin{eqnarray} \label{eq:diff1:cpt} \ovl{\mathcal{D}}^{(1)} := \pi_\phi \ovl{\delta \phi} + \frac{1}{2\sqrt{3}} v^{1/3} \theta \ovl{\delta\gamma}_1 - \frac{2}{\sqrt{3}} v^{2/3} \ovl{\delta \pi}_1\,. \end{eqnarray} Using \Eq{eq:SepUnivFull}, one can readily show that $\dot{\ovl{\mathcal{D}}}^{(1)}=0$ as long as the linear scalar constraint $\ovl{\mathcal{S}}^{(1)}=0$ is satisfied. This implies that $\ovl{\mathcal{D}}^{(1)} $ is a conserved quantity in the SU approach. It is worth stressing that the initial value of $\ovl{\mathcal{D}}^{(1)} $ is usually set by CPT, which is employed to describe cosmological perturbations before they exit the Hubble radius. In CPT, $\mathcal{D}^{(1)}=0$, but this does not guarantee that $\ovl{\mathcal{D}}^{(1)} $ vanishes initially (hence at later times) since $\mathcal{D}^{(1)}$ and $\ovl{\mathcal{D}}^{(1)} $ generically differ.\footnote{We thank Diego Cruces for interesting discussions leading to this remark.} By comparing \Eqs{eq:diff1gen} and~\eqref{eq:diff1:cpt}, one notices that they coincide when $ 2 v^{1/3} \delta\pi_2+\theta \delta\gamma_2=0$. Therefore, by working in gauges where the anisotropic sector satisfies this constraint in CPT, one reinstates the diffeomorphism constraint in SU, $\ovl{\mathcal{D}}^{(1)}=0$. As we will see below, this condition is however not required for the SU approach to be reliable. As mentioned above, it is also interesting to cast the result in terms of the variables $\ovl{\delta v}$ and $\ovl{\delta\theta}$ for the gravitational sector, and one finds that the scalar constraint is given by \begin{eqnarray} \label{eq:S1:SU} \ovl{\mathcal{S}^{(1)}} = -\frac{3v\theta}{2M_\usssPl^2}\overline{\delta\theta}+vV_{,\phi}\overline{\delta\phi}+\frac{\pi^2_\phi}{v}\left(\frac{\overline{\delta\pi_\phi}}{\pi_\phi}-\frac{\overline{\delta v}}{v}\right), \end{eqnarray} that the diffeomorphism constraint reduces to \begin{eqnarray} \ovl{\mathcal{D}}^{(1)} = \pi_\phi \ovl{\delta\phi} - v \ovl{\delta\theta}\,, \end{eqnarray} and that the equations of motion read \begin{eqnarray} &&\left\{\begin{array}{l} \displaystyle\dot{\overline{\delta v}}=-\frac{3}{2M_\usssPl^2}v\theta\left(\overline{\delta N}+N\frac{\overline{\delta v}}{v}+N\frac{\overline{\delta\theta}}{\theta}\right), \\ \displaystyle\dot{\overline{\delta\theta}}=\frac{\pi^2_\phi}{v^2}\left(\overline{\delta N}-2N\frac{\overline{\delta v}}{v}+2N\frac{\overline{\delta\pi_\phi}}{\pi_\phi}\right), \end{array}\right. \label{eq:SepUnivVTheta}\\ &&\left\{\begin{array}{l} \displaystyle\dot{\overline{\delta\phi}}=\frac{\pi_\phi}{v}\,\left(\overline{\delta N}-N\frac{\overline{\delta v}}{v}+N\frac{\overline{\delta\pi_\phi}}{\pi_\phi}\right), \\ \displaystyle\dot{\overline{\delta\pi_\phi}}=-vV_{,\phi}\,\left(\overline{\delta N}+N\frac{\overline{\delta v}}{v}\right)-NvV_{,\phi,\phi}\,\overline{\delta\phi}\, . \end{array}\right. \label{eq:SepUnivVTheta:2} \end{eqnarray} Note that in order to simplify the equation of motion for the perturbed expansion rate, we made use of the scalar constraint at the background level and at first order, $\mathcal{S}^{(0)}=\ovl{\mathcal{S}^{(1)}}=0$.
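For concreteness, the conservation of $\ovl{\mathcal{D}}^{(1)}$ can be checked explicitly with these variables. Using the background equations of motion, \Eqs{eq:ConstHom}-\eqref{eq:Dot:theta} and~\eqref{eq:DotPhiPi}-\eqref{eq:DotPi}, whose explicit form, $\dot{v}=-3Nv\theta/(2M_\usssPl^2)$, $\dot{\theta}=N\pi_\phi^2/v^2$, $\dot{\phi}=N\pi_\phi/v$ and $\dot{\pi}_\phi=-NvV_{,\phi}$, can also be read off from the $\ovl{\delta N}$ coefficients of \Eqs{eq:SepUnivVTheta} and~\eqref{eq:SepUnivVTheta:2}, a direct computation gives \begin{eqnarray} \dot{\ovl{\mathcal{D}}}{}^{(1)}=\dot{\pi}_\phi\,\ovl{\delta\phi}+\pi_\phi\,\dot{\ovl{\delta\phi}}-\dot{v}\,\ovl{\delta\theta}-v\,\dot{\ovl{\delta\theta}}=-N\,\ovl{\mathcal{S}^{(1)}}\, , \end{eqnarray} where all the terms proportional to $\ovl{\delta N}$ cancel out. This confirms that $\ovl{\mathcal{D}}^{(1)}$ is conserved on the constraint surface $\ovl{\mathcal{S}^{(1)}}=0$.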
\\ An important remark is that, while the above formulae have been obtained by plugging the correspondence relations given in Table \ref{table:correspondence} into the full Hamiltonian~\eqref{eq:full:Hamiltonian}-\eqref{eq:dcsg}, an alternative derivation would be to start from the Hamiltonian of the homogeneous and isotropic problem, \Eqs{eq:BckgConsGrav} and \eqref{eq:BckgConsPhi}, or even from the equations of motion of the homogeneous and isotropic problem, \textsl{i.e.~} \Eqs{eq:ConstHom}-\eqref{eq:Dot:theta} and~\eqref{eq:DotPhiPi}-\eqref{eq:DotPi}, and plug in the correspondence relations at these levels. In \App{app:pert}, we show that these two alternative procedures yield exactly the same equations. In other words, it is equivalent to (i) first perturbing the system and then restricting the analysis to homogeneous and isotropic perturbations, and (ii) first imposing homogeneity and isotropy and then perturbing the reduced system. Let us also note that, once the phase space has been reduced to the separate-universe degrees of freedom, the Hamiltonian~\eqref{eq:Hamiltonian:SU:full} is exact, \textsl{i.e.~} it does not contain any perturbative expansion. As a consequence, even though we have derived the relevant dynamical equations at leading order, one could treat the separate universe non-perturbatively, by imposing the vanishing of \Eq{eq:Hamiltonian:SU:full} (this is the scalar constraint equation) and using the same expression to derive the (non-linear) equations of motion.\footnote{We note that the equivalence with the alternative derivation consisting in including the separate-universe deviations in the homogeneous and isotropic Hamiltonian, \Eqs{eq:BckgConsGrav} and \eqref{eq:BckgConsPhi}, or even directly in the homogeneous and isotropic equations of motion, \Eqs{eq:ConstHom}-\eqref{eq:Dot:theta} and~\eqref{eq:DotPhiPi}-\eqref{eq:DotPi}, also holds at the non-linear level, hence at all orders in the separate-universe perturbations (see \App{app:pert}).} Below we check the agreement between the separate-universe approach and standard CPT~at leading order, but one should bear in mind that the separate-universe approach is non-perturbative. \subsection{Fixing the gauge} \label{sec:SU:gauge} We end this section by mentioning that gauge fixing can also be performed in the separate-universe framework, which contains five variables, namely $\ovl{\delta N}$, $\ovl{\delta\phi}$, $\ovl{\delta\pi_\phi}$, $\ovl{\delta\gamma_1}$ and $\ovl{\delta\pi_1}$. Since the theory has a single Lagrange multiplier, namely the lapse function, a single scalar combination of the perturbation variables can be set to zero (compared to two in CPT, see footnote~\ref{footnote:ChangeOfCoordinates}). The vanishing of its equation of motion then leads to a constraint equation, which, added to the scalar constraint equation, leaves two phase-space variables free, \textsl{i.e.~} one scalar physical degree of freedom (the same number of physical degrees of freedom as in CPT). As explained around \Eq{eq:MS:def} in the context of the full CPT, this physical degree of freedom can be parametrised by the Mukhanov-Sasaki variable, which in the separate-universe framework reads \begin{eqnarray} \label{eq:MS:SU:def} \ovl{Q}_{{}_\mathrm{MS}} := \sqrt{\frac{v}{N}}\ovl{\delta\phi} + \frac{M_\usssPl^2\pi_\phi}{\sqrt{3N}\theta v^{7/6}}\ovl{\delta\gamma_1}\, .
\end{eqnarray} Making use of the above constraint and dynamical equations, it obeys the second-order equation of motion \begin{eqnarray} \label{eq:Q:eom:SU} \ddot{\ovl{Q}}_{{}_\mathrm{MS}} -\frac{\ddot{z}}{z}\ovl{Q}_{{}_\mathrm{MS}} =\left(4 \frac{\pi_\phi}{\theta v} V - 2 V_{,\phi}\right) \ovl{\mathcal{D}}^{(1)}=\sqrt{\frac{\epsilon_1}{2}}\epsilon_2 M_\usssPl H^2 \ovl{\mathcal{D}}^{(1)}\, , \end{eqnarray} where the second expression casts the right-hand side in terms of the first and second Hubble-flow parameters, with $\epsilon_2 \equiv \mathrm{d}\ln\epsilon_1/\mathrm{d}\ln(v^{1/3})$. This needs to be compared to its CPT counterpart, namely \Eq{eq:MS:CPT}. Two differences can be noticed. First, the term proportional to $k^2 Q_{{}_\mathrm{MS}}$ is absent in the SU, since gradient terms are indeed negligible at large scales. Second, a right-hand side involving the SU diffeomorphism constraint is present in \Eq{eq:Q:eom:SU}. As noted above, a specific constraint can be imposed in the anisotropic sector to make it vanish. Otherwise, $ \ovl{\mathcal{D}}^{(1)}$ is a constant, hence the right-hand side is either almost constant (as in slow-roll inflation), or decays (as in ultra-slow-roll inflation, where it decays as $1/v$). In either case, it is much smaller than the left-hand side, which necessarily grows ($Q_{{}_\mathrm{MS}} \propto v^{1/3}$ hence $\ddot{Q}_{{}_\mathrm{MS}} \propto v$, both in slow roll and ultra slow roll). As a consequence, the term arising from the diffeomorphism constraint can only affect sub-dominant modes on super-Hubble scales, which must be discarded in a gradient expansion anyway. We conclude that it does not jeopardise the SU approach. Alternatively, let us see how the gauges introduced in \Sec{ssec:gaugefix} carry over to the separate-universe picture. \subsubsection{Spatially-flat gauge} In the spatially-flat gauge introduced in \Sec{sec:Spatially:Flat:def}, $\delta\gamma_{ij}=0$, which simply translates into $\ovl{\delta\gamma_1}=0$ in the separate-universe approach. Requiring that $\dot{\ovl{\delta\gamma_1}}=0$ in \Eq{eq:SepUnivFull}, together with the vanishing of the linear scalar constraint given in \Eq{eq:s1su}, then leads to \begin{eqnarray} \label{eq:deltaNbar:SFgauge} \overline{\delta N}& =& -\frac{2M_\usssPl^2}{3}\frac{N}{\theta^2}\left(V_{,\phi}\,\overline{\delta\phi}+\frac{\pi_\phi}{v^2}\,\overline{\delta\pi_\phi}\right), \\ \ovl{\delta\pi_1}&=&\frac{M_\usssPl^2 }{\sqrt{3}v^{2/3}\theta}\left(\frac{\pi_\phi}{v}\ovl{\delta\pi_\phi}+v V_{,\phi} \ovl{\delta\phi}\right). \label{eq:deltapi1bar:SFgauge} \end{eqnarray} All variables can therefore be expressed in terms of $\ovl{\delta\phi}$ and $\ovl{\delta\pi_\phi}$ only, whose dynamics is given by \Eqs{eq:SepUnivVTheta:2} where the above replacements are made (the resulting closed system is displayed explicitly below, after the discussion of the synchronous gauge). \subsubsection{Newtonian gauge} The Newtonian gauge was defined in \Sec{sec:Newtonian:def} with the condition $\delta\gamma_2=\delta\pi_2=0$, or equivalently $\delta\gamma_2=\delta N_1=0$. Since the corresponding variables are already set to zero in the separate-universe approach, see Table~\ref{table:correspondence}, these conditions yield no prescription in the separate universe. \subsubsection{Generalised synchronous gauge} The generalised synchronous gauge was defined in \Sec{sec:Generalised:Synchronous:def} with the conditions $\delta N=\delta N_1=0$, which here translate into $\ovl{\delta N}=0$ (the perturbed shift being already absent in the separate universe). As already noticed in \Sec{sec:Generalised:Synchronous:def}, no further constraint is imposed from the equations of motion, so that gauge is not entirely fixed.
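Before turning to the uniform-expansion gauge, let us display the closed system announced above in the spatially-flat gauge; it follows from a direct substitution, with no ingredient beyond the equations already quoted. Inserting \Eq{eq:deltaNbar:SFgauge} into \Eqs{eq:SepUnivVTheta:2} with $\ovl{\delta v}=0$ (which is equivalent to $\ovl{\delta\gamma_1}=0$, see Table~\ref{table:correspondence}), one obtains \begin{eqnarray} \dot{\ovl{\delta\phi}}&=&N\left[\frac{1}{v}\left(1-\frac{2M_\usssPl^2\pi_\phi^2}{3\theta^2 v^2}\right)\ovl{\delta\pi_\phi}-\frac{2M_\usssPl^2\pi_\phi V_{,\phi}}{3v\theta^2}\,\ovl{\delta\phi}\right]\, , \nonumber \\ \dot{\ovl{\delta\pi_\phi}}&=&N\left[\left(\frac{2M_\usssPl^2 vV_{,\phi}^2}{3\theta^2}-vV_{,\phi,\phi}\right)\ovl{\delta\phi}+\frac{2M_\usssPl^2\pi_\phi V_{,\phi}}{3v\theta^2}\,\ovl{\delta\pi_\phi}\right]\, . \end{eqnarray} Note that this linear system is traceless, as expected for a Hamiltonian flow. As will be seen shortly, the same system also governs the uniform-expansion gauge.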
\subsubsection{Uniform-expansion gauge} \label{sec:Uniform:Expansion:Gauge:SU:def} In the uniform-expansion gauge introduced in \Sec{sec:Uniform:Expansion:Gauge:def}, $\delta N_1=\delta\gamma_1=0$, which simply translates into $\ovl{\delta\gamma_1}=0$ in the separate-universe approach. One thus obtains \Eqs{eq:deltaNbar:SFgauge} and~\eqref{eq:deltapi1bar:SFgauge} as in the separate-universe spatially-flat gauge, so the uniform-expansion gauge is unequivocally defined in the separate-universe framework. \section{Separate universe versus cosmological-perturbation theory} \label{sec:suvspert} Having studied scalar fluctuations in CPT, see \Sec{sec:cosmopert}, and in the separate-universe approach, see \Sec{sec:SepUniv}, we are now in a position where we can compare the two and derive the conditions under which the latter provides a reliable approximation of the former. This will first be done by leaving the gauge unfixed, where we will recover the conditions obtained in \Refc{Pattison:2019hef} from an analysis performed in the Lagrangian framework. We will then consider the gauges introduced in \Sec{ssec:gaugefix}, where we will show that the agreement between the gauge-fixing procedures in the two frameworks is not always guaranteed, and that it sometimes requires specific matching prescriptions that we will establish. \subsection{Arbitrary gauge} \label{ssec:comp} For the linear scalar constraint, one has to compare \Eq{eq:s1su} with \Eq{eq:scal1gen:simp} where the replacements outlined in Table~\ref{table:correspondence} are performed. One can see that the two constraints are the same, provided that \begin{eqnarray} \frac{k^2}{v^{2/3}}\ll\frac{1}{M_\usssPl^2}\left|\frac{\pi_\phi^2}{v^2}-V\right|\, .\label{eq:lss1} \end{eqnarray} The case of the diffeomorphism constraint was already discussed around \Eq{eq:diff1:cpt}. For the quadratic scalar constraint, one has to compare \Eqs{eq:scalconst2} and~\eqref{eq:s2su}. For the terms proportional to $|\delta\phi|^2$ to match, one must impose \begin{eqnarray} \frac{k^2}{v^{2/3}}&\ll&\left|V_{,\phi,\phi}\right| \label{eq:lss3}\, , \end{eqnarray} which implies that the physical wavenumber is much smaller than the mass of the scalar field. The terms involving gravitational perturbations require more attention. They can be written in matrix form as $(\delta \gamma_1, \delta\gamma_2) M (\delta \gamma_1^\star, \delta\gamma_2^\star) ^\mathrm{T} $, where \begin{eqnarray} M=\begin{pmatrix} \frac{1}{3v^{1/3}}\left(\frac{\pi_\phi^2}{v^2}+\frac{V}{2} - \frac{M_\usssPl^2 k^2}{4v^{2/3}} \right) & \frac{\sqrt{2}M_\usssPl^2}{24v}k^2 \\ \frac{\sqrt{2}M_\usssPl^2}{24v}k^2 & \frac{1}{3v^{1/3}}\left(\frac{\pi_\phi^2}{v^2}+\frac{V}{2} - \frac{M_\usssPl^2 k^2}{8v^{2/3}} \right) \end{pmatrix} \label{eq:M:matrix:def} \end{eqnarray} is a symmetric matrix that can be read off from \Eq{eq:scalconst2}, and whose eigenvalues are given by \begin{eqnarray} \lambda_1=\frac{2}{3v^{1/3}}\left(\frac{\pi_\phi^2}{v^2}+\frac{V}{2}\right) \quad\quad\text{and}\quad\quad \lambda_2=\lambda_1-\frac{1}{4}\frac{M_\usssPl^2 k^2}{v}\, . \end{eqnarray} For the $k$-dependent term to play a negligible role, one thus has to impose \begin{eqnarray} \frac{k^2}{v^{2/3}}&\ll&\frac{1}{M_\usssPl^2}\left|\frac{\pi_\phi^2}{v^2}+\frac{V}{2}\right|.
\label{eq:lss2} \end{eqnarray} When the conditions~\eqref{eq:lss3} and~\eqref{eq:lss2} hold, \Eq{eq:scalconst2} reduces to \begin{eqnarray} \mathcal{S}^{(2)}(k\to 0)&\simeq & - \frac{v^{1/3}}{M_\usssPl^2}\left|\delta\pi_1\right|^2 +\frac{1}{3v^{1/3}}\left(\frac{\pi^2_\phi}{v^2}+\frac{V}{2}\right)\left|\delta\gamma_1\right|^2 +\frac{\left|\delta\pi_\phi\right|^2}{2v} + \frac{v}{2} V_{,\phi,\phi} \left|\delta\phi\right|^2\nonumber \\ &&-\frac{\theta}{2M_\usssPl^2}\mathrm{Re}\left[\left({\delta\pi^\star_1}\right)\left({\delta\gamma_1}\right)\right] -\frac{\sqrt{3}}{2}v^{1/3} \left\{\frac{\pi_\phi}{v^2}\mathrm{Re}\left[\left({\delta\pi^\star_\phi}\right)\left({\delta\gamma_1}\right)\right] -V_{,\phi} \mathrm{Re}\left[\left({\delta\phi^\star}\right)\left({\delta\gamma_1}\right)\right] \right\} \nonumber \\ && + \frac{2 v^{1/3}}{M_\usssPl^2} \left|\delta\pi_2\right|^2 + \frac{1}{3v^{1/3}}\left(\frac{\pi^2_\phi}{v^2}+\frac{V}{2} \right) \left|\delta\gamma_2\right|^2 +\frac{\theta}{M_\usssPl^2}\mathrm{Re}\left[\left({\delta\pi^\star_2}\right)\left({\delta\gamma_2}\right)\right] , \end{eqnarray} which has to be compared to \Eq{eq:s2su}. The separate-universe quadratic constraint, $\ovl{\mathcal{S}^{(2)}}$, can be formally matched to the first two lines of the above limit (note that cosmological perturbations are real-valued in real space). As expected, it is however unable to capture the last line of the above expression, which contains the anisotropic gravitational perturbations. Nonetheless, it is important to stress that in the above limit, these anisotropic degrees of freedom decouple from the isotropic ones. This is why, at large scales, the dynamics of the isotropic cosmological perturbations is independent of the anisotropic sector, and is thus correctly described by the separate-universe approach. These considerations thus allow us to establish the following statement:\\ \textit{On large scales, the separate-universe framework, in which the homogeneous and isotropic problem is perturbed (either at the level of its Hamiltonian or at the level of its dynamical equations), is equivalent to the full CPT~where the anisotropic degrees of freedom are set to zero.}\\ The next question is to determine whether or not it is legitimate to set the anisotropic degrees of freedom to zero, \textsl{i.e.~} under which condition the anisotropic degrees of freedom are negligible compared with the isotropic ones. The answer to that question is necessarily gauge dependent, since the relative amplitude of both sets of degrees of freedom depends on the gauge. This is why, in the remaining part of this section, we will further investigate the gauges introduced in \Secs{ssec:gaugefix} and~\ref{sec:SU:gauge}. But before moving on to that discussion, two remarks are in order. First, at the gauge-invariant level, one can compare the separate-universe approach and CPT~by inspecting the equations of motion for the Mukhanov-Sasaki variable, \textsl{i.e.~} \Eqs{eq:MS:CPT} and~\eqref{eq:Q:eom:SU}. Under the condition~\eqref{eq:lss3}, the two coincide (the $k^2$ term in the former and the right-hand side of the latter being both negligible, as argued above), which confirms the validity of the separate-universe approach. Second, the three conditions obtained on the amplitude of the wavenumber, \textsl{i.e.~} \Eqs{eq:lss1}, \eqref{eq:lss3} and~\eqref{eq:lss2}, can be summarised as follows.
Upon writing $k=\sigma a H$, and using the Friedmann and Raychaudhuri equations~\eqref{eq:Friedmann:usual} and~\eqref{eq:Raychaudhury}, they give rise to \begin{eqnarray} \label{eq:sigma:cond} \sigma\ll \sqrt{\vert \eta \vert}, \sqrt{\vert 1+ 3w \vert}, \sqrt{\vert 1+ \frac{3}{5}w \vert }\, . \end{eqnarray} Indeed, since $\pi_\phi^2/(2v^2)$ and $V$ are respectively the kinetic and potential energy densities of the scalar field, one has $\pi_\phi^2/v^2-V=\frac{\rho}{2}(1+3w)$ and $\pi_\phi^2/v^2+V/2=\frac{5\rho}{4}(1+\frac{3}{5}w)$, so that, combined with the Friedmann equation $3M_\usssPl^2H^2=\rho$, \Eqs{eq:lss1} and~\eqref{eq:lss2} yield the last two conditions, while \Eq{eq:lss3} gives the first one. Here, $\eta\equiv V''/H^2$ is the so-called ``eta parameter'', which measures the squared mass of the field in Hubble units. In the context of inflation, it is given by $\eta\simeq 6\epsilon_1-3\epsilon_2/2$, where $\epsilon_1:=-\mathrm{d}(\ln H)/(\mathrm{d}\mathcal{N})$ and $\epsilon_2:=\mathrm{d}(\ln \epsilon_1)/(\mathrm{d}\mathcal{N})$ are the first two slow-roll parameters. It is therefore a small parameter. The quantity $w$ denotes the equation-of-state parameter, which in inflation differs from $-1$ by slow-roll corrections. Hence the second and third upper bounds are of order one. The most stringent constraint therefore comes from the eta parameter\footnote{It is worth noting that this might be different in a non-inflationary context, for instance when the universe transits from an accelerated expansion to a decelerated one (or vice-versa) for which $ \sqrt{\vert 1+ 3w \vert}$ vanishes. We also note that the last constraint is always of order one or larger unless one is considering matter contents violating the null energy condition, \textsl{i.e.~} $w<-1$.} and imposes the restriction to super-Hubble wavelengths. We finally stress that this set of conditions is gauge-dependent in the sense that some of them may not be mandatory in some specific gauges. For instance, the constraints~\eqref{eq:lss1} and~\eqref{eq:lss2} are not necessary when working in the uniform-expansion gauge or in the spatially-flat gauge, in which $\delta\gamma_1$ is imposed to be zero. \subsection{Fixing the gauge} \label{sec:US:vs:CPT:gauge} Let us now compare the separate-universe approach and CPT~in the few gauges discussed in \Secs{ssec:gaugefix} and~\ref{sec:SU:gauge}. \subsubsection{Spatially-flat gauge} The spatially-flat gauge is unequivocally defined both in CPT~and in the separate-universe approach. However, the gauge-fixing procedure proceeds differently in these two frameworks. Indeed, even though the same expression is obtained for the perturbed momentum of the induced metric, see \Eqs{eq:pi1phi:flat} and~\eqref{eq:deltapi1bar:SFgauge}, the two procedures lead to different expressions for the perturbed lapse and shift, see \Eqs{eq:deltaNphi:flat} and~\eqref{eq:deltaNbar:SFgauge}. This clearly violates the correspondences of Table~\ref{table:correspondence}. Another manifestation of this mismatch comes from noticing that applying the correspondence of Table~\ref{table:correspondence} to \Eq{eq:deltaN1phi:flat} leads to a relationship between $\ovl{\delta\phi}$ and $\ovl{\delta\pi_\phi}$ that is clearly not satisfied in the separate-universe picture. The reason for these discrepancies can be traced back to the fact that $k\delta N_1$ is not $k$-suppressed~\cite{Cruces:2021iwq}, see \Eq{eq:deltaN1phi:flat} again. We thus conclude that in the spatially-flat gauge, the separate-universe approach does not lead to the appropriate gauge fixing. \subsubsection{Newtonian gauge} As explained in \Sec{sec:SU:gauge}, since the Newtonian gauge consists in freezing the anisotropic degrees of freedom, it does not lead to any relevant constraint in the separate-universe framework.
This problem can be solved by considering an alternative definition of the Newtonian gauge by means of \Eqs{eq:deltaN:deltagamma1:Newtonian} and~\eqref{eq:D1:Newtonian}, \textsl{i.e.~} by imposing \begin{eqnarray} \label{eq:Newtonian:alternative:1} \frac{{\delta N}}{N}&=&-\frac{{\delta\gamma_1}}{2\sqrt{3} v^{2/3}}\, , \\ {\delta\pi_1} &=& \frac{\sqrt{3}\pi_\phi}{2 v^{2/3}}{\delta\phi} + \frac{\theta}{4v^{1/3}} {\delta\gamma_1}\, . \label{eq:Newtonian:alternative:2} \end{eqnarray} The fact that these two conditions lead to the same definition of the Newtonian gauge as the one introduced in \Sec{sec:Newtonian:def} (namely $\delta\gamma_2=\delta N_1=0$, or equivalently $\delta\gamma_2=\delta\pi_2=0$) can be seen as follows. Combining \Eq{eq:Newtonian:alternative:2} with the vanishing of \Eq{eq:diff1gen} first leads to $\theta\delta\gamma_2+2v^{1/3}\delta\pi_2=0$. By differentiating this relationship with respect to time, using the equations of motion~\eqref{eq:eom:gen}, one obtains $N\delta\gamma_2(2\pi_\phi^2/v^2+4V+M_\usssPl^2 k^2/v^{2/3})/6+N\theta v^{1/3} \delta\pi_2/M_\usssPl^2=0$, where we have used \Eq{eq:Newtonian:alternative:1} to simplify the result, together with the Friedmann equation~\eqref{eq:ConstHom}. The above two formulae then lead to $\delta\gamma_2=\delta\pi_2=0$, which indeed corresponds to the (original) definition of the Newtonian gauge. The advantage of defining the Newtonian gauge with \Eqs{eq:Newtonian:alternative:1} and~\eqref{eq:Newtonian:alternative:2} is that these two relations (more precisely their barred versions) give non-trivial constraints in the separate-universe framework. One may be concerned that the vanishing of the time derivative of \Eq{eq:Newtonian:alternative:2} in the separate universe leads to an additional constraint equation, which would make the gauge over-constrained. This is however not the case since \Eq{eq:Newtonian:alternative:2} comes from the vanishing of the diffeomorphism constraint, and as discussed in \Sec{sssec:sepunivgen}, in the separate universe one always has $\dot{\ovl{\mathcal{D}^{(1)}}}=0$ on the constraint surface. Furthermore, by construction, the gauge-fixing conditions~\eqref{eq:deltaN:deltagamma1:Newtonian} and~\eqref{eq:D1:Newtonian} are properly mapped through the correspondences of Table~\ref{table:correspondence}. This makes the Newtonian gauge well behaved from the separate-universe perspective, provided that the definition~\eqref{eq:Newtonian:alternative:1}-\eqref{eq:Newtonian:alternative:2} is employed. \subsubsection{Generalised synchronous gauge} As explained in \Secs{ssec:gaugefix} and~\ref{sec:SU:gauge}, the generalised synchronous gauges are under-constrained both in CPT~and in the separate-universe approach. Let us note that, in the latter case, one can use the same trick as above in the Newtonian gauge, and add the barred version of \Eq{eq:Newtonian:alternative:2}, \textsl{i.e.~} ${\ovl{\mathcal{D}^{(1)}}}=0$, in the definition of the generalised synchronous gauge. Together with $\ovl{\delta N}=0$, this fully specifies that gauge in the separate-universe framework and makes it well behaved. This may also offer a way to cure the synchronous gauge in CPT. This is because, as pointed out above, the condition ${\ovl{\mathcal{D}^{(1)}}}=0$ is equivalent to imposing $ \theta \delta\gamma_2+2v^{1/3} \delta\pi_2=0$ in the anisotropic sector of CPT, which may fix the remaining gauge degrees of freedom. We plan to investigate this possibility in a future work.
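For completeness, the relation underlying the last statement can be made explicit: it follows from simply rearranging the terms of \Eq{eq:diff1gen}, which gives \begin{eqnarray} \mathcal{D}^{(1)}=\left[\pi_\phi\,\delta\phi+\frac{1}{2\sqrt{3}}v^{1/3}\theta\,\delta\gamma_1-\frac{2}{\sqrt{3}}v^{2/3}\delta\pi_1\right]-\sqrt{\frac{2}{3}}\,v^{1/3}\left(\theta\,\delta\gamma_2+2v^{1/3}\delta\pi_2\right)\, , \end{eqnarray} where the bracketed combination is the expression of $\ovl{\mathcal{D}}^{(1)}$, \Eq{eq:diff1:cpt}, evaluated on the CPT~variables through the correspondence of Table~\ref{table:correspondence}. The constraint $\mathcal{D}^{(1)}=0$ thus implies that this combination vanishes if and only if $\theta\,\delta\gamma_2+2v^{1/3}\delta\pi_2=0$, in agreement with the discussion of \Sec{sssec:sepunivgen}.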
\subsubsection{Uniform-expansion gauge} As explained in \Secs{ssec:gaugefix} and~\ref{sec:SU:gauge}, the uniform-expansion gauge is not fully defined in CPT, but it is unambiguous in the separate-universe approach. There are a priori several ways to complement the definition of that gauge in CPT, for instance by further constraining the anisotropic sector (such that it does not lead to additional conditions in the separate-universe framework). However, the comparison with the separate-universe version of the uniform-expansion gauge does not depend on that choice since, as pointed out above, $\delta\gamma_2$ and $\delta\pi_2$ decouple from the isotropic degrees of freedom in the large-scale limit. This makes the uniform-expansion gauge well behaved from the separate-universe perspective (whatever its completion in CPT). \section{Conclusions} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} \label{sec:conclusion} In this work, we have presented a Hamiltonian, phase-space description of Cosmological-Perturbation Theory (CPT) and of the separate-universe approach, when the matter content of the universe is made of a scalar field and gravity is described with general relativity. The separate-universe approach consists in perturbing the reduced Hamiltonian of the homogeneous and isotropic problem, or equivalently, in perturbing the dynamical equations obtained for that same problem.\footnote{This equivalence is valid even at the non-perturbative level, as proven in \App{app:pert}.} Our conclusion, stated at the end of \Sec{ssec:comp}, is that this matches CPT~at leading order in perturbations when restricted to isotropic degrees of freedom (\textsl{i.e.~} when setting the anisotropic perturbations to zero, $\delta N_1=\delta\gamma_2=\delta\pi_2=0$), provided that one considers sufficiently large scales, \textsl{i.e.~} scales satisfying \Eq{eq:sigma:cond}. This result is non-trivial since it implies that (i) phase-space reduction to the isotropic sector and (ii) derivation of the dynamical equations are two commuting procedures on large scales. Since the dynamics of isotropic and anisotropic degrees of freedom decouple at large scales, we have shown that the separate-universe formalism provides an accurate description of the large-scale gauge-invariant combinations such as the Mukhanov-Sasaki variable. Note that we have not made any specific assumption about the background solution, hence the validity of the separate-universe approach has been established for all kinds of cosmological evolution (slow-roll and non-slow-roll inflation --- in agreement with the conclusion of \Refc{Pattison:2019hef} but in contrast to what was found in \Refc{Cruces:2018cvq} --- expanding, even contracting, \textit{etc}.). When calculations need to be performed in a given gauge, one should bear in mind that not all gauges are well suited for the separate-universe approach. More precisely, we have found that in the spatially-flat gauge, the gauge-fixing procedure fails in the separate-universe approach because of the important role the perturbed shift plays in the CPT~version of that gauge. The Newtonian gauge is a priori ill-defined in the separate-universe approach, but we have found an alternative (though perfectly equivalent at the level of CPT) definition of that gauge that makes it unambiguous in the separate-universe approach, where the gauge-fixing procedure correctly reproduces CPT.
The synchronous gauges are ambiguous in both approaches, but they can be made well defined in the separate-universe approach by using a similar trick (which consists in further imposing that the diffeomorphism constraint vanishes as a gauge condition). Finally, the uniform-expansion gauge, which is employed in the stochastic-$\delta N$ formalism, is well defined in the separate-universe approach, where the gauge-fixing procedure correctly reproduces CPT. We note that, among the different gauges that we considered, those that are healthy in the separate-universe approach have in common that they impose the vanishing of the perturbed shift. Let us now mention a few research directions this work opens up. First, although we have shown that the separate universe matches CPT~at leading order in perturbations only, our formulation allowed us to derive fully non-perturbative equations of motion in the separate universe, hence paving the way for investigating the matching with CPT~at the next-to-leading order. Second, our treatment of the gauge-invariant problem was restricted to deriving the equation of motion for the Mukhanov-Sasaki variable, but it remains to establish a systematic procedure that would provide all gauge-invariant parameterisations of the Hamiltonian phase space, both in CPT~and in the separate-universe framework. Similarly, while we have exhibited examples of both problematic and healthy gauges in the separate-universe approach, building a formalism to study gauge transformations in the Hamiltonian picture should allow us to classify gauges in a more systematic way, and to derive generic criteria for them to (i) be unambiguous and (ii) feature a gauge-fixing procedure in the separate-universe approach that matches the one performed in CPT. We will further investigate these aspects in forthcoming works. Third, as mentioned in \Sec{sec:intro}, a Hamiltonian description of the separate-universe dynamics is necessary for the stochastic-inflation formalism (at least in the absence of a phase-space attractor). Let us stress that in this context, there is no equivalent Lagrangian formulation, since the phase-space direction of the stochastic noise plays a crucial role, and it cannot be encoded in the Lagrangian approach. For instance, it is involved in determining whether stochastic effects break classical attractors~\cite{Grain:2017dqa}, in solving the vielbeins' frame ambiguity~\cite{Pinol:2020cdp}, or in describing the backreaction of quantum fluctuations in a phase of ultra-slow roll~\cite{Firouzjahi:2020jrj, Pattison:2021oen, Figueroa:2021zah}. This is why the present Hamiltonian formulation is a prerequisite for using the stochastic formalism in the absence of a phase-space attractor, such as when slow roll is violated during inflation or in slowly contracting cosmologies. \acknowledgments We would like to thank Diego Cruces and David Wands for interesting discussions.
\section{Introduction}\label{section:introduction} The Lyapunov exponent is a quantity measuring the sensitivity of an orbit to initial conditions, and natural scientists often compute it to detect chaotic signals. However, the existence of the Lyapunov exponent is seldom discussed. The aim of this paper is to investigate the abundance of dynamical systems whose Lyapunov exponents fail to exist on a physically observable set, that is, a \emph{positive Lebesgue measure} set. Let $M$ be a compact Riemannian manifold and $f: M\to M$ a differentiable map. A point $x\in M$ is said to be \emph{Lyapunov irregular} if there is a non-zero vector $v\in T_xM$ such that the Lyapunov exponent of $x$ for $v$, \begin{equation}\label{eq:0730a} \lim _{n\to \infty} \frac{1}{n} \log \Vert Df^n(x) v\Vert , \end{equation} does not exist. When we would like to emphasize the dependence on $v$, we say that $x$ is Lyapunov irregular for $v$. Similarly, a point $x$ is said to be \emph{Birkhoff irregular} if there is a continuous function $\varphi : M\to \mathbb R$ such that the time average $\lim _{n\to \infty} ( \sum _{j=0}^{n-1} \varphi (f^j (x)) )/n$ does not exist. Otherwise, we say that $x$ is \emph{Birkhoff regular}. Moreover, we call the set of Lyapunov (resp.~Birkhoff) irregular points the \emph{Lyapunov} (resp.~\emph{Birkhoff}) \emph{irregular set} of $f$. We borrowed these terminologies from Abdenur-Bonatti-Crovisier \cite{ABC2011}, although they studied the \emph{residuality} of Lyapunov/Birkhoff irregular sets, which is not within the scope of the present paper. Indeed, the residuality of irregular sets is a generic property (\cite[Theorem 3.15]{ABC2011}), while the positivity of Lebesgue measure of irregular sets does not hold for Axiom A diffeomorphisms, see e.g.~\cite{Young2002}. The terminology \emph{historic behavior} by Ruelle \cite{Ruelle2001} is also commonly used for the forward orbit of a point to mean that the point is Birkhoff irregular, in particular in the study of the positivity of Lebesgue measure of Birkhoff irregular sets after Takens \cite{Takens2008}, see e.g.~\cite{KS2017,KNS2019} and references therein. Due to the Oseledets multiplicative ergodic theorem, the Lyapunov irregular set of $f$ is a zero measure set for any \emph{invariant} measure. However, this tells nothing about whether the Lyapunov irregular set is of positive Lebesgue measure in general. In fact, the Birkhoff ergodic theorem ensures that the Birkhoff irregular set has zero measure with respect to any invariant measure, but for a wide variety of dynamical systems the Birkhoff irregular set is known to have positive Lebesgue measure, see e.g.~\cite{Ruelle2001, Takens2008, KS2017, KNS2019} and references therein. Furthermore, the positivity of Lebesgue measure of the Birkhoff irregular set for these examples is strongly related to \emph{non-hyperbolicity} of the systems, and the two complementary conjectures given by Palis \cite{Palis2000} and Takens \cite{Takens2008} on the abundance of dynamics with a Birkhoff irregular set of positive Lebesgue measure opened a deep research field in smooth dynamical systems theory. So, it is naturally expected that finding a large class of dynamical systems with a Lyapunov irregular set of positive Lebesgue measure would be a significant subject. Yet, the only known example whose Lyapunov irregular set has positive Lebesgue measure is a surface flow with an attracting homoclinic loop, called a figure-8 attractor (\cite{OY2008}); see Section \ref{s:12} for details.
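For concreteness, the following minimal numerical sketch in Python (purely illustrative, and not one of the examples treated in this paper) computes the finite-time quantities $\frac{1}{n}\log\Vert Df^n(x)v\Vert$ for Arnold's cat map, a uniformly hyperbolic system for which the limit \eqref{eq:0730a} exists for every $x$ and every non-zero $v$; the examples studied below are precisely those for which such a sequence fails to converge on an observable set.

\begin{verbatim}
import numpy as np

# Arnold's cat map on the torus: Df is the constant matrix A, so
# (1/n) log ||Df^n(x) v|| = (1/n) log ||A^n v|| for every point x.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
v = np.array([1.0, 0.3])

w, log_norm = v / np.linalg.norm(v), 0.0
for n in range(1, 61):
    w = A @ w
    s = np.linalg.norm(w)   # renormalise, accumulating log ||A^n v||
    log_norm += np.log(s)
    w /= s
    if n % 15 == 0:
        print(f"n={n:2d}  (1/n) log||Df^n v|| = {log_norm / n:.6f}")
print("log of the top eigenvalue:", np.log((3 + np.sqrt(5)) / 2))
\end{verbatim}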
However, the homoclinic loop is easily broken by small perturbations. Therefore, in this paper we give surface diffeomorphisms with a $\mathcal C^r$-\emph{robust homoclinic tangency} ($r\geq 2$) whose Lyapunov irregular set has positive Lebesgue measure. Recall that Newhouse \cite{Newhouse79} showed that, when $M$ is a closed surface, any homoclinic tangency yields a $\mathcal C^r$-diffeomorphism $f$ with a robust homoclinic tangency associated with a thick basic set $\Lambda$, that is, there is a neighborhood $\mathcal O$ of $f$ in the set $\mathrm{Diff}^r(M)$ of $\mathcal C^r$-diffeomorphisms such that for every $g\in \mathcal O$ the continuation $\Lambda _g$ of $\Lambda$ has a homoclinic tangency. Such an open set $\mathcal O$ is called a \emph{Newhouse open set}. We finally remark that if $f$ is a $\mathcal C^1$-diffeomorphism whose Lyapunov irregular set has positive Lebesgue measure and $\tilde f$ is conjugate to $f$ by a $\mathcal C^1$-diffeomorphism $h$, that is, $\tilde f= h^{-1} \circ f \circ h$, then the Lyapunov irregular set of $\tilde f$ also has positive Lebesgue measure. Our main theorem is the following. \begingroup \setcounter{tmp}{\value{thm}} \setcounter{thm}{0} \renewcommand\thethm{\Alph{thm}} \begin{thm}\label{thm:main} There exists a diffeomorphism $g$ in a Newhouse open set of $\mathrm{Diff}^r(M)$ of a closed surface $M$ and $2\leq r<\infty$ such that for any small $\mathcal C^r$-neighborhood $\mathcal O$ of $g$ one can find an uncountable set $\mathcal L\subset \mathcal O$ satisfying the following: \begin{itemize} \item[(1)] Any $f$ and $\tilde f$ in $\mathcal L$ with $f\neq \tilde f$ are not topologically conjugate; \item[(2)] For any $f\in \mathcal L$, there exist open sets $U_f\subset M$ and $V_f\subset \mathbb R^2$, under the identification of $TU_f$ with $U_f \times \mathbb R^2$, such that any point $x\in U_f$ is Lyapunov irregular for any non-zero vector $v\in V_f$. \end{itemize} Furthermore, $\mathcal L$ can be decomposed into two uncountable sets $\mathcal R$ and $\mathcal{I}$ such that any point in $U_f$ is Birkhoff regular for each $f\in \mathcal R$ and any point in $U_f$ is Birkhoff irregular for each $f\in \mathcal I$. \end{thm} \endgroup \setcounter{thm}{0} \begin{rem}[Generalization of Theorem \ref{thm:main}] It is a famous folklore result known to Bowen that a surface flow with two heteroclinically connected dissipative saddle points has a Birkhoff irregular set of positive Lebesgue measure (see Subsection \ref{s:12}), and its precise proof was given by Gaunersdorfer \cite{Gaunersdorfer1992}, see also Takens \cite{Takens1994}. However, again, the heteroclinic connections are easily broken by small perturbations, and thus Takens asked in \cite{Takens2008} whether the Birkhoff irregular set can have positive Lebesgue measure in a persistent manner. In \cite{KS2017}, the first and fourth authors affirmatively answered it by showing that there is a \emph{dense} subset of any $\mathcal C^r$-Newhouse open set of surface diffeomorphisms with $2\leq r<\infty$ such that the Birkhoff irregular set of any element of the dense subset contains an open set, by extending the technology developed for a special surface diffeomorphism with a robust homoclinic tangency given by Colli and Vargas \cite{CV2001}. Furthermore, we adopt the Colli-Vargas diffeomorphism to prove Theorem \ref{thm:main}. Therefore, it is likely that Theorem \ref{thm:main} can be extended to surface diffeomorphisms in a dense subset of any Newhouse open set.
The main technical difficulty might be the control of higher order terms of the return map of diffeomorphisms in the dense set, which do not appear for the return map of the Colli-Vargas diffeomorphism, see the expression \eqref{eq:0812bb}. Furthermore, the above result \cite{KS2017} was recently extended in \cite{BBi2020} to the $\mathcal C^\infty$ and $\mathcal C^\omega$ categories by introducing a geometric model, and Colli-Vargas' result was extended in \cite{KNS2021} to a $3$-dimensional diffeomorphism with a $\mathcal C^1$-robust homoclinic tangency derived from a blender-horseshoe. Hence, we expect that Theorem \ref{thm:main} holds for $r=\infty $, $\omega$ and for $r=1$ when the dimension of $M$ is three. We also remark that \cite{LR2016, Barrientos2021} extended the result of \cite{KS2017} to $3$-dimensional flows and higher dimensional diffeomorphisms. \end{rem} \begin{rem}[Irregular vectors] Ott and Yorke \cite{OY2008} asserted that they constructed an open set $U$ any point of which is Lyapunov irregular for \emph{any} non-zero vector, but we believe that their proof has a gap. What one can immediately conclude from their argument is that any point in $U$ is Lyapunov irregular for non-zero vectors in the \emph{flow} direction (and thus, the set of irregular vectors is not observable); see Section \ref{s:12} for details. In Section \ref{s:03}, we further show that a surface diffeomorphism with a figure-8 attractor introduced by Guarino-Guih\'eneuf-Santiago \cite{GGS2019} has an open set every element of which is Lyapunov irregular for \emph{any} non-zero vector. \end{rem} \begin{rem}[Relation with Birkhoff irregular sets] One can find differences between Birkhoff irregular sets and Lyapunov irregular sets, other than Theorem \ref{thm:main}, in the literature. Indeed, it was already pointed out in Ott-Yorke \cite{OY2008} that the figure-8 attractor has a positive Lebesgue measure set on which the time averages exist but the Lyapunov exponents do not exist (see also \cite{Furman1997}). Conversely, diffeomorphisms whose Birkhoff irregular set has positive Lebesgue measure but whose Lyapunov irregular set has zero Lebesgue measure were exhibited in \cite{CYZ2020}. We also remark that, in contrast to the deterministic case, under physical noise both Birkhoff and Lyapunov irregular sets of any diffeomorphism have zero Lebesgue measure by \cite{Araujo2000} and \cite{NNT2021}. \end{rem} In the rest of Section 1, we explain that several nonhyperbolic systems in the literature also have Lyapunov irregular sets of positive Lebesgue measure (see, in particular, Section \ref{s:o1}). The purpose of the attention to these examples is not to increase the collection of dynamics with observable Lyapunov irregular sets, but rather to understand the mechanism producing observable Lyapunov irregular sets, which is discussed especially in Section \ref{s:o2}. \subsection{Other examples}\label{s:o1} \subsubsection{Figure-8 attractor}\label{s:12} Ott and Yorke showed in \cite{OY2008} that a figure-8 attractor has a Lyapunov irregular set of positive Lebesgue measure, as follows. Let $(f^t)_{t\in \mathbb R}$ be a smooth flow on $\mathbb R^2$ generated by a vector field $V: \mathbb R^2 \to \mathbb R^2$ with an equilibrium point $p$ of saddle type with homoclinic orbits, that is, the unstable manifold of $p$ coincides with the stable manifold of $p$ and consists of $\{ p\}$ and two orbits $\gamma _1$, $\gamma _2$.
We also assume that the loops $\gamma _1\cup \{p\}$ and $\gamma _2 \cup \{p\}$ are attracting in the sense that $ \alpha _- > \alpha _+, $ where $\alpha _+$ and $-\alpha _- $ are eigenvalues of the linearized vector field of $V$ at $p$ with $\alpha _\pm >0$. Due to the assumption, one can find open sets $U_1$ and $U_2$ inside and near the loops $\gamma _1 \cup \{p\}$ and $\gamma _2 \cup \{p\}$, respectively, such that the $\omega$-limit set of $(f^t(x))_{t\geq 0}$ is $\gamma _i\cup \{p\} $ for all $x\in U_i$ with $i= 1, 2$. In this setting, $\gamma _1 \cup \gamma _2 \cup \{p\}$ is called a \emph{figure-8 attractor}. It is easy to see that the Birkhoff irregular set of the figure-8 attractor is empty inside $U_1\cup U_2$: in fact, if $x\in U_1 \cup U_2$, then \[ \lim _{t\to\infty} \frac{1}{t} \int ^t _{0} \varphi \circ f^s(x) ds =\varphi (p)\quad \text{for any continuous function $\varphi : \mathbb R^2\to \mathbb R$} \] (cf.~\cite{GGS2019}). On the other hand, Ott and Yorke showed in \cite{OY2008} that any point $x$ in $U _1 \cup U_2$ is Lyapunov irregular for the vector $V(x)$, that is, the Lyapunov irregular set has positive Lebesgue measure (in fact, they implicitly put an additional assumption for simplicity of calculations, see Section \ref{a:pt}). As previously mentioned, they also asserted that $x\in U _1 \cup U_2$ is Lyapunov irregular for any non-zero vector $v$, because $( \frac{1}{t} \log \det (Df^t(x) V(x) \, Df^t(x)v) )_{t\in \mathbb R}$ converges to $\alpha _+ - \alpha _-$ as $t\to \infty$. However, the oscillation of $( \frac{1}{t} \log \Vert Df^t(x)v\Vert )_{t\in \mathbb R}$ is not a direct consequence of this fact and the oscillation of $( \frac{1}{t}\log \Vert Df^t(x) V(x)\Vert )_{t\in \mathbb R}$ when $v$ is not parallel to $V(x)$ because the angle between $Df^t(x) V(x)$ and $ Df^t(x)v$ can also oscillate. \subsubsection{Bowen flow}\label{s:12o} In \cite{OY2008}, Ott and Yorke also indicated the oscillation of Lyapunov exponents for a vector along the flow direction for a special Bowen flow by a numerical experiment. By following the argument of \cite{OY2008} for a figure-8 attractor, we can rigorously prove that the Lyapunov irregular set has positive Lebesgue measure for any Bowen flow. Let $(f^t)_{t\in \mathbb R}$ be a smooth flow on $\mathbb R^2$ generated by a vector field $V: \mathbb R^2 \to \mathbb R^2$ of class $\mathcal C^{1+\alpha }$ ($\alpha >0$) with two equilibrium points $p$ and $\hat p$ and two heteroclinic orbits $\gamma _1$ and $\gamma _2$ connecting the points, which are included in the unstable and stable manifolds of $p$ respectively, such that the closed curve $\gamma := \gamma _1 \cup \gamma _2 \cup \{p\} \cup \{ \hat p\}$ is attracting in the following sense: if we denote the expanding and contracting eigenvalues of the linearized vector field around $p$ by $\alpha _+$ and $-\alpha _-$, and the ones around $\hat p$ by $\beta _+$ and $-\beta _-$, then \begin{equation*} \alpha _- \beta _- > \alpha _+ \beta _+ . \end{equation*} In this setting, one can find an open set $U$ inside and near the closed curve $\gamma$ such that the $\omega$-limit set of $(f^t(x))_{t\geq 0}$ is $\gamma $ for all $x\in U$. As explained, it was proven in \cite{Gaunersdorfer1992,Takens1994} that any point in $U$ is Birkhoff irregular.
In fact, if $x\in U$, then one can find time sequences $(\tau _n)_{n\in \mathbb N}$, $(\hat \tau _n)_{n\in \mathbb N}$ (given in Section \ref{a:pt}) such that \begin{equation}\label{eq:0805b} \begin{split} &\lim _{n\to\infty} \frac{1}{\tau _n} \int ^{\tau _n} _{0} \varphi \circ f^s(x) ds =\frac{r \varphi (p) + \varphi (\hat p)}{1+ r}, \\ &\lim _{n\to\infty} \frac{1}{\hat \tau _n} \int ^{\hat \tau _n} _{0} \varphi \circ f^s(x) ds =\frac{ \varphi (p) + \hat r \varphi (\hat p)}{1+ \hat r} \end{split} \end{equation} for any continuous function $\varphi : \mathbb R^2\to \mathbb R$, where $r =\frac{\alpha _-}{\beta _+}$ and $\hat r =\frac{\beta _-}{\alpha _+}$. Following Takens \cite{Takens1994}, we call such a flow a \emph{Bowen flow}. We can show the following proposition for the Lyapunov irregular set, whose proof will be given in Section \ref{a:pt}. \begin{prop}\label{prop:0812c} For the Bowen flow $(f^t)_{t\in \mathbb R}$ with the open set $U$ given above, any point $x$ in $U$ is Lyapunov irregular for the vector $V(x)$. \end{prop} \begin{rem} For the time sequences in \eqref{eq:0805b} for which the time averages oscillate, we will see that \begin{equation}\label{eq:0812e} \lim _{n\to\infty} \frac{1}{\tau _n} \log \Vert D f^{\tau _n}(x) \Vert = \lim _{n\to\infty} \frac{1}{\hat \tau _n} \log \Vert D f^{\hat \tau _n}(x) \Vert =0 \end{equation} for any $x\in U$. That is, the mechanism causing the oscillation of Lyapunov exponents is different from the one leading to the oscillation of time averages; see Section \ref{s:o2} for details. \end{rem} \subsubsection{Guarino-Guih\'eneuf-Santiago's simple figure-8 attractor}\label{s:03} A disadvantage of the arguments in Sections \ref{s:12} and \ref{s:12o} is that, although it follows from the arguments that a point $x$ in the open set $U_1 \cup U_2$ or $U$ is Lyapunov irregular for the vector $V(x)$ generating the flow, it is unclear whether $x$ is also Lyapunov irregular for a vector which is not parallel to $V(x)$, because the derivative $Df^t (x)$ at the return time $t$ to neighborhoods of $p$ or $\hat p$ is not explicitly calculated in the arguments (instead, the fact that $Df^t (x) V(x) = V(f^t(x))$ is used). On the other hand, Guarino, Guih\'eneuf and Santiago in \cite{GGS2019} constructed a surface diffeomorphism with a pair of saddle connections forming a figure of eight and whose return map is affine (see \eqref{eq:GGSkey} in Proposition \ref{prop:GGS}). By virtue of this simple form of the return map, it is quite easy to prove that the diffeomorphism has an open set each element of which is Lyapunov irregular for \emph{any} non-zero vector. Furthermore, we will see in Section \ref{s:o2} that the calculation is a prototype of the proof of Theorem \ref{thm:main}. Fix a constant $\sigma > 1$ and numbers $a, b$ such that $1 < a < b < \sigma $. Let $I=[a,b]$ and denote the map $\mathbb R^2 \ni (x, y) \mapsto (\sigma ^{-2} x,\sigma y)$ by $H$. For every $n\in \mathbb{N}$, let $S_n =I\times \sigma ^{-n} I$ and $U_n =\sigma ^{-n}I \times I$, so that \[ H^n ( S_n) = U_{2n} \quad \text{and } \quad H^n : S_n \to U_{2n} \; \text{is a diffeomorphism}. \] See Figure \ref{fig-GGS}. Furthermore, let $R: \mathbb{R}^2\to \mathbb{R}^2$ be the affine map which is a rotation of $-\frac{\pi }{2}$ around the point $( \frac{a+b}{2}, \frac{a+b}{2} )$, i.e. \[ R(x,y) = (a+b -y , x).
\] A diffeomorphism of the plane is said to be \emph{compactly supported} if it equals the identity outside a ball centered at the origin $O$. Moreover, the diffeomorphism is said to have a \emph{saddle (homoclinic) connection} if it has a separatrix of the stable manifold coinciding with a separatrix of the unstable manifold associated with a saddle periodic point $O$, so that it bounds an open 2-disk. Specifically, we call the union of $O$ and a pair of saddle connections associated with $O$ a \emph{figure-8 attractor} at $O$; it satisfies $W^u(O)=W^s(O)$. \begin{prop}[{\cite[Proposition 3.4]{GGS2019}}]\label{prop:GGS} There exists a compactly supported $\mathcal C^\infty$-diffeomorphism $f:\mathbb R^2 \to \mathbb R^2$ which has a saddle connection of a saddle fixed point $O=(0,0)$, and moreover there are positive integers $n_0, k_0$ such that the following holds: \begin{enumerate}[{\rm (a)}] \item There is a neighborhood $\mathcal V$ of $O$ such that \[ \bigcup _{n\geq n_0} \bigcup _{0\leq \ell \leq n}f^\ell (S_n)\subset \mathcal V \quad \text{and } \quad f\vert _{\mathcal V} = H. \] \item $f^{k_0} (U_n)=S_n$ for all $n\geq n_0$ and \[ f ^{k_0}(x,y) =R(x,y) \quad \text{for all $(x,y) \in [0, \sigma ^{-2n_0}] \times I$}. \] \end{enumerate} In particular, for every $n \geq n_0$, \begin{equation}\label{eq:GGSkey} f^{n+k_0}(x, y) = (a + b - \sigma ^ny , \sigma ^{-2n} x) \in S_{2n} \quad \text{for all $(x, y) \in S_n$}. \end{equation} \end{prop} \begin{figure}[hbt] \centering \scalebox{0.8}{ \includegraphics[clip]{fig-GGS} } \caption{Guarino-Guih\'eneuf-Santiago's diffeomorphism} \label{fig-GGS} \end{figure} \begin{rem} If we suppose that $f|_{V_3} = s_h \circ f| _{V_1} \circ s_v$, where $V_i$ is the $i$-th quadrant of $\mathbb R^2$, and $s_v, s_h : \mathbb{R}^2 \to \mathbb{R}^2$ are symmetry maps with respect to the vertical and horizontal axes, respectively, then $f$ has a figure-8 attractor at $O$, see \cite{GGS2019}. \end{rem} Although the dynamics in Proposition \ref{prop:GGS} is defined on $\mathbb R^2$, one can easily embed the restriction of $f$ to its support into any compact surface. It follows from \cite[Corollary 3.5]{GGS2019} that if $z\in S_{n_0}$, then \[ \lim _{n\to \infty} \frac{1}{n} \sum _{j=0}^{n-1} \varphi (f^j (z)) = \varphi (O) \quad \text{for any continuous function $\varphi :\mathbb R^2\to \mathbb R$}. \] In particular, any point in $S_{n_0}$ is Birkhoff regular. Our result for the Lyapunov irregular set is the following, whose proof will be given in Section \ref{s:0811b}. \begin{thm}\label{prop:0811} For the diffeomorphism $f$ and the rectangle $S_{n_0}$ given in Proposition \ref{prop:GGS}, any point $z$ in $S_{n_0}$ is Lyapunov irregular for any non-zero vector. \end{thm} \begin{rem} We note that the piecewise expanding map on a surface constructed by Tsujii \cite{Tsujii2000} has a return map around the origin whose form is quite similar to that of the diffeomorphism of Theorem \ref{prop:0811}. So, it is natural to expect that (a slightly modified version of) the map in \cite{Tsujii2000} has an open set consisting of points that are Lyapunov irregular for any non-zero vector. \end{rem} \subsection{Idea of proofs of Theorems \ref{thm:main} and \ref{prop:0811}: anti-diagonal matrix form of the return map}\label{s:o2} \subsubsection{The figure-8 attractor} We start from Guarino-Guih\'eneuf-Santiago's figure-8 attractor. Let $f$ be the diffeomorphism given in Proposition \ref{prop:GGS}.
Then, it follows from \eqref{eq:GGSkey} that for any $n\geq n_0$ and $z\in S_n$, $Df^{n+k_0}(z) $ is an anti-diagonal matrix, \begin{equation}\label{eq:0812a} Df^{n+k_0}(z) =\left(\begin{array}{cc}0 & -\sigma^{n} \\ \sigma^{-2n} & 0\end{array}\right), \end{equation} so $Df^{(2n+k_0) +(n+k_0)}(z) =Df^{2n+k_0}(f^{n+k_0}(z)) Df^{n+k_0}(z) $ is a diagonal matrix. Hence, if we define the $d$-th return time $N(d)$ from $S_{n_0}$ to $\bigcup _{n\geq n_0} S_n$ with $d\geq 1$ by \begin{equation}\label{eq:0812b1} N(d) = \sum _{d'=1}^{d} n(d'), \quad n(d') = 2^{d'-1}n_0 +k_0 \end{equation} (notice that $f^{N(d)} (S_{n_0}) \subset S_{2^d n_0}$), then it follows from a chain of calculations that for any $z\in S_{n_0}$ \begin{equation}\label{eq:0812d1} \begin{split} & Df^{N(2d-1)}(z)=(-1)^{d-1} \left(\begin{array}{cc}0 & -\sigma^{n_0 }\\ \sigma^{-2^{2d-1}n_{0}} & 0\end{array}\right),\\ & Df^{N(2d)}(z)=(-1)^{d} \left(\begin{array}{cc} 1 & 0\\ 0 & \sigma^{(-2^{2d}+1)n_{0}} \end{array}\right), \end{split} \end{equation} and thus, for any $v\not\in \mathbb R \left( \begin{array}{c} 1\\ 0 \end{array} \right) \cup \mathbb R \left( \begin{array}{c} 0\\ 1 \end{array} \right) $, \begin{equation}\label{eq:0812b1c} \displaystyle \lim _{d \to \infty} \frac{1}{N(d)} \log \left\Vert Df^{N(d)}(z) v \right\Vert =0. \end{equation} Furthermore, one can see by a direct calculation that with the function $\vartheta : [0,1]\to \mathbb R$ given by $\vartheta (\zeta )=-(1-\zeta )/(1+\zeta )$ if $\zeta \geq 1/3$ and $\vartheta (\zeta )=-2\zeta /(1+\zeta )$ if $\zeta <1/3$, it holds that for any $\zeta \in [0,1]$, \begin{equation}\label{eq:0812b1c2} \lim _{d \to \infty} \frac{1}{N(4d) +\lfloor\zeta 2^{4d}n_0\rfloor} \log \left\Vert Df^{N(4d) +\lfloor\zeta 2^{4d} n_0\rfloor}(z) v \right\Vert = \vartheta (\zeta ) \log \sigma , \end{equation} where $\lfloor a \rfloor$ for $a\in \mathbb R$ is the greatest integer less than or equal to $a$. Note that $N(4d) =2^{4d}n_0 + (4dk_0-n_0)$, so $N(4d) +\lfloor\zeta 2^{4d}n_0\rfloor$ over $\zeta \in [0,1]$ essentially realizes all times from $N(4d)$ to $N(4d+1)$. A detailed calculation will be given in Section \ref{s:0811b}. \subsubsection{The Newhouse open set} Next we consider the diffeomorphisms in the Newhouse open set given in Theorem \ref{thm:main}. Colli and Vargas constructed in \cite{CV2001} a diffeomorphism $g$ in a Newhouse open set with constants $0<\lambda <1 <\sigma$ such that for any $\mathcal C^r$-neighborhood $\mathcal O$ of $g$ and any increasing sequence $(n_k^0)_{k\geq 0}$ of integers with $\limsup _{k\to\infty} n_{k+1}^0/n_k^0 <\infty $, one can find a diffeomorphism $f$ in $\mathcal O$ together with a sequence of rectangles $(R_k)_{k=1}^\infty$ and an increasing sequence $(\tilde n_k)_{k\geq 1}$ of integers with $\tilde n_k =O(k)$ such that $f^{n_k+2} (R_k) \subset R_{k+1}$ and for each $(\tilde x_k+x,y)\in R_k$, \begin{equation}\label{eq:0812bb} f^{n_k+2}(\tilde x_k+x,y) = (\tilde x_{k+1}-\sigma^{2n_k}x^2-\lambda^{n_k}y, \sigma^{n_k}x), \end{equation} where $n_k= n_k^0 + \tilde n_k$ and $(\tilde x_k,0)$ is the center of $R_k$, see Theorem \ref{prop:0812} for details. Thus, the derivative of the return map has the form \begin{equation}\label{eq:0812b} Df^{n_k+2}(\tilde x_k +x, y)= \left( \begin{array}{cc} -2\sigma^{2n_k}x & -\lambda^{n_k} \\ \sigma^{n_k} & 0 \\ \end{array} \right). \end{equation} Compare this formula with \eqref{eq:0812a} for $n =n(d) -k_0$ and note that $\lim _{d\to \infty}(n(d+1) -k_0)/(n(d)-k_0) =2$.
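The anti-diagonal mechanism described above is easy to check numerically. The following minimal sketch in Python (the values of $\sigma$, $n_0$, $k_0$ and the test vector $v$ are illustrative assumptions, not part of the construction) multiplies the return-map derivatives \eqref{eq:0812a} along the orbit $S_{n_0}\to S_{2n_0}\to S_{4n_0}\to\cdots$; it exhibits the convergence \eqref{eq:0812b1c} along the return times $N(d)$, as well as the dip of the partial exponents towards $\vartheta(1/3)\log\sigma=-\frac{1}{2}\log\sigma$ predicted by \eqref{eq:0812b1c2} (for simplicity, the sketch evaluates the intermediate times after every return, not only after $N(4d)$).

\begin{verbatim}
import numpy as np

sigma, n0, k0 = 1.25, 1, 2   # illustrative values only

def A(n):
    # derivative of the return map on S_n, cf. (eq:0812a)
    return np.array([[0.0, -sigma**n], [sigma**(-2 * n), 0.0]])

v = np.array([1.0, 1.0])     # a vector off both coordinate axes
D, N, n = np.eye(2), 0, n0
for d in range(1, 11):
    D = A(n) @ D             # Df^{N(d)}(z)
    N += n + k0              # N(d), with n(d) = 2^{d-1} n0 + k0
    chi_ret = np.log(np.linalg.norm(D @ v)) / N
    # one third of the way into the next linear phase (of length 2n),
    # where f acts as H^m = diag(sigma^{-2m}, sigma^m):
    m = (2 * n) // 3
    Hm = np.diag([sigma ** (-2.0 * m), sigma ** float(m)])
    chi_mid = np.log(np.linalg.norm(Hm @ D @ v)) / (N + m)
    print(f"d={d:2d}  chi(N(d))={chi_ret:+.4f}  chi(N(d)+m)={chi_mid:+.4f}")
    n *= 2
print("predicted values: 0 and", -0.5 * np.log(sigma))
\end{verbatim}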
The biggest obstacle in \eqref{eq:0812b} to repeating the above calculation for Guarino-Guih\'eneuf-Santiago's figure-8 attractor is the term $-2\sigma ^{2n_k} x$: the absolute value of this term should be as small as that of $-\lambda ^{n_k}$ in \eqref{eq:0812b}, while $\sigma ^{2n_k}$ may be much larger than $\lambda ^{n_k}$ because $0<\lambda <1 <\sigma $. Therefore, the key point in the proof is to find a subset $U_k$ of $R_k$ such that any $x\in U_k$ satisfies the required condition $\vert -2\sigma ^{2n_k} x\vert <\xi \vert -\lambda ^{n_k}\vert$ with a positive constant $\xi $ independent of $k$ (Lemma \ref{lem4}), and to show $f^{n_k +2}(U_k) \subset U_{k+1}$ (Lemma \ref{lem2}). \subsubsection{Some technical observations} Finally, we give a couple of (more technical) remarks on the similarity of the mechanisms leading to observable Lyapunov irregular sets for the dynamics considered in this paper. \begin{rem} To understand the time scale $N(4d) +\lfloor\zeta 2^{4d}n_0\rfloor$ of \eqref{eq:0812b1c2}, calculations of (partial) Lyapunov exponents for the Bowen flow might be helpful. Let $(f^t)_{t\in\mathbb R}$, $V$, $p$, $\hat p$, $U$ be as in Section \ref{s:12o}. Let $N$ and $\hat N$ be small neighborhoods of $p$ and $\hat p$, respectively, such that $N \cap \hat N =\emptyset$. Fix $z\in U$ and let $\tau _n$ and $\hat \tau _n$ be the $n$-th return time of $z$ to $N$ and $\hat N$, respectively (see Section \ref{a:pt} for their precise definition). Then, since $Df^t (z)V(z) =V(f^t(z))$ for each $t\geq 0$, both $\Vert Df^{\tau _n} (z) V(z)\Vert $ and $\Vert Df^{\hat \tau _n} (z) V(z)\Vert $ are bounded from above and below uniformly with respect to $n$, which implies \eqref{eq:0812e} (while \eqref{eq:0805b} is a consequence of \cite{Takens1994}). We further define $\rho _n$ as the time $t$ in $[0, \tau _{n+1}-\tau _n]$ at which $f^{\tau _n+ t}(z)$ makes the closest approach to $p$ (that is, $\rho _n$ is the minimizer of $ \Vert f^{\tau _n + t}(z) - p\Vert $ over $0\leq t \leq \tau _{n+1}-\tau _n$). Then, since the vector field $V$ is zero at $p$, it can be expected that $\Vert Df^{\tau _n +\rho _n} (z)V(z)\Vert =\Vert V(f^{\tau _n +\rho _n}(z))\Vert $ decays rapidly as $n$ increases. In fact, we can show that \[ \lim _{n\to \infty} \frac{1}{\tau _n +\rho _n}\log \Vert Df^{\tau _n +\rho _n} (z)V(z)\Vert = \frac{ \alpha _+ \beta _+ -\alpha _- \beta _-}{\alpha _+ +\beta _+ + \alpha _- + \beta _-} <0, \] which is $ \frac{\alpha _+ - \alpha _- }{2}$ when $\alpha _+=\beta _+$ and $\alpha _- =\beta _-$. On the other hand, $\vartheta (\zeta )$ in \eqref{eq:0812b1c2} takes the minimum $-\frac{1}{2}$ at $\zeta =\frac{1}{3}$, so the minimum of \eqref{eq:0812b1c2} is \[ \lim _{d \to \infty} \frac{1}{N(4d) + \lfloor \frac{2^{4d}n_0}{3}\rfloor} \log \left\Vert Df^{N(4d) + \lfloor \frac{2^{4d}n_0}{3}\rfloor}(z) v \right\Vert = -\frac{1}{2} \log \sigma = \frac{ \log \sigma +\log \sigma ^{-2}}{2}. \] \end{rem} \begin{rem} We emphasize that the choice of $(n_k^0)_{k\in \mathbb N}$ in \eqref{eq:0812bb} is totally free except the condition $\limsup _{k\to\infty} n_{k+1}^0/n_k ^0 < \infty$, while $(n(d))_{d\in \mathbb N}$ in \eqref{eq:0812b1} must satisfy $\lim _{d\to \infty}n(d+1)/n(d) =2$. This freedom makes the construction of the oscillation of (partial) Lyapunov exponents of $f$ a bit simpler.
Indeed, in the proof of Theorem \ref{thm:main} we take $(n_k)_{k\in \mathbb N}$ so that \begin{equation*} \lim _{p\to \infty} \frac{n_{2p+1}}{n_{2p}} < \lim _{p\to \infty}\frac{n_{2p}}{n_{2p-1} }< \infty , \end{equation*} which enables us to conclude that for any $z$ in an open subset of $R_{\kappa }$ with some large integer $\kappa$ and any vector $v$ in an open set, \begin{equation*} \begin{split} \displaystyle \lim _{p \to \infty} \frac{1}{ N_{2p-1} } \log \left\Vert Df^{ N_{2p-1} }(z) v \right\Vert &= \dfrac{\log\lambda+\alpha\log\sigma}{1+\alpha}\\ <\lim _{p \to \infty} \frac{1}{N_{2p}} \log \left\Vert Df^{ N_{2p} }(z) v \right\Vert &= \dfrac{\log\lambda+\beta\log\sigma}{1+\beta}, \end{split} \end{equation*} where $\alpha = \lim _{p\to \infty} n_{2p+1}/n_{2p}$, $\beta = \lim _{p\to \infty} n_{2p}/n_{2p-1} $ and $N_j =(n_\kappa +2) + (n_{\kappa +1} +2) +\cdots + (n_{\kappa +j} +2)$ (so the ``time at closest approach'' $N(4d)+\lfloor \zeta 2^{4d}n_0\rfloor$ with $\zeta \in (0,1)$ for Guarino-Guih\'eneuf-Santiago's figure-8 attractor is not necessary). \end{rem} \begin{rem} We outline why the open set $V_f$ in Theorem \ref{thm:main} cannot easily be replaced by $\mathbb R^2\setminus\{0\}$ in our argument. Again, Guarino-Guih\'eneuf-Santiago's figure-8 attractor might be useful to understand the situation. Let $v$ be the unit vertical vector. Then, it follows from \eqref{eq:0812d1} that $\left\Vert Df^{N(2d)}(z) v \right\Vert = \sigma ^{(-2^{2d} +1)n_0}$, which is much smaller than the lower bound $1- \sigma ^{(-2^{2d} +1)n_0}$ of $\left\Vert Df^{N(2d)}(z) v' \right\Vert $ for any non-zero vector $v'$ not parallel to $v$, and thus \eqref{eq:0812b1c} does not hold for this $v$ (see \eqref{eq:0813d} for details). For the diffeomorphism of Theorem \ref{thm:main}, this special situation on the vertical line may spread to a vertical cone $\mathcal K_v:=\{ (v_1, v_2) \in \mathbb R^2\mid \vert v_1 \vert \leq K ^{-1} \vert v_2\vert \}$ with a constant $K>1$ (see \eqref{eq:0915c}) and it is hard to repeat the above calculation on the cone due to the higher order term $-2\sigma ^{2n_k}x$ of \eqref{eq:0812b}. A similar difficulty occurs on a horizontal cone $\mathcal K_h:=\{ (v_1, v_2) \in \mathbb R^2\mid \vert v_2 \vert \leq K ^{-1} \vert v_1\vert \}$, and the open set $V_f$ of Theorem \ref{thm:main} is given as $\mathbb R^2\setminus (\mathcal K_v \cup \mathcal K_h)$. \end{rem} \section{Proof of Proposition \ref{prop:0812c}}\label{a:pt} We follow the argument of \cite{OY2008} for the figure-8 attractor,\footnote{They implicitly ignored the higher order terms of the transient map of the flow, i.e.~assumed that $\hat s_n =cr_n$ and $s_{n+1}=\hat c\hat r_n$ instead of \eqref{eq:0813c} below.} so the reader familiar with this subject can skip this section. Let $(f^t)_{t\in \mathbb R}$ be the Bowen flow given in Section \ref{s:12o}. Let $N$ and $\hat N$ be neighborhoods of $p$ and $\hat p$, respectively, such that there are linearizing coordinates $\phi : N\to \mathbb R^2$ and $\hat \phi : \hat N\to \mathbb R^2$ satisfying that both $\phi (N )$ and $\hat \phi (\hat N )$ include $(0,1]^2$ and \begin{equation}\label{eq:0724b} \phi \circ f^t\circ \phi ^{-1}(r,s)=(e^{-\alpha _-t}r, e^{\alpha _+t}s),\quad \hat \phi \circ f^t\circ \hat \phi ^{-1}(r,s)=(e^{-\beta _-t}r, e^{\beta _+t}s) \end{equation} on $ (0,1]^2$. Fix $(x,y)\in U$. Let $\hat T_0$ be the hitting time of $(x,y)$ to $\{ \phi ^{-1}(1,s) \mid s\in (0,1]\}$, i.e.~the smallest positive number $t$ such that $ f^{t}(x,y) = \phi ^{-1}(1,s)$ with some $s\in (0,1]$.
Let $s_1$ be the second component of $\phi \circ f^{\hat T_0}(x,y)$. We inductively define sequences $(t_n, T_n, \hat t_n, \hat T_n)_{n\in \mathbb N}$, $(s_n, r_n, \hat s_n, \hat r_n)_{n\in \mathbb N}$ of positive numbers as \begin{itemize} \item $t_n$ is the hitting time of $\phi^{-1}(1,s_n)$ to $\{ \phi ^{-1}(r,1) \mid r\in (0,1]\}$, and $r_n$ is the first component of $\phi \circ f^{t_n}\circ \phi ^{-1}(1,s_n) $, \item $T_n$ is the hitting time of $\phi ^{-1}(r_n,1)$ to $\{ \hat \phi ^{-1}(1, s) \mid s\in (0,1]\}$, and $\hat s_n$ is the second component of $\hat \phi \circ f^{T_n}\circ \phi ^{-1}(r_n,1)$, \item $\hat t_n$ is the hitting time of $\hat \phi^{-1}(1,\hat s_n)$ to $\{ \hat \phi ^{-1}(r,1) \mid r\in (0,1]\}$, and $\hat r_n$ is the first component of $\hat \phi \circ f^{\hat t_n}\circ \hat \phi ^{-1}(1,\hat s_n) $, \item $\hat T_n$ is the hitting time of $\hat \phi ^{-1}(\hat r_n,1)$ to $\{ \phi ^{-1}(1, s) \mid s\in (0,1]\}$, and $ s_{n+1}$ is the second component of $ \phi \circ f^{\hat T_n}\circ \hat \phi ^{-1}(\hat r_n,1)$. \end{itemize} Then, from \[ (e^{-\alpha _- t_n}, e^{\alpha _+t_n}s_n) =(r_n,1), \quad (e^{-\beta _- \hat t_n}, e^{\beta _+\hat t_n}\hat s_n) =(\hat r_n,1) \] it follows that \begin{equation}\label{eq:-724} t_n=- \frac{\log s_n}{\alpha _+}, \quad r_n = s_n^{a}, \quad \hat t_n=- \frac{\log \hat s_n}{\beta _+}, \quad \hat r_n = \hat s_n^{b }, \end{equation} with $a:=\frac{\alpha _- }{\alpha _+}$ and $b:=\frac{\beta _-}{\beta _+}$. On the other hand, it is straightforward to see that both $T_n$ and $\hat T_n$ are bounded from above and below uniformly with respect to $n$, and thus, since the vector field $V$ is of class $\mathcal C^{1+\alpha }$, one can find positive numbers $c$ and $\hat c$ (which are independent of $n$) such that \begin{equation}\label{eq:0813c} \hat s_n = c r_n +o(r_n^{1+\alpha}), \quad s_{n+1} = \hat c \hat r_n +o( \hat r_n^{1+\alpha }). \end{equation} Moreover, we set \[ \tau _n:= \hat T_0 + \sum _{k=1}^{n-1} (t_k + T_k + \hat t_k + \hat T_k), \quad \hat \tau _n:= \hat T_0 + \sum _{k=1}^{n-1} (t_k + T_k + \hat t_k + \hat T_k) + t_n +T_n, \] that is, the $n$-th return times to $N$ and $\hat N$, respectively. Notice that $D f^t (x,y) V(x,y) = V(f^t (x,y))$ for each $t\geq 0$. Hence, we have \[ \lim _{n\to \infty} \frac{1}{\tau _n}\log \Vert D f^{\tau _n} (x,y) V(x,y) \Vert =\lim _{n\to \infty} \frac{1}{\tau _n}\log \Vert V(1,s_n) \Vert =0 \] because $ \Vert V(1,s) \Vert $ is bounded from above and below uniformly with respect to $s\in (0,1]$. From now on, we identify $\phi (x,y)$ and $\hat \phi (x,y)$ with $(x,y)$ when it causes no confusion. We further define a sequence $(\rho_n )_{n\in \mathbb N}$ of positive numbers by letting $\rho _n$ be the minimizer of \[ \Vert f^{t}(1,s_n) - p\Vert ^2= e^{-2\alpha _- t} + e^{2\alpha _+ t}s_n^2 \] (under the linearizing coordinate $\phi$) over $0\leq t \leq t_n$, that is, the time at which $ f^{t}(1,s_n)$ makes the closest approach to $p$ over $0\leq t \leq t_n$. Then, it follows from a straightforward calculation that \begin{equation}\label{eq:0724c} \rho _n = - \frac{\log s_n}{\alpha _+ + \alpha _-} +C_1,\quad \Vert L(f^{\rho_n}(1,s_n))\Vert =C_1' s_n ^{\alpha _-/(\alpha _+ +\alpha _-)} , \end{equation} where $C_1 := \frac{\log \alpha _- - \log \alpha _+}{2(\alpha _+ + \alpha _-)} $, $C_1':=\sqrt{\alpha _-^2e^{-2\alpha _- C_1} + \alpha _+^2 e^{2\alpha _+C_1}}$ and $L$ is the linearized vector sub-field of $V$ around $p$ corresponding to \eqref{eq:0724b}, i.e.~$L(x,y) =(-\alpha _-x, \alpha _+y)$.
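Before giving the estimates, the recursions above can be illustrated numerically. The following minimal sketch in Python (the eigenvalues $\alpha_\pm$, $\beta_\pm$, the constants $c$, $\hat c$ of \eqref{eq:0813c} and the uniformly bounded transition times, all set to sample values, are assumptions for illustration, and the higher-order terms of \eqref{eq:0813c} are dropped) iterates \eqref{eq:-724} and \eqref{eq:0813c} in logarithmic variables and evaluates the partial exponents at the closest-approach times $\tau_n+\rho_n$, which approach the negative value $(\alpha_+\beta_+-\alpha_-\beta_-)/(\alpha_++\beta_++\alpha_-+\beta_-)$ stated in Section \ref{s:o2}.

\begin{verbatim}
import numpy as np

ap, am, bp, bm = 1.0, 2.0, 1.5, 2.5   # alpha_+, alpha_-, beta_+, beta_-
c, chat, T = 0.7, 0.8, 1.0            # c, c-hat of (eq:0813c); bounded transit times
a, b = am / ap, bm / bp               # the exponents a, b of (eq:-724)

ls, tau = np.log(0.5), 0.0            # log s_n and (approximate) tau_n for n = 1
for n in range(1, 13):
    rho = -ls / (ap + am)             # closest-approach time rho_n, cf. (eq:0724c)
    if n > 1:
        # by (eq:0724c), log||Df^{tau_n+rho_n} V|| = (am/(ap+am)) log s_n + O(1)
        print(f"n={n:2d}  partial exponent ~ {(am/(ap+am)) * ls / (tau + rho):+.5f}")
    t = -ls / ap                      # passage time t_n near p, cf. (eq:-724)
    lshat = np.log(c) + a * ls        # log s-hat_n, (eq:0813c) without higher order
    that = -lshat / bp                # passage time t-hat_n near p-hat
    tau += t + T + that + T           # tau_{n+1}
    ls = np.log(chat) + b * lshat     # log s_{n+1}
print("predicted limit:", (ap * bp - am * bm) / (ap + bp + am + bm))
\end{verbatim}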
We show that \begin{equation}\label{eq:0813a2a} \limsup _{n\to\infty} \frac{1}{\tau _n +\rho _n}\log \Vert D f^{\tau _n +\rho _n} (x,y) V(x,y) \Vert \leq \frac{ \alpha _+ \beta _+ -\alpha _-\beta _-}{\alpha _+ + \beta _+ + \alpha _- + \beta _-}. \end{equation} Fix $\epsilon >0$. Then, it follows from \eqref{eq:0813c} that one can find $n_0$ such that \[ c_- r_n \leq \hat s_n \leq c_+ r_n, \quad \hat c_- \hat r_n \leq s_{n+1} \leq \hat c_+ \hat r_n \] for any $n\geq n_0$, where $c_\pm =(1\pm\epsilon )c$ and $\hat c_\pm =(1\pm \epsilon )\hat c$. Therefore, by induction, together with \eqref{eq:-724}, it is straightforward to see that \begin{align*} &c^{b\Lambda _n}_- \hat c^{\Lambda _n}_- s_{n_0 } ^{(ab)^n}\leq s_{n_0 +n} \leq (c^b_+)^{\Lambda _n} \hat c^{\Lambda _n}_+ s_{n_0 } ^{(ab)^n}, \\ & c_- \left(c_-^{b\Lambda _{n}} \hat c^{\Lambda _n}_- s_{n_0 }^{(ab)^n}\right)^a \leq \hat s_{n_0+n} \leq c_+ \left(c_+^{b\Lambda _{n}} \hat c^{\Lambda _n}_+ s_{n_0 }^{(ab)^n}\right)^a \end{align*} for any $n\geq 0$, where $\Lambda _n =1+ab +\cdots +(ab)^{n-1} = \frac{(ab)^n -1}{ab-1}$. Fix $n\geq n_0$ and write $N:=n_0+n$ to avoid heavy notation. Then, it holds that \begin{align*} & (ab)^n \log \left( s_{n_0} C_- \right) -C_2 \leq \log s_{N} \leq (ab)^n \log \left( s_{n_0} C_+\right) +C_2,\\ & a(ab)^n \log \left( s_{n_0} C_- \right) -C_2 \leq \log \hat s_{N} \leq a(ab)^n \log \left( s_{n_0} C_+ \right) +C_2 \end{align*} with some constant $C_2>0$, where $C_\pm :=c_\pm ^{b/(ab-1)} \hat c_\pm ^{1/(ab-1)} $. Thus, by \eqref{eq:-724} we have \begin{align*} \tau_{N} &\geq \sum _{k=1}^{n-1} \left(-\frac{1}{\alpha _+} - \frac{a}{\beta _+}\right) (ab)^k \log \left( s_{n_0} C_+\right) +C_{n_0} + n C_3 \\ &=- \frac{\alpha _- + \beta _+}{\alpha _- \beta _- -\alpha _+\beta _+} (ab)^{n} \log \left( s_{n_0} C_+\right) +C_{n_0}' + n C_3 \end{align*} with some constants $C_{n_0}$, $C_{n_0}'$ and $C_3$. Furthermore, it follows from \eqref{eq:0724c} that \[ \rho _{N} \geq - \frac{(ab)^n \log \left( s_{n_0} C_+ \right) }{\alpha _+ + \alpha _-} +C_3', \] so that \[ \tau_{N}+\rho _N\geq C_{n_0}^{\prime \prime} +nC_3 + \frac{\alpha _-(\alpha _+ + \beta _+ + \alpha _- + \beta _-)}{( \alpha _+ \beta _+ -\alpha _-\beta _-)(\alpha _+ + \alpha _-)}(ab)^n \log \left( s_{n_0} C_+ \right) \] with some constants $C_3'$, $C_{n_0}^{\prime \prime}$. On the other hand, by \eqref{eq:0724c} it holds that \[ \log \Vert V(f^{\tau _{N} + \rho_{N}}(x,y))\Vert = \log \Vert L(f^{\rho_{N}}(1,s_{N}))\Vert \leq \frac{\alpha _-}{(\alpha _+ +\alpha _-) } (ab)^n \log \left( s_{n_0} C_-\right) + C_4 \] with some constant $C_4$. Therefore, \[ \limsup _{n\to\infty} \frac{1}{\tau _n +\rho _n}\log \Vert D f^{\tau _n +\rho _n} (x,y) V(x,y) \Vert\leq \frac{ \alpha _+ \beta _+ -\alpha _-\beta _- }{\alpha _+ + \beta _+ + \alpha _- + \beta _-} \cdot \frac{\log (s_{n_0}C_-)}{\log (s_{n_0}C_+)}. \] Since $\epsilon$ is arbitrary, we get \eqref{eq:0813a2a} (notice that $\frac{\log (s_{n_0}C_-)}{\log (s_{n_0}C_+)}$ converges to $1$ from below as $\epsilon$ goes to zero). In a similar manner, one can show that \[ \liminf _{n\to\infty} \frac{1}{\tau _n +\rho _n}\log \Vert D f^{\tau _n +\rho _n} (x,y) V(x,y) \Vert \geq \frac{ \alpha _+ \beta _+ -\alpha _-\beta _-}{\alpha _+ + \beta _+ + \alpha _- + \beta _-}, \] and we complete the proof of Proposition \ref{prop:0812c}.
\qed \section{Proof of Theorem \ref{prop:0811}}\label{s:0811b} Let $f$ be the Guarino-Guih\'eneuf-Santiago diffeomorphism of Proposition \ref{prop:GGS} and $N(d)$ the $d$-th return time given in \eqref{eq:0812b1}. Fix $z\in S_{n_{0}}$. By induction with respect to $d$ we first show \eqref{eq:0812d1}. It immediately follows from \eqref{eq:GGSkey} that the first equality of \eqref{eq:0812d1} is true for $d=1$. Then let us assume that the first equality of \eqref{eq:0812d1} is true for a given positive integer $d$. Since $N(2(d+1)-1)=N(2d+1)=N(2d-1)+n(2d)+n(2d+1)$, by the chain rule and the inductive hypothesis, \begin{multline*} Df^{N(2(d+1)-1)}(z)=Df^{n(2d+1)}(f^{N(2d)}(z)) Df^{n(2d)}(f^{N(2d-1)}(z))Df^{N(2d-1)}(z)\\ = \left(\begin{array}{cc} 0 & -\sigma^{2^{2d}n_0 }\\ \sigma^{-2^{2d+1}n_{0}} & 0 \end{array} \right)\left(\begin{array}{cc} 0 & -\sigma^{2^{2d-1}n_0 }\\ \sigma^{-2^{2d}n_{0}} & 0 \end{array} \right)\\ \times (-1)^{d-1} \left(\begin{array}{cc} 0 & -\sigma^{n_0 }\\ \sigma^{-2^{2d-1}n_{0}} & 0 \end{array} \right) \\ =(-1)^{d-1} \left(\begin{array}{cc} 0 & \sigma^{n_0}\\ -\sigma^{-2^{2d+1}n_{0}} & 0 \end{array} \right) =(-1)^{d} \left(\begin{array}{cc} 0 & -\sigma^{n_0}\\ \sigma^{-2^{2d+1}n_{0}} & 0 \end{array} \right). \end{multline*} That is, the first equality of \eqref{eq:0812d1} holds for $d+1$. In a similar manner, by induction with respect to $d$, we can prove the second equality of \eqref{eq:0812d1}. We next prove that $z$ is Lyapunov irregular for any nonzero horizontal vector $v= \left(\begin{array}{c} s\\ 0 \end{array} \right)$. By the first equality of \eqref{eq:0812d1}, we obtain \begin{equation}\label{eq:0813d} \frac{\log \left\Vert Df^{N(2d-1)}(z) v \right\Vert}{N(2d-1)} = \frac{ -2^{2d-1}n_{0} \log\sigma+\log |s|}{(2^{2d-1}-1)n_{0}+(2d-1)k_{0}} \xrightarrow[d\to \infty]{} -\log \sigma. \end{equation} On the other hand, it follows from the second equality of \eqref{eq:0812d1} that \[ \frac{\log \left\Vert Df^{N(2d)}(z) v \right\Vert}{N(2d)} =\frac{\log |s|}{{(2^{2d}-1)n_{0}+(2d)k_{0}}} \xrightarrow[d\to \infty]{} 0. \] In a similar manner, we can show that $z$ is Lyapunov irregular for any nonzero vertical vector. Finally, we will prove \eqref{eq:0812b1c} and \eqref{eq:0812b1c2}, which immediately implies that $z$ is Lyapunov irregular for any nonzero vector $v\not \in \mathbb R \left(\begin{array}{c} 1\\ 0 \end{array} \right) \cup \mathbb R \left(\begin{array}{c} 0\\ 1 \end{array} \right)$. For simplicity, we assume that $\zeta 2^{4d} n_0$ is an integer. Essentially, the proof of \eqref{eq:0812b1c} is included in the discussion so far. Thus, we show \eqref{eq:0812b1c2}. By \eqref{eq:0812d1} and the item (a) of Proposition \ref{prop:GGS}, \begin{align*} Df^{N(4d)+\zeta 2^{ 4d}n_0}(z)&= \left(\begin{array}{cc} \sigma^{-2\zeta \cdot 2^{4d}n_0} & 0\\ 0 & \sigma^{ -(1-\zeta )2^{4d} n_{0} +n_0} \end{array}\right)\\ &=\sigma^{ -(1-\zeta )2^{4d} n_{0} } \left(\begin{array}{cc} \sigma^{(1-3\zeta ) 2^{4d}n_0} & 0\\ 0 & \sigma^{ n_0} \end{array}\right). \end{align*} Fix a vector $v= \left(\begin{array}{c} s\\ u \end{array} \right) $ with $su\neq 0$. If $1-3\zeta \leq 0$, then the limit \[ \lim _{d\to \infty}\left\Vert \left(\begin{array}{cc} \sigma^{(1-3\zeta ) 2^{4d}n_0} & 0\\ 0 & \sigma^{ n_0} \end{array}\right) v\right\Vert
\] exists and is a positive constant (it equals $\sigma^{n_0}\vert u\vert$ when $1-3\zeta <0$). Hence, since $N(4d) +\zeta 2^{4d}n_0 = (1+\zeta )2^{4d}n_0 +(4dk_0 -n_0)$, we get \[ \lim _{d\to \infty}\frac{1}{N(4d) +\zeta 2^{4d}n_0} \log \left\Vert Df^{N(4d) +\zeta 2^{4d} n_0}(z) v \right\Vert = -\frac{1-\zeta }{1+\zeta } \log \sigma . \] On the other hand, if $1-3\zeta > 0$, then \[ \lim _{d\to \infty}\left\Vert \left(\begin{array}{cc} \sigma^{(1-3\zeta ) 2^{4d}n_0} & 0\\ 0 & \sigma^{ n_0} \end{array}\right) v\right\Vert \cdot \sigma^{-(1-3\zeta ) 2^{4d}n_0} =\vert s \vert . \] Thus we get \begin{align*} \lim _{d\to \infty}\frac{1}{N(4d) +\zeta 2^{4d}n_0} \log \left\Vert Df^{N(4d) +\zeta 2^{4d} n_0}(z) v \right\Vert &= \frac{-(1-\zeta ) +(1-3\zeta )}{1+\zeta } \log \sigma \\ &=- \frac{2\zeta }{1+\zeta } \log \sigma . \end{align*} This completes the proof of Theorem \ref{prop:0811}. \qed \section{Proof of Theorem \ref{thm:main}} In this section, we give the proof of Theorem \ref{thm:main}. In Section \ref{s:4.1} we briefly recall a small perturbation of a diffeomorphism with a robust homoclinic tangency introduced by Colli and Vargas \cite{CV2001}. In Section \ref{s:4.2} we establish key lemmas to control the higher order term in \eqref{eq:0812b}, and prove the positivity of Lebesgue measure of Lyapunov irregular sets in Section \ref{s:4.3}. Finally, in Section \ref{s:4.4}, we discuss the Birkhoff (ir)regularity of the set. \subsection{Dynamics}\label{s:4.1} Let us start the proof of Theorem \ref{thm:main} by recalling the Colli-Vargas model with a robust homoclinic tangency introduced in \cite{CV2001}. The reader familiar with this subject can skip this section. Let $M$ be a closed surface including $[-2,2]^2$, and let $g\equiv g_\mu: M\to M$ be a diffeomorphism, with a real number $\mu$, satisfying the following. \begin{itemize} \item (Affine horseshoe) There exist constants $0<\lambda<\frac{1}{2} $ and $\sigma>2$ such that \[ g(x, y)=\left( \pm \sigma \left(x\pm \frac{1}{2}\right) , \pm \lambda y \mp \frac{1}{2}\right)\quad \text{if $\displaystyle \left\vert x\pm \frac{1}{2} \right\vert \leq \frac{1}{\sigma}$, $\vert y\vert \leq 1$} \] and $\lambda\sigma^2<1$; \item (Quadratic tangency) For any $(x,y)$ near a small neighborhood of $(0,-1)$, $$ g ^{2}(x,y)=(\mu -x^2 -y ,x). $$ \end{itemize} Then, it was proven by Newhouse \cite{Newhouse1970} that there is a $\mu$ such that $g$ has a $\mathcal C^2$-robust homoclinic tangency on $\{y=0\}$. See Figure \ref{fig1-1}. \begin{figure}[hbt] \centering \scalebox{0.7}{\includegraphics[clip]{fig1-8v2}} \caption{Colli-Vargas' diffeomorphism} \label{fig1-1} \end{figure} Colli and Vargas showed the following. \begin{thm}[\cite{CV2001}]\label{prop:0812} Let $g $ be the surface diffeomorphism with a robust homoclinic tangency given above.
Then, for any $\mathcal C^r$-neighborhood $\mathcal O$ of $g$ $(2\leq r<\infty)$ and any increasing sequence $(n_k^0)_{k\in \mathbb N}$ of integers satisfying $n_{k}^0 =O((1+\eta )^k) $ with some $\eta >0$, one can find a diffeomorphism $f$ in $\mathcal O$ together with a sequence of rectangles $(R_k)_{k\in \mathbb N}$ and an increasing sequence $(\tilde n_k)_{k\in \mathbb N}$ of integers, satisfying $\tilde n_{k} =O(k) $ and depending only on $\mathcal O$, such that the following holds for each $k\in \mathbb N$ with $n_k:= n_k^0 + \tilde n_k$: \begin{itemize} \item[$\mathrm{(a)}$] $f^{n_k+2} (R_k)\subset R_{k+1}$; \item[$\mathrm{(b)}$] For each $(\tilde x_k+x,y)\in R_k$, \begin{equation*} f^{n_k+2} (\tilde x_k+x,y) = (\tilde x_{k+1}-\sigma^{2n_k}x^2\mp \lambda^{n_k}y, \pm \sigma^{n_k}x), \end{equation*} where $(\tilde x_k,0)$ is the center of $R_k$. \end{itemize} \end{thm} Refer to the ``Conclusion'' given on p.~1674 and the ``Rectangle lemma'' and its proof given on pp.~1975--1976 of the paper \cite{CV2001}, where the notation $R_k$ was used to denote a slightly different object that we will not use, and our $R_k$ was written as $R_k^*$. See Remark \ref{rmk:0911c} and Theorem \ref{thm:0911b} for more information. By the coordinate translation $T_k: (x,y) \mapsto (x-\tilde x_k, y)$, which sends $(\tilde x_k,0)$ to $(0,0)$, the action of $ f^{n_k+2}\vert _{R_k}$ can be rewritten as \begin{equation}\label{map0} F_k: \left( \begin{array}{c} x \\ y \\ \end{array} \right) \mapsto \left( \begin{array}{c} -\sigma^{2n_k}x^2 \mp \lambda^{n_k}y \\ \pm \sigma^{n_k}x \\ \end{array} \right), \end{equation} which sends $(0,0)$ to $(0,0)$, that is, \[ f^{n_k+2}(x,y) =T_{k+1}^{-1} \circ F_k \circ T_k(x,y) \quad \text{for every $(x,y)\in R_k$}. \] Note that for each $l\geq k$, \[ f^{n_l+2} \circ f^{n_{l-1}+2} \circ \cdots \circ f^{n_k+2} =T_{l+1}^{-1} \circ \left(F_l\circ F_{l-1} \circ \cdots \circ F_k \right) \circ T_k, \] so the oscillation of $( \frac{1}{n} \log \Vert Df^n (\boldsymbol{x}) \boldsymbol{v}\Vert )_{n\in \mathbb N}$ for each $\boldsymbol{x}\in R_k$ with some $k$ and each nonzero vector $\boldsymbol{v}$ in an open set follows from the oscillation of \[ \left( \frac{1}{(n_k +2 ) + \cdots + (n_{l-1} +2) + (n_l +2)} \log \Vert D\left(F_l\circ F_{l-1} \circ \cdots \circ F_k \right)(\boldsymbol{x}) \boldsymbol{v}\Vert \right) _{l\in \mathbb N} \] for each $\boldsymbol{x}\in T_k(R_k)$ and each nonzero vector $\boldsymbol{v}$ in the open set, which we will show in the following. \subsection{Key lemmas}\label{s:4.2} First, let us fix some constants in advance. Fix a small neighborhood $\mathcal O$ of $g$, and let $(\tilde n_k) _{k\in \mathbb N}$ be the sequence given in Theorem \ref{prop:0812}. Notice that $\lambda\sigma<\lambda\sigma^2<1$. Take a sufficiently small $\eta>0$ and a sufficiently large integer $n_0\geq 2$ so that \begin{equation*} \lambda\sigma^{\frac{1+3\eta+8n_0^{-1}}{1-\eta}}<1, \end{equation*} and fix $1<\alpha<\beta<1+\eta$ such that \begin{equation}\label{Q02} \lambda\sigma^{\frac{6\beta-4+8n^{-1}_0}{2-\beta}}<1, \quad \alpha^2 \beta^2<2 \quad \mbox{and} \quad \lambda\sigma^{\alpha}<1. \end{equation} Let $(n_k^0)_{k\in \mathbb N}$ be an increasing sequence of integers given by \begin{equation}\label{eq:0915f} n_{2p}^0=\lfloor n_0\alpha^p\beta^p\rfloor -\tilde n_{2p},\quad n_{2p+1}^0=\lfloor n_0\alpha^{p+1}\beta^p\rfloor -\tilde n_{2p+1}, \end{equation} which are natural numbers for each $p$ by increasing $n_0$ if necessary.
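Such constants do exist. The following short check in Python (the sample values of $\lambda$, $\sigma$, $\eta$, $n_0$, $\alpha$, $\beta$ are assumptions for illustration, and the correction terms $\tilde n_k$ are ignored) verifies \eqref{Q02}, the growth condition $n^0_{k+1}/n^0_k<1+\eta$ required in Theorem \ref{prop:0812}, and the bounds \eqref{Q05} on $b_k$ established just below.

\begin{verbatim}
import math

lam, sig, eta, n0 = 0.1, 2.5, 0.05, 100
alpha, beta = 1.02, 1.03                   # 1 < alpha < beta < 1 + eta

assert 0 < lam < 0.5 and sig > 2 and lam * sig**2 < 1
assert lam * sig**((1 + 3*eta + 8/n0) / (1 - eta)) < 1
assert lam * sig**((6*beta - 4 + 8/n0) / (2 - beta)) < 1   # (Q02)
assert alpha**2 * beta**2 < 2 and lam * sig**alpha < 1     # (Q02)

def n_(k):   # n_k, cf. (eq:0915f), with the tilde-terms absorbed
    p = k // 2
    return math.floor(n0 * alpha**(p + k % 2) * beta**p)

assert all(n_(k + 1) / n_(k) < 1 + eta for k in range(40))

k = 6        # spot-check the bounds (Q05): 4 n_k < E < 4(n_k + 1)/(2 - beta)
E = 2 * n_(k) + sum(n_(k + 1 + i) / 2.0**i for i in range(60))  # -log_sigma(b_k)
print(4 * n_(k), "<", E, "<", 4 * (n_(k) + 1) / (2 - beta))
\end{verbatim}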
Since $x-1< \lfloor x\rfloor \le x$ and $\tilde n_k =O(k)$, by increasing $n_0$ if necessary, we have \begin{align*} \dfrac{n_{2p+1}^0}{n_{2p}^0}&<\dfrac{n_0\alpha^{p+1}\beta^p - \tilde n_{2p+1}}{n_0\alpha^p\beta^p-1- \tilde n_{2p}} < \alpha+\dfrac{\alpha (1+ \tilde n_{2p} ) }{n_0\alpha^p\beta^p-1-\tilde n_{2p}}<1+\eta,\\ \dfrac{n_{2p+2}^0}{n_{2p+1}^0}&<\dfrac{n_0\alpha^{p+1}\beta^{p+1} - \tilde n_{2p+2}}{n_0\alpha^{p+1}\beta^p-1 - \tilde n_{2p+1}}<\beta+\dfrac{\beta (1+ \tilde n_{2p+1})}{n_0\alpha^{p+1}\beta^p- 1 - \tilde n_{2p+1}}<1+\eta , \end{align*} so it holds that $n_k^0 = O((1+\eta ) ^k)$, which is the only requirement to apply Theorem \ref{prop:0812}. Set $ n_k = n_k^0 + \tilde n_k$; then we obviously have \begin{equation*} n_{2p} =\lfloor n_0\alpha^p\beta^p\rfloor ,\quad n_{2p+1} =\lfloor n_0\alpha^{p+1}\beta^p\rfloor . \end{equation*} Define sequences $(b_k)_{k\in \mathbb N}$ and $(\varepsilon _k)_{k\in \mathbb N}$ of positive numbers by \begin{equation*} b_k=\sigma^{-\sum_{i=-1}^{+\infty}\frac{n_{k+1+i}}{2^i}} \end{equation*} and \begin{equation*} \varepsilon_k =\Big(\lambda\sigma^{\frac{6\beta-4+8n^{-1}_k}{2-\beta}}\Big)^{n_k}. \end{equation*} \begin{remark}\label{rmk:0911c} Define $\tilde b_k$ by \[ \tilde b_k=\sigma^{-\sum_{i=0}^{+\infty}\frac{n_{k+1+i}}{2^i}}, \] then $R_k$ of Theorem \ref{prop:0812} is of the form \[ R_k =\left[\tilde x_k - c_k \tilde b_k , \tilde x_k + c_k \tilde b_k \right] \times \left[-20 \tilde b_k^{\frac{1}{2}}, 20\tilde b_k^{\frac{1}{2}}\right] \] with some constant $c_k$ satisfying that \[ \frac{1}{2} \leq c_k \leq 10, \] see the ``Rectangle lemma'' and its proof given on pp.~1975--1976 of \cite{CV2001} (as previously mentioned, in the paper our $R_k$ is written as $R_k^*$ and the notation $R_k$ is used for another object). Note that $b_k<\tilde b_k$. Thus, $F_k$ in \eqref{map0} is well-defined on any rectangle of the form \[ \left[- c b_k , c b_k \right] \times \left[- c \sqrt{b_k}, c \sqrt{b_k }\right] \quad \text{with $\displaystyle 0< c \leq \frac{1}{2}$}. \] In the paper \cite{CV2001} the notation $b_k$ was used to denote $\tilde b_k$, but this positive number is not explicitly used in the following argument, so we defined $b_k$ as above for notational simplicity. \end{remark} By the construction of $(n_k) _{k\in \mathbb N}$, we have that $n_l / (n_k +1) < \beta ^{l -k}$ for each $k\leq l$. Hence, since $n_k$ is increasing, \begin{align*} 4n_k&<2n_k+n_{k+1}+ \frac{n_{k+2}}{2} + \cdots \\ &< 2(n_k +1) \left(1+ \frac{\beta}{2} + \frac{\beta ^2}{2^2} +\cdots \right) = \dfrac{4(n_k+1)}{2-\beta}. \end{align*} Therefore we have \begin{align}\label{Q05} \sigma^{-\frac{4(n_k+1)}{2-\beta}} < b_k < \sigma^{-4n_k}\quad \text{for each $k\in\mathbb{N}$} \end{align} and \begin{equation}\label{Q06} b_{k+1} > \sigma^{-\frac{4(n_{k+1}+1)}{2-\beta}}>\Bigg\{ \begin{array}{ll} \sigma^{-\frac{4(\alpha n_k+2)}{2-\beta}} &\quad \text{if $k$ is even},\\ \sigma^{-\frac{4(\beta n_k+2)}{2-\beta}} &\quad \text{if $k$ is odd}. \end{array} \end{equation} Furthermore, it follows from \eqref{Q02} that $\varepsilon_k $ can be arbitrarily small by taking $k$ sufficiently large, so there exists a positive integer $k_0$ such that for any $k\geq k_0$ and $p\geq 0$, we get \begin{equation*} 2\alpha^p\beta^p-n_k^{-1}+\dfrac{\log 2}{\log\varepsilon_k}>\alpha^{p+2}\beta^{p+2} . \end{equation*} Fix such a $k_0$.
With this choice of $k_0$, it immediately holds that for any $k\geq k_0$ and $p\geq 0$,
\begin{equation}\label{Q121}
\varepsilon_k^{2\alpha^p\beta^p}<\varepsilon_k^{2\alpha^p\beta^p-n_k^{-1}}<\dfrac{1}{2}\varepsilon_k^{\alpha^{p+2}\beta^{p+2}}<\dfrac{1}{2}\varepsilon_k^{\alpha^{p+1}\beta^{p+1}}.
\end{equation}
In the following lemmas, we only consider the case when $k$ is an even number, because this suffices to prove Theorem \ref{thm:main} and makes the statements a bit simpler; similar estimates hold when $k$ is odd. We first show the following.
\begin{lem}\label{lem1}
For every even number $k\geq k_0$, $p\in \mathbb N \cup \{0\}$ and $j \in \{ 0 , 1\}$,
\begin{align*}
\lambda^{n_{k+2p+j}}\sqrt{b_{k+2p+j}}\le \varepsilon_k^{\alpha^{p+j}\beta^p-n_k^{-1}}b_{k+2p+1+j}.
\end{align*}
\end{lem}
\begin{proof}
Fix an even number $k\geq k_0$. We will prove this lemma by induction with respect to $p$. For the case $p=0$, it follows from \eqref{Q02}, \eqref{Q05} and \eqref{Q06} that
\begin{equation*}
\lambda^{n_k}\sqrt{ b_k}\le \lambda^{n_k}\sigma^{-2n_k}\le \Big(\lambda\sigma^{\frac{6\beta-4+8n_k^{-1}}{2-\beta}}\Big)^{n_k}\sigma^{-\frac{4(\alpha n_k+2)}{2-\beta}}<\varepsilon_k^{1-n_k^{-1}}\cdot b_{k+1},
\end{equation*}
and since $\frac{n_{k+1}}{n_k} \geq \frac{n_0 \alpha ^{k/2 +1}\beta ^{k/2} -1}{n_0 \alpha ^{k/2 }\beta ^{k/2} }\geq \alpha -\frac{1}{n_k}$ by the construction of $n_k$,
\begin{equation*}
\begin{split}
\lambda^{n_{k+1}}\sqrt {b_{k+1}} &\le \lambda^{n_{k+1}}\sigma^{-2n_{k+1}}\\
&\le \Big(\lambda\sigma^{\frac{6\beta-4+8n_{k+1}^{-1}}{2-\beta}}\Big)^{n_{k}\cdot\frac{n_{k+1}}{n_k}}\sigma^{-\frac{4(\beta n_{k+1}+2)}{2-\beta}}\le\varepsilon_k^{\alpha-n_k^{-1}}\cdot b_{k+2}.
\end{split}
\end{equation*}
Next we assume that the assertion of Lemma~\ref{lem1} is true for a given $p\in \mathbb N\cup \{0\}$. Then we have
\begin{align*}
\lambda^{n_{k+2p+2}}\sqrt{b_{k+2p+2}} &\le \lambda^{n_{k+2p+2}}\sigma^{-2n_{k+2p+2}}\\
&\le \Big(\lambda\sigma^{\frac{6\beta-4+8n_{k+2p+2}^{-1}}{2-\beta}}\Big)^{n_{k}\cdot\frac{n_{k+2p+2}}{n_k}}\sigma^{-\frac{4(\alpha n_{k+2p+2}+2)}{2-\beta}}\\
& <\varepsilon_k^{\alpha^{p+1}\beta^{p+1}-n_k^{-1}}\cdot b_{k+2p+3}
\end{align*}
and
\begin{align*}
\lambda^{n_{k+2p+3}}\sqrt{b_{k+2p+3}} &\le \lambda^{n_{k+2p+3}}\sigma^{-2n_{k+2p+3}}\\
&\le \Big(\lambda\sigma^{\frac{6\beta-4+8n_{k+2p+3}^{-1}}{2-\beta}}\Big)^{n_{k}\cdot\frac{n_{k+2p+3}}{n_k}}\sigma^{-\frac{4(\beta n_{k+2p+3}+2)}{2-\beta}}\\
& <\varepsilon_k^{\alpha^{p+2}\beta^{p+1}-n_k^{-1}}\cdot b_{k+2p+4}.
\end{align*}
That is, the assertion of Lemma~\ref{lem1} with $p+1$ instead of $p$ is also true. This completes the induction and the proof of Lemma~\ref{lem1}.
\end{proof}
Define a sequence $(U_{k,m})_{m \geq 0}$ of rectangles with $k\geq k_0$ by
\begin{equation}\label{eq:0915e2}
U_{k,m}=\left\{(x,y):\ |x|\leq \varepsilon_k^{(\alpha\beta)^{\lfloor \frac{m}{2} \rfloor }}b_{k+m},\ |y|\leq \varepsilon_k^{(\alpha\beta)^{\lfloor \frac{m}{2}\rfloor }}\sqrt{b_{k+m}}\right\}
\end{equation}
for each integer $m\geq 0$. Then, by Remark \ref{rmk:0911c}, $U_{k,m}$ is included in $R_{k+m}$ for any large $m$ (under the translation of $(\tilde x_{k+m},0)$ to $(0,0)$), on which $F_{k+m}$ in \eqref{map0} is well-defined. Then we have the following.
\begin{lem}\label{lem2}
For any even number $k\geq k_0$, $m\in \mathbb N\cup \{0\}$ and $\boldsymbol{x} \in U_{k,0}$,
\[
F_{k+m-1} \circ F_{k+m-2} \circ \cdots \circ F_{k}(\boldsymbol{x}) \in U_{k, m}.
\]
\end{lem}
\begin{proof}
Fix an even number $k\geq k_0$, an integer $m\geq 0$ and $\boldsymbol{x} \in U_{k,0}$, and set $ \boldsymbol{x}_{k,m} := F_{k+m-1} \circ F_{k+m-2} \circ \cdots \circ F_{k}(\boldsymbol{x}), $ where $\boldsymbol{x}_{k,0}$ is interpreted as $ \boldsymbol{x}$ so that $\boldsymbol{x}_{k,0}\in U_{k,0}$. Denote the first and second coordinates of $\boldsymbol{x}_{k,m} $ by $x_{k,m}$ and $y_{k,m}$, respectively. We will show that $(x_{k,m} ,y_{k,m}) \in U_{k,m}$ by induction with respect to $m\in \mathbb N\cup \{0\}$. We first show that $(x_{k,m},y_{k,m})\in U_{k,m}$ for $m=1$. It holds that
\begin{align*}
|x_{k,1}| =|-\sigma^{2n_{k }}x_{k,0}^2 \mp \lambda^{n_{k}}y_{k,0}| \le \sigma^{4n_{k }} \varepsilon_k ^2b_{k }^2+\lambda^{n_{k }} \varepsilon_k \sqrt{b_{k }} \le \varepsilon_k ^2b_{k +1}+ \varepsilon_k^{2 -n_k^{-1}}b_{k +1}.
\end{align*}
In the last inequality, the first term is due to the equality $\sigma^{4n_k}b_k^2= b_{k+1}$ implied by the definition of $b_k$, and the second term comes from Lemma~\ref{lem1}. Hence, it follows from (\ref{Q121}) that
\begin{equation*}
|x_{k,1}|\le\dfrac{1}{2}\varepsilon_k^{\alpha\beta }b_{k+1}+\dfrac{1}{2}\varepsilon_k^{ \alpha\beta } b_{k+1}=\varepsilon_k^{ \alpha\beta }b_{k+1}\le \varepsilon_k b_{k+1}
\end{equation*}
and
\begin{equation*}
|y_{k,1}|=|\sigma^{n_{k}}x_{k,0}|\le \sigma^{2n_{k}} \varepsilon_k b_{k}=\varepsilon_k \sqrt{b_{k+1}},
\end{equation*}
which concludes that $(x_{k,1},y_{k,1})\in U_{k,1}$. Next we assume that $(x_{k,m},y_{k,m})\in U_{k,m}$ for $m = 2p$ and $2p+1$ with a given integer $p\geq 0$. In addition, we assume (as an inductive hypothesis) that
\begin{equation*}
|x_{k,2p+1}|\le \varepsilon_k^{(\alpha\beta)^{p+1}}b_{k+2p+1},
\end{equation*}
which indeed holds in the case when $p=0$ as seen above. Then it holds that
\begin{align*}
|x_{k,2p+2}| &=|-\sigma^{2n_{k+2p+1}}x_{k,2p+1}^2\mp \lambda^{n_{k+2p+1}}y_{k,2p+1}|\nonumber\\
&\le \sigma^{4n_{k+2p+1}}\varepsilon_k^{2(\alpha\beta)^{p}}b_{k+2p+1}^2+\lambda^{n_{k+2p+1}}\varepsilon_k^{(\alpha\beta)^{p}}\sqrt{b_{k+2p+1}}\nonumber\\
&\le \varepsilon_k^{2(\alpha\beta)^{p}}b_{k+2p+2}+\varepsilon_k^{(\alpha\beta)^{p}}\cdot \varepsilon_k^{\alpha^{p+1}\beta^p-n_k^{-1}}b_{k+2p+2}\\
&\le\dfrac{1}{2}\varepsilon_k^{(\alpha\beta)^{p+1}}b_{k+2p+2}+\dfrac{1}{2}\varepsilon_k^{(\alpha\beta)^{p+1}} b_{k+2p+2}=\varepsilon_k^{(\alpha\beta)^{p+1}}b_{k+2p+2},\\
|y_{k,2p+2}| &=|\sigma^{n_{k+2p+1}}x_{k,2p+1}|\\
&\le \sigma^{2n_{k+2p+1}}\varepsilon_k^{(\alpha\beta)^{p+1}}b_{k+2p+1}=\varepsilon_k^{(\alpha\beta)^{p+1}}\sqrt{b_{k+2p+2}},\\
|x_{k,2p+3}| &=|-\sigma^{2n_{k+2p+2}}x_{k,2p+2}^2 \mp \lambda^{n_{k+2p+2}}y_{k,2p+2}|\nonumber\\
&\le \sigma^{4n_{k+2p+2}}\varepsilon_k^{2(\alpha\beta)^{p+1}}b_{k+2p+2}^2+\lambda^{n_{k+2p+2}}\varepsilon_k^{(\alpha\beta)^{p+1}}\sqrt{b_{k+2p+2}}\nonumber\\
&\le \varepsilon_k^{2(\alpha\beta)^{p+1}}b_{k+2p+3}+\varepsilon_k^{(\alpha\beta)^{p+1}}\cdot \varepsilon_k^{(\alpha\beta)^{p+1}-n_k^{-1}}b_{k+2p+3}\\
&\le\dfrac{1}{2}\varepsilon_k^{(\alpha\beta)^{p+3}}b_{k+2p+3}+\dfrac{1}{2}\varepsilon_k^{(\alpha\beta)^{p+3}} b_{k+2p+3}\\
& \le \varepsilon_k^{(\alpha\beta)^{p+2}}b_{k+2p+3} \le \varepsilon_k^{(\alpha\beta)^{p+1}}b_{k+2p+3},\\
|y_{k,2p+3}|&=|\sigma^{n_{k+2p+2}}x_{k,2p+2}|\\
&\le \sigma^{2n_{k+2p+2}}\varepsilon_k^{(\alpha\beta)^{p+1}}b_{k+2p+2}=\varepsilon_k^{(\alpha\beta)^{p+1}}\sqrt{b_{k+2p+3}}.
\end{align*}
This shows that $(x_{k,m},y_{k,m})\in U_{k,m}$ for $m=2p+2$ and $2p+3$, which completes the proof of Lemma~\ref{lem2}.
\end{proof} Since $0<\lambda\sigma^{\frac{6\beta-4+8n_k^{-1}}{2-\beta}}<1$ for any $k\geq 0$ by \eqref{Q02}, there exists a positive integer $m'$ such that \begin{equation*} \log\lambda\sigma^\alpha>(\alpha\beta)^{\frac{m'}{2}}\log\Big(\lambda\sigma^{\frac{6\beta-4+8n_k^{-1}}{2-\beta}}\Big) \end{equation*} for any $k\geq 0$. Fix such an $m'$. Fix also a real number $\xi\in(0,1)$. \begin{lem}\label{lem4} There exist positive integers $k_1\geq k_0$ and $m_0$ such that for any even number $k\geq k_1$, any integer $m\ge m_0$ and any $\boldsymbol{x} \in U_{k,0}$, \[ 2|x_{k,m}|\sigma ^{2n_{k+m}}\le \xi\lambda^{n_{k+m}}, \] where $x_{k,m}$ is the first coordinate of $ F_{k+m-1} \circ F_{k+m-2} \circ \cdots \circ F_{k}(\boldsymbol{x} )$. \end{lem} \begin{proof} Since $\varepsilon _k$ goes to zero as $k\to \infty$, there exists an even number $k_1\geq k_0$ such that \[ \varepsilon _k \leq \varepsilon_{k_0}^{(\alpha\beta)^{\frac{m'+2}{2}}} \] for any $k\geq k_1$. Recall that $k_0$ is an even number. Note that \[ n_k^{-1}(\alpha\beta)^{-\frac{m+1}{2}}(\log (2\lambda^{-1}\sigma)-\log\xi) \to 0 \quad \text{as $m\to \infty$}, \] so by the choice of $m'$, there exists an $m_0\in\mathbb{N}$ such that for every $m\ge m_0$, \begin{equation*} \log\lambda\sigma^\alpha\ge(\alpha\beta)^{\frac{m'}{2}}\log\Big(\lambda\sigma^{\frac{6\beta-4+8n_{k_0}^{-1}}{2-\beta}}\Big)+n_{k_0}^{-1}(\alpha\beta)^{-\frac{m+1}{2}}(\log (2\lambda^{-1}\sigma)-\log\xi). \end{equation*} Multiply the inequality by $(\alpha\beta)^{\frac{m+1}{2}}$, then we get \[ (\alpha\beta)^{\frac{m+1}{2}}\log\lambda\sigma^\alpha+n_{k_0}^{-1}\log\xi \ge (\alpha\beta)^{\frac{m'+m+1}{2}}\log\Big(\lambda\sigma^{\frac{6\beta-4+8n_{k_0}^{-1}}{2-\beta}}\Big)+n_{k_0}^{-1}\log (2\lambda^{-1}\sigma). \] Hence, it follows that \[ \xi^{n_{k_0}^{-1}}(\lambda\sigma^\alpha)^{(\alpha\beta)^{\lceil \frac{m}{2}\rceil}}\ge \xi^{n_{k_0}^{-1}}(\lambda\sigma^\alpha)^{(\alpha\beta)^{\frac{m+1}{2}}}\ge (2\lambda^{-1}\sigma)^{n_{k_0}^{-1}}\Big(\lambda\sigma^{\frac{6\beta-4+8n_{k_0}^{-1}}{2-\beta}}\Big)^{(\alpha\beta)^{\frac{m'+m+1}{2}}} \] because $\frac{m+1}{2}\ge\lceil\frac{m}{2}\rceil$, where $\lceil x\rceil$ denotes the smallest integer which is larger than or equal to $x$. Raise the above inequality to the $n_{k_0}$-th power, together with \eqref{Q02}, then we have \begin{align*} \lambda\sigma^{-1}\xi(\lambda\sigma^\alpha)^{n_{k_0}(\alpha\beta)^{\lceil \frac{m}{2}\rceil}} &\ge 2\Big( \lambda\sigma^{\frac{6\beta-4+8n_{k_0}^{-1}}{2-\beta}} \Big)^{n_{k_0}(\alpha\beta)^{\frac{m'+m+1}{2}}} = 2\big(\varepsilon_{k_0}^{(\alpha\beta)^{\frac{m'+2}{2}}}\big)^{(\alpha\beta)^{\frac{m-1}{2}}} \\ &\ge 2\big(\varepsilon_{k_0}^{(\alpha\beta)^{\frac{m'+2}{2}}}\big) ^{(\alpha\beta)^{\lfloor\frac{m}{2}\rfloor}} \ge 2 \varepsilon_k ^{(\alpha\beta)^{\lfloor\frac{m}{2}\rfloor}} \end{align*} for any $k\geq k_1$. Fix an even number $k\geq k_1$ and an integer $m\geq m_0$. 
Then, due to Lemma~\ref{lem2}, the definition of $b_{k+m}$, (\ref{Q05}) and the above inequality, we have \begin{align}\label{eq:0914c} \notag2|x_{k,m}|\sigma^{2n_{k+m}} &\le 2\varepsilon_ k^{(\alpha\beta)^{\lfloor\frac{m}{2}\rfloor}}b_{k+m}\sigma^{2n_{k+m}}\\ \notag &=2\varepsilon_k ^{(\alpha\beta)^{\lfloor\frac{m}{2}\rfloor}}\sqrt{b_{k+m+1}}\\ & \le \notag 2\varepsilon_k^{(\alpha\beta)^{\lfloor\frac{m}{2}\rfloor}}\sigma^{-2n_{k+m+1}} \le \notag 2\varepsilon_k^{(\alpha\beta)^{\lfloor\frac{m}{2}\rfloor}}\sigma^{-n_{k+m+1}}\\ &\le \lambda\sigma^{-1}\xi(\lambda\sigma^\alpha)^{n_k(\alpha\beta)^{\lceil \frac{m}{2}\rceil}}\sigma^{-n_{k+m+1}}. \end{align} On the other hand, when $m=2p$, \[ \lambda^{n_{k+m}}>\lambda^{n_k\alpha^p\beta^p+1} \quad \mbox{and}\quad \sigma^{n_{k+m+1}}>\sigma^{n_k\alpha^{p+1}\beta^p-1}, \] thus \[ \lambda^{n_{k+m}}\sigma^{n_{k+m+1}}>\lambda\sigma^{-1}(\lambda\sigma^\alpha)^{n_k\alpha^p\beta^p}. \] Similarly, when $m=2p+1$, \[ \lambda^{n_{k+m}}>\lambda^{n_k\alpha^{p+1}\beta^p+1} \ \mbox{and}\ \sigma^{n_{k+m+1}}>\sigma^{n_k\alpha^{p+1}\beta^{p+1}-1}>\sigma^{n_k\alpha^{p+2}\beta^p-1}, \] thus \[ \lambda^{n_{k+m}}\sigma^{n_{k+m+1}}>\lambda\sigma^{-1}(\lambda\sigma^\alpha)^{n_k\alpha^{p+1}\beta^p}>\lambda\sigma^{-1}(\lambda\sigma^\alpha)^{n_k\alpha^{p+1}\beta^{p+1}}. \] Therefore, we have \begin{equation*} \lambda^{n_{k+m}}=(\lambda^{n_{k+m}}\sigma^{n_{k+m+1}})\sigma^{-n_{k+m+1}} \ge \lambda\sigma^{-1}(\lambda\sigma^\alpha)^{n_k(\alpha\beta)^{\lceil m/2\rceil}} \sigma^{-n_{k+m+1}}. \end{equation*} Combining this estimate with \eqref{eq:0914c}, we get \[ 2|x_{k,m}|\sigma ^{2n_{k+m}}\le \xi\lambda^{n_{k+m}}, \] which completes the proof of Lemma~\ref{lem4}. \end{proof} \subsection{Lyapunov irregularity}\label{s:4.3} Let $k_1$ and $m_0$ be integers given in the previous subsection, and we fix even numbers $k\geq k_1$ and $m\geq m_0$ throughout this subsection. Fix $\boldsymbol{x}\in U_{k,0}$ and define $\boldsymbol{x}_{k, m+j} = (x_{k,m+j},y_{k,m+j})$ for each $j\geq 0$ by \[ \boldsymbol{x}_{k,m+j} := F_{k+m +j -1} \circ F_{k+m +j -2} \circ \cdots \circ F_{k}(\boldsymbol{x}) . \] Recall Lemma \ref{lem4} for $\xi \in (0,1)$, and set \begin{equation*} K :=\dfrac{1}{3\xi }. \end{equation*} Fix also a vector $\boldsymbol{v}_0=(v_0,w_0) \in T_{\boldsymbol{x}_{k,m} }M$ with \begin{equation}\label{eq:0915c} K^{-1} \leq \frac{|v_0|}{|w_0|}\leq K, \end{equation} and inductively define $\boldsymbol{v}_{j}=(v_{j},w_{j}) $ for each $j\geq 0$ by \[ \boldsymbol{v}_{j+1} := DF_{k+m +j }(\boldsymbol{x}_{k, m+j}) \boldsymbol{v}_{j}. \] For notational simplicity, we below use \[ \kappa := k +m \] and \[ (n_p;n_{p+2q}):=n_p+n_{p+2}+n_{p+4}+\cdots +n_{p+2q} \] for each $p, q\in\mathbb{N}$. For simplicity, we let $(n_p;n_{p-2})=0$ for $p\in \mathbb N$. 
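For instance, $(n_\kappa ;n_{\kappa +4})=n_\kappa +n_{\kappa +2}+n_{\kappa +4}$: the sum runs over every other index, so that $(n_p;n_{p+2q})$ always consists of $q+1$ terms, and the convention $(n_p;n_{p-2})=0$ is the corresponding empty sum.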
\begin{lem}\label{lem5} There exist constants $C_j $ ($j =-2, -1, \ldots $) such that \begin{align*} \boldsymbol{v}_{2p}&=\left( \begin{array}{c} v_{2p} \\ w_{2p} \\ \end{array} \right)=\left( \begin{array}{c} C_{2p-1}\lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa };n_{\kappa +2p-2})}v_0 \\ \pm C_{2p-2}\lambda^{(n_{\kappa };n_{\kappa +2p-2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}w_0\\ \end{array} \right)\\ \boldsymbol{v}_{2p+1}&=\left( \begin{array}{c} v_{2p+1} \\ w_{2p+1} \\ \end{array} \right)=\left( \begin{array}{c} C_{2p}\lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}w_0 \\ \pm C_{2p-1}\lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}v_0\\ \end{array} \right) \end{align*} for every $p\geq 0$, and that $\dfrac{1}{2}\leq |C_j| \leq \dfrac{3}{2}$ for every $j\geq -2$. \end{lem} \begin{proof} We prove Lemma \ref{lem5} by induction. We first show the claim for $p=0$. The formula for $\boldsymbol{v}_0$ obviously holds with $C_{-2}=C_{-1}=1$. Due to \eqref{map0}, we have \begin{equation*} DF_\kappa (\boldsymbol{x}_{k,m} )= \left( \begin{array}{cc} -2\sigma^{2n_\kappa }x_{k,m} &\mp \lambda^{n_\kappa } \\ \pm \sigma^{n_\kappa } & 0 \\ \end{array} \right), \end{equation*} and \[ \left( \begin{array}{c} v_1 \\ w_1 \\ \end{array} \right) = DF_\kappa (\boldsymbol{x}_{k,m} )\left( \begin{array}{c} v_0 \\ w_0 \\ \end{array} \right)=\left( \begin{array}{c} -2x_{k,m} \sigma^{2n_{\kappa }}v_0\mp \lambda^{n_{\kappa }}w_0 \\ \pm \sigma^{n_{\kappa }}v_0 \\ \end{array} \right). \] By Lemma~\ref{lem4}, \eqref{eq:0915c} and the definition of $K$, \[ |2x_{k,m} \sigma^{2n_{\kappa }}v_0|\le \xi\lambda^{n_{\kappa }}|v_0|\le\xi K \lambda^{n_{\kappa }}|w_0| =\frac{1}{3} \lambda^{n_{\kappa }}|w_0|. \] In other words, \[ \left( \begin{array}{c} v_1 \\ w_1 \\ \end{array} \right)=\left( \begin{array}{c} C_0\lambda^{n_{\kappa }}w_0 \\ \pm C_{-1}\sigma^{n_{\kappa }}v_0 \\ \end{array} \right), \] with a constant $C_0$ satisfying \begin{equation}\label{eq:0915d4} 1-\frac{1}{3} \le|C_0|\le 1+\frac{1}{3}. \end{equation} Next we assume that the claim is true for a given $p\geq 0$, and will show the claim with $p+1$ instead of $p$. Note that \begin{align*} \boldsymbol{v}_{2p+2} &= \left( \begin{array}{c} v_{2p+2} \\ w_{2p+2} \\ \end{array} \right) =DF_{\kappa + 2p +1}(\boldsymbol{x}_{k ,m + 2p+1}) \left( \begin{array}{c} v_{2p+1} \\ w_{2p+1} \\ \end{array} \right)\\ &=\left( \begin{array}{c} -2x_{k ,m + 2p+1}\sigma^{2n_{\kappa + 2p +1}}v_{2p+1} \mp \lambda^{n_{\kappa + 2p +1}}w_{2p+1} \\ \pm \sigma^{n_{\kappa + 2p +1}}v_{2p+1} \\ \end{array} \right), \end{align*} whose first coordinate is \begin{multline*} - 2x_{k ,m + 2p+1}\sigma^{2n_{\kappa + 2p +1}} C_{2p}\lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}w_0 \\ \mp C_{2p-1}\lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}v_0 , \end{multline*} and second coordinate is \begin{align*} \pm C_{2p} \lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p+1})}w_0, \end{align*} by the inductive hypothesis. 
On the other hand, it follows from Lemma~\ref{lem4}, \eqref{eq:0915c} and the monotonicity of $(n_l)_{l\in \mathbb N}$ that the absolute value of the first term of the first coordinate is bounded by \begin{align*} & \xi\lambda^{n_{\kappa + 2p +1}}|C_{2p}| \lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}|w_0|\\ & \quad \leq \xi K \lambda^{n_{\kappa + 2p +1}} |C_{2p}| \lambda^{(n_{\kappa -1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa +2};n_{\kappa +2p})}|v_0| \\ & \quad = \frac{1}{3} \frac{\lambda ^{n_{\kappa -1}}}{\sigma ^{n_\kappa }} |C_{2p}| \lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}|v_0|\\ & \quad \leq \frac{1}{3} |C_{2p}| \lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}|v_0|. \end{align*} Hence, we can write $\boldsymbol{v}_{2p+2}$ as \[ \left( \begin{array}{c} v_{2p+2} \\ w_{2p+2} \\ \end{array} \right) =\left( \begin{array}{c} C_{2p+1}\lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}v_0 \\ \pm C_{2p} \lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p+1})} w_0 \\ \end{array} \right), \] with a constant $C_{2p +1}$ satisfying that \begin{equation}\label{eq:0915d2} |C_{2p-1}|- \frac{1}{3} |C_{2p}| \leq |C_{2p+1}| \leq |C_{2p-1}|+ \frac{1}{3} |C_{2p}|. \end{equation} Similarly, \begin{align*} \boldsymbol{v}_{2p+3} &= \left( \begin{array}{c} v_{2p+3} \\ w_{2p+3} \\ \end{array} \right) =DF_{\kappa + 2p +2}(\boldsymbol{x}_{k ,m + 2p+2}) \left( \begin{array}{c} v_{2p+2} \\ w_{2p+2} \\ \end{array} \right)\\ &=\left( \begin{array}{c} - 2x_{k ,m + 2p+2}\sigma^{2n_{\kappa + 2p +2}}v_{2p+2}\mp \lambda^{n_{\kappa + 2p +2}}w_{2p+2} \\ \pm \sigma^{n_{\kappa + 2p +2}}v_{2p+2} \\ \end{array} \right), \end{align*} whose first coordinate is \begin{multline*} - 2x_{k ,m + 2p+2}\sigma^{2n_{\kappa + 2p +2}} C_{2p+1}\lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}v_0 \\ \mp C_{2p}\lambda^{(n_{\kappa };n_{\kappa +2p+2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p+1})}w_0 , \end{multline*} and second coordinate is \begin{align*} \pm C_{2p+1} \lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p+2})}v_0 \end{align*} by the previous formula. On the other hand, it follows from Lemma~\ref{lem4}, \eqref{eq:0915c} and the monotonicity of $(n_l)_{l\in \mathbb N}$ that the absolute value of the first term of the first coordinate is bounded by \begin{align*} & \xi\lambda^{n_{\kappa + 2p +2}}|C_{2p+1}| \lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}|v_0|\\ & \quad \leq \xi K \lambda^{n_{\kappa + 2p +2}} |C_{2p+1}| \lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p +1})}|w_0| \\ & \quad = \frac{1}{3} |C_{2p+1}| \lambda^{(n_{\kappa };n_{\kappa +2p+2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p+1})}|w_0|. \end{align*} Hence, we can write $\boldsymbol{v}_{2p+3}$ as \[ \left( \begin{array}{c} v_{2p+3} \\ w_{2p+3} \\ \end{array} \right) =\left( \begin{array}{c} C_{2p+2}\lambda^{(n_{\kappa };n_{\kappa +2p+2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p+1})}w_0 \\ \pm C_{2p+1} \lambda^{(n_{\kappa +1};n_{\kappa +2p+1})}\sigma^{(n_{\kappa };n_{\kappa +2p+2})}v_0 \\ \end{array} \right), \] with a constant $C_{2p +2}$ satisfying that \begin{equation}\label{eq:0915d3} |C_{2p}|- \frac{1}{3} |C_{2p+1}| \leq |C_{2p+2}| \leq |C_{2p}|+\frac{1}{3} |C_{2p+1}|. 
\end{equation}
Finally, combining \eqref{eq:0915d4}, \eqref{eq:0915d2} and \eqref{eq:0915d3}, we get
\[
\frac{1}{2} = 1- \left(\frac{1}{3} + \frac{1}{3^2} +\cdots \right) \leq \vert C_j \vert \leq 1+ \left(\frac{1}{3} + \frac{1}{3^2} +\cdots \right) = \frac{3}{2}
\]
for any $j\geq 0$. This completes the proof of Lemma \ref{lem5}.
\end{proof}
Given two sequences $(a_p)_{p\geq 0}$ and $(b_p)_{p\geq 0}$ of positive numbers, if there exist constants $c_0,\ c_1>0$, independent of $p$, such that
\[c_0<\dfrac{a_p}{b_p}<c_1,\]
then we say that $a_p$ and $b_p$ are equivalent, denoted by $a_p\sim b_p$.
\begin{lem}\label{lem6}
For every $p\geq 0$, we have
\begin{align*}
|v_{2p}|\sim \lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa };n_{\kappa +2p-2})}&<\lambda^{(n_{\kappa };n_{\kappa +2p-2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}\sim |w_{2p}|,\\
|v_{2p+1}|\sim \lambda^{(n_{\kappa };n_{\kappa +2p})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}&<\lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}\sim |w_{2p+1}|.
\end{align*}
\end{lem}
\begin{proof}
The equivalence relations follow directly from Lemma~\ref{lem5}. Since $0<\lambda<1$ and $\sigma>1$,
\[
\lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}<\lambda^{(n_{\kappa };n_{\kappa +2p-2})},\quad \sigma^{(n_{\kappa };n_{\kappa +2p-2})}<\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})},
\]
which gives the former formula immediately. In order to prove the latter formula, it suffices to notice that
\[
\lambda^{(n_{\kappa };n_{\kappa +2p})}<\lambda^{(n_{\kappa +2};n_{\kappa +2p})}<\lambda^{(n_{\kappa +1};n_{\kappa +2p-1})},
\]
\[
\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})}<\sigma^{(n_{\kappa +2};n_{\kappa +2p})}<\sigma^{(n_{\kappa };n_{\kappa +2p})}.
\]
This completes the proof of Lemma \ref{lem6}.
\end{proof}
An immediate consequence of Lemma~\ref{lem6} is that
\[
\|\boldsymbol{v}_{2p}\|\sim \lambda^{(n_{\kappa };n_{\kappa +2p-2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})},
\]
\[
\|\boldsymbol{v}_{2p+1}\|\sim \lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa };n_{\kappa +2p})}.
\]
Since both $k$ and $m$ are even numbers, for every integer $p\geq 0$, we have
\[
|n_{\kappa +2p}-n_0\alpha^{\frac{\kappa}{2}+p}\beta^{\frac{\kappa }{2}+p}|\le 1,\quad |n_{\kappa +2p+1}-n_0\alpha^{\frac{\kappa }{2}+p+1}\beta^{\frac{\kappa }{2}+p}|\le 1.
\]
According to Lemma~\ref{lem6},
\begin{align*}
& \lim\limits_{p\to \infty} \dfrac{\log\|D(F_{\kappa +2p}\circ\cdots\circ F_{\kappa +1}\circ F_{\kappa })(\boldsymbol{x}_{k,m})\boldsymbol{v}_0\|}{(n_{\kappa }+2)+(n_{\kappa +1}+2)+\cdots+(n_{\kappa +2p}+2)}\\
& \quad = \lim\limits_{p\to\infty} \dfrac{\log\|\boldsymbol{v}_{2p+1}\|}{n_{\kappa }+n_{\kappa +1}+\cdots+n_{\kappa +2p}+O(p)}\\
& \quad = \lim\limits_{p\to \infty} \dfrac{\log\lambda^{(n_{\kappa +1};n_{\kappa +2p-1})}\sigma^{(n_{\kappa };n_{\kappa +2p})} +O(p)}{n_{\kappa }+n_{\kappa +1}+\cdots+n_{\kappa +2p}+O(p)}\\
& \quad = \lim\limits_{p\to \infty} \dfrac{(n_{\kappa +1}+n_{\kappa + 3}+\cdots+n_{\kappa +2p-1})\log\lambda+(n_{\kappa }+n_{\kappa +2}+\cdots+n_{\kappa +2p})\log\sigma +O(p)}{n_{\kappa }+n_{\kappa +1}+\cdots+n_{\kappa +2p}+O(p)}\\
&\quad = \lim\limits_{p\to \infty} \dfrac{n_0\alpha^{\frac{\kappa }{2}}\beta^{\frac{\kappa }{2}}[\alpha(1+\alpha\beta+\cdots+ (\alpha\beta)^{p-1})\log\lambda+(1+\alpha\beta+\cdots+(\alpha\beta)^p)\log\sigma]+O(p)}{n_0\alpha^{\frac{\kappa }{2}}\beta^{\frac{\kappa }{2}}[1+\alpha+\alpha\beta+\alpha^2\beta+\cdots+(\alpha\beta)^p] +O(p)}\\
& \quad = \dfrac{\log\lambda+\beta\log\sigma}{1+\beta},
\end{align*}
and
\begin{align*}
& \lim\limits_{p\to \infty} \dfrac{\log\|D(F_{\kappa +2p-1}\circ\cdots\circ F_{\kappa +1}\circ F_{\kappa })(\boldsymbol{x}_{k,m})\boldsymbol{v}_0\|}{(n_{\kappa }+2)+(n_{\kappa +1}+2)+\cdots+(n_{\kappa +2p-1}+2)}\\
&\quad = \lim\limits_{p\to\infty} \dfrac{\log\|\boldsymbol{v}_{2p}\|}{n_{\kappa }+n_{\kappa +1}+\cdots+n_{\kappa +2p-1}+O(p)}\\
&\quad = \lim\limits_{p\to \infty} \dfrac{\log\lambda^{(n_{\kappa };n_{\kappa +2p-2})}\sigma^{(n_{\kappa +1};n_{\kappa +2p-1})} +O(p)}{n_{\kappa }+n_{\kappa +1}+\cdots+n_{\kappa +2p-1}+O(p)}\\
&\quad = \lim\limits_{p\to \infty} \dfrac{(n_{\kappa }+n_{\kappa +2}+\cdots+n_{\kappa +2p-2})\log\lambda+(n_{\kappa +1}+n_{\kappa +3}+\cdots+n_{\kappa +2p-1})\log\sigma +O(p)}{n_{\kappa }+n_{\kappa +1}+\cdots+n_{\kappa +2p-1}+O(p)}\\
&\quad = \lim\limits_{p\to \infty} \dfrac{n_0\alpha^{\frac{\kappa}{2}}\beta^{\frac{\kappa}{2}}[(1+\alpha\beta+\cdots+ (\alpha\beta)^{p-1})\log\lambda+\alpha(1+\alpha\beta+\cdots +(\alpha\beta)^{p-1})\log\sigma]+O(p)}{n_0\alpha^{\frac{\kappa}{2}}\beta^{\frac{\kappa}{2}}[1+\alpha+\alpha\beta+\alpha^2\beta+\cdots+(\alpha\beta)^{p-1}+\alpha^p\beta^{p-1}]+O(p)}\\
&\quad = \dfrac{\log\lambda+\alpha\log\sigma}{1+\alpha}.
\end{align*}
Since
\[
\dfrac{\log\lambda+\beta\log\sigma}{1+\beta}\not=\dfrac{\log\lambda+\alpha\log\sigma}{1+\alpha},
\]
together with the remark at the end of Subsection \ref{s:4.1}, this completes the proof of the assertion on Lyapunov irregularity in Theorem \ref{thm:main}, where $U_f$ and $V_f$ are the interiors of $F_{\kappa -1} \circ F_{\kappa -2} \circ \cdots \circ F_{k}(U_{k,0})$ (under the coordinate translation) and $\{ (v_0, w_0) \mid K^{-1} \leq \frac{\vert v_0\vert }{\vert w_0\vert } \leq K\}$, respectively.
\subsection{Birkhoff (ir)regularity}\label{s:4.4}
To show Birkhoff (ir)regularity, as well as the existence of uncountably many such $f$ up to topological conjugacy in Theorem \ref{thm:main}, we need a more detailed version of Colli--Vargas' theorem, as follows. Let $g$ be the surface diffeomorphism of Theorem \ref{prop:0812} and
\[
\mathbb B_+^u:=g([-1,1]^2) \cap ([0,1]\times [-1,1]), \quad \mathbb B_-^u:=g([-1,1]^2) \cap ([-1,0]\times [-1,1]).
\]
For each $l\in \mathbb{N}$ and $\underline w=(w_{1}, w_2, \ldots ,w_{l}) \in \{ +, -\}^{l}$, we let
\begin{align*}
\mathbb{B}^{u}_{\underline w}:=\bigcap _{j=1}^l g^{-j+1} (\mathbb B_{w_j}^u), \quad \mathbb{G}^{u}_{\underline w}:=\mathbb{B}^{u}_{\underline w} \setminus \left(\mathbb{B}^{u}_{\underline w+}\cup \mathbb{B}^{u}_{\underline w-}\right),
\end{align*}
where $\underline w\pm =(w_1,\ldots ,w_l, \pm )\in \{+,-\}^{l+1}$.
\begin{thm}[\cite{CV2001}]\label{thm:0911b}
Let $g $ be the surface diffeomorphism with a robust homoclinic tangency given in Theorem \ref{prop:0812}. Take
\begin{itemize}
\item a $\mathcal C^r$-neighborhood $\mathcal O$ of $g$ with $2\leq r<\infty$,
\item an increasing sequence $(n_k^0)_{k\in \mathbb N}$ of integers satisfying $n_{k}^0 =O((1+\eta )^k) $ with some $\eta >0$,
\item a sequence $(\underline z _k^0)_{k\in \mathbb N}$ of codes with $\underline z_k^0 \in \{+,-\}^{n_k^0}$.
\end{itemize}
Then, one can find
\begin{itemize}
\item a diffeomorphism $f$ in $\mathcal O$ which coincides with $g$ on $\mathbb B_+^u \cup \mathbb B_-^u$,
\item a sequence of rectangles $(R_k)_{k\in \mathbb N}$,
\item increasing sequences $(\hat n_k)_{k\in \mathbb N}$, $(\hat m_k)_{k\in \mathbb N}$ of integers satisfying $\tilde n_k := \hat n_{k}+\hat m_{k+1} =O(k)$ and depending only on $\mathcal O$,
\item sequences $(\hat{\underline z} _k)_{k\in \mathbb N}$, $(\hat{ \underline w} _k)_{k\in \mathbb N}$ of codes with $\hat{\underline z}_k \in \{+,-\}^{\hat n_k}$, $\hat{\underline w}_k \in \{+,-\}^{\hat m_k}$
\end{itemize}
such that for each $k\in \mathbb N$, items (a), (b) in Theorem \ref{prop:0812} hold and
\begin{itemize}
\item[$\mathrm{(c)}$] $R_k \subset \mathbb G_{\underline z_k}^u$ for $\underline z_k=\hat{\underline z}_k \underline z_k ^0 [\hat{\underline w}_{k+1} ]^{-1} $, where $[\underline w]^{-1} = (w_{l},\ldots ,w_{2}, w_1)$ for each $\underline w =(w_1, w_2,\ldots ,w_l) \in \{ +,-\} ^l$, $l\in \mathbb N$.
\end{itemize}
\end{thm}
Fix a neighborhood $\mathcal O$ of $g$ and a sequence $(n_k^0)_{k\in \mathbb N}$ as given in \eqref{eq:0915f}. To indicate the dependence of $f$ and $(R_k)_{k\in \mathbb N}$ in Theorem \ref{thm:0911b} on $\boldsymbol{z} = (\underline z _k^0)_{k\in \mathbb N}$, we write them as $f_{\boldsymbol{z}}$ and $(R_{k,\boldsymbol{z}})_{k\in \mathbb N}$. We first apply Theorem \ref{thm:0911b} to the sequence $\boldsymbol{z} = (\underline z _k^0)_{k\in \mathbb N}$ given by
\[
\underline z_k^0=(+,+,\ldots ,+, z'_k), \quad z'_k\in \{ +, -\}
\]
for each $k\geq 1$. Then, it is straightforward to see from item (c) of Theorem \ref{thm:0911b} that for any $k\in \mathbb N$, continuous function $\varphi : M\to \mathbb R$ and $ \epsilon >0$, there exist integers $k_2$ and $L_0$ such that
\[
\sup _{\boldsymbol{x}\in R_k} \left\vert \varphi (f^{n}_{\boldsymbol{z} }(\boldsymbol{x} ) )- \varphi( \boldsymbol{p} _+) \right\vert <\epsilon
\]
whenever
\[
N(k,k') + L_0 \leq n \leq N(k,k'+1) - L_0
\]
with some $k'\geq k_2$, where $\boldsymbol{p} _+$ is the continuation for $f_{\boldsymbol{z} }$ of the saddle fixed point of $g$ corresponding to the point set $\mathbb B_{(+, +, \ldots )}^u$ and
\[
N(p,q):=\sum _{k=p}^q (n_k +2)
\]
for each $p ,q \in \mathbb N$ with $p\leq q$. Hence, it holds that
\[
\lim _{n\to\infty} \frac{1}{n}\sum_{j=0}^{n-1} \varphi ( f^j_{\boldsymbol{z} }(\boldsymbol{x} )) = \varphi (\boldsymbol{p} _+)
\]
for any $k\in \mathbb N$, $\boldsymbol{x} \in R_k$ and continuous function $\varphi : M\to \mathbb R$.
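Note that $N(k,k'+1)-N(k,k')=n_{k'+1}+2$, which by item (a) of Theorem \ref{prop:0812} is exactly the time a point of $R_{k'+1}$ takes to enter $R_{k'+2}$; each window $N(k,k')+L_0\leq n\leq N(k,k'+1)-L_0$ above thus covers, up to the buffer $L_0$, one full passage between consecutive rectangles, during which the orbit stays close to $\boldsymbol p_+$.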
Since the open set $U_{f_{\boldsymbol{z}}} $ consisting of Lyapunov irregular points constructed in the previous subsection is of the form $f_{\boldsymbol{z}} ^n (U_{k',0})$ with some positive integers $n$ and $k'$, it follows from the remark following \eqref{eq:0915e2} and item (a) of Theorem \ref{thm:0911b} that $U_{f_{\boldsymbol{z}}} \subset R_k$ with some $k$. This implies that any point in $U_{f_{\boldsymbol{z}}} $ is Birkhoff regular. Notice that there are uncountably many choices of $(z'_1, z'_2, \ldots ) $ in $\boldsymbol{z}$. On the other hand, if $\boldsymbol{z} =(\underline z_k^0)_{k\in \mathbb N}$ and $\boldsymbol{w} = (\underline w_k^0)_{k\in \mathbb N}$ are of the above form (in particular, $\underline w_k^0=(+,+,\ldots ,+, w'_k) \in \{ +,-\} ^{n_k^0}$ with $w'_k\in \{ +, -\}$) and $z_k' \neq w_k'$ for some $k$, then either $f_{\boldsymbol{z} }$ and $f_{\boldsymbol{w} }$ are not topologically conjugate, or $f_{\boldsymbol{z} }$ and $f_{\boldsymbol{w} }$ are topologically conjugate by a homeomorphism $h$ on $M$ with $h(R_{k, \boldsymbol{z}}) \cap R_{k, \boldsymbol{w}} = \emptyset$ for every $k$, because of item (c) of Theorem \ref{thm:0911b} and the fact that both $f_{\boldsymbol{z} }$ and $f_{\boldsymbol{w} }$ coincide with $g$ on $\mathbb B_+^u \cup \mathbb B_-^u$. Therefore, since there can exist at most countably many mutually disjoint nonempty open sets on $M$ ($M$ being a compact, hence second countable, manifold), we complete the proof of the claim for the uncountable set $\mathcal R$ in Theorem \ref{thm:main}. We next apply Theorem \ref{thm:0911b} to the sequence $\boldsymbol{z} = (\underline z _k^0)_{k\in \mathbb N}$ given by
\[
\underline z_k^0=
\begin{cases}
(+,+,\ldots ,+,z_k') \quad &\text{if $(2p -1)^2\leq k< (2p )^2$ with some $p$}\\
(-,-,\ldots ,-,z_k') \quad & \text{if $(2p )^2\leq k< (2p +1)^2$ with some $p$}
\end{cases}
\]
with $z'_k\in \{ +, -\}$ for each $k\geq 1$. Then, it follows from item (c) of Theorem \ref{thm:0911b} that for any $k\in \mathbb N$, continuous function $\varphi : M\to \mathbb R$ and $ \epsilon >0$, there exist integers $k_2$ and $L_0$ such that
\[
\sup _{\boldsymbol{x}\in R_k} \left\vert \varphi (f^{n}_{\boldsymbol{z} }(\boldsymbol{x} ) )- \varphi( \boldsymbol{p} _+) \right\vert <\epsilon
\]
whenever
\[
N(k,k') + L_0 \leq n \leq N(k,k'+1) - L_0, \quad \max\{ k_2 , (2p -1)^2\} \leq k'< (2p)^2
\]
with some $p$, and
\[
\sup _{\boldsymbol{x}\in R_k} \left\vert \varphi (f^{n}_{\boldsymbol{z} }(\boldsymbol{x} ) )- \varphi( \boldsymbol{p} _-) \right\vert <\epsilon
\]
whenever
\[
N(k,k') + L_0 \leq n \leq N(k,k'+1) - L_0, \quad \max\{ k_2 , (2p )^2\} \leq k'< (2p+1)^2
\]
with some $p$, where $\boldsymbol{p} _-$ is the continuation for $f_{\boldsymbol{z} }$ of the saddle fixed point of $g$ corresponding to the point set $\mathbb B_{(-, -, \ldots )}^u$. Hence, if we let
\[
\mathbf N(\ell ):= N(k,(\ell +1)^2) - N(k,\ell ^2) = \sum _{k=\ell ^2+1}^{(\ell +1)^2} (n_k +2),
\]
then for any $k\in \mathbb N$, $\boldsymbol{x}\in R_k$ and continuous function $\varphi : M\to \mathbb R$, we have
\[
\frac{1}{\mathbf N(2p-1)} \sum_{j= N(k,(2p -1)^2)}^{ N(k,(2p )^2)-1} \varphi (f^{j}_{\boldsymbol{z} }(\boldsymbol{x} ) ) = \varphi ( \boldsymbol{p} _+) +o(1)
\]
and
\[
\frac{1}{\mathbf N(2p)} \sum_{j= N(k,(2p )^2)}^{N(k,(2p+1 )^2)-1} \varphi (f^{j}_{\boldsymbol{z} }(\boldsymbol{x} ) ) = \varphi ( \boldsymbol{p} _-) +o(1).
\]
Since $\mathbf N(1 ) + \mathbf N(2 ) +\cdots + \mathbf N(\ell -1) =o(\mathbf N(\ell ) ) $, this implies that, with $\ell := \lceil \sqrt k\rceil $, which we assume to be an odd number for simplicity,
\begin{align*}
&\frac{1}{N(k,(2p +1)^2)}\sum_{j=0}^{N(k,(2p +1)^2)-1} \varphi (f^{j}_{\boldsymbol{z} }(\boldsymbol{x} ) ) \\
& \quad =\frac{1}{N(k,(2p +1)^2) -N(k, \ell ^2)}\sum_{j=N(k, \ell ^2)}^{N(k,(2p +1)^2)-1} \varphi (f^{j}_{\boldsymbol{z} } (\boldsymbol{x} ) ) +o(1) \\
& \quad = \frac{ \mathbf N(\ell ) + \mathbf N(\ell +2) + \cdots + \mathbf N(2p -1)}{\mathbf N(\ell ) +\mathbf N(\ell +1) + \cdots + \mathbf N(2p )} \varphi ( \boldsymbol{p} _+) + \frac{ \mathbf N(\ell +1) + \mathbf N(\ell +3) + \cdots + \mathbf N(2p)}{\mathbf N(\ell ) +\mathbf N(\ell +1) + \cdots + \mathbf N(2p)} \varphi ( \boldsymbol{p} _-) +o(1) \\
& \quad \to \varphi ( \boldsymbol{p} _-) \quad (p\to \infty ),
\end{align*}
because the last block $\mathbf N(2p)$ dominates the sum. Similarly we have
\[
\lim _{p\to \infty}\frac{1}{N(k,(2p )^2)}\sum_{j=0}^{N(k,(2p )^2)-1} \varphi (f^{j}_{\boldsymbol{z} }(\boldsymbol{x} ) ) = \varphi ( \boldsymbol{p} _+).
\]
That is, any point in $R_k$ is Birkhoff irregular. Therefore, repeating the argument for $\mathcal R$, we obtain the claim for the uncountable set $\mathcal I$ in Theorem \ref{thm:main}. This completes the proof of Theorem \ref{thm:main}.
\begin{rem}
The proof of Birkhoff (ir)regularity in this subsection essentially appeared in Colli--Vargas \cite{CV2001}. The difference is that our $(n_k^0)_{k\in \mathbb N}$ increases exponentially fast because of the requirement \eqref{eq:0915f}, while their $(n_k^0)_{k\in \mathbb N}$ is of order $O(k^2)$.
\end{rem}
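As a purely numerical illustration of the Lyapunov irregularity obtained in Subsection \ref{s:4.3} (with hypothetical values $\lambda =0.2$, $\sigma =2$, $\alpha =1.01$, $\beta =1.02$, chosen only so that $\lambda\sigma ^2<1$ and $1<\alpha <\beta$, not constants from the text), the two subsequential limits are indeed distinct:
\begin{verbatim}
import math

lam, sig = 0.2, 2.0           # hypothetical, with lam * sig**2 < 1
alpha, beta = 1.01, 1.02      # hypothetical, with 1 < alpha < beta

odd  = (math.log(lam) + beta  * math.log(sig)) / (1 + beta)   # along (v_{2p+1})
even = (math.log(lam) + alpha * math.log(sig)) / (1 + alpha)  # along (v_{2p})
print(odd, even)              # approx. -0.447 vs -0.452: two different limits
\end{verbatim}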
49f6d1ae1bf1700a76bffb42ac45cd50e003b73e
\section{Introduction}
Micro-RNAs (miRNA) were first discovered in the nematode \textit{C. elegans} \cite{ref2}, and subsequently found in practically all eukaryotes. Despite their small size (sequence length between 19 and 24 nucleotides), these single-strand, non-coding RNAs play a role in the regulation of genetic expression, through their capacity to hybridize with the 3'UTR of specific target mRNA (messenger RNA). Therefore, the specific function of each miRNA is strongly sequence-dependent, and the search for the associated targets is a non-trivial problem. As shown in a recent review \cite{ref3}, miRNAs are known to be associated with the normal development and function of the organism, but are also involved in diseases. Several lines of evidence highlight the putative implication of miRNA in the physiopathology of numerous cancers. This includes the miRNA dysregulations observed in tumors (over- or under-expression), which led to the concept of ``oncomiR'' \cite{ref4}, and to the use of miRNA for the classification of tumor origins \cite{ref5}. Other arguments rely on the fact that widely expressed oncoproteins (e.g. MYC) or tumor suppressors (e.g. P53) regulate the transcription of specific miRNA that, in turn, modify the translation of cancer-associated genes \cite{ref4}. More recent findings revealed the existence of circulating miRNA, especially in tumor-bearing animals or patients. It is likely that they derive from the dying cells of the tumor, or are actively excreted in exosomes \cite{ref6}. The attractiveness of extracellular miRNA as cancer biomarkers relies on their stability and their dysregulation in the diseased cells. Bound to Argonaute proteins, miRNAs are stable in the extracellular environment after release from cells, whether as vesicle-free ribonucleoprotein complexes or within membranous vesicles. The released miRNA can be detected and quantified as part of the ``miRNome'' of a biological fluid, such as plasma or serum. If the miRNA expression in a tumor is reflected in the circulation, invasive tissue biopsies could be replaced by straightforward assays of easily obtained blood products, and early warnings of tumorigenesis might be possible. Because of their short sequence and low concentration, miRNA detection is intrinsically difficult. Detection methods can be classified into three classes:
\begin{itemize}
\item[1-] PCR-based methods, requiring a preliminary reverse transcription step, which sets the minimal initial total miRNA amount required to $\sim$50 ng.
\item[2-] PCR-free methods, for instance based on electrochemical or optical (SPR) detection, which do not require the reverse transcription step and present limits of detection in the fg ($10^{-15}$ g) range. Similar sensitivity can be reached with optical, single-molecule methods, such as fluorescence correlation spectroscopy (FCS) \cite{ref7}. However, all these methods require very sophisticated instrumentation; for this reason, none of them has emerged as a practical alternative to more standard, PCR-based methods.
\item[3-] Next-generation sequencing-based methods, which allow the detection of every miRNA species expressed in the tissue or cells of interest, but also necessitate around 10--100 ng of total miRNA.
\end{itemize}
This work is based on a completely different approach, which in other contexts has been called `stochastic detection' \cite{stox}. In this approach, two liquid media are separated by a membrane (lipid bilayer) in which channels such as transmembrane proteins can insert.
Stochastic detection is based on the correlation between the presence of some analyte (here, miRNA) and the current across a single channel. A good example of stochastic detection is the method developed in \cite{ref1} for DNA sequencing: a single-stranded DNA going through a single nanopore channel modulates the current through it, and this modulation can be correlated with the DNA sequence. In the past few years, DNA-based nanostructures \cite{nano1,nano2,nano3} have been developed that mimic naturally occurring membrane proteins \cite{nano4,nano5,nano6}. These nanostructures can also interact with lipid membranes. As compared to protein channels, they can be easily modified in terms of geometry or functionalization. In this report, we adapt a recently published DNA construction \cite{pore1}, the conductivity of which can be modulated in the presence of specific oligonucleotides (DNA or RNA). The method therefore falls into the category of single-molecule, PCR-free methods. The nanopore structure in \cite{pore1} is formed by 15 DNA strands which fold into six intertwined helices 72nt long. Some of the DNA strands that form the nanopore are covalently linked to cholesterol moieties to allow insertion into lipid bilayers. The design of ref.~\cite{pore1} includes a mechanism to regulate the current across the nanopore. This mechanism is based on a single-stranded DNA (hereafter called the sentinel), linking two helices of the nanopore, which can be in two possible states. In the absence of an input signal (the complementary sequence of the sentinel), this strand is in a globular, floppy state: this is the closed state. When the input signal is present, it hybridizes to the sentinel, increasing its mechanical tension and pushing the two helices apart. The nanopore is then in an open state with a larger inner diameter and electric conductivity. The conductivity difference between closed and open states depends on the geometry of the nanopore. In ref.~\cite{pore1} we showed that a 30nt-long input sequence induces a measurable conductivity change. Unfortunately, this is not the case for a 22nt-long sequence, such as the ones we are targeting here. This can be easily understood: a double-stranded DNA of 22 base pairs forms a stiff double helix 7nm long, which is intended to exert a mechanical tension upon two helices that are separated by $\sim 6$nm. The present sensitivity of the method precludes the detection of such a small effect. Our goal here is to be able to detect single miRNA strands such as miR-21, a 22-nucleotide-long miRNA involved in cancer (thyroid, breast and colorectal) \cite{ref9}. To facilitate sample preparation and handling, we will use the DNA analog of miR-21, with sequence TAGCTTATCAGACTGATGTTGA. In the following, we describe a modification of the initial structure \cite{pore1} in which the regulation of conductivity is achieved by changing the effective length of the nanopore. We successively describe the nanopore structure and synthesis, a method to form stable lipid bilayers into which nanopores can insert, and finally a characterization of the electrical signature as a function of the presence of short oligonucleotides.
\section{Methods and results: nanopore structure}
The basic design of the nanopore is inspired by that in ref.~\cite{How13}. In short, six double helices are linked to form a barrel with hexagonal cross-section. Strands thread between double helices, forming Holliday-type crossovers which increase structural stability.
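As a rough sanity check on such a six-helix geometry, one can estimate the conductance of a cylinder with a 2nm-diameter lumen and a length of $\sim$24nm (72nt at $\sim$0.34nm per base) filled with 1M KCl, adding the standard access resistance at the two mouths. The sketch below is illustrative only: the bulk conductivity value and the bare-cylinder model are assumptions, not the geometrical model used later in the Results; it nevertheless reproduces the $\approx$1.3nS order of magnitude quoted there.
\begin{verbatim}
import math

# Illustrative cylinder-plus-access-resistance estimate (all numbers assumed).
kappa = 10.5     # S/m, approximate bulk conductivity of 1 M KCl
r = 1.0e-9       # m, lumen radius (2 nm diameter)
L = 24e-9        # m, channel length (~72 nt at ~0.34 nm per base)

R_channel = L / (kappa * math.pi * r ** 2)   # ohmic resistance of the lumen
R_access = 2.0 / (4.0 * kappa * r)           # Hall access resistance, two mouths
G = 1.0 / (R_channel + R_access)
print(f"G = {G * 1e9:.2f} nS")               # ~1.3 nS
\end{verbatim}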
Our goal is to obtain a nanotube composed of two halves linked by a hinge and a locking mechanism (cf. Fig. \ref{fig_schema}). To design the latter, we take inspiration from ref.~\cite{box}. In this work, the goal was to trigger the opening and closing of a 3D origami box. The authors used two locks, each including a stem-loop structure with an 8nt loop. Upon addition of the opening key, an oligonucleotide complementary to a subset of the lock, the lock opened. We use a similar mechanism here, as illustrated in Fig. \ref{fig_schema}. The nanopore is composed of two barrels. Each barrel is formed of six double helices arranged around a 2nm lumen with hexagonal cross-section. The staple design contains Holliday junction crossovers to increase the stability of the ensemble. The two barrels are linked by a hinge and a locking mechanism including a stem-loop structure. The stem is a 21nt double helix; the loop is 10nt long. In the closed state, the stem-loop effectively imposes a short distance ($\sim 2$nm) between two of the helices. The transition to the open state requires an input signal which can bind to the exposed nucleotides of the loop and open the stem through a strand displacement process. The stem-loop becomes a long single-stranded loop with a 21nt-long double-stranded section. This acts as an entropic spring, effectively increasing the distance between the two previously close helices. The length of the loop was optimized by computing (using Nupack \cite{nupack}) the percentage of open structures. Figure \ref{fig_oxdna} shows the results of two oxDNA \cite{oxDNA} simulations, where the nanopore was simulated in the closed and open states. The nanopore was assumed to be in solution; the cholesteryl modifications and any interaction with lipids were not taken into account. As shown in Fig. \ref{fig_oxdna}, thermal fluctuations push the nanopore configuration far from the ideal hexagonal arrangement. Still, these simulations give support to the idea that the opening of the stem-loop can significantly perturb the geometry of the nanopore even for input signals as short as miR-21. Also, the simulations suggest that the stem-loop structure, when attached to the nanopore, is reasonably stable against thermal fluctuations.
\begin{figure}
\includegraphics[scale=0.8]{cork_cartoon.png}
\caption{Schematic representation of the opening mechanism: each cylinder represents a double helix. The DNA nanopore is inserted into a lipid bilayer thanks to cholesterol modifications (orange ellipses). (a) Closed state: the stem-loop imposes a short distance between two of the helices. (b) Open state: upon addition of miR-21, the stem-loop unfolds, giving rise to a mixed single- and double-stranded linker which pushes the two halves apart.}
\label{fig_schema}
\end{figure}
\begin{figure}
\includegraphics[scale=0.8]{cork_and_corkmi22.png}
\caption{Schematic representation of the opening mechanism as modeled with the oxDNA software. In this illustration, each nucleotide is represented by two sites, one centered on the phosphate group, the other centered on the nucleobase. Strands that form the hinge or the stem-loop locking mechanism are in red. The input signal is in blue. (a) Nanopore in the absence of input signal. (b) Nanopore hybridized to the input signal.}
\label{fig_oxdna}
\end{figure}
To ascertain the opening mechanism and provide a proof of the detectability of the associated shape modification, we first used a fluorophore--quencher couple to monitor the FRET efficiency in the presence of the signal nucleotide.
Two of the nanopore strands were modified with Cy5 and BHQ2, respectively. In the absence of input signal, the nanopore should be in the closed state, in which BHQ2 quenches the fluorescence emission of Cy5. Adding miR-21 drastically increases the fluorescence recorded at 660nm, as can be seen in Fig. \ref{fig_fluo}. The same figure shows that the addition of two random sequences, 32nt long, had no effect on the nanopore opening.
\begin{figure}
\includegraphics[scale=0.8]{fig_fluo.png}
\caption{Fluorescence (660nm) recorded as a function of time for three different samples, all of them containing a 100nM solution of the nanopore modified with a Cy5-BHQ2 couple. Before t=600s, each sample only contains the nanopore. At t=600s, an input signal was added. Blue: addition of miR-21. Orange and green: addition of two random sequences, 32nt long.}
\label{fig_fluo}
\end{figure}
DNA nanopore structures can interact with lipid bilayers when modified with hydrophobic moieties. In the pioneering work of Simmel and colleagues \cite{Simm12}, the authors showed by TEM imaging how a large origami structure, modified with cholesteryl (cholesterol attached to the deoxyribose via a six-carbon spacer), was able to insert into a lipid vesicle. Subsequently, several teams also showed how similar structures could interact with locally planar bilayers by recording the current across the bilayers. The interest of this configuration is the possibility to detect the insertion of single nanopores. For the planar configuration, two main options can be distinguished. The formation of a black lipid membrane has been used in refs.~\cite{How13, black}. In this configuration, two compartments are separated by a hydrophobic wall with a tiny hole. Painting lipids around the hole, then hydrating the system, leads to the formation of a lipid bilayer. Alternatively, the so-called droplet interface bilayer (DIB) \cite{bayley} configuration (Fig. \ref{fig_dib}) deals with two aqueous droplets immersed in an oil bath containing lipids in solution. When the two droplets are not in contact, a lipid monolayer forms around each droplet. The position of each droplet can be monitored through electrodes connected to micromanipulators. If the two droplets are brought into contact, a lipid bilayer quickly forms at the intersection. As previously noted \cite{Dibcapa}, the stability of this interface is remarkable, although it strongly depends on lipid and oil composition. We previously used \cite{pore1} a `patch-clamp' approach to obtain a bilayer, patching small pieces of giant unilamellar vesicles with a micropipette. As compared to this latter method, the use of DIBs is a much more robust approach, with the disadvantage that not all lipid compositions can be explored. Further details are given in the SI. We considered two approaches to enhance nanopore insertion into the bilayer. In the first, four strands were elongated with a common sequence, 15nt long, to which a complementary oligonucleotide modified with cholesteryl could bind. Experimentally, the insertion frequency of these structures into DIBs was very low. The second, more successful method enhanced the insertion by adding two biotin modifications on one side of the nanopore (cf. Fig. \ref{fig_schema}) in addition to the four cholesteryl modifications. In this second strategy, the DIB system was asymmetric. One of the droplets, connected to ground, contained the nanopore. The other droplet, connected to the probe electrode, contained a solution of streptavidin.
The biotin-streptavidin interaction is a classical biological tool to bind two partners. Our intuition was that, in the event of a nanopore insertion, the biotins would bind to streptavidin, thus maintaining nanopores in close proximity to the bilayer. We also hypothesized that any transport of streptavidin across the DIB bilayer would be negligible.
\begin{figure}
\includegraphics[scale=0.8]{dib.png}
\caption{Illustration of the droplet interface bilayer (DIB) method. Two aqueous droplets are immersed in an oil bath (yellow) containing lipids in solution. Droplet position is monitored through electrodes (black segments). (a) Before contact, a monolayer forms at the surface of each droplet. After contact (b), a bilayer forms. Nanopores are contained in only one of the droplets (red rectangles).}
\label{fig_dib}
\end{figure}
A typical experiment started with the insertion of agarose-coated electrodes into each of the droplets. Then, the droplets were brought into contact by adjusting the electrode positions. A lipid bilayer has a well defined capacitive response to a short (10ms) 10mV pulse, so its formation could be easily monitored; usually it took less than one minute after droplet contact. After stabilization of the bilayer's resistance, we alternately imposed positive (30mV) and negative (-30mV) potentials between the ground and control electrodes, with a long period (60s). During the positive potential phase, nanopores were expected to be driven towards the bilayer. Correspondingly, a negative potential would tend to remove them. Figure \ref{fig_record} illustrates two typical situations we encountered. Current-time recordings showed step-like profiles with an essentially stable baseline. We observed several characteristic time intervals between jumps, with no evident link to the experimental conditions. Fast transitions (jump frequency around 100Hz) were much more frequent than the slower transitions (jump frequency around 10Hz) displayed in panels \ref{fig_record}(a) and \ref{fig_record}(b). The average success rate, defined as the number of current recordings where jumps could be observed divided by the total number of current recordings, was rather low (less than 10\%). Unsuccessful recordings usually gave a flat signal (no jumps), or the interface was unstable, leading to data difficult to interpret (with a vast majority of flat signals). In the absence of nanopores, no jumps were observed at all. To count the jumps and measure their amplitude, we used a Hidden Markov Model (HMM) \cite{hmm}. Given a time recording, the HMM approximates it by a sequence of $N_s$ states, the values of which are optimized to minimize the difference between the sequence and the given time series. The number $N_s$ is a free parameter of the model. As shown in Fig. \ref{fig_record}, the HMM approximation seems to be well adapted to the current recordings, as a well-defined baseline can easily be found and the number $N_s$ appears to be well defined.
\begin{figure}
\includegraphics[scale=0.8]{fig_recor.png}
\caption{Time recording for a typical DIB experiment where the voltage is varied between -30mV and 30mV. (a)(b) Slow dynamics regime. (c)(d) Fast dynamics regime. (b) and (d) are zoomed images of (a) and (c), respectively.}
\label{fig_record}
\end{figure}
A possible interpretation of the current recordings is as follows. Each time a nanopore inserts into the bilayer, the membrane resistance decreases by a fixed amount which depends mainly on the geometry of the nanopore.
A simple estimate of the nanopore's conductance in its closed state, using a geometrical model which ignores possible interactions between the cations and the nanopore's interior, yields 1.3nS. In the absence of miR-21, we found a conductance distribution centered around this value, with a secondary peak close to 1.6nS, as shown in the histogram of Fig. \ref{fig_histo}. Previous reports on similar structures \cite{Howopenclose} also yield values close to 1.6nS. As shown by numerical simulations \cite{aksi}, transport across DNA nanopores is not only through the lumen: cations can flow along the nanopore's outer surface or {\em through the `gaps' in the DNA structure} \cite{aksi}. This would explain the fact that experimental values can be larger than theoretical ones. When the nanopore was incubated with the input signal, the stem-loop changed conformation, as explained above and demonstrated by the coarse-grained simulations. Experimentally, this translated into the appearance of a second maximum in the distribution of conductances. Its value ($2.8 \pm 0.2$ nS) is less than twice the value of the closed state. This is to be expected, as the open stem-loop pushes apart the two halves and at the same time hinders the entrance of the nanopore. From the present experiments, it is difficult to further elucidate the insertion mechanism of the nanopores. A possible interpretation of the existence of transient states could be as follows: nanopores lie on one side of the bilayer, inserting roughly half of the cholesterol-modified strands. This metastable state has an energetic penalty due to the exposure of cholesterol to water. An alternative metastable state corresponds to a completely inserted nanopore, where all the cholesterol moieties are in contact with the bilayer's interior and, at the same time, the hydrophilic outer surface of the nanopore is also in contact with it, unless a toroidal rearrangement of lipid heads (as sketched in Fig. \ref{fig_schema}) takes place. Interaction with streptavidin probably lowers the energetic barrier between these two metastable states, which would explain the fast dynamics observed in many recordings. Slow insertion rates could then correspond to insertion in the absence of streptavidin.
\begin{figure}
\includegraphics[scale=0.8]{fig_histo.png}
\caption{Conductance histogram. (a) [mi22] = 0nM. (b) [mi22] = 50nM.}
\label{fig_histo}
\end{figure}
\section{Conclusions}
Sensing of short oligonucleotide sequences is potentially an important step in the early detection of diseases such as cancer. Developing portable, direct methods to perform such detection could considerably generalize the use of microRNA biomarkers. Compared to other single-molecule detection procedures, nanopore-based detection can benefit from the miniaturization techniques used in semiconductor technology, which should eventually provide a compact, easy-to-use apparatus. In this report, we characterized a DNA nanopore structure which we showed was able to change conformation upon binding a DNA analogue of the miR-21 microRNA. The conformational change could be characterized by fluorescence and electric recordings. In doing so, we have shown that detection of single microRNAs is feasible when using the DIB configuration to generate stable and reproducible bilayers. The major difficulty which remains to be solved is the low rate of insertion into bilayers.
The possibility to detect low concentrations of miRNA depends on the feasibility of long electric recordings: the lower the miRNA concentration, the lower the number of possible opening events. Reliable miRNA concentration measurements will therefore require not only parallel measurements but also a reasonable success rate in the detection of nanopore insertions. This seems to be a major hurdle in the design of DNA-based nanopores. A possibility explored by other groups was to increase the number of hydrophobic moieties attached to the nanopore. This is only possible by embedding the nanopore structure into a larger platform, as was done in \cite{LarSim}.
\section{Materials and methods}
\subsection{Fabrication of DNA nanopores}
DNA nanopores were fabricated in a one-pot reaction by stepwise cooling of an equimolar mixture of staples (1 $\mu$M) in folding buffer (Tris-acetate-EDTA buffer, 20 mM MgCl$_2$) from 85 to 20 $^{\circ}$C in 3h. Staple strands were designed using the caDNAno software. Before running DIB experiments, DNA nanopores were further diluted in a 1M KCl buffer containing 0.05\% OPOE (Sigma). Cholesteryl-functionalized DNA pores were produced by incubating the fully folded pores with cholesteryl-modified strands (Eurogentec) for 45 min with a 5-fold excess. Before incubation, the cholesteryl-modified oligonucleotides were heated to 60 $^{\circ}$C for 45 min to avoid aggregation. Streptavidin (Sigma-Aldrich) was used without any further purification.
\subsection{Lipid preparation}
POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) and DPhPC (1,2-diphytanoyl-sn-glycero-3-phosphocholine) were purchased from Avanti Lipids; hexadecane and silicone oil from Sigma. Lipids were stored in chloroform at a concentration of 10 mg ml$^{-1}$. Before dissolution into a 7:3 hexadecane:silicone oil mixture, the chloroform was evaporated in a vacuum desiccator for at least 1h. Dissolution of the lipids into oil could require mechanical stirring.
\subsection{Electric recording}
Two 200pL droplets were deposited in a 60$\mu$L well machined in poly(methyl methacrylate) (PMMA). The tips of two silver electrodes, 100 $\mu$m in diameter, were chlorinated overnight, then coated with agarose (2\%). The agarose coating facilitated the insertion of the electrodes inside the droplets. The electrodes were actuated through micromanipulators and connected to an electronic current amplifier (HEKA and Intan). Data were acquired at 5kHz.
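As a complement to the jump analysis described in the Results section, the following minimal sketch illustrates the kind of HMM state-fitting we refer to. It relies on the hmmlearn package as a stand-in (our actual implementation is not detailed here), and the number of states $N_s$ is a free parameter, as in the text.
\begin{verbatim}
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_states(current, n_states):
    # Approximate a current trace by a step-like sequence of n_states levels.
    X = np.asarray(current, dtype=float).reshape(-1, 1)
    model = GaussianHMM(n_components=n_states, n_iter=200)
    model.fit(X)
    states = model.predict(X)        # most likely state at each sample
    levels = model.means_.ravel()    # fitted current level of each state
    return levels[states]

# Synthetic example: a flat baseline with one insertion-like jump plus noise.
rng = np.random.default_rng(0)
trace = np.concatenate([np.zeros(500), 40.0 * np.ones(300), np.zeros(200)])
trace += rng.normal(0.0, 3.0, trace.size)
approx = fit_states(trace, n_states=2)   # recovers the two current levels
\end{verbatim}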
\ifCLASSOPTIONcompsoc \section{Introduction}\label{sec:introduction} \else \section{Introduction} \label{sec:introduction} \fi \IEEEPARstart{M}{edical} images of high quality are critical for early, fast and accurate diagnosis in current clinical processes. Typically, the spatial resolution of medical images is degraded by factors such as the imaging modality and the acquisition time. To recover the degraded resolution, super-resolution (SR) methods are widely adopted on digital images as post-processing steps, avoiding additional scanning costs \cite{one}. Image super-resolution is one of the prominent low-level vision problems in the computer vision domain. The aim of image SR is to reconstruct a high resolution (HR) image from a corresponding low resolution (LR) image. SR techniques are applied to either a single image or multiple images, depending on the input and output criteria. In this work, we study Single-Image Super-Resolution (SISR) for the application of medical images. In contrast to classical SISR applications, medical imaging SISR is particularly demanding, as it is often followed by precise tasks of segmentation, classification or diagnosis \cite{two} \cite{deeba2020sparse}. Thus, it becomes imperative to develop methods which not only preserve sensitive information but also magnify the structures of interest efficiently. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{Figures/Figure_1.pdf} \caption{A quantitative comparison of the super-resolution capabilities of several state-of-the-art methods. In terms of PSNR, our proposed technique, Multimodal-Boost, exceeds the competition. The horizontal axis shows FLOPs on a logarithmic scale for easier observation, and PSNR is represented on the vertical axis. The circular area represents the number of parameters. A green circle near the top is regarded as a superior SR network.} \centering\label{Fig1} \end{figure} Traditional SR techniques, whether interpolation-based \cite{three} or reconstruction-based \cite{four}, do not suffice for medical image SISR tasks. Interpolation methods such as linear, bicubic \cite{three} and Lanczos \cite{duchon1979lanczos} often fail to reconstruct the high-frequency information, which eventually results in blurred, over-smoothed edges. In contrast, reconstruction-based algorithms utilize prior local \cite{tai2010super}, global \cite{yang2010exploiting}, and sparse \cite{dong2016hyperspectral} knowledge to efficiently reconstruct the HR image. However, these methods cannot efficiently simulate the non-linear transformation from LR space to HR space in dynamic scenes, hence the output SR image is distorted. Deep learning techniques have recently shown notable performance on various vision tasks such as image dehazing, object detection, and activity recognition \cite{deebanovel}. Convolutional Neural Networks (CNNs) and particularly Generative Adversarial Networks (GANs) have shown remarkable performance on various SR applications such as remote sensing and medical imaging. The Deep Convolutional Neural Network for Super-Resolution (SRCNN) \cite{dong2014learning} laid the foundation of deep learning (DL) based SR methods, followed by a number of techniques exploiting the powerful capabilities of CNNs.
Due to the phenomenal performance of residual blocks and dense blocks, several works, such as Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) \cite{lim2017enhanced}, Very Deep Convolutional Networks (VDSR) \cite{kim2016accurate}, Fast Super-Resolution with Cascading Residual Network (RFASR) \cite{ahn2018fast} and Deep Back-Projection Networks for Super-Resolution (DBPN) \cite{haris2018deep}, were proposed for the SR problem. These end-to-end models are deep and complex, thus requiring a large number of LR and corresponding HR pairs for training \cite{wang2020deep}. Moreover, these deep networks could not provide photo-realistic results despite high Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR) values \cite{two}. Thus, GAN-based networks accompanied by attention mechanisms \cite{xu2021multi} were introduced to produce perceptually realistic outputs closer to the ground-truth HR image. However, the aforementioned deep learning based SR techniques have shown sub-optimal performance on medical images, particularly across different types of modalities. Several medical imaging analysis tasks, such as tumor segmentation \cite{zawish2018brain} and lesion detection \cite{zhu2019lesion}, are essentially hindered by substandard HR output with blurred edges. Peng Yi et al. \cite{yi2019multi} proposed a new method for video super-resolution based on a multi-temporal ultra-dense memory network; previous video SR methods mostly use a single memory module and a single-channel structure, which does not fully capture the inter-frame correlations specific to video, and their network addresses this. Yiqun Mei et al. \cite{mei2020image} developed an efficient SR method based on a Cross-Scale Non-Local Attention Module (CS-NL), incorporating recurrent neural networks to handle an inherent property of images: the cross-scale correlation of features. Afterward, Kui Jiang et al. \cite{jiang2020hierarchical} developed a hierarchical dense recursive network for super-resolution, in which dense hierarchical connections enrich the feature representations, while a reasonable design and parameter sharing keep the SR model lightweight. In this work, we aim to overcome the aforementioned problems in multimodal medical imaging SISR by proposing a combination of a multi-attention GAN with wavelet subbands. The wavelet transform (WT) \cite{abbate1995wavelet} has a unique ability to extract useful multi-scale features with the help of sparse subbands. Since the WT combined with GANs has been widely adopted for several applications, including remote sensing for spatio-temporal SR \cite{dharejo2021twist} and dehazing \cite{fu2021dw}, it is natural to exploit its potential for medical imaging SR. In the first step, we derive the four WT subbands of the given LR image using a 2D discrete wavelet transform (DWT) from the Haar wavelet family, which replaces the LR input with a lower-dimensional input space that preserves the key information. These subbands are then fed into the proposed GAN, involving multiple attention and upsample blocks, which produces an improved HR wavelet component for each corresponding subband. Lastly, the HR image is reconstructed using the two-dimensional inverse discrete wavelet transform (2D-IDWT).
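For concreteness, the level-1 Haar decomposition and the exact 2D-IDWT reconstruction described above can be sketched with PyWavelets; the random array standing in for the interpolated LR input is an assumption of the sketch.
\begin{verbatim}
import numpy as np
import pywt

# Placeholder for the (interpolated) low-resolution input image.
lr_image = np.random.rand(256, 256).astype(np.float32)

# Level-1 2D-DWT with the Haar wavelet: one approximation subband (LL)
# and three detail subbands (LH, HL, HH), each at half the resolution.
LL, (LH, HL, HH) = pywt.dwt2(lr_image, "haar")

# In the proposed pipeline the four subbands would be fed to the GAN,
# which predicts their HR counterparts; here we simply invert the
# transform to verify that the 2D-IDWT reconstructs the input exactly.
reconstructed = pywt.idwt2((LL, (LH, HL, HH)), "haar")
assert np.allclose(lr_image, reconstructed, atol=1e-5)
\end{verbatim}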
Inspired by \cite{johnson2016perceptual}, we use a perceptual loss network along with the GAN to further boost the performance of our proposed method in both qualitative and quantitative terms. Moreover, a challenging and time-consuming task in medical imaging SR is to modify models designed for one modality to fit a new modality. We overcome this by introducing a transfer learning technique which efficiently adapts to unseen data with the help of pre-learned knowledge. As a result, it becomes feasible to adapt to datasets of different modalities while maintaining reliable results. In summary, the contributions of our methodology are the following: \begin{itemize} \item {We propose a novel wavelet-based multimodal and multi-attention GAN framework for medical imaging SR called Multimodal-Boost. Multimodal-Boost can learn an end-to-end residual mapping between low- and high-resolution wavelet subbands. The key advantage of utilizing the wavelet transform is that it helps restore rich image content by accurately extracting missing details: the WT provides high-frequency information in a variety of directions (horizontal, vertical, and diagonal edges).\\} \item {We trained the perceptual network on VGG-16 to improve the SR results, as shown in Figure \ref{Fig7}. Using the perceptual loss, we boost the network and improve super-resolution performance. We use a transfer learning technique to deal with inadequate training data for super-resolution medical applications. Our algorithm is trained on the DIV2K \cite{timofte2017ntire} dataset prior to being evaluated on multimodal medical datasets.\\} \item {Unlike many previous deep neural network (DNN) medical image SR approaches \cite{zhang2017deep} \cite{chen2021super} \cite{deeba2020wavelet}\cite{romano2016raisr}, our method employs multimodality data in a single model, which reduces the cost of adding new SR objectives such as detection and classification tasks. The results reveal that the proposed method outperforms existing methods in objective and subjective evaluation, as shown in Figure \ref{Fig1}}. \end{itemize} The rest of the paper is organized as follows. Section II provides an overview of the related works. The proposed Multimodal-Boost approach is detailed in Section III. Section IV contains the results and discussion. Finally, in Section V, the conclusion is given. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{Figures/Figure_2.pdf} \caption{Multimodal-Boost follows a three-part methodology: decomposition, prediction, and reconstruction. In the decomposition part, the interpolated version of the LR image is broken down into four LR wavelet components. In the prediction part, the developed multi-attention GAN SR network estimates the HR wavelet component from its LR counterpart. The reconstruction part uses the 2D-IDWT to generate the super-resolved image. The two forms of attention blocks are multi-attention and upsample-attention.} \centering\label{Fig2} \end{figure} \section{Related Work} SR is undoubtedly a classical low-level vision problem, which has been widely studied with emerging deep learning based solutions for a variety of applications. The problem of SR can be categorised into either single image SR or multiple image SR; our focus in this paper is on single image SR. Thus, in this section we review the state-of-the-art SR methods for both general and medical imaging applications.
In the domain of DL-based SR, SRCNN by Dong et al. \cite{dong2014learning} was the very first technique incorporating deep CNNs for the SR task. Following that, several works were proposed with a typical pipeline of feature extraction, non-linear mapping and HR image reconstruction. However, such deep and complex architectures failed to extract the important multi-level features of a given LR image. Liang et al. \cite{liang2016incorporating} identified the importance of edge-based features for enhancing the SR task, using Sobel edge priors of the LR input for training a deep SR model. Later, VDSR \cite{kim2016accurate}, EDSR \cite{lim2017enhanced}, and RFASR \cite{ahn2018fast} were proposed, using very deep networks involving residual skip connections and residual feature aggregation techniques to enhance the SR output. However, these methods favour SSIM and PSNR at the expense of visual perception, which results in over-smoothed SR output. Ledig et al. \cite{ledig2017photo} improved the visual perception in the SR problem by proposing a GAN-based network, namely SRGAN, combining the adversarial and content losses of the generated HR output. Moreover, Meta-SR \cite{hu2019meta} and DBPN \cite{haris2018deep} achieved improved results, where the former proposed a solution for high-magnification and arbitrary-magnification SR and the latter used a deep back-projection network to generate clear HR output. While all these methods work well on natural datasets, their performance is not admissible for medical imaging tasks. The methods based on pixel-space optimization \cite{dong2014learning,lim2017enhanced,ahn2018fast} generate a smooth output which lacks important information, while the methods based on feature-space optimization \cite{ledig2017photo} could not generate sufficiently realistic outputs. For this reason, these methods are not well suited for a critical problem like medical imaging SR. Although there exist several studies tackling medical imaging SR with techniques such as cycle GANs \cite{liu2021perception}, U-Nets \cite{park2018computed}, 3D convolutions \cite{chen2018efficient} and attention-based networks \cite{he2020super}, none of them achieved clinically acceptable performance. Moreover, most of these works are suitable only for a single modality, and their performance degrades when doing inference on a new modality's dataset. Therefore, in this paper we address the shortcomings of the above works by utilizing the inherent capabilities of discrete wavelet analysis to embed the LR input image, which is then provided to the multi-attention and upsample blocks of the proposed GAN network to produce the corresponding HR output. It can be seen from the results that our model efficiently utilizes the transfer learning technique to produce clear HR output on multiple medical imaging modalities. \section{METHODOLOGY} \subsection{SR Based on the DWT} For low-level vision tasks like image denoising and super-resolution, choosing the proper wavelet can be difficult. Medical diagnostics, in particular, demands HR images with improved contextual features to provide better patient care. The WT stores detailed information about an image in many orientations, making the most of it. At each level of decomposition, the 2D-DWT produces four subbands that correspond to distinct frequency components, referred to as the approximation, horizontal, vertical, and diagonal subbands, which contain the complete edge information.
Each subband contains image data with distinct characteristics that are important enough to offer precise image features. In practice, the DWT passes the input signal through the low-pass filter $L(e)$ and the high-pass filter $H(e)$; the resulting approximation and detail coefficients are then downsampled by 2. For the Haar wavelet, $L(e)$ and $H(e)$ are defined as: \begin{equation} L(e)= \begin{cases} 1, & e=0,1 \\ 0, & otherwise\\ \end{cases} \quad H(e)= \begin{cases} 1, & e= 0 \\ -1, & e= 1\\ 0, & otherwise\\ \end{cases} \end{equation} \begin{figure*}[ht] \includegraphics[width=1\textwidth]{Figures/Figure_3.pdf} \caption{Level-1 decomposition of a CT image. The 2D-DWT extracts four subbands ($\textbf{CT}_{LL}$, $\textbf{CT}_{LH}$, $\textbf{CT}_{HL}$, and $\textbf{CT}_{HH}$) from the input CT modality image, which are subsequently fed to the prediction network for training.} \centering\label{Fig3} \end{figure*} Consider an image $I(x,y)$, where $x$ and $y$ index the pixel rows and columns. The 2D-DWT represents it by four subbands: the approximation (\textbf{LL}) and the horizontal, vertical, and diagonal detail subbands (\textbf{LH}, \textbf{HL}, \textbf{HH}). When we apply the level-1 2D-DWT to an LL input image to predict the LH, HL, and HH subbands, we obtain the missing information features of the LL image, as shown in Figure \ref{Fig3}. To obtain SR results, we employ the 2D-IDWT to recover the missing image feature information. For the Haar wavelet, the 2D-IDWT coefficients can be computed as follows: \begin{equation} \begin{cases} A=a+b+c+d \\ B=a-b+c-d \\ C=a+b-c-d \\ D=a-b-c+d \end{cases} \end{equation} where $A$, $B$, $C$, $D$ and $a$, $b$, $c$, $d$ are the subband pixel values. Interpolation-based SR approaches are limited in their capacity to reconstruct information appropriately: the result is not accurate because the high-frequency information cannot be adequately recovered during SR. To improve super-resolved image performance, edges must be retained. The DWT is employed to keep the image edge features and texture details in the high-frequency subbands; the 2D-DWT decomposes the $I_L$ image into the \textbf{LL}, \textbf{LH}, \textbf{HL}, \textbf{HH} subbands. Conventional DWT-based SR approaches combine the DWT with non-DL SR methods \cite{dharejo2021twist}. Let $I_G$ denote the ground-truth HR image of size $m \times n$, let $I_L$ denote the LR image obtained from $I_G$ by a scale factor $s$, and let $I_{LR}$ represent the up-scaled version of $I_L$, of size $m \times n$, reconstructed via bicubic interpolation. \begin{enumerate} \item The four different subbands, i.e., $\textbf{CT}_{LL}$, $\textbf{CT}_{LH}$, $\textbf{CT}_{HL}$, and $\textbf{CT}_{HH}$, are obtained from the LR input image $\textbf{CT}_{LR}$, as shown in Figure \ref{Fig2}. \item The prediction network (the multi-attention GAN) estimates the corresponding HR wavelet components. \item Finally, the HR image $\textbf{CT}_{HR}$ is reconstructed using the inverse discrete wavelet transform (2D-IDWT). \end{enumerate} \begin{figure*}[ht] \includegraphics[width=1 \textwidth]{Figures/Figure_4.pdf} \caption{Proposed GAN network with multiple attention and upsample blocks.} \centering\label{Fig4} \end{figure*} \subsection{Multimodal Multi-Attention GAN} A generator and a discriminator are the two fundamental components of a GAN. Samples $z$ from a prior noise distribution $\textbf{p}_{noise}$ are fed into the generator $G$, which maps them to the data space, inducing a model distribution $\textbf{p}_{model}$ via $\hat{x}=G(z)$.
The discriminator is a binary classifier whose function is to distinguish real data from generated samples: $D(x)=1$ for $x\sim P_{data}$ and $D(\hat{x})=0$ for generated samples $\hat{x}$. Convolutional layers are featured in almost all GAN-based image restoration models \cite{salimans2016improved}\cite{creswell2018generative}. GANs with convolutional layers evaluate information in a local neighborhood, while modeling long-range relationships in images with convolutions alone is computationally expensive. Our proposed approach uses the DWT to generate wavelet subbands to improve spatial resolution (super-resolution). Self-attention is introduced into the GAN framework, allowing the generator and the discriminator to efficiently describe interactions between widely separated spatial regions, as shown in Figure \ref{Fig4}. To calculate the attention blocks, the image features from the previous hidden layer $x\in \mathbf{R} ^{C\times N} $ are transformed into the feature spaces $\textbf{f}$ and $\textbf{g}$, where $f(x)=W_f x$ and $g(x)=W_g x$: \begin{equation} \beta_{i,j}= \frac{\exp(s_{i,j})} {\sum_{i=1}^N \exp(s_{i,j})}, \quad s_{i,j} = {f(x_i)^T} {g(x_j)} \end{equation} When synthesizing the $\textbf{jth}$ region, $\beta_{i,j}$ shows how much attention the model pays to the $\textbf{ith}$ location, where $C$ is the number of channels and $N$ is the number of feature locations from the previous hidden layer. The output of the attention layer can be expressed as $o$=$(o_1,o_2,…,o_j,…,o_N )$ $\in \mathbf{R}^{C \times N}$, where \begin{equation} o_j= \mathcal {V}\Big(\sum _{i=1} ^N \beta_{i,j} h(x_i)\Big), \quad h(x_i)=W_h x_i, \quad \mathcal {V}(x_i)=W_v x_i \end{equation} The learned weight matrices $\textbf{W}_g$ $\in \mathbf{R}^ {L \times C}$, $\textbf{W}_f$ $\in \mathbf{R}^ {L \times C}$, $\textbf{W}_h$ $\in \mathbf{R}^ {L \times C}$, and $\textbf{W}_v$ $\in \mathbf{R}^ {L \times C}$ are implemented as $1\times 1$ convolutions. We tried varied channel numbers $L=C/k$ with $k=1,2,4,8$ in our experiments; after a few training epochs, we chose $k=8$ (i.e., $L= \frac{C}{8}$) and found no significant performance loss. We also add back the input feature map after multiplying the output of the attention layer by a scale parameter. As a result, the final output is: \begin{equation} y_i= \alpha o_i + x_i \end{equation} where $\alpha$ is a scalar parameter initially set to 0, allowing the network to depend on local inputs first and then progressively learn to give more weight to non-local evidence. The attention block used in our method is shown in Figure \ref{Fig5}(a). Upsample attention blocks, which improve the resolution of CT images from wavelet subband input features, are the other type of attention block employed in our prediction part. The spatial resolution of the feature map is further improved by performing local neighborhood interpolation on the wavelet subband input features. These are fed to a convolution with leaky ReLU, and the final feature map of the upsample attention block, $f_{upsample}$ $\in \mathbf{R}^ {L \times C}$, can be expressed as: \begin{equation} F_{map}= Sigmoid(Conv(1\times 1))\times f_{upsample} \end{equation} $F_{map}$ refers to the output of the pixel attention module and $\textbf{(Conv(1×1))}$ represents the convolution operation with kernel size $1\times 1$, so $F_{map}$ $\in \mathbf{R}^ {W \times L \times 1}$. The values of the feature map fall between 0 and 1. The feature map can be adjusted at the component level to achieve superior SR results. The Upsample attention block is shown in Figure \ref{Fig5}(b).
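A minimal PyTorch sketch of this self-attention block, following the formulation above (the channel reduction $L=C/8$ and the zero-initialised $\alpha$ are as in the text; layer and variable names are ours), is given below.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Self-attention block: f, g, h, v are 1x1 convolutions and the
    output is y = alpha * o + x with alpha initialised to 0.
    Assumes the channel count is divisible by the reduction factor k."""
    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        l = channels // k                          # reduced channels L = C/k
        self.f = nn.Conv2d(channels, l, kernel_size=1)
        self.g = nn.Conv2d(channels, l, kernel_size=1)
        self.h = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.alpha = nn.Parameter(torch.zeros(1))  # scale, starts at 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, hgt, wid = x.shape
        n = hgt * wid                              # number of locations N
        f = self.f(x).view(b, -1, n)               # B x L x N
        g = self.g(x).view(b, -1, n)               # B x L x N
        s = f.transpose(1, 2) @ g                  # s_ij = f(x_i)^T g(x_j)
        beta = F.softmax(s, dim=1)                 # attention map beta_{i,j}
        h = self.h(x).view(b, c, n)                # B x C x N
        o = self.v((h @ beta).view(b, c, hgt, wid))  # o_j = v(sum_i beta h(x_i))
        return self.alpha * o + x                  # y_i = alpha * o_i + x_i
\end{verbatim}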
\begin{figure*}[ht] \includegraphics[width=1 \textwidth]{Figures/Figure_5.pdf} \caption{Mechanism of the attention blocks, whose goal is to boost network performance by improving super-resolution: (a) displays the mechanism of the self-attention blocks and (b) illustrates the structure of the upsample blocks.} \centering\label{Fig5} \end{figure*} \begin{figure*}[ht] \includegraphics[width=1 \textwidth]{Figures/Figure_9.pdf} \caption{Visual results of our proposed method (Multimodal-Boost) compared to state-of-the-art methods. We enlarge a particular region of the CT teeth image to show the differences between the outcomes more clearly.} \centering\label{Fig6} \end{figure*} \subsection{The Reconstruction Part} To reconstruct the super-resolved HR image $CT_{HR}$, we apply the 2D-IDWT inverse wavelet transform to the four components $\boldsymbol{CT_{LL}}, \boldsymbol{CT_{LH}}, \boldsymbol{CT_{HL}}$, and $\boldsymbol{CT_{HH}}$. \subsection{VGG-based Perceptual Loss} Perceptual loss is one of the best metrics for measuring image similarity when CNN algorithms are applied to the input image. It is utilized in various applications such as image denoising, image dehazing, image translation, and image super-resolution. Compared to the Mean Squared Error (MSE) loss \cite{johnson2016perceptual}, the perceptual loss is more robust to several problems such as over-flattening and distortion \cite{shan20183}\cite{chen2017face}; hence, for image SR, the perceptual loss is better suited to capture spatial-resolution similarities between two images. Many CNN networks employ a VGG loss to measure perceptual loss; VGG-11, VGG-16, and VGG-19 are pre-trained networks that achieved excellent results on natural image datasets \cite{he2018amc}. The VGG loss can be stated as follows: \begin{equation} L_{VGG}= \mathbb{E}_{(I_{LR, HR})} [\frac{\|VGG(g(I_{LR}))-VGG(I_{HR})\|_F^2}{DHW}] \end{equation} where $D$, $H$, $W$ denote the computed tomography (CT) image depth, height, and width. VGG was trained for image classification using natural images; therefore, it generates features not necessarily relevant to CT image super-resolution. This is one of the possible pitfalls of the VGG loss \cite{jo2020investigating}. Due to the scarcity of labeled CT images, training VGG on CT datasets is a challenging task. To handle this challenge, we propose a trained perceptual network that extracts a compressed encoding from the CT input and reconstructs an image comparable to the original one. In our case, we used VGG-16 in the described perceptual loss network, as shown in Figure \ref{Fig4}. The perceptual network comprises six convolution layers with 32, 32, 64, 64, 128, and 128 filters, respectively, plus a max-pooling layer with a kernel size of 2 and a stride of 2. Each convolution layer is followed by the ReLU activation function; in this design, we used a $3\times 3$ filter size and a stride of 1. To extract the features, we trained the perceptual network to determine the perceptual loss. Figure \ref{Fig7} shows the perceptual loss based on VGG-16. \begin{equation} L_{prec}= \mathbb{E}_{(I_{LR, HR})} [\frac{\|\gamma (g(I_{LR}))-\gamma (I_{HR})\|_F^2}{DHW}] \end{equation} where $\gamma$ is the pre-trained encoder network.
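As a hedged illustration of such a feature-space loss, the sketch below uses torchvision's off-the-shelf VGG-16 features as a stand-in for our trained six-layer perceptual network; the cut-off layer and the three-channel input are assumptions of the sketch, not the configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    """Feature-space MSE between SR and HR images on frozen VGG-16
    activations (illustrative stand-in for the trained perceptual net)."""
    def __init__(self, cut: int = 16):  # features up to relu3_3, assumed
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.gamma = nn.Sequential(*list(vgg[:cut])).eval()
        for p in self.gamma.parameters():
            p.requires_grad = False     # the loss network stays fixed

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # ||gamma(g(I_LR)) - gamma(I_HR)||_F^2 normalised by the feature
        # volume; torch.mean performs that normalisation. Inputs are
        # assumed 3-channel (grayscale slices repeated if needed).
        return torch.mean((self.gamma(sr) - self.gamma(hr)) ** 2)
\end{verbatim}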
\begin{figure}[h] \includegraphics[width=0.5 \textwidth]{Figures/Figure_6.pdf} \caption{VGG-16 loss curves obtained on natural images.} \centering\label{Fig7} \end{figure} \begin{figure}[h] \includegraphics[width=0.5 \textwidth]{Figures/Figure_13.pdf} \caption{Performance evaluation of our model with different numbers of multi-attention blocks.} \centering\label{Fig8} \end{figure} \subsection{Model Optimization} The loss of the proposed multimodal multi-attention model is the sum of the perceptual and generator-discriminator losses, as shown below: \begin{equation} \min_{g}\max_{d}L_{WGAN}(g,d)+\beta L_{perc}(g) \end{equation} where $L_{WGAN} (g,d)$ is the Wasserstein GAN \cite{arjovsky2017wasserstein}\cite{isola2017image} objective, $\beta$ is a weighting parameter that trades off the WGAN loss against the perceptual loss, and $g$ and $d$ stand for the generator and discriminator, respectively. $L_{perc} (g)$ is the perceptual loss. $L_{WGAN} (g,d)$ combines the usual Wasserstein distance with a gradient-penalty regularizer, and is expressed as \begin{align*} \min_{g}\max_{d}L_{WGAN}(g,d)= (- \mathbb{E}_{I_{HR}}[d(I_{HR})]+ \\\mathbb{E}_{I_{LR}}[d(g(I_{LR})])+ \lambda \mathbb{E}[(\|\bigtriangledown_{\hat{I}}d(\hat{I})\|_2-1)^2] \end{align*} where $\mathbb{E}_a [b]$ denotes the expectation of $b$ over $a$, $\lambda$ represents the weighting parameter, $\hat{I}$ denotes a random interpolation between the real and generated images with interpolation coefficient sampled uniformly from $[0,1]$, and $\bigtriangledown$ is the gradient. \begin{figure*}[ht] \includegraphics[width=1 \textwidth]{Figures/Figure_10.pdf} \caption{Visual results of our proposed method (Multimodal-Boost) compared to state-of-the-art methods. We enlarge a particular region of the CT knee image to show the differences between the outcomes more clearly.} \centering\label{Fig9} \end{figure*} \begin{figure*}[ht] \includegraphics[width=1 \textwidth]{Figures/Figure_12.pdf} \caption{Visual results of our proposed method (Multimodal-Boost) compared to state-of-the-art methods. We enlarge a particular region of the MRI brain image to show the differences between the outcomes more clearly.} \centering\label{Fig10} \end{figure*} \section{RESULTS AND DISCUSSIONS} We introduced above the Multimodal-Boost framework for super-resolution tasks in medical images. It comprises a new neural network for SR medical image reconstruction, pair-wise attention blocks, and a novel GAN-based loss function. The proposed model handles texture details efficiently, and the reconstructed images are realistic to a large extent. Since super-resolution is an ill-posed inverse problem \cite{lim2017enhanced}, the SR output contains substantially more information than the matching LR image. During Multimodal-Boost training, each LR feature from the wavelet subbands was regarded as a training sample and fed to the training phase. Loss functions are employed to ensure the generated SR images stay as close as possible to the HR ground truth. Since working with LR images for diagnosis is too challenging, SR images are highly valued for medical applications, and they contain enough information for radiologists to draw precise conclusions. SR DNN models store this information in hidden layers, which results in high-frequency HR images with superior texture and edge details. Medical images are considerably more complicated to handle than natural images, which makes building machine learning models for them relatively challenging.
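For completeness, the gradient-penalty term of $L_{WGAN}$ defined in the Model Optimization subsection above admits the following standard implementation (a sketch: \texttt{d} denotes the critic, \texttt{fake\_hr} the generator output, and the penalty weight of 10 is illustrative).
\begin{verbatim}
import torch

def wgan_gp_critic_loss(d, real_hr, fake_hr, lam=10.0):
    """Critic loss: -E[d(I_HR)] + E[d(g(I_LR))]
    + lam * E[(||grad_{I_hat} d(I_hat)||_2 - 1)^2]."""
    eps = torch.rand(real_hr.size(0), 1, 1, 1, device=real_hr.device)
    i_hat = (eps * real_hr + (1 - eps) * fake_hr).requires_grad_(True)
    grad = torch.autograd.grad(d(i_hat).sum(), i_hat, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return d(fake_hr).mean() - d(real_hr).mean() + lam * penalty
\end{verbatim}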
The enlarged regions in Figure \ref{Fig6}, Figure \ref{Fig9}, and Figure \ref{Fig10} show the differences between the approaches. While Meta-SR achieves better performance than the other techniques, it still struggles to generate very fine textures, whereas our proposed method takes advantage of the wavelet transform and obtains finer texture details. We also perform visual comparisons on other images using our Multimodal-Boost and other SR techniques, as seen in the teeth image in Figure \ref{Fig6}. All previous SR algorithms produce blurry outputs and fail to recover detailed information in the magnified image; only Meta-SR and our new approach Multimodal-Boost reconstruct a clean image with sharp lines. To support multimodality, we evaluate the SR results on the MRI modality and discover that SRCNN, EDSR, and Meta-SR all fail to recover clear edges. It is worth noting that the texture details they recover are incorrect. CIRCLE-GAN and SR-ILLNN, on the other hand, can rebuild more compelling results that are consistent with the ground truth, but they fail to provide more precise edge information. In contrast, our technique does better when it comes to restoring edges and texture details. \subsection{Training Details} When training SR models with pairs of LR and HR images, downsampling is typically utilized. Medical datasets, however, have fixed in-plane resolutions of roughly 1 mm and lack ultra-high spatial resolutions. As a result, transfer learning, which has proven to be very useful in remote sensing and medical applications, is one of the methods for effectively training models using external datasets. We used the large, good-quality DIV2K dataset \cite{timofte2017ntire}, with about $2000 \times 1400$ pixels per image, to pre-train our proposed technique via transfer learning. We randomly cropped $56\times 56$ sub-images from the DIV2K training images for training. First, we obtain a trained and optimized network by pre-training the proposed model. The pre-trained network is then fine-tuned on the Shenzhen Hospital dataset \cite{jaeger2014two}, containing 662 X-ray images used for training and testing. All images were scaled to $512 \times 512$ pixels, and three modalities, including Montgomery County X-ray, teeth, and knee images, were chosen to test our model. MRI brain scans from the Calgary-Campinas repository were another dataset we used to evaluate our proposed approach. This modality is produced using a 12-channel head-neck coil on an MR scanner (Discovery MR750; General Electric Healthcare, Waukesha, WI). All experiments have been carried out on a Windows-based machine with an Intel(R) Core(TM) i5-7300HQ CPU running at 3.40GHz and an NVIDIA GeForce GTX 1080-Ti graphics card. The setup also makes use of MATLAB 2019 with the CUDA Toolkit and Anaconda. \begin{table*}[ht] \caption{DETAILS OF ALL TRAINED MODELS AND DIFFERENT LOSS MEASURES} \centering \begin{tabular}{|c| c | c | c | } \hline Experiments & Generator Network & Loss Function & WGAN\\ \hline CNN-VGG & CNN & $L_{VGG}$ & no\\ \hline WGAN & CNN & $L_{WGAN}$ & yes\\ \hline Perceptual & Multi-attention & $L_{perc}$ & no\\ \hline WGAN-VGG & CNN & $L_{WGAN}$ + $L_{VGG}$ & yes\\ \hline WGAN-MA-P & Multi-attention CNN & $L_{WGAN}$ + $L_{perc}$ & yes\\ \hline \end{tabular} \label{tab:losses} \end{table*} \begin{table*}[ht] \caption{NETWORK COMPLEXITY AND INFERENCE SPEED ON SHENZHEN HOSPITAL DATASETS.
MULTIMODAL-BOOST, IN COMPARISON TO SRDENSENET AND ESRGAN, TAKES MORE TRAINING TIME, WHILE TAKING LESS TIME THAN EDSR AND CIRCLE-GAN WITH TRANSFER LEARNING. FURTHERMORE, UNLIKE EDSR AND META-SR, WHICH ONLY WORK WITH A SINGLE MODALITY, IT USES MULTI-MODALITY DATA IN A SINGLE MODEL, LOWERING THE COST OF ADDING NEW SR TASKS} \centering \begin{tabular}{|c| c | c | c |c | } \hline Methods & Number of parameters (M) & Memory size (MB) & Inference time (s) & Number of $\log_{10}$ FLOPs\\ \hline SRCNN \cite{dong2015image} & 0.059 & 13.68 & 0.0113 & 17.4\\ \hline EDSR \cite{lim2017enhanced} & 32.55 & 266.78 & 2.178 & 24.2\\ \hline Meta-SR \cite{hu2019meta} & 4.075 & 32.64 & 1.0156 & 25.1 \\ \hline GAN-CIRCLE \cite{you2019ct} & 55.88 & 457.88 & 3.0172 & 24.8\\ \hline SR-ILLNN \cite{kim2021single} & 0.441 & 16.44 & 0.0123 & 28.1\\ \hline Multimodal-Boost & 21.68 & 105.68 & 2.0189 & 28.3\\ \hline \end{tabular} \label{tab:complexity} \end{table*} \begin{table*}[ht] \caption{OUR PROPOSED METHOD OUTPERFORMS SRCNN, EDSR, META-SR, CIRCLE-GAN, SR-ILLNN AND BICUBIC INTERPOLATION ON MULTI-MODAL IMAGES SUCH AS TEETH, KNEE, AND CT SCAN IMAGES, AND MR BRAIN SCAN IMAGES, IN TERMS OF BOTH QUALITATIVE AND QUANTITATIVE OUTCOMES. A GREATER PSNR INDICATES BETTER SR IMAGE RECONSTRUCTION QUALITY, WHILE A HIGHER SSIM INDICATES BETTER PERCEPTUAL QUALITY. BOLD REPRESENTS THE BEST PERFORMANCE.} \centering \begin{tabular}{c| rrrrrr } \hline &\multicolumn{2}{c}{CT: Modality (Teeth-Image) } & \multicolumn{2}{c}{CT: Modality (Knee-Image) } & \multicolumn{2}{c}{MRI: Modality (Brain-Image) }\\ State-of-art Methods & PSNR & SSIM &PSNR & SSIM & PSNR & SSIM \\ [0.25ex] \hline Bicubic & 26.30 & 0.825 & 25.10 & 0.815 & 25.22 & 0.842\\ SRCNN \cite{dong2015image} & 27.45 & 0.845 & 26.69 & 0.853 & 26.99 & 0.862\\ EDSR \cite{lim2017enhanced} & 29.54 & 0.873 & 28.52 & 0.868 & 28.72 & 0.890\\ Meta-SR \cite{hu2019meta} & 31.55 & 0.919 & 32.22 & 0.913 & 33.52 & 0.912 \\ CIRCLE-GAN \cite{you2019ct} & 32.44 & 0.902 & 33.33 & 0.921 & 33.93 & 0.929\\ SR-ILLNN \cite{kim2021single} & 34.57 & 0.921 & 35.76 & 0.925 & 35.41 & 0.931\\ \textbf {Multimodal-Boost} & \textbf{36.32} & \textbf {0.937} & \textbf{36.97} & \textbf {0.931} & \textbf {37.46} & \textbf {0.941}\\ \hline \end{tabular} \label{tab:hresult} \end{table*} \subsection{Transfer Learning} Transfer learning is a technique that improves the performance of deep neural networks by utilizing knowledge learned from natural image datasets as initial training data. Because the two types of images have different distributions, directly applying a model trained on natural datasets to medical images does not work, so transfer learning is a suitable solution. Transfer learning has various advantages, such as: \begin{itemize} \item The ability to borrow high-frequency information from natural image datasets, which boosts the proposed method's ability to reconstruct the HR image from the LR image. \item It helps in faster model convergence. \item It improves the model accuracy. \end{itemize} To achieve high image resolution for diagnostics, our suggested Multimodal-Boost model leveraged transfer learning to integrate shared and supplementary information from diverse modalities. In order to assess each modality and the detailed information inside the image patches, we used batch sizes of 16 to 128 HR patches in model training.
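A minimal fine-tuning sketch consistent with this setup is shown below; the toy generator, the checkpoint file name and the stand-in loss are placeholders, while the ADAM settings match those reported in the next paragraph.
\begin{verbatim}
import torch
import torch.nn as nn

# Toy stand-in for the Multimodal-Boost generator: the four wavelet
# subbands are treated as four input/output channels (an assumption).
g = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 4, 3, padding=1))

# Transfer learning: start from DIV2K-pretrained weights (hypothetical
# checkpoint file), then fine-tune on the medical modality.
# g.load_state_dict(torch.load("generator_div2k.pt"))
opt = torch.optim.Adam(g.parameters(), lr=1e-4,
                       betas=(0.9, 0.999), eps=1e-8)

for epoch in range(2):                      # the paper trains to 180 epochs
    lr_sub = torch.rand(16, 4, 64, 64)      # batch of LR wavelet subbands
    hr_sub = torch.rand(16, 4, 64, 64)      # matching HR subbands
    loss = torch.mean((g(lr_sub) - hr_sub) ** 2)  # stand-in for the full loss
    opt.zero_grad()
    loss.backward()
    opt.step()
\end{verbatim}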
Training on both modalities, CT images from the Shenzhen Hospital dataset \cite{jaeger2014two} and MRI brain scans from Calgary-Campinas, is stopped after 180 epochs. We train our model with the ADAM optimizer \cite{kingma2014adam} by setting $\beta_1 = 0.9$, $\beta_2= 0.999 $, and $ \epsilon =10^{-8}$. Our primary goal is to improve multimodal medical image quality. We trained the model with DIV2K before fine-tuning it with the CT-image dataset, which contains 662 X-ray images. We also tested our model using MRI brain imaging, so that the previous modality's information can be used to reconstruct the next modality's image. According to the results, transfer learning considerably improved performance, as shown in Figure \ref{Fig11} and Figure \ref{Fig12}. The DIV2K dataset \cite{timofte2017ntire} is used for training since it is a recently proposed high-quality 2K-resolution image dataset for image enhancement, with 800 training images and 100 validation images. \begin{figure}[h] \includegraphics[width=0.5 \textwidth]{Figures/Figure_14.pdf} \caption{Inference speed on the Shenzhen Hospital dataset.} \centering\label{Fig13} \end{figure} \subsection{Comparison with State-of-the-art Methods} We analyzed and compared the proposed method with deep learning models including SRCNN \cite{dong2015image}, EDSR\cite{lim2017enhanced}, Meta-SR \cite{hu2019meta}, CIRCLE-GAN\cite{you2019ct}, SR-ILLNN \cite{kim2021single} and bicubic interpolation. These methods were designed for natural images (DIV2K \cite{timofte2017ntire}), which have much larger dimensions than the medical images we used. We employed transfer learning and retrained the models with smaller chunks of the medical image datasets to make them work more robustly. We used the same hardware and experimental conditions to make a fair comparison. We initially employed the MSE loss; however, it was not very reliable due to distortion and over-flattening. As a result, we adopted the perceptual loss $L_{perc}(g)$ using VGG-16 training, and then improved our network with the total loss $L_{WGAN} (g,d)+\beta L_{perc} (g)$, the sum of the perceptual and generator-discriminator losses. The proposed model (Multimodal-Boost) is compared against state-of-the-art approaches in Table 3, and the visualization results are shown in Figure \ref{Fig6}, Figure \ref{Fig9}, and Figure \ref{Fig10}. \begin{figure}[h] \includegraphics[width=0.5 \textwidth]{Figures/Figure_7.pdf} \caption{Model loss over different experimental networks.} \centering\label{Fig11} \end{figure} \begin{figure}[h] \includegraphics[width=0.5 \textwidth]{Figures/Figure_8.pdf} \caption{Network convergence over different loss functions on the Wasserstein estimation.} \centering\label{Fig12} \end{figure} For comparison, we conduct experiments on numerous networks with varying configurations, presented in Table I and Figure \ref{Fig11}. We observed that WGAN-based losses converge faster than non-WGAN-based losses. Finally, combining the WGAN and perceptual losses results in a shorter distance and better convergence, which we apply to all losses in model optimization, as shown in Figure \ref{Fig12}. \subsection{Convergence} The convergence curve of the VGG loss in WGAN-MA-P is the fastest across all types of approaches, as shown in Figure \ref{Fig11}. The WGAN-MA-P loss is quite similar to WGAN-VGG, although the latter reaches a lower value. However, a reduced VGG loss does not always imply improved performance.
According to our findings, the VGG-loss network loses spatial resolution, which diminishes its benefits. Finally, we show that the WGAN-MA-P loss converges very quickly, with shorter Wasserstein distances, resulting in state-of-the-art performance compared to other WGAN-based alternatives. The network complexity of the methodologies is measured in terms of the number of parameters, memory, inference time, and number of FLOPs, as shown in Table 2. Compared to SRCNN \cite{dong2015image}, Meta-SR \cite{hu2019meta}, and SR-ILLNN \cite{kim2021single}, Multimodal-Boost has many parameters; however, it has fewer parameters than EDSR \cite{lim2017enhanced} and CIRCLE-GAN \cite{you2019ct}. Furthermore, on the same hardware, the proposed method takes 18\% longer to train than Meta-SR. As a result, further work on the architecture, such as model compression, should be done in the future. \subsection{Quantitative Analysis} We evaluated our approach with several numbers of attention blocks $(\rho=2, 4, 8)$ to investigate how well it performs. In terms of PSNR and SSIM, we found that the proposed Multimodal-Boost technique outperforms all others. When $\rho $ is increased, the proposed method's performance degrades significantly, as illustrated in Figure \ref{Fig8}. Table 3 shows that the proposed strategy produces the highest PSNR. For all three images, SRCNN \cite{dong2015image} has the lowest PSNR, although it is better than bicubic interpolation. The proposed method achieves a 1.54--4.89 dB higher PSNR when compared with the recent methods EDSR\cite{lim2017enhanced}, Meta-SR\cite{hu2019meta}, CIRCLE-GAN\cite{you2019ct}, and SR-ILLNN \cite{kim2021single}. The quantitative performance metric is the peak signal-to-noise ratio (PSNR). Given a ground-truth image $S$ and its reconstructed image $\hat{S}$, of size $M\times N$ pixels, the PSNR is defined as follows: \begin{equation} PSNR(S,\hat{S})= 10\log_{10}\frac{255^2}{MSE(S,\hat{S})} \end{equation} The Structural Similarity Index Measure (SSIM) is a popular tool for assessing the quality of high-resolution reconstructions. The SSIM index can be expressed mathematically as \begin{equation} SSIM= \frac{(2\mu_x \mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)} \end{equation} where $\mu_x$, $\mu_y $ are the averages of x and y respectively, $\sigma_x^2$, $\sigma_y^2$ are the variances of x and y respectively, $\sigma_{xy}$ is the covariance of x and y, and $C_1$, $C_2$ are constants. SISR is a low-level image processing task that can be used in a number of contexts. In real-world circumstances, SISR's time constraints are extremely stringent. We conduct tests to see how long each state-of-the-art method takes to run and then compare the results on the Shenzhen Hospital datasets; this is depicted in Figure \ref{Fig13}. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This work was supported by the Natural Science Foundation of China, Grant/Award Number: 61836013; Beijing Natural Science Foundation, Grant/Award Number: 4212030; Beijing Technology, Grant/Award Number: Z191100001119090; and the Key Research Program of Frontier Sciences, CAS, Grant/Award Number: ZDBS‐LY‐DQC016. \ifCLASSOPTIONcompsoc \section*{Data Availability} \else \section*{Data Availability} \fi We used two popular datasets: CT images from the Shenzhen Hospital datasets \cite{jaeger2014two} and the public MRI dataset ({https://sites.google.com/view/calgary-campinas-dataset/home}).
\section{CONCLUSION} In this study, we designed a multi-attention GAN framework with wavelet transform and transfer learning for multimodal SISR on medical images. This is the first time a multi-attention GAN has been employed in conjunction with a wavelet transform methodology for medical image super-resolution. We also used transfer learning, which works with multimodality data and decreases the cost of adding new challenging SR targets such as disease classification. We trained our network on the high-resolution DIV2K dataset, then applied transfer learning to train and test medical images on the Shenzhen Hospital datasets of various modalities. Furthermore, we used GANs to train a perceptual loss function that can better super-resolve LR features, resulting in improved perceptual quality of the generated images. Due to the 2D-DWT properties, the reconstructed images are more accurate and carry more texture information. The utility of transfer learning is that the pre-trained model is fine-tuned using medical images such as cardiac MR scans and CT scans. In particular, we evaluated our outcomes in terms of visual quality, quantitative metrics, and adversarial and perceptual losses. The PSNR and SSIM measurements are the preliminary step in evaluating an SR approach; however, they are insufficient for real-time applications. In the future, we will study how the proposed approach affects target tasks such as disease detection and classification, as well as minimizing the number of parameters and FLOPs, which will effectively reduce training and inference time. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Pseudorandomness is one of the most fundamental concepts in the domains of cryptography and complexity theory. In contrast to true randomness, it captures the notion of primitives that behave randomly to computationally-bounded observers \cite{yao1982theory,shamir1983generation,blum1984generate}. Pseudorandom objects like pseudorandom number generators (PRGs) and pseudorandom functions (PRFs) play a crucial role in designing classical symmetric-key cryptography protocols for secure communication \cite{goldreich1986construct,haastad1999pseudorandom, goldreich1984cryptographic, luby1988construct, rompel1990one}. These pseudorandom objects can be designed by exploiting the algebraic properties of families of keyed functions, like keyed hash functions, or from hardware assumptions, like physical unclonable functions (PUFs). In the classical world, the relationship between PUFs and pseudorandomness is well-studied~\cite{ruhrmair2009foundations}. Analogously to classical pseudorandomness, Ji, Liu, and Song \cite{ji2018pseudorandom} recently introduced the concept of quantum pseudorandomness, namely pseudorandom quantum states (PRSs) and pseudorandom unitaries (PRUs), as families of states or unitary transformations that are indistinguishable from the Haar measure (the truly random measure) to any quantum computationally-bounded observer. On the other hand, similar to classical PUFs, we now have the concept of quantum PUFs \cite{arapinis2021quantum}. However, unlike the classical case, the relation between quantum pseudorandomness and quantum PUFs is not well-explored. The existing PRS schemes are constructed under computational assumptions such as quantum-secure PRFs or quantum-secure one-way functions~\cite{brakerski2020scalable,ji2018pseudorandom}. An interesting question that arises is whether quantum pseudorandomness can be achieved under a different set of assumptions. In this paper, to the best of our knowledge, we show for the first time the construction of quantum pseudorandom unitaries from quantum PUFs and vice versa. Quantum pseudorandom states are an ensemble of a keyed family of quantum states $\{|\phi_k\rangle\}_{k \in \mathcal{K}}$ that can be generated efficiently \cite{ji2018pseudorandom}. The pseudorandomness comes from the property that, for any polynomial-time quantum adversary, any polynomial number of copies of a state sampled from the ensemble $\{|\phi_k\rangle\}_{k \in \mathcal{K}}$ is indistinguishable from the same number of copies of a Haar-random state. Similarly, pseudorandom unitaries are an ensemble of a keyed family of unitaries $\{U_k\}_{k \in \mathcal{K}}$ that can be implemented efficiently \cite{ji2018pseudorandom}. Analogously to PRS, the pseudorandomness of PRUs implies that, with oracle access to the unitary, no polynomial-time quantum adversary can distinguish between unitaries sampled from $\{U_k\}_{k \in \mathcal{K}}$ and Haar-measure unitaries. A PUF is designed to be a cost-efficient, low-resource hardware device that provides a unique, physically defined digital fingerprint \cite{delvaux2017security, herder2014physical}. For each challenge, it produces a unique response that acts as an identifier. In the case of classical PUFs, the uniqueness comes from the unique physical variations that occur naturally during the manufacturing process of the device.
Such subtle physical variations of the hardware components during the manufacturing process can be easily measured but are infeasible to reproduce in practice. It is well known in the classical setting that, theoretically, PUFs can be considered pseudorandom functions. However, in practice, most classical PUFs are vulnerable to machine learning-based attacks \cite{ganji2016strong,ruhrmair2010modeling,khalafalla2019pufs}. Due to this shortcoming, there is significant interest in designing quantum PUFs (qPUFs) that utilise quantum mechanical properties, where both the challenges and responses are quantum states \cite{arapinis2021quantum, gianfelici2020theoretical,nikolopoulos2017continuous}. Quantum challenges and responses feature the additional property that, in contrast to the classical case, they cannot be cloned, by the laws of quantum mechanics \cite{wootters1982single}. In general, a qPUF is modelled as a completely positive trace-preserving (CPTP) map, which maps an input challenge state to a unique response state. In addition, a qPUF must also be unique, i.e. two distinct qPUFs must generate different responses to any given challenge with high probability (the \emph{uniqueness} property), and it must be unforgeable by any bounded (quantum or classical) adversary trying to clone the device (the \emph{unforgeability} property). In \cite{arapinis2021quantum}, Arapinis et al. developed a formal security notion for qPUFs and provided a qPUF construction based on Haar-random unitaries, with the challenge states also drawn Haar-randomly, to satisfy the unforgeability property. Moreover, in the same paper, the authors designed a generic quantum emulation-based attack for forging any qPUF and proved that their generic construction is unforgeable against any polynomial-time quantum adversary. This construction, although secure, is not practical due to the Haar-random requirement on the unitaries and the states. The reason for this is that sampling from the Haar measure requires exponential resources~\cite{knill1995approximation} and is hence experimentally challenging~\cite{carolan2015universal}. The construction of the unitary qPUF itself was partially improved by the result of Kumar et al. \cite{kumar2021efficient}, who constructed a qPUF based on unitary $t$-designs, which can be built efficiently. However, they still require the challenges to be drawn from the Haar-random set of states to prove the unforgeability property. Further, we emphasise that, unlike for classical PUFs, the literature on qPUFs is not yet mature, and we have only a few candidate designs for qPUFs, as mentioned above. In this work, we make substantial progress on the above inefficiency, namely the requirement that the challenges be chosen from the Haar measure. Specifically, we show that PRS can significantly reduce the challenger's overhead in choosing the challenge states - from inefficient Haar-random states to efficient PRS. We further show that PRUs can be used as viable candidates for qPUFs. This result provides yet another novel and efficient technique for constructing qPUFs. Moreover, we also investigate whether qPUFs can be used as PRUs. Similar to the qPUF, the PRU is also a relatively new concept, and to the best of our knowledge, there are no concrete designs for PRUs. Our investigation in this paper helps establish a close connection between these two new fields, i.e., qPUFs and quantum pseudorandomness.
This relation gives us novel insights into designing both qPUFs and PRUs. We are optimistic that the connections we foster here will benefit both communities and that the advances in one field will help enrich the advances in the other. In the next subsection, we give a brief outline of our results. \subsection{Result overview} In this paper, we first address the inefficiency issue of the qPUF designs in \cite{arapinis2021quantum, kumar2021efficient} and prove security against quantum polynomial-time adversaries even if the challenge states are sampled from a set of pseudorandom quantum states. \begin{theorem}[informal]\label{th:inf-prs-unf} Any unitary qPUF satisfies unforgeability with challenges that are selected from a Pseudorandom Quantum States (PRS) family (instead of Haar-random states) against quantum polynomial-time (QPT) adversaries. \end{theorem} Here we also show that PRUs can be used as qPUFs. Moreover, we establish a connection between a unitary family of qPUFs, with a specific practical requirement, and PRUs. \begin{theorem}[informal]\label{th:inf-pru-uu} Any PRU family can be a unitary qPUF family. \end{theorem} \begin{theorem}[informal]\label{th:inf-practicaluu-pru} A family of practically unknown unitaries is also a PRU family. Hence any unitary qPUF family that satisfies practical unknownness is also a PRU family. \end{theorem} Later, we give a novel construction of PRUs from the family of qPUFs by exploring yet another hardware requirement, namely their uniqueness property. The following result can also be applied to any unitary family with almost-maximal uniqueness, not only qPUFs. \begin{theorem}[informal]\label{th:inf-unique-pru} Any family of unitary transformations over a $d$-dimensional Hilbert space satisfying almost-maximal uniqueness in the diamond norm is also a PRU family for sufficiently large $d$. Hence any qPUF family satisfying this degree of uniqueness is also a PRU family. \end{theorem} A diagrammatic summary of the above theorems can be found in Figure~\ref{fig:results}. We finish our paper with a secure and efficient qPUF-based client verification protocol using our result from Theorem~1. \begin{theorem}[informal]\label{th:inf-protocols} The qPUF-based identification protocols in~\cite{doosti2020client} can achieve the same security guarantee against QPT adversaries if the Haar-random states are replaced with PRS. \end{theorem} \begin{figure} \includegraphics[scale=0.35]{results.pdf} \centering \caption{Diagrammatic summary of the results. The left-hand figure demonstrates Theorem~\ref{th:efficientuu-prs}, stating that the universal unforgeability of unknown unitaries can be achieved efficiently using PRS. The right-hand figure depicts the relationship between unknown unitaries (UU), quantum physical unclonable functions (qPUF), pseudorandom unitaries (PRU) and families of almost maximally-distanced unitaries ($\{U_k\}_{maxd}$), proved in Theorems~\ref{th:pru-uu}, \ref{th:pru-unique}, \ref{th:practicaluu-pru}, and \ref{th:max-unique-pru}. It also shows that they can be used as generators for pseudorandom quantum states (PRS).} \label{fig:results} \end{figure} \subsection{Notations} Here we list some of the most widely-used notations in this paper. \begin{itemize} \small \item PRG : Pseudorandom generator. \item PRF : Pseudorandom function. \item qPRF : Quantum-secure pseudorandom function. \item PRS : Pseudorandom state. \item PRU : Pseudorandom unitary. \item qPUF : Quantum physical unclonable function.
\item QPT : Quantum polynomial-time. \item UU : Unknown unitary transformation. \item CRP : Challenge-response pair. \item BQP : Bounded quantum polynomial. \item CPTP : Completely positive trace-preserving. \item $F(.,.)$ : Uhlmann's fidelity. \item $\mu$ : Haar measure. \item $\lambda$ : Security parameter. \end{itemize} \section{Preliminaries} \label{sec:prelims} This section presents the various ingredients required for our results and proofs. \subsection{Quantum Pseudorandomness}\label{sec:prelim-qpseudorand} Pseudorandomness is a central concept in modern cryptography which has also been extended to the quantum regime. Here we mention the different notions that have been defined or extended into the quantum world, namely Pseudorandom Quantum States (PRS), quantum-secure Pseudorandom Functions (qPRF), and their fully quantum analogue, quantum Pseudorandom Unitaries (PRU). \subsubsection{Pseudorandom Quantum States (PRS)} \begin{definition}[Pseudorandom Quantum States (PRS): \cite{ji2018pseudorandom}]\label{def:prs} Let $\mathcal{H}$ be a Hilbert space and $\mathcal{K}$ the key space. $\mathcal{H}$ and $\mathcal{K}$ depend on the security parameter $\lambda$. A keyed family of quantum states $\{\ket{\phi_k}\in S(\mathcal{H})\}_{k\in\mathcal{K}}$ is \textit{pseudorandom} if the following two conditions hold: \begin{itemize} \item \textbf{Efficient generation}. There is an efficient quantum algorithm $G$ which generates the state $\ket{\phi_k}$ on input $k$. That is, for all $k\in\mathcal{K}, G(k) = \ket{\phi_k}$. \item \textbf{Pseudorandomness}. Polynomially many copies of $\ket{\phi_k}$ with the same random $k\in\mathcal{K}$ are computationally indistinguishable from the same number of copies of a Haar-random state. More precisely, for any efficient quantum algorithm $\mathcal{A}$ and any $m\in poly(\lambda)$, \end{itemize} \begin{equation} |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}(\ket{\phi_k}^{\otimes m})=1] - \underset{\ket{\psi} \leftarrow \mu}{Pr}[\mathcal{A}(\ket{\psi}^{\otimes m})=1]| = negl(\lambda). \end{equation} where $\mu$ is the Haar measure on $S(\mathcal{H})$. \end{definition} \subsubsection{Quantum-secure Pseudorandom Functions (qPRF)} Quantum-secure pseudorandom functions are families of functions that look like truly random functions to QPT adversaries. Formally, qPRFs are defined as follows: \begin{definition}[Quantum-Secure Pseudorandom Functions (qPRF): \cite{ji2018pseudorandom}]\label{def:qprf} Let $\mathcal{K},\mathcal{X},\mathcal{Y}$ be the key space, the domain and the range, all implicitly depending on the security parameter $\lambda$. A keyed family of functions $\{PRF_k: \mathcal{X} \rightarrow \mathcal{Y}\}_{k\in \mathcal{K}}$ is a quantum-secure pseudorandom function (qPRF) if, for any polynomial-time quantum oracle algorithm $\mathcal{A}$, $PRF_k$ with a random $k \leftarrow \mathcal{K}$ is indistinguishable from a truly random function $f \leftarrow \mathcal{Y}^{\mathcal{X}}$ in the sense that: \begin{equation} |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}^{PRF_k}(1^{\lambda})=1] -\underset{f \leftarrow \mathcal{Y}^{\mathcal{X}}}{Pr}[\mathcal{A}^{f}(1^{\lambda})=1]| = negl(\lambda). \end{equation} \end{definition} \subsubsection{Pseudorandom Unitary Operators (PRUs)} These are the unitary analogue of PRFs, defined as follows.
\begin{definition}[Pseudorandom Unitary Operators (PRU): \cite{ji2018pseudorandom}]\label{def:pru} A family of unitary operators $\{U_k \in \mathcal{U}(\mathcal{H})\}_{k \in \mathcal{K}}$ is a pseudorandom unitary if two conditions hold: \begin{itemize} \item \textbf{Efficient computation}. There is an efficient quantum algorithm $Q$ such that for all $k$ and any state $\ket{\psi} \in S(\mathcal{H}), Q(k,\ket{\psi}) = U_k\ket{\psi}$. \item \textbf{Pseudorandomness}. $U_k$ with a random key $k$ is computationally indistinguishable from a Haar-random unitary operator. More precisely, for any efficient quantum algorithm $\mathcal{A}$ that makes at most polynomially many queries to the oracle: \end{itemize} \begin{equation} |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}^{U_k}(1^{\lambda})=1] - \underset{U \leftarrow \mu}{Pr}[\mathcal{A}^U(1^{\lambda})=1]| = negl(\lambda). \end{equation} where $\mu$ is the Haar measure on $S(\mathcal{H})$. Note that here we focus on the pseudorandomness condition of the PRU definition. \end{definition} \subsubsection{Unknown Unitary Transformations (UUs)} We also mention a notion related to PRU, a family of Unknown Unitaries (UU) defined in~\cite{arapinis2021quantum}, which can also be interpreted as single-shot pseudorandomness. \begin{definition}[Unknown Unitary Transformation~\cite{arapinis2021quantum}]\label{def:uu} A family of unitary transformations $U^u$ over a $d$-dimensional Hilbert space $\mathcal{H}^d$ is called a family of Unknown Unitaries if, for all QPT adversaries $\mathcal{A}$, the probability of estimating the output of $U^u$ on any state $\ket{\psi}\in\mathcal{H}^d$ selected uniformly from the Haar measure is at most negligibly higher than the probability of estimating the output of a Haar-random unitary operator on that state: \begin{equation}\small |\underset{U \leftarrow U^u}{Pr}[F(\mathcal{A}(\ket{\psi}),U\ket{\psi}) \geq \delta(\lambda)] - \underset{U_{\mu} \leftarrow \mu}{Pr}[F(\mathcal{A}(\ket{\psi}),U_{\mu}\ket{\psi}) \geq \delta(\lambda)]| = negl(\lambda). \end{equation} where $\delta(\lambda)$ is any non-negligible function in the security parameter. \end{definition} We note that this definition characterises a notion of \emph{single-shot} indistinguishability from the family of Haar-random unitaries. Thus the adversary has only single black-box query access to the unitary, but can have some prior information on the family that can be used for estimating the output. The definition intuitively states that a family of unitaries is unknown when no such useful information exists about the family prior to the query access. \subsection{Quantum Adversarial Model and Security Definitions} Strong notions of security for quantum cryptographic proposals require cryptanalysis against adversaries which also possess quantum capabilities of varying degrees \cite{boneh2011random, mosca2018cybersecurity, song2014note}. The strongest such notion is achieved by assuming no restrictions on the adversary's computational power and resources. This security model, also known as security against an \emph{unbounded adversary}, is usually too strong to be achieved by most cryptographic primitives such as qPUFs. It has been shown in~\cite{arapinis2021quantum} that unitary qPUFs cannot remain secure against an unbounded adversary. Thus the standard security model that we also use in this paper is the notion of security against efficient quantum adversaries, in other words, QPT adversaries.
We define such an adversary in the context of qPUFs. A QPT adversary with query access to a qPUF is defined as an adversary that can query the qPUF oracle with polynomially many (in the security parameter) arbitrary challenges and has a polynomial-sized quantum register to store the quantum CRPs. The QPT adversary is also allowed to run any efficient quantum algorithm. The security of most qPUF-based cryptographic protocols relies on the unforgeability property of the qPUF. Here we follow the same definition of \emph{universal} unforgeability (also called \emph{selective unforgeability} in the context of quantum PUFs) given in~\cite{arapinis2021quantum,doosti2021unified} and restate it as follows: \begin{game}\label{game:uni-unf}[Universal Unforgeability] Let $\mathrm{Gen}$, $\mathrm{U_{\mathcal{E}}}$ and $\mathcal{T}$ be the generation, evaluation and test algorithms of the quantum primitive $\mathcal{E}$ respectively. We define the following game $G$ running between an adversary $\mathcal{A}$ and a challenger $\mathcal{C}$: \begin{itemize} \item [] \textbf{Setup phase.} The challenger $\mathcal{C}$ runs $\mathrm{Gen}(\lambda)$ and reveals to the adversary $\mathcal{A}$ the domain and range Hilbert spaces of $\mathrm{U_{\mathcal{E}}}$, respectively denoted by $\mathcal{H}_{in}$ and $\mathcal{H}_{out}$. \item [] \textbf{Learning phase.} For $i=1:k$ \begin{itemize} \item $\mathcal{A}$ issues an arbitrary query $\rho_i \in \mathcal{S}(\mathcal{H}_{in})$ to $\mathcal{C}$; \item $\mathcal{C}$ generates $\rho_i^{out} = \mathrm{U_{\mathcal{E}}}\rho_i\mathrm{U_{\mathcal{E}}}^{\dagger}$ and sends $\rho_i^{out}$ to $\mathcal{A}$; \end{itemize} \item [] \textbf{Challenge phase.} $\mathcal{C}$ chooses a quantum state $\rho^*$ at random from the uniform (Haar) distribution over the Hilbert space $\mathcal{H}_{in}$ and sends $\rho^*$ to $\mathcal{A}$. The challenger can generate arbitrarily many copies of $\rho^*$. \item [] \textbf{Guess phase.} \begin{itemize} \item $\mathcal{A}$ generates the forgery $\omega$ and sends it to~$\mathcal{C}$; \item $\mathcal{C}$ runs the test algorithm $b\leftarrow \mathcal{T}((\rho^{*out})^{\otimes \kappa},\omega)$ where $b\in\{0,1\}$ and outputs $b$. The adversary wins the game if $b=1$. \end{itemize} \end{itemize} \end{game} \begin{definition}[Quantum Universal Unforgeability]\label{def:qunf} A primitive provides quantum universal unforgeability if the success probability of any QPT adversary $\mathcal{A}$ in winning Game~\ref{game:uni-unf} is negligible in the security parameter $\lambda$: \begin{equation} Pr[1\leftarrow G(\lambda, \mathcal{A})] = negl(\lambda) \end{equation} \end{definition} Throughout the paper we widely use the result from~\cite{doosti2021unified,arapinis2021quantum} implying that unknown unitary transformations, as formalised by Definition~\ref{def:uu}, satisfy the notion of universal unforgeability: \begin{theorem}[\cite{doosti2021unified}]\label{th:uu-unforge} Primitives whose evaluation algorithm is an unknown unitary transformation are universally unforgeable. \end{theorem} \subsection{Quantum Equality Tests}\label{sec:test} Distinguishing two unknown quantum states is a central ingredient in quantum information processing. This task is often referred to as the ``state discrimination task''. The celebrated Holevo-Helstrom bound \cite{holevo1973bounds} relates the optimal distinguishability of two unknown states to the trace distance between their density matrices.
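Concretely, for two states $\rho$ and $\sigma$ given with equal prior probability, the bound states that the optimal probability of correctly identifying the state with a single measurement is \begin{equation} \text{Pr}[\text{success}] = \frac{1}{2} + \frac{1}{4}\parallel \rho - \sigma \parallel_1. \end{equation}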
This implies that unless the two states are orthogonal (so that the trace distance is maximal), it is impossible to deterministically distinguish them. An important application of state discrimination is the task of Equality testing \cite{buhrman2001quantum, barenco1997stabilization, xu2015experimental}. This is a simple task, yet a building block for many more complex quantum protocols. The objective of Equality testing, the one that we consider in our work, is to test whether two \emph{unknown} quantum states are the same. This is a well-studied topic and we describe the optimal quantum protocols for Equality testing. \subsubsection{SWAP test}\label{sec:swap} Given a single copy each of two unknown quantum states $\rho$ and $\sigma$, is there a simple test to optimally determine whether the two states are equal or not? This question was answered in the affirmative by Buhrman et al.\ \cite{buhrman2001quantum} when they provided a test called the SWAP test. This test was initially used by the authors to prove an exponential separation between classical and quantum resources in the simultaneous message passing model. Since then it has been used as a standard tool in the design of various quantum algorithms \cite{buhrman2010nonlocality,kumar2017efficient}. A SWAP test circuit takes as input the two unknown quantum states $\rho$ and $\sigma$ and attaches an ancilla $\ket{0}$. A Hadamard gate is applied to the ancilla, followed by the controlled-SWAP gate and again a Hadamard on the ancilla qubit. Finally, the ancilla is measured in the computational basis and we conclude that the two states are equal if the measurement outcome is `0' (labelled accept). Figure~\ref{fig:swap} illustrates this test in the special case when the state $\sigma$ is a pure state, denoted $\ket{\psi}$. \begin{figure}[h!] \centering \[ \xymatrix @*=<0em> @C=2em @R=1.4em { & \lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{H} & *=<1.8em,1.4em>{\xy ="j","j"-<.778em,.322em>;{"j"+<.778em,-.322em> \ellipse ur,_{}},"j"-<0em,.4em>;p+<.5em,.9em> **\dir{-},"j"+<2.2em,2.2em>*{},"j"-<2.2em,2.2em>*{} \endxy} \POS ="i","i"+UR;"i"+UL **\dir{-};"i"+DL **\dir{-};"i"+DR **\dir{-};"i"+UR **\dir{-},"i" \qw \\ & \lstick{\rho} & \qw & \multigate{1}{\text{SWAP}} & \qw & \qw \\ & \lstick{\ket{\psi}} & \qw & \ghost{\text{SWAP}} & \qw & \qw }\] \caption{The SWAP test circuit} \label{fig:swap} \end{figure} It can be shown that the probability the SWAP test accepts the states $\rho$ and $\sigma$ is \cite{kobayashi2003quantum}, \begin{equation} \text{Pr}[\text{SWAP accept}] = \frac{1}{2} + \frac{1}{2}\text{Tr}(\rho\sigma) \end{equation} In the special case when at least one of the states (say $\sigma$) is a pure state $\sigma = \ket{\psi}\bra{\psi}$, the probability of acceptance is, \begin{equation} \text{Pr}[\text{SWAP accept}] = \frac{1}{2} + \frac{1}{2} \bra{\psi}\rho\ket{\psi} = \frac{1}{2} + \frac{1}{2}F^2(\rho, \ket{\psi}\bra{\psi}) \label{eq:swapaccept} \end{equation} Thus when at least one of the two states is a pure state, the acceptance probability is directly related to the fidelity between the states. In particular, when the states are the same, the probability of acceptance is 1. When the states are different, however, an acceptance constitutes an error. The error of the SWAP test when the states are different (also called the one-sided error) is therefore the acceptance probability of the SWAP test evaluated on unequal states.
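As a small illustration (a sketch for pure states, not part of the protocol analysis in this paper), the SWAP-test statistics can be checked numerically: after the Hadamard--controlled-SWAP--Hadamard sequence, the unnormalised ancilla-$0$ branch of the joint state is $(\ket{\psi}\ket{\phi} + \ket{\phi}\ket{\psi})/2$, and its squared norm is the acceptance probability.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def haar_state(d):
    """Sample a Haar-random pure state of dimension d."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def swap_test_accept(psi, phi):
    """Pr[ancilla = 0]: squared norm of the ancilla-0 branch
    (|psi,phi> + |phi,psi>)/2 of the post-circuit state."""
    branch0 = 0.5 * (np.kron(psi, phi) + np.kron(phi, psi))
    return np.vdot(branch0, branch0).real

psi, phi = haar_state(8), haar_state(8)
fid_sq = abs(np.vdot(psi, phi)) ** 2      # F^2 for pure states
print(swap_test_accept(psi, psi))         # equal states: 1.0
print(swap_test_accept(psi, phi))         # near 1/2: the one-sided error
print(0.5 + 0.5 * fid_sq)                 # matches the formula above
\end{verbatim}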
This one-sided error can, however, be brought down to any desired $\epsilon > 0$ by running multiple independent instances of the SWAP test circuit. The number of instances $M$ required to bring the error probability down to a desired $\epsilon$ is, \begin{equation*} \begin{split} \text{Pr}[\text{SWAP error}] & = \prod^{M}_{j=1}\text{Pr}[\text{SWAP accept}]_j = (\frac{1}{2} + \frac{1}{2}F^2)^M = \epsilon \\ & \Rightarrow M(\log(1+F^2)-1) = \log(\epsilon) \Rightarrow M\approx \mathcal{O}(\log(1/\epsilon)) \end{split} \end{equation*} where $F = F(\rho, \ket{\psi}\bra{\psi}) = \sqrt{\bra{\psi}\rho\ket{\psi}}$, logarithms are in base 2, and we use the fact that the fidelity is independent of $\epsilon$. \subsubsection{Generalised SWAP test}\label{sec:gswap} The above SWAP test is optimal for Equality testing (in a single instance) of two unknown quantum states when one holds a single copy of each state. However, there are certain quantum protocols where one has access to multiple copies of one unknown state $\ket{\psi}$ and only a single copy of the other unknown state $\rho$, and the objective is to provide an optimal Equality testing circuit. For this scenario, Chabaud et al.~\cite{chabaud2018optimal} provided an efficient construction of such a circuit, the generalised SWAP (GSWAP) test circuit. A GSWAP circuit takes as input a single copy of $\rho$, $M$ copies of $\ket{\psi}$ and $\ceil[\big]{\log (M+1)}$ ancilla qubits in the state $\ket{0}$. The generalised circuit is then run on the inputs, and the ancilla qubits are measured in the computational basis. Figure~\ref{fig:gswap} is a generic illustration of such a circuit. For more details on the circuit we refer to the original work \cite{chabaud2018optimal}. \begin{figure}[h!] \includegraphics[scale=0.30]{gswapcircuit.png} \centering \caption{GSWAP: A generalisation of the SWAP test with a single copy of $\rho$ and $M$ copies of $\ket{\psi}$. The circuit also inputs $n = \ceil[\big]{\log (M+1)}$ ancilla qubits in the state $\ket{0}$. At the end of the circuit, the ancilla states are measured in the computational basis.} \label{fig:gswap} \end{figure} It can be shown that the probability the GSWAP circuit accepts two quantum states $\rho$ and $\ket{\psi}$ is, \begin{equation} \text{Pr}[\text{GSWAP accept}] = \frac{1}{M+1} + \frac{M}{M+1} \bra{\psi}\rho\ket{\psi} = \frac{1}{M+1} + \frac{M}{M+1}F^2 \label{eq:gswap} \end{equation} where $F = F(\rho, \ket{\psi}\bra{\psi})$. We note that in the special case of $M=1$, the GSWAP test reduces to the SWAP test. Also, in a single instance, GSWAP provides a better Equality test than the SWAP test since it reduces the one-sided error probability. In the limit $M \rightarrow \infty$, we obtain the optimal acceptance probability $\text{Pr}[\text{accept}] = F^2 = \bra{\psi}\rho\ket{\psi}$. Another important feature of GSWAP is that it can achieve any desired one-sided error probability $\epsilon$ (with $\epsilon > F^2$) in just a single instance, which is impossible with the SWAP circuit. However, the number of copies required is exponentially larger than the number of instances that the SWAP circuit has to run to achieve the same error probability, \begin{equation} \begin{split} \text{Pr}[\text{GSWAP error}] & = \text{Pr}[\text{GSWAP accept}] = \frac{1}{M+1} + \frac{M}{M+1}F^2 = \epsilon \\ & \Rightarrow M = \frac{1-\epsilon}{\epsilon - F^2} \approx \mathcal{O}(1/\epsilon) \end{split} \label{eq:gswaperror} \end{equation} Hence the choice between the SWAP test and the GSWAP test depends on the specific application.
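The contrast between the two scalings, $\mathcal{O}(\log(1/\epsilon))$ SWAP repetitions versus $\mathcal{O}(1/\epsilon)$ copies for a single GSWAP, can be made explicit with a few lines of arithmetic. The following is a small numeric sketch of the two formulas above; the fidelity value is an arbitrary stand-in for two nearly orthogonal states and is not taken from any protocol in this paper.
\begin{verbatim}
import numpy as np

def swap_repetitions(F, eps):
    """Independent SWAP-test runs needed so that the one-sided
    error ((1 + F^2)/2)^M drops below eps."""
    return int(np.ceil(np.log(eps) / np.log((1 + F**2) / 2)))

def gswap_copies(F, eps):
    """Copies M of |psi> needed so that the single-instance GSWAP
    error 1/(M+1) + M*F^2/(M+1) equals eps (requires eps > F^2)."""
    assert eps > F**2
    return int(np.ceil((1 - eps) / (eps - F**2)))

F = 1e-4   # e.g. two Haar-random states in a large dimension
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, swap_repetitions(F, eps), gswap_copies(F, eps))
# The SWAP count grows logarithmically in 1/eps,
# while the GSWAP copy count grows linearly in 1/eps.
\end{verbatim}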
\subsection{Quantum Physical Unclonable Functions}\label{sec:prelim-qpuf} A Quantum Physical Unclonable Function, or qPUF, is a secure hardware cryptographic device that is, by assumption, hard to clone or reproduce, and that utilises properties of quantum mechanics \cite{arapinis2021quantum}. Similar to a classical PUF \cite{armknecht2016towards}, a qPUF is assessed via CRPs. However, in contrast to a classical PUF, where the CRPs are classical states, the qPUF CRPs are quantum states. We use the definition of qPUF introduced in~\cite{arapinis2021quantum} for the purpose of this paper. A qPUF manufacturing process involves a quantum generation algorithm, `qGen', which takes as input a security parameter $\lambda$ and generates a PUF with a unique identifier \textbf{id}, \begin{equation} \text{qPUF}_{\textbf{id}} \leftarrow \text{qGen}(\lambda) \end{equation} Next we define the mapping provided by $\text{qPUF}_{\textbf{id}}$, which takes any input quantum state $\rho_{in} \in \mathcal{S}(\mathcal{H}^{d_{in}})$ to the output state $\rho_{out} \in \mathcal{S}(\mathcal{H}^{d_{out}})$. Here $\mathcal{H}^{d_{in}}$ and $\mathcal{H}^{d_{out}}$ are the input and output Hilbert spaces, respectively, corresponding to the mapping that $\text{qPUF}_{\textbf{id}}$ provides. This process is captured by the `qEval' algorithm, which takes as input a unique $\text{qPUF}_{\textbf{id}}$ device and the state $\rho_{in}$ and produces the state $\rho_{out}$, \begin{equation} \rho_{out} \leftarrow \text{qEval}(\text{qPUF}_{\textbf{id}}, \rho_{in}) \end{equation} A qPUF needs to satisfy a few requirements. The first property, \textbf{$\delta_r$-Robustness}~\cite{arapinis2021quantum}, ensures that if the qPUF is queried separately with two input quantum states $\rho_{in}$ and $\sigma_{in}$ that are $\delta_r$-indistinguishable from each other, then the output quantum states $\rho_{out}$ and $\sigma_{out}$ must also be $\delta_r$-indistinguishable. The second property, \textbf{$\delta_c$-Collision resistance}~\cite{arapinis2021quantum}, ensures that if the same qPUF is queried separately with two input quantum states $\rho_{in}$ and $\sigma_{in}$ that are $\delta_c$-distinguishable, then the output states $\rho_{out}$ and $\sigma_{out}$ must also be $\delta_c$-distinguishable with overwhelmingly high probability. Here, distinguishability is defined with respect to the fidelity, such that two quantum states $\rho$ and $\sigma$ are $\delta$-distinguishable if $0 \leqslant F(\rho, \sigma) \leqslant 1 - \delta$, where $F(\rho, \sigma)$ is Uhlmann's fidelity. Alternatively, other distance measures such as the trace norm or the Euclidean norm (any Schatten $p$-norm) can also be used to define the security requirements for a qPUF. The last requirement, which we will use in this paper, is \textbf{$\delta_u$-Uniqueness}~\cite{arapinis2021quantum}. This property ensures that the qPUF generation process produces sufficiently distinguishable devices. This is captured by modeling each qPUF as a quantum operation characterised by a CPTP map that takes the input quantum states in $\mathcal{H}^{d_{in}}$ to output states in $\mathcal{H}^{d_{out}}$.
We say that two such maps $\Lambda^{qPUF}_{i}$ and $\Lambda^{qPUF}_{j}$ are $\delta_u$-distinguishable if \begin{equation}\label{eq:unique} Pr\big[\parallel \Lambda^{qPUF}_{i} - \Lambda^{qPUF}_{j}\parallel_{\diamond} \geq \delta_u\big] \geq 1 - \epsilon(\lambda), \quad i\neq j \end{equation} where $\parallel .\parallel_{\diamond}$ is the diamond norm distance measure for the distinguishability of two quantum operations, and $\epsilon(\lambda)$ is a negligible function in the security parameter $\lambda$. The diamond norm is a distance metric for any two completely positive trace-preserving quantum operations $\Lambda_1$, $\Lambda_2$. It is defined as, \begin{equation} \parallel \Lambda_1 - \Lambda_2 \parallel_{\diamond} = \underset{\rho}{\max} (\parallel(\Lambda_1 \otimes \mathbb{I})[\rho] - (\Lambda_2 \otimes \mathbb{I})[\rho]\parallel_1) \end{equation} Operationally, it quantifies the maximum probability of distinguishing the operation $\Lambda_1$ from $\Lambda_2$ in a single use. It has been shown in~\cite{arapinis2021quantum} that unitary maps, as well as channels $\epsilon$-close to unitary, can under certain additional conditions be considered as qPUFs. We restate the following theorem from~\cite{arapinis2021quantum}: \begin{theorem}[from~\cite{arapinis2021quantum}]\label{theorem:non-unitary} Let $\mathcal{E}(\rho)$ be a completely positive and trace-preserving (CPTP) map described as follows: \begin{equation}\label{eq:non-unitary-puf} \mathcal{E}(\rho) = (1-\epsilon)U \rho U^{\dagger} + \epsilon \Tilde{\mathcal{E}}(\rho) \end{equation} where $U$ is a unitary transformation, $\Tilde{\mathcal{E}}$ is an arbitrary (non-negligibly) contractive channel and $0 \leq \epsilon \leq 1$. Then $\mathcal{E}(\rho)$ is a ($\lambda,\delta_r,\delta_c$)-qPUF for any $\lambda$, $\delta_r$, and $\delta_c$ and with the same dimension of domain and range Hilbert space, if and only if $\epsilon = negl(\lambda)$. \end{theorem} This is because the properties of robustness and collision resistance can be satisfied by an almost unitary map as a subclass of all CPTP qPUFs. For instance, a device implementing a unitary $U$ followed by depolarising noise of strength $\epsilon$, i.e. $\mathcal{E}(\rho) = (1-\epsilon)U\rho U^{\dagger} + \epsilon\,\mathbb{I}/d$, has exactly the form of Equation~(\ref{eq:non-unitary-puf}), and by the theorem it remains a valid qPUF only if the noise level $\epsilon$ is negligible in $\lambda$. The uniqueness property, on the other hand, is more challenging, and \cite{arapinis2021quantum,kumar2021efficient} showed that one can achieve uniqueness by sampling the unitary from a Haar-random set of unitaries. Further, \cite{kumar2021efficient} showed numerically that one can achieve uniqueness by sampling the unitary from a unitary $t$-design. In this work, we show that this property can be achieved by sampling from a PRU set. Here we consider the qPUF construction to be a unitary matrix $U \in \mathbb{C}^{d \times d}$, where $d = d_{in} = d_{out}$. A crucial security feature of the qPUF device is its unforgeability. The unforgeability of qPUFs as a quantum primitive is captured by Definition~\ref{def:qunf}. It has also been shown in~\cite{arapinis2021quantum} that even though qPUFs cannot satisfy general existential unforgeability, which is a strong notion capturing the unpredictability of such hardware, all unitary qPUFs that satisfy the notion of unknownness can satisfy the notion of universal unforgeability.
This general possibility result is a consequence of the following theorem, proved against any QPT adversary: \begin{theorem}[restated from~\cite{arapinis2021quantum}]\label{th:sel-qCM-fid} For any unitary qPUF and any non-zero acceptance threshold $\delta$ on the fidelity, the success probability of any QPT adversary $\mathcal{A}$ in the universal unforgeability game is bounded as follows: \begin{equation} Pr[1\leftarrow G(\lambda, \mathcal{A})] \leq \frac{\tilde{d}+1}{d} \end{equation} where $d$ is the dimension of the domain Hilbert space, and $0\leq \tilde{d} \leq d-1$ is the dimension of the largest subspace of $\mathcal{H}^d$ that the adversary can span in the learning phase of Game~\ref{game:uni-unf}. \end{theorem} This possibility result has later been used in~\cite{doosti2020client} to prove the security of qPUF-based identification protocols. \section{Efficient Unforgeability with PRS}\label{sec:unf-prs} In this section, we investigate the problem of \textit{universal unforgeability} with efficiently producible pseudorandom quantum states. As specified in Game~\ref{game:uni-unf}, the challenge states need to be picked at random from the Haar measure by the challenger. This is an important condition for the unforgeability of unknown unitary transformations. Nevertheless, producing Haar-random states is a challenging and resource-intensive task. Hence, to take the first step towards the realisation of universally unforgeable schemes, we replace this condition with its computational equivalent, i.e. the notion of PRS introduced in the preliminaries. We first relax this condition by defining a variation of the universal unforgeability game, namely \emph{Efficient Universal Unforgeability}, where the challenger picks the challenge states from a pseudorandom family of quantum states. Then we formally prove that unknown unitaries satisfy this notion of unforgeability. Furthermore, we discuss how such pseudorandom quantum states can be efficiently generated using classical pseudorandom functions. We define \emph{Efficient Universal Unforgeability} as below: \begin{definition}[Efficient Quantum Universal Unforgeability]\label{def:qunf-efficient} Let Game $G_{eqUnf}$ be the same as Game~\ref{game:uni-unf}, except that in the challenge phase the challenge states are picked from a PRS family of states with generation algorithm $G(k)$ and key $k\in\mathcal{K}$, selected in the setup phase. A primitive provides efficient quantum universal unforgeability if the success probability of any QPT adversary $\mathcal{A}$ in winning $G_{eqUnf}$ is negligible in the security parameter $\lambda$, \begin{equation} Pr[1\leftarrow G_{eqUnf}(\lambda, \mathcal{A})] = negl(\lambda) \end{equation} \end{definition} Now, for simplicity in the proof, we also define the pseudorandomness property of a PRS with a game, as formalised in the following: \begin{game}\label{game:prs}[PRS distinguishability game] Let $\mathcal{H}$ be a Hilbert space and $\mathcal{K}$ the key space. The dimension of $\mathcal{H}$ and the size of $\mathcal{K}$ depend on the security parameter $\lambda$. Let $\{\ket{\phi_k}\in S(\mathcal{H})\}_{k\in\mathcal{K}}$ be a keyed family of quantum states with efficient generation algorithm $G(k) = \ket{\phi_k}$ on input $k$.
We define the following distinguishability game between an adversary $\mathcal{A}$ and a challenger $\mathcal{C}$: \begin{itemize} \item [] \textbf{Setup phase.} The challenger $\mathcal{C}$ selects $k \overset{\$}{\leftarrow} \mathcal{K}$ and $b \overset{\$}{\leftarrow} \{0,1\}$ at random. \item [] \textbf{Challenge phase.} \begin{itemize} \item If $b = 0$ (PRS world): $\mathcal{C}$ prepares $m$ copies of $\ket{\phi^0} = \ket{\phi_k}$ by running $G(k)$. \item If $b = 1$ (Random world): $\mathcal{C}$ prepares $m$ copies of a Haar-random state $\ket{\phi^1} = \ket{\psi}$. \item $\mathcal{C}$ sends $\ket{\phi^b}^{\otimes m}$ to $\mathcal{A}$. \end{itemize} \item [] \textbf{Guess phase.} $\mathcal{A}$ outputs a guess for $b$ and wins if the guess is correct. \end{itemize} \end{game} Now we establish our main result regarding the efficient unforgeability of unknown unitary primitives. \begin{theorem}\label{th:efficientuu-prs} Any unitary transformation $U$ selected from an unknown unitary family according to Definition~\ref{def:uu} satisfies efficient universal unforgeability against QPT adversaries. \end{theorem} \begin{proof} We prove this by contradiction in a game-based setting: assuming the pseudorandomness of the PRS family used in the efficient universal unforgeability game, we show that if there exists a QPT adversary who wins this game with non-negligible probability, then there also exists an adversary who can efficiently distinguish PRS states from Haar-random states, which contradicts the initial assumption. First, we need to specify the following games: \begin{itemize} \item Game 1: This is the universal unforgeability game as specified in Game~\ref{game:uni-unf}, with the only difference that the challenge state $\rho^* = \ket{\phi_{k^*}}\bra{\phi_{k^*}}$ is chosen from a PRS family. \item Game 2: This is the PRS distinguishability game as specified in Game~\ref{game:prs}. \item Game 3: This is a variation of Game~\ref{game:prs} where $\mathcal{C}$, in addition to the initial resources, also has access to a publicly known and implementable unitary $U$. In the challenge phase, $\mathcal{C}$ does the following: generates $m$ copies of $\ket{\phi^0} = \ket{\phi_k}$ using $G(k)$, or $m$ copies of a Haar-random state $\ket{\phi^1} = \ket{\psi}$, depending on $b$; then applies the public unitary $U$ to each copy and sends $(U\ket{\phi^b})^{\otimes m}$ to $\mathcal{A}$. The rest of the game is the same as Game 2. \item Game 4: This game is the same as Game 3, except that $\mathcal{C}$ publicly chooses $l$ and $l'$ such that $l + l' = m$ and sends $l$ copies of the generated state and $l'$ copies of the state after applying the unitary $U$, i.e. sends $\ket{\phi^b}^{\otimes l} \otimes (U\ket{\phi^b})^{\otimes l'}$ to $\mathcal{A}$. \item Game 5: This game is the same as Game 4, except that the public unitary has been replaced by an unknown unitary $\tilde{U}$ of the same dimension. Hence in this game, similar to Game~\ref{game:uni-unf}, we also assume a learning phase for $\mathcal{A}$ before the challenge phase. The learning phase is as follows: $\mathcal{A}$ issues $q=poly(\lambda)$ queries $\{\rho_i\}^q_{i=1}$ to $\mathcal{C}$; on each query, $\mathcal{C}$ generates $\rho^{out}_i = \tilde{U}\rho_i\tilde{U}^{\dagger}$ by applying the unitary to the query state and sends $\rho^{out}_i$ to $\mathcal{A}$.
The rest of the game is the same as Game 4, and at the end of the challenge phase $\mathcal{A}$ receives $\ket{\phi^b}^{\otimes l} \otimes (\tilde{U}\ket{\phi^b})^{\otimes l'}$. \end{itemize} \begin{figure}[h!] \includegraphics[scale=0.45]{prs-unf-proof.pdf} \centering \caption{Proof sketch of Theorem~\ref{th:efficientuu-prs} with the intermediate games.} \label{fig:prs-unf-proof} \end{figure} Figure~\ref{fig:prs-unf-proof} illustrates the sketch of the proof. We first show that Game 2, Game 3 and Game 4 are equivalent. We note that unitary transformations are distance-invariant and map the Haar distribution to itself; applying a unitary to the states therefore affects neither their distribution nor their distinguishability, so Games 2 and 3 are equivalent. Furthermore, in Game 4, since the unitary is public, $\mathcal{A}$ can either apply $U$ to the first $l$ copies $\ket{\phi^b}^{\otimes l}$ and end up with $m$ copies of $(U \ket{\phi^b})^{\otimes m}$, or alternatively apply $U^{\dagger}$ to the other $l'$ copies $(U\ket{\phi^b})^{\otimes l'}$ and get $m$ copies of $\ket{\phi^b}^{\otimes m}$, and hence reduce to Game 3 or Game 2 respectively. As a result, we have \begin{equation} \text{Game 2} \equiv \text{Game 3} \equiv \text{Game 4} \end{equation} Now we show that Game 4 implies Game 5, i.e. if an adversary wins the distinguishability game in Game 5 with probability $p$, she will also win Game 4 with the same probability. The proof is straightforward, as highlighted here. Let $\mathcal{A}$ be an adversary who wins Game 5, which means that after the learning phase, which yields a polynomial-size database of input-output pairs of the unknown unitary $\tilde{U}$, and after receiving $\ket{\phi^b}^{\otimes l} \otimes (\tilde{U}\ket{\phi^b})^{\otimes l'}$, they can guess $b$ with probability non-negligibly better than a random guess: \begin{equation} \underset{\ket{\phi^b}}{Pr}[b \leftarrow \mathcal{A}(\ket{\phi^b}^{\otimes l} \otimes (\tilde{U}\ket{\phi^b})^{\otimes l'})] = \frac{1}{2} + non\text{-}\negl(\lambda). \end{equation} Now consider an adversary $\mathcal{A}'$ who plays Game 4 with the same $l$ and $l'$ and has to guess $b$ upon receiving the state $\ket{\phi^b}^{\otimes l} \otimes (U\ket{\phi^b})^{\otimes l'}$, where $U$ is a public unitary. $\mathcal{A}'$ runs $\mathcal{A}$ as a subroutine: since $U$ is public, $\mathcal{A}'$ can run it locally and answer $\mathcal{A}$'s learning-phase queries itself. Then $\mathcal{A}'$ forwards the state $\ket{\phi^b}^{\otimes l} \otimes (U\ket{\phi^b})^{\otimes l'}$ to $\mathcal{A}$, and since $\mathcal{A}$ guesses $b$ with probability non-negligibly better than one half, so does $\mathcal{A}'$. As a result, we have shown that: \begin{equation} \text{Game 4} \Rightarrow \text{Game 5} \end{equation} Finally, we show that Game 5 implies Game 1. By contradiction, we assume there exists an adversary $\mathcal{A}$ who wins the unforgeability game with non-negligible probability. Let $\tilde{U}$ be the unknown unitary, let $\mathcal{A}$'s forgery state be $\ket{\omega}$, and let the challenge state of Game 1 be a PRS state $\ket{\phi_k}$.
We have: \begin{equation} \begin{split} Pr[1\leftarrow G_{eqUnf}(\lambda, \mathcal{A})] & = \underset{k}{Pr}[1 \leftarrow \mathcal{T}(\ket{\omega}, (\tilde{U}\ket{\phi_k})^{\otimes \kappa})] \\ & = \underset{k}{Pr}[F(\ket{\omega}, \tilde{U}\ket{\phi_k}) \geq non\text{-}\negl(\lambda)] \\ & = non\text{-}\negl(\lambda). \end{split} \end{equation} Now we construct an adversary $\mathcal{A}'$ playing an instance of Game 5 where $l = 1$ and $l' = m - 1$. In the learning phase, $\mathcal{A}'$ queries the unknown unitary $\tilde{U}$ with the same learning-phase states required by $\mathcal{A}$ and sends the query outputs $\{\rho^{out}_i\}^q_{i=1}$, together with the challenge state $\ket{\phi^b}$, to $\mathcal{A}$. Then $\mathcal{A}$ produces the forgery $\ket{\omega}$ as its guess for $\tilde{U}\ket{\phi^b}$. Now $\mathcal{A}'$ verifies $\ket{\omega}$ with the same test algorithm $\mathcal{T}$ with $\kappa = m - 1$, since $\mathcal{A}'$ has $m-1$ copies of $\tilde{U}\ket{\phi^b}$ to check against, and outputs the bit returned by $\mathcal{T}$ as its guess. The success probability of $\mathcal{A}'$ is as follows. If $b=0$, the state is a PRS and the contradiction assumption applies; hence $\mathcal{A}$'s forgery state will pass the test algorithm with high probability. If, on the other hand, $b=1$, the state has been picked from the Haar measure and, as a result of Theorem~\ref{th:uu-unforge}, the success probability of $\mathcal{A}$ winning the forgery game and producing a state that passes the test is negligible. Since $\mathcal{A}'$'s advantage in guessing $b$ in Game 5 equals the difference between its success probabilities in the two scenarios, we have: \begin{equation} \begin{split} & |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}'(\ket{\phi_k} \otimes (\tilde{U}\ket{\phi_k})^{\otimes m-1})=1] - \underset{\ket{\psi} \leftarrow \mu}{Pr}[\mathcal{A}'(\ket{\psi} \otimes (\tilde{U}\ket{\psi})^{\otimes m-1})=1]| \\ & = |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}(\ket{\phi_k})=1] - \underset{\ket{\psi} \leftarrow \mu}{Pr}[\mathcal{A}(\ket{\psi})=1]| \\ & = non\text{-}\negl(\lambda) - negl(\lambda) = non\text{-}\negl(\lambda) \end{split} \end{equation} Here, as a specific example, we can take the GSWAP as the equality test and show how this check can be performed efficiently, exhibiting the gap and hence the implication between the two latter games. Let us denote the adversary's purified forgery state by $\ket{\omega_b}$. According to equation~(\ref{eq:gswap}), the probability of the GSWAP accepting this state, given $m-1$ copies of the reference state $\tilde{U}\ket{\phi^b}$, has the following relation with the fidelity of the forgery state: \begin{equation}\label{eq:gswap-attack-test} \text{Pr}[\text{GSWAP accept}] = \frac{1}{m} + \frac{m-1}{m} F(\tilde{U}\ket{\phi^b}, \ket{\omega_b})^2 \end{equation} Assuming that $\mathcal{A}$ wins the unforgeability game for a PRS state with non-negligible probability implies that this fidelity is a non-negligible value in the security parameter, hence $F(\tilde{U}\ket{\phi^0}, \ket{\omega_0}) = \delta = non\text{-}\negl(\lambda)$. On the other hand, for a Haar-random state this fidelity is always a negligible value, and we have that $F(\tilde{U}\ket{\phi^1}, \ket{\omega_1}) = negl(\lambda)$.
As a result, the difference between $\mathcal{A}'$'s success probability in the two cases is as follows: \begin{equation} \begin{split} & |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}'(\ket{\phi_k} \otimes (\tilde{U}\ket{\phi_k})^{\otimes m-1})=1] - \underset{\ket{\psi} \leftarrow \mu}{Pr}[\mathcal{A}'(\ket{\psi} \otimes (\tilde{U}\ket{\psi})^{\otimes m-1})=1]| \\ & = \Big(\frac{1}{m} + \frac{m-1}{m}F^2(\tilde{U}\ket{\phi^0}, \ket{\omega_0})\Big) - \Big(\frac{1}{m} + \frac{m-1}{m}F^2(\tilde{U}\ket{\phi^1}, \ket{\omega_1})\Big) \\ & = \frac{m-1}{m}\big(\delta^2 - negl(\lambda)\big) \approx \frac{m-1}{m}\delta^2 = non\text{-}\negl(\lambda) \end{split} \end{equation} As a result, we have shown that there exists a non-negligible gap, and hence $\mathcal{A}'$ can also win Game 5. In conclusion, we have shown the following relation: \begin{equation} \text{Game 2} \equiv \text{Game 3} \equiv \text{Game 4} \Rightarrow \text{Game 5} \Rightarrow \text{Game 1} \end{equation} This means that an adversary winning the unforgeability game, with the challenge picked from a PRS family, can also distinguish the PRS states from Haar-random states, which is a contradiction and concludes the proof. \end{proof} We have formally shown that PRS states are enough to achieve quantum universal unforgeability. The next question is how such states can be constructed. Ji, Liu, and Song~\cite{ji2018pseudorandom} propose several constructions for generating a PRS family using classical quantum-secure PRFs. Hence they show that PRS can be constructed under the assumption that quantum-secure one-way functions exist. A similar notion called Asymptotically Random State (ARS) has also been introduced in~\cite{brakerski2019pseudo}. In both works, first, oracle access to a classical random function is used to efficiently construct a PRS that is indistinguishable from Haar-random states even for exponential-time adversaries. Then, relying on the existence of quantum-secure one-way functions, the random function is replaced with a post-quantum secure PRF to achieve security against polynomial-time adversaries. With this approach, one can construct computationally secure $n$-qubit PRS, which is what the unforgeability property requires. Nevertheless, as discussed in~\cite{brakerski2020scalable}, these methods are not scalable, and an $n$-qubit PRS generator cannot necessarily be used to produce a pseudorandom state on $k$ qubits for $k < n$. For these reasons, the authors of~\cite{brakerski2020scalable} introduce a scalable construction for PRS which, unlike prior works, relies on randomising the amplitudes of the states instead of the phases, using Gaussian sampling methods to efficiently achieve PRS. \section{From Pseudorandom Unitaries to Unknown Unitaries}\label{sec:unf-pru-tdesign} We prove that a family of unitaries satisfying the computational assumption of PRU is also a family of unknown unitary transformations. As a result of this implication, efficient constructions such as PRUs or $t$-designs can also satisfy the notion of universal unforgeability. Moreover, this result establishes, for the first time, a link between a computational assumption (PRU) and a hardware assumption (unknownness). \begin{theorem}\label{th:pru-uu} A family of PRU, $\mathcal{U} = \{U_k\}_{k \in \mathcal{K}}$, is also a family of unknown unitaries (UU) with respect to Definition~\ref{def:uu}. \end{theorem} \begin{proof} We prove this by contradiction.
Let $\mathcal{U}$ be a family of PRU that is not a family of UU. This means that there is a quantum polynomial-time (QPT) adversary $\mathcal{A}$ who, on a state $\ket{\psi}$, can estimate the output of a randomly picked $U \leftarrow \mathcal{U}$ non-negligibly better than the output of a unitary $U_{\mu} \leftarrow \mu$ picked from the Haar-random unitaries $\mu$ over a $d$-dimensional Hilbert space. Thus for $\mathcal{A}$ the following holds: \begin{equation} \begin{split} & |\underset{U \leftarrow \mathcal{U}}{Pr}[F(\mathcal{A}(\ket{\psi}),U\ket{\psi}) \geq non\text{-}\negl(\lambda)] - \underset{U_{\mu} \leftarrow \mu}{Pr}[F(\mathcal{A}(\ket{\psi}),U_{\mu}\ket{\psi}) \geq non\text{-}\negl(\lambda)]| \\ & = non\text{-}\negl(\lambda). \end{split} \end{equation} Let $\mathcal{A}'$ be a QPT adversary who aims to break the pseudorandomness property of $\mathcal{U}$ using $\mathcal{A}$, and works as follows:\\ \textit{$\mathcal{A}'$ picks $\ket\psi$ as one of her chosen inputs in the learning phase of the pseudorandomness game. Then $\mathcal{A}'$ also runs $\mathcal{A}$ internally on $\ket{\psi}$.}\\ From the previous equation we know that $\mathcal{A}$ can estimate the output $U \ket{\psi}$ non-negligibly better than $\mathrm{U}_{\mu}\ket{\psi}$, where $\mathrm{U}_{\mu}$ is a Haar-random unitary. Also, by definition, the probability that any QPT algorithm estimates the output of a Haar-random unitary is negligible, as the output is a Haar-random state in the exponentially large Hilbert space $\mathcal{H}^d$~\cite{dankert2006c,nielsen2010quantum}. Thus the equation implies that: \begin{equation} |\underset{U \leftarrow \mathcal{U}}{Pr}[F(\mathcal{A}(\ket{\psi}),U\ket{\psi}) \geq non\text{-}\negl(\lambda)]| = non\text{-}\negl(\lambda). \end{equation} This means that $\mathcal{A}$ can estimate the output with non-negligible fidelity whenever $U$ has been picked from the family. Now $\mathcal{A}'$ runs a quantum equality test on the $U \ket{\psi}$ obtained in the learning phase and $\mathcal{A}(\ket{\psi})$. In the case where $U$ is picked from the PRU family, the estimated output and the real output have non-negligible fidelity and the test returns equality with non-negligible probability. Otherwise, the test shows that they are not equal and $\mathcal{A}'$ can conclude that the unitary has been picked from the Haar-random unitaries. Thus for $\mathcal{A}'$ we have: \begin{equation} \underset{U \leftarrow \mathcal{U}}{Pr}[\mathcal{A}'^{U}(1^{\lambda})=1] - \underset{U_{\mu} \leftarrow \mu}{Pr}[\mathcal{A}'^{U_{\mu}}(1^{\lambda})=1]=non\text{-}\negl(\lambda) \end{equation} which contradicts the pseudorandomness of $\mathcal{U}$ and concludes the proof. \end{proof} We have shown that PRU implies unknown unitaries, and combined with the results of~\cite{arapinis2021quantum} on the unforgeability of UUs, we conclude that a PRU is a set of universally unforgeable unitaries. Now we show that a PRU can also be considered as a PUF family. In order to do that, we need to show that the PUF requirements discussed in Section~\ref{sec:prelim-qpuf} are satisfied. Since $\delta_r$-Robustness and $\delta_c$-Collision resistance are trivially satisfied by unitarity, we only need to argue the $\delta_u$-Uniqueness requirement. \begin{theorem}\label{th:pru-unique} Let $\mathcal{U} = \{U_k\}_{k \in \mathcal{K}} \subseteq U(d)$ be a family of PRU and universally unforgeable unitary matrices.
Then there exists a $\delta_u = non\text{-}\negl(\lambda) = non\text{-}\negl(polylog(d))$ such that $\mathcal{U}$ satisfies $\delta_u$-Uniqueness. \end{theorem} \begin{proof} We prove this by contraposition, assuming that no non-negligible $\delta_u$ satisfying $\delta_u$-Uniqueness exists. This means that any two unitaries $U_i$ and $U_j$ picked uniformly at random from $\mathcal{U}$ are $\zeta$-close in the diamond norm with high probability; otherwise, if there existed a minimum distance $\zeta_{min} = non\text{-}\negl(\lambda)$ in the diamond norm between any two unitaries, we would have already shown that the $\delta_u$ exists. Hence we assume that we have the following condition: \begin{equation}\label{eq:proof-contra-diamond} Pr[\parallel (U_i - U_j)_{i\neq j}\parallel_\diamond \leq \zeta] \geq 1 - \epsilon(\lambda) \end{equation} where both $\zeta$ and $\epsilon(\lambda)$ are negligible functions in the security parameter. Now assume an adversary $\mathcal{A}$ wants to distinguish between $\mathcal{U}$ and the set of Haar-random unitaries. By assumption, all the unitaries in $\mathcal{U}$ are universally unforgeable. We let $\mathcal{A}$ play the PRU game (similar to Game~\ref{game:prs}) while running the universal unforgeability game as a distinguishing subroutine. Let $\mathcal{C}$ be the honest party picking a bit $b \in \{0,1\}$ at random, where if $b=0$ a unitary $U$ is picked at random from $\mathcal{U}$ and we are in the PRU world, and otherwise $U$ is picked from $\mu$, which denotes the set of Haar-random unitary matrices. Then $\mathcal{A}$ gets polynomial oracle access to $U$ and, after the interaction, needs to guess $b$. Now, since there exists an efficient public generation algorithm $Q$ for the PRU set, we let the adversary locally sample another unitary $U'$ from $Q$ uniformly at random. According to the contraposition assumption given in Equation~(\ref{eq:proof-contra-diamond}), if $b=0$, these two unitaries are with high probability $\zeta$-close in the diamond norm, i.e. $\parallel (U - U')\parallel_\diamond \leq \zeta$. Given this promise, the adversary performs the following strategy: $\mathcal{A}$ locally plays the universal unforgeability game on $U$, by picking a state $\ket{\psi}$ uniformly at random from the Haar measure and querying it to $\mathcal{C}$ as a part of the polynomial oracle interaction with $U$. $\mathcal{A}$ will receive $U\ket{\psi}$ and can ask for multiple copies of it, so long as the total number of queries to the oracle remains polynomial. We also rely on the efficient computation property of the PRU, which means that $\mathcal{A}$ can locally compute $U'\ket{\psi}$ and obtain multiple copies of it. Now $\mathcal{A}$'s strategy to win the unforgeability game is to output $U'\ket{\psi}$ as the forgery for $\ket{\psi}$. Again, in the case of $b=0$, since the two unitaries are with high probability negligibly close in the diamond norm, we have the following: \begin{equation} Pr[\parallel (U - U')\parallel_\diamond \leq \zeta] \geq 1 - \epsilon \Rightarrow Pr[F(U\ket{\psi}, U'\ket{\psi}) \geq 1 - \zeta] \geq 1 - \epsilon \end{equation} This holds since the diamond norm is defined as a maximum over all density matrices; hence, if the two unitaries are very close in the diamond norm, their outputs on a random state are also very close on average.
Thus, the adversary can run a local efficient verification test (for instance a GSWAP test) between $U'\ket{\psi}$ and $U\ket{\psi}$ and use the output of the test as a distinguisher between the pseudorandom and Haar-random worlds. If $b=0$, we have: \begin{equation} Pr[F(U\ket{\psi}, U'\ket{\psi}) \geq 1 - \zeta] \geq 1 - \epsilon \Rightarrow Pr[1\leftarrow G(\lambda, \mathcal{A})] = non\text{-}\negl(\lambda) \end{equation} Hence $\mathcal{A}$ will win the game with high probability. However, in the case of $b=1$, where $U$ is a Haar-random unitary, we can use Lemma 16 in~\cite{kretschmer2021quantum}, which states that for a fixed state $\ket{\phi} \in \mathcal{H}^d$, a Haar-random state $\ket{\psi} \leftarrow \mu$, and any $\epsilon > 0$ we have: \begin{equation} \underset{\ket{\psi} \leftarrow \mu}{Pr}[|\mbraket{\phi}{\psi}|^2 \geq \epsilon] \leq e^{-\epsilon d} \end{equation} Taking $U'\ket{\psi} = \ket{\phi}$ to be the fixed state, we note that since $U$ is a Haar-random unitary, $U\ket{\psi}$ is also a Haar-random state, and hence the probability that the fidelity $F(U\ket{\psi}, U'\ket{\psi})$ is a non-negligible value (with respect to $polylog(d)$) such as $1 - \zeta$ is exponentially low. Hence, in the case $b=1$, the probability that the adversary's state passes the verification is exponentially low. Using this strategy, there will therefore always be a distinguisher that can distinguish between $\mathcal{U}$ and the Haar-random unitaries, i.e.: \begin{equation} \underset{U \leftarrow \mathcal{U}}{Pr}[\mathcal{A}'^{U}(1^{\lambda})=1] - \underset{U_{\mu} \leftarrow \mu}{Pr}[\mathcal{A}'^{U_{\mu}}(1^{\lambda})=1]=non\text{-}\negl(\lambda) \end{equation} But this is in contrast with the assumption that $\mathcal{U}$ is a PRU. Hence we have reached a contradiction and the proof is complete. \end{proof} \section{Pseudorandom Unitaries and States from Hardware Assumptions}\label{sec:composable} As discussed earlier, pseudorandom quantum states can be constructed under the assumption of qPRFs or quantum-secure one-way functions. Given the relationship that we have explored in the previous section between the unforgeability of qPUFs and quantum pseudorandomness, here we ask whether it is possible to construct pseudorandom quantum states under a different set of assumptions. In this section, we discuss how one can achieve PRU and PRS under hardware assumptions on a family of unitary transformations. These hardware assumptions are generally discussed in the context of quantum PUFs; nevertheless, our results apply in general to any set of unitaries with the given properties, as long as they can be assumed at the hardware level. Let $\mathcal{U} = \{U_i\}^{\mathcal{K}}_{i=1} \subseteq U(d)$ be a family of unitaries with a certain specific assumption given by their physical nature. We want to use this family as a PRU family or as a generator for PRS. As shown in~\cite{ji2018pseudorandom}, if $\mathcal{U}$ is a PRU then it is also a generator for PRS, i.e. $G(k) = U_k\ket{0} = \ket{\phi_k}$. To this end, we investigate the properties of a qPUF family that can be used to achieve pseudorandomness. In the last section, we have shown that PRU implies the unknown unitary assumption, or in other words, single-shot unknownness. Now we explore the relation between PRU and another notion of unknownness, called \emph{practical unknownness}, introduced by Kumar et al.~\cite{kumar2021efficient}.
This notion is better suited to unitary $t$-design constructions and is defined as follows: \begin{definition}[$(\epsilon,t,d)$-Practical unknownness~\cite{kumar2021efficient}]\label{def:pu} We say a unitary transformation $U$ from a set $\mathcal{U} \subseteq U(d)$ is $(\epsilon,t,d)$-practically unknown if, provided a bounded number $t \leq poly(\log_2 d)$ of queries $U\rho U^{\dagger}$ for any $\rho \in \mathcal{S}(\mathcal{H}^d)$, the probability that any $poly(\log_2 d)$-time adversary can perfectly distinguish $U$ from a Haar-distributed unitary is upper bounded by $\frac{1}{2}(1 + 0.5 \epsilon)$. Here $0 < \epsilon < 1$ and $t$ are functions of $\log_2 d$, and $\lim_{\log_2(d) \rightarrow \infty}\epsilon = 0$. \end{definition} For the sake of our proof, we need a variation of this definition that holds for any polynomial number of queries in the security parameter: \begin{definition}[$(\epsilon,d)$-Practical unknownness]\label{def:pu-poly} We say a unitary transformation $U$ from a set $\mathcal{U} \subseteq U(d)$ is $(\epsilon,d)$-practically unknown if it is $(\epsilon,t,d)$-practically unknown for any $t=poly(\lambda) = poly(\log d)$. \end{definition} Now we show that the assumption of $(\epsilon,d)$-practical unknownness implies PRU. \begin{theorem}\label{th:practicaluu-pru} A family of $(\epsilon,d)$-practically unknown unitaries with $\epsilon = negl(\lambda)$ is a PRU family. \end{theorem} \begin{proof} We prove this by contraposition. Let $\mathcal{U} = \{U_k\}_{k \in \mathcal{K}} \subseteq U(d)$ be an $(\epsilon,d)$-practically unknown family that is not a PRU. This means that there exists a QPT adversary $\mathcal{A}$ for which we have the following after some $q=poly(\lambda) = poly(\log(d))$ queries to the unitary oracle: \begin{equation} |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}^{U_k}(1^{\lambda})=1] - \underset{U \leftarrow \mu}{Pr}[\mathcal{A}^U(1^{\lambda})=1]| = \delta = non\text{-}\negl(\lambda). \end{equation} Equivalently, we can say that if a unitary is randomly picked from either the set $\mathcal{U}$ or the set of Haar-distributed unitaries according to a random bit $b$, the adversary can guess the bit $b$ with probability non-negligibly greater than $\frac{1}{2}$. Hence, if such an adversary exists, there also exists an adversary $\mathcal{A}'$ that, querying the same $q$ states, can distinguish $U_k \in \mathcal{U}$ from a Haar-random unitary with the following probability: \begin{equation} Pr[\text{distinguish } U_k] \geq \frac{1}{2} + \frac{\delta}{2} \end{equation} On the other hand, if $\mathcal{U}$ is $(\epsilon,d)$-practically unknown, this probability is upper bounded by $\frac{1}{2}(1 + 0.5 \epsilon) = \frac{1}{2} + \frac{\epsilon}{4}$, where $\frac{\epsilon}{4}$ is a negligible function whereas $\frac{\delta}{2}$ is non-negligible. Hence we reach a contradiction and the proof is complete. \end{proof} We have shown that, given the hardware assumption of practical unknownness over a set of unitary transformations such as unitary qPUFs, one can obtain a PRU and, as a result, generate PRS by applying random elements of the set to a fixed computational-basis state. Now we want to look at another property of a family of qPUFs and see whether pseudorandomness can be achieved under other related assumptions on such families. One of the main requirements on a qPUF family is the uniqueness property, which ensures that any two qPUFs in the family are sufficiently distinguishable in the diamond norm.
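As a small numerical illustration (a sketch, not part of the formal argument), the diamond distance between two unitary channels admits the closed form $\parallel U_i - U_j \parallel_{\diamond} = 2\sqrt{1 - \delta(U_i^{\dagger}U_j)^2}$, where $\delta$ is the distance from the origin to the convex hull of the eigenvalues of $U_i^{\dagger}U_j$; this is the same expression used in the proof of Theorem~\ref{th:max-unique-pru} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Sample a Haar-random unitary via QR of a Ginibre matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def diamond_distance(u, v):
    """||U - V||_diamond = 2*sqrt(1 - delta^2) for unitary channels,
    with delta the distance from 0 to conv(eig(U^dag V))."""
    phases = np.sort(np.angle(np.linalg.eigvals(u.conj().T @ v)))
    gaps = np.diff(np.append(phases, phases[0] + 2 * np.pi))
    widest = gaps.max()
    if widest <= np.pi:   # origin lies inside the convex hull
        delta = 0.0
    else:                 # eigenvalues fit in an arc smaller than pi
        delta = np.cos((2 * np.pi - widest) / 2)
    return 2 * np.sqrt(1 - delta**2)

d = 16
u, v = haar_unitary(d), haar_unitary(d)
print(diamond_distance(u, v))   # typically 2 for a Haar-random pair
print(diamond_distance(u, u))   # 0 for identical channels
\end{verbatim}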
The uniqueness property is formally defined in the preliminaries (Section~\ref{sec:prelim-qpuf}, Equation~(\ref{eq:unique})). In what follows we show that a family of unknown and maximally distinguishable unitary matrices, such as unitary qPUFs, also forms a family of PRU and is a generator for PRS. \begin{theorem}\label{th:max-unique-pru} Let $\mathcal{U}_{\mathcal{K}} = \{U_k\}^{\mathcal{K}}_{k=1} \subseteq U(d)$ be a family of unitary transformations selected at random from a distribution $\chi_{\mathcal{U}}$ such that they satisfy almost maximal uniqueness, i.e. for any randomly picked pair of unitary matrices from $\mathcal{U}_{\mathcal{K}}$ we have $\parallel (U_i - U_j)_{i\neq j}\parallel_\diamond = 2 - \epsilon$ where $\epsilon = negl(\lambda)$. Then, for sufficiently large $\mathcal{K}$ and $d$, the family $\mathcal{U}_{\mathcal{K}}$ is also a PRU. \end{theorem} \begin{proof} We first show that if the maximal uniqueness is on average satisfied for any pair of unitary matrices of $\mathcal{U}_{\mathcal{K}}$, then the distribution $\chi_{\mathcal{U}}$ converges to the Haar measure in the limit of large $d$. The first part of our proof is in the spirit of a proof given in~\cite{kumar2021efficient} for the uniqueness of Haar-random unitaries; we prove the other direction for a specific degree of uniqueness, namely $2 - \epsilon$, where $2$ is the maximum of the diamond norm. We have, \begin{equation} \parallel (U_i - U_j)_{i\neq j}\parallel_\diamond = 2 - \epsilon = 2\sqrt{1 - \delta(U_i^{\dagger}U_j)^2} \end{equation} where $\delta(M) = \underset{\ket{\phi}}{min}|\bra{\phi}M\ket{\phi}|$ is the minimum absolute value over the numerical range of the operator $M$. From the above equation we have: \begin{equation} \delta(U_i^{\dagger}U_j)^2 = \epsilon - \frac{\epsilon^2}{4} \approx 0 \end{equation} Since the diamond norm is unitarily invariant, we can multiply all the unitaries of the family by a fixed unitary matrix, which results in a set that includes the identity matrix $\mathcal{I}$; hence the above equation can be rewritten as: \begin{equation} \delta(U'_k)^2 = \epsilon - \frac{\epsilon^2}{4} \end{equation} where the set of unitary matrices $U'$ is equivalent to the initial set up to a unitary transformation. Now let $\{e^{i\theta_1},\dots,e^{i\theta_d}\}$ be the eigenvalues of $U'_k$. The eigenvalues of a unitary matrix lie on the unit circle $\mathbb{S}^1 \subset \mathbb{C}$. As shown in Lemma 1.1 of~\cite{kumar2021efficient}, the following relation exists between the distribution of the eigenvalues of a general unitary matrix in an arc of size $\theta$ and the function $\delta(U)$: \begin{equation} \delta(U'_k)^2 = \frac{1}{2} + \frac{1}{2} \cos{\theta} \end{equation} where $\theta = \theta_j - \theta_k$ for pairs of eigenvalues $\{e^{i\theta_j},e^{i\theta_k}\}$. From the above equation we have: \begin{equation} \theta = \theta_j - \theta_k = \arccos{(-1 + 2\epsilon - \frac{\epsilon^2}{2})} \approx \pi - 2\sqrt{\epsilon} + \dots \end{equation} Now we can use Theorem~\ref{th:wieand}. Let $N_{\theta}$ be a random variable that represents the number of eigenvalues in an arc of size $\theta$. The expectation value of this random variable for the given distribution, where $\theta = \pi - \epsilon'$ and $\epsilon' = negl(\lambda)$, is \begin{equation} \mathbb{E}_d[N_{\theta}] = \frac{d\times\theta}{2\pi} = \frac{d}{2} - \frac{\epsilon'd}{2\pi} \end{equation} which is close to half of the total number of eigenvalues, since the second term is always smaller than 1.
This means that, in the limit of large $d$, every diameter of the unit circle divides the circle into two arcs, each of which on average contains half of the eigenvalues. The variance of the random variable $N_{\theta}$ will be: \begin{equation} Var(N_{\theta}) = \frac{1}{\pi^2}(\log(d) + 1 + \gamma + \log|2\sin(\frac{\pi - \epsilon'}{2})|) + o(1) \approx \frac{\log(d)}{\pi^2} + c' + o(1) \end{equation} where $\gamma \approx 0.577$ is the Euler--Mascheroni constant and $c' < 1$. Next we calculate the probability that, for our given distribution, more than half of the eigenvalues lie in one half of the circle, denoted by an arc of size $\pi - \epsilon'$. Using the Chernoff bound we have: \begin{equation} Pr[|N_{\pi - \epsilon'} - \mathbb{E}_d[N_{\pi - \epsilon'}]| > x\mathbb{E}_d[N_{\pi - \epsilon'}]] \leq e^{-\frac{x^2}{2+x}\mathbb{E}_d[N_{\pi - \epsilon'}]} \end{equation} Here we want $x\mathbb{E}_d[N_{\pi - \epsilon'}]$ to be equal to $\frac{d}{2}$, so we have $x = \frac{d/2}{d/2 - d\epsilon'/2\pi} = \frac{1}{1 - \epsilon'/\pi}$, and since $x$ is a constant close to $1$, the above inequality can be used. Substituting this into the above equation we get: \begin{equation} Pr[|N_{\pi - \epsilon'} - \mathbb{E}_d[N_{\pi - \epsilon'}]| > \frac{d}{2}] \leq e^{-\frac{(\frac{1}{1 - \epsilon'/\pi})^2}{2+\frac{1}{1 - \epsilon'/\pi}}\times(d/2 - \epsilon'd/2\pi)} \approx e^{-d/6} \end{equation} since $\epsilon'$ is negligible. This shows that, with very high probability, each half of the unit circle contains half of the eigenvalues of a random matrix from our specified distribution. We conclude that the eigenvalues of a random unitary from the distribution $\chi_{\mathcal{U}}$ are uniformly distributed on the unit circle. Let us denote this uniform distribution on $\mathbb{S}^1$ by $\nu$. In order to compare the distribution $\chi_{\mathcal{U}}$ with the Haar measure, we use the empirical spectral measure introduced in Appendix~\ref{ap:haar}. We denote the empirical spectral measure of $\chi_{\mathcal{U}}$ by $\tilde{\mu}_{\chi}$ and that of the Haar measure by $\tilde{\mu}_{H}$. Since we have shown that the eigenvalues of matrices from $\chi_{\mathcal{U}}$ are distributed uniformly on $\mathbb{S}^1$, it is easy to see that $\mathbb{E}(\tilde{\mu}_{\chi}) = \nu$, and in the limit of large $d$ we have the convergence in probability $\tilde{\mu}_{\chi} \overset{d\rightarrow \infty}{\longrightarrow} \nu$. Now we use Theorem~\ref{th:diaconis-shah} (Appendix~\ref{ap:haar}), which implies the convergence of the empirical spectral measure of the set of unitaries picked from the Haar measure to $\nu$ in the limit of large $d$. Having these two convergences and the properties of the limit, we can conclude that the empirical spectral measure of $\chi_{\mathcal{U}}$ converges to that of the Haar measure. Then we look at the Kolmogorov distance of the eigenvalue distributions. We rely on the result given in~\cite{meckes2019sharp}, which shows that the Kolmogorov distance between the distributions of eigenvalues of random unitary matrices is given by $d_K(\mu, \nu) = \underset{0\leq \theta < 2\pi}{sup} |\frac{N_{\theta}}{d} - \frac{\theta}{2\pi}|$, and specifically for the Haar measure it is bounded by \begin{equation} d_K(\mu_{H}, \nu) \leq c \frac{\log(d)}{d} \end{equation} where $c > 0$ is a universal constant.
Given that, for the specific value of $\theta$ of the distribution $\chi_{\mathcal{U}}$, the Kolmogorov distance $d_K(\mu_{\chi}, \nu)$ is of order $\frac{1}{d}$, which is negligible, the triangle inequality for the Kolmogorov distance yields \begin{equation} \begin{split} d_K(\mu_{H}, \mu_{\chi}) & \leq d_K(\mu_{H}, \nu) + d_K(\nu, \mu_{\chi})\\ & \leq c \frac{\log(d)}{d} + negl(\lambda)\\ & \leq negl(\lambda) \end{split} \end{equation} Thus the distribution of the eigenvalues of the random matrices of $\chi_{\mathcal{U}}$ is negligibly close to that of the Haar measure. Also, for any randomly picked matrix from each of these distributions, the eigenvalues are fixed. As a result, the convergence between the eigenvalue distributions implies that, in the limit of large $d$, $\chi_{\mathcal{U}}$ converges to the Haar measure on the unitary group. Finally, we show that a polynomial-time quantum adversary, given polynomially many queries to each unknown unitary $U_k$, cannot distinguish any member of this family from a Haar-random unitary. This is straightforward since the two distributions are asymptotically close. Thus we have: \begin{equation} |\underset{k \leftarrow \mathcal{K}}{Pr}[\mathcal{A}^{U_k}(1^{\lambda})=1] - \underset{U \leftarrow \mu}{Pr}[\mathcal{A}^U(1^{\lambda})=1]| = negl(\lambda). \end{equation} Hence we have shown that the set $\mathcal{U}_{\mathcal{K}}$ is a PRU. \end{proof} \section{Efficient Quantum Identification Protocols Using Quantum Pseudorandomness}\label{sec:efficient-qpufid} We now discuss the application of some of our previously established results in order to achieve an efficient quantum identification protocol. \emph{Identification} (also called entity authentication) is a method for one party, called the \emph{prover}, to prove its identity to another party, called the \emph{verifier}. In the quantum setting, the verifier, the prover, or both have some quantum capabilities, and the properties of quantum mechanics are used to enhance the security of such protocols against powerful quantum adversaries. Here we focus on two quantum identification protocols proposed in~\cite{doosti2020client}. These identification protocols are based on quantum PUFs and use their unforgeability property to achieve exponential security against a QPT adversary (polynomial-time in the learning phase, and unbounded during the quantum communication) in a polynomial number of rounds. Even though these protocols are resource-efficient in many aspects, one of the main practical challenges in implementing them is the fact that, in order to use the unforgeability property of quantum PUFs, the challenge states need to be sampled at random from the Haar measure. Relying on Theorem~\ref{th:efficientuu-prs}, we show that these protocols can still achieve exponential security using PRS. This brings us one step closer to practical implementations of quantum identification protocols with exponential security against powerful quantum adversaries, and can lead to promising solutions to the problem of untrusted manufacturers. Furthermore, using Theorem~\ref{th:pru-uu}, we show that PRU can also be used as an alternative to hardware assumptions in order to run these identification protocols. First, we briefly describe the two protocols. The full description of both protocols can be found in Appendix~\ref{ap:qpufid-protocols}.
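As an illustration of how such challenge states could be prepared in practice, the following is a minimal sketch of the phase-state PRS construction of~\cite{ji2018pseudorandom}, $\ket{\phi_k} = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1} \omega_N^{PRF_k(x)}\ket{x}$ with $\omega_N = e^{2\pi i/N}$, written as a classical state-vector computation. The keyed hash below is only an illustrative stand-in for a quantum-secure PRF, and the brute-force amplitude table is exponential in the number of qubits; neither choice is a security or efficiency claim.
\begin{verbatim}
import hashlib
import numpy as np

def prf(key: bytes, x: int, modulus: int) -> int:
    """Stand-in keyed function; a real construction requires a
    quantum-secure PRF (this hash is only for illustration)."""
    digest = hashlib.sha256(key + x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % modulus

def phase_prs_state(key: bytes, n_qubits: int) -> np.ndarray:
    """|phi_k> = N^(-1/2) sum_x omega_N^{PRF_k(x)} |x>, N = 2^n."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    amps = np.array([omega ** prf(key, x, N) for x in range(N)])
    return amps / np.sqrt(N)

state = phase_prs_state(b"challenge-key", n_qubits=6)
print(np.linalg.norm(state))   # 1.0: a normalised pure state
\end{verbatim}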
\subsection{Identification protocol with high-resource verifier}
In this qPUF-based protocol, the verifier uses a database of challenge-response pairs of the qPUF in order to identify a party who has access to the qPUF device. Since the verifier needs to run a quantum verification algorithm to check the response state received from the prover, this protocol is referred to as \emph{high-resource verifier}. The protocol has three phases: \emph{Setup phase}, \emph{Identification phase} and \emph{Verification phase}. In the \emph{Setup phase}, the verifier, who has physical access to the qPUF device, samples some quantum challenges at random from the Haar measure, records the response states of the qPUF in a quantum database, and then publicly sends the qPUF to the prover. At this stage, polynomial access to the qPUF device is assumed for the adversary, i.e. the quantum adversary can query the qPUF with a polynomial number of arbitrary quantum states in an attempt to learn the behaviour of the underlying unitary transformation. In the \emph{Identification phase}, the verifier picks one of the challenges in the database at random and sends it to the prover over a public quantum channel, while an adversary has full control over the channel. Then the prover, who holds the qPUF, obtains the correct response to the challenge state and sends it back through the same public quantum channel. Finally, in the \emph{Verification phase}, the verifier needs to verify the received response state to confirm the identity of the other party. To this end, the verifier runs a quantum verification or test algorithm on the received response and the $M$ copies of the correct response that are stored in the database. In~\cite{doosti2020client} the protocol has been proposed and analysed with both the SWAP and GSWAP tests as the verification algorithm. The following theorem states the security, or soundness, of the above protocol with both tests:
\begin{theorem}[\textbf{Th. 2 and 4 in~\cite{doosti2020client}}]\label{th:hr-sound}
Let qPUF be a selectively unforgeable\footnote{Universally unforgeable in our terminology} unitary PUF over $\mathcal{H}^d$. The success probability of the adversary to pass the {\normalfont SWAP}-test or {\normalfont GSWAP}-test verification of the high-resource verifier protocol is at most $\epsilon$, given that there are $N$ different CRPs, each with $M$ copies. The $\epsilon$ is bounded as follows for each verification:
\begin{equation}
\text{Pr}[\text{Ver accept}_{\mathcal{A}}] \leqslant \epsilon \quad \quad \epsilon_{\text{SWAP}} \approx \mathcal{O}(\frac{1}{2^{NM}}) \quad \quad \epsilon_{\text{GSWAP}} \approx \mathcal{O}\big(\frac{1}{(M+1)^{N}}\big)
\end{equation}
\end{theorem}
We now introduce a computationally efficient variation of this protocol, which we denote the \emph{Efficient hr-verifier identification protocol}, by replacing the qPUF with any universally unforgeable pseudorandom unitary and the Haar-random challenges with pseudorandom quantum states, as follows:
\begin{enumerate}
\item \emph{Setup phase}:
\begin{enumerate}
\item Verifier has access to a PRU family $\mathcal{U} = \{U_i\}^\mathcal{K}_{i=1} \subseteq U(d)$.
\item Verifier samples at random $k \overset{\$}{\leftarrow} \mathcal{K}$ and selects $U_k$.
\item Verifier also has access to a family of PRS $\{\ket{\phi_{k'}}\in S(\mathcal{H}^d)\}_{k'\in\mathcal{K}'}$ and randomly picks $Q \in \mathcal{O}(\text{poly} \log d)$ of them as the challenge states.
\item Verifier queries $U_k$ individually with each challenge $\ket{\phi_{k'}}$ a total of $M$ times to obtain $M$ copies of the response state $\ket{\phi^r_{k'}}$ and stores them in their local database $S$.
\item Verifier transfers $U_k$ to Prover or securely sends the key $k$.
\end{enumerate}
\item \emph{Identification phase}:
\begin{enumerate}
\item Verifier uniformly selects a challenge labelled ($i \xleftarrow{\$} [Q]$), and sends the state $\ket{\phi_i}$ over a public quantum channel to Prover.
\item Prover generates the output $\ket{\phi^p_i}$ by querying $U_k$ with the challenge.
\item The output state $\ket{\phi^p_i}$ is sent to Verifier over a public quantum channel.
\item This procedure is repeated with the same or different states a total of $R \leq Q$ times.
\end{enumerate}
\item \emph{Verification phase}:
\begin{enumerate}
\item Verifier runs a quantum equality-test algorithm on the received response from Prover and the $M$ copies of the correct response that she has in the database. This algorithm is run for all the $R$ CRPs.
\item Verifier outputs `1', implying successful identification, if the test algorithm returns `1' on all CRPs. Otherwise, Verifier outputs `0'.
\end{enumerate}
\end{enumerate}
We note that the protocol assumes that the adversary has only query access to the unitary $U_k$ from a PRU family, as is also assumed in Definition~\ref{def:pru}. The following theorem, which is a corollary of the previous results, shows that the \emph{Efficient hr-verifier identification protocol} is also exponentially secure against a QPT adversary, with the same security bounds.
\begin{theorem}\label{th:eff-hr-sound}
Let $U_k \in \mathcal{U}$ be a unitary randomly selected from a PRU family $\mathcal{U} \subseteq U(d)$. The success probability of the adversary to pass the {\normalfont SWAP}-test or {\normalfont GSWAP}-test verification of the \emph{Efficient hr-verifier protocol} is at most $\epsilon$, given that there are $N$ different CRPs, each with $M$ copies. The $\epsilon$ is bounded as follows for each verification:
\begin{equation}
\text{Pr}[\text{Ver accept}_{\mathcal{A}}] \leqslant \epsilon \quad \quad \epsilon_{\text{SWAP}} \approx \mathcal{O}(\frac{1}{2^{NM}}) \quad \quad \epsilon_{\text{GSWAP}} \approx \mathcal{O}\big(\frac{1}{(M+1)^{N}}\big)
\end{equation}
\end{theorem}
\begin{proof}
First, we use Theorem~\ref{th:pru-uu}, which shows that $\mathcal{U}$ is also an unknown unitary family. Then we use Theorem~\ref{th:efficientuu-prs}, which states that any UU unitary satisfies efficient universal unforgeability, i.e. universal unforgeability when the challenge states are picked from a PRS family. These two results show that $U_k$ within the protocol satisfies the same notion of universal unforgeability that the qPUF satisfies in the original protocol. Now we can directly use the results of~\cite{doosti2020client} for the SWAP and GSWAP tests, which yield the same security bounds in terms of the number of rounds and copies of challenge-response pairs.
\end{proof}
\subsection{Identification protocol with low-resource verifier}
The second identification protocol, also introduced in~\cite{doosti2020client}, enables a weak verifier to identify a prover that is a quantum server in the network. The main idea behind this protocol is to delegate the equality testing to the prover, so that the verifier only needs to run a classical verification algorithm.
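Since both protocols rest on the SWAP test as the underlying equality check, we recall its behaviour with a small numerical sketch (an illustration of the standard test statistics, not of the paper's implementation): the test outputs `0' with probability $\frac{1}{2} + \frac{1}{2}|\langle\phi|\psi\rangle|^2$, so a valid response always passes, while an orthogonal trap state is caught with probability $1/2$ per round.
\begin{verbatim}
import numpy as np

def swap_test_pass_prob(phi: np.ndarray, psi: np.ndarray) -> float:
    # Pr[SWAP test outputs 0] = (1 + |<phi|psi>|^2) / 2
    return 0.5 * (1.0 + abs(np.vdot(phi, psi)) ** 2)

rng = np.random.default_rng(1)
d = 16
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi /= np.linalg.norm(phi)
print(swap_test_pass_prob(phi, phi))   # 1.0: a valid response always passes

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi -= np.vdot(phi, psi) * phi         # project out phi -> orthogonal trap
psi /= np.linalg.norm(psi)
print(swap_test_pass_prob(phi, psi))   # 0.5: a trap is flagged half the time
\end{verbatim}
Over $N$ independent rounds these per-round statistics compound, which is the intuition behind the exponential soundness bounds stated in this section.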
While it might seem that this delegation could damage the security, it has been shown that the unforgeability property of the qPUF, combined with some trapification techniques used in the protocol, leads to yet another exponentially secure qPUF-based identification protocol. In addition to enabling clients to identify quantum servers in the cloud, this protocol has the advantage of one-way quantum communication compared to the previous protocol. We give a brief description of the protocol here. The complete protocol can be found in Appendix~\ref{ap:low-verifier}. The \emph{Setup phase} is similar to the previous protocol, except that in addition to the challenge-response pairs, the verifier also generates some trap states. These trap states need to be orthogonal to the challenge subspace. In the \emph{Identification phase}, the verifier sends two quantum states in every communication round. One of the states is the challenge state and the other is either the correct response or a trap state with no overlap with the correct response. The verifier selects the correct or trap response at random with probability $1/2$\footnote{It has been shown that this probability can be generalised to an arbitrary distribution.}. In other words, in $N$ rounds, around $N/2$ positions carry challenges paired with their correct responses. In the \emph{Verification phase}, the prover generates the valid response for every challenge by querying the qPUF device with it, and then runs a SWAP test on the response produced by the qPUF and the other state sent by the verifier. The prover then sends the classical outputs of the tests to the verifier, who receives a classical string $S_N = s_1,...,s_N$ where $s_i \in \{0,1\}$. Finally, the verifier runs a classical verification algorithm on this string that checks the expected results for the positions with valid responses and also the statistics of the remaining positions. The protocol has been proven secure against both collective and coherent attacks with the following bound, which we restate from the original paper:
\begin{theorem}[\textbf{Th. 6, 7 and 8 in~\cite{doosti2020client}}]\label{th:lr-sound}
Let qPUF be a universally unforgeable unitary PUF over $\mathcal{H}^d$. The success probability of any QPT adversary $\mathcal{A}$ (using a coherent or collective strategy) to pass the verification of the low-resource verifier protocol in $N$ rounds is at most $\epsilon$, where $\epsilon$ is of the order $\mathcal{O}(\frac{1}{2^{N}})$:
\begin{equation}
\text{Pr}[\text{Ver accept}_{\mathcal{A}}] \leqslant \epsilon \quad \quad \epsilon \approx \mathcal{O}(\frac{1}{2^{N}})
\end{equation}
\end{theorem}
Similar to the previous protocol, we introduce an efficient version of this protocol by replacing the Haar-random states with PRS and the qPUF with an unknown unitary selected from a PRU family. We denote this protocol the \emph{Efficient lr-verifier identification protocol}; it proceeds as follows:
\begin{enumerate}
\item \emph{Setup phase}:
\begin{enumerate}
\item Verifier has access to a PRU family $\mathcal{U} = \{U_i\}^\mathcal{K}_{i=1} \subseteq U(d)$.
\item Verifier samples at random $k \overset{\$}{\leftarrow} \mathcal{K}$ and selects $U_k$.
\item Verifier also has access to a family of PRS $\{\ket{\phi_{k'}}\in S(\mathcal{H}^d)\}_{k'\in\mathcal{K}'}$ and randomly picks $Q \in \mathcal{O}(\text{poly} \log d)$ of them as the challenge states.
\item Verifier queries $U_k$ individually with each challenge $\ket{\phi_{k'}}$ a total of $M$ times to obtain $M$ copies of the response state $\ket{\phi^r_{k'}}$ and stores them in their local database $S$.
\item Verifier selects states $\ket{\phi^{\perp}}$ orthogonal to the selected challenges' subspace and queries $U_k$ with them to obtain the trap states, labelled $\ket{\phi^{\text{trap}}}$. Unitarity ensures that $\langle \phi^{\text{trap}}|\phi_{k'}^r\rangle = 0$.
\item Verifier transfers $U_k$ to Prover or securely sends the key $k$.
\end{enumerate}
\item \emph{Identification phase}:
\begin{enumerate}
\item Verifier randomly selects a subset of $N$ different challenges from the database, with indices in $\mathcal{K}'$, and sends the states $\ket{\phi_i}$ over a public quantum channel to Prover.
\item Verifier randomly selects $N/2$ positions, marks them $b = 1$ and sends the valid response states $\ket{\phi_i^1} = \ket{\phi_i^r}$ to Prover. On the remaining $N/2$ positions, marked $b = 0$, Verifier sends the trap states $\ket{\phi_i^0} = \ket{\phi_i^{\text{trap}}}$.
\end{enumerate}
\item \emph{Verification phase}:
\begin{enumerate}
\item Prover queries $U_k$ with the challenge states to generate the response states $\ket{\phi^p_i}$ for all $i \in [N]$.
\item Prover performs a SWAP test between $\ket{\phi^p_i}$ and the response state $\ket{\phi_i^b}$ received from Verifier. This test is repeated for all the $N$ distinct challenges.
\item Prover labels the outcomes of the $N$ instances of the SWAP test by $s_i \in \{0,1\}$ and sends them over a classical channel to Verifier.
\item Verifier runs a classical verification algorithm \texttt{cVer($s_1,...,s_N$)} (as specified in~\cite{doosti2020client} and Appendix~\ref{ap:low-verifier}) and outputs `1', implying that identification has been successful, and `0' otherwise.
\end{enumerate}
\end{enumerate}
Again, using the proof techniques presented in~\cite{doosti2020client} together with our results, we show that the \emph{Efficient lr-verifier identification protocol} achieves exponential security against a QPT adversary under both the coherent and collective attack models.
\begin{theorem}\label{th:eff-lr-sound}
Let $U_k \in \mathcal{U}$ be a unitary randomly selected from a PRU family $\mathcal{U} \subseteq U(d)$. The success probability of a QPT adversary $\mathcal{A}$ to pass the verification of the \emph{Efficient lr-verifier protocol} in $N$ rounds is at most $\epsilon$, bounded as follows:
\begin{equation}
\text{Pr}[\text{Ver accept}_{\mathcal{A}}] \leqslant \epsilon \quad \quad \epsilon \approx \mathcal{O}(\frac{1}{2^{N}})
\end{equation}
\end{theorem}
\begin{proof}
First, we note that we can directly use the result of Theorem 6 in~\cite{doosti2020client}, which bounds the success probability of a classical adversary in passing the classical verification algorithm. The success probability of a quantum adversary mounting a collective or coherent attack is then defined as the advantage of the quantum adversary over that classical adversary in guessing the trap states, using all the side information obtained from $U_k$ in the learning phase. Using Theorem~\ref{th:pru-uu}, we have that $\mathcal{U}$ is also an unknown unitary family. Then we use Theorem~\ref{th:efficientuu-prs}, which states that any UU unitary satisfies efficient universal unforgeability, i.e. universal unforgeability when the challenge states are picked from a PRS family.
Consequently, the conditions of Theorems 7 and 8 in~\cite{doosti2020client} are satisfied and we can directly use those results, which state that the success probability of such adversaries in guessing the traps is bounded as follows:
\begin{equation}
\text{Pr}[b \leftarrow \Lambda_{\mathcal{A}}] \leqslant \frac{1}{2} + \mathcal{O}(2^{-N})
\end{equation}
where $\Lambda_{\mathcal{A}}$ denotes any map that $\mathcal{A}$ uses to distinguish the trap states. Finally, putting all the above results together, we have
\begin{equation}
\text{Pr}[\text{Ver accept}_{\mathcal{A}}] \leqslant \epsilon = \text{Pr}[\text{Ver accept}_{\text{Classical Adv}}] + \mathcal{O}(2^{-N}) \approx \mathcal{O}(2^{-N})
\end{equation}
This concludes the soundness proof of the \emph{Efficient lr-verifier protocol}.
\end{proof}
\section{Conclusion and Discussion}\label{sec:conclusion}
We have explored the relationship between quantum pseudorandomness and quantum hardware assumptions such as quantum physical unclonability. Since one of the main cryptographic properties of quantum physical unclonable functions is the notion of universal unforgeability, we have investigated whether quantum pseudorandomness suffices as a challenge-sampling requirement to achieve this level of unforgeability. We have formally proved that the answer to this question is positive. This result can improve the practicality of qPUF-based constructions and protocols, since it replaces the requirement of Haar-randomness on the challenge states, which is resource-intensive and experimentally challenging. We have also established the link between the notions of unknown unitary and PRU. We proved that any family of PRUs is also a family of unknown unitaries and, hence, a potential candidate for the construction of qPUF devices. This result complements the result of~\cite{kumar2021efficient}, which shows that t-designs can also satisfy a similar notion, namely practical unknownness, leading to an efficient proposal for constructing quantum PUFs. We have also looked at the problem of generating pseudorandom quantum states from hardware assumptions. Our results show that different physical assumptions that were proposed in the context of PUFs, such as uniqueness or practical unknownness, can also imply quantum pseudorandomness. This is of theoretical interest, as it shows an alternative way of achieving quantum pseudorandomness, different from current approaches based on post-quantum computational assumptions. Apart from the cryptographic perspective, having a different set of assumptions for PRS and PRU can find potential applications in physics~\cite{bouland2019computational}. Another interesting future direction would be to further explore the relationship between unclonability and quantum pseudorandomness, initially proposed in~\cite{ji2018pseudorandom}, relying upon our new results. Finally, to show the consequences of our results for the practicality of qPUF-based protocols, we have revisited the qPUF-based identification protocols proposed in~\cite{doosti2020client} using PRS, and we have shown that these more efficient versions of the protocols achieve the same security guarantees as originally proposed. An important point regarding these protocols is that they assume that, during the transfer stage (e) of the setup phase, the adversary has only query access to the device.
If the PRU is realised from hardware assumptions such as practical unknownness, as shown in Theorem~\ref{th:practicaluu-pru}, then this requirement is satisfied by assumption. Otherwise, the unitary circuit of the $U_k$ selected from the PRU needs to be obfuscated or hidden from the adversary \cite{alagic2016quantum,brakerski2020quantum}. This problem mainly arises if the PRU is built from a classical PRF, in which case the underlying circuit is publicly known. An alternative way around this problem is to securely send only the key index of the selected unitary to the other party, who then runs the selected unitary locally. Thus the above protocol works naturally with hardware assumptions that imply the unitary transformation is unknown. Nevertheless, using PRU constructions with known unitary circuits has the advantage of removing the quantum memory requirement for storing the response pairs, as one can recompute the response state given access to the circuit and store only the related classical parameters. Yet another interesting future direction would be to establish concrete bounds on the randomness and pseudorandomness of unitary families given different degrees of uniqueness or distinguishability (not negligibly close to perfect distinguishability), in terms of the diamond norm. This is also related to the study of t-design unitaries, and the random matrix theory toolkit used in this paper can be a powerful tool for such a study.
\section*{Acknowledgement}
We acknowledge the UK Engineering and Physical Sciences Research Council grant EP/N003829/1, as well as the Innovate UK-funded project AirQKD: product of a UK industry pipeline, grant number 106178.
\section*{Competing Interests}
The authors declare no competing interest.
\medskip
\bibliographystyle{unsrt}
\section{Introduction}
The \program{NBODY6}~code~\citep{Aarseth99} is the state-of-the-art code for performing direct N-body simulations to study the evolution of dense stellar systems. It is the yardstick against which other N-body codes are measured in terms of computational efficiency and accuracy~\citep{iwasawa2015gpuenabled,2016MNRAS.463.2109R}. Moreover, \program{NBODY6}~remains flexible in its support for arbitrary initial conditions compared to Monte-Carlo methods~\citep{2013MNRAS.431.2184G}. The code splits the gravitational force acting on a particle into near and distant forces, allowing the fast-changing near forces to be treated separately from the slow-changing distant ones~\citep{1973JCoPh..12..389A,1992PASJ...44..141M}. Recent improvements to \program{NBODY6}~have included routines for performing distant force calculations on general-purpose graphics processing units (GPUs)~\citep{2012MNRAS.424..545N}, which greatly improves the overall run time. Since the force calculation algorithm used in \program{NBODY6}~has a calculation cost of $O(N^2)$, simulations of large clusters, exceeding $10^6$ stars, are still very expensive~\citep{iwasawa2015gpuenabled,Bonsai,aarseth_2003,2015MNRAS.450.4070W}. An early attempt at an $O(N \log N)$ integration scheme for N-body simulations using a tree code was described by~\citet{1993ApJ...414..200M}. More recently,~\citet{Oshino_2011} described the Particle-Particle Particle-Tree (P3T) scheme, which uses a tree code for distant force calculations and a standard leap-frog scheme for integrating these distant forces, while retaining the Hermite scheme for neighbour forces. A working implementation of this scheme with a GPU-enabled tree code was shown in~\citet{iwasawa2015gpuenabled}. Over short simulation times, the P3T scheme performs faster than \program{NBODY6}; however, as the system evolves past core collapse, the P3T scheme becomes slow due to the short time steps required by the Hermite integrator. \program{NBODY6}~is able to cope with these short time steps due to the KS~\citep{KSReg} and chain regularisation code for close encounters~\citep{MikkolaAarseth89,2003IAUS..208..295A}. Additionally, \program{NBODY6}~includes routines for modelling stellar evolution and post-Newtonian force corrections, which allow for more realistic simulations. In this paper, we present a version of \program{NBODY6}, called \program{NBODY6+P3T}, which incorporates the P3T scheme while retaining \program{NBODY6}'s routines for KS and chain regularisation. Section~\ref{construction} describes the construction of the improved \program{NBODY6}~code. Section~\ref{results} presents the results of different runs compared to the original \program{NBODY6}. Finally, in Section~\ref{conclusions}, we present conclusions and discuss future applications of the new code.
\section{Implementation}
\label{construction}
\subsection{Formulation}
In this section we describe the modifications made to \program{NBODY6}~in order to incorporate the P3T scheme. We adopt the definition of P3T from~\citet{iwasawa2015gpuenabled} with the following deviations. The Plummer softening is omitted from the scheme, as the singularity due to the $1/r$ potential is handled by special treatment of close encounters through KS regularisation~\citep{MIKKOLA1998309}. A sphere around each particle, known as the neighbour sphere, separates the force acting on the particle into irregular and regular forces. The irregular forces are the fast-changing forces from nearby particles inside the neighbour sphere.
The regular forces are from the distant particles outside of the neighbour sphere, which change more slowly. In \program{NBODY6+P3T}, the irregular and regular accelerations of particle $i$, $\bm{F}_{I,i}$ and $\bm{F}_{R,i}$ respectively, are calculated as follows:
\begin{align}
\label{eq:fi}
\bm{F}_{I,i} &= \sum_{j \ne i}^{N} m_{j} \frac{\bm{r}_{ij}}{|\bm{r}_{ij}|^{3}} K_{ij},\\
\label{eq:fidot}
\dot{\bm{F}}_{I,i} &= \sum_{j \ne i}^{N} m_{j} \left[ \left( \frac{\bm{\dot r}_{ij}}{|\bm{r}_{ij}|^{3}} - 3 \frac{(\bm{r}_{ij} \cdot \bm{\dot r}_{ij})\, \bm{r}_{ij}}{|\bm{r}_{ij}|^{5}} \right) K_{ij} + \frac{\bm{r}_{ij}}{|\bm{r}_{ij}|^{3}} K'_{ij} \right],\\
\bm{F}_{R,i} &= \sum_{j \ne i}\frac{m_j}{|\bm{r}_{ij}|^{3}}\bm{r}_{ij} - \bm{F}_{I,i},\\
\bm{r}_{ij} &= \bm{x}_i - \bm{x}_j,\\
\bm{\dot r}_{ij} &= \bm{\dot x}_i - \bm{\dot x}_j
\end{align}%
where $m_i$, $\bm{x}_i$, $\bm{\dot x}_i$ are the mass, position, and velocity of particle $i$, respectively, and $\bm{r}_{ij}$ is the relative position vector of particles $i$ and $j$. Here, $K$ and $K'$ are cutoff functions that ensure a smooth transition of forces between $\bm{F}_{I}$ and $\bm{F}_{R}$ as particles move in and out of neighbour spheres. We adopt the formulae from~\citet{iwasawa2015gpuenabled,Duncan98amultiple}:
\begin{align}
\label{eq:duncan}
K_{ij} &= 1 -
\begin{cases}
0 &\text{if }x<0,\\
-20x^7 + 70x^6 - 84x^5 + 35x^4 &\text{if } 0 \leq x < 1,\\
1 &\text{if }1 \leq x,\\
\end{cases}\\
\label{eq:duncan2}
K'_{ij} &=
\begin{cases}
(-140x^6 + 420x^5 - 420x^4 + 140x^3)\frac{(\bm{r}_{ij} \cdot \bm{\dot r}_{ij})}{|\bm{r}_{ij}| (r_{\text{cut}}-r_{\text{in}})} &\text{if }0 \leq x < 1,\\
0 &\text{otherwise,}
\end{cases}
\end{align}%
where%
\begin{align}
x &= \frac{y - \gamma}{1 - \gamma},\\
y &= \frac{|\bm{r}_{ij}|}{r_{\text{cut}}},\\
\gamma &= \frac{r_{\text{in}}}{r_{\text{cut}}},
\end{align}%
and $r_{\text{cut}}$ and $r_{\text{in}}$ denote the outer and inner cutoff radii for neighbour forces, respectively. If $|\bm{r}_{ij}| < r_{\text{in}}$, the force from particle $j$ contributes entirely to $\bm{F}_{I,i}$; likewise, if $|\bm{r}_{ij}| > r_{\text{cut}}$, it counts wholly as part of $\bm{F}_{R,i}$. As in~\citet{iwasawa2015gpuenabled}, we use $\gamma = 0.1$ for all calculations. In \program{NBODY6+P3T}, we used the standard criterion from~\citet{1992PASJ...44..141M}, which determines the individual time step $\Delta t_i$ for each particle in the Hermite scheme:
\begin{equation}
\label{eq:dt}
\Delta t_{i} = \min \left( \sqrt{\eta \frac {\left|\bm a_i^{(0)}\right| \left|\bm a_i^{(2)}\right| + \left|\bm a_i^{(1)}\right|^2 } {\left|\bm a_i^{(1)}\right| \left|\bm a_i^{(3)}\right| + \left|\bm a_i^{(2)}\right|^2} }, \Delta t_{\text{max}} \right).
\end{equation}%
Here, $\bm a_i^{(n)}$ is the $n$th time derivative of the acceleration of particle $i$ due to $\bm{F}_{I,i}$, with the exception of $\bm a_i^{(0)}$, which is due to $\bm{F}_{I,i} + \bm{F}_{R,i}$~\citep{1992PASJ...44..141M}. We followed the suggestion of~\citet{1985mts..conf..377A} and set the accuracy parameter $\eta = 0.02$ for all calculations. The standard P3T implementation includes an initial time step criterion to account for the missing higher-order acceleration derivatives. \program{NBODY6}~instead has routines to calculate these derivatives, and as such the fourth-order standard criterion is used to calculate the initial time step in \program{NBODY6+P3T}.
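To make the force split concrete, the following minimal Python sketch (an illustration under our notation, not code from \program{NBODY6+P3T}) evaluates the cutoff $K$ of equation~\eqref{eq:duncan} and splits the pairwise accelerations into irregular and regular parts; signs are chosen so that the acceleration is attractive:
\begin{verbatim}
import numpy as np

def cutoff_K(r, r_in, r_cut):
    # Seventh-order polynomial cutoff: K = 1 inside r_in, K = 0
    # beyond r_cut, and a smooth transition in between.
    gamma = r_in / r_cut
    x = (r / r_cut - gamma) / (1.0 - gamma)
    poly = -20*x**7 + 70*x**6 - 84*x**5 + 35*x**4
    return 1.0 - np.select([x < 0.0, x < 1.0], [0.0*x, poly], default=1.0)

def split_accel(i, x, m, r_in, r_cut):
    # Acceleration on particle i, split into (irregular, regular).
    dr = x - x[i]                        # relative positions, shape (N, 3)
    r = np.linalg.norm(dr, axis=1)
    mask = r > 0.0                       # drop the self-interaction
    w = m[mask] / r[mask]**3
    a_tot = (w[:, None] * dr[mask]).sum(axis=0)
    K = cutoff_K(r[mask], r_in, r_cut)
    a_irr = ((w * K)[:, None] * dr[mask]).sum(axis=0)
    return a_irr, a_tot - a_irr          # F_I and F_R = total - F_I
\end{verbatim}
By construction the two parts always sum to the full $1/r^2$ interaction, so the split affects only how, not what, is integrated.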
\subsection{Modified Integration}
In \program{NBODY6+P3T}~we replaced the regular force Hermite scheme from \program{NBODY6}~with a standard leapfrog integrator. The velocity kick from the regular force is given by%
\begin{equation}
\label{eq:kick}
\bm{\dot x}_i = \bm{\dot x}_i + \frac{\Delta t_{\text{reg}}}{2} \bm{F}_{R,i}.
\end{equation}%
The regular time step $\Delta t_{\text{reg}}$ is a constant shared by all particles and is defined as a multiple of the maximum irregular Hermite step $\Delta t_{\text{max}}$. In all of our simulations we use $\Delta t_{\text{reg}} = 4 \Delta t_{\text{max}}$. The integration algorithm used in \program{NBODY6+P3T}, illustrated in Fig.~\ref{fig:intgrt}, is as follows:
\begin{enumerate}
\item Calculate the regular acceleration $\bm{F}_R$ of every particle.
\item \label{step2} Apply the velocity kick from equation~\eqref{eq:kick} to each particle.
\item Integrate $\bm{F}_I$ using the Hermite integrator to time $t = t + \Delta t_{\text{reg}}$.
\item Calculate $\bm{F}_R$ again.
\item Apply another velocity kick.
\item Go to step~\ref{step2}.
\end{enumerate}%
\begin{figure}
\centering
\input{integration-flow}
\caption{The integration algorithm used by \program{NBODY6+P3T}.}
\label{fig:intgrt}
\end{figure}%
The routine which predicts the position and velocity of a particle at time $t$ uses the total force in \program{NBODY6}, in the form
\begin{align}
\label{eq:nbody-jpred-x}
\bm{x}_{\text{pred}, i} &= \bm{x}_i + \Delta t_i \bm{\dot{x}}_i + \frac{\Delta t_i^2 (\bm{F}_{I,i} + \bm{F}_{R,i})}{2} + \frac{\Delta t_i^3 (\bm{\dot{F}}_{I,i} + \bm{\dot{F}}_{R,i})}{6},\\
\label{eq:nbody-jpred-dx}
\bm{\dot{x}}_{\text{pred}, i} &= \bm{\dot{x}}_i + \Delta t_i (\bm{F}_{I,i} + \bm{F}_{R,i}) + \frac{\Delta t_i^2 (\bm{\dot{F}}_{I,i} + \bm{\dot{F}}_{R,i})}{2},\\
\Delta t_i &= t - t_i;
\end{align}%
however, due to the velocity kick from the leapfrog routine in \program{NBODY6+P3T}, the regular force would be counted twice. The new routine therefore eschews the regular force and its derivative, becoming%
\begin{align}
\label{eq:p3t-jpred-x}
\bm{x}_{\text{pred}, i} &= \bm{x}_i + \Delta t_i \bm{\dot{x}}_i + \frac{\Delta t_i^2}{2} \bm{F}_{I,i} + \frac{\Delta t_i^3}{6} \bm{\dot{F}}_{I,i},\\
\label{eq:p3t-jpred-dx}
\bm{\dot{x}}_{\text{pred}, i} &= \bm{\dot{x}}_i + \Delta t_i \bm{F}_{I,i} + \frac{\Delta t_i^2}{2} \bm{\dot{F}}_{I,i}.
\end{align}%
\subsection{Neighbour Radius}
\program{NBODY6+P3T}~uses a constant neighbour radius $R_s$ for all particles and introduces a new constant parameter, $r_{\text{buff}}$, such that $r_{\text{cut}} = R_s + r_{\text{buff}}$. The fixed neighbour radius leads to a much smaller average neighbour number, usually $< 1$, with most particles being isolated. In contrast to \program{NBODY6}, which adjusts the size of the neighbour sphere to avoid isolated particles, routines in \program{NBODY6+P3T}~must tolerate empty neighbour lists.
\subsection{Force Calculation}
Historically, the dominant component with regard to computational time in N-body gravitational simulations has been the calculation of the $N^2$ gravitational interactions~\citep{iwasawa2015gpuenabled,Bonsai,aarseth_2003,MIKKOLA1998309,1986Natur.324..446B}. A Barnes-Hut tree~\citep{1986Natur.324..446B} is a data structure and algorithm for approximating the force acting on each particle in $O(N \log N)$ time; the increase in efficiency is a trade-off with accuracy. This trade-off is managed through the opening angle parameter $\theta$.
A value of $\theta = 0$ results in no approximations, reducing the tree code algorithm to the equivalent of a direct-N code. For all runs in this paper we chose $\theta = 0.4$. The regular force calculator in \program{NBODY6+P3T}~is a GPU-enabled tree code provided by the \program{Bonsai}~library~\citep{Bonsai}. Modifications were made to \program{Bonsai}~to populate the neighbour list and return the results in a format compatible with \program{NBODY6}. Irregular forces, the gravitational forces acting on each particle due to its neighbours, are calculated with a family of routines which were modified in \program{NBODY6+P3T}~to include the smoothing function from \eqref{eq:duncan} and \eqref{eq:duncan2} and the alternative prediction formulation from \eqref{eq:p3t-jpred-x} and \eqref{eq:p3t-jpred-dx}.
\subsection{Regularisation Parameters}
\label{reg.params}
\program{NBODY6}~defines two values, $\Delta T_{\text{min}}$ and $R_{\text{min}}$, which control when two particles are candidates for KS regularisation. These parameters describe the minimum distance and irregular time step required for two particles to form a regularised pair. Put simply, if for any single particle $i$, $\Delta t_i < \Delta T_{\text{min}}$ and, for its nearest neighbour $j$, $|x_i - x_j| < R_{\text{min}}$, then the pair $i,j$ may be removed from the simulation and replaced with a single centre-of-mass particle to represent the binary. Choosing the wrong values for these parameters leads to large energy errors. The values of $R_{\text{min}}$ and $\Delta T_{\text{min}}$ are calculated in the \routine{ADJUST}~routine and are defined as follows:%
\begin{align}
R_{\text{min}} &= \frac{4 r_h}{N \rho_{\text{core}}^{1/3}},\\
\Delta T_{\text{min}} &= 0.01 \sqrt{\eta / 0.02} \sqrt{R_{\text{min}}^3},
\end{align}%
where $r_h$ is the half-mass radius and $\rho_{\text{core}}$ is the central density of the system. In \program{NBODY6+P3T}, we use the same definitions for $\Delta T_{\text{min}}$ and $R_{\text{min}}$ as in \program{NBODY6}. The core quantities $R_{\text{core}}$, $N_{\text{core}}$, and $\rho_{\text{core}}$ posed a challenge. The method used to find $R_{\text{core}}$ and the core density $\rho_{\text{core}}$, due to~\citet{1985ApJ...298...80C}, relies on knowing the $6$ nearest neighbours of each particle. \program{NBODY6}~uses the neighbour lists of the particles for this purpose, and while \program{NBODY6}~ensures that most neighbour lists have sufficient members by varying $r_{\text{cut}}$ for individual particles, \program{NBODY6+P3T}~uses a global fixed $r_{\text{cut}}$. If the resulting regularisation parameters are too high, \program{NBODY6+P3T}~regularises pairs in sub-optimal situations, leading to frequent and unnecessary regularisations. Conversely, when the parameters are too low, particles get too close to each other before they are regularised, leading to small irregular integration steps. We present a solution to this problem in subsection~\ref{collapse.results}.
\subsection{Other Modifications}
Assumptions are present throughout \program{NBODY6}, particularly with regard to the variability of $R_s$ and the avoidance of isolated particles.
Also prevalent in the code are numerous loops that perform local Hermite scheme predictions and force calculations outside of the usual routines, which needed to be modified to follow the formulations of \eqref{eq:p3t-jpred-x} and \eqref{eq:p3t-jpred-dx}. To explain all of the changes required in \program{NBODY6+P3T}~would be prohibitively verbose. Tables listing the modifications made to \program{NBODY6}~are available in Appendix~\ref{appendix.tables}.
\section{Results}
\label{results}
\subsection{Accuracy and Performance}
\label{acc.perf}
We performed a number of test runs in order to compare the accuracy and performance of \program{NBODY6+P3T}~against \program{NBODY6}. In this section, we describe the configuration of each program and the results of the test runs. For initial conditions, we adopted a Plummer model~\citep{Plummer} of equal-mass particles for each run. We used N-body units~\citep{1986LNP...267...13H} and set the total mass $M = 1$, the gravitational constant $G = 1$, and the total energy $E = -1/4$. Softening is not used, as KS regularisation avoids the singularity of the gravitational potential. All runs were performed on a single compute node, utilising $28$ Intel Xeon Gold 6132 CPU cores ($2$ sockets, $14$ cores per socket), $32\text{GB}$ of memory, and $1$ NVIDIA V100 GPU. For each run, we evolved the system over $10$ N-body time units, with the exception of $N=2048k$, which was evolved for only $1$ N-body time unit to avoid lengthy run times. In our results we show the average wall clock time per N-body time unit.
\subsubsection{Parameters}
The number of particles, $N$, in each run ranged from $N=32k$ to $2048k$ (where $k = 1024$), doubling in size for each successive run. Unless explicitly stated, run time parameters were chosen to follow the standard \program{NBODY6}~scheme. The parameters $r_{\text{buff}}$, $r_{\text{cut}}$, and $\Delta t_{\text{reg}}$ were determined according to a modified version of the optimal set of accuracy parameters set out in~\citet{iwasawa2015gpuenabled}, summarised below:%
\begin{align}
r_{\text{buff}} &= 3 \tau \sigma,\label{eq:rbuff}\\
r_{\text{cut}} &= 4 \tau,\\
\tau &= \frac{1}{128} \left(\frac{N}{2^{14}}\right)^{-1/3},\\
\Delta t_{\text{reg}} &= \text{the largest power of } \tfrac{1}{2} \text{ such that } \Delta t_{\text{reg}} \leq \tau,\\
\Delta t_{\text{max}} &= \frac{\Delta t_{\text{reg}}}{4},\label{eq:dtmax}\\
\sigma &= \frac{1}{\sqrt{2}}.
\end{align}%
Here, $\sigma$ is the global three-dimensional velocity dispersion and $\tau$ is the calculated regular time step used to derive $r_{\text{buff}}$ and $r_{\text{cut}}$. $\Delta t_{\text{max}}$ is the maximum irregular time step for an individual particle in \program{NBODY6+P3T}; for \program{NBODY6}~the standard value of $1/8$ was used. For all runs in this paper, $\sigma$ is considered constant throughout the run. In future runs requiring longer integration times, a periodic recalculation of $\sigma$, and subsequently of equations \eqref{eq:rbuff} to \eqref{eq:dtmax}, would be necessary.
\subsubsection{Results}
When $N \leq 256k$, \program{NBODY6+P3T}~performs similarly to~\program{NBODY6}~in terms of run time. For larger runs, \program{NBODY6+P3T}~is measurably faster, completing the run in less than half the time for $N = 1024k$ and more than $5$ times faster for $N = 2048k$. Fig.~\ref{fig:tot-time} shows the evolution of total execution time as the system size increases for both codes.
\begin{figure}
\centering
\input{tot-time}
\caption{\textbf{Total wall-clock time of execution per N-body time unit as a function of $N$.} \program{NBODY6}~runs used the same initial conditions as the corresponding \program{NBODY6+P3T}~runs.}
\label{fig:tot-time}
\end{figure}
We identified four functional components common to both codes: force calculation and integration, each further separated into regular and irregular parts. The irregular and regular force calculation execution times are denoted $T_{\bm{F}_I}$ and $T_{\bm{F}_R}$, respectively. Similarly, the irregular and regular integration times are denoted $T_I^{\text{int}}$ and $T_R^{\text{int}}$, respectively. Here, the irregular force calculation and integration correspond to the direct-N force calculator and Hermite integrator of both \program{NBODY6}~and \program{NBODY6+P3T}. Furthermore, the regular force calculation corresponds to the GPU-enabled direct-N force calculator in \program{NBODY6}~and to the GPU-enabled tree code in \program{NBODY6+P3T}. Finally, the regular integration component corresponds to the Hermite integrator in \program{NBODY6}~and to the leapfrog integrator in \program{NBODY6+P3T}. We measured the execution time of each component to show how the choice of component algorithm affects the total execution time. Fig.~\ref{fig:force-time}~shows $T_{\bm{F}_I}$ and $T_{\bm{F}_R}$ for different system sizes. \program{NBODY6+P3T}~consistently spends less time calculating either force when $N > 128k$. For smaller $N$ the difference is negligible; only a few seconds per N-body time unit. The advantage of the tree code can be seen in Fig.~\ref{fig:force-time}. As $N$ increases, the execution time clearly grows much more slowly in \program{NBODY6+P3T}~than with the direct-N algorithm of \program{NBODY6}. At $N = 1024k$, the tree code is faster than the direct-N code by approximately a factor of $20$.
\begin{figure}
\centering
\input{parts-time-force}
\caption{\textbf{Wall-clock time of force calculation as a function of $N$.} Both regular and irregular force calculation times are shown. All runs are the same as those in figure \ref{fig:tot-time}.}
\label{fig:force-time}
\end{figure}
In terms of $T_I^{\text{int}}$, \program{NBODY6+P3T}~performs comparably with \program{NBODY6}, as seen in Fig.~\ref{fig:intgrt-time}. Fig.~\ref{fig:nsteps} shows that \program{NBODY6+P3T}~performs slightly more irregular integration steps than \program{NBODY6}, but Fig.~\ref{fig:nnb} shows that particles in \program{NBODY6+P3T}~have far fewer neighbours on average, which leads to less time spent calculating irregular forces per particle. Both programs also use the same irregular integration algorithm.
\begin{figure}
\centering
\input{parts-time-integrate}
\caption{\textbf{Wall-clock time of integration as a function of $N$.} Both regular and irregular integration times are shown and do not include force calculation time. All runs are the same as those in figure \ref{fig:tot-time}.}
\label{fig:intgrt-time}
\end{figure}
\begin{figure}
\centering
\input{nsteps.tex}
\caption{\textbf{The average number of integration steps per particle per N-body unit time as a function of $N$.} Both regular and irregular integration steps are shown.
All runs are the same as those in figure \ref{fig:tot-time}.}
\label{fig:nsteps}
\end{figure}
\begin{figure}
\centering
\input{nnb.tex}
\caption{\textbf{The average number of neighbours per particle at start time as a function of $N$.} All runs are the same as those in figure \ref{fig:tot-time}.}
\label{fig:nnb}
\end{figure}
Fig.~\ref{fig:intgrt-time} also shows that \program{NBODY6+P3T}~spends more time performing regular integration $T_R^{\text{int}}$ than \program{NBODY6}. The regular integration algorithms used by each program are not comparable, making it difficult to diagnose the disparity. The total number of regular steps performed by \program{NBODY6+P3T}~is fixed at $N / \Delta t_{\text{reg}}$ per unit time. The number of regular steps performed by \program{NBODY6}~is dependent upon the Hermite algorithm which drives regular integration. Fig.~\ref{fig:nsteps} shows that \program{NBODY6+P3T}~performs more regular steps per particle than \program{NBODY6}, over four times as many for $N > 10^6$, which explains why \program{NBODY6+P3T}~spends more time overall performing regular integration than \program{NBODY6}. The average number of neighbours in \program{NBODY6+P3T}~is much smaller, which is reflected in the irregular force calculation time. However, the total integration time is longer than in \program{NBODY6}~because the number of irregular and regular steps per particle is higher, due to the smaller maximum time step value. For larger system sizes this is not so obvious, because the regular force calculation is much more computationally expensive. Fig.~\ref{fig:all-time-proportion} shows the execution time of each component as a proportion of the total execution time for that run. Included in this figure is the time $T_{\text{List}}$, which measures the time needed for the particle scheduler to determine which particles are due to be integrated at each step. As $N$ gets large in \program{NBODY6}, the regular force calculation, $T_{\bm{F}_R}$, becomes the dominant component, accounting for nearly all of the computation time. However, in \program{NBODY6+P3T}, $T_{\bm{F}_R}$ is no longer dominant. In fact, irregular integration $T_I^{\text{int}}$ becomes the dominant component, consistently accounting for approximately $50\%$ to $60\%$ of the run time. The total time spent performing both irregular and regular integration is consistently $70\%$ to $80\%$ of the total run time. Moreover, the proportion of time spent performing force calculations, both regular and irregular, decreases significantly as $N$ gets large; less than $10\%$ altogether at $N=2048k$.
\begin{figure}
\centering
\input{parts-time-all}
\caption{\textbf{Proportion of total wall-clock time spent performing each task, as a function of $N$.} The upper panel is for \program{NBODY6}~runs and the lower panel is for \program{NBODY6+P3T}~runs. Included is the value $T_{\text{List}}$ which represents particle scheduling. All runs are the same as those in figure \ref{fig:tot-time}.}
\label{fig:all-time-proportion}
\end{figure}
Both \program{NBODY6}~and \program{NBODY6+P3T}~spend around $10\%$ of the run time in $T_{\text{List}}$; however, this proportion is decreasing in \program{NBODY6}~and increasing in \program{NBODY6+P3T}. Both codes use the same algorithm; first, the particles with small time steps are copied into a smaller list $L_Q$ at regular intervals. Next, the smaller list is scanned at every block step to determine which particles are due at that step.
The first part has a run time complexity of $O(N)$, and the second is $O(N_Q)$, where $N_Q$ is the size of $L_Q$. If $L_Q$ is repopulated too frequently, or if $N_Q$ is too often larger than the actual number of particles due to be integrated, then it is possible for $T_\text{List}$ to take longer than $T_I^{\text{int}}$. The distributions of time steps are different in \program{NBODY6}~and \program{NBODY6+P3T}, which leads to different measurements of $T_\text{List}$ in each. In \program{NBODY6+P3T}~it is possible to replace the scheduling technique described above with a priority queue; however, this is beyond the scope of this paper.
\subsection{Core Collapse}
As a proof of concept, we performed a long run using \program{NBODY6+P3T}~up to the initial core collapse. The cluster, containing $64k$ particles, was arranged as a Plummer sphere with the following mass spectrum: %
\begin{equation}
m_{i} =
\begin{cases}
\frac{10}{1.9 N} &\text{if }i \equiv 0 \pmod{10},\\
\frac{1}{1.9 N} &\text{otherwise}.
\end{cases}
\end{equation}%
In this way, approximately half of the system's mass is contained in just $10\%$ of the particles. All other parameters were the same as in Section~\ref{acc.perf}. An \program{NBODY6}~run with identical initial conditions was performed to compare the evolution of system properties. Lagrange radii for a range of mass proportions, the total energy error ($\frac{|E(t) - E_0|}{E_0}$), the core density ($\rho_{\text{core}}$), core radius ($R_{\text{core}}$), core membership ($N_{\text{core}}$), and the KS regularisation parameters $R_{\text{min}}$ and $\Delta T_{\text{min}}$ were all measured in both runs.
\subsubsection{Results}
\label{collapse.results}
Fig.~\ref{fig:lagrange-radii} shows the evolution of the Lagrange radii up to the initial core collapse. Both runs exhibit near-identical overall evolution for the measured mass proportions. Additionally, Fig.~\ref{fig:energy} shows the total energy change as a function of time; the \program{NBODY6+P3T}~run recorded an error rate of the same order of magnitude as that of the \program{NBODY6}~run. A linear regression over both series gave slopes of $6.47 \times 10^{-6} \pm 1.27 \times 10^{-7}\,E/T$ and $3.06 \times 10^{-6} \pm 3.02 \times 10^{-7}\,E/T$ for the \program{NBODY6}~and~\program{NBODY6+P3T}~runs, respectively.
\begin{figure}
\centering
\input{lagr.tex}
\caption{Lagrange radii at different proportions of the total mass $M$. The solid and dashed lines represent the \program{NBODY6}~and \program{NBODY6+P3T}~runs, respectively. Both runs used the same initial conditions; $N=64k$ particles in a Plummer sphere with a mass spectrum.}
\label{fig:lagrange-radii}
\end{figure}
\begin{figure}
\centering
\input{ene.tex}
\caption{Energy change as a function of time as a proportion of the initial energy. Both runs are the same as those in figure~\ref{fig:lagrange-radii}.}
\label{fig:energy}
\end{figure}
As was mentioned in subsection~\ref{reg.params}, a fixed neighbour radius can be problematic if too many particles do not have enough neighbours to contribute to the calculation of the core quantities. Fig.~\ref{fig:nnb} shows that the average number of neighbours per particle in \program{NBODY6+P3T}~is quite low. In fact, few particles have the requisite $6$ neighbours needed to contribute to the calculations of $R_{\text{core}}$ and $N_{\text{core}}$. A new algorithm, sketched below, was therefore introduced to \program{NBODY6+P3T}~to accurately approximate the $R_{\text{core}}$, $N_{\text{core}}$, and $\rho_{\text{core}}$ values.
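The following minimal Python sketch illustrates the sampling idea (an illustration of the approach, not the Fortran implementation; the local density estimate follows the $j$th-nearest-neighbour scheme of \citet{1985ApJ...298...80C}, and the final density-squared weighted average is a simplification):
\begin{verbatim}
import numpy as np

def sampled_core_density(x, m, rng, j=6):
    # Estimate rho_core from a random subsample of min(N, sqrt(5000*N))
    # particles, brute-forcing each sampled particle's j nearest
    # neighbours instead of relying on (possibly empty) neighbour lists.
    n = len(x)
    k = int(min(n, np.sqrt(5000.0 * n)))
    sample = rng.choice(n, size=k, replace=False)
    rho = np.empty(k)
    for out, i in enumerate(sample):
        d = np.linalg.norm(x - x[i], axis=1)
        d[i] = np.inf                      # exclude the particle itself
        near = np.argpartition(d, j)[:j]   # its j nearest neighbours
        r_j = d[near].max()                # distance to the j-th one
        # mass of the j-1 inner neighbours within the j-th sphere
        mass = m[near].sum() - m[near[np.argmax(d[near])]]
        rho[out] = mass / (4.0 * np.pi * r_j**3 / 3.0)
    # density-squared weighting emphasises the densest (core) region
    return (rho**2).sum() / rho.sum()
\end{verbatim}
The $O(kN)$ brute-force neighbour search remains affordable because $k \sim \sqrt{N}$, giving roughly $O(N^{3/2})$ work per evaluation.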
Instead of using the neighbour list of each particle to determine its $6$ nearest neighbours, our code takes a random sample of $\min({N, \sqrt{5000 N}})$ particles, then performs a brute-force search for those particles' $6$ nearest neighbours. Figures~\ref{fig:core},~\ref{fig:core-n}, and~\ref{fig:core-density} respectively show the changes in these values as a function of time using this new algorithm. With the core quantities correctly determined, the regularisation parameters can be confidently calculated. Fig.~\ref{fig:dtmin} shows both the $R_{\text{min}}$ and $\Delta T_{\text{min}}$ calculations; \program{NBODY6+P3T}~maintains small offsets from the output of \program{NBODY6}, but they converge to the same solutions during core collapse, when correct regularisation parameters are critical. Furthermore, Fig.~\ref{fig:nks} shows the number of KS pairs formed as a function of time for each program. \program{NBODY6+P3T}~produces roughly four times as many pairs, and slightly more again during the later stages of the run; this may indicate that the new core quantity algorithm requires some adjustment.
\begin{figure}
\centering
\input{core.tex}
\caption{The core radius of the cluster as a function of time. Both runs are the same as those in figure~\ref{fig:lagrange-radii}.}
\label{fig:core}
\end{figure}
\begin{figure}
\centering
\input{n-core.tex}
\caption{The number of particles in the core of the cluster as a function of time. Both runs are the same as those in figure~\ref{fig:lagrange-radii}.}
\label{fig:core-n}
\end{figure}
\begin{figure}
\centering
\input{rhod.tex}
\caption{The core density of the cluster as a function of time. Both runs are the same as those in figure~\ref{fig:lagrange-radii}.}
\label{fig:core-density}
\end{figure}
\begin{figure}
\centering
\input{dtmin.tex}
\caption{The KS regularisation parameters, $R_{\text{min}}$ and $\Delta T_{\text{min}}$, shown in the top and bottom panels, respectively. Both runs are the same as those in figure~\ref{fig:lagrange-radii}.}
\label{fig:dtmin}
\end{figure}
\begin{figure}
\centering
\input{nks.tex}
\caption{The number of KS pairs formed as a function of time. Both runs are the same as those in figure~\ref{fig:lagrange-radii}.}
\label{fig:nks}
\end{figure}
\section{Conclusions}
\label{conclusions}
We described a modified version of \program{NBODY6}~that incorporates a GPU-enabled P3T scheme. Using the optimal set of accuracy parameters outlined in~\citet{iwasawa2015gpuenabled}, our code outperformed \program{NBODY6}~for system sizes $N \ge 512k$. For longer runs, we found that the \program{NBODY6+P3T}~code is sufficiently accurate, by comparing the total energy conservation and Lagrange radii to those of \program{NBODY6}~for the same initial conditions up to core collapse. At present,~\program{NBODY6+P3T}~is not capable of post-collapse simulations, due to large errors in the energy conservation at this point. Future work may include the adjustment of parameters over time to mitigate post-collapse energy errors. Our faster code provides the ability to accurately simulate the evolution of dense star clusters, like globular clusters or galactic nuclei, over long time scales (one Hubble time). A realistic cluster could contain $10^6$ stars, making it advantageous to use our \program{NBODY6+P3T}~code. Simulations of these systems can be used to study the conditions of black hole and neutron star mergers~\citep{2016MNRAS.458.1450W}.
We found that the calculation of the KS regularisation parameters needed to be modified due to an inaccurate calculation of the core density $\rho_{\text{core}}$, which in turn was caused by the lower average neighbour number resulting from a fixed $r_\text{cut}$ value. We have mitigated the issue with a new core density algorithm, although this algorithm may need further adjustment depending on the type of system being simulated. Additional future improvements may include scaling the GPU-enabled force calculator to utilise multiple GPUs, and scaling the computation over many nodes. A method to deal with higher binary fractions, like the one in~\citet{Wang_2020}, would also be useful.
\section*{Acknowledgements}
Parts of this work were performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR program receives funding in part from the National Collaborative Research Infrastructure Strategy (NCRIS) Astronomy allocation provided by the Australian Government. L.W. thanks the JSPS International Research Fellowship (School of Science, The University of Tokyo) for financial support.
\section*{Data Availability}
The (benchmark) data underlying this article were generated using the \program{NBODY6}~and~\program{NBODY6+P3T}~codes on the supercomputer Getafix at the University of Queensland. The data underlying this article will be shared on reasonable request to the corresponding author. Our \program{NBODY6+P3T}~code is free for use and is available at \url{https://github.com/anthony-arnold/nbody6-p3t/tree/v1.0.0}.
\bibliographystyle{mnras}
\section{Introduction}
The {\em diameter} of a polygon is the largest Euclidean distance between pairs of its vertices. A polygon is said to be {\em small} if its diameter equals one. For an integer $n \ge 3$, the maximal area problem consists in finding a small $n$-gon with the largest area. The problem was first investigated by Reinhardt~\cite{reinhardt1922} in 1922. He proved that
\begin{itemize}
\item for all $n \ge 3$, the value $\frac{n}{2}\sin \frac{\pi}{n} - \frac{n}{2}\tan \frac{\pi}{2n}$ is an upper bound on the area of a small $n$-gon;
\item when $n$ is odd, the regular small $n$-gon is the unique optimal solution;
\item when $n=4$, there are infinitely many optimal solutions, including the small square;
\item when $n \ge 6$ is even, the regular small $n$-gon is not optimal.
\end{itemize}
When $n \ge 6$ is even, the maximal area problem is solved for $n \le 12$. The case $n=6$ was solved by Bieri~\cite{bieri1961} in 1961 and Graham~\cite{graham1975} in 1975, the case $n=8$ by Audet, Hansen, Messine, and Xiong~\cite{audet2002} in 2002, and the cases $n=10$ and $n=12$ by Henrion and Messine~\cite{henrion2013} in 2013. The optimal $6$-gon and $8$-gon are represented in Figure~\ref{figure:6gon:U6} and Figure~\ref{figure:8gon:U8}, respectively. In 2017, Audet~\cite{audet2017} showed that the regular small polygon has the maximal area among all equilateral small polygons. The diameter graph of a small polygon is the graph whose vertices are those of the polygon and whose edges join pairs of vertices at distance one. Diameter graphs of some small polygons are shown in Figure~\ref{figure:4gon}, Figure~\ref{figure:6gon}, and Figure~\ref{figure:8gon}. The solid lines illustrate pairs of vertices which are unit distance apart. In 2007, Foster and Szabo~\cite{foster2007} proved that, for even $n \ge 6$, the diameter graph of a small $n$-gon with maximal area has a cycle of length $n-1$ and one additional edge from the remaining vertex. From this result, they derived a tighter upper bound on the maximal area of a small $n$-gon when $n \ge 6$ is even.
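For concreteness, Reinhardt's bound is easy to evaluate numerically; the following minimal Python sketch (an illustration only) reproduces values quoted later in the paper:
\begin{verbatim}
from math import pi, sin, tan

def reinhardt_upper_bound(n: int) -> float:
    # (n/2) sin(pi/n) - (n/2) tan(pi/(2n)); attained by the regular
    # small n-gon exactly when n is odd.
    return 0.5 * n * sin(pi / n) - 0.5 * n * tan(pi / (2 * n))

for n in (5, 6, 8):
    print(n, round(reinhardt_upper_bound(n), 6))
# 5 -> 0.657164 (area of the regular small pentagon)
# 6 -> 0.696152 (strictly above the optimal area 0.674981...)
# 8 -> 0.735084 (strictly above the optimal area 0.726868...)
\end{verbatim}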
\begin{figure} \centering \subfloat[$(\geo{R}_4,0.5)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.5000,0.5000) -- (0,1) -- (-0.5000,0.5000) -- cycle; \draw (0,0) -- (0,1); \draw (0.5000,0.5000) -- (-0.5000,0.5000); \end{tikzpicture} } \subfloat[$(\geo{R}_3^+,0.5)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0.5000,0.8660) -- (0,1) -- (-0.5000,0.8660); \draw (0,1) -- (0,0) -- (0.5000,0.8660) -- (-0.5000,0.8660) -- (0,0); \end{tikzpicture} } \caption{Two small $4$-gons $(\geo{P}_4,A(\geo{P}_4))$} \label{figure:4gon} \end{figure} \begin{figure} \centering \subfloat[$(\geo{R}_6,0.649519)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.4330,0.2500) -- (0.4330,0.7500) -- (0,1) -- (-0.4330,0.7500) -- (-0.4330,0.2500) -- cycle; \draw (0,0) -- (0,1); \draw (0.4330,0.2500) -- (-0.4330,0.7500); \draw (0.4330,0.7500) -- (-0.4330,0.2500); \end{tikzpicture} } \subfloat[$(\geo{R}_5^+,0.672288)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.5000,0.3633) -- (0.3090,0.9511) -- (0,1) -- (-0.3090,0.9511) -- (-0.5000,0.3633) -- cycle; \draw (0,1) -- (0,0) -- (0.3090,0.9511) -- (-0.5000,0.3633) -- (0.5000,0.3633) -- (-0.3090,0.9511) -- (0,0); \end{tikzpicture} } \subfloat[$(\geo{P}_6^*,0.674981)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.5000,0.4024) -- (0.3438,0.9391) -- (0,1) -- (-0.3438,0.9391) -- (-0.5000,0.4024) -- cycle; \draw (0,1) -- (0,0) -- (0.3438,0.9391) -- (-0.5000,0.4024) -- (0.5000,0.4024) -- (-0.3438,0.9391) -- (0,0); \end{tikzpicture} \label{figure:6gon:U6} } \caption{Three small $6$-gons $(\geo{P}_6,A(\geo{P}_6))$} \label{figure:6gon} \end{figure} \begin{figure} \centering \subfloat[$(\geo{R}_8,0.707107)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.3536,0.1464) -- (0.5000,0.5000) -- (0.3536,0.8536) -- (0,1) -- (-0.3536,0.8536) -- (-0.5000,0.5000) -- (-0.3536,0.1464) -- cycle; \draw (0,0) -- (0,1); \draw (0.3536,0.1464) -- (-0.3536,0.8536); \draw (0.5000,0.5000) -- (-0.5000,0.5000); \draw (0.3536,0.8536) -- (-0.3536,0.1464); \end{tikzpicture} } \subfloat[$(\geo{R}_7^+,0.725320)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.4010,0.1931) -- (0.5000,0.6270) -- (0.2225,0.9749) -- (0,1) -- (-0.2225,0.9749) -- (-0.5000,0.6270) -- (-0.4010,0.1931) -- cycle; \draw (0,1) -- (0,0) -- (0.2225,0.9749) -- (-0.4010,0.1931) -- (0.5000,0.6270) -- (-0.5000,0.6270) -- (0.4010,0.1931) -- (-0.2225,0.9749) -- (0,0); \end{tikzpicture} } \subfloat[$(\geo{P}_8^*,0.726868)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.4091,0.2238) -- (0.5000,0.6404) -- (0.2621,0.9650) -- (0,1) -- (-0.2621,0.9650) -- (-0.5000,0.6404) -- (-0.4091,0.2238) -- cycle; \draw (0,1) -- (0,0) -- (0.2621,0.9650) -- (-0.4091,0.2238) -- (0.5000,0.6404) -- (-0.5000,0.6404) -- (0.4091,0.2238) -- (-0.2621,0.9650) -- (0,0); \end{tikzpicture} \label{figure:8gon:U8} } \caption{Three small $8$-gons $(\geo{P}_8,A(\geo{P}_8))$} \label{figure:8gon} \end{figure} For even $n\ge 8$, exact solutions in the maximal area problem appear to be presently out of reach. However, tight lower bounds on the maximal area can be obtained analytically. For instance, Mossinghoff~\cite{mossinghoff2006b} constructed a family of small $n$-gons, for even $n\ge 6$, and proved that the areas obtained cannot be improved for large $n$ by more than $c/n^3$, for a certain positive constant $c$. By contrast, the areas of the regular small $n$-gons cannot be improved for large $n$ by more than $\pi^3/(16n^2)$ when $n \ge 6$ is even. 
In this paper, we propose tighter lower bounds on the maximal area of small $n$-gons when $n \ge 6$ is even. Thus, the main result of this paper is the following: \begin{theorem}\label{thm:Bn} Suppose $n = 2m$ with integer $m \ge 3$. Let $\ub{A}_n := \frac{n}{2}\sin \frac{\pi}{n} - \frac{n-1}{2}\tan \frac{\pi}{2n-2}$ denote an upper bound on the area $A(\geo{P}_n)$ of a small $n$-gon $\geo{P}_n$~\cite{foster2007}. Let $\geo{M}_n$ denote the small $n$-gon constructed by Mossinghoff~\cite{mossinghoff2006b} for the maximal area problem. Then there exists a small $n$-gon $\geo{B}_n$ such that \[ \ub{A}_n - A(\geo{B}_n) = \frac{(5303-456\sqrt{114})\pi^3}{5808n^3} + O\left(\frac{1}{n^4}\right) < \frac{3\pi^3}{40n^3} + O\left(\frac{1}{n^4}\right) \] and \[ A(\geo{B}_n) - A(\geo{M}_n) = \frac{3d\pi^3}{n^5} + O\left(\frac{1}{n^6}\right) \] with \[ \begin{aligned} d &= \frac{25\pi^2(1747646-22523\sqrt{114})}{4691093528} + \frac{32717202988-3004706459\sqrt{114}}{29464719680}\\ &+ (-1)^{\frac{n}{2}} \frac{15\pi(10124777-919131\sqrt{114})}{852926096}\\ &= \begin{cases} 0.0836582354\ldots &\text{if $n \equiv 2 \bmod 4$,}\\ 0.1180393778\ldots &\text{if $n \equiv 0 \bmod 4$.} \end{cases} \end{aligned} \] Moreover, $\geo{B}_6$ is the largest small $6$-gon. \end{theorem} The remainder of this paper is organized as follows. Section~\ref{sec:ngon} recalls principal results on the maximal area problem. We prove Theorem~\ref{thm:Bn} in Section~\ref{sec:Bn}. We conclude the paper in Section~\ref{sec:conclusion}. \section{Areas of small polygons}\label{sec:ngon} Let $A(\geo{P})$ denote the area of a polygon $\geo{P}$. Let $\geo{R}_n$ denote the regular small $n$-gon. We have \[ A(\geo{R}_n) = \begin{cases} \frac{n}{2}\sin \frac{\pi}{n} - \frac{n}{2}\tan \frac{\pi}{2n} &\text{if $n$ is odd,}\\ \frac{n}{8}\sin \frac{2\pi}{n} &\text{if $n$ is even.}\\ \end{cases} \] For all even $n\ge 6$, $A(\geo{R}_n) < A(\geo{R}_{n-1})$~\cite{audet2009}. This suggests that $\geo{R}_n$ does not have maximum area for any even $n\ge 6$. Indeed, when $n$ is even, we can construct a small $n$-gon with a larger area than $\geo{R}_n$ by adding a vertex at distance $1$ along the mediatrix of an angle in $\geo{R}_{n-1}$. We denote this $n$-gon by $\geo{R}_{n-1}^+$ and we have \[ A(\geo{R}_{n-1}^+) = A(\geo{R}_{n-1}) + \sin \frac{\pi}{2n-2} - \frac{1}{2}\sin \frac{\pi}{n-1}. \] \begin{theorem}[Reinhardt~\cite{reinhardt1922}, Foster and Szabo~\cite{foster2007}]\label{thm:area:opt} For all $n \ge 3$, let $A_n^*$ denote the maximal area among all small $n$-gons. \begin{itemize} \item When $n$ is odd, $A_n^* = \frac{n}{2}\sin \frac{\pi}{n} - \frac{n}{2}\tan \frac{\pi}{2n}$ is only achieved by $\geo{R}_n$. \item $A_4^* = 1/2$ is achieved by infinitely many $4$-gons, including $\geo{R}_4$ and~$\geo{R}_3^+$ illustrated in Figure~\ref{figure:4gon}. \item When $n\ge 6$ is even, the diameter graph of an optimal $n$-gon has a cycle of length $n-1$ plus one additional edge from the remaining vertex and $A_n^* < \ub{A}_n := \frac{n}{2} \sin \frac{\pi}{n} - \frac{n-1}{2} \tan \frac{\pi}{2n-2}$. \end{itemize} \end{theorem} When $n\ge 6$ is even, the maximal area~$A_n^*$ is known for $n \le 12$. Bieri~\cite{bieri1961} and Graham~\cite{graham1975} determined analytically that $A_6^* = 0.674981\ldots > A(\geo{R}_{5}^+)$, and this value is only achieved by the small $6$-gon shown in Figure~\ref{figure:6gon:U6}. 
Audet, Hansen, Messine, and Xiong~\cite{audet2002} proved that $A_8^* = 0.726868\ldots > A(\geo{R}_{7}^+)$, which is only achieved by the small $8$-gon represented in Figure~\ref{figure:8gon:U8}. Henrion and Messine~\cite{henrion2013} found that $A_{10}^* = 0.749137\ldots > A(\geo{R}_{9}^+)$ and $A_{12}^* = 0.760729\ldots > A(\geo{R}_{11}^+)$. \begin{conjecture} \label{thm:area:sym} For even $n \ge 6$, an optimal $n$-gon has an axis of symmetry corresponding to the pendant edge in its diameter graph. \end{conjecture} From Theorem~\ref{thm:area:opt}, we note that $\geo{R}_{n-1}^+$ has the optimal diameter graph. Conjecture~\ref{thm:area:sym} has been proven only for $n=6$, by Yuan~\cite{yuan2004}. However, the largest small polygons obtained by~\cite{audet2002} and~\cite{henrion2013} provide further evidence that the conjecture may be true. For even $n\ge 6$, $\geo{R}_{n-1}^+$ does not provide the tightest lower bound for $A_n^*$. Indeed, Mossinghoff~\cite{mossinghoff2006b} constructed a family of small $n$-gons $\geo{M}_n$, illustrated in Figure~\ref{figure:Mn}, such that \[ \ub{A}_n - A(\geo{M}_n) = \frac{(5303-456\sqrt{114})\pi^3}{5808n^3} + O\left(\frac{1}{n^4}\right) < \frac{3\pi^3}{40n^3} + O\left(\frac{1}{n^4}\right) \] for all even $n \ge 6$. On the other hand, \[ \begin{aligned} \ub{A}_n - A(\geo{R}_n) &= \frac{\pi^3}{16n^2} + O\left(\frac{1}{n^3}\right),\\ \ub{A}_n - A(\geo{R}_{n-1}^+) &= \frac{5\pi^3}{48n^3} + O\left(\frac{1}{n^4}\right) \end{aligned} \] for all even $n\ge 6$. In the next section, we propose a tighter lower bound for $A_n^*$. \begin{figure} \centering \subfloat[$(\geo{M}_6,0.673186)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.5000,0.4362) -- (0.3701,0.9290) -- (0,1) -- (-0.3701,0.9290) -- (-0.5000,0.4362) -- cycle; \draw (0,1) -- (0,0) -- (0.3701,0.9290) -- (-0.5000,0.4362) -- (0.5000,0.4362) -- (-0.3701,0.9290) -- (0,0); \end{tikzpicture} } \subfloat[$(\geo{M}_{8},0.725976)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.3988,0.2265) -- (0.5000,0.6649) -- (0.2813,0.9596) -- (0,1) -- (-0.2813,0.9596) -- (-0.5000,0.6649) -- (-0.3988,0.2265) -- cycle; \draw (0,1) -- (0,0) -- (0.2813,0.9596) -- (-0.3988,0.2265) -- (0.5000,0.6649) -- (-0.5000,0.6649) -- (0.3988,0.2265) -- (-0.2813,0.9596) -- (0,0); \end{tikzpicture} } \subfloat[$(\geo{M}_{10},0.749029)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.3310,0.1396) -- (0.5000,0.4454) -- (0.4463,0.7687) -- (0.2167,0.9762) -- (0,1) -- (-0.2167,0.9762) -- (-0.4463,0.7687) -- (-0.5000,0.4454) -- (-0.3310,0.1396) -- cycle; \draw (0,1) -- (0,0) -- (0.2167,0.9762) -- (-0.3310,0.1396) -- (0.4463,0.7687) -- (-0.5000,0.4454) -- (0.5000,0.4454) -- (-0.4463,0.7687) -- (0.3310,0.1396) -- (-0.2167,0.9762) -- (0,0); \end{tikzpicture} } \caption{Mossinghoff polygons $(\geo{M}_n,A(\geo{M}_n))$} \label{figure:Mn} \end{figure} \section{Proof of Theorem~\ref{thm:Bn}}\label{sec:Bn} For all $n=2m$ with integer $m\ge 3$, consider a small $n$-gon $\geo{P}_n$ having the optimal diameter graph: an $(n-1)$-length cycle $\geo{v}_{0} - \geo{v}_1 - \ldots-\geo{v}_{k} - \ldots - \geo{v}_{\frac{n}{2}-1} - \geo{v}_{\frac{n}{2}} - \ldots - \geo{v}_{n-k-1} - \ldots - \geo{v}_{n-2}-\geo{v}_0$ plus the pendant edge $\geo{v}_{0} - \geo{v}_{n-1}$, as illustrated in Figure~\ref{figure:model:optimal}. We assume that $\geo{P}_n$ has the edge $\geo{v}_{0}-\geo{v}_{n-1}$ as axis of symmetry.
\begin{figure} \centering \begin{tikzpicture}[scale=8] \draw[dashed] (0,0) node[below]{$\geo{v}_0(0,0)$} -- (0.4068,0.2215) node[right]{$\geo{v}_5(x_5,y_5)$} -- (0.5000,0.6432) node[right]{$\geo{v}_3(x_3,y_3)$} -- (0.2619,0.9651) node[right]{$\geo{v}_1(x_1,y_1)$} -- (0,1) node[above]{$\geo{v}_7(0,1)$} -- (-0.2619,0.9651) node[left]{$\geo{v}_6(x_6,y_6)$} -- (-0.5000,0.6432) node[left]{$\geo{v}_4(x_4,y_4)$} -- (-0.4068,0.2215) node[left]{$\geo{v}_2(x_2,y_2)$} -- cycle; \draw (0,1) -- (0,0) -- (0.2619,0.9651) -- (-0.4068,0.2215) -- (0.5000,0.6432) -- (-0.5000,0.6432) -- (0.4068,0.2215) -- (-0.2619,0.9651) -- (0,0); \draw (0.0655,0.2413) arc (74.82:90.00:0.25) node[midway,above]{$\alpha_0$}; \draw (0.0946,0.7793) arc (228.01:254.82:0.25) node[midway,below]{$\alpha_1$}; \draw (-0.1801,0.3269) arc (24.93:48.01:0.25) node[midway,right]{$\alpha_2$}; \draw (0.2500,0.6432) arc (180.00:204.93:0.25) node[midway,left]{$\alpha_3$}; \end{tikzpicture} \caption{Definition of variables $\alpha_0, \alpha_1, \ldots, \alpha_{\frac{n}{2}-1}$: Case of $n=8$ vertices} \label{figure:model:optimal} \end{figure} We use Cartesian coordinates to describe the $n$-gon $\geo{P}_n$, assuming that a vertex $\geo{v}_k$, $k=0,1,\ldots,n-1$, is positioned at abscissa $x_k$ and ordinate $y_k$. Placing the vertex $\geo{v}_0$ at the origin, we set $x_0 = y_0 = 0$. We also assume that $\geo{P}_n$ is in the half-plane $y\ge 0$. Let us place the vertex $\geo{v}_{n-1}$ at $(0,1)$ in the plane. Let $\alpha_0 = \angle \geo{v}_{n-1}\geo{v}_{0}\geo{v}_{1}$ and for all $k=1,2,\ldots, n/2-1$, $\alpha_k = \angle \geo{v}_{k-1} \geo{v}_{k} \geo{v}_{k+1}$. Since $\geo{P}_n$ is symmetric, we have \begin{equation}\label{eq:condition} \sum_{k=0}^{n/2-1}\alpha_k = \frac{\pi}{2}, \end{equation} and \begin{subequations}\label{eq:xy} \begin{align} x_{k} &= \sum_{i=0}^{k-1} (-1)^{i} \sin \left(\sum_{j=0}^{i}\alpha_j\right) && = - x_{n-k-1} &\forall k=1,2,\ldots, \frac{n}{2}-1,\\ y_{k} &= \sum_{i=0}^{k-1} (-1)^{i} \cos \left(\sum_{j=0}^{i}\alpha_j\right) && = y_{n-k-1} &\forall k=1,2,\ldots,\frac{n}{2}-1. \end{align} \end{subequations} Since the edge $\geo{v}_{\frac{n}{2}-1} - \geo{v}_{\frac{n}{2}}$ is horizontal and $\|\geo{v}_{\frac{n}{2}-1} - \geo{v}_{\frac{n}{2}}\| = 1$, we also have \begin{equation}\label{eq:x:m} x_{\frac{n}{2}-1} = (-1)^{\frac{n}{2}}/2 = -x_{\frac{n}{2}}. \end{equation} If $A_1$ denotes the area of the triangle $\geo{v}_0 \geo{v}_1 \geo{v}_{n-1}$ and $A_k$ denotes the area of the triangle $\geo{v}_0 \geo{v}_{k+1} \geo{v}_{k-1}$ for all $k = 2,3,\ldots,n/2-1$, then the area of $\geo{P}_n$ is $A = \sum_{k=1}^{n/2-1} 2A_k$. From~\eqref{eq:condition} and \eqref{eq:xy}, we have \begin{subequations}\label{eq:A} \begin{align} 2A_1 &= x_1 = \sin \alpha_0,\\ 2A_k &= x_{k+1}y_{k-1} - y_{k+1}x_{k-1} \nonumber\\ &= \sin \alpha_k + 2(-1)^k \left(x_k \sin \left(\sum_{j=0}^{k-1} \alpha_{j} + \frac{\alpha_k}{2}\right)+ y_k \cos \left(\sum_{j=0}^{k-1} \alpha_{j} + \frac{\alpha_k}{2}\right) \right)\sin \frac{\alpha_k}{2} \end{align} \end{subequations} for all $k = 2,3,\ldots,n/2-1$. Then one can construct a large small $n$-gon by maximizing the area $A$ over the $n/2$ variables $\alpha_0, \alpha_1, \ldots, \alpha_{\frac{n}{2}-1}$ subject to~\eqref{eq:condition} and~\eqref{eq:x:m}. Instead, we are going to use the same approach as Mossinghoff~\cite{mossinghoff2006b} to obtain a large small $n$-gon with fewer variables.
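Before specializing the angles, we note that the parametrization above is directly computable. The following minimal Python sketch (our own illustration, assuming NumPy; the function name is hypothetical, and OPTIGON~\cite{optigon} remains the reference implementation) evaluates the vertex coordinates~\eqref{eq:xy} and the area $A = \sum_{k=1}^{n/2-1} 2A_k$ from a given angle vector $(\alpha_0, \ldots, \alpha_{\frac{n}{2}-1})$:
\begin{verbatim}
import numpy as np

def area_from_angles(alpha):
    # alpha = [alpha_0, ..., alpha_{n/2-1}], assumed to sum to pi/2
    m = len(alpha)                   # m = n/2
    s = np.cumsum(alpha)             # partial sums of the angles
    sign = (-1.0)**np.arange(m)
    x = np.concatenate(([0.0], np.cumsum(sign*np.sin(s))))  # x_0..x_{n/2}
    y = np.concatenate(([0.0], np.cumsum(sign*np.cos(s))))  # y_0..y_{n/2}
    A = x[1]                         # 2*A_1 = x_1
    for k in range(2, m):
        A += x[k+1]*y[k-1] - y[k+1]*x[k-1]                  # 2*A_k
    return A

# sanity check: R_5^+ has angles (pi/10, pi/5, pi/5), area 0.672288...
print(area_from_angles([np.pi/10, np.pi/5, np.pi/5]))
\end{verbatim}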
Now, suppose $\alpha_0 = \alpha$, $\alpha_1 = \beta + \gamma$, $\alpha_2 = \beta - \gamma$, and $\alpha_k = \beta$ for all $k = 3,4,\ldots, n/2-1$. Then \eqref{eq:condition} becomes \begin{equation}\label{eq:condition:ab} \alpha + \left(\frac{n}{2} - 1\right)\beta = \frac{\pi}{2}. \end{equation} Coordinates $(x_k,y_k)$ in~\eqref{eq:xy} are given by \begin{subequations}\label{eq:xyab} \begin{align} x_{1} &= \sin \alpha,\\ y_{1} &= \cos \alpha,\\ x_{2} &= \sin \alpha - \sin (\alpha+\beta+\gamma), \label{eq:xyab:2}\\ y_{2} &= \cos \alpha - \cos (\alpha+\beta+\gamma),\\ x_{k} &= x_{2} + \sum_{j=3}^k (-1)^{j-1} \sin (\alpha + (j-1)\beta) \nonumber\\ &= x_{2} + \frac{\sin \left(\alpha + 3\frac{\beta}{2}\right) - (-1)^k\sin \left(\alpha + (2k-1)\frac{\beta}{2}\right)}{2\cos \frac{\beta}{2}} &\forall k=3,4,\ldots,\frac{n}{2}-1, \label{eq:xyab:k}\\ y_{k} &= y_{2} + \sum_{j=3}^k (-1)^{j-1} \cos (\alpha + (j-1)\beta) \nonumber\\ &= y_{2} + \frac{ \cos \left(\alpha + 3\frac{\beta}{2}\right) - (-1)^k\cos \left(\alpha + (2k-1)\frac{\beta}{2}\right)}{2\cos \frac{\beta}{2}} &\forall k=3,4,\ldots,\frac{n}{2}-1. \end{align} \end{subequations} From \eqref{eq:x:m}, \eqref{eq:xyab:2}, \eqref{eq:xyab:k}, and \eqref{eq:condition:ab}, we deduce that \begin{equation}\label{eq:condition:abc} \sin (\alpha + \beta + \gamma) = \sin \alpha + \frac{\sin \left(\alpha + 3\frac{\beta}{2}\right)}{2\cos \frac{\beta}{2}}. \end{equation} The areas $A_k$ in~\eqref{eq:A} determined by $\alpha$, $\beta$, and $\gamma$ are \[ \begin{aligned} 2A_1 &= \sin \alpha,\\ 2A_2 &= \sin (2\beta) - \sin (\beta + \gamma),\\ 2A_k &= \sin \beta + 2(-1)^k \left(x_k \sin \left(\alpha + (2k-1)\frac{\beta}{2}\right)+ y_k \cos \left(\alpha + (2k-1)\frac{\beta}{2}\right) \right) \sin \frac{\beta}{2}\\ &= \sin \beta - \tan \frac{\beta}{2} + 2(-1)^{k-1} \left( 2\sin \frac{\beta + \gamma}{2} \sin \left((k-1)\beta - \frac{\gamma}{2}\right) - \frac{\cos ((k-2)\beta)}{2\cos \frac{\beta}{2}}\right)\sin \frac{\beta}{2} \end{aligned} \] for all $k = 3,4,\ldots,n/2-1$. Using~\eqref{eq:condition:abc}, it follows that \[ \sum_{k=3}^{n/2-1} 2A_k = \left(\frac{n}{2}-3\right)\left(\sin \beta - \tan \frac{\beta}{2}\right) + \left(\cos (\beta - \gamma) - \cos (2\beta) - \frac{1}{2}\right) \tan \frac{\beta}{2}. \] Thus, \begin{equation}\label{eq:A:abc} \begin{aligned} A &= \sin \alpha + \sin (2\beta) - \sin (\beta + \gamma) \\ &+ \left(\frac{n}{2}-3\right)\left(\sin \beta - \tan \frac{\beta}{2}\right) + \left(\cos (\beta - \gamma) - \cos (2\beta) - \frac{1}{2}\right) \tan \frac{\beta}{2}. \end{aligned} \end{equation} Note that, for $n=6$, we have $A = \sin \alpha + \sin (2\beta) - \sin (\beta + \gamma)$. With~\eqref{eq:condition:ab} and~\eqref{eq:condition:abc}, the area $A$ in~\eqref{eq:A:abc} can be considered as a one-variable function $f(\alpha)$. For instance, for $\alpha = \frac{\pi}{2n-2}$, we have $\beta = \frac{\pi}{n-1}$, $\gamma = 0$, and $f\left(\frac{\pi}{2n-2}\right) = A(\geo{R}_{n-1}^+)$. We may now search for a value of $\alpha \in \left[\frac{\pi}{2n-2}, \frac{\pi}{n}\right]$ that maximizes this function. 
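This one-dimensional search is straightforward to carry out numerically. Continuing the sketch above (again our own illustration, reusing \texttt{area\_from\_angles} and assuming SciPy; the analysis in this paper is asymptotic, not numerical), one recovers $\beta$ from~\eqref{eq:condition:ab}, $\gamma$ from~\eqref{eq:condition:abc}, and then maximizes $f(\alpha)$ over the stated interval:
\begin{verbatim}
from scipy.optimize import minimize_scalar

def f(alpha, n):
    m = n//2
    beta = (np.pi/2 - alpha)/(m - 1)          # angle-sum condition
    rhs = np.sin(alpha) + np.sin(alpha + 1.5*beta)/(2*np.cos(beta/2))
    gamma = np.arcsin(rhs) - alpha - beta     # closure condition
    return area_from_angles([alpha, beta + gamma, beta - gamma]
                            + [beta]*(m - 3))

n = 6
res = minimize_scalar(lambda a: -f(a, n),
                      bounds=(np.pi/(2*n - 2), np.pi/n),
                      method='bounded')
print(res.x, -res.fun)  # ~0.350930 and ~0.674981 for n = 6 (B_6)
\end{verbatim}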
An asymptotic analysis shows that, for large $n$, $f(\alpha)$ is maximized at $\hat{\alpha}(n)$ satisfying \[ \hat{\alpha}(n) = \frac{a\pi}{n} + \frac{b\pi}{n^2} - \frac{c\pi}{n^3} + O\left(\frac{1}{n^4}\right), \] where $a = \frac{2\sqrt{114}-7}{22} = 0.652461\ldots$, $b = \frac{84a^2-272a+175}{4(22a+7)} = \frac{3521\sqrt{114}-34010}{9196} = 0.389733\ldots$, and \[ \begin{aligned} c &= \frac{(7792a^4+16096a^3 + 2568a^2 -6248a +223)\pi^2}{768(22a+7)} - \frac{226a^2 + 84ab-22b^2-542a-136b + 303}{2(22a+7)}\\ &= \frac{17328(663157+3161\pi^2) - (1088031703 - 3918085\pi^2)\sqrt{114}}{507398496} = 1.631188\ldots. \end{aligned} \] Let $\geo{B}_n$ denote the $n$-gon obtained by setting $\alpha = \hat{\alpha}(n)$. We have \[ \begin{aligned} \beta &= \hat{\beta}(n) = \frac{\pi}{n} + \frac{2(1-a)\pi}{n^2} + O\left(\frac{1}{n^3}\right),\\ \gamma &= \hat{\gamma}(n) = \frac{(2a-1)\pi}{4n} + \frac{(a+b-1)\pi}{2n^2} + O\left(\frac{1}{n^3}\right), \end{aligned} \] and the area of $\geo{B}_n$ is \[ \begin{aligned} A(\geo{B}_n) &= f(\hat{\alpha}(n))\\ &= \frac{\pi}{4} - \frac{5\pi^3}{48n^2} - \frac{(5545-456\sqrt{114})\pi^3}{5808n^3} - \left(\frac{7(13817-1281\sqrt{114})}{10648} - \frac{\pi^2}{480}\right) \frac{\pi^3}{n^4}\\ &- \left( \frac{23\pi^2(351468\sqrt{114}-2868731)}{618435840} + \frac{4013754104-375661161\sqrt{114}}{53410368}\right) \frac{\pi^3}{n^5} + O\left(\frac{1}{n^6}\right), \end{aligned} \] which implies \[ \ub{A}_n - A(\geo{B}_n) = \frac{(5303-456\sqrt{114})\pi^3}{5808n^3} + \frac{(192107-17934\sqrt{114})\pi^3}{21296n^4} + O\left(\frac{1}{n^5}\right). \] By construction, $\geo{B}_n$ is small. We illustrate $\geo{B}_n$ for some $n$ in Figure~\ref{figure:Bn}. \begin{figure}[h] \centering \subfloat[$(\geo{B}_6,0.674981)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.5000,0.4024) -- (0.3438,0.9391) -- (0,1) -- (-0.3438,0.9391) -- (-0.5000,0.4024) -- cycle; \draw (0,1) -- (0,0) -- (0.3438,0.9391) -- (-0.5000,0.4024) -- (0.5000,0.4024) -- (-0.3438,0.9391) -- (0,0); \end{tikzpicture} } \subfloat[$(\geo{B}_{8},0.726854)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.4068,0.2215) -- (0.5000,0.6432) -- (0.2619,0.9651) -- (0,1) -- (-0.2619,0.9651) -- (-0.5000,0.6432) -- (-0.4068,0.2215) -- cycle; \draw (0,1) -- (0,0) -- (0.2619,0.9651) -- (-0.4068,0.2215) -- (0.5000,0.6432) -- (-0.5000,0.6432) -- (0.4068,0.2215) -- (-0.2619,0.9651) -- (0,0); \end{tikzpicture} } \subfloat[$(\geo{B}_{10},0.749119)$]{ \begin{tikzpicture}[scale=4] \draw[dashed] (0,0) -- (0.3351,0.1395) -- (0.5000,0.4346) -- (0.4428,0.7678) -- (0.2103,0.9776) -- (0,1) -- (-0.2103,0.9776) -- (-0.4428,0.7678) -- (-0.5000,0.4346) -- (-0.3351,0.1395) -- cycle; \draw (0,1) -- (0,0) -- (0.2103,0.9776) -- (-0.3351,0.1395) -- (0.4428,0.7678) -- (-0.5000,0.4346) -- (0.5000,0.4346) -- (-0.4428,0.7678) -- (0.3351,0.1395) -- (-0.2103,0.9776) -- (0,0); \end{tikzpicture} } \caption{Polygons $(\geo{B}_n,A(\geo{B}_n))$ defined in Theorem~\ref{thm:Bn}} \label{figure:Bn} \end{figure} Mossinghoff's small $n$-gons $\geo{M}_n$, $n=2m$ and $m\ge 3$, constructed in~\cite{mossinghoff2006b} for the maximal area problem, were obtained as follows. He first supposed that $\alpha_0 = \alpha$, $\alpha_1 = \beta + \gamma$, $\alpha_2 = \beta - \gamma$, and $\alpha_k = \beta$ for all $k = 3,4,\ldots, n/2-3$.
Then he set $\alpha = \frac{a\pi}{n} + \frac{t\pi}{n^2}$, $\beta = \frac{\pi}{n} + \frac{2(1-a)\pi}{n^2}$, and $\gamma = \frac{(2a-1)\pi}{4n} + \frac{(a+t-1)\pi}{2n^2}$, with \[ \begin{aligned} t &= \frac{4(7a^2-32a+25)}{44a+27} + (-1)^{\frac{n}{2}} \frac{15\pi(8a^3+12a^2-2a-3)}{32(44a+27)}\\ &= \frac{103104\sqrt{114}-998743}{200255} + (-1)^{\frac{n}{2}} \frac{15\pi(347\sqrt{114}-714)}{1762244}\\ &= \begin{cases} 0.429901\ldots &\text{if $n \equiv 2 \bmod 4$,}\\ 0.589862\ldots &\text{if $n \equiv 0 \bmod 4$.} \end{cases} \end{aligned} \] Note that we do not require $\alpha_{\frac{n}{2}-2} = \alpha_{\frac{n}{2}-1} = \beta$ in $\geo{M}_n$. The area of $\geo{M}_n$ is given by \[ \begin{aligned} A(\geo{M}_n) &= \frac{\pi}{4} - \frac{5\pi^3}{48n^2} - \frac{(5545-456\sqrt{114})\pi^3}{5808n^3} - \left(\frac{7(13817-1281\sqrt{114})}{10648} - \frac{\pi^2}{480}\right) \frac{\pi^3}{n^4}\\ &-\left(\frac{\pi^2(28622156724\sqrt{114}-177320884133)}{2251724893440} + \frac{182558364974-17072673147\sqrt{114}}{2326162080}\right.\\ &+ (-1)^\frac{n}{2}\left. \frac{45\pi(10124777-919131\sqrt{114})}{852926096}\right) \frac{\pi^3}{n^5} + O\left(\frac{1}{n^6}\right). \end{aligned} \] Therefore, \[ A(\geo{B}_n) - A(\geo{M}_n) = \frac{3d\pi^3}{n^5} + O\left(\frac{1}{n^6}\right) \] with \[ \begin{aligned} d &= \frac{25\pi^2(1747646-22523\sqrt{114})}{4691093528} + \frac{32717202988-3004706459\sqrt{114}}{29464719680}\\ &+ (-1)^{\frac{n}{2}} \frac{15\pi(10124777-919131\sqrt{114})}{852926096}\\ &= \begin{cases} 0.0836582354\ldots &\text{if $n \equiv 2 \bmod 4$,}\\ 0.1180393778\ldots &\text{if $n \equiv 0 \bmod 4$.} \end{cases} \end{aligned} \] We can also note that, for some parameter $u$, \[ A(\geo{B}_n) - f\left(\frac{a\pi}{n} + \frac{u\pi}{n^2}\right) = \begin{cases} \frac{(u-b)^2\pi^3\sqrt{114}}{8n^5} + O\left(\frac{1}{n^6}\right) &\text{if $u \not=b$,}\\ \frac{c^2\pi^3\sqrt{114}}{8n^7} + O\left(\frac{1}{n^8}\right) &\text{if $u =b$.} \end{cases} \] This completes the proof of Theorem~\ref{thm:Bn}.\qed Table~\ref{table:area} shows the areas of $\geo{B}_n$, along with the optimal values $\hat{\alpha}(n)$, the upper bounds $\ub{A}_n$, and the areas of $\geo{R}_n$, $\geo{R}_{n-1}^+$, and $\geo{M}_n$ for $n=2m$ and $3 \le m \le 12$. We also report the areas of the small $n$-gons $\geo{M}_n'$ obtained by setting $\alpha = \frac{a\pi}{n} + \frac{t\pi}{n^2}$ in~\eqref{eq:A:abc}, i.e., $A(\geo{M}_n') = f\left(\frac{a\pi}{n} + \frac{t\pi}{n^2}\right)$. Values in the table are rounded to the last printed digit. As suggested by Theorem~\ref{thm:Bn}, when $n$ is even, $\geo{B}_n$ provides a tighter lower bound on the maximal area $A_n^*$ than the best prior small $n$-gon~$\geo{M}_n$. For instance, we can note that $A(\geo{B}_{6}) = A_6^*$. We also remark that $A(\geo{M}_n) < A(\geo{M}_n')$ for all even $n\ge 8$.
\begin{table}[h] \footnotesize \centering \caption{Areas of $\geo{B}_n$} \label{table:area} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}rlllllll@{}} \toprule $n$ & $\hat{\alpha}(n)$ & $A(\geo{R}_n)$ & $A(\geo{R}_{n-1}^+)$ & $A(\geo{M}_n)$ & $A(\geo{M}_n')$ & $A(\geo{B}_n)$ & $\ub{A}_n$ \\ \midrule 6 & 0.3509301889 & 0.6495190528 & 0.6722882584 & 0.6731855653 & 0.6731855653 & 0.6749814429 & 0.6877007594 \\ 8 & 0.2649613582 & 0.7071067812 & 0.7253199909 & 0.7259763468 & 0.7264449921 & 0.7268542719 & 0.7318815691 \\ 10 & 0.2119285702 & 0.7347315654 & 0.7482573378 & 0.7490291363 & 0.7490910913 & 0.7491189262 & 0.7516135587 \\ 12 & 0.1762667716 & 0.7500000000 & 0.7601970055 & 0.7606471438 & 0.7606885130 & 0.7607153082 & 0.7621336536 \\ 14 & 0.1507443724 & 0.7592965435 & 0.7671877750 & 0.7675035228 & 0.7675178190 & 0.7675203660 & 0.7684036467 \\ 16 & 0.1316139556 & 0.7653668647 & 0.7716285345 & 0.7718386481 & 0.7718489998 & 0.7718535572 & 0.7724408116 \\ 18 & 0.1167583322 & 0.7695453225 & 0.7746235089 & 0.7747776809 & 0.7747819422 & 0.7747824059 & 0.7751926059 \\ 20 & 0.1048968391 & 0.7725424859 & 0.7767382147 & 0.7768497848 & 0.7768531741 & 0.7768543958 & 0.7771522071 \\ 22 & 0.0952114547 & 0.7747645313 & 0.7782865351 & 0.7783722564 & 0.7783738385 & 0.7783739622 & 0.7785970008 \\ 24 & 0.0871560675 & 0.7764571353 & 0.7794540033 & 0.7795196190 & 0.7795209668 & 0.7795213955 & 0.7796927566 \\ \bottomrule \end{tabular} } \end{table} All polygons presented in this work and in~\cite{bingane2021a,bingane2021b,bingane2021c,bingane2021d,bingane2021e} are implemented in a MATLAB package, OPTIGON~\cite{optigon}, which is freely available at \url{https://github.com/cbingane/optigon}. In OPTIGON, we provide MATLAB functions that give the coordinates of the vertices of each polygon. One can also find an algorithm developed in~\cite{bingane2021a} to estimate the maximal area of a small $n$-gon when $n \ge 6$ is even. \section{Conclusion}\label{sec:conclusion} Tighter lower bounds on the maximal area of small $n$-gons were provided when $n$ is even. For each $n=2m$ with integer $m\ge 3$, we constructed a small $n$-gon $\geo{B}_n$ whose area is the maximum value of a one-variable function. For all even $n\ge 6$, the area of $\geo{B}_n$ is greater than that of the best prior small $n$-gon constructed by Mossinghoff. Furthermore, for $n=6$, $\geo{B}_6$ is the largest small $6$-gon. \section*{Acknowledgements} The author thanks Charles Audet, Professor at Polytechnique Montr\'{e}al, for fruitful discussions on extremal small polygons and helpful comments on early drafts of this paper. \bibliographystyle{ieeetr}
\section{Introduction} \IEEEPARstart{W}{ith} the development of various emerging smart applications (e.g., augmented reality/virtual reality, autonomous driving and digital twin), the number of Internet of things (IoT) devices has increased explosively, and the massive data generated by these connected IoT devices have led to a surging demand for very high communication rates in future wireless communications, such as the projected sixth generation (6G) mobile networks. It is envisioned in \cite{iot_device} that by 2025, the number of active IoT devices will exceed 75 billion. These massive amounts of data can enable diverse intelligent services thanks to the recent advances in artificial intelligence (AI) and large-scale machine learning (ML). However, the data originating from massive IoT devices are commonly generated and stored in a distributed manner over wireless networks for a wide range of networked AI applications, e.g., smart grids \cite{smart_grid}, remote health monitoring \cite{remote_health}, etc. Due to the limited wireless communication resources as well as privacy concerns, it is often inefficient or impractical to directly collect all raw data of devices at a central entity (e.g., the cloud). Alternatively, it is increasingly attractive to process data directly at edge clients for data analysis and inference by leveraging edge computing and intelligence, with data kept locally. Federated learning (FL), first coined by Google in 2016 \cite{fedavg}, is a paradigm of distributed ML that pushes the computation of AI applications into edge clients. FL therefore decouples the ability of ML from the need to reveal the data to a centralized location, which helps mitigate privacy and latency concerns. During the training process of FL, in which edge clients seek to train a common ML model, each client periodically transmits its locally derived model parameters to a central parameter server (PS). A set of global model parameters are updated in the PS according to aggregation strategies such as the federated averaging algorithm ($\textit{FedAvg}$) \cite{fedavg}, and the PS then sends its updated global model parameters to clients for their local model updates. Compared with traditional data-sharing based collaborative learning, both communication efficiency and user data privacy are significantly improved in FL. Since the ML parameters are frequently exchanged between the PS and the edge clients over a wireless network, the performance of FL is largely constrained by the properties of wireless communication networks, which can be unstable and may even fluctuate significantly over time because of the limited wireless resources (e.g., bandwidth) and unreliable wireless channels. This calls for a new design principle for FL from both learning and wireless communication perspectives. \subsection{Prior Work} Since the proposal of FL \cite{nilsson2018performance, fedavg}, there has been an increasing number of studies related to the implementation of FL over wireless networks \cite{bonawitz2019towards, zhu2020toward, 9062302}. Specifically, the authors in \cite{bonawitz2019towards} report on a system design of FL algorithms in the domain of Android mobile phones and sketch the challenges and corresponding solutions.
Despite the advantages of FL in terms of communication overheads and user data privacy over traditional data-sharing based collaborative learning, the implementation of FL over wireless networks still suffers from bottlenecks. More specifically, since multiple communication rounds are required to reach a desired ML accuracy, especially when the number of participating clients is comparatively large, the communication costs incurred by unreliable wireless transmission become non-negligible in wireless FL systems. To reduce communication overhead in distributed ML, various learning algorithms have been proposed in recent years \cite{prakash2020coded, codeadmm, quantization1, fl_quan, sparsfication, sparsfication2, chen2018lag, hosein, lin2017deep}. Among these efforts, one research direction is to reduce the communication footprint in the uploading phase to make the model training communication efficient. Typical approaches in this direction include i) compressing the uploaded gradients via coding \cite{prakash2020coded, codeadmm}, quantization \cite{quantization1, fl_quan} and sparsification \cite{sparsfication, sparsfication2}, ii) limiting the model sharing by only updating clients with significant training improvement \cite{chen2018lag, hosein}, and iii) accelerating the training process by adopting a momentum method in the sparse update \cite{lin2017deep}. More specifically, a lazily aggregated gradient approach is proposed in \cite{chen2018lag} to skip unnecessary uploads, in which communication censoring schemes are developed to avoid less informative local updates so as to reduce the communication burden. It is also worth mentioning that the impact of network resources on the learning performance is not considered in any of those methods. In addition to communication overhead, another line of work has focused on resource allocation in order to optimize the FL learning performance \cite{shi2020joint, zeng2020energy, nishio2019client, amiri2020update, yang2019scheduling, chen2020joint, chen2020convergence, xia2020multi, 8793221, amiri2021federated, 9187874, incentive_FL, chen2021communication}. To improve wireless network efficiency, considerable research has been carried out along two main directions: admission control and device scheduling \cite{nishio2019client, amiri2020update, yang2019scheduling, chen2020joint, chen2020convergence, xia2020multi, 8793221, amiri2021federated}, and resource management (e.g., spectrum and power) \cite{chen2020joint, chen2020convergence}. In \cite{nishio2019client}, a new FedCS protocol is proposed to schedule as many devices as possible in a limited time frame. Another device scheduling policy is proposed in \cite{amiri2020update}, in which the channel conditions and the significance of the local model updates are jointly considered. Nonetheless, these proposed policies are only evaluated via experiments, and their convergence performance has not been theoretically analyzed. To characterize the performance of FL in wireless networks, an analytical model of the FL convergence rate has been developed in \cite{yang2019scheduling}, and the impact of three different client scheduling policies, i.e., random scheduling, round-robin, and proportional fair, has been evaluated.
By building a connection between the wireless resource allocation and the FL learning performance, the authors in \cite{chen2020joint, chen2020convergence} propose to optimize the user selection and power allocation to minimize the FL training loss. Despite all these results, the existing methods often still involve high overhead in both computation and communication, especially for large-scale ML. \subsection{Contributions} Motivated by the above observations, we investigate FL with limited wireless resources. We study the problem of jointly optimizing resource allocation and learning performance to reduce communication costs and improve learning accuracy in wireless FL systems. Different from existing results, in what follows, we will study FL over wireless {\color{black}IoT networks} from the aspect of communication efficiency and wireless resource optimization co-design. Particularly, a communication-efficient federated learning (CEFL) scheme is proposed for wireless FL systems, jointly taking communication efficiency and resource optimization into account. The main contributions of our work can be summarized as follows: \begin{itemize} \item We aim at communication-efficient FL over wireless {\color{black}IoT} networks with limited resources. The joint optimization problem on communication efficiency and resource allocation is first formulated and then decoupled into a client scheduling sub-problem and a resource allocation sub-problem considering both bandwidth and power constraints. \item To reduce the communication costs of FL in wireless {\color{black}IoT} networks, a communication-efficient client scheduling policy is proposed by limiting communication exchanges and reusing stale local model parameters. To optimize the resource allocation at each communication round of FL training, the Lagrange multiplier method is leveraged to reformulate the resource optimization problem, and an optimal solution based on a linear search method is then derived. \item We investigate the convergence and communication properties of the proposed CEFL algorithm both analytically and by simulation. Given a proper hyper-parameter, we show that CEFL achieves a linear convergence rate with a communication load of $O\left( \log{ \frac{1}{\epsilon}}\right )$, where $\epsilon$ is the target accuracy. In addition, the relation between the learning performance and wireless resources, namely bandwidth and power, is theoretically analyzed. Experimental results also indicate that the proposed framework is communication efficient and resource optimized over wireless {\color{black}IoT} networks. Our CEFL algorithm outperforms the vanilla FL approach in both communication overhead and training and test performance. \end{itemize} The rest of this paper is structured as follows. Section \ref{system_model} describes the system model, while Section \ref{proposed_algorithm} discusses the design of our proposed FL algorithm optimized for the underlying wireless {\color{black}IoT} network. In Section \ref{section:analysis}, we characterize the performance of our proposed framework over a wireless channel, which is validated via experiments in Section \ref{sec:results}. We conclude the paper in Section \ref{conclusion}, and technical proofs are provided in the Appendix. \subsection{Notation} We adopt the following notation in this paper. We denote expectation by $\mathbb{E}\left[\cdot\right]$. $|\cdot|$ is the absolute value. $\norm{\cdot}$ denotes the $\boldsymbol{\ell}_2$-norm of a vector.
$| \mathcal{N}_e^t |$ represents the cardinality of set $\mathcal{N}_e^t$. $\lceil{\cdot} \rceil$ is the ceiling function. $\nabla f(\cdot)$ denotes the gradient of a function $f$. In addition, $\langle \cdot, \cdot \rangle $ denotes the inner product in a finite dimensional Euclidean space. $\bm{w}^*\in \mathcal{X}$ denotes the optimal solution to (\ref{eq:fl_ori}), where $\mathcal{X}$ is the domain. In addition, we define $ G_{\mathcal{X}} \overset{\Delta}{=} \mathop{ \text{sup}}\nolimits_{\bm{w}_x, \bm{w}_y \in \mathcal{X}} \norm{\bm{w}_x - \bm{w}_y}$. \section{System Model and Problem Formulation}\label{system_model} \begin{figure} [b] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=88mm]{fig1_archi_v7.pdf}} \caption{An example of FL over the {\color{black} wireless IoT} network with multiple clients and a BS.} \label{fig_network} \end{center} \vskip -0.2in \end{figure} In this section, we describe the framework of FL over wireless multi-client systems. We discuss the network model, learning model, communication model, and problem formulation. For ease of illustration, the notations used frequently in this paper are summarized in Table \ref{tab:table_parameters}. \begin{table*}[t] \caption{Summary of Main Notation (at communication round $t$)} \label{tab:table_parameters} \centering \fontsize{9}{8}\selectfont \begin{tabular}{|c|c|c|c|} \hline Notation & Description & Notation & Description \\ \hline \hline $\mathcal{N}$ & Set of clients, $\mathcal{N}=\{1,...,N\}$ & $B^t$ & Total available bandwidth resource\\ \hline $\mathcal{N}_e^t$ & Set of scheduled clients & $\bm{B}^t$ & Bandwidth allocation vector\\ \hline $\mathcal{N}_c^t$ & Set of inactive clients& $\bm{P}^t$ & Power allocation vector\\ \hline $\mathcal{D}_i$ & Set of private data in client $i$&$P_i^{\text{max}}$ & Maximum power limit of client $i$ \\ \hline $D_i$ & Number of training samples in client $i$ & $P_i^{\text{min}}$ & Minimum power limit of client $i$ \\ \hline $D$ & Total number of training data samples, $\sum_{i \in \mathcal{N}} D_i =D$ &$c_i^t$ & Transmission rate of client $i$ \\ \hline $\bm{w}^t$ & Global FL model & $\bm{a}^t$ & Transmission indicating vector \\ \hline $\reallywidehat{\bm{w}}_i^{t}$ &Local copy of the local FL model of client $i$ at PS & $\Gamma^t$ & Time threshold \\ \hline $\widetilde{\bm{w}}_i^t$ & Local copy of the global FL model at client $i$ &$S$ &Transmitted packet size \\ \hline $\bm{\xi}_{i,l}$ & Training sample $l$ of client $i$, $\mathcal{D}_i=\{ \bm{\xi}_{i,1},...,\bm{\xi}_{i,D_i} \}$ &$\tau_{i}^t$ & Communication time of client $i$ \\ \hline \end{tabular} \end{table*} \subsection{Network Model} As depicted in Fig. \ref{fig_network}, we consider a general one-hop {\color{black}FL-supported wireless IoT} network with a base station (BS) and $N$ distributed clients denoted as the set $\mathcal{N}=\{1,\ldots, N \}$. In this system, the BS directly connects to the PS, which is equipped with computational resources to provide communication and computation services to the clients. The clients represent {\color{black}IoT} devices gathering data for an FL task, such as sensors, mobile devices, or organizations, which communicate with the BS via wireless links. We assume that each client $i$ collects measurement data and owns a fraction of the labeled training samples, denoted as $\mathcal{D}_i=\{\bm{\xi}_{i,l}\}_{l=1}^{D_i}$ with $D_i = |\mathcal{D}_i|$ data samples and $\bm{\xi}_{i,l}$ representing the $l$-th training sample at client $i, \forall i \in \mathcal{N}$.
The whole dataset is thus denoted by $\mathcal{D} = \bigcup_{i\in \mathcal{N} } \mathcal{D}_i $ with the total number of data samples $D = \sum_{i=1}^{N} D_i$. We consider training an ML model of interest over this network (e.g., a classifier), where the PS and clients collaboratively build a shared model for data analysis and inference by exchanging model-parameter information while keeping all the data local. \subsection{Federated Learning Process} In the FL system, a global learning model of interest is trained in a distributed manner among geographically dispersed clients and then aggregated at a central server (i.e., the PS). The goal of the training process is to find a model parameter $\bm{w} \in \mathbbm{R}^d$ that minimizes a loss function $f(\bm{w})$ on the whole dataset $\mathcal{D}$. The global learning objective of the network can be expressed as \begin{align} \label{eq:fl_ori} \min_{\bm{w}} \left \{f(\bm{w})\right \} &\overset{\Delta}{=} \min_{\bm{w}} \frac{1}{D} \sum _{i=1}^{N} \sum_{l=1}^{D_i}F_i(\bm{w},\bm{\xi}_{i,l})\notag\\ & = \min_{\bm{w}} \sum_{i= 1}^{N} \frac{D_i}{D} f_i(\bm{w}), \end{align} where the local loss function $f_i(\bm{w})$ of client $i$ is defined as $f_i(\bm{w})\overset{\Delta}{=} \frac{1}{D_i}\sum _{l=1}^{D_i} F_i(\bm{w},\bm{\xi}_{i,l} )$ and $F_i(\bm{w},\bm{\xi}_{i,l} )$ characterizes the loss of the model parameter $\bm{w}$ on the training sample $\bm{\xi}_{i,l} $. Our analysis is based on the widely used federated averaging ($\textit{FedAvg}$) algorithm \cite{fedavg}. The whole training process is periodic, with a number of communication rounds (denoted as $T$), each of which has $E$ local epochs. The $t$-th communication round is described by the following phases: \begin{enumerate} \item \textbf{Broadcasting phase:} The PS (located in the BS) wirelessly broadcasts the global model parameter $\bm{w}^t$ to all clients in the $t$-th round; \item \textbf{Local updating phase:} After receiving the global model parameter, each client $i \in \mathcal{N}$ trains its local model $\bm{w}_i^{t+1}$ by applying $E$ epochs of the gradient descent (GD) method, i.e., \begin{align} \label{eq:gd_method} \bm{w}_i^{t+1}=\bm{w}^t - \eta \nabla f_i(\bm{w}^t), \end{align} where $\eta$ is the learning rate and $\nabla f_i(\bm{w}^t)$ is the gradient of the local loss function. Then client $i$ uploads its updated local parameter $\bm{w}_i^{t+1}$ back to the PS. It is noted that alternative methods, such as stochastic gradient descent (SGD), can also be used for local updates; \item \textbf{Aggregating and averaging phase:} In the general FL framework, once it receives all the local model parameters, the PS aggregates them and obtains an updated global model by \begin{align} \bm{w}^{t+1} = \sum _{i=1}^{N} \frac{D_i}{D} \bm{w}_i^{t+1}. \end{align} \end{enumerate} The FL learning process implies that the FL model parameters are iteratively exchanged between the edge clients and the PS over wireless networks. \subsection{Communication Model} We consider FL over a wireless medium with limited bandwidth and power. After local training, clients upload their local FL models to the BS via frequency-division multiple access (FDMA). Accordingly, the achievable rate of client $i$ at the $t$-th communication round is given by \begin{equation} c_i^t=B_i^t \log_2\left(1+ \frac{P_i^t |h_i^t|^2}{B_i^t N_0}\right), \end{equation} where $B_i^t $ and $P_i^t$ are the allocated bandwidth and transmission power of client $i$, respectively.
$|h_i^t|^2$ denotes the corresponding single-carrier block-fading channel gain, and $N_0$ denotes the noise power spectral density. For simplicity, it is also assumed that for client $i$, $\bm{w}_i^{t+1}$ is transmitted as a single packet in the uplink. Denoting by $S$ the packet size of the transmitted FL model, as in \cite{chen2020joint}, the communication time from client $i$ to the BS is then given by \begin{equation} \label{eq:S_ci} \tau_{i}^t = \frac{S}{c_i^t}. \end{equation} Since the transmit power of the BS is generally much higher than that of the clients and the whole downlink bandwidth can be utilized to broadcast the global model $\bm{w}^t$, the latency of downlink transmission is ignored to simplify the illustration \cite{infocom_NNguyen, 9261995, energy_fl_chen}. Moreover, to capture the effect of random channel variations on the transmission of each local model parameter $\bm{w}_i^t$, we declare a transmission failure if $\tau_i^t > \Gamma^t$ for a given time budget $\Gamma^t $, and the corresponding outage probability is defined as: \begin{equation} p_i^t = \Pr (\tau_i^t > \Gamma^t). \end{equation} \subsection{Problem Formulation} To achieve fast learning, the FL training process typically schedules as many clients as possible at each communication round \cite{low_latency_FL}. However, it is undesirable for all clients engaged in learning to transmit their fresh local FL models to the PS, especially when the updates are conveyed over a wireless medium with limited resources (e.g., transmit power and network bandwidth). Having more clients scheduled and uploading local models simultaneously can result in large communication overheads, more unstable connections, and higher latency, which inevitably degrade the learning accuracy. To this end, we aim for an optimal joint client scheduling and resource allocation scheme in each communication round to pursue the best learning performance. By denoting the transmission indicating vector as $\bm{a}^t$, we formulate the following optimization problem with the objective of optimizing both communication and resources for FL over wireless {\color{black}IoT} networks: \begin{subequations}\label{eq:opti1} \begin{align} (\text{P-0} )& \max_{\bm{a}^t, \bm{B}^t, \bm{P}^t} {\sum_{i\in \mathcal{N}} a_i^t} \tag{\ref{eq:opti1}} \label{eq:op_objective}\\ \textrm{s.t.~~~} &P_i^{\text{min}} \leq P_i^t \leq P_i^{\text{max}}, \forall i \in \mathcal{N},\label{eq:op_const10}\\ & \sum_{i=1}^{N} B_i^t \leq B^t, \forall i \in \mathcal{N},\label{eq:op_const20}\\ &a_i^t=\left\{\begin{aligned} &1, \tau_{i}^t \leq \Gamma^t,\\ &0, \tau_{i}^t > \Gamma^t, \forall i \in \mathcal{N}, \end{aligned} \right . \label{eq:op_const50} \end{align} \end{subequations} where the objective of (P-0) is to maximize the utilization of transmitted FL parameters (i.e., the number of successful transmissions) while sustaining the learning performance. Constraints (\ref{eq:op_const10}) and (\ref{eq:op_const20}) are the feasibility conditions on the power allocation of clients and the bandwidth limits, respectively. Constraint (\ref{eq:op_const50}) represents the successful transmission condition. Here $a_i^t = 1 $ represents the successful transmission of the fresh local model $\bm{w}_i^{t+1}$ from client $i$; otherwise, we have $a_i^t =0 $.
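To make the communication model concrete, the following minimal Python sketch (our own illustration with toy numbers; none of these values come from the paper) evaluates the achievable rate, the resulting upload time, and the success indicator used in (P-0):
\begin{verbatim}
import numpy as np

def comm_time(B, P, h2, N0, S):
    """Upload time tau = S/c with c = B*log2(1 + P*h2/(B*N0))."""
    c = B*np.log2(1.0 + P*h2/(B*N0))
    return S/c

# toy values (ours): 50 kbit model, 100 kHz, 100 mW, |h|^2 = 1e-3
S, N0, Gamma = 5e4, 1e-9, 0.5
tau = comm_time(B=1e5, P=0.1, h2=1e-3, N0=N0, S=S)
a = 1 if tau <= Gamma else 0   # success indicator of the constraint
print(tau, a)                  # ~0.05 s here, so a = 1
\end{verbatim}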
\section{Federated Learning Algorithm Design over Wireless {\color{black}IoT} Networks} \label{proposed_algorithm} (P-0) is a non-convex optimization problem due to its binary variables and the nonconvexity of constraint (\ref{eq:op_const50}). To solve problem (P-0), we decompose it into two sub-problems, i.e., i) determining the client scheduling policy at each communication round, and ii) deciding the optimal resource allocation scheme for the clients that have been selected in sub-problem i). We refer to the first subproblem as the \textit{client scheduling} problem and the second subproblem as the \textit{resource allocation} problem. \subsection{Client Scheduling} With the rapid development of integrated circuits, the local computation time can be several orders of magnitude shorter than the communication time between the clients and the PS~\cite{ic}. The vanilla FL framework can lead to large communication overheads (e.g., communication time), and it can be inefficient to sequentially collect the trained models from all clients before global aggregation and averaging \cite{client_sel}. Accordingly, a subgroup of clients can be actively selected to transmit their local FL models simultaneously, thereby improving communication efficiency and reducing communication latency. To this end, the client scheduling policy plays a crucial role in the FL process, especially when wireless resources are limited. With the goal of reducing communication overheads per communication round, a communication-efficient client selection policy will be developed below. For the $\textit{FedAvg}$ method in (\ref{eq:fl_ori}), in communication round $t$, after receiving the global model parameters $\bm{w}^t$ from the PS, every client $i \in \mathcal{N}$ updates its local parameter $\bm{w}_i^{t+1}$ via (\ref{eq:gd_method}) for $E$ epochs and is activated to feed the updated $\bm{w}_i^{t+1}$ back to the PS. Instead of requesting fresh local model parameters from all clients in (\ref{eq:gd_method}), our client scheduling policy runs as follows. During each communication round $t$, a client with an informative update (i.e., $\bm{w}_i^t$) is enabled to upload its new model parameters only if the following selection criterion is met: \begin{align} \label{eq:comm_cond} N^2 \eta^2 \norm{\nabla f_i(\bm{w}^t) - \nabla f_i(\widetilde{\bm{w}}_i^{t})}^2 \geq \sum _{k=1}^K \delta_k \norm{\bm{w}^{t+1-k}- \bm{w}^{t-k}}^2, \end{align} where $\{ \delta_k \}_{k=1}^K$ and $K$ are pre-defined constants, and $\nabla f_i(\bm{w}^t) - \nabla f_i(\widetilde{\bm{w}}_i^{t})$ is the difference between two evaluations of $\nabla f_i(\bm{w})$, at the current model parameter $\bm{w}^t$ and at the previous model parameter $\widetilde{\bm{w}}_i^{t}$. This condition compares the new local gradient to the stale copy at the client: only when the gradient difference is larger than the recent changes in $\bm{w}$ will the new local model be transmitted. Otherwise, the PS will reuse its stale copy. In addition, to avoid a client remaining inactive for a long time, we force client $i$ to upload its local model parameters $\bm{w}_i^t$ to the PS if it has not transmitted fresh model parameters during the past $T_0$ communication rounds. To this end, we set a clock $T_i, i \in \mathcal{N}$ for each client $i$, counting the number of inactive communication rounds since the last time it uploaded its local model. Thus, it always holds that \begin{align} \label{eq:time_cond} T_i \leq T_0, \forall i \in \mathcal{N}.
\end{align} Once the fresh local model $\bm{w}_i^{t+1}$ in client $i$ satisfies the above conditions (\ref{eq:comm_cond}) and (\ref{eq:time_cond}), it will be uploaded to the PS, whilst the PS in the BS will reuse the outdated local model parameters of the remaining clients. { \color{black} We will prove in the next section that the algorithm based on the proposed client scheduling policy still converges at a linear rate and is communication efficient. } Then, on the PS, the current copy of $\bm{w}_i$ from client $i$, denoted by $\reallywidehat{\bm{w}}_i^{t+1}$, is updated as \begin{align} \label{eq:local_cpy} &\reallywidehat{\bm{w}}_i^{t+1}:=\left\{\begin{aligned} &\bm{w}_i^{t+1}, ~a_i^t = 1;\\ &\reallywidehat{\bm{w}}_i^{t}~~~,~a_i^t=0, \end{aligned} \right. \end{align} where $\reallywidehat{\bm{w}}_i^{t}$ is the local model of client $i$ from previous rounds. Here $a_i^t = 1 $ implies that the server receives the fresh local model $\bm{w}_i^{t+1}$ from client $i$; otherwise, we have $a_i^t =0 $. \subsection{Power and Bandwidth Allocation} Once \textit{client scheduling} is determined, the remaining subproblem is the bandwidth and power allocation among the scheduled clients. Given the set of scheduled clients $\mathcal{N}_e^t$ in the $t$-th communication round, the \textit{resource allocation} subproblem can be formulated as follows: \begin{subequations}\label{eq:opti1_1} \begin{align} (\text{P-1} )& \max_{\bm{a}^t, \bm{B}^t, \bm{P}^t } {\sum_{i\in \mathcal{N}_e^t} a_i^t} \tag{\ref{eq:opti1_1}} \label{eq:op_objective1}\\ \textrm{s.t.~~~} &P_i^{\text{min}} \leq P_i^t \leq P_i^{\text{max}}, \forall i \in \mathcal{N}_e^t,\label{eq:op_const1}\\ & \sum_{i=1}^{N} B_i^t \leq B^t, \forall i \in \mathcal{N}_e^t,\label{eq:op_const2}\\ &a_i^t=\left\{\begin{aligned} &1, \tau_{i}^t \leq \Gamma^t,\\ &0, \tau_{i}^t > \Gamma^t, \forall i \in \mathcal{N}_e^t. \end{aligned} \right . \label{eq:op_const5} \end{align} \end{subequations} Problem (P-1) is a mixed integer non-linear programming (MINLP) problem due to the binary variables $\{a_i^t\}$ and continuous variables $\{B_i^t\}$ and $\{P_i^t\}$. By introducing a big-$M$ constant for constraint (\ref{eq:op_const5}), problem (P-1) can be equivalently rewritten as \begin{subequations}\label{eq:opti2} \begin{align} (\text{P-2} )& \min_{a_i^t \in \{0,1\}, \bm{B}^t, \bm{P}^t} {-\sum_{i\in \mathcal{N}_e^t} a_i^t} \tag{\ref{eq:opti2}} \label{eq:op_objective2}\\ \textrm{s.t.~~~} &(\ref{eq:op_const1})- (\ref{eq:op_const2}), \notag \\ & \tau_{i}^t \leq \Gamma^t + M \left( 1 - a_i^t \right),\label{eq:op_const22}\\ & \tau_{i}^t \geq \Gamma^t - M a_i^t. \label{eq:op_const23} \end{align} \end{subequations} By relaxing each binary variable $a_i^t \in \{ 0, 1\}$ to a continuous variable $\widetilde{a}_i^{t} \in [0,1], \forall i \in \mathcal{N}_e^t$, we simplify (P-2) to a non-linear programming problem, given by \begin{subequations}\label{eq:opti3} \begin{align} (\text{P-3} )& \min_{\widetilde{a}_i^{t} \in [0,1], \bm{B}^t, \bm{P}^t} {-\sum_{i\in \mathcal{N}_e^t} \widetilde{a}_i^{t}} \tag{\ref{eq:opti3}} \label{eq:op_objective3}\\ \textrm{s.t.~~~} &(\ref{eq:op_const1})- (\ref{eq:op_const2}), \notag \\ & \tau_{i}^t \leq \Gamma^t + M \left( 1 - \widetilde{a}_i^{t} \right),\label{eq:op_const32}\\ & \tau_{i}^t \geq \Gamma^t - M \widetilde{a}_i^{t}. \label{eq:op_const33} \end{align} \end{subequations} (P-3) is still non-convex, and we resort to the Karush-Kuhn-Tucker (KKT) conditions for building the relation between bandwidth and power allocation.
More specifically, we first construct the associated Lagrangian function of (P-3) as follows: \begin{align}\label{eq:p3_La} &\mathcal{L}_{\rho}(\bm{\widetilde{a}}^{t},\bm{B}^t, \bm{P}^t)\notag\\ &= -\sum_{i\in \mathcal{N}_e^t} \widetilde{a}_i^{t} + \sum_{i\in \mathcal{N}_e^t} x_i \left ( \widetilde{a}_i^{t} -1 \right) +\sum_{i\in \mathcal{N}_e^t} y_i \left (- \widetilde{a}_i^{t} \right) \notag\\ &\quad+ \mu \left ( \sum_{i\in \mathcal{N}_e^t} B_i^t - B^t \right) + \sum_{i\in \mathcal{N}_e^t} p_i \left ( P_i^{\text{min}} - P_i^t \right) \notag\\ &\quad+ \sum_{i\in \mathcal{N}_e^t} l_i \left ( P_i^t - P_i^{\text{max}} \right)+ \sum _{i\in \mathcal{N}_e^t} \lambda_i \left( \frac{S}{c_i^t} - \Gamma^t + M\left( \widetilde{a}_i^{t} - 1 \right) \right)\notag\\ &\quad+ \sum _{i\in \mathcal{N}_e^t} \upsilon_i \left( \Gamma^t - M\widetilde{a}_i^{t} - \frac{S}{c_i^t} \right), \end{align} where $\{x_i, y_i, p_i, l_i, \lambda_i, \upsilon_i, \mu | i\in \mathcal{N}_e^t \}$ are nonnegative Lagrangian multipliers. The KKT conditions for (P-3) are written as \begin{align} &\frac{\partial \mathcal{L}}{\partial \widetilde{a}_i^{t}} = -1 + x_i - y_i + \left(\lambda_i -\upsilon_i\right) M = 0,\label{eq:kkt_con1}\\ & 0 \leqslant \widetilde{a}_i \leqslant 1, \label{eq:a_range}\\ & x_i \left ( \widetilde{a}_i -1 \right) = 0,\label{eq:kkt_con2}\\ & y_i \left (- \widetilde{a}_i \right) = 0,\label{eq:kkt_con3}\\ &\lambda_i \left( \frac{S}{c_i} - \Gamma^t + M\left( \widetilde{a}_i - 1 \right) \right) =0,\label{eq:kkt_con4}\\ & \upsilon_i \left( \Gamma^t - M\widetilde{a}_i - \frac{S}{c_i} \right) = 0,\label{eq:kkt_con5} \forall i \in \mathcal{N}_e^t, \end{align} where the solution pair of primal vectors and dual vectors is denoted as ($\bm{\widetilde{a}^{*}}, \bm{B^{*}}, \bm{P^{*}}$) and ($\bm{x^*}, \bm{y^*}, \bm{p^*}, \bm{l^*}, \bm{\lambda^*}, \bm{\upsilon^*}, \mu^* $). Then, according to (\ref{eq:kkt_con1})-(\ref{eq:kkt_con5}), two lemmas can be obtained, as detailed below. \begin{lemma}\label{lemma1_new} Given the solution ($\bm{\widetilde{a}^{*}}, \bm{B^{*}}, \bm{P^{*}}$) of (P-3), the relation among transmission indicator, bandwidth, and power allocation is given by \begin{align} \frac{S}{c_i^{*}} = \Gamma^t + M\left( 1- \widetilde{a}_i^{*} \right), \forall i \in \mathcal{N}_e^t, \label{eq:proposition} \end{align} where $c_i^{*}$ is defined by $c_i^{*}=B_i^{*} \log_2\left(1+ \frac{P_i^{*} |h_i^t|^2}{B_i^{*} N_0}\right)$. \end{lemma} \begin{IEEEproof} To prove this lemma, we split $\{\widetilde{a}_i^{*}\}$ into three cases and show that (\ref{eq:proposition}) is valid in every case. Specifically, \textbf{Case} 1) If $0<\widetilde{a}_i^{*}<1 $ holds, it is easy to obtain $x_i^* = y_i^* = 0$, and $\left(\lambda_i^* -\upsilon _i^*\right) M = 1$, based on (\ref{eq:kkt_con1})-(\ref{eq:kkt_con5}). Thus, $\lambda_i^*>0, \upsilon_i^*=0$, and $\frac{S}{c_i^{*}} = \Gamma^t + M\left( 1- \widetilde{a}_i^{*} \right)$ are achieved, $\forall i \in \mathcal{N}_e^t$; \textbf{Case} 2) If $\widetilde{a}_i^{*}=0$ holds, we obtain $x_i^* = 0, \left(\lambda_i^* -\upsilon _i^*\right) M = 1 + y_i^*$, followed by $\lambda_i^*>0, \upsilon_i^* = 0$, and $\frac{S}{c_i^{*}} = \Gamma^t + M, \forall i \in \mathcal{N}_e^t$; \textbf{Case} 3) If $\widetilde{a}_i^{*}=1$ holds, the following are implied: $y_i^*=0,~ \upsilon_i^*=0,~ x_i^* + M \lambda_i^* = 1 ,~\frac{S}{c_i^*} \leq \Gamma^t$, and $\lambda_i^*\left( \frac{S}{c_i^*} - \Gamma^t \right)=0$.
In this case, though $\frac{S}{c_i^*} < \Gamma^t$ with $\lambda_i^*=0$ satisfies the KKT conditions, such a solution consumes more resources than the solution with $\frac{S}{c_i^*} = \Gamma^t$ and $\lambda_i^* \geq 0$. Since we aim to achieve a resource-optimized FL, we choose $\frac{S}{c_i^*} = \Gamma^t$, which proves that (\ref{eq:proposition}) holds as well. This completes the proof. \end{IEEEproof} \begin{remark} (P-3) is a continuous relaxation of the MINLP problem (P-1) or (P-2), and the proof of Lemma \ref{lemma1_new} shows that (\ref{eq:proposition}) also holds for both $\widetilde{a}_i^{*}=0$ and $\widetilde{a}_i^{*}=1$. Thus, the relation among transmission indicator, allocated bandwidth, and transmission power is also applicable to the initial problem (P-1) or (P-2), i.e., \begin{align} \frac{S}{c_i^{*}} = \Gamma^t + M\left( 1- a_i^{*} \right), \forall i \in \mathcal{N}_e^t, \label{eq:remark1} \end{align} although it is an MINLP problem. Particularly, for scheduled clients with successful transmission $a_i^{*}=1$, we have $S / c_i^{*} =\Gamma^t$. In addition, with the allocated power and transmission indicator, the bandwidth allocation can be obtained directly via (\ref{eq:remark1}). \end{remark} \begin{algorithm}[t] \caption{Linear Search Method for Resource Allocation} \label{algorithm:rsa_lsm} \begin{algorithmic}[1] \STATE \textbf{Input}: $\mathcal{N}_e^t$, $B^t$, $\{ h_i^t, P_i^{\text{min}}, P_i^{\text{max}} | \forall i \in \mathcal{N}_e^t \}$; \STATE \textbf{Initialize}: $U_{nc}^t = \left\{ i| i \in \mathcal{N}_e^t \right \}, U_{c}^t = \emptyset$; \STATE sort the channel gains $|h_i^t|^2$ in descending order as \begin{align} \label{eq:sort_order} |h_1^t|^2 \geq |h_2^t|^2\geq ... \geq|h_i^t|^2\geq...\geq |h_{|\mathcal{N}_e^t|}^t|^2, \forall i \in \mathcal{N}_e^t; \end{align} \WHILE{$\sum_{i \in U_c^t}B_i^t \leq B^t$ and $U_{nc}^t \neq \emptyset$} \STATE find the client $i$ whose channel gain is maximum in $U_{nc}^t$ as: \begin{align} &i = \argmax_{i \in U_{nc}^t} \{ |h_i^t|^2 \}; \end{align} \STATE set ${a}_i^{t} = 1$, allocate power for client $i$ as $P_i^{t}=P_i^{\text{max}}$, and compute its required bandwidth $B_i^{t}$ via (\ref{eq:remark1}); \STATE assign \begin{align} &U_{nc}^t = U_{nc}^t \setminus \{i\}, U_{c}^t = U_{c}^t \cup \{ i \} ; \notag \end{align} \ENDWHILE \FOR{client $i \in U_{nc}^t$} \STATE set ${a}_i^{t} = 0$ and do not allocate network resources; \ENDFOR \end{algorithmic} \end{algorithm} \begin{lemma}\label{lemma2_new} Given the selected client $i (i \in \mathcal{N}_e^t) $ in communication round $t$, its communication time $\tau_i^t$ is a decreasing function of $P_i^t$, $B_i^t$, and $|h_i^{t}|^2$. \end{lemma} \begin{IEEEproof} The first derivative of $c_i^t$ with respect to $P_i^t$ can be proven to be positive, so $c_i^t$ is an increasing function of $P_i^t$. According to (\ref{eq:S_ci}), $\tau_i^t$ is then a decreasing function of the power $P_i^t$. The same argument works for $B_i^t$ and $|h_i^{t}|^2$, which proves Lemma \ref{lemma2_new}. \end{IEEEproof} Lemma \ref{lemma2_new} suggests that allocating a larger transmission power $P_i^t$ contributes to a shorter communication time $\tau_i^t$, which reduces the outage probability of transmission per communication round. In \textbf{Algorithm} \ref{algorithm:rsa_lsm}, we summarize our proposed resource allocation approach, i.e., the linear search (LS) algorithm, which is based directly on these two lemmas.
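For concreteness, a compact Python sketch of the LS method follows (our own illustration, assuming NumPy; the helper names are hypothetical). Per Lemma~\ref{lemma1_new}, a served client transmits at full power and receives the smallest bandwidth meeting the deadline with equality, $S/c_i^t = \Gamma^t$, which we solve here by bisection since $c_i^t$ is increasing in $B_i^t$ (Lemma~\ref{lemma2_new}):
\begin{verbatim}
import numpy as np

def min_bandwidth(P, h2, N0, S, Gamma, B_hi=1e9):
    """Smallest B with B*log2(1 + P*h2/(B*N0)) = S/Gamma (bisection).
    Returns None if the deadline is unreachable even with B_hi."""
    rate = lambda B: B*np.log2(1.0 + P*h2/(B*N0))
    target = S/Gamma
    if rate(B_hi) < target:
        return None
    lo, hi = 1e-9, B_hi
    for _ in range(200):            # rate(B) is increasing in B
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if rate(mid) < target else (lo, mid)
    return hi

def ls_allocate(h2, P_max, B_total, N0, S, Gamma):
    """Greedy LS sketch: serve scheduled clients in decreasing
    channel gain, each at full power with minimum bandwidth."""
    a = np.zeros(len(h2), dtype=int)
    B = np.zeros(len(h2))
    used = 0.0
    for i in np.argsort(h2)[::-1]:  # descending |h_i|^2
        Bi = min_bandwidth(P_max[i], h2[i], N0, S, Gamma)
        if Bi is None or used + Bi > B_total:
            break                   # worse channels need even more
        a[i], B[i], used = 1, Bi, used + Bi
    return a, B
\end{verbatim}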
In step 3, we first sort the channel gains of the scheduled clients ($\forall i \in \mathcal{N}_e^t$) in descending order, where the gain is denoted as $H \overset{\Delta}{=} |h_i^t|^2, \forall i \in \mathcal{N}_e^t$. Then, for the client $i$ with the maximum channel gain in the un-allocated set $U_{nc}^t (i \in U_{nc}^t)$, we allocate its required power $P_i^t$ and bandwidth $B_i^t$ before moving it into the allocated set $U_{c}^t$ via steps 4-8. The above steps are repeated until all scheduled clients are considered or all the available bandwidth and power resources are used up. It is noted that, according to the proposed LS method, the clients remaining in the un-allocated set $U_{nc}^t$ will not be allocated bandwidth or transmission power resources. Next, we give the performance analysis for the proposed LS method. \begin{theorem}[Optimal solution] \label{theorem:1} The proposed Algorithm 1 provides an optimal solution for problem (P-1). \end{theorem} \begin{IEEEproof} {\color{black} The objective of problem (P-1) is to maximize the number of successful transmissions. Suppose that the solution based on Algorithm 1 can serve at most $K^*$ clients to successfully transmit their local FL models. For convenience of explanation, based on (\ref{eq:sort_order}), we assume the $K^*$ clients from $U_0^t=\{1,...,K^*\}$ are selected, and the transmission indicating vector can be denoted as $\bm{a}^*=\{\underbrace{1,...,1,...,1}_{K^*},\underbrace{0,...,0}_{N-K^*}\}$. With Lemmas \ref{lemma1_new} and \ref{lemma2_new}, we obtain \begin{align} \label{eq: bw_opt} P_k^t:=\left\{\begin{aligned} &P_k^{\text{max}}, k \leq K^*,\\ &P_k^{\text{min}},\text{otherwise}; \end{aligned} \right. \sum _{k=1}^{K^*} B_k^t \leq B^t; \sum _{k=1}^{K^* + 1} B_k^t > B^t, \end{align} where $B_1^t\leq ...\leq B_{K^*}^t \leq B_{K^*+ 1}^t \leq ... \leq B_N^t$. Clearly, to support $K^*$ clients in successfully transmitting their models, the resource allocation based on Algorithm 1 consumes the minimum bandwidth. Now assume that there exists another allocation scheme in which $(K^* + 1)$ fresh local FL models can be successfully transmitted to the PS. Denote the active clients by $U_{1}^t (U_{1}^t \subset \mathcal{N}_e^t, |U_{1}^t|= K^*+1)$; then the following holds \begin{align} \label{eq:scheme_u} P_l^{\text{min}} \leq P_l^t \leq P_l^{\text{max}}, \forall l \in U_{1}^t; \sum _{l \in U_{1}^t, |U_{1}^t|= K^*+1} B_l^t \leq B^t. \end{align} In (\ref{eq:scheme_u}), one possible solution can be $\{P_l^{'} =P_l^{\text{max}}, B_l^{'} \leq B_l^t,\forall l \in U_{1}^t\}$ due to Lemma \ref{lemma2_new}, in which $\sum_{l \in U_{1}^t} B_l^{'} \leq \sum_{l \in U_{1}^t} B_l^t$ holds. Then we would have \begin{align} B^t \geq \sum_{l \in U_{1}^t} B_l^t \geq \sum_{l \in U_{1}^t} B_l^{'} \geq \sum_ {k=1}^{K^* + 1} B_k^t, \end{align} which contradicts (\ref{eq: bw_opt}). Thus, the proposed Algorithm 1 provides an optimal solution for (P-1), which proves Theorem 1. } \end{IEEEproof} Finally, we summarize the proposed CEFL framework in \textbf{Algorithm} \ref{algorithm:cefl} for clarity. For each communication round $t$, the PS broadcasts the global FL model $\bm{w}^t$ to all clients. Each client trains its local model after receiving $\bm{w}^t$ and independently decides whether or not to upload its own fresh local model via criteria (\ref{eq:comm_cond}) and (\ref{eq:time_cond}). Upon receiving the updated models from the scheduled clients, the PS updates the global model with (\ref{eq:cefl_sum}).
The above steps (i.e., steps 4-19) are repeated until the stopping criterion is satisfied. \begin{algorithm}[t] \caption{ Communication-Efficient Federated Learning (CEFL) over Wireless {\color{black}IoT} Networks} \label{algorithm:cefl} \begin{algorithmic}[1] \STATE \textbf{Input}: learning rate $\eta >0$, and constants $\{ \delta_k \}$; \STATE \textbf{Initialize}: $\{ \bm{w}^1, \reallywidehat{\bm{w}}_i^1, \widetilde{\bm{w}}_i^1, \nabla f_i(\widetilde{\bm{w}}_i^1),|\forall i \in \mathcal{N} \}$; \FOR{$t=1,2,...$} \STATE Server broadcasts $\bm{w}^t$ to all clients; \STATE \ul{\textbf{Client Scheduling Policy:}} \FOR{each client $i\in \mathcal{N}$ \textbf{in parallel}} \STATE Receive $\bm{w}^t$ and compute $\nabla f_i(\bm{w}^t)$; \STATE Check conditions (\ref{eq:comm_cond}) and (\ref{eq:time_cond}); \IF{the conditions hold at client $i$} \STATE Update $\bm{w}_i^{t+1} $ based on (\ref{eq:gd_method}) and upload it; \STATE Update $\widetilde{\bm{w}}_i^{t+1} = \bm{w}^{t}$; \ELSE \STATE Update $\widetilde{\bm{w}}_i^{t+1} = \widetilde{\bm{w}}_i^{t}$ and upload nothing; \ENDIF \ENDFOR \STATE \textbf{update} the scheduled client set $\mathcal{N}_e^t$; \STATE \ul{\textbf{Resource Allocation: call Algorithm 1;}} \STATE Server receives $\bm{w}_i^{t+1}$ and updates $\reallywidehat{\bm{w}}_i^{t+1}$ via (\ref{eq:local_cpy}); \STATE Server computes $\bm{w}^{t+1}$ by \begin{equation} \label{eq:cefl_sum} \bm{w}^{t+1} := \sum _{i=1}^{N} \frac{D_i}{D} \reallywidehat{\bm{w}}_i^{t+1}; \end{equation} \STATE \textbf{until} the stopping criterion is satisfied. \ENDFOR \end{algorithmic} \end{algorithm} \section{Convergence and Communication Analysis}\label{section:analysis} In this section, we first provide a theoretical analysis of the convergence of the proposed CEFL algorithm and then analyze its communication cost. \subsection{Convergence Analysis} To facilitate the convergence analysis, following \cite{infocom_NNguyen, 9261995, energy_fl_chen}, we assume that the BS-to-client transmission is error-free due to the ample power and bandwidth budget at the BS, and we evaluate the impact of noisy upload transmissions on the convergence performance. In addition, a local epoch of $E=1$ is considered for all clients to train a global FL model. Before analyzing the convergence of the CEFL algorithm, we first state the following sufficient conditions, which are widely adopted in the analysis of decentralized optimization. \begin{assumption}[Smoothness] \label{ass:smoothness} The global loss function $f(\bm{x})$ is \textit{L-}smooth, i.e., for any $\bm{x}, \bm{y} \in\mathbbm{R}^{d}$, \begin{equation} \label{ass:smooth_eq1} \begin{aligned} \norm{ \nabla f(\bm{x}) - \nabla f(\bm{y}) } \leqslant L \norm{ \bm{x} -\bm{y} }, \end{aligned} \end{equation} which implies \begin{equation} \label{ass:smooth_eq2} f(\bm{x}) \leqslant f(\bm{y}) + \left\langle \nabla f(\bm{y}), \bm{x}-\bm{y} \right\rangle + \frac{L}{2} \norm{\bm{x}-\bm{y}}^2. \end{equation} \end{assumption} \begin{assumption}[Coercivity] \label{ass:coverci} The loss function $f(\bm{x})$ is coercive over its feasible set $\mathcal{F}$, i.e., $f(\bm{x})\to \infty$ if $\bm{x}\in\mathcal{F}$ and $\|\bm{x} \|\to \infty$. Moreover, the global loss function $f(\bm{x})$ is lower bounded over $\bm{x}\in\mathcal{F}$.
\end{assumption} \begin{assumption}[Strong convexity]\label{assump:strong_convex} The global loss function $f(\bm{x})$ is $\mu$-$\textit{strongly}$ convex, satisfying \begin{equation} f(\bm{x}) \geq f(\bm{y}) + \left\langle \nabla f(\bm{y}), \bm{x}-\bm{y} \right\rangle + \frac{\mu}{2} \norm{\bm{x}-\bm{y}}^2, \end{equation} and \begin{equation} \label{eq:ass_grad_inequality} 2\mu \left( f(\bm{x}) -f(\bm{x}^*) \right) \leq \norm{\nabla f(\bm{x})}^2. \end{equation} \end{assumption} With these assumptions in place, we characterize the convergence properties of the CEFL algorithm as follows. \begin{lemma} \label{lemma_1} Suppose Assumptions 1 and 2 hold. Let $\{\bm{w}^t\}$ be the iterates generated by the \textit{FedAvg} approach. If the learning rate satisfies $\eta = \frac{1}{L}$ and the outage probability is $p_i^t=0$, the \textit{FedAvg} update per communication round yields the following descent \begin{equation} \label{eq:lemma1_new} f(\bm{w}^{t+1}) \leq f(\bm{w}^t) - \Delta _{\textit{FedAvg}}^t, \end{equation} where $\Delta _{\textit{FedAvg}}^t \overset{\Delta}{=} \frac{1}{2L} \norm{\nabla f(\bm{w}^t)}^2$. \end{lemma} \begin{IEEEproof} The proof of Lemma \ref{lemma_1} is similar to that in \cite{chen2018lag} and is omitted here due to space limitations. \end{IEEEproof} \begin{lemma} \label{lemma_2} Suppose Assumptions 1 and 2 hold. Let $\{\bm{w}^t\}$ be the iterates generated by the CEFL approach. If the learning rate satisfies $\eta = \frac{1}{L}$ and the outage probability is $p_i^t=0$, the CEFL update per communication round yields the following descent \begin{equation} \label{eq:lemma2} f(\bm{w}^{t+1}) \leq f(\bm{w}^t) - \Delta _{\textit{CEFL}}^t, \end{equation} where $\Delta _{\textit{CEFL}}^t$ is defined as \begin{align} &\Delta _{\textit{CEFL}}^t \overset{\Delta}{=} \notag\\ &\frac{1}{2L} \norm{\nabla f(\bm{w}^t)}^2 - \frac{1}{2L} \norm{ \sum _{i \in \mathcal{N}_e^t} \frac{D_i}{D} \left [ \nabla f_i(\widetilde{\bm{w}}_i^t) - \nabla f_i(\bm{w}^t)\right] }^2. \end{align} \end{lemma} \begin{IEEEproof} The proof of Lemma \ref{lemma_2} is similar to that in \cite{chen2018lag} and is omitted here due to space limitations. \end{IEEEproof} \begin{remark} \label{remark_condition} With the above lemmas, the rationale behind (\ref{eq:comm_cond}) is as follows. Similar to the work in \cite{chen2018lag}, the proposed client scheduling policy selects the fresh local models by assessing their contribution to the decrease of the loss function. To improve the communication efficiency of CEFL, each CEFL upload should bring more descent, i.e., \begin{align} \label{eq:descent_compare} \frac{\Delta _{\textit{CEFL}}^t}{| \mathcal{N}_e^t |} \geq \frac{\Delta _{\textit{FedAvg}}^t}{N}. \end{align} As stated in \cite{chen2018lag}, $\norm{\nabla f(\bm{w}^t)}^2$ can be approximated by recent gradients or weight differences since $f(\bm{w}^t)$ is \textit{L}-smooth. We therefore use the approximation \begin{align} \label{eq:gradient_diff} \norm{\nabla f(\bm{w}^t)}^2 \approx \frac{1}{\eta ^2}\sum _{k=1}^K \delta_k \norm{\bm{w}^{t+1-k}- \bm{w}^{t-k}}^2, \end{align} where $\{ \delta_k \}_{k=1}^K$ and $K$ are constants. It is also noted that \begin{align} \label{eq:relax_grad_diff} &\norm{ \sum _{i \in \mathcal{N}_e^t} \frac{D_i}{D} \left [ \nabla f_i(\widetilde{\bm{w}}_i^t) - \nabla f_i(\bm{w}^t)\right] }^2 \notag\\ &\leq | \mathcal{N}_e^t | \sum_{i \in \mathcal{N}_e^t} \norm{ \nabla f_i(\widetilde{\bm{w}}_i^t) - \nabla f_i(\bm{w}^t) }^2.
\end{align} With (\ref{eq:lemma1_new})-(\ref{eq:relax_grad_diff}), condition (\ref{eq:comm_cond}) can readily be formed to decide whether fresh models should be uploaded. \end{remark} We now state the convergence rate of the CEFL algorithm. \begin{theorem}[Convergence rate]\label{theorem1} Let Assumptions \ref{ass:smoothness}-\ref{assump:strong_convex} hold and let $L, \mu$ be defined therein. Our proposed CEFL over wireless {\color{black}IoT} networks in Algorithm \ref{algorithm:cefl} achieves a strong expected linear rate, i.e., \begin{equation} \label{eq:theorem} \begin{aligned} \mathbbm{E} \left[ f(\bm{w}^{t+1}) -f(\bm{w}^*) \right] \leq \left(1-\rho\right) \mathbbm{E}\left[ f(\bm{w}^t) -f(\bm{w}^*)\right], \end{aligned} \end{equation} where $\rho$ is a positive constant satisfying the condition \begin{equation} \label{eq:rho_condition} \rho \leq \frac{\mu}{L}\sum_{i\in \mathcal{N}_e^t}\frac{D_i \left (1- p_i^t \right)}{D}, \end{equation} with $p_i^t$ denoting the outage probability during uploading. \end{theorem} \begin{IEEEproof} The proof is detailed in Appendix \ref{secondAppendix}. \end{IEEEproof} \begin{remark} Theorem \ref{theorem1} implies that, conditioned on (\ref{eq:rho_condition}), the proposed CEFL exhibits the same order of convergence rate as the original GD method even though some communications are skipped in CEFL. Theorem \ref{theorem1} also reveals the impact of wireless factors, i.e., the outage probability, on the convergence properties. Thus, with $p_i^t \rightarrow 0$, the proposed CEFL algorithm converges at a strong linear rate; otherwise, the linear convergence rate no longer holds. Note that Theorem \ref{theorem1} provides a sufficient condition to guarantee the convergence speed of the proposed CEFL approach. \end{remark} \subsection{Communication Analysis} Next, we analyze the communication cost based on the linear convergence rate. In the following analysis, one exchange of model parameters between the PS and a client is counted as 1 unit of communication. \begin{corollary}\label{colloary1_total_comm_round} Assume that the positive constant $\rho$ satisfies (\ref{eq:rho_condition}). To realize an expected convergence of $f(\bm{w})$ under an accuracy threshold $\epsilon$, i.e., $\mathbbm{E} \left[ f(\bm{w}^{t}) -f(\bm{w}^*) \right] \leq \epsilon $, the total number of communication rounds $T_{\textit{total}}$ for the CEFL algorithm is lower bounded by $ T_{\textit{total}}\geq \left \lceil \log_{1-\rho} \frac{\epsilon}{\mathbbm{E}\left[ f(\bm{w}^{1})\right]}\right \rceil $. \begin{IEEEproof} If we assume $\rho=\frac{\mu}{L}\sum_{i\in \mathcal{N}_e^t}\frac{D_i \left (1- p_i^t \right)}{D}$, we have \begin{align} \mathbbm{E}\left[ f(\bm{w}^{t+1})-f(\bm{w}^*)\right] &\leq \left( 1- \rho \right)\mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*)\right]\notag\\ &\leq \left( 1- \rho \right)^2 \mathbbm{E}\left[ f(\bm{w}^{t-1})-f(\bm{w}^*)\right] \notag\\ &~...\notag\\ &\leq \left( 1- \rho \right)^t \mathbbm{E}\left[ f(\bm{w}^{1})-f(\bm{w}^*)\right]. \end{align} To achieve a pre-defined deviation $f(\bm{w}^{t})-f(\bm{w}^*) \leq \epsilon$, it is sufficient to have \begin{align} \left( 1- \rho \right)^t \mathbbm{E}\left[ f(\bm{w}^{1})-f(\bm{w}^*)\right] \leq \epsilon. \end{align} Then, we can further derive \begin{align} t &\geq \log_{1-\rho} \frac{\epsilon}{\mathbbm{E}\left[ f(\bm{w}^{1})-f(\bm{w}^*)\right]}\notag\\ &\geq \log_{1-\rho} \frac{\epsilon}{\mathbbm{E}\left[ f(\bm{w}^{1})\right]}.
\end{align} Since the number of communication rounds must be an integer, we obtain the result in Corollary \ref{colloary1_total_comm_round}. \end{IEEEproof} \end{corollary} \begin{corollary}[Communication cost] \label{corollary2} Let (\ref{eq:rho_condition}) hold. Under the same conditions as in Corollary \ref{colloary1_total_comm_round}, with the deviation defined by $f(\bm{w}^{t})-f(\bm{w}^*) \leq \epsilon$, the communication cost of the proposed CEFL algorithm is $O\left ( \log{\frac{1}{\epsilon}} \right)$. \end{corollary} \begin{IEEEproof} Using the change-of-base formula for logarithms, Corollary \ref{corollary2} follows directly. \end{IEEEproof} \begin{remark} Compared with the $N$ activated clients per communication round in the \textit{FedAvg}-based FL algorithm, only a subset $\mathcal{N}_e^t$ of clients ($\left |\mathcal{N}_e^t \right| \leq N$) uploads fresh models in the CEFL-based approach. Corollary \ref{corollary2} shows that, for the proposed CEFL algorithm, the total communication cost under an accuracy threshold $\epsilon$ is reduced to $O\left ( \log{\frac{1}{\epsilon}} \right)$. \end{remark} \section{Simulation Results and Analysis} \label{sec:results} In this section, we evaluate the performance of the proposed CEFL approach on real datasets. \subsection{Simulation Setup} \begin{figure*}[t] \centering \vskip -0.1in \subfloat[ ]{\hspace*{1mm}\includegraphics[width=60mm]{comm_vs_loss_mnist_iid.png}\label{fig_loss_vs_cost}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{comm_vs_accuracy_mnist_iid.png}\label{fig_acc_vs_cost}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{comm_vs_frac_mnist_iid_new.png}\label{fig_per_cost}} \\ \subfloat[ ]{\hspace*{1mm}\includegraphics[width=60mm]{comm_vs_loss_mnist_noniid.png}\label{fig_loss_vs_cost_v2}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{comm_vs_accuracy_mnist_noniid.png}\label{fig_acc_cost_v2}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{comm_vs_frac_mnist_noniid_new.png}\label{fig_per_cost_v2}} \caption{Training a neural network for classification on the MNIST dataset: (a) communication overhead comparison (IID); (b) accuracy comparison (IID); (c) percentage of the involved clients (IID); (d) communication overhead comparison (non-IID); (e) accuracy comparison (non-IID); (f) percentage of the involved clients (non-IID).} \label{fig:learning_performance} \end{figure*} For simulations, we consider a typical single-cell wireless {\color{black}IoT} network that consists of $N=10$ edge clients and a BS located at its center, similar to the example network shown in Fig. \ref{fig_network}. We assume that the BS serves a ring-shaped region whose inner and outer boundaries have radii of 10 m and 500 m, respectively. The $N$ clients are uniformly and randomly distributed between the two boundaries, and the distance (in meters) between client $i$ and the BS is denoted by $d_i$. The wireless channels from each client to the BS follow i.i.d. Rayleigh fading with total allowed bandwidth $B= \text{ 20 MHz}$, and the channel $h_i^t$ is modeled as \begin{equation} h_i^t = \sqrt{ L(d_i)} o_i^t, \end{equation} where $o_i^t \sim \mathcal{CN}\left(0, \sigma ^2\right)$ is the small-scale fading coefficient of the link between client $i$ and the BS, and $L(d_i)=\beta_0 (d_i)^{-\alpha}$ is the distance-dependent pathloss with exponent $\alpha$ and coefficient $\beta_0$ \cite{9187874}.
$\beta_0$ is a frequency-dependent constant, which is set as $(\frac{c}{4\pi f_c})^2$ with $c=3\times10^8 \text{ m/s}$ and the carrier frequency $f_c=3 \text{ GHz}$. Then, the outage probability $p_i^t$ can be formulated as \begin{equation} p_i^t=Pr(\tau_i > \Gamma^t)= 1 -\exp{\left(-\frac{Q_i^t}{P_i^t}\right)}, \end{equation} where $Q_i^t= \frac{B_i^t N_0}{L(d_i) \sigma ^2}\left( 2^{\frac{S}{B_i^t \Gamma^t}} -1\right)$. Unless specifically stated otherwise, the remaining parameters are given in Table \ref{tab:simu_para}, following the studies in \cite{chen_fl1, wang2021federated, joint_schedul}. To investigate the performance, especially the communication efficiency, we compare our CEFL approach against the vanilla FL approach over wireless networks, i.e., the \textit{FedAvg} method \cite{fedavg}. {\color{black} For the target model, we consider a convolutional neural network (CNN) architecture with two $5\times5$ convolutional layers (with 10 and 20 channels, respectively, each followed by a ReLU activation and a $2\times2$ max pooling layer), a fully connected layer with 500 units and a ReLU activation, and a final softmax output layer. } Our model was implemented in TensorFlow with Python 3.7, and all experiments were carried out on a machine with the following hardware specifications: Intel Core i5 CPU @ 2.3 GHz; 16 GB RAM. \begin{table}[!t] \caption{Simulation Parameters \cite{simulation_para}} \label{tab:simu_para} \centering \fontsize{9}{8}\selectfont \begin{tabular}{|c|c|c|c|} \hline \textbf{Parameter} & \textbf{Value}&\textbf{Parameter} & \textbf{Value} \\ \hline $f_c$ & $3\text{ GHz}$ & $B$ & $20 \text{ MHz}$ \\ \hline $\alpha$ & $2.9$ & $P^{\text{max}}$ & $20 \text{ dBm}$\\ \hline $N_0$ & $-174 \text{ dBm/Hz}$ & $P^{\text{min}}$ & $0 \text{ dBm}$\\ \hline \end{tabular} \end{table} \subsection{Simulation Results} We evaluate the performance of the proposed approach on the MNIST dataset for handwritten digit classification \cite{mnist_data}. The MNIST dataset has 60,000 training images and 10,000 testing images of the 10 digits. We adopt the common assumption that each client holds an equal number of training samples and that the local training samples are non-overlapping [15], [20]. Besides, different distributions of the training samples are considered, covering both the i.i.d. and non-i.i.d. cases. For the i.i.d. case, the original dataset is first uniformly partitioned into $N$ pieces and each client is assigned one piece. For the non-i.i.d. case, the original training dataset is first partitioned into $N$ pieces according to the label order, and each piece is then randomly partitioned into 2 shards (i.e., $2N$ shards in total). Finally, each of the $N$ clients is assigned 2 shards with different label distributions. \begin{figure}[t] \centering \vskip -0.1in \subfloat[ ]{\hspace*{1mm}\includegraphics[width=88mm]{comm_vs_loss_mnist_iid_rsa.png}\label{fig_ham1}}\\ \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=88mm]{comm_vs_accuracy_mnist_iid_rsa.png}\label{fig_b1}} \caption{Impact of resource allocation scheme on the convergence of the proposed CEFL on MNIST dataset: (a) training loss comparison; (b) test accuracy comparison.} \label{fig:rsa_impact} \end{figure} The performance of the proposed CEFL approach for classification on the MNIST dataset is evaluated in Fig. \ref{fig:learning_performance} by training a CNN model against the cumulative communication overhead. We first report the results for the i.i.d. data partition case.
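Before turning to the results, the shard-based partitioning just described can be summarized by the following minimal Python sketch (the function and variable names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def partition(labels, N, iid=True, seed=0):
    # Split sample indices among N clients as described above.
    rng = np.random.default_rng(seed)
    idx = np.arange(len(labels))
    if iid:
        rng.shuffle(idx)                 # uniform split into N pieces
        return np.array_split(idx, N)
    idx = idx[np.argsort(labels)]        # order samples by label
    shards = np.array_split(idx, 2 * N)  # 2N shards in total
    order = rng.permutation(2 * N)       # each client gets 2 random shards
    return [np.concatenate((shards[order[2 * i]], shards[order[2 * i + 1]]))
            for i in range(N)]
\end{verbatim}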
The corresponding experimental results on the training loss, the test accuracy, and the utilization of clients are shown in Fig. \ref{fig:learning_performance}(a), Fig. \ref{fig:learning_performance}(b), and Fig. \ref{fig:learning_performance}(c), respectively. It is observed that, under the same total communication overhead, the proposed CEFL approach performs better than the vanilla FL method (\textit{FedAvg}). This is because less informative messages (i.e., local FL models) are withheld from the PS in the CEFL approach, whereas all messages are transmitted to the PS for updating at each communication round in the \textit{FedAvg} method. Fig. \ref{fig:learning_performance}(c) gives an intuitive illustration of the effectiveness of CEFL in selectively uploading local models. In Fig. \ref{fig:learning_performance}(c), each blue bar shows the percentage of clients that upload local models at a given communication round. In the initial communication rounds, communication events happen sparsely, while during the later communication rounds (150th to 200th), almost all messages (i.e., local FL models) from the clients are critical and selected for transmission under our proposed policy, implying that the client scheduling policy is most effective during the first half of the communication rounds. Similar performance results are observed in Fig. \ref{fig:learning_performance}(d)-Fig. \ref{fig:learning_performance}(f), where the non-i.i.d. case of the MNIST training dataset is considered. It is worth noting that, compared with the i.i.d. case, the percentage of participating clients is much higher in the non-i.i.d. case. Furthermore, we evaluate the impact of the resource allocation scheme on the learning performance in Fig. \ref{fig:rsa_impact}. For performance comparison, we also implement the equal resource allocation approach of \cite{chen2020convergence} as a benchmark. In this allocation scheme, in each communication round $t$, the transmission power and the bandwidth of each client are identical, i.e., $B_i^t = B/N, P_i^t = P^{\text{max}}, \forall i \in \mathcal{N}$. In Fig. \ref{fig:rsa_impact}(a), we first show how the resource allocation scheme affects the convergence behavior of the global FL model training in terms of the training loss. As the communication cost increases, the training losses of the considered algorithms decrease at different rates, and the proposed CEFL framework consisting of the new client scheduling policy and the LS-based resource allocation (denoted as 'CEFL-Opt') achieves the lowest loss. Fig. \ref{fig:rsa_impact}(b) further shows that the proposed LS-based CEFL algorithm achieves the highest test accuracy among all schemes under the same communication budget on MNIST. This is reasonable since both client selection and resource allocation are taken into account in the proposed CEFL framework so as to reduce the effect of wireless transmission errors in FL. \section{Conclusions} \label{conclusion} We have studied the joint optimization of communication and resource allocation for federated learning over wireless {\color{black}IoT} networks, in which both client selection and resource allocation are considered. A CEFL framework was proposed, combining a new client scheduling policy with an LS-based allocation method. We showed that the presented LS approach provides an optimal solution for bandwidth and power allocation, and we theoretically analyzed the convergence and communication properties of the proposed CEFL algorithm.
Extensive experimental results revealed that the proposed CEFL algorithm outperforms the state-of-the-art baseline method in both communication overhead and learning performance under different data distributions. Besides, the proposed CEFL framework can effectively schedule clients according to both the learned model parameter characteristics and the wireless channel dynamics. \appendices \section{Supporting Lemma \ref{lemma_3}} \label{firstAppendix} Before proving Theorem 1, we first introduce the following Lemma \ref{lemma_3}. \begin{lemma}\label{lemma_3} Suppose that the iterates $\{\bm{w}^{t }\}$ of problem (1) are generated by full gradient descent over wireless {\color{black}IoT} networks: $\bm{w}^{t+1}=\bm{w}^t - \eta \bm{g}^t $ with $\bm{g}^t \overset{\Delta}{=} \nabla f(\bm{w}^t) + \bm{e}^t$, learning rate $\eta=\frac{1}{L}$, and an error $\bm{e}^t$ satisfying \begin{align} \label{eq:error_condi} 0\leq \mathbbm{E} \left [ \norm{ \bm{e}^t}^2\right ] \leq 2L(\frac{\mu}{L}-\rho)\mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*)\right], \end{align} where $\rho \leq \frac{\mu}{L}$ is a positive constant. Under Assumptions \ref{ass:smoothness}-\ref{assump:strong_convex}, the following inequality holds {\color{black} \begin{align}\label{eq:lemma3} \mathbbm{E} \left[ f(\bm{w}^{t+1}) -f(\bm{w}^*) \right] \leq \left(1-\rho\right) \mathbbm{E}\left[ f(\bm{w}^t) -f(\bm{w}^*)\right]. \end{align} } \end{lemma} \begin{IEEEproof} Using Assumption \ref{ass:smoothness}, we can derive \begin{align} \label{eq:derived_from_smooth} &f(\bm{w}^{t+1})\notag\\ &\leq f(\bm{w}^t) + \langle \bm{w}^{t+1}-\bm{w}^t, \nabla f(\bm{w}^t) \rangle +\frac{L}{2} \norm{\bm{w}^{t+1}-\bm{w}^t}^2 \notag\\ &=f(\bm{w}^t) - \langle \eta \left( \nabla f(\bm{w}^t) + \bm{e}^t\right), \nabla f(\bm{w}^t) \rangle +\frac{L}{2} \norm{\bm{w}^{t+1}-\bm{w}^t}^2 \notag\\ &=f(\bm{w}^t) - \langle \frac{1}{L} \left( \nabla f(\bm{w}^t) + \bm{e}^t\right), \nabla f(\bm{w}^t) \rangle \notag\\ &\quad +\frac{L}{2} \norm{\frac{1}{L}\left( \nabla f(\bm{w}^t) + \bm{e}^t\right)}^2 \notag\\ &=f(\bm{w}^t) - \frac{1}{2L} \norm{\nabla f(\bm{w}^t)}^2 +\frac{1}{2L} \norm{\bm{e}^t}^2. \end{align} Subtracting $f(\bm{w}^*)$ from both sides of (\ref{eq:derived_from_smooth}) gives \begin{align} \label{eq:derived_from_strong_convex} &f(\bm{w}^{t+1})-f(\bm{w}^*)\notag\\ &\leq f(\bm{w}^t)-f(\bm{w}^*)- \frac{1}{2L} \norm{\nabla f(\bm{w}^t)}^2 +\frac{1}{2L} \norm{\bm{e}^t}^2\notag\\ &\overset{(a)}{\leq}f(\bm{w}^t)-f(\bm{w}^*)-\frac{2\mu}{2L}\left[ f(\bm{w}^t)-f(\bm{w}^*) \right]+\frac{1}{2L} \norm{\bm{e}^t}^2\notag\\ &=\left( 1-\frac{\mu}{L}\right)\left[ f(\bm{w}^t)-f(\bm{w}^*) \right]+\frac{1}{2L} \norm{\bm{e}^t}^2, \end{align} where (a) holds due to (\ref{eq:ass_grad_inequality}). Taking expectations on both sides of (\ref{eq:derived_from_strong_convex}), we can derive \begin{align} &\mathbbm{E} \left[f(\bm{w}^{t+1})-f(\bm{w}^*)\right] \notag\\ &\leq \left( 1-\frac{\mu}{L}\right)\mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*) \right]+\frac{1}{2L} \mathbbm{E}\left[\norm{\bm{e}^t}^2\right] \notag\\ &\overset{(b)}{\leq}\left( 1-\frac{\mu}{L} +\frac{\mu}{L} - \rho \right) \mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*) \right]\notag\\ &=\left( 1- \rho \right)\mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*) \right], \end{align} where (b) uses condition (\ref{eq:error_condi}). This completes the proof of Lemma \ref{lemma_3}. \end{IEEEproof} \section{Proof of Theorem \ref{theorem1}} \label{secondAppendix} \begin{IEEEproof} Based on the result given in Lemma \ref{lemma_3}, we now present the proof of Theorem 1 in detail.
Combining (\ref{eq:gd_method}) with (\ref{eq:cefl_sum}), we have \begin{align} \bm{w}^{t+1} - \bm{w}^{t} = -\eta \left( \nabla f(\bm{w}^t) + \bm{e}^t \right), \end{align} where $\bm{e}^t$ is the gradient deviation at the PS in communication round $t$, caused by the PS using an old copy of the local FL model of client $i \in{\mathcal{N}}$ whenever the fresh local FL model cannot be successfully received. Let $\mathcal{N}_e^t$ and $\mathcal{N}_c^t$ be the sets of clients that \textit{do} and \textit{do not} communicate with the PS, respectively. In particular, $\bm{e}^t$ can be expressed as (\ref{eq:e_def}), shown at the top of the next page. \begin{figure*}[t] \begin{align} \label{eq:e_def} \bm{e}^t &= - \nabla f(\bm{w}^t) + \frac{\sum_{i\in \mathcal{N}_e^t}D_i a_i^t\nabla f_i(\bm{w}^t) +\sum_{i\in \mathcal{N}_e^t} D_i \left( 1-a_i^t\right) \nabla f_i(\widetilde{\bm{w}}_i^t) + \sum_{i\in \mathcal{N}_c^t} D_i \nabla f_i(\widetilde{\bm{w}}_i^t) }{D}\notag\\ &\overset{(c)}{=}\frac{-\sum_{i\in \mathcal{N}_e^t}D_i \left( 1-a_i^t\right) \nabla f_i(\bm{w}^t) +\sum_{i\in \mathcal{N}_e^t} D_i \left( 1-a_i^t\right) \nabla f_i(\widetilde{\bm{w}}_i^t) +\sum_{i\in \mathcal{N}_c^t} D_i \nabla f_i(\widetilde{\bm{w}}_i^t) - \sum_{i\in \mathcal{N}_c^t} D_i \nabla f_i(\bm{w}^t)}{D}\notag\\ &= \sum_{i\in \mathcal{N}_e^t}\frac{D_i \left( 1-a_i^t\right)}{D} \left( \nabla f_i(\widetilde{\bm{w}}_i^t) -\nabla f_i(\bm{w}^t)\right) +\sum_{i\in \mathcal{N}_c^t} \frac{D_i}{D} \left(\nabla f_i(\widetilde{\bm{w}}_i^t) -\nabla f_i(\bm{w}^t)\right), \end{align} where (c) holds because of $\nabla f(\bm{w}^t)=\left( \sum_{i\in \mathcal{N}_e^t} D_i \nabla f_i(\bm{w}^t) + \sum_{i\in \mathcal{N}_c^t} D_i \nabla f_i(\bm{w}^t) \right) / D$. \hrulefill \end{figure*} Thus, with $\eta=\frac{1}{L}$, we have \begin{align} \label{eq:weight_diff} &\bm{w}^{t+1} - \bm{w}^{t} \notag\\ &= -\frac{1}{L} \left( \nabla f(\bm{w}^t) + \sum_{i\in \mathcal{N}_e^t}\frac{D_i \left( 1-a_i^t\right)}{D} \left( \nabla f_i(\widetilde{\bm{w}}_i^t) -\nabla f_i(\bm{w}^t)\right) \right . \notag\\ &\qquad\qquad \left . +\sum_{i\in \mathcal{N}_c^t} \frac{D_i}{D} \left(\nabla f_i(\widetilde{\bm{w}}_i^t) -\nabla f_i(\bm{w}^t)\right) \right). \end{align} Then, by taking expectations and norms on both sides of (\ref{eq:e_def}), we have \begin{align} \label{eq:e_expectation} &\mathbbm{E} \left[\norm{\bm{e}^t}^2 \right] \notag\\ &= \mathbbm{E} \left[\left \lVert \sum_{i\in \mathcal{N}_e^t}\frac{D_i \left( 1-a_i^t\right)}{D} \left( \nabla f_i(\widetilde{\bm{w}}_i^{t}) -\nabla f_i(\bm{w}^t)\right) \right. \right . \notag\\ &\quad\qquad\left . \left . +\sum_{i\in \mathcal{N}_c^t} \frac{D_i}{D} \left(\nabla f_i(\widetilde{\bm{w}}_i^{t}) -\nabla f_i(\bm{w}^t)\right) \right \rVert ^2 \right ] \notag\\ &\leq\mathbbm{E} \left[\left \lVert \sum_{i\in \mathcal{N}_e^t}\frac{D_i \left( 1-a_i^t\right)}{D} \norm{ \nabla f_i(\widetilde{\bm{w}}_i^{t}) -\nabla f_i(\bm{w}^t)} \right. \right. \notag\\ &\qquad\quad \left.
\left .+\sum_{i\in \mathcal{N}_c^t} \frac{D_i}{D} \norm{\nabla f_i(\widetilde{\bm{w}}_i^{t}) -\nabla f_i(\bm{w}^t)} \right \rVert ^2 \right] \notag\\ &\overset{(e)}{\leq}\mathbbm{E} \left[\norm{ \sum_{i\in \mathcal{N}_e^t}\frac{D_i \left( 1-a_i^t\right)L G_{\mathcal{X}}}{D} +\sum_{i\in \mathcal{N}_c^t} \frac{D_i L G_{\mathcal{X}}}{D} }^2 \right] \notag\\ &=\mathbbm{E} \left[\norm{ L G_{\mathcal{X}} \left( \sum_{i\in \mathcal{N}}\frac{D_i}{D} - \sum_{i\in \mathcal{N}_e^t} \frac{D_i a_i^t}{D} \right) }^2 \right] \notag\\ &\overset{(f)}{\leq} L^2 G_{\mathcal{X}}^2 \left( 1- \sum_{i\in \mathcal{N}_e^t} \frac{D_i \mathbbm{E} \left[ a_i^t\right]}{D} \right) \notag\\ & \overset{(g)}{=} L^2 G_{\mathcal{X}}^2\left( 1- \sum_{i\in \mathcal{N}_e^t} \frac{D_i \left (1- p_i^t \right) }{D} \right), \end{align} where (e) is due to Assumption 1, (f) follows from $\sum_{i\in \mathcal{N}}D_i = D$, and (g) is based on $\mathbbm{E} \left[ a_i^t\right] = 1 - p_i^t$. According to Lemma \ref{lemma_3}, if we require that (\ref{eq:error_condi}) always holds, we have \begin{align} \mathbbm{E} \left[\norm{\bm{e}^t}^2 \right]&\leq L^2 G_{\mathcal{X}}^2\left( 1- \sum_{i\in \mathcal{N}_e^t} \frac{D_i \left (1- p_i^t \right)}{D} \right) \notag\\ &\leq 2L(\frac{\mu}{L}-\rho)\mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*)\right]. \end{align} That is, to guarantee (\ref{eq:lemma3}), the constant $\rho$ must satisfy \begin{align} \label{eq:rho_cal} \rho &\leq \frac{\mu}{L} - \frac{L G_{\mathcal{X}}^2 \left( 1- \sum_{i\in \mathcal{N}_e^t} \frac{D_i \left (1- p_i^t \right)}{D} \right)}{2\mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*)\right]}. \end{align} According to (\ref{ass:smooth_eq1}) in Assumption \ref{ass:smoothness} and (\ref{eq:ass_grad_inequality}) in Assumption \ref{assump:strong_convex}, we have \begin{align} \mathbbm{E}\left[ f(\bm{w}^t)-f(\bm{w}^*)\right]\leq \frac{\norm{\nabla f(\bm{w}^t)}^2}{2\mu}\leq \frac{L^2 G_{\mathcal{X}}^2}{2\mu}. \end{align} Then, (\ref{eq:rho_cal}) can be further derived as \begin{align} \rho &\leq \frac{\mu}{L} - \frac{\mu \left( 1- \sum_{i\in \mathcal{N}_e^t} \frac{D_i \left (1- p_i^t \right)}{D} \right)}{L} \notag\\ &=\frac{\mu}{L}\sum_{i\in \mathcal{N}_e^t}\frac{D_i \left (1- p_i^t \right)}{D}. \end{align} This completes the proof of Theorem \ref{theorem1}. \end{IEEEproof}
\section{Introduction} We study the discrete optimal transport (OT) problem: \begin{align}\label{eq:ot:dist} \begin{array}{ll} \underset{X \geq 0}{\minimize} & \InP{C}{X}\\ \subjectto & X \ones_n = p \\ & X^\top \ones_m = q, \end{array} \end{align} where $X \in \R_{+}^{m\times n}$ is the transport plan, $C\in \R_{+}^{m\times n}$ is the cost matrix, and $p \in \R_{+}^m$ and $q\in \R_{+}^n$ are two discrete probability measures. OT has a very rich history in mathematics and operations research dating back to at least the 18th century. By exploiting geometric properties of the underlying ground space, OT provides a powerful and flexible way to compare probability measures. It has quickly become a central topic in machine learning and has found countless applications in tasks such as deep generative models \citep{ACB17}, domain adaptation \citep{CFT16}, and inference of high-dimensional cell trajectories in genomics \citep{SST19}; we refer to \cite{PC19} for a more comprehensive survey of OT theory and applications. However, the power of OT comes at the price of an enormous computational cost for determining the optimal transport plan. Standard methods for solving linear programs (LPs) suffer from a super-linear time complexity in terms of the problem size \cite{PC19}. Such methods are also challenging to parallelize on modern processing hardware. Therefore, there has been substantial research in developing new efficient methods for OT. This paper advances the state of the art in this direction. \subsection{Related work} Below we review some of the topics most closely related to our work. \paragraph{Sinkhorn method} The Sinkhorn (SK) method~\citep{Cur13} aims to solve an approximation of \eqref{eq:ot:dist} in which the objective is replaced by a regularized version of the form $\InP{C}{X} - \eta H(X)$. Here, $H(X) = - \sum_{ij}X_{ij} \log (X_{ij})$ is an entropy function and $\eta>0$ is a regularization parameter. The Sinkhorn method defines the quantity $ K = \exp({-C/\eta})$ and repeats the following steps \begin{align*} u_{k} = {p}/(K v_{k-1}) \quad \mbox{and} \quad v_{k} = {q}/(K^\top u_{k}), \end{align*} until $\norm{u_{k}\odot (K v_k)-p} + \norm{v_{k}\odot (K^\top u_k)-q}$ becomes small, then returns $\diag (u_k) K \diag(v_k)$. The division $(/)$ and multiplication $(\odot)$ operators between two vectors are to be understood entrywise. Each SK iteration is built from matrix-vector multiplies and element-wise arithmetic operations, and is hence readily parallelized on multi-core CPUs and GPUs. However, due to the entropic regularization, SK suffers from numerical issues and can struggle to find even moderately accurate solutions. This problem is even more prevalent in GPU implementations, as most modern GPUs are built and optimized for single-precision arithmetic \cite{KW16, CGM14}. Substantial care is therefore needed to select an appropriate $\eta$ that is small enough to provide a meaningful approximation, while avoiding numerical issues. In addition, the entropy term enforces a dense solution, which can be undesirable when the optimal transportation plan itself is of interest~\citep{BSR18}. We mention that there has been substantial research in improving the performance of SK~\citep{ABG19, AWR17, BSR18, LHJ19}. Most of these contributions improve certain aspects of SK: some result in more stable but much slower methods, while others produce sparse solutions but at a much higher cost per iteration.
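For reference, the basic SK iteration described above fits in a few lines of Python/NumPy. This is a minimal sketch of the textbook method; it omits the numerical safeguards (e.g., log-domain stabilization) that production implementations require for small $\eta$:
\begin{verbatim}
import numpy as np

def sinkhorn(C, p, q, eta=1e-2, tol=1e-6, max_iter=10_000):
    K = np.exp(-C / eta)                 # Gibbs kernel
    u, v = np.ones_like(p), np.ones_like(q)
    for _ in range(max_iter):
        u = p / (K @ v)                  # row scaling
        v = q / (K.T @ u)                # column scaling
        err = (np.linalg.norm(u * (K @ v) - p)
               + np.linalg.norm(v * (K.T @ u) - q))
        if err < tol:
            break
    return u[:, None] * K * v[None, :]   # diag(u) K diag(v)
\end{verbatim}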
Moreover, some of the more sophisticated variants are challenging to parallelize due to the many branching conditions that they introduce. \paragraph{Operator splitting solvers for general LP} With a relatively low per-iteration cost and the ability to exploit sparsity in the problem data, operator splitting methods such as Douglas-Rachford splitting \citep{DR56} and ADMM \citep{GM76} have gained widespread popularity in large-scale optimization. Such algorithms can quickly produce solutions of moderate accuracy and are the engine of several successful first-order solvers \citep{OCPB16, SBG+20, GCG20}. As OT can be cast as an LP, it can, in principle, be solved by these splitting-based solvers. However, there has not been much success reported in this context, probably due to the \emph{memory-bound} nature of large-scale OT problems. For OT, the main bottleneck is not floating-point computation, but rather the time-consuming memory operations on large two-dimensional arrays. Even an innocent update like $X \gets X - C$ is more expensive than the two matrix-vector multiplies in SK. To design a high-performance splitting method, it is thus crucial to minimize the memory operations associated with large arrays. In addition, since these solvers target general LPs, they often solve a linear system at each iteration, which is prohibitively expensive for many OT applications. \paragraph{Convergence rates of DR for LP} Many splitting methods, including Douglas-Rachford, are known to converge linearly under strong convexity (see e.g. \cite{giselsson2016linear}). Recently, it has been shown that algorithms based on DR/ADMM often enjoy similar convergence properties also in the absence of strong convexity. For example, \citet{LFP17} derived local linear convergence guarantees under mild assumptions; \citet{applegate2021faster} established global linear convergence for primal-dual methods for LPs using restart strategies; and \citet{wang2017new} established linear convergence for an ADMM-based algorithm on general LPs. Yet, these frameworks quickly become intractable on large OT problems, due to the many memory operations required. \subsection{Contributions} We demonstrate that DR splitting, when properly applied and implemented, can solve large-scale OT problems to modest accuracy reliably and quickly, while retaining the excellent memory footprint and parallelization properties of the Sinkhorn method. Concretely, we make the following contributions: \begin{itemize} \item We develop a DR splitting algorithm that solves the original OT problem directly, avoiding the numerical issues of SK and forgoing the linear system solves required by general-purpose solvers. We perform simplifications to eliminate variables so that the final algorithm can be executed with only one matrix variable, while maintaining the same degree of parallelization. Our method implicitly maintains a primal-dual pair, which facilitates a simple evaluation of stopping criteria. \item We derive an $O(1/\epsilon)$ iteration complexity for our method. This is a significant improvement over the best-known estimate $O(1/\epsilon^2)$ for the Sinkhorn method (\cf~\cite{LHJ19}). We also provide a global linear convergence rate that holds independently of the initialization, despite the absence of strong convexity in the OT problem. \item We detail an efficient GPU implementation that fuses many intermediate steps into one and performs several on-the-fly reductions between a read and write of the matrix variable.
We also show how a primal-dual stopping criterion can be evaluated at no extra cost. The implementation is available as open source and gives practitioners a fast and robust OT solver also for applications where regularized OT is not suitable. \end{itemize} \noindent As a by-product of solving the original OT problem, our approximate solution is guaranteed to be sparse. Indeed, it is known that DR can identify an optimal support in finite time, even before reaching a solution \cite{IM20}. To avoid cluttering the presentation, we focus on the primal OT, but note that the approach applies equally well to the dual form. Moreover, our implementation is readily extended to other splitting schemes (e.g. \citet{CP11}). \section{Background} \paragraph{Notation} For any $x,y\in\R^n$, $\InP{x}{y}$ is the Euclidean inner product of $x$ and $y$, and $\ltwo{\cdot}$ denotes the $\ell_2$-norm. For matrices $X, Y \in \R^{m\times n}$, $\InP{X}{Y}=\tr(X^\top Y)$ denotes their inner product and $\lfro{\cdot} = {\sqrt{\InP{\cdot}{\cdot}}}$ is the induced Frobenius norm. We use $\norm{\cdot}$ to indicate either $\ltwo{\cdot}$ or $\lfro{\cdot}$. For a closed and convex set $\mc{X}$, the distance and the projection map are given by $\dist(x, \mc{X})=\min_{z\in\mc{X}}\norm{z-x}$ and $\proj{\mc{X}}{x}=\argmin_{z\in\mc{X}}\norm{z-x}$, respectively. The Euclidean projection of $x\in\R^n$ onto the nonnegative orthant is denoted by $[x]_+=\max(x, 0)$, and $\Delta_n = \{x\in \R_+^n : \sum_{i=1}^{n}x_i=1\}$ is the $(n-1)$-dimensional probability simplex. \paragraph{OT and optimality conditions} Let $e=\ones_n$ and $f=\ones_m$, and consider the linear mapping \begin{align*} \mc{A}: \R^{m\times n} \to \R^{m+n} : X \mapsto ({Xe}, { X^\top f}), \end{align*} and its adjoint $ \mc{A}^* : \R^{m+n} \to \R^{m\times n}: (y, x) \mapsto ye^\top + f x^\top.$ The projection onto the range of $\mc{A}$ is $\proj{\mathrm{ran} \mc{A}} {(y,x)} = (y,x) - \alpha(f, -e)$, where $\alpha= ({f^\top y - e^\top x})/(m+n)$ \cite{BSW21}. With $b = (p, q) \in \R^{m+n}$, Problem~\eqref{eq:ot:dist} can be written as a linear program of the form \begin{align}\label{eq:ot:dist:operator:primal} \begin{array}{lll} \underset{X \in\R^{m\times n}}{\minimize} \,\,\, \InP{C}{X} \quad \subjectto \,\,\, \mc{A}(X) = b, \quad X \geq 0. \end{array} \end{align} Let $(\mu, \nu)\in\R^{m+n}$ be the dual variable associated with the affine constraint in \eqref{eq:ot:dist:operator:primal}. Then, using the definition of $\mc{A}^*$, the dual problem of \eqref{eq:ot:dist:operator:primal}, or equivalently of \eqref{eq:ot:dist}, reads \begin{align}\label{eq:ot:dist:dual} \begin{array}{ll} \underset{\mu, \nu}{\maximize} \,\,\, p^\top \mu + q^\top \nu \quad \subjectto \,\,\, \mu e^\top + f \nu^\top \leq C. \end{array} \end{align} OT is a bounded program and always admits a feasible solution. It is thus guaranteed to have an optimal solution, and the optimality conditions can be summarized as follows. \begin{proposition}\label{prop:optimality:cond} A pair $X$ and $(\mu,\nu)$ are primal and dual optimal if and only if: (i) $Xe=p,\; X^\top f = q,\; X \geq 0$, (ii) $\mu e^\top + f \nu^\top - C \leq 0$, (iii) $\langle X, C - \mu e^\top - f\nu^\top \rangle = 0$. \end{proposition} These conditions mean that: (i) $X$ is primal feasible; (ii) $\mu, \nu$ are dual feasible; and (iii) the duality gap is zero, $\InP{C}{X} = p^\top \mu + q^\top \nu$, or equivalently, complementary slackness holds. Thus, solving the OT problem amounts to (approximately) finding such a primal-dual pair.
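Numerically, the three conditions of Proposition~\ref{prop:optimality:cond} are straightforward to check for a candidate pair; the following sketch (with tolerances and function names of our choosing) does exactly that:
\begin{verbatim}
import numpy as np

def check_optimality(X, mu, nu, C, p, q, tol=1e-8):
    R = mu[:, None] + nu[None, :] - C        # mu e^T + f nu^T - C
    primal = (np.linalg.norm(X.sum(axis=1) - p) <= tol and
              np.linalg.norm(X.sum(axis=0) - q) <= tol and
              (X >= -tol).all())             # (i) primal feasibility
    dual = (R <= tol).all()                  # (ii) dual feasibility
    slack = abs((X * R).sum()) <= tol        # (iii) <X, C - mu e^T - f nu^T> = 0
    return primal and dual and slack
\end{verbatim}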
To this end, we rely on the celebrated Douglas-Rachford splitting method \cite{LM79, EB92, Fuk96, BC17}.\medskip \noindent{\textbf{Douglas-Rachford splitting}}\hspace*{0.5em} Consider composite convex optimization problems of the form \begin{align}\label{eq:composite:problem} \minimize_{x\in \R^n} \; f(x) + g(x), \end{align} where $f$ and $g$ are proper closed and convex functions. To solve Problem~\eqref{eq:composite:problem}, the DR splitting algorithm starts from $y_0\in \R^n$ and repeats the following steps: \begin{align}\label{eq:dr:def} x_{k+1} = \prox{\rho f} {y_k}, \quad z_{k+1} = \prox{\rho g}{2 x_{k+1} - y_k}, \quad y_{k+1} = y_k + z_{k+1} - x_{k+1}, \end{align} where $\rho>0$ is a penalty parameter. The preceding procedure can be viewed as a fixed-point iteration $y_{k+1}=T(y_k)$ for the mapping \begin{align}\label{eq:dr:operator:T} T(y) = y + \prox{\rho g}{2 \prox{\rho f} {y} - y} - \prox{\rho f} {y}. \end{align} The DR iterations \eqref{eq:dr:def} can also be derived from the ADMM method applied to a problem equivalent to~\eqref{eq:composite:problem} (see Appendix~\ref{appendix:dr:admm}). Indeed, both are special instances of the classical proximal point method in \cite{Roc76}. As for convergence, \citet{LM79} showed that $T$ is a \emph{firmly nonexpansive} mapping, from which they obtained convergence of $y_k$. Moreover, the sequence $x_k$ is guaranteed to converge to a minimizer of $f+g$ (assuming a minimizer exists) for any choice of $\rho>0$. In particular, we have the following general convergence result, whose proof can be found in \cite[Corollary~28.3]{BC17}. \begin{lemma}\label{lem:dr:convergence} Consider the composite problem \eqref{eq:composite:problem} and its Fenchel–Rockafellar dual defined as \begin{align}\label{eq:composite:dual:problem} \maximize_{u\in \R^n} \,\,\, -f^*(-u) - g^*(u). \end{align} Let $\mc{P}^\star$ and $\mc{D}^\star$ be the sets of solutions to the primal \eqref{eq:composite:problem} and dual \eqref{eq:composite:dual:problem} problems, respectively. Let $x_k, y_k$, and $z_k$ be generated by procedure \eqref{eq:dr:def} and let $u_{k} := (y_{k-1}-x_k)/\rho$. Then, there exists $y\opt\in\R^n$ such that $y_k\to y\opt$. Setting $x\opt = \prox{\rho f} {y\opt}$ and $u\opt = (y\opt-x\opt)/\rho$, it holds that (i) $x\opt \in \mc{P}^\star$ and $u\opt\in \mc{D}^\star$; (ii) $x_k - z_k \to 0$, $x_k \to x\opt$ and $z_k \to x\opt$; (iii) $u_k \to u\opt$. \end{lemma} \section{Douglas-Rachford splitting for optimal transport} To efficiently apply DR to OT, we need to specify the functions $f$ and $g$ as well as how to evaluate their proximal operators. We begin by introducing a recent result for computing the projection onto the set of \emph{real-valued} matrices with prescribed row and column sums \cite{BSW21}. \begin{lemma}\label{lem:projection:gen:doubly:stoc:matrices} Let $e = \ones_n$ and $f = \ones_m$. Let $p\in \Delta_m$ and $q\in \Delta_n$. Then, the set $\mc{X}$ defined by: \begin{align*} \mc{X} := \big\{ X\in \R^{m\times n} \, \big| \, Xe = p \ \mbox{and} \ X^\top f = q \big\} \end{align*} is non-empty, and for every given $X\in\R^{m\times n}$, we have \begin{align*} \proj{\mc{X}}{X} = X - \frac{1}{n} \left(\left(Xe-p\right)e^\top - \gamma fe^\top\right) - \frac{1}{m} \left( f( X^\top f- q)^\top - \gamma fe^\top \right), \end{align*} where $\gamma = f^\top \left(Xe-p\right)/(m+n) = e^\top\left( X^\top f- q\right)/(m+n)$.
\end{lemma} \noindent The lemma follows immediately from \cite[Corollary~3.4]{BSW21} and the fact that $(p, q)= \proj{\mathrm{ran} \mc{A}}{(p, \, q)}$. It implies that $\proj{\mc{X}}{X}$ can be computed by basic linear algebra operations, such as matrix-vector multiplies and rank-one updates, that can be effectively parallelized. \subsection{Algorithm derivation} Our algorithm is based on re-writing~\eqref{eq:ot:dist:operator:primal} in the standard form for DR splitting~\eqref{eq:composite:problem} using carefully selected $f$ and $g$ that ensure rapid convergence of the iterates and efficient execution of the iterations. In particular, we propose to select $f$ and $g$ as follows: \begin{align}\label{eq:ot:dist:operator:primal:composite} f(X) = \InP{C}{X} + \indcfunc{\R^{m\times n}_+}{X} \quad \mbox{and} \quad g(X) = \indcfunc{\{Y: \mc{A}(Y) = b\}}{X} . \end{align} By Lemma~\ref{lem:projection:gen:doubly:stoc:matrices}, we readily have the explicit formula for the proximal operator of $g$, namely, $ \prox{\rho g}{\cdot} = \proj{\mc{X}}{\cdot}$. The proximal operator of $f$ can also be evaluated explicitly as: \begin{align*} \prox{\rho f}{X} = \proj{\R^{m\times n}_+}{X - \rho C} = [X - \rho C]_+. \end{align*} The Douglas-Rachford splitting applied to this formulation of the OT problem then reads: \begin{align}\label{eq:drot:general:form} X_{k+1} = [Y_k - \rho C]_{+},\quad Z_{k+1} = \proj{\mc{X}}{2X_{k+1} - Y_k},\quad Y_{k+1} = Y_k + Z_{k+1}- X_{k+1}. \end{align} Despite their apparent simplicity, the updates in \eqref{eq:drot:general:form} are inefficient to execute in practice due to the many \emph{memory operations} needed to operate on the large arrays $X_k, Y_k, Z_k$ and $C$. To reduce the memory access, we will now perform several simplifications to eliminate variables from \eqref{eq:drot:general:form}. The resulting algorithm can be executed with only one matrix variable while maintaining the same degree of parallelization. We first note that the linearity of $\proj{\mc{X}}{\cdot}$ allows us to eliminate $Z$, yielding the $Y$-update \begin{align*} Y_{k+1} = X_{k+1} - n^{-1} \big(2X_{k+1}e -Y_k e-p- \gamma_k f \big) e^\top - m^{-1} f \big(2 X_{k+1}^\top f - Y_k^\top f- q - \gamma_k e \big)^\top, \end{align*} where $\gamma_k = f^\top \left(2X_{k+1}e -Y_k e-p\right)/(m+n) = e^\top\left(2 X_{k+1}^\top f - Y_k^\top f- q\right) / (m+n)$. We also define the following quantities that capture how $Y_k$ affects the update of $Y_{k+1}$ \begin{align*} a_k=Y_ke -p, \quad b_k = Y_k^\top f - q, \quad \alpha_k = f^\top a_k/(m+n) =e^\top b_k/(m+n). \end{align*} Similarly, for $X_k$, we let: \begin{align*} r_k = X_ke - p, \quad s_k = X_k^\top f - q, \quad \beta_k = f^\top r_k/(m+n)=e^\top s_k /(m+n). \end{align*} Recall that the pair $(r_k, s_k)$ represents the primal residual at $X_k$. Now, the preceding update can be written compactly as \begin{align}\label{alg:drot:y:update} Y_{k+1} = X_{k+1} + \phi_{k+1} e^\top + f \varphi_{k+1}^\top, \end{align} where \begin{align*} \phi_{k+1} &= \left(a_k - 2 r_{k+1} + (2\beta_{k+1} - \alpha_k) f\right) / n\\ \varphi_{k+1} &= \left(b_k - 2 s_{k+1} + (2\beta_{k+1} - \alpha_k) e\right)/m. \end{align*} We can see that the $Y$-update can be implemented using 4 matrix-vector multiplies (for computing $a_k, b_k, r_{k+1}, s_{k+1}$), followed by 2 rank-one updates. As a rank-one update requires a read from an input matrix and a write to an output matrix, it is typically twice as costly as a matrix-vector multiply (which only writes the output to a vector).
Thus, it would involve 8 memory operations on large arrays, which is still significant. Next, we show that the $Y$-update can be removed too. Notice that updating $Y_{k+1}$ from $Y_k$ and $X_{k+1}$ does not require the \emph{full} matrix $Y_k$, but only the ability to compute $a_k$ and $b_k$. This allows us to use a single \emph{physical} memory array to represent both the sequences $X_k$ and $Y_k$. Suppose that we overwrite the matrix $X_{k+1}$ as: \begin{align*} X_{k+1} \gets X_{k+1} + \phi_{k+1} e^\top + f \varphi_{k+1}^\top, \end{align*} then after the two rank-one updates, the $X$-array holds the value of $Y_{k+1}$. We can access the actual $X$-value again in the next update, which now reads: $X_{k+2} \gets \big[X_{k+1} - \rho C\big]_{+}$. It thus remains to show that $a_k$ and $b_k$ can be computed efficiently. By multiplying both sides of \eqref{alg:drot:y:update} by $e$ and subtracting $p$ from the result, we obtain \begin{align*} Y_{k+1} e - p = X_{k+1}e - p + \phi_{k+1} e^\top e + (\varphi_{k+1}^\top e) f. \end{align*} Since $e^\top e = n$ and $(b_k - 2s_{k+1})^\top e = (m+n)(\alpha_k - 2\beta_{k+1})$, it holds that $(\varphi_{k+1}^\top e) f = (\alpha_k - 2\beta_{k+1}) f$. We also have $\phi_{k+1} e^\top e = a_{k} - 2r_{k+1} + (2\beta_{k+1} - \alpha_k) f$. Therefore, we end up with an extremely simple recursive form for updating $a_{k}$: \begin{align*} a_{k+1} = a_{k} - r_{k+1}. \end{align*} Similarly, we have $b_{k+1} = b_{k} - s_{k+1}$ and $\alpha_{k+1} = \alpha_k - \beta_{k+1}.$ In summary, the DROT method can be implemented with a single matrix variable, as summarized in Algorithm~\ref{alg:drot}. \begin{algorithm}[!t] \caption{Douglas-Rachford Splitting for Optimal Transport (DROT)} \begin{algorithmic}[1]\label{alg:drot} \REQUIRE OT($C$, $p$, $q$), initial point $X_0$, penalty parameter $\rho$ \STATE $\phi_0 = 0$, $\varphi_0 = 0$ \STATE $a_0=X_0e -p$, $b_0 = X_0^\top f - q$, $\alpha_0 = f^\top a_0 / (m+n)$ \FOR{$k=0,1, 2,\ldots $} \STATE $X_{k+1} = \big[X_k + \phi_k e^\top + f \varphi_k^\top -\rho C\big]_{+}$ \STATE $r_{k+1} = X_{k+1}e -p$, $s_{k+1} = X_{k+1}^\top f -q$, $\beta_{k+1}= f^\top r_{k+1} / (m+n)$ \STATE $\phi_{k+1} = \left(a_k - 2 r_{k+1} + (2\beta_{k+1} - \alpha_k) f\right) / n$ \STATE $\varphi_{k+1} = \left(b_k - 2 s_{k+1} + (2\beta_{k+1} - \alpha_k) e\right)/m$ \STATE $a_{k+1} = a_{k} - r_{k+1}$, $b_{k+1} = b_{k} - s_{k+1}$, $\alpha_{k+1} = \alpha_k - \beta_{k+1}$ \ENDFOR \ENSURE $X_K$ \end{algorithmic} \end{algorithm} \paragraph{Stopping criteria} It is interesting to note that while DROT directly solves the primal problem~\eqref{eq:ot:dist}, it maintains a pair of vectors that \emph{implicitly} play the role of the dual variables $\mu$ and $\nu$ in the dual problem \eqref{eq:ot:dist:dual}. To get a feel for this, we note that the optimality conditions in Proposition~\ref{prop:optimality:cond} are equivalent to the existence of a pair $X\opt$ and $(\mu\opt,\nu\opt)$ such that \begin{align*} (X\opt e, {X\opt}^\top f) = (p, q) \quad \mbox{and} \quad X\opt = \big[X\opt + \mu\opt e^\top + f{\nu\opt}^\top -C \big]_+. \end{align*} Here, the latter condition encodes the nonnegativity constraint, the dual feasibility, and the zero duality gap. The result follows by invoking \cite[Theorem~5.6(ii)]{DD01} with the convex cone $\mc{C}=\R_{+}^{m\times n}$, $y=X\opt\in \mc{C}$ and $z=\mu\opt e^\top + f{\nu\opt}^\top -C \in \mc{C}^\circ$. Now, comparing with Step~4 in DROT, the second condition above suggests that $\phi_k/\rho$ and $\varphi_k/\rho$ are such implicit variables.
To see why this is indeed the case, let $U_{k} = (Y_{k-1} - X_{k})/\rho$. Then it is easy to verify that \begin{align*} (Z_{k} - X_{k})/\rho = (X_{k} - Y_{k-1} + \phi_k e^\top + f \varphi_k^\top)/\rho = -U_{k} + (\phi_k/\rho) e^\top + f (\varphi_k/\rho)^\top. \end{align*} By Lemma~\ref{lem:dr:convergence}, we have $Z_{k} - X_{k} \to 0$ and $U_{k} \to U\opt \in \R^{m \times n}$, by which it follows that \begin{align*} (\phi_k/\rho) e^\top + f (\varphi_k/\rho)^\top \to U\opt. \end{align*} Finally, by evaluating the conjugate functions $f^*$ and $g^*$ in Lemma~\ref{lem:dr:convergence}, it can be shown that $U\opt$ must have the form $\mu\opt e^\top + f{\nu\opt}^\top$, where $\mu\opt\in \R^m$ and $\nu\opt\in\R^n$ satisfy $\mu\opt e^\top + f{\nu\opt}^\top \leq C$; see Appendix~\ref{appendix:Fenchel-Rockafellar:duality} for details. With such primal-dual pairs at our disposal, we can explicitly evaluate their distance to the set of solutions laid out in Proposition~\ref{prop:optimality:cond} by considering: \begin{align*} r_{\mathrm{primal}} &=\sqrt{\norm{r_k}^2 + \norm{s_k}^2}\\ r_{\mathrm{dual}} &= \Vert[(\phi_k/\rho) e^\top + f (\varphi_k/\rho)^\top - C]_+\Vert\\ \mathrm{gap} &= \big|\InP{C}{X_k} - (p^\top \phi_k + \varphi_k^\top q)/\rho\big|. \end{align*} As problem~\eqref{eq:ot:dist} is feasible and bounded, Lemma~\ref{lem:dr:convergence} and strong duality guarantee that all three terms converge to zero. Thus, we terminate DROT when they become smaller than some user-specified tolerances. \subsection{Convergence rates} In this section, we state the main convergence results of the paper, namely a sublinear and a linear rate of convergence for the given splitting algorithm. In order to establish the sublinear rate, we need the following function: \begin{align*} V(X, Z, U) = f(X) + g(Z) + \InP{U}{Z-X}, \end{align*} which is defined for $X\in \R_{+}^{m\times n}$, $Z\in \mc{X}$ and $U\in \R^{m\times n}$. We can now state the first result. \begin{theorem}[Sublinear rate]\label{thm:drot:sublinear:rate} Let $X_k, Y_k, Z_k$ be generated by procedure \eqref{eq:drot:general:form}. Then, for any $X\in \R_{+}^{m\times n}$, $Z\in \mc{X}$, and $Y\in \R^{m\times n}$, we have \begin{align*} V(X_{k+1}, Z_{k+1}, (Y-Z)/\rho) &- V(X, Z, (Y_k-X_{k+1})/\rho) \leq \frac{1}{2\rho} \norm{Y_k -Y}^2 - \frac{1}{2\rho} \norm{Y_{k+1} -Y}^2. \end{align*} Furthermore, let $Y\opt$ be a fixed point of $T$ in \eqref{eq:dr:operator:T} and let $X\opt$ be a solution of \eqref{eq:ot:dist} defined from $Y\opt$ in the manner of Lemma~\ref{lem:dr:convergence}. Then, it holds that \begin{align*} \InP{C}{\bar{X}_k} - \InP{C}{X\opt} &\leq \frac{1}{k}\left( \frac{1}{2\rho } \norm{Y_0}^2 + \frac{2}{\rho } \norm{X\opt} \norm{Y_0-Y\opt}\right) \nonumber\\ \norm{\bar{Z}_k-\bar{X}_k} &= \frac{\norm{Y_k-Y_0}}{k} \leq \frac{2\norm{Y_0-Y\opt}}{k}, \end{align*} where $\bar{X}_k = \sum_{i=1}^{k}{X_i}/k$ and $\bar{Z}_k = \sum_{i=1}^{k}{Z_i}/k$. \end{theorem} The theorem implies that one can compute a solution satisfying $\InP{C}{X}-\InP{C}{X\opt} \leq \epsilon$ in $O(1/\epsilon)$ iterations. This is a significant improvement over the best-known iteration complexity $O(1/\epsilon^2)$ of the Sinkhorn method (\cf~\citep{LHJ19}). Note that the linearity of $\InP{C}{\cdot}$ allows us to update the \emph{scalar} value $\InP{C}{\bar{X}_k}$ recursively without ever needing to form the ergodic sequence $\bar{X}_k$. Yet, in terms of rate, this result is still conservative, as the next theorem shows.
\begin{theorem}[Linear rate]\label{thm:drot:linear:rate} Let $X_k$ and $Y_k$ be generated by \eqref{eq:drot:general:form}. Let $\mc{G}\opt$ be the set of fixed points of $T$ in \eqref{eq:dr:operator:T} and let $\mc{X}\opt$ be the set of primal solutions to \eqref{eq:ot:dist}. Then, $\{Y_k\}$ is bounded, $\norm{Y_k}\leq M$ for all $k$, and \begin{align*} \dist (Y_k, \mc{G}\opt) &\leq \dist(Y_0, \mc{G}\opt) \times r^k\\ \dist (X_k, \mc{X}\opt) &\leq \dist(Y_0, \mc{G}\opt) \times r^{k-1}, \end{align*} where $r = c/\sqrt{c^2+1} < 1$, $c = \gamma(1+\rho(\Vert e \Vert + \Vert f \Vert)) \geq 1$, and $\gamma = \theta_{\mc{S}\opt}(1+\rho^{-1}(M+1))$. Here, $\theta_{\mc{S}\opt}>0$ is a problem-dependent constant characterized by the primal-dual solution sets only. \end{theorem} This means that an $\epsilon$-optimal solution can be computed in $O(\log 1/\epsilon)$ iterations. However, it is, in general, difficult to estimate the convergence factor $r$, and it may in the worst case be close to one. In such settings, the sublinear rate will typically dominate during the first iterations. In either case, DROT always satisfies the better of the two bounds at each $k$. \subsection{Implementation} In this section, we detail our implementation of DROT to exploit parallel processing on GPUs. We review only the most basic concepts of GPU programming necessary to describe our kernel and refer to \citep{KW16, CGM14} for comprehensive treatments. \paragraph{Thread hierarchy} When a kernel function is launched, a large number of threads are generated to execute its statements. These threads are organized into a two-level hierarchy. A \emph{grid} contains multiple blocks and a \emph{block} contains multiple threads. Each block is scheduled to one of the streaming multiprocessors (SMs) on the GPU concurrently or sequentially, depending on available hardware. While all threads in a thread block run \emph{logically} in parallel, not all of them can run \emph{physically} at the same time. As a result, different threads in a thread block may progress at different paces. Once a thread block is scheduled to an SM, its threads are further partitioned into \emph{warps}. A warp consists of 32 consecutive threads that execute the same instruction at \emph{the same time}. Each thread has its own instruction address counter and register state, and carries out the current instruction on its own data. \paragraph{Memory hierarchy} \emph{Registers} and \emph{shared memory} (``on-chip'') are the fastest memory spaces on a GPU. Registers are private to each thread, while shared memory is visible to all threads in the same thread block. An automatic variable declared in a kernel is generally stored in a register. Shared memory is programmable, and users have full control over when data is moved into or evicted from the shared memory. It enables block-level communication, facilitates reuse of on-chip data, and can greatly reduce the global memory access of a kernel. However, there are typically only a couple dozen registers per thread and a few kB of shared memory per thread block. The largest memory on a GPU card is \emph{global memory}, physically separated from the compute chip (``off-chip''). All threads can access global memory, but its latency is much higher, typically hundreds of clock cycles.\footnote{\url{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html}} Therefore, minimizing global memory transactions is vital to a high-performance kernel.
When all threads of a \emph{warp} execute a load (store) instruction that accesses \emph{consecutive} memory locations, these accesses are \emph{coalesced} into as few transactions as possible. For example, if they access consecutive 4-byte words such as \emph{float32} values, four coalesced 32-byte transactions will service that memory access. \begin{figure*}[!t] \centering \vskip -1.2cm {\includegraphics[width=0.9\textwidth]{Figures/kernel_draft.pdf}} \caption{ \footnotesize Left: Logical view of the main kernel. To expose sufficient parallelism to the GPU, we organize threads into a 2D grid of blocks ($3 \times 2$ in the figure), which allows several threads per row. The threads are then grouped in 1D blocks (shown in red) along the columns of $X$. This ensures that global memory access is aligned and \emph{coalesced} to maximize bandwidth utilization. We use the parameter work-size $\mathsf{ws}$ to indicate how many elements of a row each thread should handle. For simplicity, this parameter represents multiples of the block size $\mathsf{bs}$. Each arrow denotes the activity of a single thread in a thread block. Memory storage is assumed to be column-major. Right: Activity of a normal working block, which handles a submatrix of size $\mathsf{bs}\times (\mathsf{ws}\cdot\mathsf{bs}).$} \label{fig:kernel} \vskip -0.3cm \end{figure*} Before proceeding further, we state the main result of the section. \begin{claim}\label{claim:drot} On average, an iteration of DROT, including all the stopping criteria, can be done using 2.5 memory operations on $m \times n$ arrays. In particular, this includes one read from and one write to $X$ in every iteration, and one read from $C$ every other iteration. \end{claim} \paragraph{Main kernel} Steps~4--5 and the stopping criteria are the main computational components of DROT, since they involve matrix operations. We will design an efficient kernel that: (i) updates $X_{k}$ to $X_{k+1}$, (ii) computes $u_{k+1} := X_{k+1}e$ and $v_{k+1} := X_{k+1}^\top f$, (iii) evaluates $\InP{C}{X_{k+1}}$ in the duality gap expression. The kernel fuses many intermediate steps into one and performs several on-the-fly reductions while updating $X_k$, thereby minimizing global memory access. Our kernel is designed so that each thread block handles a \emph{submatrix} $x$ of size $\mathsf{bs}\times (\mathsf{ws}\cdot\mathsf{bs})$, except for the corner blocks, which have fewer rows and/or columns. Since all threads in a block need the same values of $\varphi$, it is best to read these into shared memory once per block and then let threads access them from there. We therefore divide $\varphi$ into chunks of the block size and set up a loop to let the threads collaborate in reading chunks in a coalesced fashion into shared memory. Since each thread works on a single row, it accesses the same element of $\phi$ throughout, and we can thus load and store that value directly in a register. These choices allow maximum reuse of $\varphi$ and $\phi$. In the $j$-th step, the working block loads column $j$ of $x$ to the chip in a coalesced fashion. Each thread $i \in \{0, 1, \ldots, \mathsf{bs}-1\}$ uses the loaded $x_{ij}$ to compute and store $x_{ij}^+$ in a register: \begin{align*} x_{ij}^+ = \max(x_{ij} + \phi_i + \varphi_j -\rho c_{ij}, 0), \end{align*} where $c$ is the corresponding submatrix of $C$. As sum reduction is order-independent, it can be done locally at various levels, and local results can then be combined to produce the final value.
We therefore reuse $x_{ij}^+$ and perform several such partial reductions. First, to compute the local value for $u_{k+1}$, at column $j$, thread $i$ simply adds $x_{ij}^+$ to a running sum kept in a register. The reduction leading to $\InP{C}{X_{k+1}}$ can be done in the same way. The vertical reduction to compute $v_{k+1}$ is more challenging, as coordination between threads is needed. We rely on \emph{warp-level} reduction, in which the data exchange is performed between registers. This way, we can also leverage CUDA's efficient built-in functions for collective communication at warp level.\footnote{\url{https://developer.nvidia.com/blog/using-cuda-warp-level-primitives/}} When done, the first thread in a warp holds the total reduced value and simply adds it atomically to the proper coordinate of $v_{k+1}$. Finally, all threads write $x_{ij}^+$ to the global memory. The process is repeated by moving to the next column. When the block finishes processing the last column of the submatrix $x$, each thread adds its running sum atomically to the proper coordinate of $u_{k+1}$. The threads then perform one vertical (warp-level) reduction of their private sums to obtain the partial cost value $\InP{c}{x}$. When all the submatrices have been processed, we have obtained all the quantities described in (i)--(iii), as desired. It is essential to notice that each entry of $x$ is read and written only once during this process. To conclude Claim~\ref{claim:drot}, we note that one can skip the load of $C$ in every other iteration. Indeed, if in iteration $k$, instead of writing $X_{k+1}$, one writes the value of $\widetilde{X}_{k+1} := X_{k+1}-\rho C$, then the next update is simply $X_{k+2}=\big[\widetilde{X}_{k+1} + \phi_{k+1} e^\top + f \varphi_{k+1}^\top\big]_{+}$. Finally, all the remaining updates in DROT involve only vector and scalar operations, and can thus be finished off with a simple auxiliary kernel. \section{Experimental results} In this section, we perform experiments to validate our method and to demonstrate its efficiency in terms of both accuracy and speed. We focus on comparisons with the Sinkhorn method, as implemented in the POT toolbox\footnote{\url{https://pythonot.github.io}}, due to its minimal per-iteration cost and its publicly available GPU implementation. All runtime tests were carried out on an NVIDIA Tesla T4 GPU with 16GB of global memory. The CUDA C++ implementation of DROT is open source and available at \url{https://github.com/vienmai/drot}. We consider six instances of SK, called SK1--SK6, corresponding to $\eta = 10^{-4}, 10^{-3}, 5\times 10^{-3}, 10^{-2}, 5 \times 10^{-2}, 10^{-1}$, in that order. Given $m$ and $n$, we generate source and target samples $x_{\mathrm{s}}$ and $x_{\mathrm{t}}$, drawn from 2D Gaussian distributions with randomized means and covariances $(\mu_{\mathrm{s}}, \Sigma_{\mathrm{s}})$ and $(\mu_{\mathrm{t}}, \Sigma_{\mathrm{t}})$, respectively. Here, $\mu_{\mathrm{s}}\in \R^2$ has normally distributed entries, and $\Sigma_{\mathrm{s}} = A_{\mathrm{s}}A_{\mathrm{s}}^\top$, where $A_{\mathrm{s}} \in \R^{2\times 2}$ is a matrix with random entries in $[0,1]$; $\mu_{\mathrm{t}} \in \R^2$ has entries generated from $\mc{N}(5, \sigma_{\mathrm{t}})$ for some $\sigma_{\mathrm{t}}>0$, and $\Sigma_{\mathrm{t}}\in \R^{2\times 2}$ is generated similarly to $\Sigma_{\mathrm{s}}$.
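To make this setup concrete, the following NumPy sketch mirrors the data generation just described, together with the cost matrix, marginals, and penalty parameter defined in the next paragraph. The helper name \texttt{gaussian\_cloud}, the random seed, and the reading of $\mc{N}(5, \sigma_{\mathrm{t}})$ as mean $5$ and standard deviation $\sigma_{\mathrm{t}}$ are our own illustrative choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gaussian_cloud(mean_loc, mean_scale, size):
    # Randomized mean and covariance Sigma = A A^T with A uniform
    # in [0, 1], as described in the text (helper name is ours).
    mu = rng.normal(loc=mean_loc, scale=mean_scale, size=2)
    A = rng.uniform(0.0, 1.0, size=(2, 2))
    return rng.multivariate_normal(mu, A @ A.T, size=size)

m, n, sigma_t = 512, 512, 5.0
x_s = gaussian_cloud(0.0, 1.0, m)      # source samples
x_t = gaussian_cloud(5.0, sigma_t, n)  # target samples

# Pairwise squared Euclidean costs, normalized so ||C||_inf = 1.
C = ((x_s[:, None, :] - x_t[None, :, :]) ** 2).sum(axis=-1)
C /= C.max()
p, q = np.ones(m) / m, np.ones(n) / n  # uniform marginals
rho = 2.0 / (m + n)                    # rho = rho_0/(m+n), rho_0 = 2
\end{verbatim}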
Given $x_\mathrm{s}$ and $x_\mathrm{t}$, the cost matrix $C$ collects the pairwise squared Euclidean distances between samples, $C_{ij} = \ltwo{x_\mathrm{s}[i]- x_\mathrm{t}[j]}^2$ for $i\in \{1, \ldots, m\}$ and $j\in \{1, \ldots, n\}$, and is normalized to have $\norm{C}_{\infty}=1$. The marginals $p$ and $q$ are set to $p=\ones_m/m$ and $q=\ones_n/n$. For simplicity, we set the penalty parameter $\rho$ in DROT to $\rho_0/(m+n)$ for some constant $\rho_0$. This choice is inspired by a theoretical limit in the \citet{CP11} method: $\rho< 1/\norm{\mc{A}}^2$, where $\norm{\mc{A}}^2=\norm{e}^2+\norm{f}^2=m+n$. We note, however, that this choice is conservative, and it is likely that better performance can be achieved with more careful selection strategies (cf.~\cite{BPC11}). Finally, DROT always starts at $X_0=pq^\top$. \paragraph{Robustness and accuracy} For each method, we evaluate for each $\epsilon>0$ the percentage of experiments for which $\abs{f(X_K) - f(X\opt)}/f(X\opt) \leq \epsilon$, where $f(X\opt)$ is the optimal value and $f(X_K)$ is the objective value at termination. An algorithm is terminated as soon as the constraint violation goes below $10^{-4}$ or $1000$ iterations have been executed. Figure~\ref{fig:profile} depicts the fraction of problems that are successfully solved up to an accuracy level $\epsilon$ given on the $x$-axis. For each subplot, we set $m=n=512$ and $\rho_0=2$, and generate 100 random problems. \begin{figure*}[!t] \vskip-0.3in \centering \begin{minipage}{0.35\textwidth} \centering {\includegraphics[width=1.\textwidth]{Figures/dims_512_test_100_mean_5_f32.pdf}} \subcaption{Float 32, $\sigma_{\mathrm{t}}=5$} \end{minipage} \hskip-0.2in \begin{minipage}{0.35\textwidth} \centering {\includegraphics[width=1.\textwidth]{Figures/dims_512_test_100_mean_5_f64.pdf}} \subcaption{Float 64, $\sigma_{\mathrm{t}}=5$} \end{minipage} \hskip-0.2in \begin{minipage}{0.35\textwidth} \centering {\includegraphics[width=1.\textwidth]{Figures/dims_512_test_100_mean_10_f64.pdf}} \subcaption{Float 64, $\sigma_{\mathrm{t}}=10$} \end{minipage} \caption{The percentage of problems solved up to various accuracy levels for $\sigma_{\mathrm{t}}=5, 10$.}\label{fig:profile} \end{figure*} We can see that DROT is consistently more accurate and robust. This reinforces the point that substantial care is needed to select the right $\eta$ for SK. SK is also extremely vulnerable in single precision. Even in double precision, an SK instance that seems to work in one setting can run into numerical issues in another. For example, by slightly changing the statistical properties of the underlying data, nearly $40\%$ of the problems in Fig.~\ref{fig:profile}(c) cannot be solved by SK2 due to numerical errors, even though SK2 works reasonably well in Fig.~\ref{fig:profile}(b). \paragraph{Runtime} Since we strive to match the excellent per-iteration cost of SK, this work would be incomplete without comparing the two. Figure~\ref{fig:runtime}(a) shows the median of the per-iteration runtime and the 95\% confidence interval, as a function of the dimensions. Here, $m$ and $n$ range from 100 to 20000 and each plot is obtained by performing 10 runs; in each run, the per-iteration runtime is averaged over 100 iterations. Since evaluating the termination criterion in SK is very expensive, we follow the POT default and only do so once every 10 iterations. We note also that SK calls the highly optimized cuBLAS library to carry out its updates.
The results confirm the efficacy of our kernel, and for very large problems the per-iteration runtimes of the two methods are almost identical. Finally, Figs.~\ref{fig:runtime}(b)--(c) show the median times required for the total error (the sum of the function gap and the constraint violation) to reach different $\epsilon$ values. It should be noted that, by design, all methods start from different initial points.\footnote{The SK methods have their own matrices $K=\exp(-C/\eta)$ and $X_0 = \diag (u_0) K \diag(v_0)$. For DROT, since $X_{k+1} = [X_k + \phi_k e^\top + f \varphi_k^\top - \rho C ]_+$, where $X_0=pq^\top=\ones/(mn)$, $\norm{C}_\infty=1$, $\rho=\rho_0/(m+n)$, $\phi_0=\varphi_0=0$, the term inside $[\cdot]_+$ is of the order $1/(mn) - \rho_0/(m+n) \ll 0$. This makes the first few iterations of DROT identically zero. To keep the canonical $X_0$, we simply set $\rho_0$ to $1/\log(m)$ to reduce the warm-up period as $m$ increases, which is certainly suboptimal for DROT performance.} Therefore, with very large $\epsilon$, this can lead to substantially different results since all methods terminate early. It seems that SK can struggle to find even moderately accurate solutions, especially for large problems. This is because for a given $\eta$, the achievable accuracy of SK scales like $\epsilon=O(\eta\log(n))$, which can become significant as $n$ grows \cite{AWR17}. Moreover, the larger number of entries in bigger problems increases the chance of running into numerical issues. Note that none of the SK instances can find a solution of accuracy $\epsilon=0.001$. \begin{figure*}[!t] \centering \begin{minipage}{0.33\textwidth} \centering {\includegraphics[width=1\textwidth]{Figures/runtime.pdf}} \subcaption{Runtime per iteration} \end{minipage} \hskip-0.1cm \begin{minipage}{0.33\textwidth} \centering {\includegraphics[width=1.\textwidth]{{Figures/timeprofile_dims_10000_test_5_eps_0.01}.pdf}} \subcaption{Time to $\epsilon=0.01$} \end{minipage} \hskip-0.1cm \begin{minipage}{0.33\textwidth} \centering {\includegraphics[width=1.\textwidth]{{Figures/timeprofile_dims_10000_test_5_eps_0.001}.pdf}} \subcaption{Time to $\epsilon=0.001$} \end{minipage} \caption{Wall-clock runtime performance versus the dimensions $m=n$ for $\sigma_{\mathrm{t}}=5$.}\label{fig:runtime} \end{figure*} \paragraph{Sparsity of transportation} By design, DROT efficiently finds sparse transport plans. To illustrate this, we apply DROT to a color transfer problem between two images (see \cite{BSR18}). By doing so, we obtain a highly sparse plan (approximately $99\%$ zeros) and, in turn, a high-quality artificial image, visualized in Figure~\ref{fig:color-transfer}. In the experiment, we quantize each image with KMeans to reduce the number of distinct colors to $750$. We subsequently use DROT to estimate an optimal color transfer between the color distributions of the two images. \begin{figure*}[!ht] \vskip 0.3cm \centering {\includegraphics[width=.9\textwidth]{Figures/img_experiments_paper.jpg}} \caption{Color transfer via DROT: The left-most image is a KMeans compressed source image (750 centroids), the right-most is a compressed target image (also obtained via 750 KMeans centroids). The middle panel displays an artificial image generated by mapping the pixel values of each centroid in the source to a weighted mean of the target centroids.
The weights are determined by the sparse transportation plan computed via DROT.}\label{fig:color-transfer} \end{figure*} \section{Conclusions} We developed, analyzed, and implemented an operator splitting method (DROT) for solving the discrete optimal transport problem. Unlike popular Sinkhorn-based methods, which solve approximate regularized problems, our method tackles the OT problem directly in its primal form. Each DROT iteration can be executed very efficiently, in parallel, and our implementation can perform more extensive computations, including the continuous monitoring of a primal-dual stopping criterion, with the same per-iteration cost and memory footprint as the Sinkhorn method. The net effect is a fast method that can solve OT problems to high accuracy, provide sparse transport plans, and avoid the numerical issues of the Sinkhorn family. Our algorithm enjoys strong convergence guarantees, including an iteration complexity of $O(1/\epsilon)$, compared to the $O(1/{\epsilon}^2)$ of the Sinkhorn method, and a problem-dependent linear convergence rate. \section*{Acknowledgement} This work was supported in part by the Knut and Alice Wallenberg Foundation, the Swedish Research Council and the Swedish Foundation for Strategic Research, and the Wallenberg AI, Autonomous Systems and Software Program (WASP). The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC), partially funded by the Swedish Research Council through grant agreement no. 2018-05973. \bibliographystyle{abbrvnat}
\section{Introduction} Unlike manifolds with boundary, \emph{manifolds with boundary and corners} form a monoid under the product operation in the enlarged category of manifolds with boundary and corners. (Obviously manifolds with boundary do not form a monoid: if $X$ and $Y$ are manifolds with nonempty boundary, then $X \times Y$ canonically becomes a manifold with boundary and corners.) For example, the topological boundary $$ \partial(X \times Y) = \partial X \times Y \coprod X \times \partial Y $$ itself is a manifold with corners but without boundary. For simplicity of exposition, we will regard the boundary as a corner of codimension 1 and just call a manifold with boundary and corners a \emph{manifold with corners}. \subsection{Sectional characterization of Liouville sectors} The original definition of Liouville sectors given in \cite{gps} makes it somewhat awkward to identify the structure group of a bundle of Liouville sectors, as it is not a priori manifest (though it is verifiable) that the standard notion of Liouville automorphism of $\Mliou$ is suitably compatible with certain sectorial data. It becomes even more awkward when one tries to define bundles of Liouville sectors with \emph{corners} as in \cite{oh-tanaka-liouville-bundles}, which was the starting point of the current investigation. In this paper, we introduce a more intrinsic but equivalent definition of Liouville sectors which skirts this issue: we say it is more intrinsic in that our definition is closer to one in the sense of $G$-structures. (See \cite{chern} or \cite[Chapter VII]{sternberg} for a general introduction to $G$-structures.) We start our discussion of $M$ with the case without corners. Let $(M, \omega)$ be a symplectic manifold with boundary. The boundary $\partial M$ (or more generally any coisotropic submanifold $H$) then carries a natural structure of a \emph{presymplectic manifold} in the sense that the restricted two-form $$ \omega_\partial: = \iota^*\omega $$ has constant nullity. (See \cite{gotay}, \cite{oh-park} for a detailed explanation of presymplectic manifolds.) Here $\iota: \partial M \to M$ is the inclusion map. \begin{notation}[$\cD_{\partial M}$, $\cN_{\partial M}$ and $\pi: \partial M \to \cN_{\partial M}$] We denote the characteristic distribution of $(\partial M,\omega_\partial)$ by $$ \cD_{\partial M} = \ker \omega_\partial. $$ With a slight abuse of notation, we also denote by $\cD_{\partial M}$ the associated integrable foliation, and let $\pi_{\partial M}: \partial M \to \cN_{\partial M}$ be its leaf map. \end{notation} Now consider a Liouville manifold $(M,\lambda)$ with boundary and denote by $$ (\partial_\infty M, \xi_\infty) $$ its ideal boundary as a contact manifold equipped with the contact \emph{distribution} $\xi_\infty$ canonically induced by the Liouville form $\lambda$. (See \cite{giroux}. We recall that there is no contact \emph{form} on $\partial_\infty M$ canonically induced from $\lambda$.) \begin{defn}[Liouville $\sigma$-sectors]\label{defn:liouville-sector-intro} We say a Liouville manifold with boundary $(M,\lambda)$ is a {\em Liouville $\sigma$-sector} if the following hold: \begin{enumerate}[(a)] \item\label{item. contact boundary} The Liouville vector field $Z$ of $M$ is outward pointing along $\partial M$, and is tangent to $\partial M$ at infinity. \item\label{item.
convex corner} $\partial_\infty M \cap \partial M$ is the boundary of $\partial_\infty M$, and is convex (as a hypersurface of the contact manifold $\partial_\infty M$). \item\label{item. foliation section} The canonical projection map $\pi:\partial M \to \cN_{\partial M}$ (to the leaf space of the characteristic foliation) admits a continuous section, and has fibers abstractly homeomorphic to ${\mathbb R}$. \end{enumerate} \end{defn} Condition (\ref{item. foliation section}) in this definition is what distinguishes it from the definition of a \emph{Liouville sector} in \cite{gps}, and it is responsible for our naming of \emph{Liouville $\sigma$-sectors}, where $\sigma$ stands either for `section' or for `sectional'. It can be replaced by the requirement that the fibers be contractible. (See Corollary \ref{cor:contractible-fiber}.) \begin{remark}\label{rem:presymplectic} \begin{enumerate} \item Our definition is closer to one given in the spirit of $G$-structures (with an integrability condition) \cite{chern}, \cite{sternberg}. (See also Corollary \ref{cor:convexity} for a similar characterization of the convexity at infinity imposed in Condition (\ref{item. convex corner}).) In this sense, the choice of a section corresponds to a reduction of the structure group from $\operatorname{Diff}({\mathbb R})$ to $\operatorname{Diff}({\mathbb R},\{0\})$ of the ${\mathbb R}$-bundle associated to the null foliation. \item It is worthwhile to mention that the presymplectic structure on $(\partial M, \omega_\partial)$ uniquely determines a symplectic structure on the germ of a neighborhood up to symplectic diffeomorphism. (See \cite{gotay}.) Our definition of Liouville $\sigma$-sectors \emph{with corners} is largely based on Gotay's coisotropic embedding theorem of presymplectic manifolds \cite{gotay}, applied to a germ of neighborhoods of the boundary $\partial M$, or more generally of coisotropic submanifolds of $(M,d\lambda)$. \item Condition \eqref{item. foliation section} depends only on the presymplectic geometry of $(\partial M, d\lambda_\partial)$ with $\lambda_\partial = i_{\partial M}^*\lambda$, while Conditions \eqref{item. contact boundary} and \eqref{item. convex corner} depend on the Liouville geometry of the ideal contact boundary $\partial_\infty M$, for which the details of $\lambda$ matter only ``at infinity''. The two geometries are connected by the global topological triviality of the characteristic foliation implied by \eqref{item. foliation section}. (See Theorem \ref{thm:GPS-question-intro}.) \end{enumerate} \end{remark} Note that a Liouville ($\sigma$-)sector $\Mliou$ is a smooth manifold (possibly with non-compact corners) and the Liouville flow determines a well-defined contact manifold $\partial_\infty \Mliou$ ``at infinity'' (possibly with boundary). We will informally write \eqnn \partial_\infty \Mliou \cap \partial \Mliou \eqnd to mean the boundary of $\partial_\infty \Mliou$ and call it the \emph{ceiling corner} of the Liouville sector. (When $\partial_\infty M$ has corners, ``boundary'' means the union of all boundary strata.) Throughout this paper, by ``near infinity,'' we mean ``on the complement of some compact subset of $M$.'' \begin{theorem}[Theorem \ref{thm:equivalence} for $H = \partial M$] \label{thm:equivalence-intro} Under the above definition of Liouville $\sigma$-sector, the following hold: \begin{enumerate}[(1)] \item\label{item.equivalence cL manifold} $\cN_{\partial \Mliou}$ carries the structure of a Hausdorff smooth manifold with corners such that $\pi$ is a smooth submersion.
\item\label{item. equivalence symplectic structure} $\cN_{\partial \Mliou}$ carries a canonical symplectic structure, denoted by $\omega_{\cN_{\partial M}}$, as a coisotropic reduction of $\partial \Mliou \subset \Mliou$. \item $(\cN_{\partial M},\omega_{\cN_{\partial M}})$ carries a canonical Liouville one-form $\lambda_{\cN_{\partial M}}$ induced from the convexity hypothesis on $\partial_\infty M \cap \partial M \subset \partial_\infty M$. \item\label{item.equivalence commutative diagram} We have a commutative diagram \eqn\label{eq:diagram} \xymatrix{\partial \Mliou \ar[d]^\pi \ar[r]^{\Psi} & F \times {\mathbb R} \ar [d]^{\pi_F}\\ \cN_{\partial \Mliou} \ar[r]^{\psi} & F } \eqnd where \begin{itemize} \item $\pi$ is a smooth map admitting a smooth section $\sigma: \cN_{\partial \Mliou} \to \partial \Mliou$ which satisfies $\sigma^*\omega_\partial = \omega_{\cN_{\partial M}}$, \item setting $F: = \operatorname{Image} \sigma$, $\psi$ is a Liouville diffeomorphism between $(\cN_{\partial M},\lambda_{\cN_{\partial M}})$ and $(F, \lambda|_F)$ with its induced form, defined by $\psi(\ell) = \sigma(\ell)$ for $\ell \in \cN_{\partial M}$, and \item $ \Psi: \partial \Mliou \to F \times {\mathbb R} $ is a diffeomorphism. \end{itemize} \end{enumerate} \end{theorem} We refer to Section \ref{sec:intrinsic} for the precise description of the dependence of the various structures and maps on the choice of section $\sigma$. The following can also be derived in the course of proving the above theorem. (In fact the argument deriving this proposition is nearly identical to that of the proof of \cite[Lemma 2.5]{gps}.) \begin{prop}\label{prop:equivalence-intro} Let $(M,\lambda)$ be a Liouville $\sigma$-sector. Then \begin{enumerate} \item Each choice of a smooth section $\sigma$ of $\pi$ and of a constant $0 < \alpha \leq 1$ canonically provides a smooth function $I: \partial M \to {\mathbb R}$ such that $Z(I) = \alpha I$, \item There is a germ of neighborhoods $\nbhd(\partial M)$ (unique up to a symplectomorphism fixing $\partial M$) on which the natural extension of $I$, still denoted by $I$, admits a unique function $R: \nbhd(\partial M) \to {\mathbb R}$ satisfying $\{R,I\} = 1$. \end{enumerate} \end{prop} \subsection{Solution to \cite[Question 2.6]{gps}: presymplectic geometry versus Liouville geometry} Another interesting consequence, when combined with Gotay's normal form theorem for neighborhoods of coisotropic submanifolds, is the following affirmative answer to a question raised by Ganatra-Pardon-Shende in \cite{gps}. \begin{theorem}[Theorem \ref{thm:GPS-question}; Question 2.6 \cite{gps}]\label{thm:GPS-question-intro} Suppose $M$ is a Liouville manifold-with-boundary such that \begin{enumerate} \item the Liouville vector field is tangent to $\partial M$ at infinity, and \item there is a diffeomorphism $\partial M = F \times {\mathbb R}$ sending the characteristic foliation to the foliation by leaves $\{p\} \times {\mathbb R}$. \end{enumerate} Then $\partial_\infty M \cap \partial M$ is convex in $\partial_\infty M$. In particular $M$ is a Liouville sector in the sense of \cite{gps}. \end{theorem} The main task is to construct a contact vector field transverse to the ceiling corner $$ \partial_\infty M \cap \partial M =: F_\infty $$ in the contact manifold $\partial_\infty M$.
The subtlety of the proof lies in the question of how to reconcile the difference between the two flows given on $\partial M$ at infinity, the characteristic flow and the Liouville flow: the former arises from the presymplectic geometry of $\partial M$, while the latter arises from the Liouville geometry of $\partial_\infty M$. The theorem claims that the presymplectic geometry of the characteristic foliation constrains the Liouville geometry at infinity, i.e., triviality of the characteristic foliation of $\partial M$ implies convexity of the intersection $$ \partial_\infty M \cap \partial M \subset \partial_\infty M $$ at infinity. We construct the aforementioned contact vector field by utilizing Gotay's normal form theorem for neighborhoods of $\partial M \subset M$; for the details we refer readers to the proof of Theorem \ref{thm:GPS-question} in Section \ref{sec:GPS-question}. The following equivalence theorem is an immediate corollary of Theorem \ref{thm:equivalence-intro} and Theorem \ref{thm:GPS-question-intro}. \begin{theorem}\label{thm:GPS-equivalence-intro} Let $(M,\lambda)$ be a Liouville manifold with boundary. Suppose the Liouville vector field $Z$ of $\lambda$ is tangent to $\partial M$ at infinity. Then the following are equivalent: \begin{enumerate} \item $(M,\lambda)$ is a Liouville sector in the sense of \cite{gps}. \item $(M,\lambda)$ is a Liouville $\sigma$-sector. \item There is a diffeomorphism $\partial M = F \times {\mathbb R}$ sending the characteristic foliation to the foliation by leaves $\{p\} \times {\mathbb R}$. \end{enumerate} \end{theorem} \begin{remark}[Liouville sectors as a $G$-structure] In particular, the second characterization provides a natural characterization of Liouville sectors in the spirit of $G$-structures, purely in terms of the presymplectic geometry of $\partial M$, together with a small amount of Liouville geometry on a neighborhood $\nbhd(\partial_\infty M \cap \partial M)$ of the corner in $M$, where the Liouville vector field $Z$ is required to be tangent to $\partial M$. The other defining data of Liouville sectors follow therefrom as `properties'. \end{remark} \subsection{Clean coisotropic collections and Liouville $\sigma$-sectors with corners} The definition of Liouville $\sigma$-sector can be extended to the case with corners. The definition of Liouville $\sigma$-sectors with corners strongly relies on the general intrinsic geometry of clean coisotropic collections. The study of this geometry in turn strongly relies on coisotropic calculus and on Gotay's coisotropic embedding theorem of general \emph{presymplectic manifolds} \cite{gotay}. \begin{defn}[Clean coisotropic collection]\label{defn:clean coisotropic collection-intro} Let $(M,\lambda)$ be a Liouville manifold with corners. Let $H_1, \ldots, H_m \subset M$ be a collection of hypersurfaces, $Z$-invariant at infinity, that satisfies: \begin{enumerate} \item\label{item. clean intersection sectorial} The $H_i$ intersect cleanly, \item\label{item. coisotropic sectorial} All pairwise intersections $H_i \cap H_j$ are coisotropic. \end{enumerate} \end{defn} Denote the associated codimension $m$ corner by $$ C = H_1 \cap \cdots \cap H_m $$ and by $\cN_C$ the leaf space of the null-foliation of the coisotropic submanifold $C$.
Then we prove that, for each choice of sections $\sigma=\{\sigma_1, \cdots, \sigma_m\}$, \begin{itemize} \item there is a natural fiberwise ${\mathbb R}^m$-action on $C$ which is a simultaneous linearization of the characteristic flows of the sectorial hypersurfaces $H_i$, and \item each fiber is diffeomorphic to ${\mathbb R}^m$, by utilizing the standard construction of action-angle variables from integrable systems. \end{itemize} (See \cite{arnold:mechanics} and Corollary \ref{cor:contractible-fiber} for the relevant discussion.) This leads us to the final definition of Liouville $\sigma$-sectors with corners. \begin{defn}[Liouville $\sigma$-sectors with corners]\label{defn:intrinsic-corners-intro} Let $M$ be a manifold with corners equipped with a Liouville one-form $\lambda$. We call $(M,\lambda)$ a \emph{Liouville $\sigma$-sector with corners} if at each corner $\delta$ of $\partial M$, the corner can be expressed as $$ C_\delta := H_{\delta,1} \cap \cdots \cap H_{\delta,m} $$ for a clean coisotropic collection $\{H_{\delta,1}, \cdots, H_{\delta,m}\}$ such that each fiber of the canonical projection $$ \pi_{C_\delta}: C_\delta \to \cN_{C_\delta} $$ is contractible. We call such a corner a \emph{$\sigma$-sectorial corner of codimension $m$}. \end{defn} We will show that each choice of $\sigma$ canonically provides equivariant splitting data $$ (F, \{(R_i,I_i)\}_{i=1}^m), \quad d\lambda = \omega_F \oplus \sum_{i=1}^m dR_i \wedge dI_i $$ on $\nbhd(C_\delta) \cong F \times {\mathbb C}_{\opname{Re} \geq 0}^m$ for $\sigma$-sectorial corners, equipped with the Hamiltonian ${\mathbb R}^m$-action whose moment map is precisely the coordinate projection $$ \nbhd(C) \to {\mathbb R}_{\geq 0}^m; \quad x \mapsto (R_1(x), \ldots, R_m(x)). $$ (See Theorem \ref{thm:splitting-data-corners} for the precise statement.) We also prove the following equivalence result. \begin{theorem} Definition \ref{defn:intrinsic-corners-intro} is equivalent to the definition of Liouville sectors with corners from \cite{gps-2}. \end{theorem} We refer to Definition \ref{defn:sectorial-collection} for the comparison between Definition \ref{defn:intrinsic-corners-intro} and the definition of Liouville sectors with corners from \cite{gps-2}. \subsection{Automorphism group of Liouville $\sigma$-sectors with corners} Thanks to Theorem \ref{thm:equivalence-intro} or Theorem \ref{thm:GPS-equivalence-intro}, our definition of Liouville $\sigma$-sectors with corners enables us to give a natural notion of Liouville automorphisms of the Liouville sectors of \cite{gps}, one which is similar to the case without boundary and which does not depend on the choice of the auxiliary defining functions $I$ that appear in the original definition of \cite[Definition 2.4]{gps}. We start with the following observation. \begin{lemma}[Lemma \ref{lem:phi-preserving-cD}]\label{lem:presymplectic-bdy} \label{lem:phi-preserving-cD-intro} Fix a diffeomorphism $\phi: (M, \partial M)\to (M, \partial M)$ and suppose $\phi^*\lambda = \lambda +df$ for a function $f: M \to {\mathbb R}$, not necessarily compactly supported. Then the restriction $\phi|_{\partial M} = \phi_\partial: \partial M \to \partial M$ is a presymplectic diffeomorphism, i.e., satisfies $\phi_\partial^*\omega_\partial = \omega_\partial$. In particular, it preserves the characteristic foliation of $\partial M$.
\end{lemma} \begin{remark}\label{rem:stratawise-presymplectic} Recall that a manifold with corners $X$ is (pre)symplectic if there is a stratawise (pre)symplectic form $\omega$, i.e., a collection of (pre)symplectic forms $$ \{\omega_\alpha\}_{\alpha \in I} $$ that is compatible under the canonical inclusion maps of strata $$ \iota_{\alpha\beta}: X_\alpha \hookrightarrow X_\beta, \quad \alpha < \beta, $$ i.e., $\omega_\alpha = \iota_{\alpha\beta}^*\omega_\beta$. Here $I$ is the poset that indexes the strata of the stratified manifold $X$. By definition, a diffeomorphism between two manifolds with corners preserves the dimensions of the strata. \end{remark} Lemma \ref{lem:presymplectic-bdy} enables us to define the ``structure'' of Liouville $\sigma$-sectors (Definition \ref{defn:geometric-structure}), and to identify its automorphism group $\aut(M,\lambda)$ in the same way as for the Liouville manifold case. \begin{defn}[Automorphism group $\aut(M,\lambda)$] Let $(M,\lambda)$ be a Liouville $\sigma$-sector, possibly with corners. We call a diffeomorphism $\phi: (M,\partial M) \to (M,\partial M)$ a Liouville automorphism if $\phi$ satisfies $$ \phi^*\lambda = \lambda + df $$ for a compactly supported function $f: M \to {\mathbb R}$. We denote by $\aut(M,\lambda)$ the set of automorphisms of $(M,\lambda)$. \end{defn} Obviously $\aut(M,\lambda)$ forms a topological group which is a subgroup of $\symp(M,d\lambda)$, the group of symplectic diffeomorphisms of $(M,d\lambda)$. \subsection{Monoid of Liouville $\sigma$-sectors with corners} \label{subsec:sectors-with-corners-intro} In \cite{oh:sectorial}, the present author introduced the notion of \emph{$\lambda$-sectorial} packages in the study of Floer theory on Liouville sectors, where the Lagrangian branes entering the construction of the Fukaya category are still assumed to be $Z$-invariant-at-infinity Lagrangian submanifolds, as is common in the literature, e.g., \cite{abouzaid-seidel}, \cite{gps}. In the present paper, we introduce a different class of Lagrangian submanifolds, called \emph{gradient-sectorial Lagrangian submanifolds}, with respect to which this new Floer package is compatible with the aforementioned monoidal structure of Liouville $\sigma$-sectors with corners. (See Section \ref{sec:monoid} for the detailed description of this class of Lagrangian branes.) We anticipate that this new package will facilitate the study of K\"unneth-type formulae in the wrapped Fukaya category, a study which we postpone elsewhere. (See \cite[Conjecture 3.40, Conjecture 4.39]{gps} for relevant conjectures concerning this monoidality issue.) We show that in our new framework the product $L_1 \times L_2$ of gradient-sectorial Lagrangians is automatically an object in the product $M_1 \times M_2$ of two Liouville ($\sigma$-)sectors with corners, \emph{without making any deformation}. For example, we have a natural inclusion map $$ \text{\rm Ob}(\Fuk(X,\omega_X)) \times \text{\rm Ob}(\Fuk(Y,\omega_Y)) \hookrightarrow \text{\rm Ob}(\Fuk(X\times Y, \omega_X \oplus \omega_Y)) $$ on the nose. (See Definition \ref{defn:gradient-sectorial-Lagrangian} and Theorem \ref{thm:product-brane}.) \begin{remark}\label{Z-invariant-branes} The standard definition of $Z$-invariant-at-infinity branes for Liouville sectors is \emph{not} monoidal in any obvious sense.
For example, the product $L_1 \times L_2$ is not $(Z_X\oplus Z_Y)$-invariant for a $Z_X$-invariant-at-infinity brane $L_1$ of $X$ and a $Z_Y$-invariant-at-infinity brane $L_2$ of $Y$. Because of this, one must take the following deformation process, which has been used in the literature (see \cite{groman,gao,gps,gps-2}, especially \cite[Section 6]{gps-2}, for a detailed explanation of this procedure in relation to the construction of a K\"unneth-type embedding): \begin{itemize} \item take a corner smoothing of the corner $\partial_\infty M \cap \partial M$, \item take a deformation of the product Liouville form $\lambda_1 \times \lambda_2$ and of the product vector field $Z_1 \times Z_2$ to the associated Liouville vector field, \item then deform $L_1 \times L_2$ to a $Z$-invariant-at-infinity Lagrangian on $X \times Y$. \end{itemize} \end{remark} This whole deformation process will not be needed in our \emph{gradient-sectorial framework}: \emph{the process is already subsumed in the sectorial package introduced in \cite{oh:sectorial} and augmented here}. We modify the package from \cite{oh:sectorial} so that it respects the aforementioned monoidal structure of manifolds with corners. For the study of this monoidality property of the wrapped Fukaya category, our sectional characterization of Liouville sectors makes it natural to take the product of Liouville sectors with corners. The following is an easy consequence of our definition of Liouville $\sigma$-sectors with corners. \begin{prop}[Proposition \ref{prop:product-bulk}] The collection of Liouville $\sigma$-sectors with corners naturally forms a commutative monoid under direct product (with commutativity and associativity holding up to natural isomorphism). \end{prop} The discussion in the present subsection and henceforth applies equally to the definition of Liouville sectors of \cite{gps} and to that of the Liouville $\sigma$-sectors introduced in the present paper. We will mainly work with our definition of Liouville $\sigma$-sectors, unless otherwise mentioned. \subsection{Sectorial almost complex structures and gradient-sectorial Lagrangians} Now we propose a class of \emph{gradient-sectorial} Lagrangian branes to form the objects of a wrapped Fukaya category $\Fuk(M)$. (Our notation for $\Fuk(M)$ suppresses the dependence on $\lambda$ and on our choice of working with gradient-sectorial Lagrangians instead of $Z$-invariant-at-infinity ones.) For this purpose, we first equip $(M,\lambda)$ with splitting data \eqn\label{eq:splittng-data} \nbhd(\partial M) \cong F \times {\mathbb C}^k_{\text{\rm Re}\geq 0}, \quad \{(R_i,I_i)\} \eqnd and an \emph{end-profile function} $$ \frak s:= \frak s_{\varphi} $$ which is introduced in \cite{oh:sectorial}. (See also Subsection \ref{subsec:smoothing-profiles} of the present paper for a brief description of $\frak s_{\varphi}$.) \begin{remark} We will show in Theorem \ref{thm:splitting-data-corners} that this data itself is canonically induced from the choice of sections $\sigma = \{\sigma_1, \ldots, \sigma_k\}$ for the clean coisotropic collection $$ \{H_1, \cdots, H_k\} $$ of $\sigma$-sectorial hypersurfaces $H_i$. We call this canonical splitting data the \emph{$\sigma$-splitting data}.
\end{remark} Recall from \cite[Section 3]{oh:sectorial} that $\frak s_\varphi$ is a collection of functions \eqn\label{eq:endprofile} s_{k_\delta +1,\varphi_\delta} = -\log \varphi(R_{\delta,1}, \cdots, R_{\delta, k}, e^{-s}) \eqnd associated to each sectorial corner $\delta$ of $M$, which are glued by a partition of unity on $M$. Here $\varphi: {\mathbb R}^{k+1}_{\geq 0} \to {\mathbb R}_{\geq 0}$ is a convex corner-smoothing function. (See \cite{fooo:book-virtual}, \cite{oh:sectorial} for the details.) An upshot of this function $\frak s_{\varphi}$ is that it provides a pseudoconvex pair $(\frak s_{\varphi}, J)$ on $\nbhd(\partial_\infty M \cup \partial M)$ for any \emph{sectorial almost complex structure} $J$. (See \cite{oh:sectorial} for a detailed discussion of the notion of pseudoconvex pairs.) The splitting data given above also provides $\nbhd(\partial M)$ with a foliation $\cF_F$ whose leaves are given by \eqn\label{eq:cFF} \cF_F: \quad F \times \{(R,I)\}, \quad (R,I) = (R_1 + \sqrt{-1} I_1, \cdots, R_k + \sqrt{-1} I_k) \in {\mathbb C}^k. \eqnd \newenvironment{corner-J-intro}{ \renewcommand*{\theenumi}{(J\arabic{enumi})} \renewcommand*{\labelenumi}{(J\arabic{enumi})} \enumerate }{ \endenumerate } \begin{defn}[Sectorial almost complex structures]\label{defn:weakly-sectorial-J-intro} Let $(M,\lambda)$ be a Liouville sector with boundary and corners equipped with splitting data and an end-profile function ${\frak s}_\varphi$. An $\omega$-tame almost complex structure $J$ on a Liouville sector is said to be \emph{sectorial} (with respect to the given smoothing profile) if $J$ satisfies the following: \begin{corner-J-intro} \item {\textbf{[$\cF_F$ is $J$-complex]}}\label{item. piF is holomorphic-intro} In a neighborhood $\nbhd(\partial \Mliou)$ of $\partial \Mliou$, we require \eqn\label{eq:J-versus-JF} J\left(T^*F \oplus 0_{\text{\rm span}\{dR_i, dI_i\}_{i=1}^k}\right) \subset T^*F \oplus 0_{\text{\rm span}\{dR_i, dI_i\}_{i=1}^k}, \eqnd and $J$ restricts to an almost complex structure of contact-type on $F$. \item {\textbf{[$({\mathfrak s}_\varphi,J)$ is a pseudoconvex pair]}}\label{item. ds is dual to lambda_kappa-intro} In a neighborhood $\nbhd^Z(\partial \Mliou \cup \partial_\infty M)$ of $\partial \Mliou \setminus \nbhd(\partial_\infty M)$, we have $$ -d(d{\mathfrak s}_{\varphi} \circ J) \geq 0 $$ as a $(1,1)$-current. \end{corner-J-intro} We denote by $\cJ^{\text{\rm sec}}_{{\mathfrak s}_\varphi} = \cJ^{\text{\rm sec}}_{{\mathfrak s}_\varphi}(M)$ the set of sectorial almost complex structures. \end{defn} \begin{remark} Obviously any almost complex structure $J$ satisfying \ref{item. piF is holomorphic-intro} is sectorial if it satisfies $$ -d{\mathfrak s}_{\varphi} \circ J = \lambda + df $$ for some function $f$, \emph{not necessarily compactly supported.} For example, the \emph{$\kappa$-sectorial} or \emph{$\lambda$-sectorial} almost complex structures considered in \cite{oh:sectorial} are sectorial. This in particular shows that sectorial almost complex structures exist in abundance. The more complicated notion of \emph{$\lambda$-sectorial} almost complex structures is needed to make the usual \emph{$Z$-invariant-at-infinity Lagrangians} amenable to the \emph{strong maximum principle}: Recall from \cite{oh:sectorial} that a pseudoconvex pair $(\psi,J)$ is called \emph{Liouville-pseudoconvex} if it satisfies the stronger condition $$ -d\psi \circ J = \lambda $$ in place of \ref{item. ds is dual to lambda_kappa-intro}.
This duality requirement for $(-d\psi, \lambda)$ suffices to show that the $Z$-invariant-at-infinity Lagrangian submanifolds are amenable to the strong maximum principle. (See \cite[Section 11]{oh:sectorial} for complete details of the construction of such a pair $(\psi,J)$.) \end{remark} Now we introduce the notion of \emph{gradient-sectorial Lagrangians} with respect to the end-profile function $\frak s = \frak s_{\varphi}$; such Lagrangians are amenable to the strong maximum principle with respect to sectorial almost complex structures for the given end-profile function $\frak s$. We consider the normalized gradient vector field \eqn\label{eq:Z-fraks} Z_{\frak s}: = \frac{\opname{grad} \frak s}{|\opname{grad} \frak s|^2}, \eqnd which satisfies $d\frak s(Z_{\frak s}) = 1$, with respect to the usual metric $$ g_J(v, w): = \frac{d\lambda(v, J w) + d\lambda(w,Jv)}{2}. $$ \begin{defn}[Gradient-sectorial Lagrangian branes]\label{defn:gradient-sectorial-Lagrangian-intro} Let $(M,\lambda)$ be a Liouville sector with corners. Let ${\frak s}$ be the end-profile function associated to a given smoothing profile. We say that an exact Lagrangian submanifold $L$ of $(M,\omega)$ is \emph{gradient-sectorial} if \begin{enumerate} \item $L \subset \operatorname{Int} M \setminus \partial M$ and $\text{\rm dist}(L,\partial M) > 0$. \item There exists a sufficiently large $r_0> 0$ such that $L \cap {\frak s}^{-1}([r_0,\infty))$ is $Z_{\mathfrak s}$-invariant, i.e., $Z_{\mathfrak s}$ is tangent to $L \cap {\frak s}^{-1}([r_0,\infty))$. \end{enumerate} \end{defn} Then in Theorem \ref{thm:unwrapped} we prove the confinement results (and hence Gromov compactness) for $J$-holomorphic curves with boundaries on gradient-sectorial Lagrangians for {\em sectorial} almost complex structures, by proving that such $J$-holomorphic curves are amenable to the strong maximum principle in terms of the end-profile function ${\mathfrak s}_\varphi$. We establish that the product Lagrangian $L_1 \times L_2$ of two gradient-sectorial Lagrangian submanifolds $L_1$ and $L_2$ can itself (not just up to isotopy, as in \cite{gps,gps-2,gao}) be used as a boundary condition for holomorphic curves, in other words, as an object of an appropriate wrapped Fukaya category. \begin{theorem}[Theorem \ref{thm:product-brane}]\label{thm:sectorial-product-intro} Let $(X,\omega_X)$ and $(Y,\omega_Y)$ be Liouville $\sigma$-sectors. Then $L_1 \times L_2$ is gradient-sectorial if both $L_1$ and $L_2$ are gradient-sectorial. \end{theorem} We refer readers to Section \ref{sec:monoid} for a more detailed discussion of the product. \bigskip {\bf Acknowledgments:} The present work is supported by the IBS project IBS-R003-D1. We would like to thank Hiro Lee Tanaka for his collaboration on the study of Liouville sectors and for useful comments on a preliminary draft of the present work. \bigskip \noindent{\bf Conventions:} \begin{itemize} \item Hamiltonian vector field $X_H$: $X_H \rfloor \omega = dH$, \item Canonical one-form $\theta_0$ on $T^*Q$: $\theta_0 = \sum_{i=1}^n p_i dq_i$, \item Canonical symplectic form $\omega_0$ on $T^*Q$: $\omega_0 = d(-\theta) = \sum_{i=1}^n {dq_i \wedge dp_i}$, \item Liouville one-form on $(T^*Q, \omega_0)$: $\lambda = -\theta= -\sum_{i=1}^n p_i dq_i$, \item Symplectization $SC$ of contact manifold $(C,\theta)$: $SC = C \times {\mathbb R}$ with $\omega = d(e^s \pi^*\theta)$. Here note that \emph{we write the ${\mathbb R}$-factor after the $C$-factor}. \item Contact Hamiltonian: The contact Hamiltonian of a contact vector field $X$ on a contact manifold $(M,\theta)$ is given by $-\theta(X)$.
(See \cite{oh:contacton-Legendrian-bdy} for the same convention adopted in the general framework of contact dynamics.) \end{itemize} \bigskip \noindent{\bf Notations:} \begin{itemize} \item $\partial_\infty M$: the asymptotic boundary of $M$. \item $\overline M$: the completion of $M$, which is $M \coprod \partial_\infty M$. \item $DM$: the union $\partial_\infty M \cup \partial M$ in $\overline M$. \item $F_\infty: = \partial_\infty M \cap \partial M$: the ideal boundary of $\partial M$. \item $\partial_\infty^{\text{\rm Liou}}M$: the ideal boundary of a Liouville manifold $M$ (or sector). \item $\aut(M,\lambda)$: The group of Liouville diffeomorphisms of a Liouville $\sigma$-sector $(M,\lambda)$. \item $\omega_\partial = d\lambda_\partial$: The induced presymplectic form on $\partial M$ with $\lambda_\partial := \iota^*\lambda$. \item $\aut(\partial M,\lambda_\partial)$: The group of pre-Liouville diffeomorphisms of the exact presymplectic manifold $(\partial M,d\lambda_\partial)$. \item $H$: a $\sigma$-sectorial hypersurface $H \subset M$. \item $H_\infty = \partial_\infty M \cap H$: the ideal boundary of $H$. \end{itemize} \section{Sectional characterization of sectorial hypersurfaces}\label{sec:intrinsic} We start with the case without corners but with nonempty boundary $\partial M$, postponing the study of the case with corners till Section \ref{sec:sectors-with-corners}. For comparison, we recall the definition of Liouville sectors in \cite{gps}. In fact we will consider the definition of sectorial hypersurfaces in \cite[Definition 9.2]{gps-2} and restrict it to the sectorial boundary of a Liouville domain. To facilitate our exposition, we utilize Giroux's notion of the \emph{ideal completion} of a Liouville domain $(M,\lambda)$. \begin{defn}[Ideal completion $\overline M$] \cite{giroux} \begin{enumerate} \item An \emph{ideal Liouville domain} $(W,\omega)$ is a domain endowed with an ideal Liouville structure $\omega$. \item An \emph{ideal Liouville structure} is an exact symplectic form on $\operatorname{Int} W$ admitting a primitive $\theta$ such that: for some (and then any) function $u:W \to {\mathbb R}_{\geq 0}$ with regular level set $\partial_\infty W = \{u=0\}$, the product $u \theta$ extends to a smooth one-form on $W$ which induces a contact form on $\partial W$. \item When a Liouville manifold $(M,\lambda)$ is Liouville isomorphic to $(\operatorname{Int} W,\theta)$, we call $W$ the ideal completion of $M$ and denote it by $\overline M$. \end{enumerate} \end{defn} \begin{remark} First, this definition provides a natural topology and smooth structure on the completion $\overline M$, and a Liouville structure on $M(=\operatorname{Int} W)$ as an open Liouville manifold. Second, it also provides a natural class of Liouville diffeomorphisms on $M$ as the restrictions of diffeomorphisms of $\overline M = W$. (See \cite{giroux}.) \end{remark} For a (noncompact) Liouville manifold $(M,\lambda)$ (without boundary), its ideal boundary, denoted by $\partial_\infty M$, is defined to be the set of asymptotic rays of the Liouville vector field $Z$. Then the \emph{ideal completion} is the coproduct $$ \overline M = M \coprod \partial_\infty M $$ equipped with the obvious topology. When $(M,\lambda)$ is a Liouville sector with boundary $\partial M$, its ideal boundary is still well-defined by the $Z$-invariance requirement at infinity put on $\partial M$ in the definition of Liouville sectors \cite{gps}, and so is its completion $\overline M$.
Then we have the formula for the topological boundary $$ \partial \overline M = \partial_\infty M \cup \partial M. $$ To ease our exposition, we often abuse our notation $$ DM: = \partial_\infty M \cup \partial M $$ for the coproduct $\partial_\infty M \coprod \partial M$ after the present section, as long as there is no danger of confusion. Likewise we also abuse notation like $$ \partial_\infty M \cap H $$ for the ideal boundary of a $\sigma$-sectorial hypersurface $H$, where the intersection is actually taken as a subset of $\overline M$. For simplicity of notation, we will also use $$ H_\infty: = \partial_\infty M \cap H, $$ similarly to how we denoted $F_\infty = \partial_\infty M \cap \partial M$ when $H = \partial M$. This being said, the following definition is nothing but \cite[Definition 9.2]{gps-2} in the case where the sectorial collection there consists of a single element. \begin{defn}[See Definition 9.2 \cite{gps-2}]\label{defn. sectorial hypersurface-gps} A \emph{sectorial hypersurface} $H \subset M$ is a hypersurface in a Liouville manifold-with-boundary $M$ satisfying the following equivalent conditions: \begin{itemize} \item For some $\alpha> 0$, there exists a function $I: H \to {\mathbb R}$ with $ZI = \alpha I$ and $dI_{\cD} > 0$. \item For every $\alpha> 0$, there exists a function $I: H \to {\mathbb R}$ with $ZI = \alpha I$ and $dI_{\cD} > 0$. \item The ideal boundary $\partial_\infty \Mliou \cap H$ of $H$ is convex in $\partial_\infty M$ and there is a diffeomorphism $\Psi_H: H \to F_H \times {\mathbb R}$ sending the characteristic foliation of $H$ to the foliation of $F_H \times {\mathbb R}$ by leaves $\{p\} \times {\mathbb R}$ for some smooth manifold $F_H$. \end{itemize} \end{defn} \subsection{Definitions of $\sigma$-sectorial hypersurfaces and Liouville $\sigma$-sectors} Here we give another, more intrinsic, definition of sectorial hypersurfaces. The definition is intrinsic in that it is closer to one in the spirit of $G$-structures (with an integrability condition), not involving the defining function $I$: the existence of the function $I$ appearing in the definition of Liouville sectors in \cite{gps} is now a `property', not `defining data', of a Liouville $\sigma$-sector in our definition. (We refer readers to \cite[Chapter VII]{sternberg} for a nice introduction to the geometry of $G$-structures.) \begin{defn}[$\sigma$-sectorial hypersurface]\label{defn:sectorial hypersurface} Let $(M, \lambda)$ be a Liouville manifold with boundary (without corners). Let $H \subset M$ be a smooth hypersurface such that its completion $\overline H$ has the union $$ (\partial_\infty M \cap \overline H)\cup (\overline H \cap \partial M) = : \partial_\infty H \cup \partial \overline H $$ as its (topological) boundary. $H$ is a \emph{$\sigma$-sectorial hypersurface} if it satisfies the following: \begin{enumerate} \item $Z$ is tangent to $H$ at infinity, \item $H_\infty (= \partial_\infty H) = \partial_\infty M \cap H \subset \partial_\infty M$ is a convex hypersurface of the contact manifold $\partial_\infty M$, \item The canonical projection map $\pi:H \to \cN_H$ has a continuous section and each of its fibers is homeomorphic to ${\mathbb R}$. \end{enumerate} \end{defn} By applying the notion of $\sigma$-sectorial hypersurface to the boundary $\partial M \subset M$, we obtain the following definition.
\begin{defn}[Liouville $\sigma$-sector]\label{defn:liouville sector} Let $M$ be a noncompact manifold with boundary such that its completion $\overline M$ has (topological) boundary given by the union $$ \partial_\infty M \cup \partial M = DM $$ and $\partial_\infty M \cap \partial M$ is the codimension two corner of $\overline M$. $M$ is called a \emph{Liouville $\sigma$-sector} if its boundary $\partial M \subset M$ is a $\sigma$-sectorial hypersurface in the sense of Definition \ref{defn:sectorial hypersurface}. \end{defn} To avoid confusion with the corners in $\partial M$, we call the intersection $$ \partial_\infty M \cap \partial M $$ the \emph{ceiling corner}. This is the codimension 2 corner of the ideal completion $\overline M$ of $M$. (We will call the genuine corners of $M$ the \emph{sectorial corners} in Section \ref{sec:sectors-with-corners} when we consider Liouville sectors with corners.) \subsection{Preliminaries} We start with the well-known fact that each hypersurface $H \subset \Mliou$ in a symplectic manifold $(\Mliou,\omega)$ carries the canonical characteristic foliation $\cD$. The definition of this foliation is based on the fact that any hypersurface $H$ of $(M,\omega)$ is a \emph{coisotropic} submanifold, in that \begin{enumerate} \item We have $$ (T_x H)^{\omega_x} \subset T_x H $$ for any $x \in H$, where $(T_x H)^{\omega_x}$ is the $\omega_x$-orthogonal complement $$ (T_x H)^{\omega_x}: = \{v \in T_x M \mid \omega_x(v,w) = 0 \, \forall w \in T_xH\}. $$ \item With $\iota_H: H \to M$ the inclusion map, $$ \ker \iota_H^*\omega_x:= \{v \in T_x H \mid \omega_x(v, w) = 0 \, \forall w \in T_xH\} $$ has constant rank 1 for all $x \in H$. \end{enumerate} Then we denote by $\cD = \ker \iota_H^*\omega$ the resulting 1-dimensional (integrable) distribution on $H$, and call it the characteristic distribution or the null distribution of $H$. We denote by $\cN_H$ the leaf space of the associated foliation. It is also well-known that $\cD$ carries a transverse symplectic structure which induces one on the leaf space \eqn\label{eq:NH} \cN_{H}: = H /\sim \eqnd chart-wise. With slight abuse of notation, we will also denote by $\cD$ the associated foliation. Of course, the quotient topology of a leaf space may not be Hausdorff in general. We will show that under the conditions laid out in Definition~\ref{defn:liouville-sector-intro}, the aforementioned transverse symplectic form, as well as a smooth structure, descends to the leaf space. For the rest of this section, we always assume that $\Mliou$ is a Liouville $\sigma$-sector as in Definition~\ref{defn:liouville-sector-intro}, unless otherwise said. \subsubsection{Orientations} We recall $$ H_\infty = \partial_\infty \Mliou \cap H. $$ At each point $x \in H \cap \nbhd(\partial_\infty M) \supset H_\infty$, we have a natural exact sequence \eqn\label{eq:D-orientation} 0 \to \cD_x \to T_x H \to T_x H/\cD_x \to 0. \eqnd The quotient carries a canonical symplectic bilinear form and so carries a natural symplectic orientation. \begin{choice}[Orientation of $\cD$] Let $H \subset M$ be a $\sigma$-sectorial hypersurface. Make a choice of orientation on the trivial line bundle $\cD \to H$. \end{choice} \begin{defn}[Presymplectic orientation on $H$]\label{defn:presymplectic-or} Let $\cD \to H$ be given an orientation $o_{\cD}$ on a neighborhood of $H_\infty$ in $\partial_\infty M$.
We call the orientation on $TH|_{H_\infty}$ given by the direct sum orientation $$ T_x H|_{H_\infty} = (T_x H/\cD_x) \oplus \cD_x $$ the \emph{presymplectic orientation} of $H$ relative to $o_{\cD}$. \end{defn} When we are given a section $\sigma: \cN_H \to H$ such that $\operatorname{Image} \sigma = H_N$ sufficiently far out near $\partial_\infty M$, this orientation coincides with the orientation of $T_xH$ induced by the exact sequence \eqn\label{eq:presymplectic-or} 0 \to T_x H_N \to T_xH \to \cD_x \to 0 \eqnd of oriented vector spaces. (We alert readers that the presymplectic orientation of $H$ is \emph{not} the one naturally given by the exact sequence \eqref{eq:D-orientation}, but rather the one arising from \eqref{eq:presymplectic-or}.) \begin{defn}[Asymptotic boundary orientation of $H_\infty \subset \overline H$]\label{defn:boundary-or} Regard $$ H_\infty = \partial_\infty M \cap H = \partial_\infty H $$ as the asymptotic boundary of $H$, where $H$ is equipped with the \emph{presymplectic orientation} relative to $o_{\cD}$. We call this orientation of $H_\infty$ the \emph{asymptotic boundary orientation} in $H$. \end{defn} The orientation on $H_\infty$ given in Definition \ref{defn:boundary-or} may or may not coincide with the symplectic orientation of $H_\infty$. \begin{defn}\label{defn:Finfty-decompose} Equip $\cD$ with an orientation $o_\cD$ and in turn orient $H$ by the presymplectic orientation. The intersection $H_\infty$ is decomposed into $$ H_\infty = H_\infty^+ \coprod H_\infty^- $$ such that the asymptotic boundary orientation of $H_\infty \subset \overline H$ coincides with the symplectic orientation on $H_\infty^+$ and not on $H_\infty^-$. \end{defn} This discussion leads us to the following: \begin{defn}\label{defn:end-boundary} Let $\Mliou$ be a Liouville $\sigma$-sector and $H$ a $\sigma$-sectorial hypersurface. \begin{enumerate} \item We call $H_\infty = \partial_\infty M \cap H$ the {\em end} of $H$. \item Let $\cD$ be equipped with an orientation $o_\cD$. We call $H_\infty^\pm$ the \emph{positive (resp. negative) end} of $H$ and the \emph{positive (resp. negative) boundary} of $\partial_\infty \Mliou$ with respect to $o_{\cD}$. \end{enumerate} \end{defn} We illustrate the decomposition in Definition \ref{defn:Finfty-decompose} in the case of the sectorial boundary $\partial M$ of a Liouville sector $M$. \begin{example}[Boundary orientation for $H = \partial M$] \label{exam:H=delM} In the special case of the boundary $H = \partial M$ of a Liouville sector, $\partial M$ itself carries the canonical boundary orientation of the symplectic manifold $(M, d\lambda)$, and hence a natural orientation on $\cD_{\partial M}$ induced by the short exact sequence \eqref{eq:presymplectic-or}, $$ 0 \to T_xF_\infty \to T_x(\partial M)|_{F_\infty} \to \cD_{\partial M}|_x \to 0. $$ This in turn induces the presymplectic orientation on $\partial M$ and the asymptotic boundary orientation on $F_\infty$. \end{example} \begin{example}[{$F^\pm_\infty$ on $T^*[0,1]$}] Now consider the case of the cotangent bundle $M = T^*[0,1]$ of the closed interval $[0,1]$ equipped with the Liouville form \eqn\label{eq:Liouville-form} \lambda = - p\, dq. \eqnd (This is the negative of the standard Liouville one-form $pdq$ on the cotangent bundle.) The standard orientation of the interval induces a diffeomorphism $\Mliou \cong [0,1]_q \times {\mathbb R}_p$ which carries the symplectic orientation induced by the symplectic form $$ dq \wedge dp.
$$ (We alert the readers that this is the negative of the convention $dp \wedge dq$ used by \cite{gps}.) The boundary $\partial \Mliou \cong \{0,1\} \times {\mathcal R}_p$ has two connected components. \emph{The characteristic foliation's orientation is compatible with the vector field $\frac{\partial}{\partial p}$.} Note that the Liouville vector field of the Liouville form \eqref{eq:Liouville-form} on $T^*[0,1] \cong [0,1]_q \times {\mathcal R}_p$ is given by the Euler vector field \eqn\label{eq:Liouville-vector-field} \vec E:= p\frac{\partial}{\partial p} \eqnd on $T^*[0,1]$ which vanishes at $p = 0$. So each leaf $\{q\} \times {\mathcal R}_p$ of the foliation consists of three different orbit sets of the Liouville vector field $$ {\mathcal R}_+ = (0,\infty), \quad \{0\}, \quad {\mathcal R}_- = (-\infty, 0). $$ We may identify $\partial_\infty \Mliou$ with two disjoint copies of $[0,1]$ at ``$p= \pm \infty$.'' $F_\infty$ consists of four points, which we will denote by $(0,\pm\infty)$ and $(1,\pm \infty)$ again using the informal notation allowing $p$ to attain $\pm \infty$. Under this notation, we have that \eqnn F^+_\infty = \{(0,-\infty) , (1,\infty)\}, \qquad\text{and}\qquad F^-_\infty = \{(0,\infty), (1,-\infty)\}. \eqnd \end{example} \begin{example} More generally, let $Q = Q^n$ be a connected $n$-manifold with boundary and let $M = T^*Q$. The inclusion $T(\partial Q) \into TQ$ induces a quotient map $T^*Q|_{\partial Q} \to T^*(\partial Q)$ of bundles on $\partial Q$; the kernel induces the characteristic foliation on $$ T^*Q|_{\partial Q} = \partial \Mliou. $$ Informally: At a point $(q,p) \in \partial M$, the oriented vector defining the characteristic foliation is the symplectic dual to an inward vector normal to $\partial Q$. For example, identifying $Q$ near $\partial Q$ with a half space with final coordinate $q_n \geq 0$, in standard Darboux coordinates $(q,p)$ the characteristic foliation is generated by ${\frac {\partial}{\partial p_n}}$. If $\opname{dim} Q \geq 2$, we have that $F^+_\infty = F_\infty$ (and so $F^-_\infty = \emptyset$), and $F_\infty$ is identified with the restriction to $\partial Q$ of the cosphere-at-infinity bundle of $T^*Q$. \end{example} \subsubsection{Convexity of $H_\infty =\partial_\infty M \cap H$ and choice of contact vector field} Recall that $\partial_\infty M$ is naturally oriented as the ideal boundary of the symplectic manifold $M$ with $Z$ pointing outward along $\partial_\infty M$. We take a contact-type hypersurface $S_0 \subset M$ that is transverse to $H$ and identify a neighborhood $\nbhd(\partial_\infty M)$ with the half $$ S_+(S_0): = S_0 \times [0,\infty) $$ of the symplectization $S(S_0)$ of the contact manifold $(S_0, \iota_{S_0}^*\lambda)$, and decompose $M$ into $$ M = (M \setminus \nbhd(\partial_\infty M)) \cup (S_0 \times [0,\infty)) $$ so that $Z = \frac{\partial}{\partial s}$ for the symplectization form $d(e^s \pi^* \iota_{S_0}^*\lambda)$ of the contact manifold $(S_0, \iota_{S_0}^*\lambda)$ on $\nbhd(\partial_\infty M) = S_0 \times [0,\infty)$. Next, by the convexity hypothesis of $H_\infty$ in $\partial_\infty \Mliou$, there exists a contact vector field $\eta$ of the contact structure $(\partial_\infty M,\xi_\infty)$ on a neighborhood of $H_\infty$ in $\partial_\infty M$ that is transverse to $H_\infty$. This naturally equips $H_\infty$ with the reduced symplectic form and so the symplectic orientation thereon.
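To fix ideas, the following elementary computation spells out this decomposition in the model $M = T^*[0,1]$ of the example above. (This is included only as an illustration; the choice $S_0 = \{p = 1\}$ is ours and plays no role in the sequel.) On the region $\{p > 0\}$ we may take $$ S_0 = \{p = 1\}, \qquad s = \log p, \qquad \theta := \iota_{S_0}^*\lambda = -dq, $$ so that on $S_0 \times [0,\infty) \cong \{p \geq 1\}$ we recover $$ e^s \pi^*\theta = -p\, dq = \lambda, \qquad Z = p\frac{\partial}{\partial p} = \frac{\partial}{\partial s}, $$ as required. The end $\{p < 0\}$ is treated in the same way with $s = \log |p|$ and $\theta = dq$.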
On the other hand, a choice of contact vector field $\eta$ transverse to $H_\infty$ in $\partial_\infty M$ also gives rise to an isomorphism $$ \left( T (\partial_\infty \Mliou)/\xi\right)|_{H_\infty} \cong \text{\rm span}_{\mathcal R}\{\eta\}. $$ In particular we have an exact sequence \eqn\label{eq:convex-orientation} 0 \to TH_\infty \to T (\partial_\infty \Mliou)|_{H_\infty} \to \text{\rm span}_{\mathcal R}\{\eta\} \to 0. \eqnd \begin{choice}[Choice of contact vector field $\eta$]\label{choice:eta} Equip the asymptotic boundary $\partial_\infty M$ of the symplectic manifold $(M,d\lambda)$ with the (asymptotic) boundary orientation, i.e., the one oriented by the Liouville vector field $Z$ on $M$ which is outward pointing along $\partial_\infty M$. We also equip $H_\infty$ with the symplectic orientation. Then we make the choice of the contact vector field $\eta$ so that the sequence \eqref{eq:convex-orientation} becomes an exact sequence of oriented vector spaces. \end{choice} \begin{example} \begin{enumerate} \item The choice of contact vector field $\eta$ in Choice \ref{choice:eta} is consistent with the convention of writing the symplectization as $S_0 \times [0,\infty)$, i.e., writing the ${\mathcal R}$-factor after the $S_0$-factor. On the symplectization of a general contact manifold $(C,\theta)$, we have the splitting $$ T(SC) = \xi_C \oplus {\mathbb R}\langle R_\theta \rangle \oplus {\mathbb R} \langle Z \rangle. $$ Then $\eta = \pm R_\theta$ so that $\{Z, \eta\}$ forms a positively oriented basis for the complex orientation of ${\mathbb C} \cong {\mathbb R}\langle R_\theta \rangle \oplus {\mathbb R} \langle Z \rangle$. \item For $H = \partial M$, the choice is also compatible with the splitting data written in \cite{gps} $$ \nbhd(\partial M) = F \times {\mathbb C}_{\text{\rm Re}\geq 0} $$ as oriented manifolds. In this case, the orientation given by $\{Z,\eta\}$ above corresponds to the orientation $$ \left\{\frac{\partial}{\partial p}, - \frac{\partial}{\partial q}\right\} \cong \left\{\frac{\partial}{\partial q}, \frac{\partial}{\partial p}\right\} $$ where the contact vector field $\eta= - \frac{\partial}{\partial q} = X_p$ on $s^{-1}(N)$ points \emph{outward} from ${\mathbb C}_{\text{\rm Re}\geq 0}$, when we take the symplectization radial function $s = \log |p|$ at infinity of $T^*{\mathcal R} \cong {\mathbb C}$. \end{enumerate} \end{example} \emph{In the rest of this section and henceforth, we will always assume that $H \subset \Mliou$ is a $\sigma$-sectorial hypersurface (Definition~\ref{defn:liouville-sector-intro}), unless otherwise stated.} \subsection{The leaf space is a topological manifold} Let $H \subset M$ be a $\sigma$-sectorial hypersurface of a Liouville $\sigma$-sector $(M,\lambda)$. Equip the leaf space $\cN_{H}$ with the quotient topology induced by the projection $\pi = \pi_H: H \to \cN_{H}$. Before providing a smooth atlas on $\cN_{H}$, our first order of business is to prove: \begin{prop}\label{prop. leaf space is topological manifold} The leaf space $\cN_{H}$ is a topological manifold. (In particular, $\cN_{H}$ is second countable and Hausdorff.) \end{prop} The proof of Proposition~\ref{prop. leaf space is topological manifold} occupies the rest of this subsection. We consider the given continuous section $\sigma_{\text{\rm ref}}: \cN_{H} \to H$ guaranteed by Definition~\ref{defn:liouville-sector-intro}. We write \eqn\label{eq:F-ref} H_{\text{\rm ref}}: = \operatorname{Image} \sigma_{\text{\rm ref}} \subset H.
\eqnd We choose a contact-type hypersurface $Y \subset M$ far out near infinity so that $Y$ is transversal to the characteristic foliation of $H$. (Recall that the Liouville vector field $Z$ is tangent to $H$ near infinity by definition of $\sigma$-sectorial hypersurface.) This induces natural inclusion maps $\iota^\pm_\infty: H_\infty^\pm \to H$ and the composition map $\pi_{\infty}^+: H_\infty^+ \to \cN_{H}$ given by $\pi_{\infty}^+ = \pi \circ \iota_\infty^+$. We denote the respective neighborhoods by $$ \nbhd^\pm(\partial_\infty M \cap H) : = \nbhd(H_\infty^\pm). $$ \begin{lemma} There exists a pair of smooth functions \eqn\label{eq:h} h_\pm: \nbhd^\pm(\partial_\infty \Mliou \cap H) \to {\mathcal R} \eqnd such that $Z[h_\pm] = h_\pm$ and $h_\pm$ are submersions along the characteristic leaves of $H$. \end{lemma} \begin{proof} By the defining data of Liouville $\sigma$-sectors, $Z$ is tangent to $H$ at infinity and $H_\infty = \partial_\infty M \cap H$ is convex in $\partial_\infty M$. Therefore we can choose a contact-type hypersurface $S_0$ far out close to $\partial_\infty M$ so that $S_0 \pitchfork H$. We take the symplectization end $S_0 \times {\mathbb R}$ of $M$, and its associated radial function $s$ satisfying $s^{-1}(0) = S_0$. Denote by $\pi: S_0 \times {\mathbb R} \to S_0$ the projection. We identify $H_\infty$ with $S_0 \cap H$ via the Liouville flow. Then $S_0 \cap H$ is a convex hypersurface in the contact manifold $S_0$ and hence we have a contact vector field $\eta$ on $S_0$ such that $\lambda(\eta) \neq 0$ on $S_0 \cap H$. The function $h = - \lambda(\eta)$ is the contact Hamiltonian of $\eta$ on a neighborhood of $S_0 \cap H$ in $S_0$. (Recall the sign convention from \cite{oh:contacton-Legendrian-bdy} adopted in the present paper.) Through the aforementioned identification of $H_\infty$ and $S_0 \cap H$, we regard $\pi^*h$ as a function defined on a neighborhood of $H_\infty$ in $\partial_\infty M$. We take the associated homogeneous Hamiltonian function on the symplectization, denoted by $\widetilde h := e^s \pi^* h$ (which is denoted by $I$ in the proof of \cite[Lemma 2.5]{gps}), in a neighborhood of $H_\infty$ in $M$. Then we have $$ d\widetilde h(X) = d\lambda(X_{\widetilde h}, X) \neq 0 $$ for any nonzero vector $X \in \ker d\lambda_H$ with $\lambda_H = \iota_H^*\lambda$: We derive the nonvanishing by combining the following: \begin{itemize} \item $\lambda(\eta) \neq 0$ along $S_0 \cap H$, \item $d\lambda(X, TH) = 0$ by definition of $\ker d\lambda_H$, \item $\eta \not\in TH$ and so $TM = TH + \operatorname{span}\{\eta\}$, and \item nondegeneracy of $d\lambda$. \end{itemize} In particular $\widetilde h$ is a submersion along the characteristic leaves. Finally we compute $$ Z[\widetilde h] = Z[e^s \pi^*h] = e^s\left(Z[s]\,\pi^*h + Z[\pi^*h]\right) = e^s \pi^*h = \widetilde h $$ where we use $Z[s] =1$ and $Z[\pi^*h] = 0$. By re-denoting $\widetilde h$ by $h_\pm$ on neighborhoods of $H_\infty^\pm$ respectively, we have finished the proof. \end{proof} In particular each level set of $h_+$ is a smooth submanifold that is transverse to $H$. We take a symplectization radial function $s$ so that $$ \nbhd^+(\partial_\infty M) = s^{-1}(0) \times {\mathcal R}_+ $$ and $Z = \frac{\partial}{\partial s}$ and the Liouville form $$ \lambda = e^s \pi^* \theta, \quad \theta := \iota_{S_0}^*\lambda $$ where $\pi: S_0 \times {\mathcal R} \to S_0$ with $S_0: = s^{-1}(0)$. Denote \eqn\label{eq:FN+} H_N^+: = s^{-1}(N) \cap H \eqnd for a sufficiently large $N > 0$.
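As an illustration of the lemma just proved (with our own normalization, which is not needed later), consider again the model $M = T^*[0,1]$ with $\lambda = -p\,dq$, $Z = p\frac{\partial}{\partial p}$ and $H = \partial M$. Near the ends $p = \pm\infty$ one may simply take $$ h_\pm := \pm p, \qquad Z[h_\pm] = p\frac{\partial}{\partial p}[\pm p] = \pm p = h_\pm, \qquad dh_\pm\left(\frac{\partial}{\partial p}\right) = \pm 1 \neq 0, $$ so that $h_\pm$ are indeed submersions along the leaves $\{q\} \times {\mathcal R}_p$ of $\partial M$; note that $s = \log h_+$ then realizes the normalization $\alpha = 1$ appearing below.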
We similarly define $H_N^-$ by considering $N \ll 0$. We also have the continuous maps \eqn\label{eqn. piN plus} \pi_N^\pm: H_N^\pm \to \cN_{H} \eqnd given as the restrictions of $\pi$. We orient $\cD$ so that it points upward in terms of $h_+$ on $H_N^+$, i.e., $$ dh_+(\cD) = d\lambda(X_{h_+},\cD) > 0. $$ Then both $Z$ and $\cD$ point upward: By suitably adjusting the contact-type hypersurface $h_+^{-1}(N)$ near $H$ we may choose the symplectization end radial function $s$ so that $s = \frac1\alpha \log h_+ \pm C$ (for any given $0 < \alpha \leq 1$) on $\nbhd^+(\partial_\infty M \cap H)$, and then clearly we have $Z[h_+] > 0$. The canonical $Z$-flow projection provides a natural diffeomorphism between the ideal boundary $H_\infty^\pm$ and $H_N^\pm$. \begin{prop}\label{prop. piN homeo} The map $\pi_N^+$ in \eqref{eqn. piN plus} is a homeomorphism. \end{prop} \begin{proof} The domain of $\pi_N^+$ is compact, while the codomain is Hausdorff because of the existence of the continuous section $\sigma_{\text{\rm ref}}$. Thus it suffices to show that $\pi_N^+$ is a bijection. Recall that the convexity hypothesis on $H_\infty$ in $\partial_\infty M$ and the choice of orientation on $\cD$ above imply $dh_+(\cD) > 0$ on $H_N^+$ for sufficiently large $N > 0$. Therefore, since each leaf is connected, $y_1, \, y_2 \in H_N^+$ cannot be contained in the same leaf of $H$ if $y_1 \neq y_2$. This shows that $\pi_N^+$ is a one-one map. (In fact, consider the restriction of $\pi$ to $h_+^{-1}([N,\infty))\cap H$. Then we see that $\pi$ is nothing but the assignment $y \mapsto \ell_y \in \cN_{H}$ where $\ell_y$ is the leaf of $H$ through $y \in H_N^+$.) We next show that the map $\pi_N^+$ is also surjective. \emph{This is the key step in the proof of this proposition. To prove surjectivity, we need to rule out the possibility of the following phenomena: \begin{itemize} \item appearance of closed leaves, \item appearance of limit cycles or more generally of \emph{non-proper} leaves. \end{itemize} } Using these preparations, we prove the following. \begin{lemma} There exists a smooth vector field $\widetilde Z$ on $\nbhd^\pm(\partial_\infty M) \cap H$ such that \eqn\label{eq:Zpm-complete} dh_\pm (\widetilde Z) = 1. \eqnd \end{lemma} \begin{proof} Without loss of generality, we set $\alpha = 1$ for the simplicity of exposition. Existence of such a vector field is obvious leafwise. We have only to ensure smoothness of such a choice. For this purpose, we focus on $\nbhd^+(\partial_\infty M)$ since the other case is similar. Since the line bundle $\cD$ is trivial with orientation, we can choose a \emph{unique} section $\widetilde Z$ thereof on $\nbhd^+(\partial_\infty M) \cap H$ so that $$ dh_+(\widetilde Z) = 1, $$ and this section is smooth. By construction, $\widetilde Z$ is also tangent to the leaves of $H$ on $\nbhd^+(\partial_\infty M) \cap H$. \end{proof} In particular, the vector field $\widetilde Z$ is forward complete on $\nbhd^+(\partial_\infty M) \cap H$ (resp. backward complete on $\nbhd^-(\partial_\infty M) \cap H$). The flow map of $\widetilde Z$ gives rise to a diffeomorphism which we denote by \eqn\label{eq:phiN+} \phi_N^\pm: H_N^\pm \to H_\infty^\pm. \eqnd It also induces an identification \begin{eqnarray}\label{eq:splitting-at-infty} \nbhd^+(\partial_\infty \Mliou) \cap H & \cong & H_N^+ \times (N, \infty),\ \nonumber\\ \nbhd^-(\partial_\infty \Mliou) \cap H & \cong & H_N^- \times (-\infty, -N) \end{eqnarray} of a neighborhood $\nbhd^+(\partial_\infty \Mliou) \cap H$ of $H_\infty$ for some $N$.
(Note that these are codimension 0 open subsets of $H$.) \begin{remark} One of the consequences of the convexity of $H_\infty^\pm$ in $\partial_\infty M$ is the presence of the functions $h_\pm$ which give rise to this taming of the behavior of the characteristic foliation of $H$ in a neighborhood $\nbhd^+(\partial_\infty \Mliou) \cap H$ via the gradient-like complete vector field $\widetilde Z$ of the function $h_\pm$. \end{remark} By choosing $N$ sufficiently large, we can make $$ H_{\text{\rm ref}} \subset H \setminus \left(H_N^+ \times [N-\delta, \infty) \cup H_N^- \times (-\infty, -N+\delta ]\right) =: H_{\text{\rm mid}} $$ for a sufficiently small $\delta > 0$. Then the three open subsets form a covering of $H$ where $H_{\text{\rm mid}}$ has compact closure. Next we would like to extend the vector field $\widetilde Z$ to a vector field $Z'$ on all of $H$ in such a way that \begin{itemize} \item $Z'$ is nowhere vanishing, \item $Z'$ is tangent to the foliation $\cD$, and \item $Z'$ is compatible with the orientation~\eqref{eq:D-orientation} of the leaves. \end{itemize} For this purpose, we consider the pair $H_{\text{\rm ref}} \subset H_{\text{\rm mid}}$. We also have the natural projection $H_{\text{\rm mid}} \to H_{\text{\rm ref}}$ which forms a trivial fibration each of whose fibers is homeomorphic to a closed interval equipped with an orientation compatible with the one induced from $\widetilde Z$ on $\nbhd^+(\partial_\infty M) \cap H$. By taking a Riemannian metric $g$ on $H$ so that $g(\widetilde Z, \widetilde Z) = 1$ on $$ \nbhd^+(\partial_\infty M) \cap H \setminus (\phi_N^+)^{-1}(H_N^+ \times [N, \infty)), $$ we can uniquely extend $\widetilde Z$ to a vector field $Z'$ so that $Z'$ is tangent to $\cD$ and $g(Z', Z') = 1$ on $H_{\text{\rm mid}}$. \begin{defn}[Leaf-generating vector field $Z'$ of $\cD_H$]\label{defn:Z'} We call the above constructed vector field $Z'$ on $H$ a \emph{leaf-generating vector field of $\cD_H$}. \end{defn} Since $Z' = \widetilde Z$ on $$ \nbhd^+(\partial_\infty \Mliou) \cup \nbhd^-(\partial_\infty \Mliou), $$ where $\widetilde Z$ is forward or backward complete respectively, and since $H_{\text{\rm mid}}$ has compact closure, the vector field $Z'$ is complete and so defines a global flow on $H$. Now the proof of surjectivity of $\pi_N^+$ relies on the following lemma: \begin{lemma}\label{lemma. M forward backward} Any trajectory of $Z'$ eventually exits from $H_{\text{\rm mid}}$ both forward and backward. Moreover every leaf is a flow orbit of $Z'$ and vice versa. \end{lemma} \begin{proof} It is a standard fact that each leaf is second countable because the manifold $\Mliou$ is assumed to be second countable. (This rules out the possibility that a leaf is a `long line' \cite[pp. 71-72]{steen-seebach}.) Note that since $Z'$ is regular, each leaf of the characteristic foliation of $H$ is a flow line of the regular vector field $Z'$. (See \cite[Section 2.1]{candel-conlon}.) Furthermore no leaf can be a point. \emph{By the condition stated in Definition \ref{defn:liouville-sector-intro} (d), $Z'$ cannot have a nontrivial periodic orbit either}. Therefore each flow map from ${\mathcal R}$ to $H$ is one-one and so there is a uniquely defined $T \in {\mathcal R}$ such that $\phi_{Z'}^T(\sigma_{\text{\rm ref}}(\pi(x))) = x$ for each $x \in H$.
Combining this discussion with the aforementioned completeness, we can define a flow map $$ \Psi_{\text{\rm ref}}: H \to H_{\text{\rm ref}} \times {\mathcal R}; \quad \Psi_{\text{\rm ref}}(x) = (\sigma_{\text{\rm ref}}(\pi(x)), T(x)) $$ where $T(x)$ is the time for $\sigma_{\text{\rm ref}}(\pi(x))$ to reach the point $x$ along the flow of $Z'$. We define a continuous function $T: H \to {\mathcal R}$ by $$ T(x): = \text{``the reaching time of the flow of $Z'$ issued at $\sigma_{\text{\rm ref}}(\pi(x))$''}. $$ By definition, $H$ is an increasing union $$ H = \bigcup_{N \in {\mathbb N}} T^{-1}(-N,N) $$ of open subsets $T^{-1}(-N,N)=: U_N$. Since $H_{\text{\rm mid}}$ has compact closure, there is some $N_0 \in {\mathbb N}$ such that $H_{\text{\rm mid}} \subset U_{N_0}$. Therefore any point $y \in H_{\text{\rm ref}}$ has its forward (resp., backward) flow point $x_+ \in \nbhd^+(H_\infty)$ (resp., $x_- \in \nbhd^-(H_\infty)$). Once the flow reaches there, it just follows the flow of $Z_0^\pm$ forward and backward respectively, which escapes to infinity. This finishes the proof of Lemma~\ref{lemma. M forward backward}. \end{proof} We now wrap up the proof of Proposition~\ref{prop. piN homeo}. Lemma~\ref{lemma. M forward backward} implies that any trajectory of $Z'$ is an extension of some trajectories of the vector fields $Z_0^\pm$, one from each of $\nbhd^{\pm}(\partial_\infty \Mliou)$, where we write $$ Z_0^\pm = \widetilde Z|_{ \nbhd^\pm(H_\infty)}. $$ On the other hand, if $\phi_{Z'}^t(x) \in \nbhd^{\pm}(\partial_\infty \Mliou)$, we have $dh_\pm(Z') = dh_\pm(Z_0^\pm) = 1$. In particular we have $Z'[T] = 1$ and $$ T(x) = h_\pm (x) + C'_\pm, $$ on each of $\nbhd^{\pm}(\partial_\infty \Mliou)$ respectively for some constant $C'_\pm = C'_\pm(\cD_x)$ depending only on the leaf $\cD_x$ containing $x$, and hence $T(x) \to \pm \infty$ as $x \to \partial_\infty \Mliou \cap H$ along the leaf through $x$. This proves that the function $x \mapsto T(x)$ restricts to a homeomorphism from each leaf to ${\mathcal R}$. This in particular implies that any leaf of $H$ is an extension of the asymptotic rays of the vector field $Z'$ issued from $H_N^+$ and so the map $\pi_N^+$ is also surjective. \end{proof} \begin{lemma}\label{lemma. Fref is leaf space} $\cN_{H}$ equipped with the quotient topology is homeomorphic to $H_{\text{\rm ref}}$. \end{lemma} \begin{proof} As we mentioned above, any leaf of $H$ is an extension of the asymptotic rays issued from $H_N^+$. Therefore by Proposition~\ref{prop. piN homeo} we have a homeomorphism \eqn\label{eq:Psi-ref} \Psi_{\text{\rm ref}}: H \to H_{\text{\rm ref}} \times {\mathcal R}, \qquad \Psi_{\text{\rm ref}}(x) = (\sigma_{\text{\rm ref}}(\pi(x)), T(x)) \eqnd for a continuous map $T: H \to {\mathcal R}$. In particular $\cN_{H}$ equipped with the quotient topology is homeomorphic to $H_{\text{\rm ref}}$. \end{proof} \begin{prop}\label{prop. Fref is a manifold} $H_{\text{\rm ref}}$ with the subspace topology of $H$ is Hausdorff and locally Euclidean (and in particular, locally compact). \end{prop} \begin{proof} Since the function $T: H \to {\mathcal R}$ is continuous and $H_{\text{\rm ref}} = T^{-1}(0)$, $H_{\text{\rm ref}}$ is a closed subset of the smooth manifold $H$. In particular $H_{\text{\rm ref}}$ with the subspace topology of $H$ is Hausdorff. Furthermore, since $Z'[T] > 0$, $T$ is strictly increasing along each flow line of $Z'$. To see the locally Euclidean property of $H_{\text{\rm ref}}$, let $x_0 \in H_{\text{\rm ref}}$ be any given point.
We have only to note that \eqref{eq:Psi-ref} induces a homeomorphism $$ U/\sim \to H_{\text{\rm ref}}\cap U $$ for a sufficiently small foliation chart $U$ containing $x_0$ where $\sim$ is the orbit equivalence with respect to $Z'$. Since $U/\sim$ is homeomorphic to ${\mathbb R}^{2n-2}$, so is $H_{\text{\rm ref}}\cap U$. This proves that $H_{\text{\rm ref}}\cap U$ is locally Euclidean. \end{proof} \begin{proof}[Wrap up of the proof of Proposition~\ref{prop. leaf space is topological manifold}] We have only to combine Proposition~\ref{prop. Fref is a manifold} and Lemma~\ref{lemma. Fref is leaf space}. \end{proof} The following corollary of the above proof will be useful for the study of smooth and symplectic structures of the leaf space. \begin{cor}[Section $\sigma_N^+$]\label{cor:choice-sigmaN+} The definition \eqref{eq:FN+} gives rise to another continuous section $\sigma_N^+$ defined by \eqn\label{eq:sigmaN+} \sigma_N^+(\ell) = \ell \cap H_N^+ \subset \nbhd(\partial_\infty M \cap H) \eqnd for $\ell \in \cN_H$. \end{cor} \subsection{Smooth structure on the leaf space} \label{subsec:smooth-structure} The goal of this section is to prove the first item of Theorem~\ref{thm:equivalence-intro}. We start with the following proposition. \begin{prop}\label{prop:leaf-space-structure} The leaf space $\cN_{H}$ carries a canonical smooth manifold structure such that \begin{enumerate} \item $\pi: H \to \cN_H$ is a smooth submersion, and \item there is a smooth diffeomorphism $\Psi: H \to \cN_H \times {\mathbb R}$ which makes the following diagram commute \eqn\label{eq:Psi-diagram} \xymatrix{ H \ar[dr]_{\pi_H} \ar[rr]^{\Psi} && \ar[dl]^{\pi_1}\cN_H \times {\mathbb R} \\ & \cN_H } \eqnd \end{enumerate} \end{prop} Actually, when the leaf space is Hausdorff and locally Euclidean, the well-known construction of coisotropic reduction (or symplectic reduction) applies to prove the existence of a smooth structure and of a symplectic structure on the leaf space. (See \cite{abraham-marsden} for example.) Since we also need to construct the map $\Psi$ and will use the details of the proof later, we provide full details of the existence proofs of both structures below, for the readers' convenience. We follow the standard notation of~\cite{candel-conlon} in our discussion of foliations. It follows from a well-known result in foliation theory that the foliation $\cF$ is determined by its holonomy cocycle $\gamma = \{\gamma_{\alpha\beta}\}_{\alpha, \beta \in \mathfrak U}$ with $$ \gamma_{\alpha\beta}: y_\beta(U_\alpha \cap U_\beta) \to y_\alpha(U_\alpha \cap U_\beta) $$ arising from the transverse coordinate maps $y_\alpha: U_\alpha \to {\mathcal F}^{2n-2} = {\mathcal R}^{2n-2}$ or ${\mathcal H}^{2n-2}$. Each $y_\alpha$ is a submersion and $\gamma_{\alpha\beta}$ is given by $y_\alpha = \gamma_{\alpha\beta}(y_\beta)$ in coordinates. (See e.g., \cite[Definition 1.2.12]{candel-conlon}.) Furthermore for the null foliation $\cF$ of the coisotropic submanifold $H$, we can choose a foliated atlas $\mathcal U = \{(U_\alpha,\varphi_\alpha)\}_{\alpha \in \mathfrak U}$ so that the associated cocycle elements $\gamma_{\alpha\beta}$ become symplectic, i.e., the foliation $\cF$ carries a transverse symplectic structure. We refer readers to the proof of Proposition \ref{prop:leaf-space-structure} below for the details.
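As a sanity check of the preceding discussion (an illustration only, not used in the proof below), suppose $H$ already has the product form $H \cong F \times {\mathcal R}$ with leaves $\{x\} \times {\mathcal R}$ and $\iota_H^*\omega = \pi_F^*\omega_F$. Then any atlas $\{(V_\alpha, \psi_\alpha)\}$ of $F$ gives rise to foliated charts $$ U_\alpha = V_\alpha \times {\mathcal R}, \qquad \varphi_\alpha = (t, \psi_\alpha \circ \pi_F), \qquad y_\alpha = \psi_\alpha \circ \pi_F, $$ whose cocycle $\gamma_{\alpha\beta} = \psi_\alpha \circ \psi_\beta^{-1}$ is nothing but the transition map of the atlas of $F$; it is symplectic as soon as the charts $\psi_\alpha$ are Darboux charts of $(F, \omega_F)$.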
\begin{remark} When $H$ has corners, the foliated chart $B = B_\tau \times B_{\pitchfork}$ means that the \emph{tangential factor} $B_\tau$ of the foliated chart has no boundary but the \emph{transverse factor} $B_\pitchfork$ has a boundary. (See e.g., \cite[Definition 1.1.18]{candel-conlon} for the definition.) \end{remark} \begin{proof}[Proof of Proposition \ref{prop:leaf-space-structure}] We will show that the above holonomy cocycle naturally descends to a smooth atlas on $\cN_{H}$ \emph{under the defining conditions of a $\sigma$-sectorial hypersurface above, especially in the presence of a continuous section of the projection $\pi_H:H \to \cN_{H}$.} For this purpose, we consider a coherent regular foliated atlas $\{\varphi_{\alpha}: U_\alpha \to {\mathcal R}^{2n-1}\}$, and its associated foliation cocycle $\gamma =\{\gamma_{\alpha\beta}\}$ (see e.g., \cite[Section 1.2.A]{candel-conlon}). Let $\sigma_N^+: \cN_H \to H$ be the section given by \eqref{eq:sigmaN+}. We take the sub-collection $\{U_{\alpha'}\}$ of those charts with $U_{\alpha'} \cap H_N^+ \neq \emptyset$, which covers $H_N^+$. Furthermore, since $H_N^+$ is a level set of the function $h_+$ in \eqref{eq:h}, we may assume that each plaque in a small $\nbhd(H_N^+)$ is a gradient trajectory of $h_+$: By considering a refinement $\{U_{\alpha'}\}$ of the given covering, we can choose a collection of foliated charts $\varphi_{\alpha'}: U_{\alpha'} \to {\mathcal R} \times {\mathcal R}^{2n-2}$ of the form $$ \varphi_{\alpha'} = (h, y_1, \cdots, y_{2n-2}) $$ whose transversal coordinates $(y_1,\cdots, y_{2n-2})$ satisfy $$ dy_i(Z) = 0. $$ We take a maximal such collection of charts covering $H_N^+$, which we denote by \eqn\label{eq:cO'} \cO' = \{(\varphi_{\alpha'},U_{\alpha'})\}. \eqnd By the definition of transverse coordinates $(y_1,\cdots, y_{2n-2})$ of the foliated chart, it follows that the collection thereof defines a {\em smooth} atlas of $\cN_{H}$. We denote the resulting atlas of $\cN_{H}$ by $$ [\cO']:= \{[\varphi_{\alpha'}]: [U_{\alpha'}]\ \to {\mathcal R}^{2n-2}\}. $$ \begin{lemma}\label{lemma. pi is submersion} The projection map $\pi: H \to \cN_{H}$ is a smooth submersion. \end{lemma} \begin{proof} To show smoothness of $\pi$, we will show that for any smooth function $f: \cN_{H} \to {\mathcal R}$ the composition $f\circ \pi$ is smooth. For this purpose, at any point $x$, we consider the foliated chart $\varphi_\alpha: U_\alpha \to {\mathcal R}^{2n-1}$ of the form \eqn\label{eq:yi'H} (h', y_1, \ldots, y_{2n-2}) \eqnd whose transversal coordinates $(y_1, \ldots, y_{2n-2})$ satisfy \eqn\label{eq:yi'H-chart} dh'(Z') \equiv 1, \quad dy_i(Z') = 0, \quad i=1 \ldots, 2n-2. \eqnd Let $f: \cN_{H} \to {\mathcal R}$ be any smooth function on $\cN_{H}$. With respect to the aforementioned foliated atlas of $H$, we will show that $f \circ \pi$ is smooth at every point $x \in H$. \emph{If $x$ is contained in $U_{\alpha'}$,} we have $$ (f\circ \pi) \circ (\varphi_{\alpha'})^{-1}(h', y_1, \cdots, y_{2n-2}) = f \circ [\varphi_{\alpha'}]^{-1}(y_1,\cdots, y_{2n-2}). $$ The right-hand side is smooth in the variables $y_1, \cdots, y_{2n-2}$ by the hypothesis on $f$, and does not depend on the $h'$-variable. This in particular implies that the left-hand map $(f\circ \pi) \circ (\varphi_{\alpha'})^{-1}$ is smooth at $x$. \emph{Otherwise,} let $(\varphi_\beta, U_\beta)$ be a foliated chart at $x$.
We take a flow map $\phi_{Z'}^T$ satisfying $y: = \phi_{Z'}^T(x) \in U_{\beta'}$ for the chart $(\varphi_{\beta'}, U_{\beta'})$ at $y$ given by $$ \left(U_{\beta'} = \phi_{Z'}^T(U_{\beta}), \quad \varphi_{\beta'} = \varphi_\beta \circ (\phi_{Z'}^T)^{-1}\right) $$ which is contained in $\cO'$ by the maximality of the collection $\cO'$. Therefore the map $(f\circ \pi) \circ \varphi_{\beta'}^{-1}$ is smooth at $y = \phi_{Z'}^T(x) \in U_{\beta'}$. We note $$ f\circ \pi = \left((f\circ \pi) \circ \varphi_{\beta'}^{-1}\right) \circ \left(\varphi_{\beta'} \circ \phi_{Z'}^T|_{U_\beta}\right) $$ which is a composition of two smooth maps and so smooth at $x$. This implies that $f\circ \pi$ is smooth at $x$ in this case as well. This finishes the proof of smoothness of $f \circ \pi$ for every smooth function $f: \cN_{H} \to {\mathcal R}$, and hence proves that $\pi$ is smooth. That $\pi$ is a submersion is obvious from the above construction. \end{proof} Now a closer examination of the above proof also shows the following, whose proof we leave to the readers. \begin{cor} Let $\cN_{H}$ be equipped with this smooth structure. \begin{enumerate} \item The smooth structure on $\cN_H$ constructed above does not depend on the choice of the section \eqref{eq:sigmaN+}, rewritten as \eqn\label{eq:sN+} \sigma_N^+(\ell) : = \ell \cap h^{-1}(N). \eqnd \item The map $\pi_N^+$ in \eqref{eqn. piN plus} is a smooth submersion with respect to the smooth structure just constructed on $\cN_{H}$. \item The map $\sigma_N^+$ is a smooth section of the submersion $\pi_N^+$. \end{enumerate} \end{cor} \subsection{Symplectic structure on the leaf space} \label{subsec:symplectic-structure} Now we turn to the construction of the symplectic structure. We fix the section $\sigma_N^+: \cN_H \to H$ constructed before. We can choose the coherent atlas used above so that the associated cocycle $\gamma_{\alpha\beta}$ becomes symplectic, by requiring the charts $(U_\alpha, \varphi_\alpha)$ also to satisfy the defining equation \eqn\label{eq:local-coisotropic-reduction} y_\alpha^*\omega_0 = \iota_{H}^*\omega, \quad \omega = d\lambda \eqnd of the general coisotropic reduction (see \cite[Theorem 5.3.23]{abraham-marsden} for example) where $\iota_{H}: H \to M$ is the inclusion map and $\omega_0$ is the standard symplectic form on ${\mathcal R}^{2n-2}$. (See also \cite{gotay}, \cite{oh-park}.) By using such foliated charts satisfying \eqref{eq:local-coisotropic-reduction}, the associated holonomy cocycles define a symplectic atlas and so a symplectic structure on $\cN_{H}$, \emph{when the holonomy is trivial, as in our case where we assume the presence of a smooth section.} This then finishes the construction of the reduced symplectic structure on $\cN_H$. (We refer to \cite[Section 5]{oh-park} for a detailed discussion on the construction of the transverse symplectic structure for the null foliation of general coisotropic submanifolds.) Now it remains to construct a diffeomorphism $\Psi^\sigma: H \to \cN_H \times {\mathbb R}$ which makes the diagram \eqref{eq:Psi-diagram} commute.
For this, it follows from the above discussions that we can define another map \eqn\label{eq:Psi} \Psi: H \to \cN_{H} \times {\mathcal R}; \quad \Psi(x) = (\pi(x), t(x)) \eqnd by replacing the original defining section $\sigma_{\text{\rm ref}}$ by the new \emph{smooth} section $\sigma_N^+$ defined by \eqref{eq:sN+}: Here we put \eqn\label{eq:function-t} t(x): = \text{\rm the unique time determined by $\phi_{Z'}^{t(x)}(\sigma_N^+(\pi(x))) = x$ } \eqnd which is clearly a smooth function. Furthermore, we have shown $\pi_{\text{\rm ref}} = \pi_N^+$, which also shows that $\pi_{\text{\rm ref}}$ is smooth with respect to the smooth structure just given to $\cN_H$. Its inverse map $\Phi: \cN_{H} \times {\mathcal R} \to H$ is given by $$ \Phi(\ell,t) = \phi_{Z'}^t(\sigma_N^+(\ell)) $$ which is obviously smooth. This finishes the proof of Proposition~\ref{prop:leaf-space-structure}. \end{proof} An immediate corollary of the above proof is the following construction of the defining function $I: \partial M \to {\mathcal R}$ appearing in the definition of Liouville sectors in \cite{gps} for the case $H = \partial M$. \begin{cor} Let $t$ be the function used in the definition of the map \eqref{eq:Psi}. For each $\alpha> 0$, we define the function $I: H \to {\mathcal R}$ by $$ I(x) = \pm e^{\alpha t(x)}. $$ Then we have $Z[I] = \alpha I$ on $\nbhd(\partial_\infty M) \cap H$. \end{cor} This proves Proposition \ref{prop:equivalence-intro}, i.e., that any Liouville $\sigma$-sector is a Liouville sector in the sense of \cite{gps}. \begin{remark} On the other hand, the converse is almost a tautological statement in that \cite[Lemma 2.5]{gps} shows that any of their three defining conditions given in \cite[Definition 2.4]{gps} is equivalent to the condition \begin{itemize} \item There exists a diffeomorphism $\Psi: H \to F \times {\mathbb R}$ making \eqref{eq:Psi-diagram} commute. \end{itemize} Once this is at our disposal, $\Psi$ induces a diffeomorphism $ [\Psi]: \cN_H \to F. $ Therefore we can choose a continuous section $ \sigma_{\text{\rm ref}}: \cN_H \to H $ required for the definition of $\sigma$-sectorial hypersurface to be $$ \sigma_{\text{\rm ref}}(\ell): = [\Psi]^{-1}(\ell), \quad \ell \in \cN_H. $$ \end{remark} Now we wrap up the proof of Theorem~\ref{thm:equivalence-intro} as the special case $H = \partial M$ of the following theorem. We will postpone the proof of Statement (3) till the next subsection. \begin{theorem}\label{thm:equivalence} Under the above definition of $\sigma$-sectorial hypersurface $H \subset M$, the following hold: \begin{enumerate}[(1)] \item\label{item.equivalence cL manifold} $\cN_{H}$ carries the structure of a Hausdorff smooth manifold with corners such that $\pi$ is a smooth submersion. \item\label{item. equivalence symplectic structure} $\cN_{H}$ carries a canonical symplectic structure denoted by $\omega_{\cN_{H}}$ as a coisotropic reduction of $H \subset \Mliou$. \item\label{item.liouville form} $(\cN_{H},\omega_{\cN_{H}})$ carries a canonical Liouville one-form $\lambda_{\cN_{H}}$ induced from the convexity hypothesis of $\partial_\infty M \cap H \subset \partial_\infty M$.
\item\label{item.equivalence commutative diagram} We have a commutative diagram \eqn\label{eq:diagram} \xymatrix{H \ar[d]^\pi \ar[r]^{\Psi} & F \times {\mathcal R} \ar [d]^{\pi_F}\\ \cN_{H} \ar[r]^{\psi} & F } \eqnd where \begin{itemize} \item $\pi$ is a smooth map which admits a smooth section $\sigma: \cN_{H} \to H$ for which $\sigma$ satisfies $\sigma^*\omega_H = \omega_{\cN_{H}}$, \item Setting $F: = \operatorname{Image} \sigma$, $\psi$ is a Liouville diffeomorphism between $(\cN_{H},\lambda_{\cN_{H}})$ and $(F, \lambda|_F)$ with the induced form, defined by $\psi(\ell) = \sigma(\ell)$ for $\ell \in \cN_{H}$, and \item $ \Psi: H \to F \times {\mathcal R} $ is a diffeomorphism. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} Proposition~\ref{prop:leaf-space-structure} and Lemma~\ref{lemma. pi is submersion} prove \eqref{item.equivalence cL manifold} of the theorem. Let $$ \iota_F= \iota_{H_N}: H_N \to H $$ be the inclusion map and consider the diagram $$ \xymatrix{H \ar[r]^{\iota_H} \ar[d]^{\pi_H} & \Mliou \\ \cN_{H} &}. $$ Then the reduced symplectic form $\omega_{\cN_{H}}$ is characterized by \eqn\label{eq:reduced-form} (\pi_H)^*\omega_{\cN_H} = \omega_H, \quad \omega_H: = \iota_H^*(d\lambda) \eqnd as an example of coisotropic reduction. This proves \eqref{item. equivalence symplectic structure}. Now we take $\sigma = \sigma_N^+$ and $F= H_N^+: = \operatorname{Image} \sigma_N^+$. By pulling back the two-form $\omega_H$ by the inclusion $F \hookrightarrow H$, we obtain a two-form $(\iota_{H_N^+})^*\omega$ on $H_N^+$ which is symplectic for any sufficiently large $N> 0$ by the convexity hypothesis of $H_\infty^+$ in $\partial_\infty \Mliou$. This finishes the proof of \eqref{item.equivalence commutative diagram}. \end{proof} It is useful for the later study of the intrinsic characterization of Liouville sectors \emph{with corners} to keep in mind the following corollary of the above proof. \begin{cor}\label{cor:cDH-trivial} The line bundle $\cD_H \to H$ of the characteristic distribution is trivial for any $\sigma$-sectorial hypersurface. \end{cor} Of course this is a tautological property under the original definition of Liouville sectors from \cite{gps}. \subsection{Induced Liouville structure on the leaf space} \label{subsec:liouville-structure-leaf-space} Finally we prove Statement (3) of Theorem \ref{thm:equivalence} by extracting some consequences of the above-constructed symplectic structure on $\cN_H$ derived from the given behavior of the characteristic foliation $\cD$ at infinity. Recall the flow map $\phi_N^+: H_N^+ \to H_\infty^+$ from \eqref{eq:phiN+}. By composing the two symplectic diffeomorphisms $\sigma_N^+$ and $\phi_N^+$ with the inclusion $H_\infty^+ \hookrightarrow \partial_\infty M$ in $$ \cN_H \stackrel{\sigma_N^+}{\longrightarrow} H_N^+ \stackrel{\phi_N^+}{\longrightarrow} H_\infty^+ \hookrightarrow \partial_\infty M $$ we obtain a smooth map $\sigma_\infty^+: \cN_H \to \partial_\infty M$ which is a diffeomorphism onto the convex hypersurface $(H_\infty^+,\omega^+)$ of the contact manifold $(\partial_\infty M, \xi)$. By the convexity hypothesis on $H_\infty$, we have a contact vector field $\nu$ on $\partial_\infty M$ that is transverse to $H_\infty^+$. \begin{lemma}\label{lem:cNH-exact} The symplectic manifold $(\cN_H,\omega_{\cN_H})$ is exact. \end{lemma} \begin{proof} Note that the symplectic form on $H_N^+$ is nothing but the exact symplectic form given by $d\lambda_N^+$ where $$ \lambda_N^+ = (\iota_{H_N^+})^*\lambda $$ for the inclusion map $\iota_{H_N^+}: H_N^+ \hookrightarrow H \hookrightarrow (M,\lambda)$.
Therefore it follows from \eqref{eq:reduced-form} and $\pi_{H_N^+}\circ \sigma_N^+ = \operatorname{id}_{\cN_H}$ that \begin{eqnarray*} \omega_{\cN_H} & = & (\pi_{H_N^+}\circ \sigma_N^+)^*\omega_{\cN_H} = (\sigma_N^+)^*(\pi_{H_N^+}^*\omega_{\cN_H})\\ & = & (\sigma_N^+)^*(\iota_{H_N^+}^*d\lambda) = (\sigma_N^+)^*d\lambda_N^+ = d((\sigma_N^+)^*\lambda_N^+) \end{eqnarray*} which proves exactness of $\omega_{\cN_H}$: Here the third equality follows from the defining equation \eqref{eq:reduced-form} and the equalities $$ \pi_{H_N^+} = \pi_H|_{H_N^+}, \quad \iota_{H_N^+} = \iota_H|_{H_N^+} $$ of restrictions to $H_N^+ \subset H$. \end{proof} We next prove that the one-form $(\sigma_N^+)^*\lambda_N^+$ on $\cN_H$ appearing in the above proof does not depend on $N$. \begin{lemma}\label{lem:independent-of-N} We have $$ (\sigma_N^+)^*\lambda_N^+ = (\sigma_{N'}^+)^*\lambda_{N'}^+ $$ on $\cN_H$ for all $N, \, N' \geq 0$. We denote this common one-form by $\lambda_{\cN_H}$. \end{lemma} \begin{proof} The flow map $\phi_{NN'}$ of the vector field $Z'$ between $H_N^+$ and $H_{N'}^+$ is the same as $$ (\phi_{N'}^+)^{-1}\circ \phi_N^+: H_N^+ \to H_{N'}^+. $$ Therefore it intertwines the two maps $\phi_N^+$ and $\phi_{N'}^+$ and satisfies $$ (\phi_{NN'})^*\lambda_{N'}^+ = \lambda_N^+ $$ for all large $N, \, N' \geq 0$. This finishes the proof. \end{proof} \begin{defn}[Reduced Liouville structure] We call the primitive $\lambda_{\cN_H}$ of $\omega_{\cN_H}$ defined as above the canonical Liouville structure on $(\cN_H,\omega_{\cN_H})$. \end{defn} \begin{remark}\label{rem:sigma-independence} We remark that while we have used the existence of the section $\sigma_{\text{\rm ref}}$ to equip $\cN_H$ with a smooth structure for which there exists a diffeomorphism $\Psi: H \to \cN_H \times {\mathbb R}$, the construction of the symplectic structure does not depend on the choice of the section, but uses only the Liouville geometry of the neighborhood $\nbhd(\partial_\infty M \cap H)$. \end{remark} \section{Geometry of clean coisotropic collections} \label{sec:sectors-with-corners} Recall that~\cite{gps-2} imposes the following restriction on the boundary strata when studying Liouville sectors: \newenvironment{sectorial-collection}{ \renewcommand*{\theenumi}{(S\arabic{enumi})} \renewcommand*{\labelenumi}{(S\arabic{enumi})} \enumerate }{ \endenumerate } \begin{defn}[Definition 9.2 \& Lemma 9.4 \& Definition 9.14 \cite{gps-2}]\label{defn:sectorial-collection} A \emph{sectorial collection} is a collection of $m$ hypersurfaces $H_1, \ldots, H_m \subset M$, cylindrical at infinity, such that: \begin{sectorial-collection} \item\label{item. clean intersection sectorial} The $H_i$ cleanly intersect, \item\label{item. coisotropic sectorial} All pairwise intersections $H_i \cap H_j$ are coisotropic, and \item\label{item. I} There exist functions $I_i: \nbhd(\partial M) \to {\mathcal R}$, linear near infinity, satisfying the following on the characteristic foliations $\cD_i$ of $H_i$: \eqn\label{eq:function-Ii} dI_i|_{\cD_i} \neq 0, \, dI_i|_{\cD_j} = 0 \quad \text{\rm for } i \neq j, \quad \{I_i, I_j\} = 0. \eqnd \end{sectorial-collection} A Liouville sector $(M,\lambda)$ with corners is a Liouville manifold-with-corners whose codimension one boundary strata form a sectorial collection. \end{defn} We will introduce another definition of sectorial collection by replacing Condition \ref{item. I} in the spirit of Definition \ref{defn:sectorial hypersurface}. For this purpose, we need some preparations.
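Before turning to these preparations, we record the standard cotangent model of a sectorial collection (a sketch with the sign conventions $\lambda = -p\,dq$ and $Z = p\frac{\partial}{\partial p}$ of the present paper; the choice $I_i = p_i$ is ours, up to the signs allowed in \cite{gps-2}). Near the corner $\{q_1 = q_2 = 0\}$ of $Q = [0,1]^2$, set $$ M = T^*Q, \qquad H_i = T^*Q|_{\{q_i = 0\}}, \quad i = 1,2. $$ The hypersurfaces $H_i$ intersect cleanly in the coisotropic submanifold $H_1 \cap H_2 = T^*Q|_{\{q_1 = q_2 = 0\}}$, the characteristic distributions are $\cD_i = \operatorname{span}\left\{\frac{\partial}{\partial p_i}\right\}$, and the momentum functions $I_i = p_i$ are linear near infinity ($Z[p_i] = p_i$) and satisfy \eqref{eq:function-Ii}: $$ dI_i\left(\frac{\partial}{\partial p_i}\right) = 1 \neq 0, \qquad dI_i\left(\frac{\partial}{\partial p_j}\right) = 0 \quad (i \neq j), \qquad \{I_1, I_2\} = 0. $$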
We start by introducing the following definition. \begin{defn}[Clean coisotropic collection]\label{defn:clean coisotropic collection} Let $(M,\lambda)$ be a Liouville manifold with boundary and corners. Let $H_1, \ldots, H_m \subset M$ be a collection of hypersurfaces, cylindrical at infinity, that satisfies Conditions \ref{item. clean intersection sectorial} and \ref{item. coisotropic sectorial} of Definition \ref{defn:sectorial-collection}. We call such a collection a \emph{clean coisotropic collection}. \end{defn} In the remainder of this section, we first study the underlying geometry and prove a general structure theorem for such a collection. In the next section, based on the theorem, we will provide an intrinsic characterization of sectorial collections and of the Liouville sectors with corners above, purely in terms of the geometry of coisotropic submanifolds. We call the resulting structure the structure of \emph{Liouville $\sigma$-sectors with corners}. \subsection{Gotay's coisotropic embedding theorem of presymplectic manifolds} \label{subsec:neighborhoods} For a finer study of the neighborhood structure of the sectorial corner $C$, we first recall below some basic properties of coisotropic submanifolds and the coisotropic embedding theorem of Gotay \cite{gotay}. See also \cite{weinstein-cbms}, \cite{oh-park} for relevant material on the geometry of coisotropic submanifolds. We will mostly adopt the notations used in \cite{gotay}, \cite[Section 3]{oh-park}. Let $(Y,\omega_Y)$ be any presymplectic manifold. The null distribution on $Y$ is the vector bundle $$ E: = (TY)^{\omega_Y} \subset TY, \quad E_y = \ker \omega_Y|_y. $$ This distribution is integrable since $\omega_Y$ is closed. We call the corresponding foliation the {\it null foliation} on $Y$ and denote it by $$ \cF = \cF_Y. $$ (Then $E$ is nothing but the total space of the foliation tangent bundle $T\cF$.) We now consider the dual bundle $\pi: E^* \to Y$ which is the foliation cotangent bundle $$ E^* = T^*\cF. $$ The tangent bundle $TE^*$ of the total space $E^*$ has its restriction to the zero section $Y \hookrightarrow E^*$; this restriction carries a canonical decomposition $$ TE^*|_Y \cong TY \oplus E^*. $$ \begin{example} A typical example of a presymplectic manifold is given by $$ (Y,\omega_Y) = (H, \omega_H), \quad \omega_H: = \iota_H^*\omega $$ arising from any coisotropic submanifold $\iota_H: H \hookrightarrow (X,\omega)$. Then $E = \cD_H$, the null distribution of $(H,\omega_H)$. It is easy to check that the isomorphism $$ TX \to T^*X $$ induced by $\omega$ maps $(TY)^\omega$ to the conormal $N^*Y \subset T^*X$, and induces an isomorphism between $NY = (TX)|_Y/TY$ and $E^*$. \end{example} Gotay \cite{gotay} takes a transverse symplectic subbundle $G$ of $TY$ and associates to each splitting \eqn\label{eq:splitting} \Gamma: \quad TY = G \oplus E, \quad E = T \cF \eqnd the zero section map $$ \Phi_\Gamma: Y \hookrightarrow T^*\cF = E^* $$ as a coisotropic embedding with respect to a `canonical' two-form $\omega_{E^*}$ on $E^*$ which restricts to a symplectic form on a neighborhood of the zero section of $E^*$ such that $$ \omega_Y = \Phi_\Gamma^*\omega_{E^*}. $$ \begin{remark} When $\omega_Y = 0$, Gotay's embedding theorem reduces to Weinstein's well-known neighborhood theorem for Lagrangian submanifolds $L$, in which case $E^* = T^*L$ with $Y = L$. \end{remark} We now describe this symplectic form, closely following \cite{gotay}. We denote the aforementioned neighborhood by $$ V \subset T^*\cF = E^*.
$$ Using the splitting $\Gamma$, which may be regarded as an `Ehresmann connection' of the `fibration' $$ T \cF \to Y \to \cN_Y, $$ we can explicitly write down a symplectic form $\omega_{E^*}$ as follows. First note that as a vector bundle, we have a natural splitting $$ TE^*|_Y \cong TY \oplus E^* \cong G \oplus E \oplus T^*\cF $$ on $Y$, which can be extended to a neighborhood $V$ of the zero section $Y \subset E^*$ via the `connection of the fibration' $T^*\cF \to Y$. (We refer readers to \cite{oh-park} for a complete discussion on this.) We denote by $$ p_\Gamma : TY \to T\cF $$ the (fiberwise) projection to $E=T\cF$ over $Y$ with respect to the splitting \eqref{eq:splitting}. We have the bundle map $$ TE^* \stackrel{T\pi}\longrightarrow TY \stackrel{p_\Gamma} \longrightarrow E $$ over $Y$. \begin{defn}[Canonical one-form $\theta_\Gamma$ on $E^*$] Let $\zeta \in E^*$ and $\xi \in T_\zeta E^*$. We define the one-form $\theta_\Gamma$ on $E^*$ whose value at $\zeta$ is the linear functional $$ \theta_{\Gamma}|_\zeta \in T_\zeta^*E^* $$ determined by its pairing \eqn\label{eq:thetag} \theta_\Gamma|_{\zeta}(\xi): = \zeta(p_\Gamma \circ T\pi(\xi)) \eqnd with $\xi \in T_\zeta(T^*\cF)$. \end{defn} (We remark that this reduces to the canonical Liouville one-form $\theta$ on the cotangent bundle $T^*L$ in the case of a Lagrangian submanifold $L$, in which case $\omega_Y = 0$ and the splitting is trivial and not needed.) Then we define the closed (indeed exact) two-form on $E^* = T^*\cF$ by $$ - d\theta_\Gamma. $$ Together with the pull-back form $\pi^*\omega_Y$, we consider the closed two-form $\omega_{E^*,\Gamma}$ defined by \eqn\label{eq:omega*} \omega_{E^*,\Gamma}:= \pi^*\omega_Y - d\theta_\Gamma \eqnd on $E^* = T^*\cF$. It is easy to see that $\omega_{E^*,\Gamma}$ is non-degenerate in a neighborhood $V \subset E^* $ of the zero section. (See the coordinate expression \cite[Equation (6.6)]{oh-park} of $d\theta_\Gamma$ and $\omega_V$.) \begin{defn}[Gotay's symplectic form \cite{gotay}]\label{defn:gotay's-two-form} We denote the restriction of $\omega_{E^*,\Gamma}$ to $V$ by $\omega_V$, i.e., $$ \omega_V: = (\pi^*\omega_Y - d\theta_\Gamma)|_V. $$ We call this two-form \emph{Gotay's symplectic form} on $V \subset E^*$. \end{defn} The following theorem completes the description of Gotay's normal form, which presents a neighborhood of a coisotropic submanifold $C$ of any symplectic manifold $(M,\omega)$ as a neighborhood $V$ of the zero section of the bundle $T^*\cF_C$ of its null foliation $\cF_C$ on $C$, equipped with the above symplectic form. \begin{theorem}[See \cite{gotay,oh-park}]\label{thm:normal-form} Let $Y \subset (X,\omega_X)$ be any coisotropic submanifold. Fix a splitting $\Gamma$ in \eqref{eq:splitting}. Then there is a neighborhood $\nbhd(Y):= U \subset X$ and a diffeomorphism $$ \Phi_\Gamma : U \to V \subset E^* $$ such that the following hold: \begin{enumerate} \item $\omega_X = \Phi_\Gamma^*\omega_{E^*,\Gamma}$ on $U \subset X$. \item For two different choices, $\Gamma$ and $\Gamma'$, of splitting of $TY$, the associated two-forms $\omega_{E^*,\Gamma}$ and $\omega_{E^*,\Gamma'}$ are diffeomorphic relative to the zero section $Y \subset E^*$, on a possibly smaller neighborhood $V' \subset E^*$ of $Y$. \end{enumerate} \end{theorem} \begin{proof} The first statement is proved in \cite{gotay}. Statement (2) is then proved in \cite[Theorem 10.1]{oh-park}.
\end{proof} We have the natural projection map \eqn\label{eq:projection-to-Y} \widetilde \pi_Y: \nbhd(Y) \to Y \eqnd defined by \eqn\label{eq:piY} \widetilde \pi_Y := \pi_{E^*}\circ \Phi_\Gamma \eqnd on $\nbhd(Y)=:U \subset X$, which is induced by restricting the canonical projection $\pi_{E^*}: E^* \to Y$ to the neighborhood $V \subset E^*$ of the zero section $Y$. In particular, we have $$ \ker d_x\widetilde\pi_Y = E_x = \cD_Y|_x $$ for $x \in Y$. \subsection{Structure of the null foliation of $\sigma$-sectorial corners} We apply the discussion in the previous subsection to a general clean coisotropic collection $$ \{H_1, \cdots, H_m\}. $$ For any given subset $I \subset \{1, \cdots, m\}$, we denote $$ H_I = \bigcap_{i \in I} H_i $$ and let $\pi_{H_I}: H_I \to \cN_{H_I}$ be the canonical projection. We also denote the full intersection by $$ C= \bigcap_{i=1}^m H_i. $$ Furthermore, by the clean intersection property of the coisotropic collection, we can choose the collection of sections $\{\sigma_{C,1}, \ldots, \sigma_{C,m}\}$ to have the complete intersection property in that their images cleanly intersect. More precisely, we fix the following choice of smooth sections for a finer study of the neighborhood structure in the further constructions we will perform. \begin{choice}[Choice of sections $\sigma_i: \cN_{H_i} \to H_i$]\label{choice:sigma} For each $i = 1, \ldots, m$, we choose a smooth section $$ \sigma_i: \cN_{H_i} \to H_i. $$ Denote the set of sections $\sigma_i: \cN_{H_i} \to H_i$ by \eqn\label{eq:sigma-collection} \sigma = \{\sigma_1, \ldots, \sigma_m\}. \eqnd \end{choice} Recall from Section \ref{sec:intrinsic} that for each $i$ a choice of smooth section $$ \sigma_i: \cN_{H_i} \to H_i $$ provides the trivialization map $$ \Psi_i^{\sigma_i}: H_i \to \cN_{H_i} \times {\mathcal R}, \quad \Psi_i^{\sigma_i}(x) = (\pi_{H_i}(x), t_i^{\sigma_i}(x)) $$ given in \eqref{eq:Psi}. We choose each $\sigma_i$ to be $\sigma_i = \sigma_{H_i}^+$ as defined in \eqref{eq:sigmaN+}. For the given choice of $\sigma = \{\sigma_1, \ldots, \sigma_m\}$, we collectively write \eqn\label{eq:Psi-i-sigma} \Psi_i^\sigma: = \Psi_i^{\sigma_i}, \quad i = 1, \ldots, m. \eqnd The following theorem is the generalization of Theorem \ref{thm:equivalence-intro} whose proof also extends the one used in Section \ref{sec:intrinsic} to the case with corners. The main task for this extension is to establish compatibility of the null foliations of the various coisotropic intersections arising from taking a sub-collection $I \subset \{1, \ldots, m\}$: This compatibility condition and the construction of the relevant strata are in the same spirit as the combinatorial construction of a toric variety out of its associated fan. (See \cite{fulton:toric} for example.) \begin{theorem}\label{thm:Z-projectable} Let $(M,\lambda)$ be a Liouville $\sigma$-sector with corners, and let $Z$ be the Liouville vector field of $(M,\lambda)$. Let $$ \sigma = \{\sigma_1, \cdots, \sigma_m\} $$ be a collection of sections $\sigma_i: \cN_{H_i} \to H_i$ for $i=1, \ldots, m$. Then the leaf space $\cN_{C}$ carries a canonical structure $\lambda_{\cN_{C}}$ of a Liouville manifold with boundary and corners. \end{theorem} We also define the function $t_i^{C,\sigma}: C \to {\mathcal R}$ to be the restriction \eqn\label{eq:tiC} t_i^{C,\sigma}= t_i^{\sigma_i}|_C \eqnd where $t_i^{\sigma_i}$ is the function appearing in \eqref{eq:Psi}.
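In the flat model $M = T^*Q$, $H_i = T^*Q|_{\{q_i = 0\}}$ considered earlier, these functions are explicit (a sketch under our normalization; the levels $N_i$ refer to the chosen sections $\sigma_i = \sigma_{H_i}^+$ of \eqref{eq:sigmaN+}). With $C = T^*Q|_{\{q_1 = q_2 = 0\}}$ and $\sigma_i$ cutting each leaf of $H_i$ at the level $\{p_i = N_i\}$, the normalization $dh_+(\widetilde Z) = 1$ with $h_+ = p_i$ gives $Z_i' = \frac{\partial}{\partial p_i}$, hence $$ t_i^{C,\sigma}(q,p) = p_i - N_i \qquad \text{on } C, $$ and the leaves of $C$ are the orbits $\{(q, p + t_1\, dq_1 + t_2\, dq_2) \mid t_1, t_2 \in {\mathcal R}\}$ of the induced ${\mathcal R}^2$-action of Proposition \ref{prop:PsiC} below.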
The collection $\sigma = \{\sigma_i\}$ also induces a surjective map $\Psi_C^\sigma: C \to \cN_C \times {\mathcal R}^m$, \eqn\label{eq:PsiC} \Psi_C^\sigma(x): = \left(\pi_C(x), \left(t_1^{C,\sigma}(x),\ldots, t_m^{C,\sigma}(x)\right)\right) \eqnd which is also smooth with respect to the induced smooth structure on $\cN_C$. (The functions $t_i^{C,\sigma}$ correspond to $t_i$ appearing in \cite[Section 49]{arnold:mechanics} in the discussion below.) \begin{prop}\label{prop:PsiC} There is an ${\mathcal R}^m$-action on $C$ that is free, proper and discontinuous and such that $C$ is foliated by the ${\mathcal R}^m$-orbits. In particular the map $$ \Psi_C^\sigma: C \to \cN_C \times {\mathcal R}^m $$ is an ${\mathcal R}^m$-equivariant diffeomorphism with respect to the ${\mathcal R}^m$-action on $C$ and that of linear translations on ${\mathcal R}^m$. \end{prop} \begin{proof} Let $(s_1, \ldots, s_m)$ be the standard coordinates of ${\mathcal R}^m$. We set \eqn\label{eq:Zi} Z_i : = (\Psi_C^\sigma)^*\left(\vec 0_{\cN_C} \oplus \frac{\partial}{\partial s_i}\right). \eqnd Then $Z_i \in \cD_C$, and $[Z_i,Z_j] = 0$ since $[\frac{\partial}{\partial s_i},\frac{\partial}{\partial s_j}] = 0$. On $C$, we also have $$ dt_j^{C,\sigma}(Z_i) = d(s_j \circ \Psi_C^\sigma)\left((\Psi_C^\sigma)^*\left(\vec 0_{\cN_C} \oplus \frac{\partial}{\partial s_i}\right)\right) = ds_j\left(\frac{\partial}{\partial s_i}\right) = \delta_{ij}. $$ In particular $Z_i$ is tangent to all level sets of $t_j^{C,\sigma}$ with $j \neq i$, and is transversal to the level sets of $t_i^{C,\sigma}$ for each $i$. The vector fields $\{Z_1, \cdots, Z_m\}$ so constructed form a global frame of the null distribution $\cD_C$ on $C$ and commute with one another. Therefore we have an ${\mathcal R}^m$-action on $C$ induced by the flows of the commuting vector fields $\{Z_1, \cdots, Z_m\}$. \begin{lemma}\label{lem:discrete-isotropy} This ${\mathcal R}^m$-action is also proper and discontinuous. In particular, its isotropy subgroup is a discrete subgroup of ${\mathcal R}^m$. \end{lemma} \begin{proof} The Liouville vector field $Z$ is tangent to every $H_i$ at infinity. Therefore the flag $$ H_1 \cap \cdots \cap H_m \subset H_1 \cap \cdots \cap H_{m-1} \subset \cdots \subset H_1 $$ is $Z$-invariant at infinity, and in particular we have $$ Z \in TC $$ at infinity of $C$. Since $Z[s] = 1$, $Z$ is also transversal to $s^{-1}(r)$ for all sufficiently large $r > 0$. Therefore the ${\mathcal R}^m$-action induces a free ${\mathcal R}^m/{\mathcal R}$-action on the set $\partial_\infty C = \partial_\infty M \cap C$ of asymptotic Liouville rays tangent to $C$. Since the latter set is compact, it follows that the ${\mathcal R}^m/{\mathcal R}$-action is proper and discontinuous. Since the flow of $Z$ or the ${\mathcal R}$-action induced by $Z$ moves the level of $s$ by 1 as time varies by 1, we conclude that the ${\mathcal R}^m$-action on $C$ is proper and discontinuous. Once the action is proved to be proper and discontinuous, the second statement of the lemma follows e.g. from the proof in \cite[Section 49, Lemma 3]{arnold:mechanics}, to which we refer. This finishes the proof. \end{proof} With Lemma \ref{lem:discrete-isotropy} at our disposal, the standard argument in the construction of action-angle coordinates proves that each orbit of the ${\mathcal R}^m$-action is homeomorphic to ${\mathcal R}^{n_1} \times T^{n_2}$ for some $n_1, \, n_2$ with $n_1 + n_2 = m$. (See \cite[Section 49, Lemma 3]{arnold:mechanics} and its proof.)
Since we are assuming that the fibers are contractible, we immediately conclude the following. \begin{cor}\label{cor:contractible-fiber} Suppose $\pi_C: C \to \cN_C$ has contractible fibers. Then \begin{enumerate} \item The ${\mathcal R}^m$-action is free and each fiber is naturally diffeomorphic to ${\mathcal R}^m$, i.e., $C$ is a principal ${\mathcal R}^m$-bundle over $\cN_C$. \item The map $\Psi_C^\sigma$ is an ${\mathcal R}^m$-equivariant diffeomorphism with respect to the translations of ${\mathcal R}^m$. \end{enumerate} \end{cor} The inverse of $\Psi_C^\sigma$, denoted by \eqn\label{eq:PhiC} \Phi_C^\sigma: \cN_C \times {\mathcal R}^m \to C, \eqnd is also easy to write down explicitly as follows. First we note $$ t_i^{C,\sigma}(\sigma_{C,i}(\pi_C(x))) = 0 $$ for all $i=1,\ldots, m$ by the definitions of $\sigma_{C,i}$ and $t_i^{C,\sigma}$. Now let a point $$ (\ell, (t_1, \ldots, t_m)) \in \cN_C \times {\mathcal R}^m $$ be given. Then there is a unique point $x \in C$ satisfying \eqn\label{eq:PhiC-def} \begin{cases} \pi_C(x) = \ell\\ x \in \bigcap_{i=1}^m (t_i^{C,\sigma})^{-1}(t_i). \end{cases} \eqnd (See \eqref{eq:function-t} for the definition of $t_i^{C,\sigma}$ and Definition \ref{defn:Z'} for the definition of the vector field $Z_i'$, respectively.) Then we define $\Phi_C^\sigma(\ell, (t_1, \ldots, t_m))$ to be this unique point. It is easy to check from the definition that $\Phi_C^\sigma$ is indeed the inverse of $\Psi_C^\sigma$. This finishes the proof of Proposition \ref{prop:PsiC}. \end{proof} By applying the above proof and Proposition \ref{prop:PsiC} to any sub-collection $I \subset \{1, \cdots, m\}$ including the full collection itself, we also obtain the following stronger form of Theorem \ref{thm:Z-projectable}. \begin{theorem}\label{thm:stronger-Z-projectable} Let $I \subset \{1, \cdots, m\}$ be any sub-collection, and define $$ H_I = \bigcap_{i \in I} H_i. $$ Assume $\pi_{H_I}: H_I \to \cN_{H_I}$ has contractible fibers. Let $\lambda_{\cN_{H_I}}$ be the canonical induced Liouville form as before. Then the following hold: \begin{enumerate} \item There is an ${\mathcal R}^{|I|}$-action on $H_I$ that is free, proper and discontinuous and such that $H_I$ is foliated by the ${\mathcal R}^{|I|}$-orbits. In particular the map $$ \Psi_{H_I}^\sigma: H_I \to \cN_{H_I} \times {\mathcal R}^{|I|} $$ is an ${\mathcal R}^{|I|}$-equivariant diffeomorphism with respect to the ${\mathcal R}^{|I|}$-action on $H_I$ and that of linear translations on ${\mathcal R}^{|I|}$. \item The leaf space $\cN_{H_I}$ carries a canonical structure of Liouville manifold with boundary and corners. \end{enumerate} \end{theorem} By applying the above to the full collection $C = H_{\{1,\ldots, m\}}$, we have finished the proof of Theorem \ref{thm:Z-projectable}. \subsection{Compatibility of null foliations of clean coisotropic intersections} Let $C$ be as in the previous section and let $\{\sigma_1, \cdots, \sigma_m\}$ be the collection of sections $\sigma_i:\cN_{H_i} \to H_i$ made in Choice \ref{choice:sigma}. For each subset $I \subset \{1, \cdots, m\}$, we have the following section $$ \sigma_I: \cN_{H_I} \to H_I $$ defined by \eqn\label{eq:sigma-I} \sigma_I([\ell]): = \Phi_{H_I}^\sigma([\ell], (0,\cdots, 0)) = (\Psi_{H_I}^\sigma)^{-1}([\ell], (0,\cdots, 0)) \eqnd for the diffeomorphism $\Phi_{H_I}^\sigma$ given in \eqref{eq:PhiC} applied to $C = H_I$.
Then for each pair of subsets $I \subset J $ of $\{1,\cdots, m\}$, we have the map $$ \psi_{JI}^\sigma: \cN_{H_J} \to \cN_{H_I} $$ given by \eqn\label{eq:psi-IJ} \psi_{JI}^\sigma([\ell]): = \pi_{H_I}(\Phi_{H_J}^\sigma([\ell], (0,\cdots,0))). \eqnd In particular consider the cases with $I = \{i\}$, $J = \{i, j\}$ and $K = \{i,j,k\}$. Then we prove the following compatibility of the collection of maps $\psi_{JI}^\sigma$: For each $i \neq j$, we consider the maps $$ \psi_{ij,i}^\sigma: \cN_{H_i\cap H_j} \to \cN_{H_i} $$ defined by $\psi_{ij,i}^\sigma := \psi_{\{i,j\}\{i\}}^\sigma$, and the inclusion maps $$ \iota_{ij,i}: H_i \cap H_j \to H_i. $$ \begin{prop}\label{prop:sectorial-diagram} Let $\{H_1,\ldots,H_m\}$ be a collection of hypersurfaces satisfying only ~\ref{item. clean intersection sectorial} and~\ref{item. coisotropic sectorial}. Then the maps $\psi_{ij,i}^\sigma$ satisfy the following: \begin{enumerate} \item They are symplectic embeddings. \item The diagram \eqn\label{eq:sectorial-diagram} \xymatrix{ H_i \cap H_j \ar[r]^{\iota_{ij,i}} \ar[d]_{\pi_{ij}} & H_i \ar[d]^{\pi_i}\\ \cN_{H_i\cap H_j} \ar[r]^{\psi_{ij,i}^\sigma} & \cN_{H_i} } \eqnd commutes for all pairs $1 \leq i, \, j \leq m$. \item The diagrams are compatible in the sense that we have $$ \psi_{ij,i}^\sigma \circ \psi_{ijk,ij}^\sigma = \psi_{ijk,i}^\sigma $$ for all triples $1 \leq i,\, j, \, k \leq m$. \end{enumerate} \end{prop} \begin{proof} We first show that the map $\psi_{ij,i}^\sigma$ is an embedding. Let $\ell_1, \, \ell_2$ be two leaves of the null-foliation of $H_i \cap H_j$ such that $$ \ell_1 \cap H_i = \ell_2 \cap H_i. $$ By the definition of leaves, it suffices to show that $\ell_1 \cap \ell_2 \neq \emptyset$. Let $x$ be a point of the above common intersection, which obviously implies $$ x \in \ell_1 \cap \ell_2 \subset H_i \cap H_j. $$ This proves that $\psi_{ij,i}^\sigma$ is a one-one map. Then smoothness and the embedding property of $\psi_{ij,i}^\sigma$ follow from the definition of the smooth structures given on the leaf spaces. For the commutativity, we first note \eqn\label{eq:psi-iji=} \psi_{ij,i}^\sigma (\pi_{ij}(x)) = \pi_i(\Phi_{ij}^\sigma(\pi_{ij}(x), (0,0))) \eqnd by the definition of the maps $\psi_{ij,i}^\sigma$. But by the definition \eqref{eq:PhiC-defining} of $\Phi_{ij}^\sigma$, the point $$ y := \Phi_{ij}^\sigma(\pi_{ij}(x), (0,0)) $$ is the intersection point $$ y \in \operatorname{Image} \sigma_i \cap \operatorname{Image} \sigma_j. $$ Since $x \in H_i \cap H_j$, we can express it as $$ x = \Phi_{ij}^\sigma(\pi_{ij}(x), (t_1, t_2)) $$ for some $t_1, \, t_2 \in {\mathcal R}$. In other words, it is obtained from $y$ by the characteristic flows of $H_i$ and $H_j$, by the definition of $\Phi_{ij}^\sigma$ in \eqref{eq:PhiC-defining}. In particular, we have $$ \pi_i(\iota_{ij,i}(x)) = \pi_i(y). $$ On the other hand, the definition of the null foliation of $\cN_{H_i}$ implies \eqn\label{eq:pi-i=} \pi_i(y) = \psi_{ij,i}^\sigma (\pi_{ij}(x)) \eqnd for all $x \in H_i \cap H_j$. Combining the last two equalities, we obtain $\pi_i \circ \iota_{ij,i} = \psi_{ij,i}^\sigma \circ \pi_{ij}$, which is precisely the commutativity of \eqref{eq:sectorial-diagram}. Finally we show that $\psi_{ij,i}^\sigma$ is a symplectic map. Consider the pull-back $$ \omega_{ij}^\sigma: = (\psi_{ij,i}^\sigma)^*(\omega_{\cN_{H_i}}).
$$ We will show that $\omega_{ij}^\sigma$ satisfies the defining property $$ \pi_{H_i \cap H_j}^*\omega_{ij}^\sigma = \iota_{H_i \cap H_j}^*\omega, \quad \omega = d\lambda $$ of the reduced form on $\cN_{H_i \cap H_j}$ under the coisotropic reduction of the coisotropic submanifold $H_i \cap H_j \subset M$. We compute \begin{eqnarray*} \pi_{H_i \cap H_j}^*\omega_{ij}^\sigma & = & \pi_{H_i \cap H_j}^*(\psi_{ij,i}^\sigma)^*(\omega_{\cN_{H_i}})\\ & = & (\psi_{ij,i}^\sigma \circ \pi_{H_i \cap H_j})^*(\omega_{\cN_{H_i}}) \\ & = & (\pi_{H_i}\circ \iota_{H_i\cap H_j,H_i})^* \omega_{\cN_{H_i}} \\ & = & (\iota_{H_i\cap H_j,H_i})^*(\pi_{H_i}^*\omega_{\cN_{H_i}}) \\ & = & (\iota_{H_i\cap H_j,H_i})^*(\iota_{H_i}^*\omega) = \iota_{H_i\cap H_j}^*\omega \end{eqnarray*} where we use the defining condition $$ \pi_{H_i}^*\omega_{\cN_{H_i}} = \iota_{H_i}^*\omega $$ of the reduced form $\omega_{\cN_{H_i}}$ of the presymplectic form on $H_i$ for the penultimate equality. Therefore we have proved $$ \pi_{H_i \cap H_j}^*\omega_{ij}^\sigma = \iota_{H_i \cap H_j}^*\omega. $$ This shows that the form $\omega_{ij}^\sigma$ satisfies the defining equation \eqref{eq:reduced-form} of the reduced form $\omega_{H_i \cap H_j}$. Then by the uniqueness of the reduced form, we have derived $$ \omega_{ij}^\sigma = \omega_{H_i \cap H_j}. $$ This proves $(\psi_{ij,i}^\sigma)^*\omega_{\cN_{H_i}} = \omega_{H_i \cap H_j}$, which finishes the proof of Statement (1). Statement (3) also follows by a similar argument, this time from the naturality of the \emph{coisotropic reduction by stages}: Consider $H_i, \, H_j, \, H_k$ in the given coisotropic collection and consider the two flags \eqn\label{eq:flag1} H_i \cap H_j \cap H_k \subset H_i \cap H_j \subset H_i \eqnd and \eqn\label{eq:flag2} H_i \cap H_j \cap H_k \subset H_i. \eqnd The composition $\psi_{ij,i}^\sigma \circ \psi_{ijk,ij}^\sigma$ is the map obtained by the coisotropic reductions in two stages, and $\psi_{ijk,i}^\sigma$ is the one obtained by the one-stage reduction performed in the proof of Statement (1) with the pair $(H_i \cap H_j, H_i)$ replaced by $(H_i\cap H_j \cap H_k, H_i)$. Then by the naturality of the coisotropic reduction, we have proved Statement (3). This finishes the proof of the proposition. \end{proof} The following is an immediate corollary of the above proposition and its proof. (See Remark \ref{rem:stratawise-presymplectic} for the relevant remark on stratified presymplectic manifolds.) \begin{cor} The collection of maps $$ \{\psi_{JI}^\sigma\}_{I \subset J \subset \{1, \ldots, m\}} $$ is compatible in the sense that each leaf space $\cN_{H_I}$ carries the structure of a symplectic manifold with boundary and corners. \end{cor} \section{Liouville $\sigma$-sectors and canonical splitting data} Let $\{H_1, \cdots, H_m\}$ be a clean coisotropic collection as in Definition \ref{defn:clean coisotropic collection}. We denote their intersection by $$ C = H_1 \cap \cdots \cap H_m $$ as before, which is a coisotropic submanifold of codimension $m$. \subsection{Definition of Liouville $\sigma$-sectors with corners} Denote by $\iota_{CH_i}: C \to H_i$ the inclusion map, and let $\sigma = \{\sigma_1, \ldots, \sigma_m\}$ be the collection as before. This induces the diagram \eqn\label{eq:sectorial-diagram-C} \xymatrix{C \ar[r]^{\iota_{CH_i}} \ar[d]_{\pi_{C}} & H_i \ar[d]^{\pi_i}\\ \cN_{C} \ar[r]^{\psi_{CH_i}^\sigma} & \cN_{H_i} } \eqnd for all $i$, which are compatible in the sense of Statement (2) of Proposition \ref{prop:sectorial-diagram}.
In fact, we have \eqn\label{eq:cNC} \cD_C = \cD_{H_1}|_C + \cD_{H_2}|_C + \cdots + \cD_{H_m}|_C \eqnd which canonically induces the leaf map $\psi_{CH_i}^\sigma$ in the bottom arrow that makes the diagram commute. With these preparations, we are finally ready to provide the sectional characterization of Liouville sectors with corners. \begin{defn}[Liouville $\sigma$-sectors with corners]\label{defn:intrinsic-corners} Let $M$ be a manifold with corners equipped with a Liouville one-form $\lambda$. We call $(M,\lambda)$ a \emph{Liouville $\sigma$-sector with corners} if at each sectorial corner $\delta$ of $\partial M$, the corner can be expressed as $$ C_\delta := H_{\delta,1} \cap \cdots \cap H_{\delta,m} $$ for a clean coisotropic collection $$ \{H_{\delta,1}, \cdots, H_{\delta,m}\} $$ of $\sigma$-sectorial hypersurfaces such that the fibers of the map $$ \pi_{C_\delta}: C_\delta \to \cN_{C_\delta} $$ are contractible. We call such a corner $C_\delta$ a \emph{$\sigma$-sectorial corner of codimension $m$}. \end{defn} In the remainder of this section, we will derive the consequences of this definition. \subsection{Integrable systems and canonical splitting data} By applying Theorem \ref{thm:normal-form} to the coisotropic submanifold $C$, we will obtain a neighborhood $\nbhd(C) \subset M$ and the projection $$ \widetilde \pi_C: \nbhd(C) \to C. $$ \begin{choice}[Splitting $\Gamma_C^\sigma$]\label{choice:GC} Let $\sigma=\{\sigma_1,\cdots, \sigma_m\}$ be a choice of sections of the clean coisotropic collection $\{H_1, \cdots, H_m\}$. Then we associate thereto the splitting \eqn\label{eq:C-splitting} \Gamma = \Gamma_C^\sigma : \quad TC = G_C^\sigma \oplus \cD_C \eqnd given by the transversal symplectic subbundle \eqn\label{eq:GC} G_C^\sigma |_x : = (d\Psi_C^\sigma|_x)^{-1}(T_{\pi_C(x)}\cN_C \oplus \{0\}_{{\mathcal R}^m}). \eqnd \end{choice} Applying Theorem \ref{thm:normal-form}, we obtain a diffeomorphism $$ \Psi_\Gamma^\sigma: \nbhd(C) \to V \subset E^* = T^*\cF_C $$ where $\cF_C$ is the null foliation of $C$. Furthermore the pushforward of the symplectic form $d\lambda$ is given by Gotay's canonical symplectic form on $V \subset E^*$, $$ (\Psi_\Gamma^\sigma)_*(d\lambda) = \pi^*\omega_C - d\theta_\Gamma, $$ for the presymplectic form $\omega_C = \iota_C^*(d\lambda)$ on $C$. (See Theorem \ref{thm:normal-form}.) Note that we have $$ \cD_C|_x = \text{\rm span}_{\mathcal R}\{Z_1(x), \cdots, Z_m(x)\} $$ by the definition of the $Z_i$ above. Therefore the aforementioned ${\mathcal R}^m$-action induces an ${\mathcal R}^m$-equivariant bundle isomorphism $$ \cD_C \cong C \times {\mathcal R}^m $$ over $C$. (This isomorphism does not depend on the choice of $\sigma$ but depends only on the Liouville geometry of $\nbhd(C \cap \partial_\infty M)$. See Remark \ref{rem:sigma-independence}.) In this way, the aforementioned splitting $ TC = G_C^\sigma \oplus \cD_C $ given in \eqref{eq:GC} becomes ${\mathcal R}^m$-equivariant. In other words, for each group element $\mathsf t = (t_1, \dots, t_m) \in {\mathcal R}^m$, we have the equality $$ d \mathsf t (G_x^\sigma ) = G_{\mathsf t\cdot x}^\sigma. $$ For a fixed $\alpha> 0$, we put \eqn\label{eq:definition-Ii} I_i^\sigma = \pm e^{\alpha t_i^{C,\sigma}} \eqnd which then satisfies $dI_i^\sigma(Z_i) = \alpha\, I_i^\sigma$ on $C$.
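The latter identity, together with its off-diagonal companion that will reappear in \eqref{eq:tilde-dIiZj} below, follows from the one-line chain-rule computation $$ dI_i^\sigma(Z_j) = \pm\,\alpha\, e^{\alpha t_i^{C,\sigma}}\, dt_i^{C,\sigma}(Z_j) = \alpha\,\delta_{ij}\, I_i^\sigma \quad \text{on } C, $$ where we use the relation $dt_i^{C,\sigma}(Z_j) = \delta_{ij}$ established in the proof of Proposition \ref{prop:PsiC}.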
Noting that the induced ${\mathcal R}^m$-action on $TC$ preserves the subbundle $$ T \cF_C = \cD_C \subset TC, $$ the canonically induced action on $T^*C$ also preserves the subbundle $$ \cD_C^\perp \subset T^*C $$ for which we have the isomorphism $$ T^*\cF \cong \cD_C^{\perp}. $$ Therefore the ${\mathcal R}^m$-action on $C$ can be lifted to $T^*\cF$ as the restriction of the action on $T^*C$ canonically induced by the one on $C$. \begin{lemma} We can lift the vector fields $Z_j$ to vector fields $\widetilde Z_j$ on $T^*\cF$ which are the generators of the induced ${\mathcal R}^m$-action and such that \begin{enumerate} \item $\widetilde Z_j|_C = Z_j$, \item the collection $\{\widetilde Z_j\}$ consists of commuting vector fields. \end{enumerate} \end{lemma} \begin{proof} Let $\phi_{Z_j}^t$ be the flow of $Z_j$ on $C$. Since the ${\mathcal R}^m$-action is abelian, the vector fields $Z_j$ are pairwise commuting. Then the lift $\widetilde Z_j$ is nothing but the vector field generating the isotopy of the canonical derivative maps $$ ((d\phi_{Z_j}^t)^*)^{-1}: T^*C \to T^*C $$ on $T^*C$. Since the flows $\phi_{Z_j}^t$ are commuting, their derivatives are also commuting. Then obviously their dual flows $((d\phi_{Z_j}^t)^*)^{-1}$ on $T^*C$ are also commuting, and hence so are the $\widetilde Z_j$. The first condition also follows since each map $((d\phi_{Z_j}^t)^*)^{-1}$ preserves the zero section, on which it restricts to $\phi_{Z_j}^t$. This finishes the proof. \end{proof} We now define $$ \widetilde I_i^\sigma = I_i^\sigma \circ \pi_{T^*\cF}. $$ Then $\{d\widetilde I_1^\sigma,\cdots, d\widetilde I_m^\sigma\}$ are linearly independent on a neighborhood of the zero section of $T^*\cF$ if we choose the neighborhood small enough. This is because $\{dI_1^\sigma, \ldots, dI_m^\sigma\}$ are linearly independent on $C$. By suitably adjusting the parametrization $t_i^{C,\sigma}$ of the ${\mathcal R}^m$-action, we can make the equation \eqn\label{eq:tilde-dIiZj} d\widetilde I_i^\sigma(\widetilde Z_j) = \alpha \delta_{ij} \widetilde I_i^\sigma \eqnd hold. \emph{This is precisely the situation of a completely integrable system, to which we can apply the standard construction of action-angle coordinates.} (See \cite[Section 49]{arnold:mechanics} for example.) Therefore, regarding $\{\widetilde I_1^\sigma, \ldots, \widetilde I_m^\sigma\}$ as the (fiberwise) angle coordinates, we can find a unique choice of (fiberwise) action coordinates $$ \{\widetilde R_1^\sigma, \cdots, \widetilde R_m^\sigma\} $$ over $\cN_C$ satisfying $$ \{\widetilde R_i^\sigma, \widetilde I_j^\sigma\} = \delta_{ij}, \quad \widetilde R_i^\sigma \circ \Phi_C^\sigma|_{H_i} = 0 $$ on a neighborhood $V \subset T^*\cF_C$ of the zero section $0_{T^*\cF_C} \cong C$. Now we define the pull-back functions $$ R_i^\sigma: = \widetilde R_i^\sigma \circ \Phi_C^\sigma, \quad I_j^\sigma: = \widetilde I_j^\sigma \circ \Phi_C^\sigma $$ on $U= \nbhd(C)$. We also pull back the vector fields $\widetilde Z_j$ to $\nbhd(C)$ by $\Phi_C^\sigma$ and denote them by $Z_j$. (Note that these notations are consistent in that the restrictions to $C$ are nothing but the functions $I_j^\sigma$ and the vector fields $Z_j$ already given on $C$.) Furthermore, we have the relationship $$ Z_j = X_{R_j^\sigma}. $$ (See the definition \eqref{eq:Zi} of $Z_i$ on $C$.) Then we have $$ \{R_i^\sigma,R_j^\sigma\} = \omega(X_{R_i^\sigma},X_{R_j^\sigma})= \omega(Z_i,Z_j) = 0 $$ on $\nbhd(C)$. Since $ Z_i = X_{R_i^\sigma}, $ we have \eqn\label{eq:moment-map} Z_i \rfloor \omega = dR_i^\sigma \eqnd on $U = \nbhd^Z(C)$.
This is precisely the defining equation of the moment map $\phi_{G,C}^\sigma: \nbhd(C) \to \frak g^* \cong {\mathcal R}^m$ of the above $G = {\mathcal R}^m$-action, given by $$ \phi_{G,C}^\sigma(x) = (R_1^\sigma(x), \cdots, R_m^\sigma(x)). $$ Recall that the hypersurfaces $H_i$ are $Z$-invariant at infinity. Therefore we can choose the neighborhood $\nbhd(C)$ so that it is $Z$-invariant at infinity. Then by the requirement put on the Liouville vector field $Z$, which points outward along $\partial M$, we can choose the whole neighborhood $\nbhd(C)$ to be $Z$-invariant. Together with the normalization condition of the $R_i^\sigma$'s $$ R_i^\sigma|_{H_i} = \widetilde R_i^\sigma \circ \Phi_C^\sigma|_{H_i} = 0, $$ it also implies $R_i^\sigma \geq 0$ on $\nbhd(C)$ for all $i$. We now take the neighborhood $U \subset M$ to be this $Z$-invariant neighborhood $$ U = \nbhd^Z(C). $$ The content of the above discussion can be summarized into the following intrinsic derivation of the splitting data. \begin{theorem}[$\sigma$-Splitting data]\label{thm:splitting-data-corners} Let $C \subset \partial M$ be a sectorial corner of codimension $m$ associated to the sectorial coisotropic collection $\{H_1, \ldots, H_m\}$ on $\partial M$. Then for each choice $$ \sigma = \{\sigma_1, \cdots, \sigma_m\} $$ of sections $\sigma_i: \cN_{H_i} \to H_i$ of $\pi_{H_i}$ for $i = 1, \cdots, m$, there are diffeomorphisms $$ \Psi_C^\sigma: \nbhd^Z(C) \cap \partial M \to F_C^\sigma \times {\mathcal R}^m $$ and $$ \psi_C^\sigma: \cN_{C} \to F_C^\sigma $$ such that \begin{enumerate} \item $F_C^\sigma = \text{\rm Image } \sigma_1 \cap \cdots \cap \text{\rm Image } \sigma_m$, \item $(\Psi_C^\sigma)_*\omega_\partial = \pi_F^*\omega_F$, \item The following diagram \eqn\label{eq:split-coisotropic-diagram} \xymatrix{ \partial M|_{C} \ar[r]^{\Psi_C^\sigma} \ar[d]^{\pi_{\partial M}} & F_C^\sigma \times {\mathcal R}^m \ar[d]_{\pi_{F_C^\sigma}}\\ \cN_{\partial M|_{C}} \ar[r]_{\psi_C^\sigma} & F_C^\sigma } \eqnd commutes for the map $$ \Psi_C^\sigma = (\sigma_C \circ \pi_{F_C},(I_1^\sigma, \cdots, I_m^\sigma)). $$ \item The $G$-action with $G = {\mathcal R}^m$ has the moment map $\phi_{G,C}^\sigma: \nbhd^Z(C) \to {\mathcal R}^m$ given by $$ \phi_{G,C}^\sigma = (R_1^\sigma, \cdots, R_m^\sigma) $$ for a collection of Poisson-commuting $R_i^\sigma$'s satisfying the simultaneous normalization condition $$ R_i^\sigma|_{H_i} = 0, \quad R_i^\sigma \geq 0 $$ for all $i$ on $\nbhd^Z(C)$. \item The map $\widetilde \Psi_C^\sigma: \nbhd^Z(C) \to F_C^\sigma \times {\mathbb C}^m_{\text{\rm Re} \geq 0}$ is given by the formula \eqn\label{eq:tilde-PsiC} \widetilde \Psi_C^\sigma(x) = \left(\sigma_C(\pi_{F_C}(x)), R_1^\sigma(x) + \sqrt{-1} I_1^\sigma(x), \ldots, R_m^\sigma(x) + \sqrt{-1}I_m^\sigma(x)\right) \eqnd and satisfies \eqn\label{eq:tilde-Psi*-omega} (\widetilde \Psi_C^\sigma)_*\omega = \pi_F^*\omega_{F_C} + \sum_{i=1}^m dR_i^\sigma \wedge dI_i^\sigma. \eqnd \end{enumerate} We call these data a \emph{$\sigma$-splitting data} of $\nbhd(C)$ associated to the choice $\sigma = \{\sigma_1, \cdots, \sigma_m\}$ of sections $\sigma_i: \cN_{H_i} \to H_i$. \end{theorem} We also gather the following consequences of the above discussion separately. The first one, in particular, states that Proposition \ref{prop:equivalence-intro} still holds for the Liouville $\sigma$-sectors with corners. \begin{theorem}\label{thm:equivalence} \begin{enumerate} \item Each Liouville $\sigma$-sector with corners is a Liouville sector in the sense of Definition \ref{defn:sectorial-collection}.
\item The leaf space $\cN_{C_\delta}$ carries a natural structure of manifold with corners at each sectorial corner $\delta$ such that the map $\pi_{C_\delta}: C_\delta \to \cN_{C_\delta}$ is a morphism of manifolds with corners. \end{enumerate} \end{theorem} \begin{proof} We have already constructed a diffeomorphism $$ \Psi_\delta^\sigma : \partial M|_{C_\delta} \to F_\delta^\sigma \times {\mathcal R}^m $$ given by $$ \Psi_\delta^\sigma (x) = (\pi_{F_\delta^\sigma}(x), I_1^\sigma(x), \ldots, I_m^\sigma(x)). $$ Each $I_i^\sigma$ defined on $\partial M$ is extended to the function $\widetilde I_i^\sigma \circ \Phi_{C_\delta}^\sigma$ on a symplectic neighborhood $U_\delta: = \nbhd^Z(C_\delta) \subset M$ via Gotay's coisotropic neighborhood map $$ \Phi_{C_\delta}^\sigma: \nbhd(C_\delta) \hookrightarrow T^*\cF_{C_\delta} $$ where the function $\widetilde I_i^\sigma$ is canonically defined on a neighborhood $$ V \subset E^* =T^*\cF_{C_\delta}. $$ This diffeomorphism $\Phi_{C_\delta}^\sigma$ onto $V_\delta \subset T^*\cF$ also induces a splitting of the tangent bundle $TC_\delta$ $$ \Gamma_{C_\delta}^\sigma: \quad TC_\delta = G_\delta^\sigma \oplus T\cF_{C_\delta} = G_\delta^\sigma \oplus \cD_{C_\delta} $$ such that $G_\delta^\sigma$ is a transverse symplectic subbundle of $TC_\delta$ given by $$ G_\delta^\sigma|_x: = (d\Psi_\delta^\sigma|_x)^{-1}\left(T_{\pi_{F_\delta^\sigma}(x)}F_\delta^\sigma \oplus \{0\}\right) $$ at each $x \in C_\delta$. Theorem \ref{thm:splitting-data-corners} then finishes the construction of the data laid out in Definition \ref{defn:sectorial-collection}. For the proof of Statement (2), we start with the observation that for each $H = H_i$ the leaf space $\cN_H$, with its canonical smooth structure, carries the natural structure of a manifold with boundary and corners through a choice of smooth section made in Choice \ref{choice:sigma}, whose existence relies on the defining hypothesis of $\sigma$-sectorial hypersurfaces that the projection map $\pi_H: H \to \cN_H$ admits a continuous section. For each choice of smooth section, by the same construction as in Subsection \ref{subsec:liouville-structure-leaf-space}, we have a symplectic structure $(\cN_H,\omega_{\cN_H})$ and a smooth map $\sigma_\infty^+: \cN_H \to \partial_\infty M$ which is a symplectic diffeomorphism onto the convex hypersurface $(F_\infty^+,\omega^+)$ of the contact manifold $(\partial_\infty M,\xi)$. For two different choices of splittings, the resulting structures are diffeomorphic. Finally it remains to verify that $\cN_C$ carries the structure of a Liouville manifold with corners. But this immediately follows from the compatibility result, Proposition \ref{prop:sectorial-diagram}: The moment map $\phi_{G,\delta}^\sigma: \nbhd^Z(C_\delta) \to {\mathcal R}^m_+$ provides a local description of the codimension-$m$ corner of $\cN_{C_\delta}$. This finishes the proof. \end{proof} \section{Solution to \cite[Question 2.6]{gps} and convexity at infinity} \label{sec:GPS-question} As an application of our arguments used to derive the canonical splitting data, we can now provide the affirmative answer to a question raised by Ganatra-Pardon-Shende in \cite{gps}. \begin{theorem}[Question 2.6 \cite{gps}]\label{thm:GPS-question} Suppose $(M,\lambda)$ is a Liouville manifold-with-boundary that satisfies the following: \begin{enumerate} \item Its Liouville vector field $Z$ is tangent to $\partial M$ at infinity.
\item There is a diffeomorphism $\partial M \cong F \times {\mathcal R}$ sending the characteristic foliation to the foliation by the leaves $\{p\} \times {\mathcal R}$. \end{enumerate} Then $M$ is a Liouville $\sigma$-sector. \end{theorem} The proof will be divided into two parts: we first examine the presymplectic geometry component of the proof, and then combine the discussion with that of the Liouville geometry. In the meantime, the following is an immediate corollary of Theorem \ref{thm:GPS-question}. \begin{cor}\label{cor:convexity} In the presence of the other conditions, the convexity condition (b) in Definition \ref{defn:liouville-sector-intro} is equivalent to the existence of a diffeomorphism $$ \partial M \cap \nbhd(\partial_\infty M) \cong F_0 \times [N,\infty) $$ sending the characteristic foliation to the foliation by leaves $\{p\} \times [N,\infty)$ for a sufficiently large $N> 0$. \end{cor} \subsection{Presymplectic geometry} Denote by $\iota_{\partial M}: \partial M \to M$ the inclusion map. Then the one-form $\lambda_\partial: = \iota_{\partial M}^*\lambda$ induces the structure of a presymplectic manifold $$ (\partial M, d\lambda_\partial). $$ By definition, $\cD_{\partial M} = \ker d\lambda_\partial$. Denote by $\Psi: \partial M \to F \times {\mathbb R}$ the diffeomorphism entering in the hypothesis. Then the hypothesis implies that we have a commutative diagram \eqn\label{eq:hypothesis-diagram} \xymatrix{\partial M \ar[d]^{\pi_{\partial M}} \ar[r]^\Psi & F \times {\mathbb R} \ar[d]^{\pi_1} \\ \cN_{\partial M} \ar[r]^{\psi} & F } \eqnd where $\psi:= [\Psi]: \cN_{\partial M} \to F$ is the obvious quotient map, which becomes a diffeomorphism. Obviously the map $\sigma: \cN_{\partial M} \to \partial M$ defined by \eqn\label{eq:section-sigma} \sigma(\ell): = \Psi^{-1}(\psi(\ell),0) \eqnd defines a continuous section of $\pi_{\partial M}: \partial M \to \cN_{\partial M}$, one of the defining data of Liouville $\sigma$-sectors. Next, by Condition (1), we have $$ \partial_\infty M \cap \partial M = \partial_\infty(\partial M). $$ Therefore it remains to show convexity of $\partial_\infty M \cap \partial M$ in $\partial_\infty M$, i.e., that there exists a contact vector field defined on $\nbhd(\partial_\infty M \cap \partial M) \subset \partial_\infty M$ that is transversal to the hypersurface $$ F_\infty: = \partial_\infty M \cap \partial M. $$ We denote by $$ \omega_{\cN_{\partial M}} $$ the reduced symplectic form on $\cN_{\partial M}$ of the presymplectic form $d\lambda_\partial$. Next we prove \begin{lemma}\label{lem:X} Suppose that $Z$ is tangent to $\partial M$ outside a compact subset $K \subset M$. Consider the pull-back $\lambda_\partial:= \iota_{\partial M}^*\lambda$, whose differential $d\lambda_\partial$ is a presymplectic form on $\partial M$. Let $X$ be a vector field tangent to $\ker d\lambda_\partial = \cD_{\partial M}$. Then we have $$ \cL_X \lambda_\partial = 0 $$ on $\partial M \cap (M \setminus K)$. \end{lemma} \begin{proof} Let $Z$ be tangent to $\partial M\cap (M \setminus K)$ for a compact set $K \subset M$. Since $X$ spans the characteristic distribution of $(\partial M,\lambda_\partial)$, we have $$ X \rfloor d\lambda_\partial = 0 $$ on $\partial M$. On the other hand, since $Z$ is tangent to $\partial M \cap (M \setminus K)$ and $X \in \ker \omega_\partial$ with $\omega_\partial = d\lambda_\partial$, we also have $$ 0 = d\lambda_\partial(Z,X) = \lambda_\partial(X) $$ where the second equality follows by the definition of the Liouville vector field $Z$.
Therefore on $\partial M \cap (M \setminus K)$, we compute $$ \cL_X \lambda_\partial = (d(X \rfloor \lambda) + X \rfloor d\lambda)|_{\partial M} = 0 $$ which finishes the proof. \end{proof} We push forward the presymplectic structure on $\partial M$ to $F \times {\mathbb R}$ by $\Psi$ and denote the resulting one-form by $$ \lambda^{\text{\rm pre}}: = \Psi_*(\lambda_\partial) $$ on $F \times {\mathcal R}$. \begin{lemma} $\cL_{\frac{\partial}{\partial t}} \lambda^{\text{\rm pre}} = 0$ on $F \times [N,\infty)$ and so $$ \lambda^{\text{\rm pre}} = \pi_F^*\lambda_F $$ thereon for some one-form $\lambda_F$ on $F$, where $\pi_F: F \times {\mathcal R} \to F$ is the projection. \end{lemma} \begin{proof} Let $X$ be the pull-back vector field defined by $X = \Psi^*(\frac{\partial}{\partial t})$ on $\partial M$. Suppose that $Z$ is tangent to $\partial M$ on $(M \setminus K) \cap \partial M$ for a compact subset $K \subset M$. Since $K$ is compact, there exists a sufficiently large $N > 0$ such that $$ \Psi(K) \subset F \times [-N,N]. $$ In particular, we have $F \times [N,\infty) \subset \Psi(\partial M \setminus K)$ for a sufficiently large $N \in {\mathcal R}$. Then Lemma \ref{lem:X} applied to $H = \partial M$ implies $$ \cL_{\frac{\partial}{\partial t}} \lambda^{\text{\rm pre}} = 0 $$ on $F \times [N,\infty)$, i.e., it is $\frac{\partial}{\partial t}$-invariant, and hence there exists a one-form $\lambda_F$ on $F$ such that $\pi_F^*\lambda_F = \lambda^{\text{\rm pre}}$ thereon. This finishes the proof. \end{proof} We denote by $$ (Y, d\lambda^{\text{\rm pre}}), \quad Y: = F \times {\mathcal R} $$ the resulting presymplectic manifold $(F \times {\mathcal R}, d\lambda^{\text{\rm pre}})$. Using Condition (1) of Theorem \ref{thm:GPS-question}, we can choose a smooth section defined as in Lemma \ref{lem:independent-of-N}. From now on, we fix $\sigma = \sigma_N^+$ to be such a smooth section, which also induces the natural Liouville form $\lambda_{\cN_{\partial M}}$ from $\lambda_\partial$ as constructed in Lemma \ref{lem:independent-of-N}, which satisfies \eqn\label{eq:lambda=lambdaF+df} \lambda_\partial|_{\{I \geq N\}} = \pi_{\partial M}^*\lambda_{\cN_{\partial M}} \eqnd by choosing $N$ so large that the form $\lambda_\partial$ on $\{I \geq N\}$ is projectable, where $I := t \circ \Psi$ for the coordinate function $t$ of the ${\mathcal R}$-factor. (A similar discussion also applies on $\{I \leq -N\}$ with $N > 0$.) Then we have $$ \psi_*(\omega_{\cN_{\partial M}}) = d\lambda_F: = \omega_F $$ which is also the reduced symplectic form of $d\lambda^{\text{\rm pre}}$ on $F \times {\mathcal R}$ relative to the choice of section \eqn\label{eq:section-sigmaF} \sigma_F := \Psi \circ \sigma \circ \psi^{-1} \eqnd of the trivial fibration $F \times {\mathbb R} \to F$. By applying Theorem \ref{thm:splitting-data-corners} for $m=1$, we can extend the presymplectic map $\Psi: \partial M \to F \times {\mathcal R}$ to a symplectic thickening $$ \widetilde \Psi: \nbhd(\partial M) \to F \times {\mathbb C}. $$ We equip $V=\nbhd(F \times {\mathcal R}) \subset F \times {\mathbb C}$ with the pushforward symplectic form \eqn\label{eq:omegaV} \omega_V = \widetilde \Psi_*(d\lambda). \eqnd We have the canonical isomorphism $$ T^*\cF \cong Y \times {\mathcal R} = F \times {\mathcal R} \times {\mathcal R} \cong F \times {\mathbb C} $$ and $Y$ is contained in $V \subset T^*\cF$ as the zero section of $T^*\cF$. We denote by $$ \text{\rm pr}: V \subset T^*\cF \to Y $$ the projection.
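Before stating the next lemma, it may help to make this identification explicit. The following is one natural choice of trivialization (matching the coordinates $s + \sqrt{-1}\,t$ of ${\mathbb C}$ used in the lemma below): since each leaf $\{p\} \times {\mathcal R}$ of $\cF$ has its leafwise cotangent fibers trivialized by $dt$, where $t$ denotes the coordinate of the ${\mathcal R}$-factor of $Y = F \times {\mathcal R}$, we may set $$ T^*\cF \cong (F \times {\mathcal R}) \times {\mathcal R} \cong F \times {\mathbb C}, \qquad (p,\, t,\, c\, dt) \longmapsto \left(p,\; c + \sqrt{-1}\, t\right). $$ Under this identification the zero section $\{c = 0\}$ of $T^*\cF$ corresponds to $F \times \sqrt{-1}\,{\mathcal R} = \{s = 0\}$, i.e., to $Y$ itself, consistent with the preceding sentence.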
\begin{lemma} Let $s + \sqrt{-1} t$ be the standard coordinates of ${\mathbb C}$. We define $$ R = s \circ \pi_{\mathbb C}, \quad I = t \circ \pi_{\mathbb C} $$ on $F \times {\mathbb C}$. Then there is a neighborhood $V' \subset V$ and a diffeomorphism $$ \Upsilon : V' \cap \{I > N'\} \to V $$ onto its image that preserves $(Y,d\lambda^{\text{\rm pre}})$ and such that \eqn\label{eq:omegaV1} \omega_{V'}: = \Upsilon^*\omega_V = \pi_F^*\omega_F + dR \wedge dI \eqnd on $\{I > N'\} \cap V'$ for some sufficiently large $N' > 0$. \end{lemma} \begin{proof} Recall from \eqref{eq:lambda=lambdaF+df} that $$ \lambda^{\text{\rm pre}} = \pi_F^*\lambda_F $$ on $\{I \geq N\}$ for a sufficiently large $N> 0$. Then the inclusion $$ (F\times [N,\infty), d\lambda^{\text{\rm pre}}) \hookrightarrow (F \times {\mathbb C}, \pi_F^*\omega_F + dR \wedge dI) $$ is a coisotropic embedding and the lemma follows by the uniqueness part of Gotay's coisotropic embedding theorem. \end{proof} Based on this lemma, we may and will assume \eqn\label{eq:tildePsidlambda} \widetilde \Psi_*d\lambda = \omega_V = \pi_F^*\omega_F + dR \wedge dI, \quad \omega_F = \psi_*\omega_{\cN_{\partial M}} \eqnd on $\nbhd(F \times {\mathcal R}) \subset F \times {\mathbb C}_{\text{\rm Re} \geq 0}$. From now on, we will work with the trivial fibration $H := F \times {\mathbb R}$ as a hypersurface in the Liouville manifold $(V,\omega_V)$ with $V \subset F \times {\mathbb C} \cong T^*\cF$. \subsection{Liouville geometry} Now we take the Liouville form of $(V,\omega_V)$ given by $$ \lambda_V = \widetilde \Psi_*\lambda $$ and let $Z_V = \widetilde \Psi_*Z$ be its associated Liouville vector field. By definition, $Z_V$ satisfies \eqn\label{eq:Z-defining1} Z_V \rfloor d\lambda_V = \lambda_V \eqnd where we have \eqn\label{eq:dlambdaV} d\lambda_V = \pi_F^*\omega_F + dR \wedge dI \eqnd from \eqref{eq:tildePsidlambda}. By decomposing the Liouville vector field $Z_V$ into \eqn\label{eq:ZV} Z_V = X_F + a \frac{\partial}{\partial R} + b \frac{\partial}{\partial I} \eqnd in terms of the splitting $TV = TF \oplus T{\mathbb C}$, we compute $$ Z_V \rfloor d\lambda_V = X_F \rfloor \pi_F^*\omega_F + a\, dI - b\, dR $$ for some coefficient functions $a = a(y,R,I), \, b = b(y,R,I)$ for $(y,R,I) \in F \times {\mathbb C}$. Then \eqref{eq:Z-defining1} becomes \eqn\label{eq:Z-defining2} X_F \rfloor \pi_F^*\omega_F + a\, dI - b\, dR = \lambda_V. \eqnd \begin{prop} Regard $H = \{R = 0\}$ as a hypersurface of $V$. Let $b$ be the coefficient function appearing in \eqref{eq:ZV}. Then we have $b \neq 0$ on $V' \cap \{|I| > C'\}$ for a possibly smaller neighborhood $V' \subset V$ of $H$ and a sufficiently large constant $C' > 0$. \end{prop} \begin{proof} We denote by $\iota_H : H \to V$ the inclusion map. We first recall from Lemma \ref{lem:X} that $X = \Psi^*\frac{\partial}{\partial I} \in \ker d\lambda_\partial$. Therefore we have $$ \frac{\partial}{\partial I} \rfloor \Psi_*d\lambda_\partial = 0 $$ on $F \times {\mathcal R}$, since $\Psi_*\lambda_\partial = \lambda^{\text{\rm pre}} = \iota_H^*\lambda_V$. Since $\omega_F = d\lambda_F$, \eqref{eq:dlambdaV} implies $$ d\lambda_V = \pi_F^*d\lambda_F + dR \wedge dI $$ which in turn implies $$ d(\lambda_V - \pi_F^*\lambda_F - I dR) = 0. $$ Since the choice of $\sigma$ made in Lemma \ref{lem:cNH-exact} implies $$ \pi_F^*\lambda_F = \Psi_*\lambda_\partial = \iota_H^*\lambda_V, $$ we have $\iota_H^*(\lambda_V - \pi_F^*\lambda_F - IdR) = 0$.
In particular the form $\lambda_V - \pi_F^*\lambda_F - I dR$ is exact on any neighborhood $V$ of $\{R = 0\}$ that deformation retracts onto $\{R=0\}$. Therefore we have $$ \lambda_V - \pi_F^*\lambda_F - I dR = dh_V $$ on such a neighborhood $V$ for some function $h_V: V \to {\mathcal R}$, i.e., \eqn\label{eq:lambdaV} \lambda_V = \pi_F^*\lambda_F + I dR + dh_V. \eqnd Since $\ker \iota_H^*(d\lambda_V) = \operatorname{span} \{\frac{\partial}{\partial I}\}$, we have $$ \frac{\partial}{\partial I} \rfloor d\lambda_V = 0 $$ on $H$. Then since $Z$ is tangent to $H$ at infinity, we have $$ \lambda_V\left(\frac{\partial}{\partial I}\right) = d\lambda_V\left(Z_V,\frac{\partial}{\partial I}\right) = 0 $$ on $H$. Obviously we have $(\pi_F^*\lambda_F + I dR)(\frac{\partial}{\partial I}) = 0$. Therefore we have derived $$ \frac{\partial h_V}{\partial I}\Big|_{R=0} = 0 $$ by evaluating \eqref{eq:lambdaV} against $\frac{\partial}{\partial I}$. Therefore $h_V|_{\{R=0\}}$ does not depend on $I$. In particular, we have $$ \|h_V|_{\{R=0\}}\|_{C^1} \leq C $$ for some constant $C > 0$. In turn, since $h_V$ is a smooth function, we have $$ \|h_V\|_{C^1} \leq C' $$ on a sufficiently small neighborhood $V' \subset V$ of $H = \{R = 0\}$ by precompactness of $V'/{\mathcal R}$, for some constant $C'$, choosing $C'$ larger if necessary. In particular we have \eqn\label{eq:dhdI} \left\|\frac{\partial h_V}{\partial R}\right\|_{C^0;V'} \leq C'. \eqnd Substituting \eqref{eq:lambdaV} into \eqref{eq:Z-defining2}, we obtain the equation $$ X_F \rfloor \pi_F^*\omega_F + a\, dI - b\, dR = \pi_F^*\lambda_F + I dR + dh_V. $$ By evaluating this equation against $\frac{\partial}{\partial R}$, we obtain $$ b = -\left(I + \frac{\partial h_V}{\partial R}\right) $$ on $V'$. Therefore, by \eqref{eq:dhdI}, we have $b(y,R,I) \neq 0$ for all $(y,R,I) \in V'$ with $|I| > C'$. This finishes the proof. \end{proof} From \eqref{eq:ZV}, we have also derived $$ Z_V[I] = b(y,R,I). $$ In particular $Z_V[I] \neq 0$ on $$ I^{-1}(N) \cap V' $$ for any $N > C'$, and hence any such level set is a contact-type hypersurface in $V'$. \subsection{Combining the two} The following lemma then will finish the proof of Theorem \ref{thm:GPS-question}. \begin{lemma} The Hamiltonian vector field $X_I$ induces a contact vector field on the contact-type hypersurface $I^{-1}(N)$ that is transversal to $F_N := I^{-1}(N) \cap \partial M$ for all $N > C'$. \end{lemma} \begin{proof} By the expression of the symplectic form in \eqref{eq:omegaV1} we have $$ \frac{\partial}{\partial R} = X_I, \quad \frac{\partial}{\partial I} = - X_R. $$ This implies $$ \lambda_V(X_I) = d\lambda_V(Z_V, X_I) = (dR\wedge dI)\left(b\,\frac{\partial}{\partial I}, \frac{\partial}{\partial R}\right) = -b \neq 0 $$ on $I^{-1}(N)$. Since $X_I$ is tangent to the level set $I^{-1}(N)$, $X_I$ defines a contact vector field thereon which is transversal to the contact distribution of $I^{-1}(N)$ induced by the contact form $\theta_N: = \iota_N^*\lambda_V$. The same discussion applies to $F_\infty^-$, which finishes the proof. \end{proof} This finishes the proof of Theorem \ref{thm:GPS-question}. \section{Structure of Liouville $\sigma$-sectors and their automorphism groups} \label{sec:automorphism-group} Our definition of Liouville $\sigma$-sectors with corners enables us to give a natural notion of Liouville automorphisms which is the same as in the case without boundary and which does not depend on the choice of splitting data.
We first recall the following well-known definition of automorphisms of a Liouville manifold (without boundary). \begin{defn}\label{defn:geometric-structure} Let $(M,\lambda)$ be a Liouville manifold without boundary. We call a diffeomorphism $\phi: M \to M$ a Liouville automorphism if $\phi$ satisfies $$ \phi^*\lambda = \lambda + df $$ for a compactly supported function $f: M \to {\mathcal R}$. We denote by $\aut(M)$ the set of automorphisms of $(M,\lambda)$. \end{defn} It is easy to check that $\aut(M)$ forms a topological group. Now we would like to extend this definition of automorphisms to the case of Liouville $\sigma$-sectors. For this purpose, we need some preparations by examining the universal geometric structures inherent on the boundary $\partial M$ of a Liouville manifold with boundary and corners. \subsection{Some presymplectic geometry of $\partial M$} We start with the observation that $(\partial M, \omega_{\partial M})$ carries the structure of a \emph{presymplectic manifold}, as does any coisotropic submanifold, as mentioned before. We first introduce automorphisms of presymplectic manifolds $(Y,\omega)$ in a general context. \begin{defn}\label{defn:presymplectic-morphism} Let $(Y,\omega)$ and $(Y^\prime, \omega^\prime)$ be two presymplectic manifolds. A diffeomorphism $\phi: Y\to Y'$ is called {\it presymplectic} if $\phi^*\omega^\prime = \omega$. We denote by ${\cP}Symp(Y,\omega)$ the set of presymplectic diffeomorphisms. \end{defn} (We refer to \cite{oh-park} for some detailed discussion on the geometry of presymplectic manifolds and their automorphisms, and their application to the deformation problem of coisotropic submanifolds.) Then we note that any diffeomorphism $\phi: (M,\partial M) \to (M,\partial M)$ satisfying \eqn\label{eq:liouville-diffeomorphism} \phi^*\lambda = \lambda + df \eqnd for some function $f$, \emph{not necessarily compactly supported}, induces a presymplectic diffeomorphism $$ \phi_\partial: = \phi|_{\partial M} $$ on $\partial M$ equipped with the presymplectic form $$ \omega_{\partial} := d\lambda_\partial, \quad \lambda_\partial := \iota^*\lambda $$ for the inclusion map $\iota: \partial M \to M$. \begin{lemma}\label{lem:phi-preserving-cD} The presymplectic diffeomorphism $\phi_\partial: \partial M \to \partial M$ preserves the characteristic foliation of $\partial M$. \end{lemma} \begin{proof} We have $$ \cD_{\partial M} = \ker \omega_\partial. $$ Since any Liouville automorphism $\phi$ of $(M, \partial M)$ satisfies \eqref{eq:liouville-diffeomorphism}, we have $$ \phi_\partial^*\omega_\partial = \omega_\partial. $$ Therefore we have $$ (\phi_\partial)_*(\cD_{\partial M}) = \cD_{\partial M} $$ which finishes the proof. \end{proof} In fact, for the current case of our interest $Y = \partial M$, the presymplectic form $\omega_\partial$ is exact in that $$ \omega_\partial = d\lambda_\partial, \quad \lambda_\partial: = \iota^*\lambda. $$ Furthermore \eqref{eq:liouville-diffeomorphism} implies that $\phi$ actually restricts to an exact presymplectic diffeomorphism $$ \phi_\partial: (\partial M, \omega_\partial) \to (\partial M,\omega_\partial) $$ on $\partial M$ in that $$ \phi_\partial^*\lambda_\partial -\lambda_\partial = d h, \quad h = f\circ \iota $$ where the function $h: \partial M \to {\mathcal R}$ is not necessarily compactly supported. We have a natural restriction map \eqn\label{eq:restriction-phi} \aut(M,\lambda) \to {\cP} \text{\rm Symp}(\partial M, \omega_\partial); \quad \phi \mapsto \phi_\partial.
\eqnd \begin{defn}[Pre-Liouville automorphism group $\aut(\partial M,\lambda_\partial)$] We call a diffeomorphism $\phi: (\partial M,\lambda_\partial) \to (\partial M,\lambda_\partial)$ a \emph{pre-Liouville diffeomorphism} if the form $\phi^*\lambda_\partial -\lambda_\partial$ is exact. We say $\phi$ is a \emph{pre-Liouville automorphism} if it satisfies $$ \phi^*\lambda_\partial= \lambda_\partial + dh $$ for a compactly supported function $h: \partial M \to {\mathcal R}$. We denote by $\aut(\partial M,\lambda_\partial)$ the set of pre-Liouville automorphisms of $(\partial M,\lambda_\partial)$. \end{defn} The following is an immediate consequence of the definition. \begin{cor}\label{lem:phi-del} The restriction map \eqref{eq:restriction-phi} induces a canonical group homomorphism $$ \aut(M,\lambda) \to \aut(\partial M,\lambda_\partial). $$ \end{cor} We recall that $\partial M$ carries a canonical transverse symplectic structure arising from the presymplectic form $d\lambda_\partial$. (See \cite[Section 4]{oh-park}.) \begin{prop}\label{prop:phi-intertwine} The induced pre-Liouville automorphism $\phi_\partial:= \phi|_{\partial M} : \partial M \to \partial M$ descends to a (stratawise) symplectic diffeomorphism $$ \phi_{\cN_{\partial M}}: \cN_{\partial M} \to \cN_{\partial M} $$ and satisfies $$ \pi_{\partial M} \circ \phi_\partial = \phi_{\cN_{\partial M}} \circ \pi_{\partial M} $$ when we regard both $\partial M$ and $\cN_{\partial M}$ as manifolds with corners. \end{prop} \subsection{Automorphism group of Liouville $\sigma$-sectors} Now we are ready to give the geometric structure of \emph{Liouville $\sigma$-sectors}. \begin{defn}[Structure of Liouville $\sigma$-sectors]\label{defn:structure} We say two Liouville $\sigma$-sectors $(M,\lambda)$ and $(M', \lambda')$ are isomorphic, if there exists a diffeomorphism $\psi: M \to M'$ (as a manifold with corners) such that $\psi^*\lambda' = \lambda + df$ for some compactly supported function $f: M \to {\mathbb R}$. A \emph{structure of Liouville $\sigma$-sectors} is defined to be an isomorphism class of Liouville $\sigma$-sectors. \end{defn} With this definition of the structure of Liouville $\sigma$-sectors at our disposal, the following is an easy consequence of the definition and Proposition \ref{prop:phi-intertwine}. It shows that the definition of an automorphism of a Liouville sector $(M,\lambda)$ takes the same form as in the case of a Liouville manifold, given by the defining equation $$ \psi^*\lambda = \lambda +df $$ for some compactly supported function $f: M \to {\mathbb R}$, except that $\psi$ is a self-diffeomorphism of $M$ as a stratified manifold and that the above equality is understood in the sense of Remark \ref{rem:stratawise-presymplectic}. \begin{theorem}[Automorphism group]\label{thm:automorphism-group} Let $(M,\lambda)$ be a Liouville $\sigma$-sector. Suppose a diffeomorphism $\psi: M \to M$ satisfies \eqn\label{eq:defining-psi} \psi^*\lambda = \lambda + df \eqnd for some compactly supported function $f: M \to {\mathbb R}$. Then $\psi$ is an automorphism of the \emph{structure of Liouville $\sigma$-sectors}. \end{theorem} \begin{proof} We first discuss how the action of a diffeomorphism $\psi$ satisfying $$ \psi^*\lambda = \lambda + df $$ affects the structure of Liouville $\sigma$-sectors, when the function $f$ is compactly supported. In particular we have \begin{itemize} \item $\psi^*d\lambda = d\lambda$, \item $\psi^*\lambda = \lambda$ at infinity.
\end{itemize} Then $\psi$ restricts to a presymplectic diffeomorphism $\psi_\partial: \partial M \to \partial M$ which is also pre-Liouville, i.e., satisfies $$ (\psi|_{\partial M})^*\lambda_\partial = \lambda_\partial + dh $$ for a \emph{compactly supported} function $h$ on $\partial M$. We need to show that the \emph{structure} of Liouville $\sigma$-sectors with respect to $$ (M,\psi^*\lambda) = (M,\lambda + df) $$ is isomorphic to that of $(M,\lambda)$. For this, we make a choice of $\sigma = \{\sigma_1, \cdots, \sigma_m\}$ associated to a clean coisotropic collection $\{H_1, \dots, H_m\}$ for each sectorial corner $\delta$ of $M$ with $$ C_\delta = H_1 \cap \cdots \cap H_m. $$ Such a collection exists by definition for $(M,\lambda)$ being a Liouville $\sigma$-sector. Now we consider the pushforward collection of hypersurfaces $$ \{H_1', \cdots, H_m'\} := \{\psi(H_1), \ldots, \psi(H_m)\}. $$ Since smooth diffeomorphisms between two manifolds with corners preserve strata dimensions by definition, we work with the defining data of $(M, \psi^*\lambda)$ stratawise, on strata of fixed dimension. We first need to show that each $H_i'$ is a $\sigma$-sectorial hypersurface by finding a collection $$ \sigma' = \{\sigma_1',\ldots, \sigma_m'\} $$ where each $\sigma_i'$ is a section of the projection $H_i' \to \cN_{H_i'}$. For this purpose, we prove the following \begin{lemma} Choose the sections $\sigma_i$'s so that $$ \operatorname{Image} \sigma_i \subset M \setminus \operatorname{supp} df. $$ Then there exists a neighborhood $\nbhd(\partial_\infty M)$ such that the following hold: \begin{enumerate} \item The map $\psi: \nbhd(\partial_\infty M) \cap H_i \to \psi(H_i)$ descends to a diffeomorphism $[\psi]: \cN_{H_i} \to \cN_{H_i}$. \item The map $\sigma_i^\psi: \cN_{H_i} \to \psi(H_i)$ defined by $$ \sigma_i^\psi: = \psi \circ \sigma_i \circ [\psi]^{-1} $$ is a section of the projection $\psi(H_i) \to \cN_{\psi(H_i)} = \cN_{H_i}$. \end{enumerate} \end{lemma} \begin{proof} Since $\operatorname{Image} \sigma_i \subset M \setminus \operatorname{supp} df$, we have $$ \psi^*\lambda = \lambda $$ on $\operatorname{Image} \sigma_i:=F_i$. In particular, the projection $\pi_{H_i}:H_i \to \cN_{H_i}$ restricts to a bijective map on $F_i$. Furthermore since $\psi^*\lambda = \lambda$ on $\nbhd(\partial_\infty M)$, the associated Liouville vector field $Z_\lambda$ of $\lambda$ satisfies $$ \psi_*Z_\lambda = Z_\lambda $$ thereon. Recall that $\psi$ restricts to a diffeomorphism on $\partial M$ (as a map of manifolds with corners). Then the equality $\psi^*\lambda = \lambda$ implies $\psi_\partial^*d\lambda_\partial = d\lambda_\partial$ and hence $$ d\psi_\partial (\ker d\lambda_\partial)= \ker d\lambda_\partial $$ on $\nbhd(\partial M) \cap H_i$. Therefore $\psi$ descends to a diffeomorphism $[\psi]: \cN_{H_i} \to \cN_{H_i}$ so that we have the commutative diagram $$ \xymatrix{H_i \ar[d]^{\pi_{H_i}} \ar[r]^{\psi} & \psi(H_i) \ar[d]^{\pi_{\psi(H_i)}}\\ \cN_{H_i} \ar[r]^{[\psi]} & \cN_{H_i}. } $$ By composing $\sigma_i' := \psi \circ \sigma_i$ with $ \pi_{\psi(H_i)}$ on the left, we obtain $$ \pi_{\psi(H_i)} \circ \sigma_i' = \pi_{\psi(H_i)} \circ \psi \circ \sigma_i = [\psi]\circ \pi_{H_i} \circ \sigma_i = [\psi] $$ which is a diffeomorphism. Therefore the map $$ \sigma_i^\psi : = \sigma_i' \circ [\psi]^{-1} = \psi \circ \sigma_i \circ [\psi]^{-1} $$ is a section of the projection $H_i' \to \cN_{H_i'}$. This finishes the proof. \end{proof} Clearly any diffeomorphism preserves the clean intersection property.
This proves that any diffeomorphism $\psi$ satisfying $\psi^*\lambda = \lambda + df$ with compactly supported $f$ is an automorphism of the \emph{structure of Liouville $\sigma$-sectors}. (See Definitions \ref{defn:sectorial-collection} and \ref{defn:structure}.) This finishes the proof of the theorem. \end{proof} Based on this discussion, we will unambiguously denote by $\aut(M)$ the automorphism group of the Liouville $\sigma$-sector $(M,\lambda)$, as in the case of Liouville manifolds. \begin{remark} \begin{enumerate} \item The above proof shows that the group $\aut(M,\lambda)$ is manifestly the automorphism group of the structure of Liouville $\sigma$-sectors. We alert the readers that this is not manifest in the original definition of Liouville sectors from \cite{gps}, \cite{gps-2}. \item This simple characterization of the automorphism groups of Liouville $\sigma$-sectors with corners enables one to define the bundle of Liouville sectors with corners in the same way as for the case of Liouville manifolds (with boundary) \emph{without corners}. See \cite{oh-tanaka-liouville-bundles} for the usage of such bundles in the construction of continuous actions of Lie groups on the wrapped Fukaya category of Liouville sectors (with corners). \item Recall that the Liouville structure $\lambda$ on $M$ induces a natural contact structure on its ideal boundary $\partial_\infty M$. We denote the associated contact structure by $\xi_\infty$. Then we have another natural map $$ \operatorname{Aut}(M,\lambda) \to \operatorname{Cont}(\partial_\infty M,\xi_\infty) $$ where $\operatorname{Cont}(\partial_\infty M,\xi_\infty)$ is the group of \emph{contactomorphisms} of the contact manifold $ (\partial_\infty M,\xi_\infty)$. (See \cite{giroux}, \cite{oh-tanaka-smooth-approximation} for the details.) \item The different geometric nature of $(\partial_\infty M, \xi_\infty)$ and $(\partial M, \lambda_\partial)$ is partially responsible for the difficulty of the construction of a pseudoconvex pair $(\psi, J)$ in a neighborhood $$ \nbhd(\partial_\infty M \cup \partial M) $$ such that the almost complex structure $J$ is amenable to the (strong) maximum principle for the (perturbed) pseudoholomorphic maps into the Liouville sectors, as manifested in \cite{oh:sectorial}. \end{enumerate} \end{remark} \section{Monoid of Liouville $\sigma$-sectors and smoothing profiles} \label{sec:monoid} As mentioned before, the enlarged set of manifolds with corners forms a monoid under the product. We now show that this monoidal structure restricts to the set of \emph{Liouville $\sigma$-sectors with corners} by defining the product as a Liouville $\sigma$-sector with corners. In this section, when we say `sectorial', it always means $\sigma$-sectorial; we will not mention this again throughout the section, unless necessary. \subsection{Monoid of manifolds with corners} We start with some discussion on the boundary of manifolds with corners. We will regard the boundaries themselves as corners of codimension one, not separating them from other corners, and call manifolds with boundary and corners just \emph{manifolds with corners} for simplicity of naming. For a noncompact manifold with corners $M$, we have the standard definition of the \emph{ideal boundary}, also called the asymptotic boundary.
In the present paper, to facilitate our exposition on the ideal boundary of the product of Liouville sectors later, we reserve the name `ideal boundary' for the ideal boundary of Liouville manifolds (with corners) and denote it by $$ \partial_\infty^{\text{\rm Liou}} M. $$ We reserve the name `asymptotic boundary' for the ideal boundary in the context of topological spaces and denote it by $$ \partial_\infty M, $$ which is defined as follows. We recall some basic definitions of the end and the asymptotic boundary (also called the ideal boundary) of a noncompact manifold $M$. We adopt the definitions from \cite{richards}, where they are applied to surfaces but apply equally well to general topological spaces. \begin{defn}\label{defn:ends} Let $M$ be a noncompact topological space. An \emph{end} of $M$ is an equivalence class of nested sequences $p=\{P_1\supset P_2 \supset \dots\}$ of connected unbounded regions in $M$ such that \begin{enumerate} \item The boundary of $P_i$ in $M$ is compact for every $i$. \item For any bounded subset $A$ of $M$, $P_i \cap A=\varnothing$ for sufficiently large $i$. \end{enumerate} Two such sequences $p=\{P_1\supset P_2 \supset \dots\}$ and $q=\{Q_1\supset Q_2 \supset \dots\}$ are equivalent if for any $n$ there is a corresponding integer $N$ such that $P_N \subset Q_n$ holds and vice versa. We call an equivalence class $$ [\{P_1\supset P_2 \supset \dots\}] $$ an \emph{end} of $M$. \end{defn} When $M$ is a manifold of finite type, we assume that each end is cylindrical, i.e., we may assume the sequence $P_1 \supset P_2 \supset \cdots$ is topologically stable, that each $P_i$ is diffeomorphic to $\partial P_i \times [0, \infty)$ and that $\partial P_i$ is compact. (In the terminology of \cite{richards}, in two dimensions such an end is called a \emph{planar end}.) So from now on, we assume $M$ is of finite type. Then we define the notion of the \emph{asymptotic boundary} of $M$ as follows. \begin{defn}[Asymptotic boundary]\label{defn:asymptotic-boundary} The \emph{asymptotic boundary} of $M$, denoted by $\partial_\infty M$, is the topological space equipped with the topology induced from $\partial P_j$ for sufficiently large $j$. \end{defn} Now we introduce the notion of the \emph{ideal completion}. \begin{defn}[Ideal completion] Let $M$ be as above. The ideal completion, denoted by $\overline M$, is the coproduct $$ \overline M = M \coprod \partial_\infty M $$ equipped with the finest topology for which both inclusion maps $\partial_\infty M, \, M \to \overline M$ are continuous. \end{defn} Now we restrict ourselves to the case of noncompact manifolds with boundary and corners. It is also useful to consider the definition of the full boundary of the ideal completion in the study of plurisubharmonic exhaustion functions of $\operatorname{Int} M$. \begin{defn}[Full boundary of $\overline M$] We define $DM$ to be the boundary of $\overline M$, which is the union of $\partial M$ and $\partial_\infty M$: $$ DM: = \partial \overline M = \partial M \cup \partial_\infty M. $$ \end{defn} Recall that for two given manifolds with corners $X$ and $Y$, we have $$ \partial(X\times Y) = \partial X \times Y \bigcup X \times \partial Y $$ as a manifold with corners. Then we have the following \begin{lemma}\label{lem:del-infty-end} Let $(X,\lambda_X)$ and $(Y,\lambda_Y)$ be two Liouville sectors (with corners).
Then \begin{eqnarray*} \partial_\infty(X \times Y) & = & (\partial_\infty X \times \overline Y) \cup (\overline X \times \partial_\infty Y),\\ D(X \times Y) & = & (DX \times \overline Y) \cup (\overline X \times DY). \end{eqnarray*} \end{lemma} \begin{proof} The proof is a straightforward calculation and so omitted. \end{proof} \begin{example} Let $(M,\lambda)$ be a Liouville sector and consider the splitting $$ \nbhd(\partial M) \cong F \times {\mathbb C}_{\text{\rm Re} \geq 0} $$ as the product of manifolds with boundary and corners. Then we have $$ \partial_\infty(F \times {\mathbb C}_{\text{\rm Re} \geq 0}) = (\partial_\infty F \times \overline{{\mathbb C}_{\text{\rm Re} \geq 0}}) \cup (\overline F \times \partial_\infty {\mathbb C}_{\text{\rm Re} \geq 0}). $$ We also have $$ D(F \times {\mathbb C}_{\text{\rm Re} \geq 0}) = (DF \times \overline{{\mathbb C}_{\text{\rm Re} \geq 0}}) \cup (\overline F \times D({\mathbb C}_{\text{\rm Re} \geq 0})) $$ which carries a natural structure of manifold with corners. \end{example} We also have the following, which is one of the main reasons why the Floer moduli spaces arising in the wrapped Fukaya category and the symplectic cohomology are monoidal. (See Subsection \ref{subsec:product-branes} below.) \begin{cor} Let $\psi$ be an exhaustion function for a neighborhood $\nbhd(DX)$ of the Liouville sector $X$. The level sets $\psi^{-1}(r)$ smoothly approximate $DX$ as $r \to \infty$ in the obvious sense as a manifold with corners. \end{cor} \subsection{Monoid of Liouville $\sigma$-sectors with corners} In this section, `Liouville sectors' means Liouville sectors with corners; we omit `with corners'. We start with recalling the following standard definition of the ideal boundary of Liouville manifolds $(M,\lambda)$. \begin{defn}[Asymptotic Liouville rays] Let $(M,\lambda)$ be a Liouville manifold and denote by $Z$ its Liouville vector field. We call a Liouville trajectory that escapes to infinity as time flows to $+\infty$ an \emph{asymptotic Liouville ray}. When $\opname{dim} M = 2$, we also call such a ray positive (resp. negative) according to whether it escapes to infinity as time flows to $+\infty$ (resp. $-\infty$), as described in Section \ref{sec:intrinsic}. \end{defn} \begin{defn}[Ideal boundary of Liouville manifolds] The ideal boundary $\partial_\infty^{\text{\rm Liou}}M$ is the set of Liouville equivalence classes of asymptotic Liouville rays of $Z_M$. \end{defn} The following is easy to check, and its proof is omitted. \begin{lemma}\label{prop:del-Liou=del} Let $(M,\lambda)$ be a Liouville sector. Then we have $$ \partial_\infty^{\text{\rm Liou}}M = \partial_\infty M. $$ \end{lemma} We consider the product of Liouville sectors. We now show that the monoidal structure of the category of manifolds with corners restricts to the set of (intrinsic) \emph{Liouville sectors with corners} by defining the product as a Liouville sector with corners. \begin{prop}\label{prop:product-bulk} The product of Liouville $\sigma$-sectors $(X,\lambda_X)$ and $(Y,\lambda_Y)$ $$ (X \times Y, \pi_X^*\lambda_X + \pi_Y^*\lambda_Y) $$ is canonically a Liouville $\sigma$-sector with corners. In particular the set of Liouville $\sigma$-sectors with corners is a submonoid of the monoid of manifolds with boundary and corners under the product.
\end{prop} \begin{proof} First we check that the vector field $$ Z_{X \times Y}: = Z_X \oplus Z_Y $$ satisfies \begin{eqnarray*} Z_{X \times Y} \rfloor d(\pi_X^*\lambda_X + \pi_Y^*\lambda_Y) & = & (Z_X\oplus 0) \rfloor \pi_X^*(d\lambda_X) + (0\oplus Z_Y) \rfloor \pi_Y^*(d\lambda_Y) \\ & = & \pi_X^*(Z_X \rfloor d\lambda_X) + \pi_Y^*(Z_Y \rfloor d\lambda_Y) = \pi_X^*\lambda_X + \pi_Y^*\lambda_Y \end{eqnarray*} on $\operatorname{Int}(X \times Y)$. This shows that $Z_{X \times Y}$ is the Liouville vector field of the Liouville form $\pi_X^*\lambda_X + \pi_Y^*\lambda_Y$ on $\operatorname{Int}(X \times Y)$. Furthermore it is tangent to $$ \partial(X \times Y) = \partial X \times Y \bigcup X \times \partial Y $$ near infinity, since $Z_X$ and $Z_Y$ are tangent to $\partial X$ and $\partial Y$ near infinity respectively. This proves that $Z_{X \times Y}$ is tangent to the boundary $$ \partial (X\times Y) \setminus \partial X \times \partial Y $$ at infinity, where $\partial X \times \partial Y$ is a corner of $X\times Y$ of codimension 2. Therefore a Liouville trajectory $\gamma: {\mathcal R} \to X\times Y$ is given by $$ \gamma(t) = (\gamma_X(t),\gamma_Y(t)) $$ which is complete in the sense that either $\gamma$ is forward complete or exits through $\partial(X \times Y)$ in finite time. Next we examine the $\sigma$-sectorial structure of the product $X \times Y$. Note that the characteristic foliation of $\partial(X \times Y)$ is given by the leaves $C \times \{y\}$ on $\partial X \times Y$ and $\{x\} \times C'$ on $X \times \partial Y$ respectively, where $C$ and $C'$ are leaves of the characteristic foliations of $\partial X$ and $\partial Y$. Obviously they are homeomorphic to ${\mathcal R}$ if $C$ and $C'$ are. Therefore we have the defining continuous section $\sigma_{\text{\rm ref}}: \cN_{\partial(X \times Y)} \to \partial(X \times Y)$ given by $$ \sigma_{\text{\rm ref}}(\ell) = \begin{cases} (\sigma_{\text{\rm ref}}^X(\pi_X(\ell)),\pi_Y(\ell)) \quad \text{on } \partial X \times Y\\ (\pi_X(\ell), \sigma_{\text{\rm ref}}^Y(\pi_Y(\ell))) \quad \text{on } X \times \partial Y \end{cases} $$ where $\sigma_{\text{\rm ref}}^X$ and $\sigma_{\text{\rm ref}}^Y$ are the defining sections of the Liouville $\sigma$-sectors $X, \, Y$ respectively. These prove that the pair $\{H_1,H_2\}$ of $$ H_1: = \partial X \times Y, \quad H_2 : = X \times \partial Y $$ is a sectorial collection such that $$ \partial X \times \partial Y = H_1 \cap H_2. $$ This finishes the proof. \end{proof} \begin{cor} We have \begin{eqnarray*} \partial_\infty^{\text{\rm Liou}}(X \times Y) & = &\partial_\infty^{\text{\rm Liou}} X \times \overline Y \bigcup \overline X \times \partial_\infty^{\text{\rm Liou}} Y,\\ D(X\times Y) & = & DX \times \overline Y \bigcup \overline X \times DY. \end{eqnarray*} \end{cor} \begin{proof} The corollary immediately follows from Lemma \ref{lem:del-infty-end} and Lemma \ref{prop:del-Liou=del}. \end{proof} \begin{remark} One can define the \emph{compact} $\sigma$-sectorial domain with corners $W$ in the same way as one does the Liouville domain, except that we need to add one requirement, the existence of an \emph{outer collaring} of $\partial W$, which reads that the outward pointing Liouville vector fields along the various codimension zero strata of $\partial W$ are compatible: They can be smoothly interpolated to one another along the \emph{outer collar} of the intersections of the codimension zero strata of $\partial W$. Our canonical description, Theorem \ref{thm:splitting-data-corners}, of the $\sigma$-splitting data at the sectorial corners enables us to prove the existence of such an outer collaring.
Since we do not use this in the present paper, we do not elaborate on it further here, postponing its full explanation elsewhere.
\end{remark}
\subsection{Smoothing profiles and their products}
\label{subsec:monoidality-profile}
\subsubsection{Compatible corner smoothing of $[0,\infty)^m$}
\label{subsubsec:smoothcornermodel}
In this subsection, we borrow the construction from \cite[Section 18.5]{fooo:book-virtual} and \cite[Section 3]{oh:sectorial}, which provides a family of local models of corner smoothing of
$$
{\mathcal R}_{\geq 0}^m = [0,\infty)^m
$$
that is compatible with varying $m$ and with the $\mathsf S_m$-symmetry of coordinate swapping. A combination of the constructions given in \cite[Condition 18.21]{fooo:book-virtual} and in \cite[Section 3]{oh:sectorial} provides a family of compatible convex corner smoothing functions as follows.
\newenvironment{symmetric-convex}{
\renewcommand*{\theenumi}{(CV\arabic{enumi})}
\renewcommand*{\labelenumi}{(CV\arabic{enumi})}
\enumerate
}{
\endenumerate
}
\begin{defn}[Symmetric convex smoothing functions]\label{defn:symmetric-convex}
A \emph{symmetric convex smoothing function} on ${\mathcal R}^k$ is a function $\varphi: {\mathcal R}^k \to {\mathcal R}$ satisfying the following:
\begin{symmetric-convex}
\item\label{item. CV symmetric} The restriction $\varphi|_{{\mathcal R}^J}$ is $\mathsf S_{|J|}$-invariant for all subsets $J \subset \underline k = \{1,\ldots,k\}$. Here ${\mathcal R}^J \subset {\mathcal R}^k$ is the obvious copy of ${\mathcal R}^{|J|}$.
\item\label{item. positive definite} $\text{\rm Hess}(\varphi)$ is positive semi-definite everywhere.
\item\label{item. hessian} $\text{\rm Hess}(\varphi|_{{\mathcal R}^J})$ is compactly supported on ${\mathcal R}^J$ for all subsets $J \subset \underline k$ with $|J| \geq 1$.
\end{symmetric-convex}
We denote the set thereof by $\mathfrak{Conv}^{\mathsf S_k}_{\mathfrak{sm}}({\mathcal R}^k)$.
\end{defn}
It is shown in \cite[Section 3]{oh:sectorial} that this set is nonempty; it is also convex and hence contractible.

We now fix, once and for all, a constant $\epsilon_0 > 0$ which will measure the size of $\nbhd(\partial M)$. We will utilize a two-parameter family of such convex smoothing functions, parameterized by $\epsilon > 0$ and $T_0> 0$, as follows.
\begin{defn}[Convex smoothing functions of ${\mathcal R}_{\geq 0}^m$]
Let $\epsilon> 0$ be a sufficiently small constant and $T_0> 0$ a sufficiently large constant such that
$$
\epsilon_0 < 2T_0 \sqrt{\epsilon} < \frac{3}{2}\epsilon_0.
$$
Then we consider a family of functions $\varphi_m^\epsilon: {\mathcal R}_{\geq 0}^m \to {\mathcal R}_{\geq 0}$ satisfying the following:
\begin{enumerate}
\item We have
\eqn\label{eq:linearity-region}
\varphi_m^\epsilon(x_1, \ldots, x_m) = x_i \quad \text{when $x_i \geq 2T_0\sqrt{\epsilon}$, and $0 \leq x_j \leq \frac{\sqrt{\epsilon}}{4}$ for $j \neq i$}
\eqnd
\item $d\varphi_m^\epsilon(x_1, \ldots, x_m) = 0$ if and only if $(x_1, \ldots, x_m) = 0$.
\end{enumerate}
\end{defn}
\subsubsection{Smoothing profiles}
\label{subsec:smoothing-profiles}
Recall from Definition \ref{defn:intrinsic-corners} that $\partial M$ is a union of a collection of cleanly intersecting hypersurfaces $H_1, \ldots, H_m \subset M$ (cylindrical at infinity) near each sectorial corner of $\partial M$ of codimension $m$ for some $m$, with
$$
C = H_1 \cap \cdots \cap H_m.
$$
The $\sigma$-splitting data given in Theorem \ref{thm:splitting-data-corners} is the collection of maps
$$
\widetilde \Psi_C: \nbhd(\partial M) \to F_C \times {\mathcal R}_{\geq 0}^m
$$
associated to the corners, satisfying
$$
(\widetilde \Psi_C)_*(d\lambda) = \omega_{F_C} + \sum_{i=1}^m dR_i \wedge dI_i
$$
with a canonically given symplectic form $\omega_{F_C}$ on $F_C$.
\begin{defn}[Sectorial corner smoothing]\label{defn:sectorial-corner-smoothing}
Let $C$ be a sectorial corner of $M$. Define the function
\eqn\label{eq:skvarphi}
s_{m,\varphi} = -\log \varphi \circ \widetilde \Psi_C^\sigma = -\log \varphi (R_1,\ldots, R_m)
\eqnd
which we call a sectorial corner smoothing function.
\end{defn}
Then we fix a contact-type hypersurface $S_0 \subset M$ and a Liouville embedding $S_0 \times [0,\infty) \hookrightarrow M$ that equips $M$ with a symplectization end. We denote by $s= s_{S_0}$ the associated radial function.
\begin{defn}[End-profile function]\label{defn:end-profile-function}
Consider the function
\eqn\label{eq:log-varphi2}
s_{m_\delta+1,\varphi_\delta} := - \log \varphi_2\left(e^{-s_{m_\delta,\varphi_\delta}},e^{-s}\right)
\eqnd
at each sectorial corner $C_\delta$. We glue these functions, defined near the corners, by a partition of unity and denote by
$$
{\mathfrak s}_\varphi, \quad \varphi := \{\varphi_\delta\}
$$
the resulting function, which we call an \emph{end-profile function} of the end $\partial_\infty M \cup \partial M$ of $M$.
\end{defn}
\begin{cond}[Smoothing profile] \label{cond:smoothing-profile}
\begin{enumerate}
\item We fix a contact-type hypersurface $S_0 \subset M$, the associated decomposition of $M$
$$
M = W \cup_{\partial W} S_0 \times [0,\infty), \quad \partial W = S_0,
$$
and the associated radial function $s= s_{S_0}$.
\item At each sectorial corner $C_\delta$ of $\nbhd(\partial_\infty M \cup \partial M)$, we fix the following data:
\begin{itemize}
\item a splitting data $(F_\delta, \{(R_\delta,I_\delta)\})$ with $F_\delta= (R_\delta,I_\delta)^{-1}(0,0)$,
\item a convex smoothing function
$$
\varphi_\delta = \varphi_{k_\delta +1} \in \mathfrak{Conv}_{\mathfrak{sm}}({\mathcal R}^{k_\delta+1})
$$
and its associated sectorial corner smoothing function
$$
s_{k_\delta,\varphi}: \nbhd(C_\delta) \to {\mathcal R}.
$$
\end{itemize}
\item An end-profile function $ {\mathfrak s}_\varphi $ defined as in Definition \ref{defn:end-profile-function}.
\end{enumerate}
\end{cond}
\subsubsection{Product of smoothing profiles}
Let $(X,\lambda_X)$ and $(Y,\lambda_Y)$ be Liouville $\sigma$-sectors with corners. The following shows that the $\sigma$-splitting data is monoidal; its proof is straightforward and so is omitted.
\begin{lemma} For given $\sigma$-splitting data $(F^X_\alpha,\{(R_\alpha^X,I_\alpha^X)\})$ of $X$ along a sectorial corner $C_\alpha$ of codimension $k_\alpha$ and $(F^Y_\beta, \{(R_\beta^Y, I_\beta^Y)\})$ of $Y$ along $C_\beta$ of codimension $k_\beta$, their product
\begin{eqnarray}\label{eq:splitting data for product}
&{}& (F^{X \times Y}_{\alpha,\beta},\{R_{\alpha,\beta}^{X\times Y}, I_{\alpha,\beta}^{X \times Y}\}) \nonumber\\
&:=& \left(F^X_\alpha \times Y \bigcup X \times F^Y_\beta,\left(\{(\pi_X^*R_\alpha^X,\pi_X^*I_\alpha^X)\} \bigcup \{(\pi_Y^*R_\beta^Y,\pi_Y^*I_\beta^Y)\}\right)\right)
\end{eqnarray}
is a $\sigma$-splitting data for $X \times Y$.
\end{lemma}
We also have
\begin{eqnarray*}
D(X \times Y) & = & \partial_\infty^{\text{\rm Liou}}(X \times Y) \bigcup \partial(X \times Y)\\
& = & DX \times \overline Y \bigcup \overline X \times DY.
\end{eqnarray*}
\begin{remark}
An upshot of this function $\frak s_{\varphi}$ is that it is $J$-convex for the \emph{sectorial almost complex structures} $J$ to be introduced later, in the sense that $-d(d{\frak s}_\varphi \circ J) \geq 0$ as a $(1,1)$-current.
\end{remark}
Next equip each of $X$ and $Y$ with an end-profile function
$$
\frak s_X:= \frak s_{\varphi_X}, \quad \frak s_Y:= \frak s_{\varphi_Y}
$$
defined as in Definition \ref{defn:end-profile-function}, i.e., with end-profile functions $\frak s_X$ and $\frak s_Y$ associated to the collections
$$
\{s_{k_\alpha+1,\varphi_\alpha}\}, \quad \{s_{k_\beta+1,\varphi_\beta}\},
$$
with
\begin{eqnarray}\label{eq:s-alpha*s-beta}
s_{k_\alpha+1,\varphi_\alpha} & = & -\log \varphi_2(e^{-s_{k_\alpha,\varphi_\alpha}}, e^{-s}) \nonumber\\
s_{k_\beta+1,\varphi_\beta} & = & -\log \varphi_2(e^{-s_{k_\beta,\varphi_\beta}}, e^{-s})
\end{eqnarray}
for convex symmetric smoothing functions $\varphi_{k_\alpha +1}: {\mathcal R}^{k_\alpha+1}_+ \to {\mathcal R}$ and $\varphi_{k_\beta +1}: {\mathcal R}^{k_\beta+1}_+ \to {\mathcal R}$.

Next we state the following lemma, whose proof is immediate from the definitions and so is left to the reader.
\begin{lemma}
Take symplectization radial functions $s^X$ and $s^Y$ of $X$ and $Y$ respectively. Then the product Liouville vector field $Z^X \oplus Z^Y$ is transversal to the level sets $(s^{X \times Y})^{-1}(r)$ for all sufficiently large $r> 0$, where we denote
$$
s^{X \times Y} := - \log \varphi_2(e^{-s^X}, e^{-s^Y})
$$
and call it the \emph{(smoothed) product radial coordinate} of $s^X$ and $s^Y$. Furthermore, we have
\eqn\label{eq:boundary-behavior}
s^{X \times Y} =
\begin{cases}
\pi_X^*s^X \quad & s^X \geq R^+, \, e^{-s^Y}\geq \epsilon_\varphi\\
\pi_Y^*s^Y \quad & s^Y \geq R^+, \, e^{-s^X} \geq \epsilon_\varphi
\end{cases}
\eqnd
for a fixed small constant $\epsilon_\varphi > 0$ depending on $\varphi$ and for a sufficiently large constant $R^+ > 0$.
\end{lemma}
\begin{defn}[Product end-profile function]\label{defn:product-endprofile}
We define the \emph{product end-profile function}
$$
\frak s_X *_{\varphi} \frak s_Y: X \times Y \to {\mathcal R}
$$
by taking the union of the local convex interpolations
$$
\{s_{k_\alpha+k_\beta+1,\varphi}:\nbhd(C_\alpha \times C_\beta) \to {\mathcal R}\}
$$
defined by
$$
s_{k_\alpha+k_\beta+1,\varphi} := -\log \varphi_3\left(e^{-s_{k_\alpha,\varphi_\alpha}}, e^{-s_{k_\beta,\varphi_\beta}}, e^{-s^{X\times Y}}\right)
$$
followed by taking a partition of unity thereof on $X \times Y$.
\end{defn}
Using the definition \eqref{eq:skvarphi}, we unravel the arguments inside the parentheses into
\begin{eqnarray*}
s_{k_\alpha,\varphi_\alpha} & = & -\log \left(\varphi_{k_\alpha}(R_{\alpha,1},\ldots,R_{\alpha,k_\alpha})\right) \\
s_{k_\beta,\varphi_\beta} & = & -\log \left(\varphi_{k_\beta}(R_{\beta,1},\ldots,R_{\beta,k_\beta}) \right)
\end{eqnarray*}
and hence we have
\begin{eqnarray}\label{eq:product-local}
&{}& s_{k_\alpha+k_\beta+1,\varphi} \nonumber\\
&:= & -\log \varphi_3\left(\varphi_{k_\alpha}(R_{\alpha,1},\ldots,R_{\alpha,k_\alpha}), \varphi_{k_\beta}(R_{\beta,1},\ldots,R_{\beta,k_\beta}),\varphi_2\left(e^{-s^X}, e^{-s^Y}\right)\right)\nonumber\\
&{}&
\end{eqnarray}
\begin{lemma}\label{lem:product-endprofile}
The function
$$
\frak s_X *_{\varphi} \frak s_Y: X \times Y \to {\mathcal R}
$$
is an exhaustion function of $\nbhd(D(X \times Y))$ and its level sets smoothly approximate the full boundary $D(X \times Y)$.
\end{lemma}
\begin{proof}
We first note that the function
$$
-\log \varphi_3(e^{-x}, e^{-y}, e^{-z})
$$
is a convex three-term interpolation of the functions $x, \, y$ and $z$ on ${\mathcal R}^3_{\geq 0}$ whose level sets smoothly approximate the boundary $\partial {\mathcal R}^3_{\geq 0}$. This being said, the first statement follows from the definition of the convex smoothing functions $\varphi_k$, and the second is obvious by construction.
\end{proof}
\section{Sectorial almost complex structures and gradient-sectorial Lagrangians}
In this section, we assume that we are given the splitting data and end-profile functions
$$
\frak s_X = {\frak s}_{\varphi^X}, \quad \frak s_Y := {\frak s}_{\varphi^Y}
$$
respectively for $X$ and $Y$. Then we consider a product end-profile function
$$
{\mathfrak s}_{X \times Y} := \frak s_X *_{\varphi} \frak s_Y
$$
as defined before. Note that this product depends on the choice of convex functions $\varphi$, which is, however, a contractible choice.
\subsection{Definition of sectorial almost complex structures}
Note that a splitting data
$$
\nbhd(\partial M) \cong F \times {\mathbb C}_{\text{\rm Re} \geq 0}^k, \quad \{(R_i^\sigma,I_i^\sigma)\}_{i=1}^k
$$
provides a foliation, denoted by $\cF_F$, of symplectic submanifolds whose leaves are given by
$$
F \times \{(x,y)\}_{(x,y) \in {\mathbb C}_{\text{\rm Re}\geq 0}}.
$$
We now introduce the following class of almost complex structures.
\newenvironment{corner-J}{
\renewcommand*{\theenumi}{(J\arabic{enumi})}
\renewcommand*{\labelenumi}{(J\arabic{enumi})}
\enumerate
}{
\endenumerate
}
\begin{defn}[Sectorial almost complex structures]\label{defn:weakly-sectorial-J}
Let $(M,\lambda)$ be a Liouville sector with boundary and corners equipped with a smoothing profile whose associated splitting is given by
$$
\nbhd(\partial M) \cong F \times {\mathbb C}_{\text{\rm Re} \geq 0}^k, \quad \{(R_i^\sigma,I_i^\sigma)\}_{i=1}^k
$$
and whose associated end-profile function is ${\frak s}_\varphi$. An $\omega$-tame almost complex structure $J$ on the Liouville sector is said to be \emph{sectorial} (with respect to the given smoothing profile) if $J$ satisfies the following:
\begin{corner-J}
\item {\textbf{[$\cF_F$ is $J$-complex]}}\label{item. piF is holomorphic}
In a neighborhood $\nbhd^Z(\partial \Mliou)$ of $\partial \Mliou$, we require
\eqn\label{eq:J-versus-JF}
J\left(T^*F \oplus 0_{\text{\rm span}\{dR_{i}^\sigma, dI_{i}^\sigma\}_{i=1}^k}\right) \subset T^*F \oplus 0_{\text{\rm span}\{dR_{i}^\sigma, dI_{i}^\sigma\}_{i=1}^k},
\eqnd
and $J$ restricts to an almost complex structure of contact-type on $F$.
\item {\textbf{[$({\mathfrak s}_\varphi,J)$ is a pseudoconvex pair]}}\label{item. ds is dual to lambda_kappa}
In a neighborhood $\nbhd^Z(\partial \Mliou \cup \partial_\infty \Mliou)$ of $\partial \Mliou \cup \partial_\infty \Mliou$, we require that
$$
-d(d{\mathfrak s}_{\varphi} \circ J) \geq 0,
$$
i.e., that it is a positive $(1,1)$-current.
\end{corner-J}
We denote by $\cJ^{\text{\rm sec}}_{{\mathfrak s}_\varphi}=\cJ^{\text{\rm sec}}_{{\mathfrak s}_\varphi}(M)$ the set of sectorial almost complex structures.
\end{defn}
Such almost complex structures are also studied in \cite{oh:sectorial}, where the common asymptotically $Z$-invariant Lagrangians are still adopted as branes of the wrapped Fukaya category. Obviously any almost complex structure of the form
$$
-d{\mathfrak s}_{\varphi} \circ J = \lambda + df
$$
satisfying
\ref{item. piF is holomorphic} is sectorial, which includes both the class of $\kappa$-sectorial almost complex structures and that of $\lambda$-sectorial almost complex structures from \cite{oh:sectorial}. In particular $\cJ^{\text{\rm sec}}_{{\mathfrak s}_\varphi}$ is nonempty and contractible.

The following proposition is the reason why we consider sectorial almost complex structures instead of the $\lambda$-sectorial ones from \cite{oh:sectorial}: \emph{the natural inclusion stated below does not exist for the set of $\lambda$-sectorial almost complex structures.}
\begin{prop} Consider the product $J_1 \times J_2$ for $J_1 \in \cJ_{{\frak s}_X}^{\text{\rm sec}}$ and $J_2 \in \cJ_{{\frak s}_Y}^{\text{\rm sec}}$. Then $({\mathfrak s}_{X \times Y}, J_1 \times J_2)$ satisfies the following:
\begin{enumerate}
\item $- d{\mathfrak s}_{X \times Y} \circ (J_1 \times J_2) = f_X \pi_1^*\lambda_1 + f_Y \pi_2^*\lambda_2$ for positive functions $f_X, \, f_Y: X \times Y \to {\mathcal R}$ of the form
$$
f_X = \psi_1({\frak s}_X, {\frak s}_Y), \quad \quad f_Y = \psi_2({\frak s}_X, {\frak s}_Y)
$$
where $\psi_1, \psi_2 > 0$.
\item $({\mathfrak s}_{X \times Y}, J_1 \times J_2)$ is a pseudoconvex pair.
\end{enumerate}
In particular we have
$$
\cJ_{{\frak s}_X}^{\text{\rm sec}} \times \cJ_{{\frak s}_Y}^{\text{\rm sec}} \subset \cJ_{{\frak s}_{X \times Y}}^{\text{\rm sec}}.
$$
\end{prop}
\begin{proof} We compute
\begin{eqnarray*}
- d{\mathfrak s}_{X \times Y} \circ (J_1 \times J_2) & = & - \frac{\partial \varphi_2}{\partial x}({\mathfrak s}_{X \times Y}) d{\mathfrak s}_X \circ J_1 - \frac{\partial \varphi_2}{\partial y}({\mathfrak s}_{X \times Y}) d{\mathfrak s}_Y \circ J_2\\
& = & \frac{\partial \varphi_2}{\partial x}({\mathfrak s}_{X \times Y}) \pi_X^*\lambda_1 + \frac{\partial \varphi_2}{\partial y}({\mathfrak s}_{X \times Y}) \pi_Y^*\lambda_2.
\end{eqnarray*}
By setting
$$
f_X := \frac{\partial \varphi_2}{\partial x}({\mathfrak s}_{X \times Y}), \quad f_Y := \frac{\partial \varphi_2}{\partial y}({\mathfrak s}_{X \times Y}),
$$
this proves the first statement.

To prove the second statement, we compute further
\begin{eqnarray*}
- d(d{\mathfrak s}_{X \times Y} \circ (J_1 \times J_2)) & = & \frac{\partial^2 \varphi_2}{\partial x^2}({\mathfrak s}_{X \times Y})\, (d{\mathfrak s}_X \circ J_1) \wedge d{\mathfrak s}_X \\
&{}& + \frac{\partial \varphi_2}{\partial x}({\mathfrak s}_{X \times Y})\, d\lambda_1 + \frac{\partial \varphi_2}{\partial y}({\mathfrak s}_{X \times Y})\, d\lambda_2\\
&{}& + \frac{\partial^2 \varphi_2}{\partial y^2}({\mathfrak s}_{X \times Y})\, (d{\mathfrak s}_Y \circ J_2) \wedge d{\mathfrak s}_Y .
\end{eqnarray*}
The first and the third summand are nonnegative currents, since $(d{\mathfrak s}_X \circ J_1) \wedge d{\mathfrak s}_X$ and $(d{\mathfrak s}_Y \circ J_2) \wedge d{\mathfrak s}_Y$ are nonnegative $(1,1)$-currents and
$$
\frac{\partial^2 \varphi_2}{\partial x^2}, \quad \frac{\partial^2 \varphi_2}{\partial y^2} \geq 0.
$$
On the other hand, the second summand is nonnegative because $d\lambda_1$ and $d\lambda_2$ are positive $(1,1)$-currents and
$$
\frac{\partial \varphi_2}{\partial x}, \quad \frac{\partial \varphi_2}{\partial y} \geq 0.
$$
This finishes the proof.
\end{proof}
As usual, we denote by $g_J$ the $\omega$-tame metric given by
$$
g_J(v,w) := \frac{\omega(v, J w) + \omega(w,Jv)}{2}
$$
for each given sectorial almost complex structure $J$.
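To make the role of the convex interpolation more transparent, we include the following elementary illustration. It is only a model computation: the particular choice of $\varphi_2$ below does not satisfy the compactly supported Hessian condition of Definition \ref{defn:symmetric-convex}, but it exhibits the same asymptotics.
\begin{example}
For the model choice $\varphi_2(u,v) = u + v$, one computes
$$
-\log\left(e^{-x} + e^{-y}\right) = \min(x,y) - \log\left(1 + e^{-|x-y|}\right),
$$
so the two-term interpolation agrees with $\min(x,y)$ up to an error bounded by $\log 2$ that decays exponentially away from the diagonal $\{x = y\}$. Its level set $\{e^{-x} + e^{-y} = e^{-r}\}$ is a smooth curve asymptotic to the two faces $\{x = r\}$ and $\{y = r\}$ of the quadrant corner, which is precisely the boundary behavior formalized in \eqref{eq:boundary-behavior}.
\end{example}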
\subsection{Gradient-sectorial Lagrangians and their products}
\label{subsec:product-branes}
As mentioned before, the enlarged set of manifolds with corners forms a monoid under the product. We next identify the set of objects for the wrapped Fukaya category of Liouville sectors with corners. We omit other brane data such as the orientation or the spin structure, since they are unchanged from the usual description in the literature and do not enter the discussion of the present paper.

We introduce the following normalized gradient vector field on a neighborhood $\nbhd(DM)$:
\eqn\label{eq:gradient-endprofile}
Z_{\mathfrak s} := \frac{\opname{grad} { \mathfrak s}}{|\opname{grad} { \mathfrak s}|^2}
\eqnd
where the gradient is taken with respect to the metric $g = g_J$. By definition, we have $Z_{\mathfrak s}[\mathfrak s] = 1> 0$ on $\nbhd(DM)$.

The following is the key lemma towards our proof of monoidality of the class of gradient-sectorial Lagrangian submanifolds introduced below. Its proof is an immediate consequence of the definition of the convex smoothing function $\varphi_2$ from \cite[Section 3.2]{oh:sectorial}, more specifically of \eqref{eq:boundary-behavior}.
\begin{lemma}\label{lem:sXY-behavior}
For any given $\epsilon_0 > 0$, there exists a sufficiently large $r_0> 0$ such that
$$
{\frak s}_{X \times Y} = s^{X \times Y} = - \log \varphi_2(e^{-s^X}, e^{-s^Y})
$$
whenever $\opname{dist}(x, \partial(X \times Y)) \geq \epsilon_0$ and ${\frak s}_{X \times Y}(x) \geq r_0$.
\end{lemma}
Now we are ready to introduce the notion of \emph{gradient-sectorial Lagrangians} with respect to the end-profile function $\frak s_{\varphi}$. This is a variation of the sectorial Lagrangians introduced in \cite{oh:sectorial}. Equip $M$ with any tame metric, such as $g_J$ given above for a sectorial $J$.
\begin{defn}[Gradient-sectorial Lagrangian branes]\label{defn:gradient-sectorial-Lagrangian}
Let $(M,\lambda)$ be a Liouville sector with corners. Let ${\frak s}$ be the end-profile function associated to a given smoothing profile. We say an exact Lagrangian submanifold $L$ of $(M,\omega)$ is \emph{gradient-sectorial} if
\begin{enumerate}
\item $L \subset \operatorname{Int} M \setminus \partial M$, and $\text{\rm dist}(L,\partial M) > 0$.
\item There exists a sufficiently large $r_0> 0$ such that $L \cap {\frak s}^{-1}([r_0,\infty))$ is $Z_{\mathfrak s}$-invariant.
\end{enumerate}
\end{defn}
\begin{remark}[Comparison with sectorial Lagrangians \cite{oh:sectorial}]\label{rem:comparison}
The vector field $Z_{\mathfrak s}$ coincides with the Liouville vector field $Z$ on the symplectization end $[0, \infty) \times S_0$ of $M$ if we set ${\mathfrak s} = s$ and $J$ is $Z$-invariant, so that
$$
\frac{\partial}{\partial s} = Z
$$
in the $Z$-invariant splitting
$$
TM \cong \text{\rm span}_{\mathcal R} \{Z, R_\theta\} \oplus \xi_{\{s = 0\}}
$$
with respect to $\omega = d(e^s \, \theta)$, where $R_\theta$ is the Reeb vector field of the contact-type hypersurface $S_0$ equipped with the contact form $\theta = \iota_{S_0}^*\lambda$. In this regard, the notion of gradient-sectorial Lagrangian branes is a generalization of $Z$-invariant-at-infinity Lagrangian submanifolds. However, while the latter definition does not respect the product operation of Liouville manifolds, that of gradient-sectorial Lagrangians does, under the product operation of the manifolds together with that of the end-profile functions
$$
({\frak s}_X, {\frak s}_Y) \mapsto {\frak s}_X *_\varphi {\frak s}_Y =: {\frak s}_{X \times Y}.
$$
\end{remark}
The gradient-sectorial Lagrangians have the following fundamental monoidal property under the product operation. \emph{Such monoidality fails for the commonly adopted $Z$-invariant Lagrangians in the study of the wrapped Fukaya category}, which causes considerable complications in the construction of K\"unneth-type mappings. (See \cite{gao,gps,gps-2} for example.) Here crucially enters the special interpolation property of the end-profile function ${\frak s}_\varphi$, which has the expression
$$
s_{k+1, \varphi} = - \log \varphi(R_1, \cdots, R_k,e^{-s})
$$
on the ceiling of each sectorial corner of $\nbhd(\partial M)$ slightly away from the corner $\partial_\infty M \cap \partial M$.
\begin{theorem}\label{thm:product-brane}
Let $(X,\omega_X)$ and $(Y,\omega_Y)$ be Liouville sectors with corners equipped with a smoothing profile. Let ${\frak s}_{X \times Y}$ be the product end-profile function given in Definition \ref{defn:product-endprofile}. Then for any gradient-sectorial Lagrangian branes $L_1$ and $L_2$, the product $L_1 \times L_2$ is also gradient-sectorial with respect to ${\frak s}_{X \times Y}$.
\end{theorem}
\begin{proof}
Obviously we have
$$
L_1 \times L_2 \subset \operatorname{Int} (X \times Y)
$$
if $L_1 \subset \operatorname{Int} X$ and $L_2 \subset \operatorname{Int} Y$, and
$$
\text{\rm dist}(L_1 \times L_2, \partial(X \times Y)) \geq \min\{\text{\rm dist}(L_1, \partial X), \text{\rm dist}(L_2, \partial Y)\} > 0.
$$
This shows that Definition \ref{defn:gradient-sectorial-Lagrangian} (1) is satisfied for $L_1 \times L_2$. This also implies
$$
(L_1 \times L_2) \cap {\frak s}_{X \times Y}^{-1}([r_0,\infty)) \cap R^{-1}((0,e^{-r_0}]) = \emptyset
$$
for all $R = R_{\delta_X,i}$ (resp. $R = R_{\delta_Y,j}$) at any sectorial corner of $X$ (resp. of $Y$), because we have $L_1 \cap R^{-1}((0, e^{-r_0}]) = \emptyset$ (resp. $L_2 \cap R^{-1}((0,e^{-r_0}]) =\emptyset$) whenever
$$
e^{-r_0} < \min\{\text{\rm dist}(L_1, \partial X),\, \text{\rm dist}(L_2, \partial Y)\},
$$
or equivalently whenever
$$
r_0 > \max\{-\log \text{\rm dist}(L_1, \partial X),\, - \log \text{\rm dist}(L_2, \partial Y)\}.
$$
Therefore from the definition of ${\frak s}_{X \times Y}$ in Definition \ref{defn:product-endprofile}, \eqref{eq:boundary-behavior} and the definition of gradient-sectorial Lagrangians above, we obtain
$$
{\frak s}_{X \times Y} =
\begin{cases}
\pi_X^*s^X \quad &\text{\rm on } \, (L_1 \times L_2) \cap \{s^X \geq r_0\}\\
\pi_Y^* s^Y \quad & \text{\rm on } \, (L_1 \times L_2) \cap \{s^Y \geq r_0\}.
\end{cases}
$$
This and the splitting property of $J_1 \times J_2$ give rise to
$$
\opname{grad}_{g_{J_1 \times J_2}} {\frak s}_{X \times Y} =
\begin{cases}
\opname{grad}_{g_{J_1}} {\frak s^X} \oplus 0 \quad \text{\rm on }\, (L_1 \times L_2) \cap \{s^X \geq r_0\}\\
0 \oplus \opname{grad}_{g_{J_2}} {\frak s^Y} \quad \text{\rm on }\, (L_1 \times L_2) \cap \{s^Y \geq r_0\}.
\end{cases}
$$
Therefore it follows from the $Z_{{\mathfrak s}_X}$-invariance of $L_1$ and the $Z_{{\mathfrak s}_Y}$-invariance of $L_2$ at infinity that $Z_{{\frak s}^{X \times Y}}$ is tangent to
$$
(L_1 \times L_2) \cap {\frak s}_{X \times Y}^{-1}([r,\infty))
$$
for all sufficiently large $r>0$.
We remark that the open subset $ {\frak s}_{X \times Y}^{-1}((r,\infty))$ is a neighborhood of $\partial_\infty(X \times Y)$ and that for any given neighborhood
$$
\nbhd(\partial_\infty(X \times Y)) = \nbhd(\partial_\infty X \times Y \cup X \times \partial_\infty Y)
$$
we can choose $r$ sufficiently large so that
$$
(L_1 \times L_2) \cap {\frak s}_{X \times Y}^{-1}([r,\infty)) \subset \nbhd(\partial_\infty(X \times Y)).
$$
This proves that $L_1 \times L_2$ satisfies Condition (2) of Definition \ref{defn:gradient-sectorial-Lagrangian}, which completes the proof.
\end{proof}
\subsection{Strong maximum principle and gradient-sectorial Lagrangian branes}
In this subsection, we explain how sectorial almost complex structures pair nicely with gradient-sectorial Lagrangian branes to become amenable to the strong maximum principle, and hence give rise to fundamental confinement results for various Floer-type equations. We will illustrate this just for the Floer equation that produces the structure maps of the associated $A_\infty$ category, leaving the proofs for the other cases to \cite{oh:sectorial}, which handles the more subtle case of \emph{$\lambda$-sectorial almost complex structures} and \emph{$Z$-invariant-at-infinity Lagrangian branes}.

Let $J$ be a sectorial almost complex structure of the Liouville sector with corners $M$. Consider a $(k+1)$-tuple $(L_0, \ldots, L_k)$ of gradient-sectorial Lagrangian submanifolds. We denote
$$
\Sigma = D^2 \setminus \{z_0, \ldots, z_k\}
$$
and equip $\Sigma$ with strip-like coordinates $(\tau,t)$, with $\pm \tau \in [0,\infty)$ and $t \in [0,1]$, near each $z_i$. Then, for a given collection of intersection points $p_i \in L_i\cap L_{i+1}$ for $i = 0, \ldots, k$, we wish to study maps $u: \Sigma \to \Mliou$ satisfying the Cauchy-Riemann equation
\eqn\label{eq:unwrapped-structure-maps}
\begin{cases}
\overline \partial_J u = 0\\
u(\overline{z_iz_{i+1}}) \subset L_i \quad & i = 0, \ldots, k\\
u(\infty_i,t) = p_i, \quad & i = 0, \ldots, k.
\end{cases}
\eqnd
The following theorem will establish both vertical and horizontal confinement results simultaneously.
\begin{theorem}\label{thm:unwrapped}
Suppose that there exists some $\delta > 0$ such that
$$
\opname{dist}(L_i,\partial M) \geq \delta
$$
for all $i = 0, \dots, k$. Let $u$ be a solution to~\eqref{eq:unwrapped-structure-maps}. Then there exists a sufficiently large $r > 0$ such that
\eqn\label{eq:confimennt}
\operatorname{Image} u \subset ({\mathfrak s}_{\varphi})^{-1}((-\infty,r]).
\eqnd
\end{theorem}
\begin{proof}
First, all the $L_i$'s are contained in $\{R \geq \delta\} \subset \operatorname{Int} M$, since we have
$$
\min \{\opname{dist}(L_i, \partial M) \mid i=0, \ldots, k\} \geq \delta
$$
by the hypothesis. Then, by the definition of sectorial almost complex structures, $J$ is associated to the given splitting data and the end-profile function ${\mathfrak s}_{\varphi}$. Since a neighborhood of $\partial_\infty M \cup \partial M$ is exhausted by the family of hypersurfaces
$$
({\mathfrak s}_{\varphi})^{-1}(r)
$$
for $r \geq 0$, it is enough to prove \eqref{eq:confimennt} for some $r > 0$.

We first recall that $u$ is $J$-holomorphic and that ${\mathfrak s}_{\varphi}$ satisfies $ - d(d {\mathfrak s}_{\varphi} \circ J) \geq 0 $ from the definition of the pseudoconvex pair $( {\mathfrak s}_{\varphi}, J)$.
Since $u$ is $J$-holomorphic, we obtain
$$
d\left({\mathfrak s}_{\varphi} \circ u\right) \circ j = d{\mathfrak s}_{\varphi} \circ J \circ du = u^*(d{\mathfrak s}_{\varphi} \circ J).
$$
By taking the differential of this equation, we derive
$$
-d\left(d\left({\mathfrak s}_{\varphi} \circ u\right) \circ j\right) = - u^*(d(d{\mathfrak s}_{\varphi} \circ J)) \geq 0.
$$
In particular, the function ${\mathfrak s}_{\varphi} \circ u$ is subharmonic and cannot attain an interior maximum on $\Sigma$ by the maximum principle. Next we show by the strong maximum principle that ${\mathfrak s}_{\varphi} \circ u$ cannot have a boundary maximum in a neighborhood of $ \partial_\infty M \cup \partial M $ either. This will then enable us to obtain the $C^0$ confinement result
$$
\operatorname{Image} u \subset \{{\mathfrak s}_{\varphi} \leq r_0\}
$$
for any finite energy solution $u$ with the fixed asymptotics given in \eqref{eq:unwrapped-structure-maps}, provided $r_0$ is sufficiently large.

Now suppose to the contrary that ${\mathfrak s}_{\varphi} \circ u$ has a boundary local maximum at a point $z' \in \partial D^2\setminus \{z_0,\ldots, z_k\}$. Unless ${\mathfrak s}_{\varphi}\circ u$ is a constant function, in which case there is nothing to prove, the strong maximum principle implies
\eqn\label{eq:lambda(dudtheta)}
0 < \frac{\partial}{\partial \nu}({\mathfrak s}_{\varphi}(u(z'))) = d{\mathfrak s}_{\varphi}\left(\frac{\partial u}{\partial \nu}(z')\right)
\eqnd
for the outward unit normal $\frac{\partial}{\partial \nu}|_{z'}$ of $\partial \Sigma$. Let $(r,\theta)$ be an isothermal coordinate of a neighborhood of $z' \in \partial \Sigma$ in $(\Sigma,j)$ adapted to $\partial \Sigma$, i.e., such that $\frac{\partial}{\partial \theta}$ is tangent to $\partial \Sigma$ and $|dz|^2 = (dr)^2 + (d\theta)^2$ for the complex coordinate $z = r+ i\theta$, and
\eqn\label{eq:normal-derivative}
\frac{\partial}{\partial \nu} = \frac{\partial}{\partial r}
\eqnd
along the boundary of $\Sigma$. Since $u$ is $J$-holomorphic, we also have
$$
\frac{\partial u}{\partial r} + J \frac{\partial u}{\partial \theta} = 0.
$$
Therefore we derive
$$
d{\mathfrak s}_{\varphi}\left(\frac{\partial u}{\partial \nu}(z')\right) = d{\mathfrak s}_{\varphi} \left(-J \frac{\partial u}{\partial \theta}(z')\right).
$$
By the ${\mathfrak s}_{\varphi}$-gradient sectoriality of the Lagrangians and the boundary condition $u(\partial \Sigma) \subset \bigcup_i L_i$, both $Z_{{\mathfrak s}_\varphi}(u(z'))$ and $\frac{\partial u}{\partial \theta}(z')$ are contained in $T_{u(z')}L_i$ for the relevant $i$, which is a $d\lambda$-Lagrangian subspace. Therefore we have
\begin{eqnarray*}
0 & = & d\lambda\left(Z_{{\mathfrak s}_\varphi}(u(z')),\frac{\partial u}{\partial \theta}(z')\right) = d\lambda\left(Z_{{\mathfrak s}_\varphi}(u(z')), J \frac{\partial u}{\partial \nu}(z')\right)\\
& = & g_J\left(Z_{{\mathfrak s}_\varphi}(u(z')), \frac{\partial u}{\partial \nu}(z')\right) = |Z_{\mathfrak s_\varphi}(u(z'))|^2 \, d \mathfrak s_\varphi\left(\frac{\partial u}{\partial \nu} (z')\right)
\end{eqnarray*}
where the last equality follows from the definition \eqref{eq:gradient-endprofile} of the normalized gradient vector field $Z_{\mathfrak s_\varphi}$. This contradicts \eqref{eq:lambda(dudtheta)}, and hence the function ${\mathfrak s}_\varphi \circ u$ cannot have a boundary maximum either.
This then implies
$$
\max ({\mathfrak s}_\varphi\circ u) \leq \max \{{\mathfrak s}_\varphi(p_i) \mid i=0,\ldots, k \}.
$$
By setting
$$
r_0 = \max \{{\mathfrak s}_\varphi(p_i) \mid i=0,\ldots, k \} + 1,
$$
we have finished the proof.
\end{proof}
We remark that the constant $ \max \{{\mathfrak s}_\varphi(p_i) \mid i=0,\ldots, k \}$ (and so $r_0$) depends only on the intersection set
$$
\bigcup_{i=0}^k L_i \cap L_{i+1 \ (\mathrm{mod}\ k+1)}
$$
and not on the maps $u$.
\section{Discussion and what to do}
\subsection{Pseudoconvex pairs and $Z$-invariant-at-infinity Lagrangian branes}
\label{subsec:deform-end}
A construction of sectorial almost complex structures is given in \cite{oh:sectorial}, to which we refer readers for details. A subtle difficulty to overcome in the construction of a pseudoconvex pair $(\psi, J)$ in \cite{oh:sectorial} lies in the fact that, for a given Liouville sector with boundary \emph{and corners}, the asymptotic boundary $\partial_\infty M$ is of contact type and does not form a \emph{coisotropic collection} when it is added to the coisotropic collection
$$
\{H_1, \, H_2, \cdots, H_m\}
$$
associated to a sectorial corner $\delta$ of the boundary $\partial M$. (Compare the definitions of $\frak s_{k,\varphi}$ and $\frak s_{k+1,\varphi}$: the former involves interpolations between the sectorial corners only, while the latter involves both the sectorial corners and the ceiling corner.) This destroys the contact-type property of the union
$$
DM = \partial_\infty M \bigcup \partial M
$$
in the sense that it may not be approximated by hypersurfaces of contact type, \emph{unless the sector is sufficiently expanded in the horizontal direction.} (See \cite[Lemma 2.31]{gps}.) However, \cite[Theorem 1.2.3]{oh:sectorial} shows that it admits an exhaustion function $\psi$ that becomes $J$-convex for some $\omega$-tame almost complex structures $J$ which also satisfy
$$
-d\psi \circ J = \lambda.
$$
Such a pair is called a \emph{Liouville-pseudoconvex pair}, or more specifically a $\lambda$-pseudoconvex pair. It is shown in \cite{oh:sectorial} that for any Liouville-pseudoconvex pair $(\psi,J)$, the function $\psi \circ u$ satisfies the (interior) maximum principle as well as the strong maximum principle when a $J$-holomorphic curve $u$ satisfies the boundary condition attached to a \emph{$Z$-invariant-at-infinity Lagrangian submanifold} (see the derivation recalled below). The pairs constructed in \cite{oh:sectorial} are precisely those pairs $(\psi,J)$ with
\eqn\label{eq:Liouville-pseudoconvex-pair}
{\mathfrak s}_{\varphi,\kappa} = \text{$\kappa$-wiggled end-profile function}, \, J = \text{$\lambda$-sectorial almost complex structure}
\eqnd
for whose precise definitions we refer to \cite{oh:sectorial}. The aforementioned difficulty is what was overcome in \cite{oh:sectorial}, as a byproduct of the existence result of the \emph{$\lambda$-sectorial almost complex structures} introduced therein. The main task in \cite{oh:sectorial} then is to interpolate the two requirements on the intersection $\nbhd(\partial_\infty M \cap \partial M)$ in such a way that the relevant maximum and strong maximum principles are still applicable.
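For the reader's convenience, let us record the one-line derivation behind this maximum principle, in our notation and up to sign conventions: for a Liouville-pseudoconvex pair $(\psi, J)$ and any $J$-holomorphic map $u$, we have
$$
-d\big(d(\psi \circ u) \circ j\big) = -u^*\big(d(d\psi \circ J)\big) = u^*(d\lambda) = u^*\omega \geq 0,
$$
where the pointwise nonnegativity of $u^*\omega$ is exactly the $\omega$-tameness of $J$ applied to $du$; hence $\psi \circ u$ is subharmonic.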
This construction of $J$ in \cite{oh:sectorial} requires unveiling the background geometry of Liouville sectors with corners and going through a careful pointwise consideration of almost complex structures near the corner $\partial_\infty M \cap \partial M$, to reveal what obstructs interpolating the aforementioned two geometric structures, the presymplectic geometry of $\partial M$ and the Liouville geometry of $\nbhd(\partial_\infty M)$, near the corner \emph{so that the $Z$-invariant Lagrangian boundary condition becomes amenable to the strong maximum principle}.
\subsection{Relationship with the K\"unneth-type formulae in Floer theory}
One main consequence of Lemma \ref{lem:product-endprofile}, combined with the usage of sectorial almost complex structures and gradient-sectorial Lagrangians in the present paper, is the following monoidality of the various Floer moduli spaces under the product of Liouville sectors. Let $K = K(t,x)$ be a sectorial Hamiltonian with respect to the end-profile function ${\mathfrak s}_X$, i.e., $K = \rho \circ {\mathfrak s}_X$ for a function $\rho: {\mathcal R} \to {\mathcal R}$ with $\rho' \geq 0$ and $\rho'$ compactly supported. We denote by
$$
\mathcal{M}(M;J,K), \quad \mathcal{M}(M,L_0, L_1;J,K)
$$
the moduli spaces of the Hamiltonian-perturbed Floer trajectories for the closed string and the open string cases respectively, and by
$$
\mathcal{M}(M, \cL;J), \quad \cL = (L_0, \cdots, L_k)
$$
the moduli spaces of the $J$-holomorphic polygons entering the construction of the (wrapped) Fukaya category $\Fuk(M)$. The following, especially Statement (2), is an immediate consequence of Lemma \ref{lem:product-endprofile} and Theorem \ref{thm:unwrapped} applied to the pseudoconvex pair
$$
({\mathfrak s}_{M_1} *_{\varphi}{\mathfrak s}_{M_2}, J_1 \times J_2)
$$
under the boundary condition of product gradient-sectorial Lagrangians.
\begin{cor}
For $i=1, \, 2$, let $({\mathfrak s}_{M_i}, J_i)$ be a pair consisting of an end-profile function of the Liouville sector $(M_i,\lambda_i)$ and an associated sectorial almost complex structure. Then we have the following natural inclusion maps:
\begin{enumerate}
\item For any sectorial pair $K_1$ and $K_2$ of Hamiltonians on $M_1$ and $M_2$ respectively, we have
$$
\mathcal{M}(M_1,L_1;J_1,K_1) \times \mathcal{M}(M_2,L_2;J_2,K_2) \to \mathcal{M}(M_1 \times M_2, L_1 \times L_2;J_1 \oplus J_2, K_1 \oplus K_2)
$$
\item For any tuple of Lagrangians $\cL_i = (L_i^1, L_i^2, \ldots, L_i^k)$ for $i=1, \, 2$, we have
$$
\mathcal{M}(M_1, \cL_1) \times \mathcal{M}(M_2,\cL_2) \to \mathcal{M}(M_1 \times M_2, \cL_1 \times \cL_2).
$$
\end{enumerate}
\end{cor}
A similar proof can be given for Statement (1), the details of which we refer to \cite{oh:sectorial}, which deals with the more subtle case of the \emph{Liouville-pseudoconvex pairs} $({\mathfrak s}_{\varphi,\kappa}, J)$ of \eqref{eq:Liouville-pseudoconvex-pair} under the boundary condition of \emph{$Z$-invariant Lagrangian submanifolds}.
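At the level of the defining equations, the inclusion maps above come from the obvious splitting of the product data. Writing, up to sign conventions, the Hamiltonian-perturbed Floer equation in the standard form and using $X_{K_1 \oplus K_2} = X_{K_1} \oplus X_{K_2}$, a pair of solutions $u_1, u_2$ with
$$
\partial_\tau u_i + J_i\big(\partial_t u_i - X_{K_i}(u_i)\big) = 0, \qquad i = 1,2,
$$
assembles into $u = (u_1, u_2)$ satisfying
$$
\partial_\tau u + (J_1 \times J_2)\big(\partial_t u - X_{K_1 \oplus K_2}(u)\big) = 0,
$$
and the confinement of such a $u$ is provided by Theorem \ref{thm:unwrapped} applied to the pseudoconvex pair displayed above.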
An immediate consequence of this corollary is that all the K\"unneth-type maps in the study of the symplectic cohomology (from Statement (1) above) and of the Hochschild homology and cohomology (from Statement (2) above) of the wrapped Fukaya category have a chain-level monoidal property under the product operation of Liouville sectors with corners; this follows rather straightforwardly from the algebraic arguments in homological algebra given in \cite{seidel-book} and its references, or as summarized in \cite[Appendix B]{gps-2}. We will elaborate on this remark in a sequel to the present paper. We also refer readers to \cite{amorim:kuenneth,amorim:tensorproduct}, \cite{fukaya:unobstructed} for the relevant study of tensor products in the filtered setting of compact symplectic manifolds.
\subsection{Fukaya categories of Liouville manifolds of infinite type}
As explained in \cite{richards}, a topological space $M$ of infinite type that has an infinite number of ends can have \emph{non-cylindrical} ends, as in the case of a Riemann surface of infinite type with \emph{non-planar} ends. In this context, the standard setting in the literature, which uses Liouville manifolds with cylindrical ends and cylindrical-at-infinity Lagrangian submanifolds as the objects of the Fukaya category, cannot be applied. The standard approach of defining the Fukaya category as the (homotopy) colimit of the Fukaya categories of finite type, by considering the increasing union
$$
M = \bigcup_{N =1}^\infty M_N
$$
where each $M_N$ is a compact Liouville domain of finite type, is not necessarily the only way of defining the Fukaya category of such Liouville manifolds. We propose to adopt the \emph{gradient-sectorial} Lagrangian submanifolds with respect to a suitably chosen exhaustion function $\psi$ as the objects of the Fukaya category. In conjunction with this choice, we then consider the class of almost complex structures $J$ for which $(\psi, J)$ forms a pseudoconvex pair, as defined in the present paper and in \cite{oh:sectorial}, and the associated sectorial Hamiltonians $H$ as defined in \cite{oh:sectorial} as the relevant wrapping Hamiltonians. We refer to \cite{choi} for the study of the Fukaya category of infinite-type Riemann surfaces from this point of view. It remains to be seen whether or not this new definition of the Fukaya category is quasi-isomorphic to the colimit definition, even for the case of Riemann surfaces. Generalizing this construction to higher dimensional cases, and its applications, is a subject of future research.
\bibliographystyle{amsalpha}
\section{Introduction}
\label{sec:intro}
Radio relics, usually located in the outskirts of merging galaxy clusters, are giant ($\sim$Mpc) synchrotron sources that are believed to be produced by cosmic-ray electrons (CRe) (re-)accelerated by merger-induced shock waves in the intracluster medium \citep[ICM;][]{ensslin1998, 1999ApJ...518..594R, bj14, vanweeren2019review, 2019SSRv..215...14B}. The connection between shocks and relics has been confirmed by the detection of surface brightness and temperature discontinuities in X-ray observations at the location of relics \citep[e.g.][]{2008A&A...486..347G, 2013PASJ...65...16A,vanweeren2019review}.

The details of the acceleration mechanisms in radio relics are still not fully understood. The widely accepted mechanism for the acceleration of relativistic cosmic-ray (CR) particles at shock fronts is diffusive shock acceleration (DSA) \citep[e.g.][]{1987PhR...154....1B}. DSA is based on the original idea of \citet{1949PhRv...75.1169F}, according to which particles are scattered upstream and downstream of the shock by plasma irregularities, gaining energy at each shock crossing. In recent years, deep X-ray observations performed with \textit{Chandra}, \textit{XMM-Newton}, and \textit{Suzaku} have led to an increase in the number of shocks detected in merging galaxy clusters \citep[see e.g.][for recent works]{2017A&A...600A.100A, 2017MNRAS.464.2896C, 2018MNRAS.476.5591B}. Radio and X-ray observations suggest that radio relics probe particle acceleration by weak shocks ($\mathcal{M} \leq 5$) \citep[e.g.][]{2009A&A...494..429B,vw10,2012MNRAS.426...40B,Hoang2017sausage,2017A&A...600A.100A,2018ApJ...852...65R,2018MNRAS.476.5591B, 2019ApJ...873...64D} in a high-$\beta$ ($\beta = P_{th}/P_B$, i.e., the ratio between the thermal and magnetic pressures) environment such as the ICM, where the thermal pressure dominates over the magnetic pressure. However, X-ray and radio estimates of shock strength are typically in disagreement, possibly because these two proxies probe different parts of the underlying Mach number distribution \citep[e.g. see][for a recent discussion of this issue]{2021arXiv210608351W}.

In the weak shock regime, the acceleration efficiencies of cosmic-ray protons (CRp) are poorly understood, although current models and simulations predict acceleration efficiencies (defined as the ratio between the energy flux of accelerated cosmic rays and the shock kinetic power) of less than a few percent \citep[e.g.][]{2018ApJ...864..105H, 2019ApJ...883...60R, 2020ApJ...892...86H, 2020MNRAS.495L.112W}, in agreement with direct constraints coming from $\gamma$-ray non-detections of galaxy clusters \citep[see e.g.][for reviews]{ack10, ackermann14, ackermann16, 2021NewA...8501550W}. On the other hand, the observed connection between radio relics and shocks in merging galaxy clusters demonstrates that the electron acceleration (or re-acceleration) at these shocks is efficient, in the sense that even weak shocks ($\mathcal{M} \leq 2$) are associated with detectable radio emission. This implies a surprisingly large ratio of electron-to-proton CR acceleration efficiencies for DSA, because at the same time CR protons have never been detected in the ICM \citep[e.g.][]{va14relics, bj14,2015MNRAS.451.2198V, scienzo16,2020A&A...634A..64B}. Even if the radio power of some relics can be explained by the acceleration of electrons from the thermal pool (i.e.
the DSA mechanism) \citep{locatelli2020dsa}, this mechanism alone cannot explain the high radio power of the majority of relics \citep{2016MNRAS.460L..84B, 2016MNRAS.461.1302E, Hoang2017sausage}.

To mitigate the problem of the high acceleration efficiencies implied by weak cluster shocks, recent theoretical models assume a pre-existing population of CRe at the position of the relic that is re-accelerated by the passage of the shock \citep[e.g.][]{2005ApJ...627..733M, 2011ApJ...728...82M, kr11, ka12, 2014ApJ...788..142K, 2013MNRAS.435.1061P, 2020A&A...634A..64B}. This would soften the above theoretical problems, because the population of CRs processed by $\mathcal{M} \leq 3-4$ shocks is predicted to be dominated by the re-accelerated fossil population, and not by the freshly accelerated one. The re-acceleration scenario is supported by the observation of radio galaxies located nearby or within a few radio relics \citep[e.g.][]{2014ApJ...785....1B, 2015MNRAS.449.1486S, 2016MNRAS.460L..84B, 2017NatAs...1E...5V, digennaro2018saus}. However, it is not obvious that the injection of fossil electrons by one or a few radio galaxies can automatically produce a uniform population of electrons capable of producing the high degree of coherence of the radio emission observed in a few giant relics: in radio relics like ``the Sausage'' and ``the Toothbrush'', the spectral properties of the emission are very coherent across $\sim 2$ $\rm Mpc$, requiring a very uniform distribution of coeval fossil electrons in the shock upstream \citep[e.g.][]{2010Sci...330..347V,2016ApJ...818..204V,2018ApJ...852...65R,rajpurohit2020toothbrush, digennaro2018saus}.

Complementary to the above scenario, we thus focus here on a specific mechanism potentially alleviating this problem: we consider a multiple-shock (MS) scenario in which multiple, wide merger shocks sweeping the ICM in sequence can produce a large-scale and uniform distribution of mildly relativistic electrons. A similar mechanism has been very recently analyzed by \citet{kang2021diffusive}, in the context of the acceleration of cosmic-ray protons via DSA. The existence of multiple populations of shock waves sweeping the ICM along a variety of angles with respect to the leading axis of mergers, and possibly merging into larger shocks, has recently been explored by several simulations \citep[e.g.][]{2015ApJ...812...49H,2020MNRAS.498L.130Z,2021MNRAS.501.1038Z}.

In our work, we focus on MS electron acceleration and the radio emission generated in this way. In detail, we analyse the simulation of a massive ($M_{200} \approx 9.7 \cdot 10^{14} \ \mathrm{M}_{\odot}$) galaxy cluster from $z=1$ to $z=0$ \citep{2017MNRAS.464.4448W}, and we compute the radio emission generated by particles following the merger of the cluster, showing that MS electrons produce, on average, enough radio emission to be detectable with current radio telescopes (e.g. LOFAR).

This paper is organized as follows. In Section \ref{sec:method} we describe the numerical set-up used for the galaxy cluster simulation and the model used to simulate the evolution of electron spectra. The results are analyzed in Section \ref{sec:families}, where we distinguish two different relics probed by the simulated particles and study the radio emission of these particles, catalogued by their shock history. The detailed study of the integrated radio emission is presented in Section \ref{sec:integrated_radio}. Section \ref{sec:conclusion} summarizes the results of the paper and discusses future work.
\section{Numerical Method}
\label{sec:method}
\subsection{Simulation setup}\label{subsec:crater}
In this work, we study a massive ($M_{200} \approx 9.7 \cdot 10^{14} \ \mathrm{M}_{\odot}$) galaxy cluster that was simulated and analysed in \citet[][]{2016Galax...4...71W,2017MNRAS.464.4448W}. This cluster is interesting for a comparison with real observations, as it undergoes a major merger at redshift $z \approx 0.27$, producing detectable giant radio relics. The cluster was simulated with the cosmological magneto-hydrodynamic (MHD) code \textsc{ENZO} \citep{ENZO_2014} and analysed with the Lagrangian tracer code Cosmic-Ray Tracers (\textsc{CRaTer}) \citep{2017MNRAS.464.4448W}. In the following, we give a brief overview of the simulation setup; for specific details, we refer to Section 2.1 of \citet{2017MNRAS.464.4448W}.

The \textsc{ENZO} code follows the dark matter using an N-body particle-mesh solver \citep{1988csup.book.....H} and the baryonic matter using an adaptive mesh refinement (AMR) method \citep{1989JCoPh..82...64B}. More specifically, \citet{2017MNRAS.464.4448W} used the piecewise linear method \citep{1985JCoPh..59..264C} in combination with the hyperbolic Dedner cleaning \citep{2002JCoPh.175..645D}. The simulation covers a root grid with a comoving volume of $\sim (250 \ \mathrm{Mpc})^3$ sampled with $256^3$ grid cells and dark matter particles. An additional comoving volume of size $\sim (25 \ \mathrm{Mpc})^3$ has been further refined using 5 levels of AMR, i.e. $2^5$ refinements, for a final resolution of $31.7 \ \mathrm{kpc}$. The chosen AMR criteria, based on the over-density and the 1D velocity jump, ensure that about $\sim 80 \ \%$ of the cluster volume is refined at the highest AMR level.

We study this cluster in detail because it is massive, it has already been the subject of several works by our group, and because the fairly large dynamical range and number of available snapshots are optimal for our analysis involving tracer particles (see below). However, the final magnetic field reached through small-scale dynamo amplification in this object is kept artificially small by the spatial resolution, which is not sufficient to ensure a Reynolds number large enough to enter an efficient small-scale dynamo amplification regime, as studied in \citet{va18mhd}. Therefore, for simulating the injection and advection of CRe in this system, we re-normalized the magnetic field strength measured by the tracers by a factor of 10. The re-normalization results in typical magnetic field strengths of $\sim 0.1-0.2$ $\rm \mu G$ in our relics. In fact, the electron cooling depends rather weakly on the renormalization of the magnetic field strength, because inverse Compton cooling dominates over synchrotron cooling (see the denominator in Eq. \ref{eq:ic}).

Using \textsc{CRaTer}, \citet{2017MNRAS.464.4448W} followed a total of $\sim 1.3 \cdot 10^7$ Lagrangian tracer particles to analyse the cluster's evolution between $z = 1$ and $z = 0$, at a (nearly constant) time resolution of $\Delta t=31 \rm ~Myr$. Following the gas distribution of the ICM, \textsc{CRaTer} injects particles with a fixed mass, in our case $m_{\mathrm{tracer}} \approx 10^8 \ \mathrm{M}_{\odot}$, into the simulation. The tracers' velocities are computed by interpolating the local grid velocities to the tracers' positions using a \textit{cloud-in-cell} (CIC) interpolation method (a minimal sketch of this step is given below).
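For illustration, the following is a minimal Python sketch of a CIC (trilinear) interpolation of a gridded field to particle positions. The array layout, units and function names are our own assumptions for this sketch, and this is not the \textsc{CRaTer} implementation.
\begin{verbatim}
import numpy as np

def cic_interpolate(field, positions, dx):
    """Interpolate a cell-centred grid field (shape [nx, ny, nz, 3]) to
    particle positions (shape [n_part, 3], same length units as dx) using
    cloud-in-cell (trilinear) weights. Assumes particles sit at least half
    a cell away from the grid edges (no boundary handling for brevity)."""
    f = positions / dx - 0.5            # fractional cell-centred coordinates
    i0 = np.floor(f).astype(int)        # index of the lower neighbour cell
    t = f - i0                          # per-axis weight of the upper neighbour
    out = np.zeros((len(positions), field.shape[-1]))
    for ox in (0, 1):
        for oy in (0, 1):
            for oz in (0, 1):
                # product of the three 1D weights for this neighbour corner
                w = ((t[:, 0] if ox else 1.0 - t[:, 0]) *
                     (t[:, 1] if oy else 1.0 - t[:, 1]) *
                     (t[:, 2] if oz else 1.0 - t[:, 2]))
                out += w[:, None] * field[i0[:, 0] + ox,
                                          i0[:, 1] + oy,
                                          i0[:, 2] + oz]
    return out
\end{verbatim}
Each tracer velocity is thus a weighted average over the $2^3$ grid cells surrounding the tracer, which is the standard way to suppress grid-scale noise in Lagrangian tracer schemes.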
An additional velocity correction term was used in \citet{2017MNRAS.464.4448W} to account for mixing motions that might be underestimated in the case of complex flows \citep{Genel_2014_following_the_flow}. The velocity interpolation schemes have been extensively tested in \citet{2017MNRAS.464.4448W} and \citet{wittorPHD}.

The tracer particles use a temperature-based shock finder to detect shocks in the ICM. The corresponding Mach number is computed from the Rankine-Hugoniot relation, assuming $\gamma = 5/3$, as
\begin{align}
M = \sqrt{\frac{4}{5} \frac{T_{\mathrm{new}}}{T_{\mathrm{old}}} \frac{\rho_{\mathrm{new}}}{\rho_{\mathrm{old}}} + \frac{1}{5}}.
\end{align}
Here, $T$ and $\rho$ are the temperature and density in the post-shock (``new'') and pre-shock (``old'') regions.

We have specifically chosen the cluster simulation presented in \citet[][]{2016Galax...4...71W,2017MNRAS.464.4448W} for our analysis: \citet[][]{2017MNRAS.464.4448W} found that a significant fraction of the particles that produce giant radio relics at $z \approx 0$ have crossed several shocks before (see figure 12 and Section 3.5 therein). Hence, the radio emitting particles should have been subjected to several cycles of shock (re-)acceleration, making this simulation a perfect candidate for our analysis.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{FIGURES/video-snap.png}
\caption{Snapshot sequence at different times of, respectively, the galaxy cluster baryonic density (purple-yellow), the radio emission at $1.4$ GHz (light blue), and the tracers with their paths. The tracers change from yellow (not active) to blue (active) when they cross a shock front in the simulation.}
\label{fig:video}
\end{figure*}

We use a 3D rendering of this merger event to better describe the sequence of mergers (leading to multiple shock waves) that affects a particular sector of the cluster. Figure \ref{fig:video} shows a snapshot sequence obtained from a cinematic scientific visualization realized from the simulation data\footnote{The video is called ``\textit{The VLA shedding lights on the origin of radio relics}'' and it was recently awarded the $1^{st}$ prize in the NRAO Image Contest for the celebration of the VLA $40^{th}$ anniversary. The video is available at the following link: \url{https://vimeo.com/464248944/3fc17a5b8b}.}. The video shows the baryonic density (purple-yellow) surrounded by the volumetric radio emission at $1.4$ GHz (light blue) during the formation of the galaxy cluster. The tailed spheres highlight the evolution of a selection of tracers. Initially, all the tracers are yellow; when they cross a shock front, they are activated, changing color to bright blue. The sequence in Fig. \ref{fig:video} shows the evolution of two streams (``beams'') of tracers. This qualitative analysis of the cluster merger evolution shows that the tracers are crossed by multiple shocks before reaching their final position, i.e. an MS scenario. In this paper, we analyse the spectral evolution measured by the tracers in this simulation.
\subsection{Simulating the evolution of electron spectra}
\label{subsec:fokker}
We solve the time-dependent diffusion-loss equation of relativistic electrons represented by tracer particles, using the standard \citet{1970JCoPh...6....1C} finite-difference scheme implemented in a serial code written in the IDL language.
We used $N_{\rm b}=10^5$ equally spaced energy bins in the Lorentz factor range $\gamma_{\rm min} \leq \gamma \leq \gamma_{\rm max}$, with $\gamma_{\rm min}=1$ and $\gamma_{\rm max}=4.5\times10^5$ (hence $\rm d\gamma \approx 5$). The code we used to evolve our particle spectra is freely available\footnote{\url{https://github.com/FrancoVazza/IDL_FP}}.

We are concerned with the evolution of relativistic electrons injected and/or re-accelerated by shocks, at the periphery of clusters and on timescales of a few gigayears ($\leq 3 \rm ~Gyr$). For this specific task, we only have to evolve the energy spectra of $\sim 7000$ tracers, which is sufficient to sample the spatial extension of the radio relics formed in the system by $z \approx 0$. The combination of the limited number of tracers and of the relatively small number of snapshots to process (up to 238) allowed us to resort to the serial implementation of the Fokker-Planck solver already used in previous work \citep[e.g.][]{rajpurohit2020toothbrush}. Notice that, unlike in the more recent work presented in \citet{2021arXiv210204193V}, in this implementation we evolve the electron spectra in $\gamma$ space, and not in momentum space. This introduces a small error in the low-energy part of the spectra, where the injected distribution from shock acceleration is a power law in momentum space, but not in $\gamma$ space (since of course $E^2=m^2c^4+ p^2 c^2$). The ultra-relativistic simplification used here is however suitable when focusing on the radio emitting electrons ($\gamma \geq 10^2-10^3$), also considering that the particle population accumulated at low energies is small over the short time range considered \citep[e.g.][]{sa99}.

We considered a reduced Fokker-Planck equation without injection and escape terms (i.e. a Liouville equation) and neglected the spatial diffusion of cosmic rays (which is appropriate for the $\sim \rm MeV-GeV$ electrons considered in this work); the injection by shocks is instead added at each timestep (see Eq. \ref{eq13} below). This allows us to track the evolution of the number density of relativistic electrons as a function of their energy, $N(\gamma)$, computed separately for each tracer particle:
\begin{equation}
{\frac{\partial N}{\partial t}} = {\frac{\partial}{\partial \gamma}} \left[ N \left( \left|{\frac{\gamma}{\tau_{\rm rad}}}\right| + \left|{\frac{\gamma}{\tau_{\rm c}}}\right| + {\frac{\gamma}{\tau_{\rm adv}}} - \left|{\frac{\gamma}{\tau_{\rm DSA}}}\right| \right) \right].
\label{eq11}
\end{equation}
We use the approximation
\begin{equation}
\dot{\gamma} \approx \left|{\frac{\gamma}{\tau_{\rm rad}}}\right| + \left|{\frac{\gamma}{\tau_{\rm c}}}\right| + {\frac{\gamma}{\tau_{\rm adv}}} - \left|{\frac{\gamma}{\tau_{\rm DSA}}}\right| ,
\label{eq12}
\end{equation}
where $\tau_{\rm rad}$, $\tau_{\rm c}$, and $\tau_{\rm adv}$ are, respectively, the loss timescales for radiative, Coulomb, and expansion (compression) processes, which we define in Sec. \ref{sec:lossterm}; $\tau_{\rm DSA}$ is instead the acceleration timescale due to DSA, which we estimate in Sec. \ref{sec:DSAacc}. The numerical solution is obtained using the \citet{1970JCoPh...6....1C} finite-difference scheme:
\begin{equation}
N(\gamma,t+dt)=\frac{{{N(\gamma,t)}/{dt}} + N(\gamma+d\gamma,t+dt)\,{\dot{\gamma}}/{d\gamma}} {1/dt + {\dot{\gamma}}/{d\gamma}} + Q_{\rm inj}(\gamma) ,
\label{eq13}
\end{equation}
where, in the splitting scheme adopted to perform the finite differences, we assumed $N(\gamma +d\gamma/2)=N(\gamma+d\gamma)$ and $N(\gamma-d\gamma/2)=N(\gamma)$, and where $Q_{\rm inj}$ accounts for the injection by shocks.
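In practice, Eq. \ref{eq13} amounts to a single backward sweep over the energy grid per timestep. A minimal Python transcription (the production solver is written in IDL; array and function names here are our own, for illustration) reads:
\begin{verbatim}
import numpy as np

def advance_spectrum(N, gdot, dgamma, dt, Q_inj):
    """One implicit upwind step of Eq. (13). N and Q_inj are arrays over the
    gamma grid; gdot[i] > 0 is the total loss rate |dgamma/dt| in bin i,
    evaluated from the timescales of Eq. (12). Sketch, not production code."""
    N_new = np.zeros_like(N)            # top bin stays empty: no flux from above
    # sweep from the highest-energy bin down: electrons only cool to lower gamma
    for i in range(len(N) - 2, -1, -1):
        N_new[i] = (N[i] / dt + N_new[i + 1] * gdot[i + 1] / dgamma) \
                   / (1.0 / dt + gdot[i] / dgamma)
    return N_new + Q_inj                # quasi-instantaneous shock injection
\end{verbatim}
Because the update is implicit, it remains stable even when $dt$ is much longer than the cooling time of the highest-energy bins, which is what makes the $\approx 31 \rm ~Myr$ snapshot spacing usable for this integration.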
Shock injection is regarded as an almost instantaneous process, considering that the acceleration timescales are much shorter than the time step of our integration, $\delta t \approx 31 \rm ~Myr$ (see Eq. \ref{eq:tDSA} below). \subsubsection{Loss Terms} \label{sec:lossterm} The timescales associated to the energy losses by radiative, Coulomb and expansion (compression) processes are given by the following formulae, adapted from \citet{bj14}: \begin{equation} \tau_{\rm rad} =\frac {7720 \rm ~Myr} {(\gamma/{300})\left[\left(\frac{B}{3.25 \rm \mu G}\right)^2 + (1+z)^4\right]} , \label{eq:ic} \end{equation} \begin{equation} \tau_{\rm c} = 7934 \rm ~Myr \left\{ {{\frac{n/10^{-3}}{{\gamma/300}}}} \left( 1.168 + {\frac{1}{75}}\ln \left( {\frac{\gamma/300}{ n/10^{-3} }} \right) \right) \right\}^{-1} \label{eq:coulomb} \end{equation} and \begin{equation} \tau_{\rm adv} = \frac{951 \rm ~Myr}{ \nabla \cdot \mathbf{v}/10^{-16}} , \label{eq:adv} \end{equation} \noindent where the density $n$ is measured in [$\rm cm^{-3}$], $B$ in [$\rm \mu G$] and the gas velocity divergence $\nabla \cdot \mathbf{v}$ in [$1/\rm s$]. Bremsstrahlung losses can be safely neglected in this case, because for the typical ICM conditions encountered here their timescale is much longer than those of all other loss channels. Inverse Compton and synchrotron losses are by far the most relevant for the evolution of the electrons considered in this work, owing to their peripheral location and low gas density. \subsubsection{Shock (re-)acceleration} \label{sec:DSAacc} Predicting the spectrum of ``fresh'' relativistic electrons injected by weak shocks, as well as their spectrum after shock re-acceleration, is far from being a solved problem. In this paper we follow a relatively simple approach, motivated by the existing literature on the subject and meant to simplify the steps to determine the post-shock spectrum of radio emitting electrons. We rely here on the DSA model by \citet{kr11}, which assumes that the injection Lorentz factor of electrons is related to the injection momentum ($\gamma_{\rm inj}=\sqrt{1+p^2_{\rm inj}/m_e^2c^2}$), where $p_{\rm inj}$ in DSA is assumed to be a multiple of the thermal momentum of {\it protons}, i.e. $p_{\rm inj}= \xi p_{\rm th}$ ($p_{\rm th}=\sqrt{2 k_b T_d m_p}$, where $k_b$ is the Boltzmann constant). Following \citet{kr11}, we compute $\xi$ based on the fit formula given from their one-dimensional convection-diffusion simulations: \begin{equation} \xi_{\rm inj}=1.17 \frac{m_p v_d}{p_{\rm th}} \left(1+\frac{1.07}{\epsilon_B}\right)\left(\frac{\mathcal{M}}{3}\right)^{0.1} , \end{equation} where $v_d$ is the downstream shock velocity and $\epsilon_B = B_0/B_\perp$ is the ratio between the downstream magnetic field generated by the shock, $B_0$, and the magnetic field component perpendicular to the shock normal, $B_\perp$. We set here $\epsilon_B=0.23$ \citep[][]{2013MNRAS.435.1061P}, obtaining values in the range $\xi_{\rm inj} \sim 2.5-3.5$ and $\gamma_{\rm inj} \sim 10-20$ for our shocks. The source term for relativistic electrons in Eq. \ref{eq13} assumes an energy distribution that follows a power law \citep[e.g.][]{1962SvA.....6..317K,sa99}: \begin{equation} Q_{\rm inj}(\gamma) = K_{\rm inj,e} ~\gamma^{-\delta_{\rm inj}} \left(1-\frac{\gamma}{\gamma_{\rm cut}}\right)^{\delta_{\rm inj}-2} , \label{eq:xi} \end{equation} in which the initial slope of the input momentum spectrum, $\delta_{\rm inj}$, is computed based on the standard DSA prediction, i.e.
$\delta_{\rm inj} = 2 (\mathcal{M}^2+1)/(\mathcal{M}^2-1)$. The cut-off energy, $\gamma_{\rm cut}$, is defined for every shocked tracer as the maximum energy beyond which the radiative cooling timescale is shorter than the acceleration timescale, $\tau_{\rm DSA}$: \begin{equation} \tau_{\rm DSA} = \frac{3~D(E)}{V_s^2} \cdot \frac{r(r+1)}{r-1} , \label{eq:tDSA} \end{equation} in which $r$ is the shock compression factor, $V_s$ is the shock velocity, and $D(E)$ is the diffusion coefficient of relativistic electrons as a function of their energy \citep[e.g.][]{gb03}. The specific energy-dependent value of $D(E)$ is poorly constrained, because it depends on the local conditions of the turbulent plasma, and it is critical in limiting the maximum energy attainable through DSA \citep[e.g.][]{ka12}. However, the latter is not an issue for our simulation, because all plausible choices of $D(E)$ in Eq.~\ref{eq:tDSA} give an acceleration timescale many orders of magnitude smaller than the typical cooling time of radio emitting electrons, whose energy distribution can be assumed to follow a power law within the energy range of interest, at least at the moment of their injection. We can therefore set $\gamma_{\rm cut} = \gamma_{\rm max}$ in this work. This also motivates the fact that we can model shock injection by DSA by adding the newly created population of particles across timesteps (see Eq.~\ref{eq13} above), without integrating a source term as needed for the much slower re-acceleration by turbulence (see below). Under these assumptions, the rate of injection of relativistic electrons in the downstream is: \begin{eqnarray} K_{\rm inj,e}= 4 \pi ~ K_{e/p} \int_{p_{\rm inj}}^{p_{\rm cut}} (\sqrt{p^2+1}-1) f_N ~p^{-(\delta_{\rm inj}+2)} \cdot \nonumber \\ \cdot \exp[-(p/p_{\rm cut})^{2}] ~p^2 dp ~dx_t^2 ~V_s ~dt \label{eq:phicr} \end{eqnarray} with \begin{equation} f_N = \frac{n_d}{\pi^{3/2}}p_{\rm th}^{-3} \exp(-\xi_{\rm inj}^2) \end{equation} and where $K_{e/p}$ is the electron-to-proton ratio. Following \citet{2020JKAS...53...59K} we use $K_{e/p}=(m_p/m_e)^{(1-\delta_{\rm inj})/2}$, which gives $K_{e/p} \sim 10^{-2}$ for an injection spectral index of $\delta_{\rm inj} \approx 2.3$, in line with the injection spectral index of local Galactic supernova remnants \citep[e.g.][]{2007Natur.449..576U}. $dx_t^2$ is the surface element associated to each shocked tracer particle. It is computed considering that $dx_t^3 = dx^3/n_{\rm tracers}$ is the initial volume associated to every tracer at the epoch of injection ($n_{\rm tracers}$ being the number of tracers in every cell), and that $dx_t(z)^3=dx_t^3 \cdot \rho_t/\rho(z)$ gives the relative change of the volume associated to each tracer as a function of $z$, based on the ratio between the density at injection, $\rho_t$, and the density of the cell where each tracer sits as a function of redshift, $\rho(z)$. This procedure allows us to estimate the acceleration efficiency of relativistic electrons at the shock, at least to a first degree of approximation and with a modest computing time. The physical uncertainty behind this is of course very large, and dedicated simulations are needed to fully solve the acceleration cycle of relativistic electrons by weak merger shocks, for the possible range of shock obliquities and typical plasma conditions of the ICM \citep[][]{Guo_eta_al_2014_II,2015PhRvL.114h5003P,2019ApJ...876...79K,2020ApJ...897L..41X,2021ApJ...915...18H}.
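For reference, the DSA slope and the electron-to-proton ratio adopted above follow directly from the Mach number; a short sketch (ours) reproduces the numbers quoted in the text:
\begin{verbatim}
MP_OVER_ME = 1836.15   # proton-to-electron mass ratio

def delta_inj(mach):
    """DSA injection slope: delta_inj = 2 (M^2 + 1) / (M^2 - 1)."""
    return 2.0 * (mach**2 + 1.0) / (mach**2 - 1.0)

def k_ep(d_inj):
    """Electron-to-proton ratio K_e/p = (m_p/m_e)^((1 - delta_inj)/2)."""
    return MP_OVER_ME ** ((1.0 - d_inj) / 2.0)

for M in (2.3, 3.8):
    d = delta_inj(M)
    print(f"M = {M}: delta_inj = {d:.2f}, K_e/p = {k_ep(d):.1e}")
# M = 3.8 yields delta_inj ~ 2.30 and K_e/p ~ 7.5e-3, i.e. the ~1e-2
# value quoted in the text for delta_inj ~ 2.3.
\end{verbatim}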
\bigskip Besides the {\it direct} injection of relativistic electrons by shocks, we also include the effect of shock {\it re}-acceleration on existing relativistic electrons \citep[e.g.][]{2005ApJ...627..733M,kr11,ka12}. According to DSA, the input particle spectrum, $N_0(x)$, becomes \begin{eqnarray} N(\gamma)=(\delta_{\rm inj}+2) \cdot \gamma^{-\delta_{\rm inj}} \int_{\gamma_{\rm min,re}}^\gamma N_0(x) x^{\delta_{\rm inj}+1} dx , \label{eq:shock_re} \end{eqnarray} where $\delta_{\rm inj}$ is the local slope within each energy bin. We consider that the minimum momentum for electron re-acceleration by shocks is the injection momentum $p_{\rm inj}$, above which DSA is expected to operate \citep{2020JKAS...53...59K}. We therefore set $\gamma_{\rm min,re} = \gamma_{\rm inj}$ as the lower bound of the integration in Eq. \ref{eq:shock_re}. \section{MS scenario of electron re-acceleration} \label{sec:families} In this section, we analyze the properties of the tracer particles used to probe the evolution of the simulation, focusing on the times at which they are shocked. We select more than 7000 tracers that cross shocks with Mach number $M\geq2$ during the entire evolution of the simulation. By construction, all these tracers cross a shock at the end of the simulation, $t_{\mathrm{end}}=13.76$ Gyrs. We investigate whether these tracers crossed other shocks before the final one and, if so, how many times. We divide the tracers into different families according to the number of shocks they cross during the simulation. Tracers of Family 1 are only accelerated by the shock at the end of the simulation. Families 2, 3 and 4 have been shocked respectively one, two and three times before they cross the final shock. Details of the family populations are collected in Tab. \ref{tab:families_stat}. \begin{table} \centering \begin{tabular}{c|cc|cc} & Relic A & & Relic B & \\ \hline Family 1 & 1297 & 91.92\% & 1709 & 28.91\% \\ Family 2 & 114 & 8.08\% & 3687 & 62.36\% \\ Family 3 & & & 489 & 8.27\% \\ Family 4 & & & 27 & 0.46\% \\ Total & 1411 & & 5912 & \end{tabular} \caption{Tracer population statistics in the different families for both Relic A and Relic B.} \label{tab:families_stat} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{FIGURES/tracer_pos_all_2D.pdf} \caption{2D map of the tracer positions at the final step. We distinguish two relics for the analysis. The black oblique line (defined by the equation in the legend) divides the regions of the two relics, namely Relic A (left) and Relic B (right).} \label{fig:tracer_pos} \end{figure} Figure \ref{fig:tracer_pos} shows the $(x,y)$ projection of Family 1 (blue), Family 2 (red), Family 3 (green), and Family 4 (orange) tracers at $t_{\mathrm{end}}$. According to these positions, we divide the tracers in two groups, named "Relic A" and "Relic B" in Fig. \ref{fig:tracer_pos}. We observe that Relic A is composed mostly of Family 1 tracers, with a $8 \ \%$ contribution of Family 2 tracers. Relic B, instead, is composed for more than $62 \ \%$ of Family 2 tracers and for $\sim28 \ \%$ of Family 1 tracers, with smaller contributions from families with a higher number of shocks, as reported in Tab. \ref{tab:families_stat}. As a first approach, also motivated by the fact that the differences in the timing of shocks within each family of electrons are typically $\leq 1 \rm ~Gyr$, we computed the energy evolution of each family based on the family-averaged fields, i.e.
assuming at each timestep that the entire family of particles is characterised by the same values of density, temperature and magnetic field, and that all particles in the same family are shocked at the same time. For this family-averaged analysis, we chose the shock times of each family as the ones at which the majority of the tracers cross a shock simultaneously. This is of course a gross approximation, but it is enough to allow us to obtain some first important information on the electron energy spectrum in the MS scenario and on the subsequent radio emission. A detailed report of the family-averaged approach is available in Appendix \ref{sec:appendix} of this paper. \subsection{Relic A} For Relic A, the family-averaged quantities are collected in Tab. \ref{tab:relicA_stat}. We use these quantities to compute the time evolution of the electron energy spectrum according to the model introduced in Sec. \ref{subsec:fokker}. \begin{table} \centering \begin{tabular}{c|c|c|c|c|c} & Time [Gyrs] & Mach & B [$\mu$G] & $\rho$ [g cm$^{-3}$] & T [K]\\ \hline Family 1 & & & & & \\ Shock 1 & 13.76 & 2.6 & $1.7\times10^{-1}$ & $1.3\times10^{-28}$ & $2.5\times10^{7}$ \\ \hline Family 2 & & & & & \\ Shock 1 & 12.69 & 3.8 & $1.1\times10^{-1}$ & $1.6\times10^{-28}$ & $2.7\times10^{7}$ \\ Shock 2 & 13.76 & 2.3 & $2.2\times10^{-1}$ & $2.0\times10^{-28}$ & $3.1\times10^{7}$ \end{tabular} \caption{Family-averaged quantities at the selected shock times for the families in Relic A.} \label{tab:relicA_stat} \end{table} Figure \ref{fig:ele_A} shows the time evolution of the electron energy spectrum for the Family 2 population of Relic A. The electron population is produced by the first shock at $t_1=12.69$ Gyrs with a power-law spectrum (purple dashed line) and, as time evolves, we observe a cooling of the high-energy tail of the spectrum, which produces a cut-off at $\gamma\sim10^3$ right before the second shock. After the final shock at $t_{\mathrm{end}}=13.76$ Gyrs, the electron energy spectrum is no longer a power law: electrons are accelerated up to $\gamma\sim10^5$, with a soft knee in the slope around $\gamma\sim10^3$, corresponding to the cut-off energy before the shock (red solid line). However, we are cautious about the results in the low energy part of the spectra, considering the limits of the Fokker-Planck code described in Sec. \ref{subsec:fokker}. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{FIGURES/ele_Fam2_A.pdf} \caption{Time evolution of the electron energy spectrum, computed from family-averaged quantities, for Family 2 tracers in Relic A. Dashed lines correspond to the spectral evolution after the first shock. The red solid line represents the electron spectrum after the second shock.} \label{fig:ele_A} \end{figure} \subsection{Relic B} For Relic B, the family-averaged quantities are reported in Tab. \ref{tab:relicB_stat}. We use these quantities to compute the time evolution of the electron energy spectrum according to the model introduced in Sec. \ref{subsec:fokker}.
\begin{table} \centering \begin{tabular}{c|c|c|c|c|c} & Time [Gyrs] & Mach & B [$\mu$G] & $\rho$ [g cm$^{-3}$] & T [K]\\ \hline Family 1 & & & & & \\ Shock 1 & 13.76 & 2.7 & $1.6\times10^{-1}$ & $3.1\times10^{-28}$ & $6.5\times10^{7}$ \\ \hline Family 2 & & & & & \\ Shock 1 & 12.82 & 3.5 & $1.9\times10^{-1}$ & $5.1\times10^{-28}$ & $3.2\times10^{7}$ \\ Shock 2 & 13.76 & 2.8 & $1.4\times10^{-1}$ & $2.7\times10^{-28}$ & $6.6\times10^{7}$ \\ \hline Family 3 & & & & & \\ Shock 1 & 12.56 & 2.4 & $1.4\times10^{-1}$ & $2.8\times10^{-28}$ & $1.9\times10^{7}$ \\ Shock 2 & 13.31 & 2.4 & $3.8\times10^{-1}$ & $3.7\times10^{-28}$ & $4.0\times10^{7}$ \\ Shock 3 & 13.76 & 2.8 & $1.5\times10^{-1}$ & $2.9\times10^{-28}$ & $6.7\times10^{7}$ \\ \hline Family 4 & & & & & \\ Shock 1 & 7.82 & 2.3 & $6.2\times10^{-1}$ & $4.9\times10^{-28}$ & $1.0\times10^{7}$ \\ Shock 2 & 10.98 & 2.9 & $2.0\times10^{-1}$ & $9.4\times10^{-28}$ & $6.4\times10^{7}$ \\ Shock 3 & 13.37 & 2.1 & $5.6\times10^{-1}$ & $6.2\times10^{-28}$ & $5.5\times10^{7}$ \\ Shock 4 & 13.76 & 2.3 & $1.8\times10^{-1}$ & $3.4\times10^{-28}$ & $7.5\times10^{7}$ \end{tabular} \caption{Family-averaged quantities at the selected shock times for the families in Relic B.} \label{tab:relicB_stat} \end{table} Figure \ref{fig:ele_B} shows the time evolution of the electron energy spectrum obtained from the Fokker-Planck model described in Sec. \ref{sec:method}, using the averaged quantities of the Family 2 tracer population of Relic B listed in Tab. \ref{tab:relicB_stat}. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{FIGURES/ele_Fam2_B.pdf} \caption{Time evolution of the electron spectra, computed from family-averaged quantities, for Family 2 tracers in Relic B. Dashed lines correspond to the spectral evolution after the first shock. The red solid line represents the electron spectrum after the second shock.} \label{fig:ele_B} \end{figure} We observe a behaviour of the electron energy spectrum similar to that of Relic A. However, since the Family 2 population of Relic B is more than one order of magnitude larger than the Family 2 population of Relic A, the absolute value of the electron energy spectrum of Relic B is approximately one order of magnitude higher than the Relic A spectrum. Similar electron energy spectra have been obtained for the other families in Relic B (not shown), whose evolution under MS-scenario acceleration is consistent with that shown here for the Family 2 population. The family-averaged analysis shown here allowed us to witness the different evolution of the electron energy spectra in the MS scenario compared to a single-shock scenario. However, we noticed that the averaged analysis introduces a large variation in the computed electron energy spectra and, consequently, in the associated radio emission (see Appendix \ref{sec:appendix}). In the next Section (Sec. \ref{sec:integrated_radio}) we shall instead compute the detailed radio emission based on the specific sequence of physical fields recorded by each tracer during its evolution, and compute the integrated radio emission across the relic by combining the information of all tracers in all families.
\section{Integrated radio emission} \label{sec:integrated_radio} In this section, we study the integrated radio emission along the same viewing angle of Fig.~\ref{fig:video}, obtained using the electron spectra produced via the Fokker-Planck integration over $\sim7000$ tracers (Sec. \ref{sec:method}). Contrary to the family-averaged analysis discussed in the previous section, we now compute the energy spectra using the values of density, temperature and magnetic field recorded by each tracer. At the final position, we compute the radio emission for both Relic A and Relic B. Figures \ref{fig:radio140} and \ref{fig:radio1400} show the integrated radio emission maps, respectively, at $140$ MHz and $1.4$ GHz, in which we separate the emission contributions of the different families. Assuming that the source is observed at $z=0.15$, we calculate the integrated radio emission with a beam size of $63.7$ kpc, corresponding to the $25"$ resolution of the LOFAR telescope. \begin{figure*} \centering \includegraphics[width=\textwidth]{FIGURES/NO-140-radiation-grid.pdf} \caption{Integrated radio emission $(x,y)$ projection at $140$ MHz for Family 1 (top-left), Family 2 (top-right), Family 3 (bottom-left), and Family 4 (bottom-right). The black solid line divides the particle populations of Relic A and Relic B.} \label{fig:radio140} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{FIGURES/NO-1400-radiation-grid.pdf} \caption{Integrated radio emission $(x,y)$ projection at $1.4$ GHz for Family 1 (top-left), Family 2 (top-right), Family 3 (bottom-left), and Family 4 (bottom-right). The black solid line divides the particle populations of Relic A and Relic B.} \label{fig:radio1400} \end{figure*} Focusing on Relic A, we see that the integrated radio emission is mostly dominated by the Family 1 population and reaches a peak of $\lesssim 10^3$ mJy at $140$ MHz, while the radio emission of Family 2 is concentrated in the lower-right corner of the relic and its integrated value is about one order of magnitude lower. We conclude that, in Relic A, the visible radio emission is mostly dominated by the Family 1 population, i.e. by newly accelerated electrons. This object appears therefore as a ``classic'' powerful radio relic, in which all or most of the observed emission is due to the latest shock, which has energised a pool of fresh electrons that are being observed within a cooling time since their first acceleration. Interestingly, the situation is very different for the nearby Relic B, whose integrated radio emission of $\sim 10^3$ mJy at $140$ MHz is dominated by the Family 2 population. The electrons of Family 1 and Family 3 are confined to a small sub-volume of Relic B, albeit with an overall comparable radio emission to each other. The emission of Family 4, instead, remains negligible at all frequencies, due to its smaller occupation fraction. \begin{figure} \centering \includegraphics[width=\columnwidth]{FIGURES/tot_emission.pdf} \caption{Relic-integrated radio spectrum for Relic A (blue) and Relic B (red).} \label{fig:tot_spectra} \end{figure} In Figure \ref{fig:tot_spectra}, we give the total emission for the ``single zone'' analysis of the two relics, i.e. obtained by integrating the CRe emission over the volume of each relic. In a single-zone, standard view of radio relics, a $\langle \alpha \rangle \sim -1.55$ would be associated with a shock with $\mathcal{M} \approx 2.1$.
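The quoted correspondence between $\langle \alpha \rangle$ and the Mach number can be checked by inverting the standard stationary DSA relations. A small sketch (ours), assuming the integrated spectral index obeys $\alpha_{\rm int}=\alpha_{\rm inj}-0.5=-\delta_{\rm inj}/2$:
\begin{verbatim}
import math

def mach_from_alpha_int(alpha_int):
    """Invert alpha_int = -delta_inj/2 together with
    delta_inj = 2 (M^2 + 1) / (M^2 - 1),
    i.e. M = sqrt((delta + 2) / (delta - 2))."""
    delta = -2.0 * alpha_int
    return math.sqrt((delta + 2.0) / (delta - 2.0))

print(round(mach_from_alpha_int(-1.55), 2))  # ~2.15, i.e. M ~ 2.1
\end{verbatim}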
In order for the shock to produce the observed emission of 96 mJy at 1.4 GHz, a single-zone \citet{hb07} method requires dissipating a fraction of the order of $K_{\rm inj,e} \approx 3\times10^{-4}-10^{-3}$ of the kinetic energy flux across the shock into electron acceleration, which is very large for such a weak shock, based on DSA \citep[e.g.][]{2020JKAS...53...59K}. This is a common finding of real observations, which have routinely reported required acceleration efficiencies of even $\sim 100 \%$, or larger, in several objects \citep[e.g.][]{2016MNRAS.461.1302E,Stuardi2019,2020A&A...634A..64B}. Our analysis instead shows that, even in the absence of a nearby active galactic source of radio electrons, sectors of galaxy clusters affected by multiple shock crossings can have their emission boosted to a level compatible with observations, owing to the re-acceleration of fossil particles injected at $\leq 0.5-1 \rm ~Gyr$ time intervals. Detailed values of the single-zone radio emission of the two relics at indicative frequencies are reported in Tab. \ref{tab:relic_emission}. \begin{table} \centering \begin{tabular}{c|c|c} & Relic A & Relic B \\ \hline 140 MHz & 515 mJy & 2768 mJy \\ 400 MHz & 179 mJy & 684 mJy \\ 1.4 GHz & 41 mJy & 96 mJy \end{tabular} \caption{Relic-integrated radio emission.} \label{tab:relic_emission} \end{table} \bigskip \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{FIGURES/LOWFAR-140-radiation-grid.pdf} \includegraphics[width=0.49\textwidth]{FIGURES/LOWFAR-1400-radiation-grid.pdf} \caption{Integrated radio emission for all families at $140$ MHz (left) and $1.4$ GHz (right) in gray scale. The coloured contours in the two plots indicate the observable radio flux using the LOFAR threshold of $0.2$ mJy at $140$ MHz and the JVLA threshold of $0.02$ mJy at $1.4$ GHz, respectively. The black solid line divides the particle populations of Relic A and Relic B.} \label{fig:emission_observed} \end{figure*} We investigate the possibility of observing the simulated relics by comparing the integrated radio emission obtained from the Fokker-Planck model with the sensitivity of real observations. Figure \ref{fig:emission_observed} shows, in gray scale, the integrated radio emission with the contribution of all families. On top of that, the coloured contours indicate the observable radio emission using the LOFAR threshold of $0.2$ mJy at $140$ MHz and the JVLA threshold of $0.02$ mJy at $1.4$ GHz, respectively. We can conclude that parts of the two radio relics generated in our numerical studies are bright enough to be observable. \subsection{Spectral Index Map} To investigate possible differences in the spectral index properties at the shock and the energy losses in the post-shock region, we analyzed the spectral index profile across the radio relics. We obtain the spectral index map by fitting a first-order polynomial (i.e. $y=a_1x + a_0$) to the integrated radio emission calculated in Sec. \ref{sec:integrated_radio}, in log-log space, between the frequencies $140$\,MHz and $400$\,MHz, and between $400$\,MHz and $1.4$\,GHz. For this first-order fit, the spectral index $\alpha$ is the slope, i.e. $\alpha=a_1$, so that the radio emission $I$ scales locally at each frequency $\nu$ as $I\propto\nu^{\alpha}$.
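A minimal sketch (ours) of this per-pixel fit, using a first-order polynomial in log-log space between two maps at different frequencies:
\begin{verbatim}
import numpy as np

def spectral_index_map(I_low, I_high, nu_low, nu_high):
    """Per-pixel first-order polynomial fit in log-log space,
    log10(I) = a1 * log10(nu) + a0; the slope a1 is the spectral
    index alpha, so that I ~ nu^alpha."""
    x = np.log10([nu_low, nu_high])
    alpha = np.full(I_low.shape, np.nan)
    for idx in np.ndindex(I_low.shape):
        if I_low[idx] > 0 and I_high[idx] > 0:   # skip empty pixels
            a1, a0 = np.polyfit(x, np.log10([I_low[idx], I_high[idx]]), 1)
            alpha[idx] = a1
    return alpha

# e.g. alpha_140_400 = spectral_index_map(map_140, map_400, 140e6, 400e6)
\end{verbatim}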
\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{FIGURES/spiall1.pdf} \includegraphics[width=0.49\textwidth]{FIGURES/spiall2.pdf} \caption{(left) $\alpha_{140\,\rm MHz}^{400\,\rm MHz}$ spectral index map and (right) $\alpha_{400\,\rm MHz}^{1.4\,\rm GHz}$ spectral index map from the contribution of all family populations. The red lines indicate the position of the shock front of each relic, while the black lines over the relics indicate the lineouts of Fig. \ref{fig:index_profile}.} \label{fig:spectral_index} \end{figure*} Figure \ref{fig:spectral_index} shows the $\alpha_{140\,\rm MHz}^{400\,\rm MHz}$ and $\alpha_{400\,\rm MHz}^{1.4\,\rm GHz}$ spectral index maps, obtained from the contribution of all family populations. The two relics have rather distinct spectral index properties, which may reflect the different histories of shock acceleration in the two cases. Relic A, dominated by the Family 1 population, shows a spectral index in the range $-1.0$ to $-1.2$ between $140$ and $400$ MHz and $-1.2$ to $-1.5$ between $400$ MHz and $1.4$ GHz. Relic B, dominated instead by the Family 2 population, shows a spectral index in the range $-1.2$ to $-1.5$ between $140$ and $400$ MHz and $-1.4$ to $-2.0$ between $400$ MHz and $1.4$ GHz. In particular, the presence of multiple shock acceleration events makes Relic B brighter than Relic A, despite its steeper radio spectral index. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{FIGURES/index-width-A.pdf} \includegraphics[width=0.49\textwidth]{FIGURES/index-width-B.pdf} \vspace{-.6cm} \caption{Spectral index profiles for Relic A (left) and Relic B (right) in two different frequency ranges. The different black lines correspond to the profile positions of each relic, as indicated in Fig. \ref{fig:spectral_index}. The origin of the profiles corresponds to the shock front location of each relic.} \label{fig:index_profile} \end{figure*} Figure \ref{fig:index_profile} shows the spectral index profiles of Relic A and Relic B at the line-outs indicated by the black lines in Fig. \ref{fig:spectral_index}. For each relic, we indicate the position of the shock front and its propagation with a red line in Fig. \ref{fig:spectral_index}, and we compute the line profiles along the direction perpendicular to the shock front, starting at its position. For Relic A, we observe an almost constant distribution of the spectral index along its profile, around a value of $-1$ at low frequency and $\sim -1.2$ at high frequency; however, the resolution for this relic is not high enough to draw strong conclusions from this finding. For Relic B, instead, we notice that the spectral index has a generally more fluctuating behaviour, with a steeper radio spectrum at both frequencies, between $\sim -1.3$ and $-1.4$ at low frequency and between $\sim -1.5$ and $-2.0$ at high frequency, which remains nearly constant even $\geq 200$ $\rm kpc$ away from the shock edge. The latter behaviour appears as a natural consequence of the MS scenario, in the sense that the emission of Relic B is dominated by the low-energy component of re-accelerated electrons, whose radio emission remains high also away from the shock edge.
However, since the time elapsed since the epoch of the first injection of electrons in the MS scenario can vary from case to case, depending on the specific accretion history of the host cluster and on the dynamics of the cluster sector where the relics form, different timings of accretion should be reflected in different steepening frequencies for real observed relics. \subsection{Radio colour-colour Diagram} The shape of the relativistic electron distribution in radio sources can be studied by means of so-called colour-colour diagrams \citep{Katz1993, Rudnick1996, Rudnick2001}. These diagrams emphasize the spectral curvature, because they compare spectral indices calculated in low- and high-frequency ranges. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{FIGURES/all_data.pdf} \includegraphics[width=0.49\textwidth]{FIGURES/family1.pdf} \includegraphics[width=0.49\textwidth]{FIGURES/family2.pdf} \includegraphics[width=0.49\textwidth]{FIGURES/family3.pdf} \caption{Radio colour-colour plots of the simulated relics superimposed with the spectral ageing JP (magenta) and KGJP (green) models, obtained using an injection index of $-0.90$. For the KGJP model, particles are injected continuously for about 16\,Myr. Colour-colour plot for all families (top-left), Family 1 (top-right), Family 2 (bottom-left), and Family 3 (bottom-right). The KGJP model fits the distribution well.} \label{fig:color2} \end{figure*} In our case, the low frequency spectral index values were calculated between 140 and 400\,MHz, while the high frequency ones between 400\,MHz and 1.4\,GHz. By this convention, the curvature is negative for a convex spectrum. The resulting colour-colour plots are shown in Fig. \ref{fig:color2}. The dashed black line indicates a power-law spectrum, where $\alpha_{140\,\rm MHz}^{400\,\rm MHz}=\alpha_{400\,\rm MHz}^{1.4\,\rm GHz}$. Any curve deviating from the power-law line represents a spectrum with changing spectral curvature. As visible in Fig.\,\ref{fig:color2} (top-left), we find a clear negative curvature, as also reported for some of the well-known relics, for example the Toothbrush \citep{rajpurohit2020toothbrush}, the Sausage \citep{digennaro2018saus}, and MACS\,J0717.5+3745 \citep{rajpurohit2021macsspec}. The single continuous trajectory in the colour-colour plot suggests that the spectrum also has a single shape. We also superimposed on the resulting plot the conventional spectral ageing models, namely JP and KGJP \citep{Komissarov1994}, adopting an injection index of $-0.90$. The JP model assumes a single burst of particle acceleration and a continued isotropization of the angle between the magnetic field and the electron velocity vectors (the so-called pitch angle) on a timescale shorter than the radiative timescale. An extension of the JP model is the KGJP model, which includes a finite time of particle injection. In the Fokker-Planck model (see Section\,\ref{sec:method}) used for the computation of the radio spectra, we use a JP model for the synchrotron energy losses \citep{1973A&A....26..423J}. At first sight, it may seem surprising that the JP model never fits the data (Fig.\,\ref{fig:color2}). As discussed in \cite{rajpurohit2020toothbrush,rajpurohit2021macsspec}, a perfectly edge-on shock front with a uniform magnetic field can be described by the JP model. However, if the shock front is inclined with respect to the line of sight, different spectral ages are present and contribute to the observed spectrum.
In this case, the colour-colour distribution follows the KGJP model. As seen in Fig.\,\ref{fig:color2}, the KGJP model with an injection index of $\approx -0.90$ can describe the entire distribution quite well, consistent with what is found for the Toothbrush and the Sausage relics \citep{rajpurohit2020toothbrush,digennaro2018saus}. Among the relics, Relic B shows the maximum curvature, but both relics follow the same single curve. We do not find any significant difference in the curvature distribution between the different families, see Fig. \ref{fig:color2}. We note that there are no data points in the range $-0.5$ to $-0.90$ for both the low and high frequency spectral index values. This can be understood considering that the radio emission properties derived with our Fokker-Planck model have the intrinsic limitation that all particles (even the ones injected by the latest shock in the simulation) are evolved for at least one timestep. Hence, even the youngest population of electrons in both our relics has evolved for one timestep after shock injection, with a duration $\rm \Delta t \approx 30~ \rm Myr$, and the effects of synchrotron and inverse Compton losses on the observed radio spectrum are already visible at radio emitting frequencies. In summary, the fact that both our relics compare reasonably well with the colour-colour diagrams of real systems (and especially the circumstance that our Relic B is in line with the KGJP model) further confirms that the MS-scenario acceleration explored in this work may indeed give rise to realistic relic configurations, albeit non-trivial to tell apart from single-injection models, at least based on their colour-colour plots. \section{Conclusions} \label{sec:conclusion} We have simulated the evolution of a radio emitting population of relativistic electrons accelerated by multiple merger shock waves, released during the formation of a massive, $M_{200} \approx 9.7 \cdot 10^{14} \ \mathrm{M}_{\odot}$, galaxy cluster \citep[][]{2016Galax...4...71W,2017MNRAS.464.4448W}. We focused on the spatial and dynamical evolution of $\sim10^4$ tracers, which are located in luminous relic-like structures at the end of our run. We assumed DSA as the source of fresh relativistic electrons out of the thermal pool, and applied a Fokker-Planck method to integrate their energy evolution under radiative losses and further re-acceleration events by merger-induced shocks. In our scenario, only shock waves can be the source of cosmic-ray electrons, yet multiple shock waves sweeping the ICM may produce a pre-acceleration of relativistic electrons, qualitatively similar to what radio galaxies are expected to do \citep[e.g.][]{2005ApJ...627..733M, 2011ApJ...728...82M, kr11, ka12, 2014ApJ...788..142K, 2013MNRAS.435.1061P, 2020A&A...634A..64B}. In particular, we identified a specific multiple-shock (MS) scenario, in which particles cross a shock multiple times before ending up in realistic $\sim ~ \rm Mpc$-sized radio relics. Depending on the number of MS events, CRe with a different evolution can become radio visible, regardless of the strength of the final shock event. One of our relics (Relic A) is found to be mostly dominated by a population of tracers which were shocked only just before the epoch of the relic formation, and has a very faint emission, only partially detectable with LOFAR.
In this respect, this object appears similar to the recently discovered "Cornetto" relic \citep[][]{locatelli2020dsa}, which was suggested to be the prototype of low-surface brightness radio relics, powered only by freshly injected electrons. On the other hand, we measured that the emission of a second relic in the system (Relic B) is dominated by MS-scenario accelerated electrons. We use the shock information collected with the tracers to study the evolution of the relativistic electrons injected at the shocks, and the associated radio emission, via the Fokker-Planck solver described in Sec. \ref{subsec:fokker}. We observe that the electron energy spectrum of MS-scenario accelerated families differs significantly from the power-law spectrum obtained after a single shock injection, and that their emission is higher than the emission of electrons that were only shocked at the end of the simulation, by up to at least $\sim1$ order of magnitude. We computed the total radio emission produced by all accelerated electron families in both relics, emulating the threshold parameters of the LOFAR telescope at $140$ MHz and of the JVLA at $1.4$ GHz, obtaining that both relics can be detected by observations, in particular at lower frequencies. From the analysis of the spectral index maps, we observed that Relic A shows relatively flat spectral index values compared to Relic B, suggesting that the presence of an MS-scenario evolution of fossil electrons may influence the slope of the radio spectrum observed in different relics. The radio colour-colour analysis revealed a single continuous curve for both Relics A and B, as well as for all families. The curvature distribution can be well explained by the KGJP model. This suggests that, at least in systems whose past evolution is characterised by multiple accretion events, for example objects with prominent filamentary accretions and a past with multiple mergers, such as Abell 2744 \citep{2004MNRAS.349..385K,raj21}, Coma \citep{2011MNRAS.412....2B,bo21} and the Toothbrush cluster \citep{2012A&A...546A.124V,rajpurohit2020toothbrush}, a significant fraction of the observed radio emission can be the product of MS-scenario acceleration, with the effect of an apparent boost of the acceleration efficiency with respect to ``single shock'' models. Even though this work is based on a single simulation, we found differences between the radio spectra produced in the MS scenario and in the single-shock scenario. In particular, in the MS scenario electrons produce more emission at low frequencies and, hence, a steeper spectrum than in the single-shock scenario. This is particularly intriguing because, in the MS scenario, the radio emission is produced without including any other sources of fossil electrons in the ICM. This can soften the assumption that single radio galaxies are the source of fossil electrons in the observed cases that require a high acceleration efficiency. The MS scenario can indeed produce a large pool of pre-accelerated electrons, with rather similar spectra and energy densities on $\sim \rm ~Mpc$ scales, which further produce coherent radio properties on the same scales if further shocked. The latter may instead be a problem for models in which the source of fossil electrons is a single, recent release from a radio galaxy. In reality, radio galaxies do exist and inject fossil electrons, and we defer the investigation of the MS scenario combined with radio galaxy activity to future work.
\section*{Acknowledgements} We gratefully acknowledge the very useful feedback of our reviewer, H. Kang, which significantly improved our numerical analysis with respect to the first submitted version. We acknowledge financial support by the European Union’s Horizon 2020 program under the ERC Starting Grant ‘MAGCOW’, no. 714196. D.W. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 441694982. The cosmological simulations were performed with the ENZO code (\hyperlink{http://enzo-project.org}{http://enzo-project.org}) and were partially produced at Piz Daint (ETHZ-CSCS, \hyperlink{http://www.cscs.ch}{http://www.cscs.ch}) in the Chronos projects ID ch2 and s585, and on the JURECA supercomputer at the NIC of the Forschungszentrum Jülich, under allocations nos. 7006, 9016, and 9059. For the creation of the NRAO video, GI acknowledges the assistance of the $(AM)^2$ research group of Prof. S. Morigi of the Department of Mathematics of the University of Bologna, and of the Visit Lab of A. Guidazzoli of CINECA, under the project FIBER OF THE UNIVERSE (\hyperlink{http://visitlab.cineca.it/index.php/portfolio/fiber-of-the-universe/}{http://visitlab.cineca.it/index.php/portfolio/fiber-of-the-universe/}). We acknowledge the usage of online storage tools kindly provided by the INAF Astronomical Archive (IA2) initiative (http://www.ia2.inaf.it). \section*{Data Availability} Both the tracer data used for this work\footnote{\hyperlink{https://owncloud.ia2.inaf.it/index.php/s/zcskRy1bryHN62i}{https://owncloud.ia2.inaf.it/index.php/s/zcskRy1bryHN62i}} and the IDL code used to evolve their energy spectra\footnote{\href{https://github.com/FrancoVazza/IDL_FP}{https://github.com/FrancoVazza/IDL$\_$FP}} are publicly available. The \textsc{ENZO} code used to produce the cosmological simulation is also publicly available\footnote{\url{enzo-project.org}}. \bibliographystyle{mnras} \section{Introduction} \label{sec:intro} Radio relics, usually located in the outskirts of merging galaxy clusters, are giant ($\sim$Mpc) synchrotron sources that are believed to be produced by cosmic-ray electrons (CRe) (re-)accelerated by merger-induced shock waves in the intracluster medium (ICM; \citealt{ensslin1998, 1999ApJ...518..594R, bj14, vanweeren2019review, 2019SSRv..215...14B}). The connection between shocks and relics has been confirmed by the detection of surface brightness and temperature discontinuities in X-ray observations at the location of relics \citep[e.g.][]{2008A&A...486..347G, 2013PASJ...65...16A,vanweeren2019review}. The details of the acceleration mechanisms in radio relics are still not fully understood. The widely accepted mechanism for the acceleration of relativistic cosmic-ray (CR) particles at shock fronts is diffusive shock acceleration (DSA) \citep[e.g.][]{1987PhR...154....1B}. DSA is based on the original idea of \citet{1949PhRv...75.1169F}, according to which particles are scattered upstream and downstream of the shock by plasma irregularities, gaining energy at each shock crossing. In recent years, deep X-ray observations performed with \textit{Chandra}, \textit{XMM-Newton}, and \textit{Suzaku} have led to an increase in the number of shocks detected in merging galaxy clusters \citep[e.g.][ for recent works]{2017A&A...600A.100A, 2017MNRAS.464.2896C, 2018MNRAS.476.5591B}.
Radio and X-ray observations suggest that radio relics probe particle acceleration by weak shocks, $\mathcal{M} \leq 5$ \citep[e.g.][]{2009A&A...494..429B,vw10,2012MNRAS.426...40B,Hoang2017sausage,2017A&A...600A.100A,2018ApJ...852...65R,2018MNRAS.476.5591B, 2019ApJ...873...64D}, in a high-$\beta$ ($\beta = P_{th}/P_B$, i.e., the ratio between the thermal and magnetic pressures) environment such as the ICM, where the thermal pressure dominates over the magnetic pressure. However, X-ray and radio estimates of shock strength are typically in disagreement, possibly because these two proxies probe different parts of the underlying Mach number distribution \citep[e.g. see][for a recent discussion of this issue]{2021arXiv210608351W}. In the weak shock regime, the acceleration efficiencies of cosmic-ray protons (CRp) are poorly understood, although current models and simulations predict acceleration efficiencies (defined as the ratio between the energy flux of accelerated cosmic rays and the kinetic power of the shock) of less than a few percent \citep[e.g.][]{2018ApJ...864..105H, 2019ApJ...883...60R, 2020ApJ...892...86H, 2020MNRAS.495L.112W}, in agreement with direct constraints coming from $\gamma$-ray non-detections of galaxy clusters \citep[e.g.][for review]{ack10, ackermann14, ackermann16, 2021NewA...8501550W}. On the other hand, the observed connection between radio relics and shocks in merging galaxy clusters demonstrates that the electron acceleration (or re-acceleration) at these shocks is efficient, in the sense that even weak shocks ($\mathcal{M} \leq 2$) are associated with detectable radio emission. This implies a surprisingly large ratio of electron-to-proton CR acceleration efficiencies for DSA, because at the same time CR protons have never been detected in the ICM \citep[e.g.][]{va14relics, bj14,2015MNRAS.451.2198V, scienzo16,2020A&A...634A..64B}. Even if the radio power of some relics can be explained by the acceleration of electrons from the thermal pool (i.e. the DSA mechanism) \citep{locatelli2020dsa}, this mechanism alone cannot explain the high radio power of the majority of relics \citep{2016MNRAS.460L..84B, 2016MNRAS.461.1302E, Hoang2017sausage}. To mitigate the problem of the high acceleration efficiencies implied by weak cluster shocks, recent theoretical models assume a pre-existing population of CRe at the position of the relic that is re-accelerated by the passage of the shock \citep[e.g.][]{2005ApJ...627..733M, 2011ApJ...728...82M, kr11, ka12, 2014ApJ...788..142K, 2013MNRAS.435.1061P, 2020A&A...634A..64B}. This would soften the above theoretical problems, because the population of CRs processed by $\mathcal{M} \leq 3-4$ shocks is predicted to be dominated by the re-accelerated fossil populations, and not by the freshly accelerated one. The re-acceleration scenario is supported by the observation of radio galaxies located nearby or within a few radio relics \citep[e.g.][]{2014ApJ...785....1B, 2015MNRAS.449.1486S, 2016MNRAS.460L..84B, 2017NatAs...1E...5V, digennaro2018saus}.
However, it is not obvious that the injection of fossil electrons by one or a few radio galaxies can automatically produce a uniform population of electrons, capable of producing the high degree of coherence of the radio emission observed in a few giant relics: in radio relics like "the Sausage" and "the Toothbrush", the spectral properties of the emission are very coherent across $\sim 2$ $\rm Mpc$, requiring a very uniform distribution of coeval fossil electrons in the shock upstream \citep[e.g.][]{2010Sci...330..347V,2016ApJ...818..204V,2018ApJ...852...65R,rajpurohit2020toothbrush, digennaro2018saus}. Complementary to the above scenario, we thus focus here on a specific mechanism potentially alleviating this problem, i.e. we consider a multiple-shock (MS) scenario in which multiple, wide merger shocks sweeping the ICM in sequence can produce a large-scale and uniform distribution of mildly relativistic electrons. A similar mechanism has been very recently analyzed by \citet{kang2021diffusive}, in the context of the acceleration of cosmic-ray protons via DSA. The existence of multiple populations of shock waves sweeping the ICM along a variety of angles with respect to the leading axis of mergers, and possibly merging into larger shocks, has been recently explored with several simulations \citep[e.g.][]{2015ApJ...812...49H,2020MNRAS.498L.130Z,2021MNRAS.501.1038Z}. In our work, we focus on MS electron acceleration and on the radio emission generated in this way. In detail, we analyse the simulation of a massive, $M_{200} \approx 9.7 \cdot 10^{14} \ \mathrm{M}_{\odot}$, galaxy cluster from $z=1$ to $z=0$ \citep{2017MNRAS.464.4448W}, and we compute the radio emission generated by particles following the merging history of the cluster, showing that MS electrons develop, on average, a radio emission large enough to be detectable with current radio telescopes (e.g. LOFAR). This paper is organized as follows. In Section \ref{sec:method} we describe the numerical set-up used for the galaxy cluster simulation and the model used to simulate the evolution of electron spectra. The results are analyzed in Section \ref{sec:families}, where we distinguish two different relics probed by the simulated particles and study the radio emission of these particles, catalogued by their shock history. The detailed study of the integrated radio emission is presented in Section \ref{sec:integrated_radio}. Section \ref{sec:conclusion} summarizes the results obtained in the paper and discusses future work. \section{Numerical Method} \label{sec:method} \subsection{Simulation setup}\label{subsec:crater} In this work, we study a massive, $M_{200} \approx 9.7 \cdot 10^{14} \ \mathrm{M}_{\odot}$, galaxy cluster that was simulated and analysed in \citet[][]{2016Galax...4...71W,2017MNRAS.464.4448W}. This cluster is interesting for a comparison with real observations, as it undergoes a major merger at redshift $z \approx 0.27$, producing detectable giant radio relics. The cluster was simulated with the cosmological magneto-hydrodynamic (MHD) code \textsc{ENZO} \citep{ENZO_2014} and analysed with the Lagrangian tracer code Cosmic-Ray Tracers (\textsc{CRaTer}) \citep{2017MNRAS.464.4448W}. In the following, we give a brief overview of the simulation setup used. For specific details, we point to Section 2.1 in \citet{2017MNRAS.464.4448W}.
The \textsc{ENZO} code follows the dark matter using an N-body particle-mesh solver \citep{1988csup.book.....H} and the baryonic matter using an adaptive mesh refinement (AMR) method \citep{1989JCoPh..82...64B}. More specifically, \citet{2017MNRAS.464.4448W} used the piecewise linear method \citep{1985JCoPh..59..264C} in combination with the hyperbolic Dedner cleaning \citep{2002JCoPh.175..645D}. The simulation covers a root grid with a comoving volume of $\sim (250 \ \mathrm{Mpc})^3$, sampled with $256^3$ grid cells and dark matter particles. An additional comoving volume of size $\sim (25 \ \mathrm{Mpc})^3$ has been further refined using 5 levels of AMR, i.e. $2^5$ refinements, for a final resolution of $31.7 \ \mathrm{kpc}$. The chosen AMR criteria, based on the over-density and the 1D velocity jump, ensure that about $\sim 80 \ \%$ of the cluster volume is refined at the highest AMR level. We study this cluster in detail because it is a massive one, it has already been the subject of several works by our group, and because the fairly large dynamical range and the number of available snapshots are optimal for our analysis involving tracer particles (see below). However, the final magnetic field reached through small-scale dynamo amplification in this object is kept artificially small by the spatial resolution, which is not enough to ensure a Reynolds number large enough to enter an efficient small-scale dynamo amplification regime, as studied in \citet{va18mhd}. Therefore, for simulating the injection and advection of CRe in this system, we re-normalized the magnetic field strength measured by the tracers by a factor of 10. The re-normalization results in typical magnetic field strengths of $\sim 0.1-0.2$ $\rm \mu G$ in our relics. In fact, the electron cooling depends rather weakly on the renormalization of the magnetic field strength, because inverse Compton cooling dominates over synchrotron cooling (see the denominator in Eq. \ref{eq:ic}). Using \textsc{CRaTer}, \citet{2017MNRAS.464.4448W} used a total of $\sim 1.3 \cdot 10^7$ Lagrangian tracer particles to analyse the cluster's evolution between $z = 1$ and $z = 0$, at a (nearly constant) time resolution of $\Delta t=31 \rm ~Myr$. Following the gas distribution of the ICM, \textsc{CRaTer} injects particles with a fixed mass, i.e. in our case $m_{\mathrm{tracer}} \approx 10^8 \ \mathrm{M}_{\odot}$, into the simulation. The tracers' velocities are computed by interpolating the local velocities to the tracers' positions using a \textit{cloud-in-cell} interpolation method (see the sketch below).
\citet[][]{2017MNRAS.464.4448W} found that a significant fraction of the particles that produce giant radio relics at $z \approx 0$, have crossed several shocks before, see figure 12 and Section 3.5 therein. Hence, the radio emitting particles should have been subjected to several cycles of shock (re-)acceleration, making this simulation a perfect candidate for our analysis. \begin{figure*} \centering \includegraphics[width=1\textwidth]{FIGURES/video-snap.png} \caption{Snapshot sequence at different times of, respectively, the galaxy cluster baryonic density (purple-yellow), the radio emission at $1.4$ GHz (light blue) and the tracers with their path. The tracers change from yellow (not active) to blue (active) when they cross a shock front in the simulation.} \label{fig:video} \end{figure*} We use a 3D rendering of this merger event to better describe the sequence of mergers (leading to multiple shock waves) which interest a particular sector of the cluster. Figure \ref{fig:video} shows a snapshot sequence obtained from a cinematic scientific visualization realized from the simulation data\footnote{The video is called "\textit{The VLA shedding lights on the origin of radio relics}" and it was recently awarded the $1^{st}$ prize for the NRAO Image Contest for the celebration of VLA $40^{th}$ anniversary. The video is available at the following link: \hyperlink{https://vimeo.com/464248944/3fc17a5b8b}{https://vimeo.com/464248944/3fc17a5b8b}.}. The video shows the baryonic density (purple-yellow) surrounded by the volumetric radio emission at $1.4$ GHz (light blue) during the formation of the galaxy cluster. The tailed spheres highlight the evolution of a selection of tracers. Initially, all the tracers are yellow and when they cross a shock front, they are activated, changing color to bright blue. The sequence in Fig. \ref{fig:video} shows the evolution of two streams of tracers (``beam"). This qualitative analysis of the cluster merging evolution shows an history of MS scenario before tracers arrive at the end of the simulation. In this paper, we analyse the spectral evolution measured by the tracers in this simulation. \subsection{Simulating the evolution of electron spectra} \label{subsec:fokker} We solve the time-dependent diffusion-loss equation of relativistic electrons represented by tracer particles, using the standard \citet{1970JCoPh...6....1C} finite-difference scheme implemented in a serial code written in IDL language. We used $N_{\rm b}=10^5$ equal energy bins in the $\gamma_{\rm min} \leq \gamma \leq \gamma_{\rm max}$ Lorentz factor, with $\gamma_{\rm min}=1$ and $\gamma_{\rm max}=4.5\times10^5$ (hence $\rm d\gamma=5$). The code we used to evolve our particle spectra is freely available \footnote{\url{https://github.com/FrancoVazza/IDL_FP}}. We are concerned with the evolution of relativistic electrons injected and/or re-accelerated by shocks, at the periphery of clusters and on timescales of a few Gigayears ($\leq 3 \rm ~Gyr$). For this specific task, we only have to evolve the energy spectra for $ \sim 7000$ tracers, necessary to sample the spatial extension of radio relics formed in the system by $z \approx 0$. The combination of the limited amount of tracers and of the relatively small number of snapshots to process (up to 238) allowed us to resort to the serial implementation of the Fokker Planck solver already used in previous work \citep[e.g.][]{rajpurohit2020toothbrush}. 
Notice that, unlike the more recent work presented in \citet{2021arXiv210204193V}, in this implementation, we evolve the electron spectra in $\gamma$ space, and not in momentum space. This introduces a small error in the low energy part of the spectra, where the injected distribution from shock acceleration is a power-law in momentum space, but not in $\gamma$ space (since of course $E^2=m^2c^4+ p^2 c^2$). The ultra-relativistic simplification used here is however suitable when focusing on the radio emitting electrons ($\gamma \geq 10^2-10^3$) and also considering that the accumulated particle population at low energy is small in the short time range considered \citep[e.g.][]{sa99}. We considered a reduced Fokker-Planck equation without injection and escape terms (i.e. Liouville equation), and neglected the spatial diffusion of cosmic rays (which is appropriate for the $\sim \rm MeV-GeV$ electrons considered in this work), which allows us to track the evolution of the number density of relativistic electrons as a function of their energy, $N(\gamma)$, computed separately for each tracer particle: \begin{equation} {\frac{\partial N}{\partial t}} = {\frac{\partial}{\partial \gamma}} \left[ N \left( \left|{\frac{\gamma}{\tau_{\rm rad}}}\right| + \left|{\frac{\gamma}{\tau_{\rm c}}}\right| + {\frac{\gamma}{\tau_{\rm adv}}} - \left|{\frac{\gamma}{\tau_{\rm acc}}}\right| \right) \right], \label{eq11} \end{equation} We use the approximation \begin{equation} \dot{\gamma} \approx \left|{\frac{\gamma}{\tau_{\rm rad}}}\right| + \left|{\frac{\gamma}{\tau_{\rm c}}}\right| + {\frac{\gamma}{\tau_{\rm adv}}} - \left|{\frac{\gamma}{\tau_{\rm DSA}}}\right| , \label{eq12} \end{equation} where $\tau_{\rm rad}$, $\tau_{\rm c}$, and $\tau_{\rm adv}$ are respectively the loss timescales for the radiative, Coulomb and expansion (compression) processes that we define in Sec. \ref{sec:lossterm}. $\tau_{\rm DSA}$ represents instead the acceleration timescale due to DSA that we estimate in Sec. \ref{sec:DSAacc}. The numerical solution is obtained using the \citet{1970JCoPh...6....1C} finite difference scheme: \begin{equation} N(\gamma,t+dt)=\frac{{{N(\gamma,t)}/{dt}} + N(\gamma+d\gamma,t+dt){{\gamma}}} {1/dt + {{\gamma}}/d\gamma} + Q_{\rm inj}(\gamma) , \label{eq13} \end{equation} where in the adopted splitting-scheme to perform the finite differences we assumed $N(\gamma +d\gamma/2)=N(\gamma+d\gamma)$ and $N(\gamma-d\gamma/2)=N(\gamma)$, where $Q_{\rm inj}$ accounts for the injection by shocks. The latter isis regarded as an almost instantaneous process, considering that timescales are much shorter than the time step of our integration, $\delta t \approx 31 \rm Myr$ (see Eq. \ref{eq:tDSA} below). 
\subsubsection{Loss Terms} \label{sec:lossterm} The timescales associated to the energy losses by radiative, Coulomb and expansion (compression) processes are given by the following formulae, adapted from \citet{bj14}: \begin{equation} \tau_{\rm rad} =\frac {7720 \rm ~Myr} {(\gamma/{300})\left[\left(\frac{B}{3.25 \rm \mu G}\right)^2 + (1+z)^4\right]} , \label{eq:ic} \end{equation} \begin{equation} \tau_{\rm c} = 7934 \rm ~Myr \left\{ {{\frac{n/10^{-3}}{{\gamma/300}}}} \left( 1.168 + {\frac{1}{75}}ln \left( {\frac{\gamma/300}{ n/10^{-3} }} \right) \right) \right\}^{-1} \label{eq:coulomb} \end{equation} and \begin{equation} \tau_{\rm adv} = \frac{951 \rm ~Myr}{ \nabla \cdot \mathbf{v}/10^{-16}} , \label{eq:adv} \end{equation} \noindent in which where the density $n$ is measured in [$\rm cm^{-3}$], $B$ in [$\rm \mu G$] and the gas divergence is measured in $\nabla \cdot \mathbf{v}$ in [$1/\rm s$]. Bremsstrahlung losses can be safely neglected in this case, because for the typical ICM conditions encountered also their timescale is much larger than the ones of all other loss channels. Inverse Compton and synchrotron losses are by far the most relevant for the evolution of electrons considered in this work, owing to their peripheral location and low gas density. \subsubsection{Shock (re-)acceleration} \label{sec:DSAacc} Predicting the spectrum of injected "fresh" relativistic electrons injected by weak shocks, as well as their spectrum after shock reacceleration, is far from being a solved problem. In this paper we follow a relatively simple approach, motivated by the existing literature on the subject and meant to simplify the steps to determine the post-shock spectrum of radio emitting electrons. We rely here on the DSA model by \citet{kr11}, which assumes that the injection Lorentz factor of electrons is related to the injection momentum ($\gamma_{\rm inj}=\sqrt{1+p^2_{\rm inj}/m_e^2c^2}$), where $p_{\rm inj}$ in DSA is assumed to be a multiple of the thermal momentum of {\it protons}, i.e. $p_{\rm inj}= \xi p_{\rm th}$ ($p_{\rm th}=\sqrt{2 k_b T_d m_p}$, where $k_b$ is the Boltzmann constant). Following \citet{kr11}, we compute $\xi$ based on the fit formula given from their one-dimensional convection-diffusion simulations: \begin{equation} \xi_{\rm inj}=1.17 \frac{m_p v_d}{p_{\rm th}} \cdot (1+\frac{1.07}{\epsilon_B})\frac{\mathcal{M}^{0.1}}{3^{0.1}}) \end{equation} where $v_d$ is the downstream shock velocity and $\epsilon_B$ is the ratio of magnetic field strength between the $B_0$ downstream magnetic field generated by the shock, and $B_\perp$ is the magnetic field perpendicular to the shock normal. We set here $\epsilon_B=0.23$ \citep[][]{2013MNRAS.435.1061P} and obtaining values in the range $\xi_{\rm inj} \sim 2.5-3.5$ and $\gamma_{\rm inj} \sim 10-20$ for our shocks. The source term for relativistic electrons in Eq.\ref{eq13} assumes an energy distribution that follows a power-law \citep[e.g.][]{1962SvA.....6..317K,sa99}: \begin{equation} Q_{\rm inj}(\gamma) = K_{\rm inj,e} ~\gamma^{-\delta_{\rm inj}} \left(1-\frac{\gamma}{\gamma_{\rm cut}}\right)^{\delta_{\rm inj}-2} , \label{eq:xi} \end{equation} in which the initial slope of the input momentum spectrum, $\delta_{\rm inj}$, is computed based on the standard DSA prediction, i.e. $\delta_{\rm inj} = 2 (\mathcal{M}^2+1)/(\mathcal{M}^2-1)$. 
\subsubsection{Shock (re-)acceleration}
\label{sec:DSAacc}
Predicting the spectrum of ``fresh'' relativistic electrons injected by weak shocks, as well as their spectrum after shock re-acceleration, is far from being a solved problem. In this paper we follow a relatively simple approach, motivated by the existing literature on the subject and meant to simplify the steps needed to determine the post-shock spectrum of radio emitting electrons. We rely here on the DSA model by \citet{kr11}, in which the injection Lorentz factor of electrons is related to the injection momentum ($\gamma_{\rm inj}=\sqrt{1+p^2_{\rm inj}/m_e^2c^2}$), where $p_{\rm inj}$ in DSA is assumed to be a multiple of the thermal momentum of {\it protons}, i.e. $p_{\rm inj}= \xi p_{\rm th}$ ($p_{\rm th}=\sqrt{2 k_b T_d m_p}$, where $k_b$ is the Boltzmann constant). Following \citet{kr11}, we compute $\xi$ based on the fit formula derived from their one-dimensional convection-diffusion simulations:
\begin{equation}
\xi_{\rm inj}=1.17 \frac{m_p v_d}{p_{\rm th}} \left(1+\frac{1.07}{\epsilon_B}\right)\frac{\mathcal{M}^{0.1}}{3^{0.1}} ,
\end{equation}
where $v_d$ is the downstream shock velocity and $\epsilon_B$ is the ratio between the downstream magnetic field generated by the shock, $B_0$, and the magnetic field component perpendicular to the shock normal, $B_\perp$. We set here $\epsilon_B=0.23$ \citep[][]{2013MNRAS.435.1061P}, obtaining values in the range $\xi_{\rm inj} \sim 2.5-3.5$ and $\gamma_{\rm inj} \sim 10-20$ for our shocks. The source term for relativistic electrons in Eq.~\ref{eq13} assumes an energy distribution that follows a power law \citep[e.g.][]{1962SvA.....6..317K,sa99}:
\begin{equation}
Q_{\rm inj}(\gamma) = K_{\rm inj,e} ~\gamma^{-\delta_{\rm inj}} \left(1-\frac{\gamma}{\gamma_{\rm cut}}\right)^{\delta_{\rm inj}-2} ,
\label{eq:xi}
\end{equation}
in which the initial slope of the input momentum spectrum, $\delta_{\rm inj}$, is computed based on the standard DSA prediction, i.e. $\delta_{\rm inj} = 2 (\mathcal{M}^2+1)/(\mathcal{M}^2-1)$.
The cut-off energy, $\gamma_{\rm cut}$, is defined for every shocked tracer as the maximum energy beyond which the radiative cooling timescale is shorter than the acceleration timescale, $\tau_{\rm DSA}$:
\begin{equation}
\tau_{\rm DSA} = \frac{3~D(E)}{V_s^2} \cdot \frac{r(r+1)}{r-1} ,
\label{eq:tDSA}
\end{equation}
in which $r$ is the shock compression factor, $V_s$ is the shock velocity, and $D(E)$ is the diffusion coefficient of relativistic electrons, as a function of their energy \citep[e.g.][]{gb03}. The specific energy-dependent value of $D(E)$ is poorly constrained, because it depends on the local conditions of the turbulent plasma, and it is critical in limiting the maximum energy attainable in DSA \citep[e.g.][]{ka12}. However, this is not an issue for our simulation, because all plausible choices of $D(E)$ in Eq.~\ref{eq:tDSA} give an acceleration timescale many orders of magnitude smaller than the typical cooling time of radio emitting electrons, whose energy distribution can be assumed to follow a power law within the energy range of interest, at least at the moment of their injection. We can therefore set $\gamma_{\rm cut} = \gamma_{\rm max}$ in this work. This also motivates the fact that we can model shock injection by DSA by adding the newly created population of particles across timesteps (see Eq.~\ref{eq13} above), without integrating a source term as needed for the much slower re-acceleration by turbulence (see below). Under these assumptions, the rate of injection of relativistic electrons in the downstream is:
\begin{eqnarray}
K_{\rm inj,e}= 4 \pi ~ K_{e/p} \int_{p_{\rm inj}}^{p_{\rm cut}} (\sqrt{p^2+1}-1) f_N ~p^{-(\delta_{\rm inj}+2)} \cdot \nonumber \\ \cdot \exp[-(p/p_{\rm cut})^{2}] ~p^2 dp ~dx_t^2 ~V_s ~dt
\label{eq:phicr}
\end{eqnarray}
with
\begin{equation}
f_N = \frac{n_d}{\pi^{3/2}}p_{\rm th}^{-3} \exp{(-\xi_{\rm inj}^2)}
\end{equation}
and where $K_{e/p}$ is the electron-to-proton ratio. Following \citet{2020JKAS...53...59K} we use $K_{e/p}=(m_p/m_e)^{(1-\delta_{\rm inj})/2}$, which gives $K_{e/p} \sim 10^{-2}$ for an injection spectral index of $\delta_{\rm inj} \approx 2.3$, in line with the injection spectral index of local Galactic supernova remnants \citep[e.g.][]{2007Natur.449..576U}. $dx_t^2$ is the surface element associated with each shocked tracer particle, computed considering that $dx_t^3 = dx^3/n_{\rm tracers}$ is the initial volume associated with every tracer at the epoch of its injection ($n_{\rm tracers}$ being the number of tracers in every cell), and that $dx_t(z)^3=dx_t^3 \cdot \rho_t/\rho(z)$ gives the relative change of the volume associated with each tracer as a function of redshift, based on the ratio between the density at injection, $\rho_t$, and the density of the cell where each tracer sits as a function of redshift, $\rho(z)$. This procedure allows us to estimate the acceleration efficiency of relativistic electrons at the shock, at least to a first degree of approximation and with a modest computing time. Of course, the physical uncertainty behind this is very large, and dedicated simulations are needed to fully solve the acceleration cycle of relativistic electrons by weak merger shocks, for the possible range of shock obliquities and typical plasma conditions of the ICM \citep[][]{Guo_eta_al_2014_II,2015PhRvL.114h5003P,2019ApJ...876...79K,2020ApJ...897L..41X,2021ApJ...915...18H}.
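As a worked example of the quantities above, the standard DSA slope and the momentum integral entering $K_{\rm inj,e}$ can be evaluated numerically. This is only an illustrative sketch: momenta are taken in units of $m_e c$, and all the physical normalisations of Eq.~\ref{eq:phicr} ($f_N$, $4\pi K_{e/p}$, the tracer surface element and $V_s\,dt$) are left to the caller.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def delta_inj(mach):
    """Standard DSA slope: 2 (M^2 + 1) / (M^2 - 1)."""
    return 2.0 * (mach**2 + 1.0) / (mach**2 - 1.0)

def injection_integral(p_inj, p_cut, d_inj):
    """Dimensionless momentum integral of Eq. (phicr), with p in units
    of m_e c; physical normalisations must be applied by the caller."""
    integrand = lambda p: (np.sqrt(p**2 + 1.0) - 1.0) \
        * p ** (-(d_inj + 2.0)) * np.exp(-(p / p_cut) ** 2) * p**2
    val, _ = quad(integrand, p_inj, p_cut)
    return val

# e.g. a M = 3 shock (d_inj = 2.5) with gamma_inj ~ 10 and a high cut-off
print(delta_inj(3.0), injection_integral(10.0, 1e5, delta_inj(3.0)))
\end{verbatim}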
\bigskip
Besides the {\it direct} injection of relativistic electrons by shocks, we also include the effect of shock {\it re}-acceleration on pre-existing relativistic electrons \citep[e.g.][]{2005ApJ...627..733M,kr11,ka12}. According to DSA, the input particle spectrum, $N_0(x)$, becomes
\begin{eqnarray}
N(\gamma)=(\delta_{\rm inj}+2) \cdot \gamma^{-\delta_{\rm inj}} \int_{\gamma_{\rm min,re}}^\gamma N_0(x) x^{\delta_{\rm inj}+1} dx ,
\label{eq:shock_re}
\end{eqnarray}
where $\delta_{\rm inj}$ is the local slope within each energy bin. We consider that the minimum momentum for electron re-acceleration by shocks is the injection momentum $p_{\rm inj}$, above which DSA is expected to operate \citep{2020JKAS...53...59K}. We therefore set $\gamma_{\rm min,re} = \gamma_{\rm inj}$ as a lower bound for the integration in Eq.~\ref{eq:shock_re}.
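A compact numerical form of Eq.~\ref{eq:shock_re} is given below. This sketch uses, for simplicity, a single slope $\delta_{\rm inj}$ instead of the bin-by-bin local slope used in the actual computation:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def reaccelerate(gamma, N0, d_inj, gamma_min):
    """Shock re-acceleration of an existing spectrum N0(gamma) following
    Eq. (shock_re), integrating only above gamma_min = gamma_inj, below
    which DSA is not expected to operate."""
    mask = gamma >= gamma_min
    g, n0 = gamma[mask], N0[mask]
    integral = cumulative_trapezoid(n0 * g ** (d_inj + 1.0), g,
                                    initial=0.0)
    N_new = np.copy(N0)
    N_new[mask] = (d_inj + 2.0) * g ** (-d_inj) * integral
    return N_new
\end{verbatim}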
\section{MS scenario of electron re-acceleration}
\label{sec:families}
In this section, we analyse the properties of the tracer particles used to probe the evolution of the simulation, focusing on their shock times. We select more than 7000 tracers that cross shocks with Mach number $M\geq2$ during the entire evolution of the simulation. By construction, all these tracers cross a shock at the end of the simulation, $t_{\mathrm{end}}=13.76$ Gyr. We investigate whether these tracers crossed other shocks before the final one and, if so, how many times. We divide the tracers into different families according to the number of shocks they cross during the simulation. Tracers of Family 1 are only accelerated by the shock at the end of the simulation. Families 2, 3 and 4 have been shocked respectively one, two and three times before they cross the final shock. Details of the family populations are collected in Tab.~\ref{tab:families_stat}.
\begin{table}
\centering
\begin{tabular}{c|cc|cc}
 & \multicolumn{2}{c|}{Relic A} & \multicolumn{2}{c}{Relic B} \\
\hline
Family 1 & 1297 & 91.92\% & 1709 & 28.91\% \\
Family 2 & 114 & 8.08\% & 3687 & 62.36\% \\
Family 3 & & & 489 & 8.27\% \\
Family 4 & & & 27 & 0.46\% \\
Total & 1411 & & 5912 &
\end{tabular}
\caption{Tracer population statistics in the different families, for both Relic A and Relic B.}
\label{tab:families_stat}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{FIGURES/tracer_pos_all_2D.pdf}
\caption{2D map of the tracer positions at the final step. We distinguish two relics for the analysis. The black oblique line (defined by the equation in the legend) divides the regions of the two relics, namely Relic A (left) and Relic B (right).}
\label{fig:tracer_pos}
\end{figure}
Figure \ref{fig:tracer_pos} shows the $(x,y)$ projection of Family 1 (blue), Family 2 (red), Family 3 (green), and Family 4 (orange) tracers at $t_{\mathrm{end}}$. According to these positions, we divide the tracers into two groups, named ``Relic A'' and ``Relic B'' in Fig.~\ref{fig:tracer_pos}. We observe that Relic A is composed mostly of Family 1 tracers, with a $8 \ \%$ contribution of Family 2 tracers. Relic B, instead, is composed for more than $62 \ \%$ of Family 2 tracers and for $\sim28 \ \%$ of Family 1 tracers, with the presence of families with a higher number of shocks, as reported in Tab.~\ref{tab:families_stat}. As a first approach, also motivated by the fact that the differences in the timing of shocks within each family of electrons are typically $\leq 1 \rm ~Gyr$, we computed the energy evolution of each family based on the family-averaged fields, i.e. assuming at each timestep that the entire family of particles is characterised by the same values of density, temperature and magnetic field, and that all particles in the same family are shocked at the same time. For this family-averaged analysis, we chose the shock times of each family as the ones at which the majority of the tracers cross a shock simultaneously. This is of course a gross approximation, but it is enough to allow us to obtain some first important information on the electron energy spectrum in the MS scenario and on the subsequent radio emission. A detailed report of the family-averaged approach is available in Appendix~\ref{sec:appendix}.
\subsection{Relic A}
For Relic A, the family-averaged quantities are collected in Tab.~\ref{tab:relicA_stat}. We use these quantities to compute the time evolution of the electron energy spectrum according to the model introduced in Sec.~\ref{subsec:fokker}.
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c}
& Time [Gyr] & Mach & B [$\mu$G] & $\rho$ [g cm$^{-3}$] & T [K]\\
\hline
Family 1 & & & & \\
Shock 1 & 13.76 & 2.6 & $1.7\times10^{-1}$ & $1.3\times10^{-28}$ & $2.5\times10^{7}$ \\
\hline
Family 2 & & & & \\
Shock 1 & 12.69 & 3.8 & $1.1\times10^{-1}$ & $1.6\times10^{-28}$ & $2.7\times10^{7}$ \\
Shock 2 & 13.76 & 2.3 & $2.2\times10^{-1}$ & $2.0\times10^{-28}$ & $3.1\times10^{7}$
\end{tabular}
\caption{Family-averaged quantities at the selected shock times for the families in Relic A.}
\label{tab:relicA_stat}
\end{table}
Figure \ref{fig:ele_A} shows the time evolution of the electron energy spectrum of the Family 2 population in Relic A. The electron population is produced by the first shock at $t_1=12.69$ Gyr with a power-law spectrum (purple dashed line) and, as time evolves, we observe a cooling of the high-energy tail of the spectrum, which produces a cut-off at $\gamma\sim10^3$ right before the second shock. After the final shock at $t_{\mathrm{end}}=13.76$ Gyr, the electron energy spectrum is no longer a power law: electrons are accelerated up to $\gamma\sim10^5$, with a soft knee in the slope around $\gamma\sim10^3$, corresponding to the cut-off energy before the shock (red solid line). However, we remain cautious about the results in the low-energy part of the spectra, considering the limits of the Fokker-Planck code described in Sec.~\ref{subsec:fokker}.
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{FIGURES/ele_Fam2_A.pdf}
\caption{Time evolution of the electron energy spectrum computed from the family-averaged quantities of Family 2 tracers in Relic A. Dashed lines correspond to the spectral evolution after the first shock. The red solid line represents the electron spectrum after the second shock.}
\label{fig:ele_A}
\end{figure}
\subsection{Relic B}
For Relic B, the family-averaged quantities are reported in Tab.~\ref{tab:relicB_stat}. We use these quantities to compute the time evolution of the electron energy spectrum according to the model introduced in Sec.~\ref{subsec:fokker}.
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c}
& Time [Gyr] & Mach & B [$\mu$G] & $\rho$ [g cm$^{-3}$] & T [K]\\
\hline
Family 1 & & & & \\
Shock 1 & 13.76 & 2.7 & $1.6\times10^{-1}$ & $3.1\times10^{-28}$ & $6.5\times10^{7}$ \\
\hline
Family 2 & & & & \\
Shock 1 & 12.82 & 3.5 & $1.9\times10^{-1}$ & $5.1\times10^{-28}$ & $3.2\times10^{7}$ \\
Shock 2 & 13.76 & 2.8 & $1.4\times10^{-1}$ & $2.7\times10^{-28}$ & $6.6\times10^{7}$ \\
\hline
Family 3 & & & & \\
Shock 1 & 12.56 & 2.4 & $1.4\times10^{-1}$ & $2.8\times10^{-28}$ & $1.9\times10^{7}$ \\
Shock 2 & 13.31 & 2.4 & $3.8\times10^{-1}$ & $3.7\times10^{-28}$ & $4.0\times10^{7}$ \\
Shock 3 & 13.76 & 2.8 & $1.5\times10^{-1}$ & $2.9\times10^{-28}$ & $6.7\times10^{7}$ \\
\hline
Family 4 & & & & \\
Shock 1 & 7.82 & 2.3 & $6.2\times10^{-1}$ & $4.9\times10^{-28}$ & $1.0\times10^{7}$ \\
Shock 2 & 10.98 & 2.9 & $2.0\times10^{-1}$ & $9.4\times10^{-28}$ & $6.4\times10^{7}$ \\
Shock 3 & 13.37 & 2.1 & $5.6\times10^{-1}$ & $6.2\times10^{-28}$ & $5.5\times10^{7}$ \\
Shock 4 & 13.76 & 2.3 & $1.8\times10^{-1}$ & $3.4\times10^{-28}$ & $7.5\times10^{7}$
\end{tabular}
\caption{Family-averaged quantities at the selected shock times for the families in Relic B.}
\label{tab:relicB_stat}
\end{table}
Figure \ref{fig:ele_B} shows the time evolution of the electron energy spectrum obtained from the Fokker-Planck model described in Sec.~\ref{sec:method}, using the family-averaged quantities of the Family 2 tracer population of Relic B reported in Tab.~\ref{tab:relicB_stat}.
\begin{figure}
\centering
\includegraphics[width=0.75\columnwidth]{FIGURES/ele_Fam2_B.pdf}
\caption{Time evolution of the electron energy spectrum computed from the family-averaged quantities of Family 2 tracers in Relic B. Dashed lines correspond to the spectral evolution after the first shock. The red solid line represents the electron spectrum after the second shock.}
\label{fig:ele_B}
\end{figure}
We observe a behaviour of the electron energy spectrum similar to that of Relic A. However, since the Family 2 population of Relic B is more than one order of magnitude larger than the Family 2 population of Relic A, the normalisation of the electron energy spectrum of Relic B is approximately one order of magnitude higher than that of Relic A. Similar electron energy spectra have been obtained for the other families in Relic B (not shown), whose evolution under the MS scenario is consistent with the one shown here for the Family 2 population. The family-averaged analysis presented here allowed us to highlight the different evolution of the electron energy spectra in the MS scenario compared to a single-shock scenario. However, we noticed that the averaging introduces a large uncertainty in the computation of the electron energy spectra and, subsequently, in the associated radio emission (see Appendix \ref{sec:appendix}). In the next Section (Sec.~\ref{sec:integrated_radio}) we shall instead compute the detailed radio emission based on the specific sequence of physical fields recorded by each tracer during its evolution, and compute the integrated radio emission across the relic by combining the information of all tracers in all families.
\section{Integrated radio emission}
\label{sec:integrated_radio}
In this section, we study the integrated radio emission along the same viewing angle of Fig.~\ref{fig:video}, obtained using the electron spectra produced via Fokker-Planck integration over the $\sim7000$ tracers (Sec.~\ref{sec:method}). In contrast to the family-averaged analysis discussed in the previous section, we now compute the energy spectra using the values of density, temperature and magnetic field recorded by each individual tracer. At the final positions, we compute the radio emission for both Relic A and Relic B. Figures \ref{fig:radio140} and \ref{fig:radio1400} show the integrated radio emission maps at $140$ MHz and $1.4$ GHz, respectively, in which we separate the emission contributions from the different families. Placing the source at $z=0.15$, we calculate the integrated radio emission with a beam size of $63.7$ kpc, corresponding to the $25''$ resolution of the LOFAR telescope.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{FIGURES/NO-140-radiation-grid.pdf}
\caption{Integrated radio emission in $(x,y)$ projection at $140$ MHz for Family 1 (top-left), Family 2 (top-right), Family 3 (bottom-left), and Family 4 (bottom-right). The black solid line in the plots divides the particle populations between Relic A and Relic B.}
\label{fig:radio140}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{FIGURES/NO-1400-radiation-grid.pdf}
\caption{Integrated radio emission in $(x,y)$ projection at $1.4$ GHz for Family 1 (top-left), Family 2 (top-right), Family 3 (bottom-left), and Family 4 (bottom-right). The black solid line in the plots divides the particle populations between Relic A and Relic B.}
\label{fig:radio1400}
\end{figure*}
Focusing on Relic A, we see that the integrated radio emission is mostly dominated by the Family 1 population and reaches a peak of $\lesssim 10^3$ mJy at $140$ MHz, while the radio emission of Family 2 is concentrated in the lower-right corner of the relic, and its integrated value is about one order of magnitude lower. We therefore conclude that, in Relic A, the visible radio emission is mostly dominated by the Family 1 population, i.e. by newly accelerated electrons. This object appears as a ``classic'' powerful radio relic, in which all or most of the observed emission is due to the latest shock, which has energised a pool of fresh electrons observed within a cooling time of their first acceleration. Interestingly, the situation is very different for the nearby Relic B, whose integrated radio emission of $\sim 10^3$ mJy at $140$ MHz is dominated by the Family 2 population. The electrons of Families 1 and 3 are confined to a small sub-volume of Relic B, although their overall radio emission is comparable. The emission of Family 4, instead, due to its smaller occupation fraction, remains negligible at all frequencies.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{FIGURES/tot_emission.pdf}
\caption{Relic-integrated radio spectrum for Relic A (blue) and Relic B (red).}
\label{fig:tot_spectra}
\end{figure}
In Figure \ref{fig:tot_spectra}, we give the total emission from the ``single-zone'' analysis of the two relics, i.e. obtained by integrating the CRe emission over the volume of each relic.
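As a quick cross-check of Fig.~\ref{fig:tot_spectra}, the two-point spectral indices implied by the relic-integrated fluxes of Tab.~\ref{tab:relic_emission} below can be recomputed directly. This is only an illustrative sketch, with the flux values transcribed from the table:
\begin{verbatim}
import numpy as np

# relic-integrated fluxes (mJy) at 140, 400 and 1400 MHz
nu = np.array([140.0, 400.0, 1400.0])
flux = {"Relic A": np.array([515.0, 179.0, 41.0]),
        "Relic B": np.array([2768.0, 684.0, 96.0])}

for name, S in flux.items():
    alpha = np.diff(np.log10(S)) / np.diff(np.log10(nu))
    print(name, "alpha(140-400) = %.2f," % alpha[0],
          "alpha(400-1400) = %.2f" % alpha[1])
# Relic B steepens from -1.33 to -1.57, close to the single-zone
# <alpha> ~ -1.55 quoted below; Relic A stays flatter (-1.01 to -1.18).
\end{verbatim}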
In a single-zone and standard view of radio relics, a $\langle \alpha \rangle \sim -1.55$ spectral index is associated with a shock with $\mathcal{M} \approx 2.1$. In order for such a shock to produce the observed emission of 96 mJy at 1.4 GHz, a single-zone \citet{hb07} method requires the dissipation of a fraction of the kinetic energy flux across the shock into electron acceleration in the $K_{\rm inj,e} \approx 3\times10^{-4}-10^{-3}$ ballpark, which is very large for such a weak shock, based on DSA \citep[e.g.][]{2020JKAS...53...59K}. This is a common finding of real observations, which have routinely reported requirements on the acceleration efficiency of even $\sim 100 \%$, or larger, in several objects \citep[e.g.][]{2016MNRAS.461.1302E,Stuardi2019,2020A&A...634A..64B}. Our analysis instead shows that, even in the absence of a nearby active galactic source of radio electrons, sectors of galaxy clusters interested by MS crossings can boost the emission to a level compatible with observations, owing to the re-acceleration of fossil particles injected at $\leq 0.5-1 \rm ~Gyr$ time intervals. Detailed values of the single-zone radio emission of the two relics at indicative frequencies are reported in Tab.~\ref{tab:relic_emission}.
\begin{table}
\centering
\begin{tabular}{c|c|c}
& Relic A & Relic B \\
\hline
140 MHz & 515 mJy & 2768 mJy \\
400 MHz & 179 mJy & 684 mJy \\
1.4 GHz & 41 mJy & 96 mJy
\end{tabular}
\caption{Relic-integrated radio emission.}
\label{tab:relic_emission}
\end{table}
\bigskip
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{FIGURES/LOWFAR-140-radiation-grid.pdf}
\includegraphics[width=0.49\textwidth]{FIGURES/LOWFAR-1400-radiation-grid.pdf}
\caption{Integrated radio emission for all families at $140$ MHz (left) and $1.4$ GHz (right) in gray scale. The coloured contours in the two plots indicate the observable radio flux using the LOFAR threshold of $0.2$ mJy at $140$ MHz and the JVLA threshold of $0.02$ mJy at $1.4$ GHz, respectively. The black solid line in the plots divides the particle populations between Relic A and Relic B.}
\label{fig:emission_observed}
\end{figure*}
We investigate the possibility of observing the simulated relics by comparing the integrated radio emission obtained from the Fokker-Planck model with LOFAR and JVLA observational properties. Figure \ref{fig:emission_observed} shows the integrated radio emission with the contribution of all families in gray scale. On top of that, the coloured contours indicate the observable radio emission using the LOFAR threshold of $0.2$ mJy at $140$ MHz and the JVLA threshold of $0.02$ mJy at $1.4$ GHz, respectively. We conclude that parts of the two radio relics generated in our numerical study are bright enough to be observable.
\subsection{Spectral Index Map}
To investigate possible differences between the spectral index properties at the shock and the energy losses in the post-shock region, we analysed the spectral index profile across the radio relics. We obtain the spectral index map by fitting with a first-order polynomial (i.e. $y=a_1x + a_0$, in log-log space) the integrated radio emission calculated in Sec.~\ref{sec:integrated_radio}, between the frequencies $140$\,MHz and $400$\,MHz, and between $400$\,MHz and $1.4$\,GHz. For a first-order polynomial fit, the spectral index $\alpha$ is the slope of the fit, i.e. $\alpha=a_1$, so that the radio emission $I$ scales locally with the frequency $\nu$ as $I\propto\nu^{\alpha}$.
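In practice, the per-beam fit just described amounts to a first-order polynomial fit in log-log space, which can be vectorised over a whole emission map. This is a minimal sketch; the array names and shapes are our own assumptions:
\begin{verbatim}
import numpy as np

def spectral_index_map(I_low, I_high, nu_low, nu_high):
    """First-order polynomial fit log10(I) = a1*log10(nu) + a0, so that
    alpha = a1 and I ~ nu**alpha.  I_low/I_high are emission maps of the
    same shape at the two frequencies; all pixels are fit at once (with
    two bands the fit reduces to the two-point slope)."""
    x = np.log10([nu_low, nu_high])
    y = np.log10(np.stack([I_low, I_high]).reshape(2, -1))
    a1, a0 = np.polyfit(x, y, 1)
    return a1.reshape(I_low.shape)
\end{verbatim}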
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{FIGURES/spiall1.pdf}
\includegraphics[width=0.49\textwidth]{FIGURES/spiall2.pdf}
\caption{(left) $\alpha_{140\,\rm MHz}^{400\,\rm MHz}$ spectral index map and (right) $\alpha_{400\,\rm MHz}^{1.4\,\rm GHz}$ spectral index map from the contribution of all family populations. The red lines indicate the position of the shock front for each relic, while the black lines over the relics indicate the lineouts of Fig. \ref{fig:index_profile}.}
\label{fig:spectral_index}
\end{figure*}
Figure \ref{fig:spectral_index} shows the $\alpha_{140\,\rm MHz}^{400\,\rm MHz}$ and $\alpha_{400\,\rm MHz}^{1.4\,\rm GHz}$ spectral index maps, obtained from the contribution of all family populations. The two relics have rather distinct spectral index properties, which may reflect their different histories of shock acceleration. Relic A, dominated by the Family 1 population, shows a spectral index in the range $-1.0$ to $-1.2$ between $140$ and $400$ MHz and $-1.2$ to $-1.5$ between $400$ MHz and $1.4$ GHz. Relic B, dominated instead by the Family 2 population, shows a spectral index in the range $-1.2$ to $-1.5$ between $140$ and $400$ MHz and $-1.4$ to $-2.0$ between $400$ MHz and $1.4$ GHz. In particular, the presence of multiple shock acceleration events makes Relic B brighter than Relic A, despite its steeper radio spectral index.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{FIGURES/index-width-A.pdf}
\includegraphics[width=0.49\textwidth]{FIGURES/index-width-B.pdf}
\vspace{-.6cm}
\caption{Spectral index profiles for Relic A (left) and Relic B (right) in the two frequency ranges. The different black lines correspond to the profile positions for each relic, as indicated in Fig. \ref{fig:spectral_index}. The origin of the profiles corresponds to the shock front location of each relic.}
\label{fig:index_profile}
\end{figure*}
Figure \ref{fig:index_profile} shows the spectral index profiles of Relic A and Relic B along the lineouts indicated by the black lines in Fig. \ref{fig:spectral_index}. For each relic, we indicate the position of the shock front and its propagation with a red line in Fig. \ref{fig:spectral_index}, and we compute the line profiles along the direction perpendicular to the shock front, starting at its position. For Relic A, we observe an almost constant distribution of the spectral index along its profile, around a value of $-1$ at low frequency and of $\sim -1.2$ at high frequency; however, the resolution is not high enough for this relic to draw strong conclusions from this finding. For Relic B, instead, the spectral index has a generally more fluctuating behaviour, with a steeper radio spectrum at both frequencies, between $\sim -1.3$ and $-1.4$ at low frequency and between $\sim -1.5$ and $-2.0$ at high frequency, which remains nearly constant even $\geq 200$ $\rm kpc$ away from the shock edge. The latter behaviour appears to be a natural consequence of the MS scenario, in the sense that the emission of Relic B is dominated by the low-energy component of re-accelerated electrons, whose radio emission remains high also away from the shock edge.
However, since the time elapsed since the epoch of the first injection of electrons in the MS scenario can vary from case to case, depending on the specific accretion history of the host cluster and on the dynamics of the cluster sector where the relics form, different accretion timings should be reflected in different steepening frequencies for real observed relics.
\subsection{Radio colour-colour Diagram}
The shape of the relativistic electron distribution in radio sources can be studied by means of so-called colour-colour diagrams \citep{Katz1993, Rudnick1996, Rudnick2001}. These diagrams emphasise the spectral curvature, because they compare spectral indices calculated in low- and high-frequency ranges.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{FIGURES/all_data.pdf}
\includegraphics[width=0.49\textwidth]{FIGURES/family1.pdf}
\includegraphics[width=0.49\textwidth]{FIGURES/family2.pdf}
\includegraphics[width=0.49\textwidth]{FIGURES/family3.pdf}
\caption{Radio colour-colour plots of the simulated relics, superimposed with the spectral ageing JP (magenta) and KGJP (green) models obtained using an injection index of $-0.90$. For the KGJP model, particles are injected continuously for about 16\,Myr. Colour-colour plot for all families (top-left), Family 1 (top-right), Family 2 (bottom-left), and Family 3 (bottom-right). The KGJP model fits the distribution well.}
\label{fig:color2}
\end{figure*}
In our case, the low-frequency spectral index was calculated between 140 and 400\,MHz and the high-frequency one between 400\,MHz and 1.4\,GHz. By this convention, the curvature is negative for a convex spectrum. The resulting colour-colour plots are shown in Fig. \ref{fig:color2}. The dashed black line indicates a power-law spectrum, for which $\alpha_{140\,\rm MHz}^{400\,\rm MHz}=\alpha_{400\,\rm MHz}^{1.4\,\rm GHz}$. Any curve deviating from the power-law line represents a spectrum with changing spectral curvature. As visible in Fig.\,\ref{fig:color2} (top-left), we find a clear negative curvature, as also reported for some well-known relics, for example the Toothbrush \citep{rajpurohit2020toothbrush}, the Sausage \citep{digennaro2018saus}, and MACS\,J0717.5+3745 \citep{rajpurohit2021macsspec}. The single continuous trajectory in the colour-colour plot suggests that the spectrum also has a single shape. We also superimposed on the resulting plots the conventional spectral ageing models, namely JP and KGJP \citep{Komissarov1994}, adopting an injection index of $-0.90$. The JP model assumes a single burst of particle acceleration and a continued isotropisation of the angle between the magnetic field and the electron velocity vectors (the so-called pitch angle) on a timescale shorter than the radiative timescale. The KGJP model extends the JP model by including a finite duration of particle injection. In the Fokker-Planck model (see Section\,\ref{sec:method}) used for the computation of the radio spectra, we adopt a JP model for the synchrotron energy losses \citep{1973A&A....26..423J}. At first sight, it may seem surprising that the JP model never fits the data (Fig.\,\ref{fig:color2}). As discussed in \cite{rajpurohit2020toothbrush,rajpurohit2021macsspec}, a perfectly edge-on shock front with a uniform magnetic field can be described by the JP model. However, if the shock front is inclined with respect to the line of sight, different spectral ages are present and contribute to the observed spectrum.
In this case, the colour-colour distribution follows the KGJP model. As seen in Fig.\,\ref{fig:color2}, the KGJP model with an injection index of $\approx -0.90$ describes the entire distribution quite well, consistent with what is found for the Toothbrush and Sausage relics \citep{rajpurohit2020toothbrush,digennaro2018saus}. Of the two relics, Relic B shows the maximum curvature, but both relics follow the same single curve. We do not find any significant difference in the curvature distribution between the different families, see Fig. \ref{fig:color2}. We note that there are no data points in the range $-0.5$ to $-0.90$ for either the low- or the high-frequency spectral index. This can be understood considering that the radio emission properties derived with our Fokker-Planck model have the intrinsic limitation that all particles (even the ones injected by the latest shock in the simulation) are evolved for at least one timestep. Hence, even the youngest population of electrons in both our relics has evolved for one timestep after shock injection, with a duration $\rm \Delta t \approx 30~ \rm Myr$, and the effects of synchrotron and Inverse Compton losses on the observed radio spectrum are already visible at radio emitting frequencies. In summary, the fact that both our relics compare reasonably well with the colour-colour diagrams of real systems (and especially the circumstance that our Relic B is in line with the KGJP model) further confirms that the MS scenario explored in this work may indeed give rise to realistic relic configurations, albeit ones that are non-trivial to tell apart from single-injection models, at least based on their colour-colour plots.
\section{Conclusions}
\label{sec:conclusion}
We have simulated the evolution of a radio emitting population of relativistic electrons accelerated by multiple merger shock waves, released during the formation of a massive, $M_{200} \approx 9.7 \cdot 10^{14} \ \mathrm{M}_{\odot}$, galaxy cluster \citep[][]{2016Galax...4...71W,2017MNRAS.464.4448W}. We focused on the spatial and dynamical evolution of $\sim10^4$ tracers, which are located in luminous relic-like structures at the end of our run. We assumed DSA as the source of fresh relativistic electrons out of the thermal pool, and applied a Fokker-Planck method to integrate their energy evolution under radiative losses and further re-acceleration events by merger-induced shocks. In our scenario, only shock waves can be the source of cosmic-ray electrons, yet multiple shock waves sweeping the ICM may produce a pre-acceleration of relativistic electrons, qualitatively similar to what radio galaxies are expected to do \citep[e.g.][]{2005ApJ...627..733M, 2011ApJ...728...82M, kr11, ka12, 2014ApJ...788..142K, 2013MNRAS.435.1061P, 2020A&A...634A..64B}. In particular, we identified a specific multiple-shock (MS) scenario, in which particles cross a shock multiple times before ending up in realistic $\sim \rm Mpc$-sized radio relics. Depending on the number of MS events, CRe with different evolutionary histories can become radio visible, regardless of the strength of the final shock event. One of our relics (Relic A) is found to be mostly dominated by a population of tracers which were shocked only just before the epoch of relic formation, and it has a very faint emission, only partially detectable with LOFAR.
In this respect, this object appears similar to the recently discovered ``Cornetto'' relic \citep[][]{locatelli2020dsa}, which was suggested to be the prototype of low-surface-brightness radio relics, powered only by freshly injected electrons. On the other hand, we measured that the emission of the second relic in the system (Relic B) is dominated by electrons accelerated in the MS scenario. We used the shock information collected with the tracers to study the evolution of the relativistic electrons injected at the shocks and the associated radio emission via the Fokker-Planck solver described in Sec.~\ref{subsec:fokker}. We observe that the electron energy spectrum of families accelerated in the MS scenario differs significantly from the power-law spectrum obtained after a single shock injection, and that their emission is higher than the emission of electrons that were only shocked at the end of the simulation, by up to at least $\sim1$ order of magnitude. We computed the total radio emission produced by all accelerated electron families in both relics, emulating the threshold parameters of the LOFAR telescope at $140$ MHz and of the JVLA at $1.4$ GHz, and obtained that both relics can be detected by observations, in particular at lower frequencies. From the analysis of the spectral index maps, we observed that Relic A shows relatively flat spectral index values compared to Relic B, suggesting that an MS-scenario evolution of fossil electrons may influence the slope of the radio spectrum observed in different relics. The radio colour-colour analysis revealed a single continuous curve for both Relics A and B, as well as for all families, and the curvature distribution can be well explained by the KGJP model. This suggests that, at least in systems whose past evolution is characterised by multiple accretion events, for example objects with prominent filamentary accretions and a history of multiple mergers, such as Abell 2744 \citep{2004MNRAS.349..385K,raj21}, Coma \citep{2011MNRAS.412....2B,bo21} and the Toothbrush cluster \citep{2012A&A...546A.124V,rajpurohit2020toothbrush}, a significant fraction of the observed radio emission can be the product of MS acceleration, with the effect of an apparent boost of the acceleration efficiency with respect to ``single shock'' models. Even if this work is based on a single simulation, we found clear differences between the radio spectra produced in the MS scenario and in the single-shock scenario. In particular, in the MS scenario electrons produce more emission at low frequencies and, hence, a steeper spectrum than in the single-shock scenario. This is particularly intriguing because, in the MS scenario, the radio emission is produced without including any other source of fossil electrons in the ICM. This can soften the assumption that single radio galaxies are the source of fossil electrons in the observed cases that require a high acceleration efficiency. The MS scenario can indeed produce a large pool of pre-accelerated electrons, with rather similar spectra and energy densities on $\sim \rm ~Mpc$ scales, which further produce coherent radio properties on the same scales if further shocked. The latter may instead be a problem for models in which the fossil electrons come from a single and recent release by a radio galaxy. In reality, radio galaxies do exist and inject fossil electrons, and we defer the investigation of the MS scenario combined with radio galaxy activity to future work.
\section*{Acknowledgements}
We gratefully acknowledge the very useful feedback from our reviewer, H. Kang, which significantly improved our numerical analysis with respect to the first submitted version. We acknowledge financial support from the European Union’s Horizon 2020 programme under the ERC Starting Grant ‘MAGCOW’, no. 714196. D.W. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 441694982. The cosmological simulations were performed with the ENZO code (\hyperlink{http://enzo-project.org}{http://enzo-project.org}) and were partially produced at Piz Daint (ETHZ-CSCS, \hyperlink{http://www.cscs.ch}{http://www.cscs.ch}) in the Chronos projects ID ch2 and s585, and on the JURECA supercomputer at the NIC of the Forschungszentrum Jülich, under allocations nos. 7006, 9016, and 9059. For the creation of the NRAO video, GI acknowledges the assistance of the $(AM)^2$ research group of Prof. S. Morigi of the Department of Mathematics of the University of Bologna and of the Visit Lab of A. Guidazzoli of CINECA, under the project FIBER OF THE UNIVERSE (\hyperlink{http://visitlab.cineca.it/index.php/portfolio/fiber-of-the-universe/}{http://visitlab.cineca.it/index.php/portfolio/fiber-of-the-universe/}). We acknowledge the usage of online storage tools kindly provided by the INAF Astronomical Archive (IA2) initiative (\hyperlink{http://www.ia2.inaf.it}{http://www.ia2.inaf.it}).
\section*{Data Availability}
Both the tracer data used for this work \footnote{\hyperlink{https://owncloud.ia2.inaf.it/index.php/s/zcskRy1bryHN62i}{https://owncloud.ia2.inaf.it/index.php/s/zcskRy1bryHN62i}} and the IDL code used to evolve their energy spectra \footnote{\href{https://github.com/FrancoVazza/IDL_FP}{https://github.com/FrancoVazza/IDL$\_$FP}} are publicly available. The \textsc{ENZO} code used to produce the cosmological simulation is also publicly available \footnote{\url{enzo-project.org}}.
\bibliographystyle{mnras}
\section{Introduction}\label{s:intro}
Observations of binary neutron star (BNS) mergers and their electromagnetic (EM) counterparts can probe the expansion history of the universe \cite{Schutz:1986ss,Holz_2005}. Gravitational waves from compact binary coalescences (CBC) are standard sirens, meaning that identical mergers always have the same intrinsic luminosity; therefore, we can directly estimate the luminosity distance of their source. EM counterparts, such as kilonovae or gamma-ray bursts, allow astronomers to identify the host galaxies \cite{Dalal_2006, Nissanke_2010}, from which we can measure the cosmological redshift. The relationship between the luminosity distance of the source and its redshift depends on the cosmological parameters and, at late times, is dominated by the Hubble constant ($H_0$). Currently, measurements based on the Cosmic Microwave Background \cite{refId0} are in tension with observations based on the local distance ladder \cite{Riess_2019}; this tension has risen to the $4.4 \sigma$ level \cite{Riess_2019,DiValentino:2021izs,Freedman:2021ahq}. Being a completely independent and self-calibrated measurement of $H_0$, standard sirens could have a crucial role in resolving this tension. The first estimation of $H_0$ from standard sirens \cite{LIGOScientific:2017adf} gave results broadly consistent with the other measurements available to date \cite{freedman2017cosmology}. Multiple multimessenger detections (i.e. joint gravitational-wave and EM detections) are required to improve the accuracy; several studies predict that a $1\%$ $H_0$ measurement accuracy is achievable with $\mathcal{O}(100)$ detections~\cite{nissanke2013determining, Chen_2018, Feeney_2019, 2019PhRvD.100j3523M}. This would be sufficient to resolve the $H_0$ tension. An accurate estimation of $H_0$ will depend crucially on the understanding of the systematic uncertainties in both the EM and the gravitational-wave observations. One source of systematic error related to the observation of the EM counterpart concerns the peculiar velocity field of the host galaxy \cite{10.1093/mnras/staa049,10.1093/mnras/staa1120,Mukherjee_2021}. The uncertainty on the peculiar velocity is dominant only for extremely close events and is negligible for most of the expected future detections. An additional bias on the inferred inclination can arise from mis-modelling the kilonova signal \cite{Chen_2020}. Improved models of the kilonova emission could reduce this uncertainty; however, this will require significant theoretical progress \cite{10.1093/mnras/stab221}. Currently, the known dominant systematic uncertainty in the standard siren approach comes from the gravitational-wave data. The main source is the calibration of the detectors, with an uncertainty in the amplitude of the calibrated strain below $2\%$ in both the LIGO~\cite{Sun_2020, 2015CQGra..32g4001L} and Virgo~\cite{TheVirgo:2014hva, Estevez_2021} detectors. This uncertainty should decrease in future observing runs but, even at the current level, it does not limit the resolution of the $H_0$ tension \cite{arxiv.2204.03614}. An unaccounted-for source of systematic uncertainty could arise from mis-modelling the noise of the gravitational-wave detectors in the estimation of the luminosity distance. Indeed, the most widely used inference codes for gravitational waves -- \texttt{LALInference} \cite{Veitch_2015}, \texttt{Bilby} \cite{Ashton_2019,10.1093/mnras/staa2850}, \texttt{PyCBC Inference} \cite{Biwer_2019} and \texttt{RIFT} \cite{lange2018rapid} -- assume that the detector noise is both stationary and Gaussian \cite{PhysRevD.84.122004}.
Gaussianity refers to the distribution of the noise and means that the noise can be completely characterised by a mean vector and a covariance matrix. Stationarity means that the statistical properties of the noise do not vary in time. In reality, due to broadband sources of noise of instrumental or environmental origin, data from ground-based detectors such as LIGO and Virgo are both non-Gaussian and non-stationary \cite{Abbott_2016,Abbott_2020,RICHABBOTT2021100658,Davis_2021}. Non-Gaussianities are generally noise transients (called glitches \cite{Nuttall_2015, Zevin_2017}) that last on the order of a second. Non-stationary noise, instead, can vary the detector sensitivity on the order of tens of seconds, especially affecting long-duration signals. Noise transients are more obvious within gravitational-wave data than non-stationary noise. As such, analyses can be performed with noise transients either subtracted or with the frequency range of the analysis restricted to limit their impact (e.g.~\cite{LIGOScientific:2017vwq, Pankow_2018, LIGOScientific:2020ibl}). It is not possible to employ these workaround techniques with non-stationary noise, as this noise tends to be more subtle, harder to model, and usually impacts a large frequency range. The effect of mis-modelling the noise on the parameter estimation of gravitational-wave signals can be estimated analytically by assuming the \textit{linearised signal approximation} (LSA) \cite{PhysRevD.77.042001}, where the template waveform $h(\theta)$ is expanded as a linear function of the true signal $h_0$ across the expected uncertainties of the parameters~\cite{Edy_2021}. With this approximation and using an uninformative prior, the maximum likelihood of the parameters averaged over non-stationary Gaussian noise realisations is an unbiased estimator of the true source parameters, which means that mis-modelling the noise does not affect the posterior mode. Non-stationarity affects only the uncertainty on the posteriors, which is mis-estimated in particular for longer signals such as BNS~\cite{Edy_2021}. Although the LSA is a good baseline to understand the effect of non-stationary noise in simple cases, it is impractical for real data, being valid only for high-SNR signals. As shown in~\cite{PhysRevD.77.042001}, correctly estimating the parameters of low-SNR signals requires higher orders in the template expansion. Moreover, the prior cannot be easily handled analytically when it is non-flat or non-Gaussian. Including an informative prior can bias the estimation, introducing noise-dependent terms in the posterior mode. As discussed in section \ref{Section:Injection}, the luminosity distance is generally estimated by adopting a prior uniform in Euclidean volume; therefore, mis-estimating the noise could bias the luminosity distance posterior. In conclusion, non-stationary noise could affect the estimation of the luminosity distance and, consequently, the number of detections necessary to reach a few-percent measurement of $H_0$. More importantly, non-stationary noise could bias the estimation of $H_0$. In this paper we investigate how non-stationary noise affects the estimation of the luminosity distance of BNS signals from gravitational-wave data, assuming the detection of an electromagnetic counterpart. We use publicly available LIGO and Virgo data from the first half of the third observing run (O3a) \cite{Vallisneri_2015, RICHABBOTT2021100658}.
While previous studies have investigated how to obtain more accurate parameter estimation in non-stationary data \cite{PhysRevD.102.124038, PhysRevD.103.044006}, fully accounting for non-stationarity in parameter estimation is still computationally prohibitive. Therefore, we aim to determine whether this effort is necessary to solve the Hubble tension. The rest of the paper is organised as follows. Section \ref{Section:Basics non-stat} describes the Bayesian approach widely used to estimate the parameters of gravitational-wave signals and how non-stationarity breaks its basic assumptions. In section \ref{Section:Method} we introduce our investigation of the effect of non-stationary noise on the estimation of the luminosity distance; we also present our results, discussing the possible consequences for the estimation of $H_0$ through gravitational-wave data. In section \ref{Section:Conclusions} we summarise our main results and conclude.
\section{Parameter estimation in non-stationary noise}\label{Section:Basics non-stat}
The output of a gravitational-wave interferometer is a time series $d(t)$ such that:
\begin{equation}
d(t)=
\begin{cases}
n(t)+h(t), & \text{if a signal is present,}\\
n(t), & \text{otherwise,}
\end{cases}
\end{equation}
where $n(t)$ is the detector noise and $h(t)$ is a gravitational-wave signal. Gravitational-wave transient searches identify signals by matched-filtering the data with a number of templates which sample the waveform parameter space \cite{Cutler_1993, Allen_2012}. Once the merger time is identified, the posterior probability densities of the source parameters are extracted using a Bayesian approach \cite{PhysRevD.46.5236, Romano_2017}. This approach requires the computation of the likelihood function, which represents the probability of observing data $d$ assuming that the signal has parameters $\theta$. If the noise is Gaussian with zero mean, the single-detector likelihood takes the form \cite{Veitch_2010, LIGOScientific:2017vwq}
\begin{equation}\label{like}
\mathcal{L} = p(d | h(\theta)) = \frac{1}{\det(2\pi C_n)}e^{-\frac{1}{2} r^{\dagger}(f)C_n^{-1}r(f)} ,
\end{equation}
where the residual $r(f) = d(f)-h(f,\theta)$ is assumed to have the same distribution as the noise, and $C_n=\langle n^{*}(f) n(f') \rangle$ is the noise covariance matrix, with the angle brackets denoting an average over different realisations of the noise. We use the variables $t$ and $f$ to indicate whether a quantity is in the time or frequency domain. If the data are stationary, different frequencies are completely uncorrelated. In this case, ${\langle n^{*}(f) n(f') \rangle}$ is diagonal and is fully described by the noise power spectral density (PSD) $S_n(f)$:
\begin{eqnarray}\label{PSD}
\langle n^{*}(f) n(f') \rangle = \frac{T}{2} S_n (|f|) \delta(f - f') ~,
\end{eqnarray}
where $T$ is the duration of the analysed data. Combining Equations~\eqref{like} and \eqref{PSD} we obtain the likelihood function typically used for gravitational-wave parameter estimation \cite{Veitch_2015}. This model is accurate for short segments of data from ground-based interferometers. However, the assumption of stationarity breaks down over periods of 64 seconds \cite{PhysRevD.91.084034}, a smaller window than what is typically needed to analyse binary neutron star mergers. Non-stationarity appears in Equation \eqref{like} as off-diagonal terms in the covariance matrix \cite{talbot2021inference}.
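For reference, with a diagonal covariance the exponent of Equation~\eqref{like} reduces to the familiar noise-weighted inner product. A minimal sketch (omitting the normalisation constant; factor-of-two conventions for one- versus two-sided PSDs vary between codes) is:
\begin{verbatim}
import numpy as np

def log_likelihood(d_f, h_f, psd, T):
    """Stationary-noise log-likelihood: with the diagonal covariance
    C_n = (T/2) S_n(f), the exponent -(1/2) r^dag C_n^-1 r becomes
    -sum |d - h|^2 / (T S_n).  d_f, h_f are frequency-domain data and
    template on the same grid; psd is S_n(f); T is the data duration."""
    r = d_f - h_f
    return -np.sum(np.abs(r) ** 2 / psd) / T
\end{verbatim}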
Ref.~\cite{Edy_2021} showed that ignoring these terms affects the width of the posterior, with more evident effects for longer-duration signals. Nevertheless, accounting for non-stationarity would increase the computational cost from $\mathcal{O}(N)$ to $\mathcal{O}(N^2)$, where $N^2$ is the number of elements in the covariance matrix. This would be prohibitive, in particular for longer signals. Even assuming that a diagonal covariance matrix is a fair approximation, non-stationary noise would bias the estimation of the noise spectrum. As a workaround, a common approach is to compute the PSD ``off-source'', using data close to, but not containing, the detected signal. However, this approach has an intrinsic uncertainty which could introduce new biases in the parameter estimation \cite{2020PhRvR...2d3298T}; moreover, it produces poor estimates of the noise for long signals \cite{Chatziioannou_2019}. A better approach is to estimate the noise spectrum ``on-source'' using parametrised models \cite{PhysRevD.91.084034, Cornish_2015} and to marginalise over the noise estimation uncertainty \cite{Biscoveanu_2020}.
\subsection{Non-stationary noise in LIGO and Virgo data}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/psdvar_dist.png}
\caption{\label{Fig:psdvar} PSD variation ($v_{s}$) distribution in LIGO Livingston based on data from O2 (blue) and O3 (orange), and estimated for O4 (green). The black curve shows the expected distribution in Gaussian stationary noise based on O3 data. The vertical red dashed line shows the limit above which we consider data to be non-stationary.}
\end{figure}
To obtain an accurate measurement of $H_0$, the standard siren method requires us to combine several multi-messenger detections of binary neutron stars. We saw that non-stationary noise can affect the parameter estimation of longer signals; we now want to estimate how many signals could be detected, on average, in non-stationary data. We identify non-stationary noise using the approach described in Ref.~\cite{Mozzon_2020}. This method relies on modelling the relation between the noise spectrum computed over a short stretch of data (typically 8 seconds) and over a longer segment of time (512 seconds) with a frequency-independent factor, $v_s$, such that $S_n(\mathrm{short}) = v_s S_n(\mathrm{long})$. The time series $v_s(t)$, also called the PSD variation statistic, has proven effective at tracking non-stationarity in LIGO and Virgo data during the third observing run (O3)~\cite{Davis_2021}. The PSD variation at each time depends only on the amplitude of the non-stationarity and is completely independent of the shape of the noise. In Gaussian and stationary noise, the PSD variation statistic is well modelled by a Gaussian distribution with mean 1 and a variance dependent on the bandwidth of the detector, as shown by the black curve in Figure~\ref{Fig:psdvar}, based on O3 data. As shown in Figure \ref{Fig:psdvar}, non-stationarity appears as a tail of high $v_s$ values. For simplicity, here we consider $v_s>1.2$ as an indicator of non-stationarity in the data. With this approximation we analyse LIGO and Virgo data. We find that the fraction of non-stationary data in the LIGO detectors almost doubled between the second observing run (O2) and O3a. During O3a, $\sim$2\% of LIGO Hanford and LIGO Livingston data and $\sim$1\% of Virgo data are non-stationary.
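The statistic can be sketched in a few lines. This is a simplified version of Ref.~\cite{Mozzon_2020} for illustration only; the production statistic also weights the spectrum by the detector response and a fiducial template, which we omit here, and all parameter choices below are assumptions:
\begin{verbatim}
import numpy as np
from scipy.signal import welch

def psd_variation(strain, fs, t_short=8.0, t_long=512.0):
    """Frequency-averaged ratio between a short-stretch and a
    long-stretch PSD estimate; requires at least t_long seconds of
    data sampled at fs Hz."""
    f_l, psd_long = welch(strain, fs=fs, nperseg=int(t_long * fs))
    f_s, psd_short = welch(strain[-int(t_short * fs):], fs=fs,
                           nperseg=int(t_short * fs))
    # compare on the coarser (short-segment) frequency grid
    psd_long_i = np.interp(f_s, f_l, psd_long)
    band = (f_s > 20.0) & (f_s < 1024.0)      # assumed analysis band
    return np.mean(psd_short[band] / psd_long_i[band])
\end{verbatim}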
Randomly placing 10,000 signals of 128 seconds in duration in the data, we found that 15\% of the signals would lie in non-stationary noise in at least one detector for 10 or more seconds around the merger time. Therefore, on average more than 1 in 7 BNS detections could have been affected by non-stationary noise during O3a. Assuming the fraction of non-stationary data to depend linearly on the sensitivity of the detectors, we predict the levels of non-stationarity in LIGO Livingston for the next two observing runs~\cite{Abbott_2020_1}. The assumption is consistent with the rate of non-stationarity observed in LIGO data from the first three observing runs. We predict that on average 4\% and 9\% of data will be non-stationary for O4 and O5, respectively. Figure \ref{Fig:psdvar} shows the measured distribution of the PSD variation in LIGO Livingston for O2 and O3a, and the estimate for O4; for O4, we assume an average BNS range of 180 Mpc. Similar values are predicted for LIGO Hanford, showing that accounting for non-stationarity will be increasingly important in the future.
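The random-placement estimate above can be reproduced with a short Monte Carlo. This is a sketch under a simplified overlap criterion (the actual test required 10 or more seconds of overlap around the merger time); the segment list and variable names are illustrative assumptions:
\begin{verbatim}
import numpy as np

def affected_fraction(bad_segments, t_obs, n_trials=10_000,
                      sig_len=128.0, window=10.0, rng=None):
    """Fraction of randomly placed signals whose `window` seconds
    around the merger time intersect a non-stationary segment.
    bad_segments: list of (start, end) times flagged as non-stationary;
    t_obs: total observing time in seconds."""
    rng = rng or np.random.default_rng()
    mergers = rng.uniform(sig_len, t_obs, n_trials)
    hits = 0
    for t in mergers:
        lo, hi = t - window / 2, t + window / 2
        if any(s < hi and e > lo for s, e in bad_segments):
            hits += 1
    return hits / n_trials
\end{verbatim}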
\section{Estimating the effect of non-stationary noise}\label{Section:Method}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/New_spectrogram.png}
\caption{\label{Fig:real_times} Time-frequency spectrograms of stationary (left) and non-stationary (right) data measured by LIGO Livingston during the LIGO and Virgo third observing run.}
\end{figure*}
In order to investigate the effect of non-stationary noise on the estimation of the luminosity distance, we add a population of 467 simulated binary neutron stars to O3a LIGO and Virgo data. We target 28 separate periods of non-stationary data in LIGO Livingston, with durations varying between 25 and 200 seconds. In the selected segments, we require the data in LIGO Hanford and Virgo to be stationary. In fact, coincident periods of non-stationarity are rare for ground-based detectors, representing less than 2\% of the total non-stationary time. In future observing runs we expect more periods of coincident non-stationarity due to the larger fraction of non-stationary data, although it is unlikely that this will represent the dominant scenario. From the simulated signals we randomly select 100 signals with network signal-to-noise ratio (SNR) greater than 12 and perform a Bayesian analysis to estimate the luminosity distance. We choose this detection threshold in order to analyse signals which are confidently detected by search pipelines even in non-stationary noise; this choice is also consistent with similar analyses \cite{Abbott:2019yzh, arxiv.2204.03614}. Finally, we compare the results with those of an equivalent analysis made over stationary noise close in time to (but not overlapping with) the targeted non-stationary segments. The detectors' sensitivity to gravitational-wave signals varied considerably during O3a due to adjustments in the configuration of the interferometers; considering adjacent times reduces the possibility of any variation which could affect our investigation. We targeted moderate non-stationary noise, with a PSD variation value between 1.2 and 3, which constitutes 80\% of all non-stationarity. Higher values generally indicate extreme non-stationarity or very short bursts of excess power \cite{TheLIGOScientific:2016zmo,2018RSPTA.37670286N,Davis_2021} that are likely to be identified and removed before performing the parameter estimation analysis.
Figure \ref{Fig:real_times} shows two time-frequency spectrograms of LIGO Livingston data, for one targeted time and its closest adjacent period of stationary noise. The non-stationarity appears as a power excess of unknown origin distributed around 50 Hz.
\subsection{Simulations}\label{Section:Injection}
We simulate a population of non-spinning binary neutron stars with detector-frame chirp mass $\mathcal{M^{\mathrm{det}}}$ \cite{Sathyaprakash_2009} uniformly distributed between 1.7 and 1.9 $M_\odot$ and a mass ratio between $0.75$ and $1$. We distribute the mergers uniformly in Euclidean volume, drawing the signals from a prior in luminosity distance $\pi(d_L)\propto d_L^2$ \cite{PhysRevLett.116.241102, PhysRevX.9.031040} between 20 and 400~Mpc. This approximation is appropriate to describe the observed population of BNS in the luminosity distance range considered \cite{10.1093/mnras/staa2850}. To reduce the computational cost we neglect tidal effects; the tidal parameters do not contribute to the waveform amplitude and are not correlated with the luminosity distance. The choice of non-spinning injections is justified by the expected low number of events with high spins (e.g.~\cite{Zhu:2017znf}); moreover, none of the BNS signals detected so far by the LIGO and Virgo Collaborations were consistent with high spins (e.g.~\cite{PhysRevX.11.021053}). We generate the signals using the waveform model~\texttt{IMRPhenomPv2} \cite{PhysRevD.91.024043, PhysRevLett.113.151101} with a low-frequency cut-off of 20 Hz. This model has a low computational cost and provides a good approximation for a BNS system if tidal effects are neglected. We fix the sky position of the signals to the optimal location for our targeted detector, i.e. the zenith of LIGO Livingston. With this choice each signal has the highest SNR in the detector which presents non-stationary noise, so the effect of non-stationarity on the detection is maximised. We inject the signals in individual stretches of data and separate the merger times of the simulations by 4 seconds, to avoid correlations between the estimations \cite{pizzati2021bayesian, Samajdar_2021, relton2021parameter}. We calculate the PSD with the off-source approach described in Ref.~\cite{Allen_2012}, using the Welch method over 1024 seconds of data. Despite being sub-optimal compared to the ``on-source'' method, this approach is much faster for longer signals and is therefore preferable for population studies. We then use \texttt{Bilby} with the \texttt{Dynesty} sampler \cite{2020MNRAS.493.3132S} to estimate the parameters of the injections. For each signal, we analyse 128 seconds of data using the~\texttt{IMRPhenomPv2} model in its reduced-order-quadrature approximation to reduce the computational cost \cite{Canizares_2015, Smith_2016}. We use priors consistent with the generated population of signals. Assuming that the EM counterpart would allow us to uniquely identify the host galaxy, we fix the sky location to the injected value. While this is an optimistic scenario, this assumption drastically improves the accuracy of the luminosity distance estimation, helping to isolate and highlight the effect of non-stationarity. We assume that the EM counterpart does not provide any information on the binary inclination angle.
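As an example, luminosity distances uniform in Euclidean volume can be drawn with a standard inverse-CDF step. This is a sketch of the sampling step only, not of the full injection pipeline:
\begin{verbatim}
import numpy as np

def draw_dl(n, d_min=20.0, d_max=400.0, rng=None):
    """Draw n luminosity distances (Mpc) from pi(d_L) ~ d_L^2 between
    d_min and d_max, via inverse-CDF sampling: the CDF of d_L^2 is
    (d^3 - d_min^3) / (d_max^3 - d_min^3)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    return (u * (d_max**3 - d_min**3) + d_min**3) ** (1.0 / 3.0)
\end{verbatim}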
\subsection{Bias in luminosity distance}\label{Section:Results}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/posterior_prova1.png}
\caption{\label{Fig:violins} Luminosity distance posteriors for 6 simulated signals added to stationary (blue) and non-stationary (orange) noise. Each posterior is centred around the true value of the simulated signal. The dashed lines show the quartiles of the distributions.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/pr_new_0.png}
\caption{\label{Fig:results} Results for 100 injections recovered in stationary and non-stationary noise. The grey regions cover the cumulative 1, 2 and 3 $\sigma$ confidence intervals accounting for sampling errors. The blue lines represent the cumulative fraction of true luminosity distances found within each confidence interval (C.I.). The luminosity distance p-values for stationary and non-stationary noise are displayed in parentheses in the plot legend. The p-value for stationary noise is 0.759, consistent with being drawn from a uniform distribution.}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/Bootstrap1.png}
\caption{\label{Fig:bias} Results for 100 BNS injections in simulated Gaussian noise. The top plots show the variation of the luminosity distance p-value as a function of the bias introduced in the luminosity distance posterior samples. For each level of bias, we calculate the Kolmogorov-Smirnov p-value for 100 posteriors randomly selected from our sample of signals. The blue line represents the median luminosity distance p-value from 50 different random samplings; the blue area delimits the 5th and 95th percentiles of the p-value distribution. The red dashed line shows the p-value measured for events in non-stationary noise. In the bottom plots we show how the bias distorts the cumulative fraction of injected luminosity distances found within each confidence interval in two particular cases: $\Delta d_L = 0.047$ (\textit{left}) and $\sigma_b = 0.25$ (\textit{right}). These values correspond to the intersections between the blue and red lines in the upper plots, i.e. the median biases associated with the non-stationary p-values.}
\end{figure*}
We first present the luminosity distance posterior distributions for a representative sample of the simulated signals. In Figure \ref{Fig:violins} we compare the distributions obtained for identical signals detected in stationary and non-stationary noise. The effects of non-stationarity vary between signals: while the posteriors of signals 1 and 5 appear to be over-constrained in non-stationary noise, the main effect on simulations 2, 3 and 4 is a shift towards smaller luminosity distance values. Considering the signals detected in both stationary and non-stationary noise, we find that the median distance is reduced on average by 1.4\%. Similarly, the 25th and 75th percentiles of the estimated luminosity distance distributions are shifted on average by 1.1\% and 1.5\%, suggesting that non-stationarity might cause a rigid shift of the luminosity distance posteriors. Note that only 80\% of the signals are detected in both stationary and non-stationary noise; the remaining detections differ between the two sets. Considering all the signals, we find the median of the luminosity distance posterior to be lower than the corresponding true value for 58\% of the signals in non-stationary noise, in contrast with 47\% for the signals in stationary noise.
These differences indicate the possible presence of systematics in the estimates obtained in non-stationary noise. However, directly comparing the posterior distributions obtained with and without non-stationary noise for each event is not sufficient to identify systematic biases: different realisations of stationary Gaussian noise might also cause the inferred parameters to vary. To verify whether the observed distortions could be explained as random variations of the noise, we compare the luminosity distance posteriors by computing the normalised cumulative fraction of true luminosity distances which lie within a measured confidence interval \cite{doi:10.1198/106186006X136976}. This approach is commonly used to identify biases in the inference on gravitational-wave sources \cite{10.1093/mnras/staa2850, gair2015quantifying, Berry_2015, Biwer_2019, PhysRevD.89.084060, PhysRevD.92.023002}. Figure \ref{Fig:results} shows the results for 100 injections in stationary and non-stationary noise. This is generally referred to as a P-P plot. If the inference is unbiased, the fraction of events in a particular confidence interval is drawn from a uniform distribution. Hence, the expected cumulative distribution would lie on the diagonal of the plot, with some scatter due to Poisson error. The shaded regions delimit the expected 1, 2 and 3-sigma errors given the number of events. For signals injected in non-stationary noise the cumulative distribution is systematically below the diagonal, exceeding the 2-sigma error for confidence intervals between 0.6 and 0.8. We test the consistency between the measured curves and the diagonal line using the Kolmogorov-Smirnov (KS) statistic. For unbiased parameter estimation the two-tailed p-value is uniformly distributed between 0 and 1; therefore, a p-value $<0.05$ will occur one time in 20. For the curves in Figure \ref{Fig:results} the resulting p-values are $0.759$ and $0.070$ for stationary and non-stationary noise respectively. A smaller p-value indicates that the measured curve is unlikely to be randomly drawn from the assumed distribution if the sampler is unbiased. In particular, there is just a 7\% chance of obtaining a curve more extreme than the one measured for events in non-stationary noise. As shown in Table \ref{snr table}, we obtain higher p-values when increasing the cut in SNR, showing that the distortion is reduced for louder signals. However, even for louder signals the P-P plot presents similarities with Figure \ref{Fig:results}. \begin{table} \begin{center} \begin{tabular}{||c || c | c | c||} \hline SNR cut & 12 & 13 & 14 \\ [0.5ex] \hline\hline stationary & 0.759 & 0.535 & 0.468 \\ \hline non-stationary & 0.070 & 0.161 & 0.368 \\ \hline \end{tabular} \caption{\label{snr table} Luminosity distance p-values for stationary and non-stationary noise as a function of the SNR cut imposed.} \end{center} \end{table} The inconsistencies in the observed p-values can be interpreted as a systematic bias in the luminosity distance estimated in non-stationary noise. Power excess in the data can increase the matched-filter SNR of the detection, decreasing the estimated luminosity distance. However, a lower p-value can also arise from over-constraining the posterior. If the posterior is over-constrained, we would see a larger fraction of events in lower confidence intervals and a smaller fraction in higher intervals. This distortion with respect to the predicted curve would lower the p-value.
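For reference, the P-P construction and KS test used above are straightforward to reproduce. The following is a minimal sketch (ours, not the analysis code of this paper), assuming \texttt{numpy} and \texttt{scipy}: given posterior samples and the injected value for each event, it computes the credible level at which each true value is recovered and tests those levels for uniformity.
\begin{verbatim}
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)

def credible_levels(posterior_samples, true_values):
    """Fraction of posterior samples below the true value, per event;
    uniformly distributed on [0, 1] if the inference is unbiased."""
    return np.array([np.mean(s < t)
                     for s, t in zip(posterior_samples, true_values)])

# Toy unbiased example: 100 events, Gaussian posteriors centred on truth
truths = rng.uniform(20.0, 400.0, 100)
posteriors = [rng.normal(t, 0.1 * t, 5000) for t in truths]

levels = credible_levels(posteriors, truths)
stat, p_value = kstest(levels, "uniform")  # two-tailed KS p-value
print(stat, p_value)
\end{verbatim}
The sorted \texttt{levels} array, plotted against the uniform quantiles, yields the P-P curve; a systematic bias shows up as a departure from the diagonal, exactly as in Figure \ref{Fig:results}.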
To investigate these two scenarios and quantify the bias, we repeat our analysis adding 800 signals to simulated Gaussian stationary noise. For consistency with the analysis in real data we require a network SNR$>$12, which yields a final sample of 153 signals. For each signal we then artificially bias the estimated luminosity distance posterior $d_L$ by shifting the distribution by a constant value $\Delta d_L$, such that: \begin{equation} d_{L, \mathrm{biased}} = d_{L} - \Delta d_L\times d_{L,\mathrm{inj}}, \end{equation} where $d_{L,\mathrm{inj}}$ is the injected luminosity distance. We calculate the KS p-value by randomly selecting 100 signals from our sample. To account for variations in the estimated p-value due to the selection of signals, we repeat the calculation for 50 different random samples of signals. Finally, we investigate the relation between the p-value and the bias by repeating this procedure for increasing values of $\Delta d_L$, re-estimating the p-value at each iteration. We perform a similar test to understand the effect of over-constraining the posterior. In this case we uniformly shrink the distribution around the median luminosity distance. This modification has the effect of reducing the sample standard deviation of the luminosity distance distribution $\sigma(d_L)$ by a factor $\sigma_b$ for each signal; for example, $\sigma_b = 0.1$ corresponds to a 10\% reduction of the standard deviation of the luminosity distance distribution for each signal. The top plots of Figure \ref{Fig:bias} show the evolution of the luminosity distance p-value as a function of the bias introduced in the posteriors for the two cases. The blue line represents the median p-value for each level of bias. The blue area encloses all values between the 5th and the 95th percentiles. The relation between the p-value and the bias is monotonic, with lower p-values indicating greater biases. Indeed, greater distortions of the posterior distribution make the assumption that the sampler is unbiased less likely. From these plots we can infer the bias associated with the p-value observed for events in non-stationary noise, which is indicated in Figure \ref{Fig:bias} with a red dashed line. The measured p-value is consistent with a $4.7^{+2.1}_{-1.7}$\% systematic under-estimation of the measured luminosity distance, or a $25^{+6}_{-5}$\% reduction in the dispersion of the posterior distributions. In the bottom plots of Figure \ref{Fig:bias} we show how these two biases distort the P-P plots. Reducing the sample variance (right plot) induces an ``S-shape'' in the cumulative fraction of injections found in each confidence interval, increasing it for lower confidence intervals and decreasing it for higher levels. Instead, reducing the mean of the posteriors (left plot) systematically decreases the cumulative fraction of real luminosity distances in each confidence interval. Qualitatively comparing these plots with Figure \ref{Fig:results}, we conclude that the dominant measured effect of non-stationary noise is a systematic under-estimation of the luminosity distance.
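Both bias transformations act directly on the posterior samples. A minimal sketch (ours, assuming \texttt{numpy}) of the shift by $\Delta d_L$ and the uniform shrinking around the median by a factor $\sigma_b$:
\begin{verbatim}
import numpy as np

def shift_posterior(d_samples, d_inj, delta):
    """Rigid shift of the posterior: d - delta * d_inj."""
    return d_samples - delta * d_inj

def shrink_posterior(d_samples, sigma_b):
    """Shrink around the median so the standard deviation is
    reduced by a fraction sigma_b (e.g. 0.1 -> 10% narrower)."""
    med = np.median(d_samples)
    return med + (1.0 - sigma_b) * (d_samples - med)
\end{verbatim}
One can check that the shrink operation reduces the standard deviation of the samples by exactly the fraction $\sigma_b$ while leaving the median unchanged.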
\subsection{Considerations on the estimation of $H_0$} In principle, a 4.7\% systematic under-estimation of the luminosity distance of the source, combined with the other expected systematic uncertainties, could dramatically affect the accuracy of the estimation of $H_0$ using standard sirens. Despite this inaccuracy, standard sirens would still help to break the $H_0$ tension. Let us consider the worst case, in which the systematics due to non-stationarity and calibration simply add up. With a 1\% calibration error, this would correspond to a 5.7\% systematic under-estimation of the luminosity distance, i.e. we would infer 5.7\% higher values of $H_0$. Assuming the early universe estimation of $H_0$ ($67.4\pm 0.5$ km~s$^{-1}$~Mpc$^{-1}$ \cite{refId0}) to be correct, we would obtain $H_0=71.2\pm 0.7$ km~s$^{-1}$~Mpc$^{-1}$, where we have assumed our estimate to be Gaussian distributed with a 1\% error. In this case, the effect of non-stationary noise would make the standard siren estimate fall between the early universe measurement and the local distance ladder estimation of $H_0$ ($74.03\pm 1.42$ km~s$^{-1}$~Mpc$^{-1}$ \cite{Riess_2019}); therefore, neither of the two hypotheses could be confidently excluded. In the worst case presented in Figure \ref{Fig:bias}, i.e. a 6.8\% under-estimation of the luminosity distance, the standard siren method could even favour the wrong hypothesis. However, our results represent an upper limit on the shift of the luminosity distance due to non-stationary noise, with the measured p-value likely resulting from a combination of effects. Moreover, to reach the precision of a few percent required to resolve the current tension on the estimation of $H_0$, it may be necessary to combine at least $\sim$50 standard sirens~\cite{Feeney_2019}. Of these, only a fraction will be affected by non-stationary noise, hence the error on the estimate will be reduced. On the other hand, the number of standard sirens required to attain the necessary precision may be reduced by additional EM constraints on, e.g., the source orientation, as with GW170817~\cite{Hotokezaka:2018dfi,Mukherjee:2019qmm}, and this could further compound any present bias. Therefore, assessing the level of non-stationarity for individual BNS detections will be important in confidently presenting estimates of $H_0$ free from significant bias. Other methods to estimate $H_0$, which rely on shorter signals such as binary black holes, will also be important to improve the accuracy on $H_0$~\cite{Fishbach_2019, Soares_Santos_2019, Gray_2020, Finke_2021}. These approaches require shorter periods of data, so the effect of non-stationary noise will be less important.
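As a quick numerical check of the $H_0$ shift quoted earlier in this subsection (our sketch, using the first-order approximation that a fractional under-estimation of $d_L$ translates into the same fractional over-estimation of $H_0$):
\begin{verbatim}
# Worst case: 4.7% (non-stationarity) + 1% (calibration) = 5.7% bias
H0_true = 67.4                        # km/s/Mpc, early-universe value
bias = 0.057                          # fractional under-estimate of d_L
H0_biased = H0_true * (1.0 + bias)    # H0 ~ 1/d_L, to first order
H0_err = 0.01 * H0_biased             # assumed 1% Gaussian error
print(H0_biased, H0_err)              # ~71.2 +/- 0.7 km/s/Mpc
\end{verbatim}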
\section{Summary and Conclusions}\label{Section:Conclusions} In this paper we have investigated whether the presence of non-stationarity in LIGO and Virgo data introduces a new source of systematic error in the estimation of the Hubble constant through the standard siren approach. The problem is particularly important for longer duration signals such as BNS, for which longer periods of data are required, making the parameter estimation more vulnerable to fluctuations in the detector noise. Indeed, we found that during O3a non-stationarity accounts for 2\% of the overall LIGO data. By placing simulated BNS signals of 128 seconds in length throughout O3a data, we find that the merger times of 1 in 7 BNS signals could have fallen in non-stationary noise. More importantly, this fraction is predicted to increase, with non-stationarity expected to reach an estimated 4\% and 9\% of the overall data in O4 and O5 respectively. We explore the issue of non-stationarity, and how it affects the estimation of the luminosity distance of the source, by adding simulated BNS signals to stationary and non-stationary data from O3a. We compare the luminosity distance posteriors obtained in the two cases by calculating the cumulative fraction of true luminosity distances which lie within a measured confidence interval, and employ the Kolmogorov-Smirnov test to estimate the consistency of the results with the theoretical predictions. We found a lower p-value (0.070) for events in non-stationary noise, showing that the null hypothesis of an unbiased estimation is disfavoured. To understand the magnitude of the misestimation, we artificially biased the posteriors of BNS signals estimated in simulated Gaussian stationary noise. We found that the measured p-value for non-stationary noise is consistent with a systematic under-estimation of the measured luminosity distance by up to 6.8\%. The estimated bias in the luminosity distance is an upper limit and does not automatically translate into an expected systematic error in the estimation of $H_0$. First, just a fraction of the BNS-like gravitational-wave detections will be measured in non-stationary noise. It is estimated that $\mathcal{O}(100)$ joint gravitational-wave and EM detections are needed in order to infer $H_0$ to an accuracy of 1$\%$; in the combination of these $\mathcal{O}(100)$ signals, of which $\sim 15\%$ may be affected by non-stationarity, the bias is unlikely to have a large effect on the accuracy of $H_0$. Moreover, binary black hole detections are expected to give an important contribution to improving the accuracy of $H_0$ \cite{LVK:2021bbr}. The duration of these signals is of the order of seconds, making the effect of non-stationary noise less important. Therefore, although non-stationarity in LIGO and Virgo data will affect the standard siren estimation of $H_0$, we do not expect it to be a limiting factor in resolving the tension on $H_0$ using data from second generation (2G) detectors. However, until gravitational-wave inference methods fully account for non-stationary noise, assessing the level of non-stationarity in the data, in particular for louder signals, will be crucial to exclude biases in the $H_0$ estimation. The next generation (3G) of gravitational-wave detectors, such as the Einstein Telescope~\cite{Punturo:2010zz,Maggiore:2019uih} and Cosmic Explorer~\cite{LIGOScientific:2016wof,Reitze:2019iox}, with their increased sensitivity at lower frequencies, will detect much longer duration gravitational-wave signals than current 2G detectors. Non-stationarity will still be an issue in these detectors, although we cannot yet estimate whether it will be at similar levels to, or worse than, what we see in the 2G detectors. Either way, non-stationarity will have to be considered in the interpretation of long duration signals in 3G detectors to ensure this form of noise does not impact key scientific conclusions. \acknowledgments We are grateful to the referees for their very valuable comments and suggestions. We are thankful to the LIGO/Virgo Cosmology group for insightful comments on this work, as well as to Sylvia Biscoveanu and Ian Harry for useful discussions on this paper. SM was supported by a STFC studentship. GA and LKN thank the UKRI Future Leaders Fellowship for support through the grant MR/T01881X/1. ARW thanks the STFC for support through the grant ST/S000550/1. This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S.
National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This work carries LIGO Document number P2100316.
\section{Introduction} Let $a(n),b(n):\mathbb{N} \to\mathbb{C}$ be two complex valued sequences. Following the literature \cite{GT12b}, we say they are ``strongly orthogonal'' if \[\sum_{n\leq N}a(n)b(n)=O_A\Bigbrac{(\log N)^{-A}\sum_{n\leq N}|a(n)b(n)|} \] holds for any $A>0$ and uniformly for $N\geq 2$. The study of the correlation between two sequences is an important subject in number theory. Here we list some well-known results, taking the case where one of the sequences is the M\"obius function as an example. \begin{enumerate}[$\bullet$] \item For $(a(n),b(n))=(\mu(n),1)$, the strong orthogonality $\sum_{n\leq N}\mu(n)\ll Ne^{-c\sqrt{\log N}}$ is essentially equivalent to the prime number theorem; \item For $(a(n),b(n))=(\mu(n),e(n\alpha))$, this is a classical result due to Davenport \cite{Dav}, who proved strong orthogonality by modifying Vinogradov's method on bilinear forms; \item For $(a(n),b(n))=(\mu(n),e(n^k\alpha))$, Hua \cite{Hua} generalized Davenport's strong orthogonality result from linear phase functions to polynomial phase functions; \item For $(a(n),b(n))=(\mu(n),F(g(n)\Gamma))$, where $F(g(n)\Gamma)$ is a nilsequence, Green and Tao \cite{GT12b} proved that the M\"obius function is strongly orthogonal to polynomial nilsequences. \end{enumerate} In this paper, we consider the correlation of nilsequences with a class of multiplicative functions. In the rest of this section, we focus on the historical developments most relevant to the theme of this paper. Suppose that $f:\mathbb{N}\to\mathbb{C}$ is a 1-bounded multiplicative complex valued function. Daboussi \cite{Dab} proved that for any irrational frequency $\alpha\in\mathbb{R}/\mathbb{Z}(=\mathbb{T})$, \[ S(f,\alpha):=\frac{1}{N}\sum_{n\leq N}f(n)e(n\alpha)=o(1). \] Montgomery and Vaughan \cite{MV} considered a much more general class of multiplicative functions. They relax the $1$-bounded condition to the following two conditions \begin{equation}\label{bound-prime} |f(p)|\leq A, \textrm{ for all primes} \ p, \end{equation} and \begin{equation}\label{bound-l2} \sum_{n\leq N}|f(n)|^2\leq A^2N, \text{ for large natural numbers} \ N, \end{equation} where $A\geq 1$ is some constant. More precisely, they proved that if $\norm{q\alpha}\leq\frac{1}{q}$ for some integer $1\leq q\leq N(\log N)^{-3}$, then for every function $f$ satisfying conditions (\ref{bound-prime}) and (\ref{bound-l2}), we have \[ S(f,\alpha)\ll \frac{1}{\sqrt{\phi(q)}}+\frac{1}{\log N}. \] Note that when $q\geq(\log N)^{2+\varepsilon}$, the above upper bound is dominated by the second term. Thus, when the function $f$ has reasonable decay in arithmetic progressions with moduli less than $(\log N)^{2+\varepsilon}$, we can expect that for any frequency $\alpha\in\mathbb{T}$, \begin{align}\label{aim} \frac{1}{N}\sum_{n\leq N}\bigbrac{f(n)-\mathbb{E}_{[N]}f}e(n\alpha)\ll (\log N)^{-1}, \end{align} where we have written $\mathbb{E}_{[N]}f=\mathbb{E}_{n\in[N]}f(n)=\frac{1}{N}\sum_{n\leq N}f(n)$ for the average of $f$ on the discrete interval $[N]=\set{1,2,\dots,N}$.
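Before moving on, we note that the cancellation phenomena above are easy to observe numerically. The following short sketch (ours, assuming only \texttt{numpy}) sieves the M\"obius function and evaluates $|S(\mu,\alpha)|$ at a rational and an irrational frequency; in both cases the magnitude comes out far below the trivial bound 1, in line with the strong orthogonality described above.
\begin{verbatim}
import numpy as np

def mobius_sieve(N):
    # mu(n) for n = 0..N; the entry mu[0] is unused
    mu = np.ones(N + 1, dtype=np.int64)
    is_prime = np.ones(N + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, N + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False
            mu[p::p] *= -1        # sign flip per distinct prime factor
            mu[p * p::p * p] = 0  # a square factor forces mu(n) = 0
    return mu

N = 10**6
mu = mobius_sieve(N)
n = np.arange(1, N + 1)
for alpha in (1.0 / 3.0, np.sqrt(2.0) - 1.0):  # rational vs irrational
    S = abs(np.sum(mu[1:] * np.exp(2j * np.pi * alpha * n))) / N
    print(alpha, S)
\end{verbatim}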
Recently, Jiang, L\"{u} and Wang \cite{JLW} weakened the above condition (\ref{bound-prime}), that $f$ takes bounded values at primes, to the following two conditions: \begin{equation}\label{p-l2} \sum_{p\leq N}|f(p)|^2\log p\ll N; \end{equation} and \begin{equation}\label{ph-l2} \twosum{p\leq N}{p+h\text{ is prime}}|f(p)f(p+h)|\ll\frac{h}{\phi(h)}\frac{N}{(\log N)^2} \end{equation} for all positive integers $h$. They showed that when $f$ satisfies conditions (\ref{bound-l2}), (\ref{p-l2}) and (\ref{ph-l2}), the error term for $S(f,\alpha)$ is similar to that of Montgomery and Vaughan. Using this result, they also proved that the mean value of coefficients of automorphic $L$-functions against linear phase functions has logarithmic decay (see the next subsection). The study of the correlation of multiplicative functions with polynomial nilsequences, in place of the exponential function $n\mapsto e(n\alpha)$, began with Green and Tao \cite{GT10}. In that paper, they obtained the asymptotic formula for linear equations in primes using the fact that the $W$-tricked von Mangoldt function does not correlate with polynomial nilsequences. Very recently, Tao and Ter\"av\"ainen \cite{TT} gave this asymptotic formula a quantitative error term. To make this paper clear and readable, we record the definition of nilsequences and related notation here, and refer to \cite{GT12a} for a more detailed introduction. \begin{definition} Let $G$ be a connected, simply-connected nilpotent Lie group, and let $\Gamma\leq G$ be a lattice. By a \emph{filtration} $G_{\bullet}=(G_i)_{i=0}^{\infty}$ on $G$, we mean a descending sequence of groups $G=G_1\supseteq G_2\supseteq\cdots\supseteq G_d\supseteq G_{d+1}=\{\mathrm{id}_G\}$ such that \begin{equation}\label{EqFiltration}[G,G_{i-1}]\subseteq G_i, \forall i\geq 2.\end{equation} This actually implies $[G_i,G_j]\subseteq G_{i+j}$ for all $i, j\geq 1$, and $\Gamma_i:=\Gamma\cap G_i$ is a lattice in $G_i$ for $i\geq 0$. The number $d$ is the \emph{degree} of the filtration $G_\bullet$. The \emph{step} $s$ of $G$ is the degree of the lower central filtration defined by $G_{i+1}=[G,G_i]$. A lattice $\Gamma$ must be cocompact, and the compact quotient $G/\Gamma$ is called a \emph{nilmanifold}. We say that $g$ is a \emph{polynomial sequence} with coefficients in $G_{\bullet}$, and write $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$, if $g:\mathbb{Z}\to G$ satisfies the derivative condition \begin{align*} \partial_{h_1}\cdots \partial_{h_i}g(n) \in G_i \end{align*} for all $i\geq 0$, $n\in \mathbb{Z}$ and all $h_1,\ldots, h_i\in \mathbb{Z}$, where $\partial_h g(n):=g(n+h)g(n)^{-1}$ is the discrete derivative with shift $h$. The Mal'cev basis $\mathcal{X}$ (see the next section) induces a right invariant metric $d_G$ on $G$, which is the largest metric such that $d(x,y)\leq|\psi_\mathcal{X}(xy^{-1})|$ always holds, where $|\cdot|$ denotes the $l^\infty$-norm on $\mathbb{R}^m$, and $\psi_\mathcal{X}:G\to\mathbb{R}^m$ is the Mal'cev coordinate map defined in \cite[(2.1)]{GT12a}. This in turn induces a metric $d_{G/\Gamma}$ on $G/\Gamma$. For a function $F:G/\Gamma\to\mathbb{C}$, we define its \emph{Lipschitz norm} as \begin{equation}\label{DefLip}\|F\|_{\operatorname{Lip}}=\|F\|_{\infty}+\sup_{x,y\in G/\Gamma, x\neq y}\frac{|F(x)-F(y)|}{d_{G/\Gamma}(x,y)}\end{equation} with respect to $d_{G/\Gamma}$. Finally, if $F:G/\Gamma\to \mathbb{C}$ is a Lipschitz function (that is, $\norm{F}_{\mathrm{Lip}}<\infty$), we call a sequence of the form $n \mapsto F(g(n)\Gamma)$ a \emph{nilsequence}. \end{definition}
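For readers meeting nilsequences for the first time, we note (this example is standard and included only for orientation) that polynomial phase functions are the abelian special case of the definition above. Take
\[
G=\mathbb{R},\qquad \Gamma=\mathbb{Z},\qquad G_1=\cdots=G_d=\mathbb{R},\qquad G_{d+1}=\{0\}.
\]
Then every real polynomial $g(n)=\alpha_dn^d+\dots+\alpha_1n+\alpha_0$ belongs to $\mathrm{poly}(\mathbb{Z},G_\bullet)$, since its $(d+1)$-fold discrete derivatives vanish, and the Lipschitz function $F(x)=e(x)$ on $G/\Gamma=\mathbb{T}$ yields the nilsequence
\[
F(g(n)\Gamma)=e(\alpha_dn^d+\dots+\alpha_1n+\alpha_0).
\]
Genuinely non-abelian nilmanifolds produce bracket phases such as $e(\beta n\floor{\alpha n})$; concrete instances appear after Theorem \ref{lfunction} below.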
An interesting question is to generalize the classical results on linear phase functions described above, due to Daboussi, Montgomery and Vaughan and others, to nilsequences. Frantzikinakis and Host \cite[Theorem 2.2]{FH} generalized Daboussi's result to the nilsequence setting, proving the following qualitative statement. \begin{theorem}[Daboussi for nilsequences]\label{FH} Let $G/\Gamma$ be a nilmanifold of finite dimension and let $G_\bullet$ be a filtration of $G$ of finite degree. Suppose that $g\in\mathrm{poly}(\mathbb{Z},G_\bullet)$ is a polynomial sequence and $\bigbrac{g(n)\Gamma}_{n\in\mathbb{N}}$ is totally equidistributed\footnote{An infinite sequence $(g(n)\Gamma)_{n\in\mathbb{N}}$ in $G/\Gamma$ is said to be totally equidistributed if for any integers $0\neq a\in\mathbb{Z}$ and $r\in\mathbb{Z}$ and any continuous function $F:G/\Gamma\to\mathbb{C}$, we have \[ \lim_{N\to\infty}\mathbb{E}_{n\in[N]}F(g(an+r)\Gamma)=\int_{G/\Gamma}F. \] Here, the symbol $\int_{G/\Gamma}$ will stand for integration with respect to the unique left invariant probability measure on $G/\Gamma$.} in $G/\Gamma$. Then for any 1-bounded multiplicative function $f:\mathbb{Z}\to\mathbb{C}$ and any continuous function $F:G/\Gamma\to\mathbb{C}$ with $\int_{G/\Gamma}F=0$, we have \[ \lim_{N\to\infty}\mathbb{E}_{n\in[N]}f(n)F(g(n)\Gamma)=0. \] \end{theorem} Matthiesen \cite{Mat} considers a general class of functions which are not required to be bounded at every integer, but only at primes and higher prime powers. Let $H\geq1$ be a fixed number, and let $\mathcal M_H$ denote the class of multiplicative functions $f:\mathbb{N}\to\mathbb{C}$ satisfying the following conditions: \begin{enumerate} \item (Bounded at prime powers). $|f(p^k)|\leq H^k$ for all prime powers $p^k$; \item (Accumulation at primes is positive). There is a number $0<\alpha_f\leq1$ such that the following inequality holds for all large $N$: \[ \frac{1}{N}\sum_{p\leq N}|f(p)|\log p\geq\alpha_f. \] \end{enumerate} Matthiesen \cite{Mat} then gives a quantitative discorrelation estimate of polynomial nilsequences with $W$-tricked functions from the class $\mathcal M_H$; see \cite[Theorem 6.1]{Mat} for the precise statement. We plan to study another class of multiplicative functions, for which we relax the pointwise information at prime powers, requiring instead stronger statistical information. Let $\mathcal M'$ be the class of multiplicative functions $f:\mathbb{N}\to\mathbb{C}$ with the following properties. \begin{enumerate} \item (Equidistributed in arithmetic progressions with small moduli). There are relatively prime integers $1\leq b\leq W\ll(\log N)^C$ such that the $W$-tricked version of $f$, namely $f(W\cdot+b)$, is equidistributed in long arithmetic progressions. More formally, \begin{align}\label{w-equi} \frac{\phi(W)}{W}\biggabs{\mathbb{E}_{n\in P}\biggbrac{f(Wn+b)-\frac{\phi(W)}{WN}\sum_{n\in[N]}f(Wn+b)}}\ll\frac{1}{\log N}, \end{align} where $P\subseteq [N]$ is any arithmetic progression of length $|P|\gg N/(\log N)^C$. \item (The $L^2$-norm at primes is bounded). \begin{align}\label{lp2} \sum_{p\leq N}|f(p)|^2\log p\ll N. \end{align} \item (The $L^2$-norm is bounded). \begin{align}\label{fl2} \mathbb{E}_{n\in[N]}|f(n)|^2\ll1; \end{align} and \begin{align}\label{wl2} \frac{\phi(W)}{WN}\sum_{n\leq N}|f(Wn+b')|^2\ll1 \end{align} for all integers $1\leq b'\leq W$ coprime with $W$. \end{enumerate} Green and Tao \cite[Proposition A.2]{GT12b} show that the M\"obius function is equidistributed in progressions with small common differences. It can also be seen from \cite[Proposition 2.2]{TT} that $\Lambda-\Lambda_{\mathrm{Siegel}}$ is equidistributed in progressions with small moduli, where $\Lambda$ is the von Mangoldt function and $\Lambda_{\mathrm{Siegel}}$ is defined in \cite[Definition 2.1]{TT}.
When taking $W=1$ in (\ref{w-equi}), the von Mangoldt function itself is not equidistributed in progressions with small moduli, but this difficulty can be overcome by a simple affine change of variables known as the $W$-trick, introduced in \cite{GT08}. However, for general multiplicative functions, finding a number $W$ such that $f(W\cdot+b)$ has no bias in any residue class with small modulus is much more complicated; we refer the interested reader to \cite[Section 5]{Mat} for an inspiring discussion. \begin{remark} When $W\neq1$ and $W\ll(\log N)^C$, the logarithmic decay in condition (\ref{w-equi}) for the function $f(W\cdot+b)$ is admittedly a rather strict condition; it could be weakened at the cost of a weaker error term in Theorem \ref{main}. However, we are chiefly interested in obtaining a result similar to that of Montgomery and Vaughan (in the shape of (\ref{aim})) in the nilsequence setting, and this is the reason for the logarithmic decay hypothesis. \end{remark} \begin{remark} Assumption (\ref{wl2}) may seem strange at first glance, so let us motivate it briefly. The approach of Montgomery and Vaughan reduces the question to considering a sum over $pn$ in an arithmetic progression, where $p$ is a prime; roughly speaking, one must handle $\sum_{pn\leq x; pn\equiv b\mrd W}f(p)f(n)e(pn\alpha)$, where $f(n)$ and $f(p)$ may be regarded as arbitrary coefficients for the moment. When we aim to remove these coefficients and separate the variables $p$ and $n$, note that since the summation is over a reduced residue class, at least one of the variables $p$ and $n$ has to run over all of the reduced residue classes modulo $W$, and this leads to condition (\ref{wl2}) after applying the Cauchy-Schwarz inequality. Moreover, it is easy to verify that when $W=1$ condition (\ref{wl2}) is exactly (\ref{fl2}). \end{remark} \begin{theorem}\label{main} Let $G/\Gamma$ be a nilmanifold of dimension $m_G\geq1$, $\mathcal X$ be an $M_0$-rational Mal'cev basis adapted to $G/\Gamma$ for some $2\leq M_0\leq \log N$, and $G_\bullet$ be a filtration of $G$ of degree $d\geq1$. Suppose that $g\in\mathrm{poly}(\mathbb{Z},G_\bullet)$ is a polynomial sequence and $F:G/\Gamma\to\mathbb{C}$ is a 1-bounded Lipschitz function. Then for every function $f\in\mathcal M'$, we have \[ \frac{\phi(W)}{WN}\sum_{n\in[N]}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F(g(n)\Gamma)\ll_{m_G,d}(1+\norm{F}_{\mathrm{Lip}})\frac{1}{\log N}, \] where we have written $\mathbb{E}_f(N;W,b)=\frac{\phi(W)}{WN}\sum_{n\in[N]}f(Wn+b)$.\end{theorem} \subsection*{Coefficients of automorphic $L$-functions twisted by nilsequences} Inspired by the argument of \cite{JLW}, we aim to apply Theorem \ref{main} to the coefficients of automorphic $L$-functions. We begin with a brief introduction to these coefficients. Let $m\geq 2$ be an integer and $\pi$ be an automorphic irreducible cuspidal representation of $GL_m$ over $\mathbb{Q}$ with unitary central character. Denote by $\lambda_\pi (n)$ the Dirichlet coefficients of the automorphic $L$-function $L(s,\pi)$ attached to $\pi$; then $\lambda_\pi(n)$ is a multiplicative function. We now write $S_1(\alpha,N)$ for the sum of $\lambda_\pi$ against the linear phase function, i.e. \[ S_1(\alpha,N)=\sum_{n\leq N}\lambda_\pi(n)e(n\alpha). \] The Ramanujan conjecture asserts that \[|\lambda_\pi(p)|\leq m\] for all primes $p$, which means that $\lambda_\pi(n)$ satisfies condition (\ref{bound-prime}) under the Ramanujan conjecture.
Besides, Jiang, L\"u and Wang \cite{JLW} proved that $\lambda_\pi(n)$ also satisfies condition (\ref{bound-l2}); thus, under the Ramanujan conjecture, the result of Montgomery and Vaughan can be applied to $\lambda_\pi(n)$. Meanwhile, we are also concerned with upper bounds for $S_1(\alpha,N)$ that are free of any unproven hypothesis. When $m=2$, $\lambda_\pi(n)$ are the normalised coefficients of a modular form or a Maass form, and in this case \cite[Theorem 8.1]{Iwa} shows that \[S_1(\alpha,N)\ll N^{\frac{1}{2}}(\log N).\] Temporarily viewing $S_1(\alpha,N)$ as the Fourier coefficient of $\lambda_\pi$ at frequency $\alpha$, Parseval's identity and the Rankin-Selberg theory yield that \[\int_0^1|S_1(\alpha,N)|^2d\alpha=\sum_{n\leq N}|\lambda_\pi(n)|^2\sim N,\] which means that, on average, $S_1(\alpha,N)$ exhibits square-root cancellation. As a consequence, one cannot expect the above exponent $\frac{1}{2}$ to be reduced for all frequencies $\alpha$. When $m=3$, Miller \cite{Mil} proved that \[S_1(\alpha,N)\ll N^{\frac{3}{4}+\varepsilon},\] by using the Vorono\"i summation formula for $GL_3$. For general $m\geq 4$, Jiang, L\"u and Wang \cite[Theorem 1.2]{JLW} proved that \[S_1(\alpha,N)\ll \frac{N}{\log N}.\] Let us give a quick sketch of their proof. In order to obtain the desired bound, they first split the frequencies $\alpha\in\mathbb{T}$ into the so-called \emph{major arcs} and \emph{minor arcs}. When $\alpha$ belongs to the major arcs, good bounds for $S_1(\alpha, N)$ follow rather easily from the good behaviour of $\lambda_\pi$ in progressions with small moduli. As for the frequencies in the minor arcs, they apply their Theorem 1.1 in \cite{JLW} to the coefficients $\lambda_\pi(n)$, thereby reducing the question to verifying conditions (\ref{bound-l2}), (\ref{p-l2}) and (\ref{ph-l2}) for the coefficients $\lambda_\pi(n)$; the main business of their paper is to handle the much harder condition (\ref{ph-l2}). It is worth pointing out that the logarithmic decay for $S_1(\alpha,N)$ in their result is inherited from the approach of Montgomery and Vaughan. We believe it would be difficult to obtain any power-saving bound for $S_1(\alpha,N)$, and that this would require further clever ideas. There are also several pieces of literature concerning the correlation of the coefficients $\lambda_\pi(n)$ with polynomial phase functions and even nilsequences, such as the sum \[ S_k(\alpha,N)=\sum_{n\leq N}\lambda_\pi(n)e(n^k\alpha). \] Cafferata, Perelli and Zaccagnini \cite{CPZ} extended the criterion of Bourgain--Sarnak--Ziegler by relaxing some of its conditions, making it applicable to more multiplicative functions; they then applied this (generalized) criterion to a holomorphic cusp form $\pi$ on $GL_2$ and proved that \[S_k(\alpha,N)\ll N\frac{\log\log N}{\log N}.\] Later, Jiang and L\"u \cite{JL} extended the above result to $GL_m$ automorphic cuspidal representations $\pi$ under Hypothesis H (stated by Rudnick and Sarnak \cite{RS}); Hypothesis H was subsequently removed by Jiang and L\"u in a later paper \cite[Lemma 5.3]{JL21}. As for the general nilsequence setting, the only literature we can find is the paper of Matthiesen \cite{Mat}. Thanks to her Theorem 6.1, one can show that, for each holomorphic cusp form $\pi$ on $GL_2$, the correlation of $\lambda_\pi(n)$ with polynomial nilsequences has some quantitative decay.
Our second result extends the previous considerations to the correlation of a general $GL_m$ ($m\geq 2$) automorphic cuspidal representation $\pi$ over $\mathbb{Q}$ with polynomial nilsequences. \begin{theorem}\label{lfunction} Let $G/\Gamma$ be a nilmanifold of dimension $m_G\geq1$, $\mathcal X$ be an $M_0$-rational Mal'cev basis adapted to $G/\Gamma$ for some $2\leq M_0\leq \log N$, and $G_\bullet$ be a filtration of $G$ of degree $d\geq1$. Suppose that $g\in\mathrm{poly}(\mathbb{Z},G_\bullet)$ is a polynomial sequence and $F:G/\Gamma\to\mathbb{C}$ is a 1-bounded Lipschitz function. Then we have \[ \frac{1}{N}\sum_{n\in[N]}\lambda_\pi(n)F(g(n)\Gamma)\ll_{m_G,d,\pi}\bigbrac{1+\norm{F}_{\mathrm{Lip}}}\frac{1}{\log N}. \] \end{theorem} \begin{remark} To prove Theorem \ref{lfunction}, we need to verify that $\lambda_\pi(n)$ satisfies the major-arc condition (\ref{w-equi}) with $W=1$ and the bounded $L^2$-norm conditions (\ref{lp2}) and (\ref{fl2}). Thus, compared with the proof of \cite{JLW}, we are spared the work of verifying that $\lambda_\pi(n)$ satisfies the sieve condition (\ref{ph-l2}). \end{remark} We now give some examples for readers who are less familiar with nilsequences. Theorem \ref{lfunction} shows that the twist of $\lambda_\pi(n)$ by a polynomial phase function of degree $d\geq1$ has logarithmic decay, i.e. \[ \sum_{n\leq N}\lambda_\pi(n)e(\alpha_d n^d+\dots+\alpha_1n+\alpha_0)\ll\frac{N}{\log N}, \] where $\alpha_d,\dots,\alpha_1,\alpha_0\in\mathbb{T}$ are arbitrary frequencies. Theorem \ref{lfunction} also shows that the following expression has logarithmic decay: \[ \sum_{n\leq N}\lambda_\pi(n)e\bigbrac{\beta n\floor{n\alpha}}\psi(\{\alpha n\})\psi(\{\beta n\})\ll\frac{N}{\log N}, \] where $\floor{\cdot}$ denotes the integer part, $\{\cdot\}$ the fractional part, $\alpha,\beta\in\mathbb{T}$ are frequencies, and $\psi:[0,1]\to \mathbb{C}$ is a Lipschitz function that vanishes near 0 and 1. It is also an interesting question whether the sequences $\{\mu(n)\}_n$ and $\{\lambda_{\pi}(n)F(g(n)\Gamma)\}_n$ are orthogonal. This is motivated by the M\"obius pseudorandomness principle, which predicts that sums of the form $\sum_{n\leq x}\mu(n)\xi(n)$ should exhibit some cancellation if $\xi(n)$ is not obviously related to the prime factorisation of $n$. See also the M\"obius disjointness conjecture of Sarnak \cite{Sar}, which takes as the sequence $\xi(n)$ observables from zero entropy topological dynamical systems. Our next conclusion shows that the M\"obius pseudorandomness principle is true for the sequence $\{\lambda_{\pi}(n)F(g(n)\Gamma)\}$. In this direction, the current record results are due to Fouvry and Ganguly \cite{FG} for $m=2$ and Jiang and L\"u \cite{JL19} for $m=3$. Combining their results, for $m=2,3$ one has \[\sum_{n \leqslant N} \mu(n) \lambda_{\pi}(n) e(n \alpha) \ll N \exp (-c \sqrt{\log N}),\] where $c>0$ is an absolute constant. In the same paper, Cafferata, Perelli and Zaccagnini \cite{CPZ} also obtained \[\sum_{n \leqslant N} \mu(n) \lambda_{\pi}(n) e\left(n^{k} \alpha\right) \ll N \frac{\log \log N}{\sqrt{\log N}}.\] Jiang and L\"u \cite{JL} extended the above result to $GL_m$ automorphic cuspidal representations $\pi$ under Hypothesis H and Hypothesis S. Afterwards, these two hypotheses were removed in two later papers: Jiang and L\"u \cite[Lemma 5.3]{JL21} removed Hypothesis H, and Jiang, L\"u, Thorner and Wang \cite[Corollary 4.7]{JLTW} removed Hypothesis S.
Recently, Jiang, L\"u and Wang \cite{JLW21}\footnote{We would like to thank Yujiao Jiang for sharing their paper \cite{JLW21} and explaining their work.} proved that \[\sum_{n \leqslant N} \mu(n) \lambda_{\pi}(n) e(n \alpha) \ll \frac{N}{\log N},\] when $\pi$ is self-dual and $\pi \ncong \pi\otimes\chi$ for any quadratic primitive character $\chi$. In this direction, we can show the following. \begin{theorem}\label{mulfunction} Assume that $\pi$ is self-dual and $\pi \ncong \pi\otimes\chi$ for any quadratic primitive character $\chi$. Then, with the notation of Theorem \ref{lfunction}, we have \[\sum_{n \leqslant N} \mu(n) \lambda_{\pi}(n) F(g(n)\Gamma) \ll_{m_G,d,\pi} \frac{N}{\log N}.\] \end{theorem} \begin{remark} The assumption $\pi \ncong \pi\otimes\chi$ comes from \cite[Theorem 1.2]{JLTW}. Roughly speaking, under this assumption Jiang, L\"u, Thorner and Wang obtain an analogue of Siegel's theorem in higher rank groups. \end{remark} \begin{remark} Following the arguments in \cite{FG} and \cite{JL19}, we do not need the additional assumption on $\pi$ for $m=2$ and $m=3$ respectively. \end{remark} \subsection*{Acknowledgements} The authors are grateful to Yujiao Jiang, Lilian Matthiesen, Joni Ter\"av\"ainen and Zhiwei Wang for their helpful discussions. The authors also appreciate the comments and suggestions from the referee, which helped us to improve this article. Part of this work was written while M.W. was visiting Shandong University; she thanks Shandong University for its warm and generous hospitality. X.H. is supported by the National Postdoctoral Innovative Talents Support Program (Grant No. BX20190227), NSFC (No. 12101427) and the Fundamental Research Funds for the Central Universities, SCU (No. 2021SCU12109). M.W. is partially supported by the G\"oran Gustafsson Foundation. \section{Preliminaries and Outline of the Proof} \subsection{Preliminaries} In this subsection, we quickly collect the facts and notation that we will need from Green--Tao's paper \cite{GT12a}. Denote by $\mathfrak{g}_i$ the Lie algebra of $G_i$; then $\mathfrak{g}_\bullet=\{\mathfrak{g}_i\}$ is a \emph{filtration} of Lie algebras, i.e. $[\mathfrak{g},\mathfrak{g}_i]\subseteq\mathfrak{g}_{i+1}$, if and only if $G_\bullet=\{G_i\}$ is a filtration. \begin{definition}\cite[Definition 2.1]{GT12a}\label{malcev} Let $G/\Gamma$ be an $m$-dimensional nilmanifold and let $G_{\bullet}$ be a filtration. A basis $\mathcal{X} = \{X_1,\dots,X_{m}\}$ for the Lie algebra $\mathfrak{g}$ over $\mathbb{R}$ is called a \emph{Mal'cev basis} for $G/\Gamma$ adapted to $G_{\bullet}$ if the following four conditions are satisfied: \begin{enumerate}[(i)] \item $\{X_j,X_{j+1},\cdots,X_m\}$ spans an ideal of $\mathfrak{g}$ for all $0\leq j\leq m$; \item For each $1\leq i\leq d$ and $m_i=\dim G_i$, the Lie algebra $\mathfrak{g}_i$ of $G_i$ is the linear span of $\{X_{m-m_i+1},X_{m-m_i+2},\cdots,X_m\}$; \item There is a diffeomorphism $\psi_\mathcal{X}:G\to\mathbb{R}^m$ determined by $$\psi_\mathcal{X}\Big(\exp(\omega_1X_1)\cdots\exp(\omega_mX_m)\Big)=(\omega_1,\cdots,\omega_m);$$ \item In the coordinate system $\psi_\mathcal{X}$, $\Gamma=\psi_\mathcal{X}^{-1}(\mathbb{Z}^m)$. \end{enumerate} \end{definition} We say that a Mal'cev basis $\mathcal{X}$ for $G/\Gamma$ is \emph{$Q$-rational} if all of the structure constants $c_{ijk}$ in the relations \[ [X_i, X_j] = \sum_k c_{ijk} X_k\] are rational with height at most $Q$. Here the \emph{height} of a rational number $x=a/b$ in reduced form is defined as $\max(|a|,|b|)$.
\begin{definition}\cite[Definition 1.2]{GT12a}\label{almost-equidistribution} Let $G/\Gamma$ be a nilmanifold. \begin{enumerate} \item Given a length $N > 0$ and an error tolerance $\delta > 0$, a finite sequence $(g(n)\Gamma)_{n \in [N]}$ is said to be \emph{$\delta$-equidistributed} if we have $$ \left|\mathbb{E}_{n \in [N]} F(g(n) \Gamma) - \int_{G/\Gamma} F\right| \leq \delta \|F\|_{\operatorname{Lip}}$$ for all Lipschitz functions $F: G/\Gamma \to \mathbb{C}$. \item A finite sequence $(g(n)\Gamma)_{n \in [N]}$ is said to be \emph{totally $\delta$-equidistributed} if we have $$ \left|\mathbb{E}_{n \in P} F(g(n) \Gamma) - \int_{G/\Gamma} F\right| \leq \delta \|F\|_{\operatorname{Lip}}$$ for all Lipschitz functions $F: G/\Gamma \to \mathbb{C}$ and all arithmetic progressions $P \subset [N]$ of length at least $\delta N$. \end{enumerate} \end{definition} Here the symbol $\int_{G/\Gamma}$ stands for integration with respect to the unique left invariant probability measure on $G/\Gamma$. \begin{definition}\cite[Definition 1.17]{GT12a}\label{rat-def-quant} Let $G/\Gamma$ be a nilmanifold and let $Q > 0$ be a parameter. We say that $\gamma \in G$ is \emph{$Q$-rational} if $\gamma^r \in \Gamma$ for some integer $r$, $0 < r \leq Q$. A \emph{$Q$-rational point} is any point in $G/\Gamma$ of the form $\gamma\Gamma$ for some $Q$-rational group element $\gamma$. A sequence $(\gamma(n))_{n \in \mathbb{Z}}$ is \emph{$Q$-rational} if every element $\gamma(n)\Gamma$ in the sequence is a $Q$-rational point. \end{definition} \begin{definition}\cite[Definition 1.18]{GT12a}\label{smooth-seq-def} Let $G/\Gamma$ be a nilmanifold with a Mal'cev basis $\mathcal{X}$. Let $(\varepsilon(n))_{n \in \mathbb{Z}}$ be a sequence in $G$, and let $M, N \geq 1$. We say that $(\varepsilon(n))_{n \in \mathbb{Z}}$ is \emph{$(M,N)$-smooth} if we have $d(\varepsilon(n),\operatorname{id}_G) \leq M$ and $d(\varepsilon(n),\varepsilon(n-1)) \leq M/N$ for all $n \in [N]$. \end{definition} With the above notation, we can state the following factorization theorem; see \cite[Theorem 1.19]{GT12a}. \begin{proposition}[Green--Tao factorization theorem]\label{factorization} Let $m,d \geq 0$, and let $M_0, N \geq 1$ and $A > 0$ be real numbers. Suppose that $G/\Gamma$ is an $m$-dimensional nilmanifold together with a filtration $G_{\bullet}$ of degree $d$. Suppose that $\mathcal{X}$ is an $M_0$-rational Mal'cev basis adapted to $G_{\bullet}$ and that $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$. Then there is an integer $M$ with $M_0 \leq M \ll M_0^{O_{A,m,d}(1)}$, a rational subgroup $G' \subseteq G$, a Mal'cev basis $\mathcal{X}'$ for $G'/\Gamma'$ in which each element is an $M$-rational combination of the elements of $\mathcal{X}$, and a decomposition $g = \varepsilon g' \gamma$ into polynomial sequences $\varepsilon, g', \gamma \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ with the following properties: \begin{enumerate} \item $\varepsilon : \mathbb{Z} \rightarrow G$ is $(M,N)$-smooth; \item $g' : \mathbb{Z} \rightarrow G'$ takes values in $G'$, and the finite sequence $(g'(n)\Gamma')_{n \in [N]}$ is totally $1/M^A$-equidistributed in $G'/\Gamma'$, using the metric $d_{\mathcal{X}'}$ on $G'/\Gamma'$; \item $\gamma: \mathbb{Z} \rightarrow G$ is $M$-rational, and $(\gamma(n)\Gamma)_{n \in \mathbb{Z}}$ is periodic with period at most $M$. \end{enumerate} \end{proposition}
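To orient the reader, we illustrate informally what this factorization achieves in the abelian case; this example is ours and is only a special case. Take $G=\mathbb{R}$, $\Gamma=\mathbb{Z}$ and $g(n)=\alpha n$. By Dirichlet's approximation theorem we may write $\alpha=\frac{a}{q}+\theta$ with $1\leq q\leq Q$ and $|\theta|\leq\frac{1}{qQ}$, and correspondingly
\[
g=\varepsilon g'\gamma,\qquad \varepsilon(n)=\theta n,\qquad g'(n)=0,\qquad \gamma(n)=\frac{a}{q}n,
\]
where $(\gamma(n)\Gamma)_{n\in\mathbb{Z}}$ is periodic with period $q$, $\varepsilon$ is $(M,N)$-smooth provided $|\theta|\leq M/N$, and the equidistributed factor is trivial. Conversely, when $\alpha$ admits no such good rational approximation, the sequence $(\alpha n\,\mathrm{mod}\,1)_{n\in[N]}$ is itself highly equidistributed by Weyl's criterion, and one may take $\varepsilon=\gamma=\mathrm{id}$ and $g'=g$.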
The next result, the quantitative Leibman theorem, is the key tool for verifying whether a sequence is equidistributed or not; before stating it, we introduce the so-called smoothness norm. \begin{definition}\cite[Definition 2.7]{GT12a}\label{smooth-norm} Suppose that $g:\mathbb{Z}\to\mathbb{T}$ is a polynomial sequence of degree $d$. Then $g$ may be written uniquely as \[ g(n)=\alpha_0+\alpha_1\binom n1+\dots+\alpha_d\binom nd. \] For any $N>0$, we define the \emph{smoothness norm} as \[ \norm{g}_{C^\infty[N]}:=\sup_{1\leq j\leq d}N^j\norm{\alpha_j}. \] \end{definition} \begin{proposition}[Quantitative Leibman theorem]\label{leibman} Let $m_G, d \geq 0$, $0 < \delta < 1/2$, and $N \geq 1$. Let $G/\Gamma$ be an $m_G$-dimensional nilmanifold together with a filtration $G_{\bullet}$ of degree $d$ and a $\frac{1}{\delta}$-rational Mal'cev basis $\mathcal{X}$ adapted to this filtration. Suppose that $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$. If $(g(n)\Gamma)_{n \in [N]}$ is not $\delta$-equidistributed in $G/\Gamma$, then there is a horizontal character $\eta$ with $0<|\eta|\ll\delta^{-O_{m_G,d}(1)}$ such that \[ \norm{\eta\circ g}_{C^\infty[N]}\ll\delta^{-O_{m_G,d}(1)}. \] \end{proposition} \begin{proof} See \cite[Theorem 2.9]{GT12a}. \end{proof}
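As a simple illustration of how Proposition \ref{leibman} encodes classical equidistribution results (this example is ours), take the degree-1 sequence $g(n)=\alpha n$ on $G/\Gamma=\mathbb{R}/\mathbb{Z}$. A horizontal character is then $\eta(x)=kx$ for some $0\neq k\in\mathbb{Z}$, with $|\eta|=|k|$, and
\[
\norm{\eta\circ g}_{C^\infty[N]}=N\norm{k\alpha}.
\]
Hence Proposition \ref{leibman} asserts that if $(\alpha n\,\mathrm{mod}\,1)_{n\in[N]}$ fails to be $\delta$-equidistributed, then $\norm{k\alpha}\ll\delta^{-O(1)}/N$ for some $0<|k|\ll\delta^{-O(1)}$, which recovers the quantitative form of Weyl's equidistribution criterion for linear phases.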
\subsection{Motivation and outline of the argument} For convenience, in the rest of this paper we allow all asymptotic notation, such as $O(\cdot),o(\cdot),\gg,\ll$ and so on, to depend on the degree $d$ and the dimension $m_G$. We draw on ideas from the three papers \cite{GT12b}, \cite{Mat} and \cite{MV} to prove Theorem \ref{main}. As mentioned in the introduction, Montgomery and Vaughan \cite{MV} deal with the correlation of a class of multiplicative functions with linear phase functions. To do this, they first reduce the exponential sum $\mathbb{E}_{n\in[N]}f(n)e(n\alpha)$ with $\alpha\neq0$ to one in which the variables run over primes $p$, namely $\mathbb{E}_{p\sim U}\mathbb{E}_{n\leq N/p}\log p\,f(p)f(n)e(pn\alpha)$. An application of the Cauchy-Schwarz inequality shows that this is bounded by $\norm{f}_{L^2[\frac{N}{U}]}$ times a half power of \[ \mathbb{E}_{p,p'\sim U}\log p\log p'\,f(p)f(p')\mathbb{E}_{n\leq\min\set{\frac{N}{p},\frac{N}{p'}}}e\bigbrac{(p-p')n\alpha}. \] Because the number of pairs $(p,p')\in(U,2U]^2$ with $p=p'$ is only $U$, the contribution of this diagonal case is small enough to be negligible; meanwhile, Montgomery and Vaughan also show that the sequence $\set{n(p-p')\alpha\mrd{1}}_n$ is equidistributed for almost all pairs $(p,p')$ whenever $\alpha\neq 0$, and this leads them to the desired bound. It seems that their approach can be generalized to deal with the correlation of $f$ with polynomial phase functions if $f$ satisfies conditions (\ref{bound-prime}) and (\ref{bound-l2}). In what follows, we provide a heuristic argument for this generalization. Suppose that $k\geq 2$ is a natural number and $0<\delta<1$ is a parameter; we aim to show that the following inequality holds only if $\alpha$ is $\delta^{-O(1)}$-rational ($\norm{q\alpha}\ll\delta^{-O(1)}/N^k$ for some $1\leq q\ll\delta^{-O(1)}$): \begin{align*} \delta\leq\bigabs{\mathbb{E}_{p\sim U}\mathbb{E}_{pn\sim N}\log pf(p)f(n)e(p^kn^k\alpha)}. \end{align*} Firstly, it can be seen from H\"older's inequality with some large integer $s=s(k)$ that \[ \delta\ll\Bigbrac{\mathbb{E}_{n\sim N/U}|f(n)|^{\frac{2s}{2s-1}}}^{\frac{2s-1}{2s}}\Bigbrac{\mathbb{E}_{n}\bigabs{\mathbb{E}_{p\sim U}\log p\,f(p)e(p^kn^k\alpha)}^{2s}}^{\frac{1}{2s}}. \] The assumptions that $f(p)$ is bounded for every prime $p$ and that $\mathbb{E}_{n}|f(n)|^2\ll1$ (or the weaker assumption $\mathbb{E}_{n}|f(n)|^{\frac{2s}{2s-1}}\ll1$) then lead us to \[ \delta^{2s}\ll\mathbb{E}_{p_1,\cdots,p_{2s}\sim U}\log p_1\,\cdots\log p_{2s}\,\bigabs{\mathbb{E}_{n\sim N/U}e\bigbrac{(p_1^k+\cdots +p_s^k-\cdots-p_{2s}^k)n^k\alpha}}. \] It follows from \cite[Lemma 1.1.16]{Tao} that there are $q\ll\delta^{-O(1)}$ and a set $\mathcal P\subset(U,2U]^{2s}$ with $|\mathcal P|\gg\delta^{O(1)}U^{2s}$ such that for each $\vec p=(p_1,\cdots,p_{2s})\in\mathcal P$, \[ \norm{q(p_1^k+\cdots +p_s^k-\cdots-p_{2s}^k)\alpha}\ll\delta^{-O(1)}\bigbrac{\frac{U}{N}}^k. \] If we can show that $\set{p_1^k+\cdots +p_s^k-\cdots-p_{2s}^k}_{\vec p\in\mathcal P}$ forms a dense (up to a factor $\delta^{O(1)}$) subset of the interval $[1,O(U^k)]$, the desired conclusion (that $\alpha$ is $\delta^{-O(1)}$-rational) follows from \cite[Lemma 3.4]{GT12b}. In practice, the fact that $\set{p_1^k+\cdots +p_s^k-\cdots-p_{2s}^k}_{\vec p\in\mathcal P}$ is dense is a consequence of \cite[Lemma 3.3]{GT12b} when $s$ is large compared with the degree $k$. In this paper, however, we are more interested in the correlation of multiplicative functions under weaker conditions (weaker than (\ref{bound-prime}) and (\ref{bound-l2})). In our understanding, the main achievement of Jiang, L\"u and Wang \cite{JLW} is to verify that the function $\lambda_\pi$ satisfies conditions (\ref{bound-l2})--(\ref{ph-l2}), after which they apply the approach of Montgomery and Vaughan to show that $\lambda_\pi$ does not correlate with linear phase functions. Fortunately, from their proof we find that the function $\lambda_\pi$ has good major-arc behaviour, and this motivates us to study the (dis-)correlation of nilsequences with a class of functions that have good major-arc behaviour (i.e. satisfy condition (\ref{w-equi})). The first step is to apply the Green--Tao factorization theorem, Proposition \ref{factorization}, to decompose the polynomial sequence $g$ as a product $\epsilon g'\gamma$, where $\epsilon$ is a smooth factor, $\gamma$ is rational, and $g'$ is highly equidistributed in a closed subgroup $G'\subseteq G$. We then eliminate the rather harmless factors $\epsilon$ and $\gamma$, just as Green and Tao did in \cite[Section 2]{GT12b}. This leaves us to deal with highly equidistributed sequences along arithmetic progressions with small moduli, say, \[ \mathbb{E}_P\mathbb{E}_{n\in P}f(n)F_P\bigbrac{g_P(n)\Gamma_P}, \] where $P$ runs over those arithmetic progressions, and $g_P$ is highly equidistributed for each $P$. The next step is to utilize the approach of Montgomery and Vaughan, factoring the summation variable as $pn$ with $p$ a prime and splitting $f(pn)$ as the product $f(p)f(n)$, so that we are in the position of bounding \[ \mathbb{E}_P\mathbb{E}_{pn\in P}\log p\,f(p)f(n)F_P\bigbrac{g_P(pn)\Gamma_P}. \] Here we simplify the situation we actually face, in which, for instance, $pn$ lies in a reduced residue class $pn\equiv b\mrd{W}$. Unlike in the work of Matthiesen \cite{Mat}, our functions $f$ do not carry enough information in each progression $P$ for us to deal with the summation over $pn$ in the progressions $P$ piece by piece. We compromise by carrying the outer sum (over progressions) throughout, and view $F_P$ as a piecewise function supported on $G/\Gamma$ with very nice local properties, in the sense that $F_P\bigbrac{g_P(\cdot)\Gamma_P}$ is equidistributed in each progression $P$.
The application of the Cauchy-Schwarz inequality allows us to transfer the matter to understanding the equidistribution of products of nilsequences, and the correlation of the von Mangoldt function with products of nilsequences, \[ \mathbb{E}_nF_P(g_P(pn)\Gamma_P)\overline{F_P(g_P(p'n)\Gamma_P)} \] and \[ \mathbb{E}_p\Lambda(p)F_P(g_P(pn)\Gamma_P)\overline{F_P(g_P(pn')\Gamma_P)}. \] We then take care of these two expressions by showing that almost all product sequences $\bigbrac{g_P(p\cdot),g_P(p'\cdot)}$ and $\bigbrac{g_P(n\cdot),g_P(n'\cdot)}$ are equidistributed whenever $g_P$ is highly equidistributed, making use of the quantitative Leibman theorem (Proposition \ref{leibman}), and also by using the bilinear form method adapted to polynomial nilsequences. \section{Reducing to Equidistributed Cases} As described in Section 2, we are going to factorize the polynomial sequence $g$ into a smooth factor $\epsilon$, a rational factor $\gamma$ and a totally equidistributed factor $g'$, and then eliminate the rather harmless factors $\epsilon$ and $\gamma$ in the light of condition (\ref{w-equi}). This will allow us to focus on highly equidistributed polynomial sequences. The overall strategy of the reduction follows \cite[Section 2]{GT12b}. Without loss of generality, we may normalise the Lipschitz function $F$ so that $\norm{F}_\mathrm{Lip}=1$, assume that $A>1$ is a large number to be specified, and assume that the parameter $M_0$ in Theorem \ref{main} takes its maximal value, that is, $M_0=\log N$. We begin our calculation by applying Proposition \ref{factorization} to the polynomial sequence $g$ to find an integer $M$ with $M_0\leq M\leq M_0^{O_{A}(1)}$; a rational subgroup $G'\subseteq G$; a Mal'cev basis $\mathcal X'$ adapted to $(G')_\bullet$ whose elements are $M$-rational combinations of elements of $\mathcal X$; and a decomposition $g=\epsilon g'\gamma$, where $\bigbrac{\epsilon(n)}_{n\in\mathbb{Z}}$ is $(M,N)$-smooth, $\bigbrac{g'(n)\Gamma'}_{n\in[N]}$ is totally $M^{-A}$-equidistributed in $G'/\Gamma'$, and $\bigbrac{\gamma(n)}_{n\in\mathbb{Z}}$ is periodic with period $q\leq M$. Immediately, on writing $g$ as the product $\epsilon g'\gamma$, one has \begin{multline*} \frac{\phi(W)}{WN}\sum_{n\in[N]}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F(g(n)\Gamma)\\ =\frac{\phi(W)}{WN}\sum_{n\in[N]}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F\bigbrac{\epsilon(n)g'(n)\gamma(n)\Gamma}. \end{multline*} Since the sequence $\bigbrac{\gamma(n)}_{n\in\mathbb{Z}}$ is periodic with period $q\leq M$, we can partition the discrete interval $[N]$ into arithmetic progressions with common difference $q$ so that the fractional part of $\gamma$ with respect to $\Gamma$ takes a constant value on each progression. In practice, for each starting point $i\in\set{0,1,\dots,q-1}$, let $P_i$ be the largest progression of the form $i+q\cdot\set{0,1,2,\dots}$ inside $[N]$ with common difference $q$. Then for every integer $0\leq i<q$, there is an element $\gamma_i$ such that $\gamma_i\Gamma=\gamma(P_i)\Gamma$; besides, by \cite[Section 2]{GT12b}, all the coordinates $\psi_\mathcal X(\gamma_i)$ are rationals with height at most $O(M^{O(1)})$. Therefore, we quickly see that \begin{multline*} \frac{\phi(W)}{WN}\sum_{n\in[N]}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F\bigbrac{\epsilon(n)g'(n)\gamma(n)\Gamma} \\=\frac{\phi(W)}{WN}\sum_{0\leq i<q}\sum_{n\in P_i}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}.
\end{multline*} In the next step, we would like to split $P_i$ further into sub-progressions on each of which $\epsilon$ is approximately constant. For this purpose, we fix a number $n_0\in[N]$ for the moment and consider the difference \[ \Bigabs{F\bigbrac{\epsilon(n_0)g'(n)\gamma_i\Gamma}-F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}}. \] Taking note that the function $F: G/\Gamma\to\mathbb{C}$ has Lipschitz norm 1, one can deduce from the definition of the Lipschitz norm that \[ \Bigabs{F\bigbrac{\epsilon(n_0)g'(n)\gamma_i\Gamma}-F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}}\leq\,\mathrm{d}_\mathcal X\bigbrac{\epsilon(n_0)g'(n)\gamma_i,\epsilon(n)g'(n)\gamma_i}. \] Since the metric $\,\mathrm{d}_\mathcal X$ is right-invariant, one has \[ \,\mathrm{d}_\mathcal X\bigbrac{\epsilon(n_0)g'(n)\gamma_i,\epsilon(n)g'(n)\gamma_i}=\,\mathrm{d}_\mathcal X\bigbrac{\epsilon(n_0),\epsilon(n)}. \] The assumption that $\epsilon$ is $(M,N)$-smooth then yields \[ \,\mathrm{d}_\mathcal X\bigbrac{\epsilon(n_0),\epsilon(n)}\leq\abs{n-n_0}\sup_{m\in[n_0,n]}\,\mathrm{d}_\mathcal X\bigbrac{\epsilon(m-1),\epsilon(m)}\leq\frac{M|n-n_0|}{N}. \] Thus, we can conclude that \begin{align}\label{diff} \Bigabs{F\bigbrac{\epsilon(n_0)g'(n)\gamma_i\Gamma}-F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}}\ll\frac{1}{\log N}, \end{align} whenever $|n-n_0|\leq\frac{N}{M\log N}$. Taking advantage of the above analysis, one can decompose $P_i$ into sub-progressions $P_{i,j}$, each of diameter at most $O(\frac{N}{M\log N})$; as a consequence, there are at most $O(M\log N)$ disjoint sub-progressions. Fixing an element $\epsilon_{i,j}\in\bigset{\epsilon(P_{i,j})}$ for each progression $P_{i,j}$, we plainly have \begin{multline*} \frac{\phi(W)}{WN}\sum_{0\leq i<q}\sum_{n\in P_i}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}\\ =\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F\bigbrac{\epsilon_{i,j}g'(n)\gamma_i\Gamma} \\ +\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}\Bigset{F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}-F\bigbrac{\epsilon_{i,j}g'(n)\gamma_i\Gamma}}. \end{multline*} It follows from the Cauchy-Schwarz inequality and condition (\ref{wl2}) that \begin{align}\label{wl1} \frac{\phi(W)}{WN}\sum_{n\leq N}|f(Wn+b')|\ll\Bigbrac{\frac{\phi(W)}{W}}^{1/2}\Bigbrac{\frac{\phi(W)}{WN}\sum_{n\in[N]}|f(Wn+b')|^2}^{1/2}\ll 1 \end{align} holds for every integer $1\leq b'\leq W$ coprime with $W$. We are therefore able to bound the above second term as follows, using inequalities (\ref{diff}) and (\ref{wl1}): \begin{multline*} \frac{\phi(W)}{WN}\sum_{i,j} \sup_{n\in P_{i,j}}\Bigabs{F\bigbrac{\epsilon_{i,j}g'(n)\gamma_i\Gamma}-F\bigbrac{\epsilon(n)g'(n)\gamma_i\Gamma}}\sum_{n\in P_{i,j}}\Bigbrac{|f(Wn+b)|+|\mathbb{E}_f(N;W,b)|}\\ \ll\frac{1}{\log N}\frac{\phi(W)}{WN}\sum_{n\in[N]}|f(Wn+b)| \ll \frac{1}{\log N},\qquad\qquad\qquad \end{multline*} so the above term can be absorbed into the error term of Theorem \ref{main}. Besides, it also follows from the assumption that $\epsilon$ is $(M,N)$-smooth that $\,\mathrm{d}_\mathcal X(\epsilon_{i,j},\mathrm{id}_G)\leq M$ and thus, by \cite[Lemma A.4]{GT12a}, $\psi_\mathcal X(\epsilon_{i,j})\ll M^{O(1)}$. It remains to handle the first term above. When the totally equidistributed part $\bigbrac{g'(n)\Gamma'}_{n\in[N]}$ is trivial, i.e.
$g'(n)=\text{id}_G\mod\Gamma$, it becomes \[ \frac{\phi(W)}{WN}\sum_{0\leq i<q}\sum_{n\in P_i}c_{i,j}\bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)} \] for some $c_{i,j}\in\mathbb{C}$ with $|c_{i,j}|\leq1$, in view of the fact that $F:G/\Gamma\to\mathbb{C}$ is a 1-bounded function. It can then be easily verified from condition (\ref{w-equi}) that this term is bounded by $O\bigbrac{\frac{1}{\log N}}$, and the theorem follows. Hence, in the rest of this paper we will always assume that $\bigbrac{g'(n)\Gamma'}_{n\in[N]}$ is not trivial. For a fixed pair $(i,j)$, write $(H_i)_\bullet=\gamma_i^{-1}G'_\bullet\gamma_i$, and let $g_i\in\mathrm{poly}(\mathbb{Z},(H_i)_\bullet)$ be the polynomial sequence defined via $g_i(n)=\gamma_i^{-1}g'(n)\gamma_i$. Besides, set $\Lambda_i=\Gamma\cap H_i$, and define the 1-bounded Lipschitz function $F_{i,j}:H_i/\Lambda_i\to\mathbb{C}$ by $F_{i,j}(x\Lambda_i)=F(\epsilon_{i,j}\gamma_ix\Gamma)$. Thus, for each pair $(i,j)$, \begin{multline*} \sum_{n\in P_{i,j}}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F\bigbrac{\epsilon_{i,j}g'(n)\gamma_i\Gamma}\\ =\sum_{n\in P_{i,j}}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F_{i,j}\bigbrac{g_i(n)\Lambda_i}. \end{multline*} Now, write $\mu_{i,j}=\int_{H_i/\Lambda_i}F_{i,j}$ for the mean value of $F_{i,j}$ on the nilmanifold $H_i/\Lambda_i$, and rewrite $F_{i,j}$ as $\brac{F_{i,j}-\mu_{i,j}}+\mu_{i,j}$. Clearly, $F_{i,j}-\mu_{i,j}$ is a bounded Lipschitz function and $\int_{H_i/\Lambda_i}(F_{i,j}-\mu_{i,j})=0$. Moreover, taking note that the $P_{i,j}$ are progressions of size $|P_{i,j}|\gg\frac{N}{qM\log N}\gg\frac{N}{qM^2}$, it can be deduced from condition (\ref{w-equi}), together with $|\mu_{i,j}|\ll1$ for all pairs $(i,j)$, that \begin{multline*} \biggabs{\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}\mu_{i,j}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}} \\\ll\frac{\phi(W)}{WN}\sum_{i,j}\Bigabs{\sum_{n\in P_{i,j}}f(Wn+b)-\mathbb{E}_f(N;W,b)}\ll\frac{1}{\log N}. \end{multline*} Thus, without loss of generality, we may assume that $F_{i,j}:H_i/\Lambda_i\to\mathbb{C}$ is a bounded Lipschitz function with $\int_{H_i/\Lambda_i}F_{i,j}=0$. Summing up what we have established so far, we are in the position that \begin{align}\label{finalre} &\frac{\phi(W)}{WN}\sum_{n\in[N]}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F(g(n)\Gamma)\nonumber\\ =&\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}\Bigbrac{f(Wn+b)-\mathbb{E}_f(N;W,b)}F_{i,j}\bigbrac{g_i(n)\Lambda_i}+O(\frac{1}{\log N}). \end{align} With the help of \cite[Claim in Section 2]{GT12b}, together with $\psi_\mathcal X(\gamma_i),\psi_\mathcal X(\epsilon_{i,j})\ll M^{O(1)}$, one may find a Mal'cev basis $\mathcal Y_i$ for $H_i/\Lambda_i$ adapted to $(H_i)_\bullet$ such that each element of $\mathcal Y_i$ is an $M^{O(1)}$-rational combination of elements of $\mathcal X$; besides, we also have $\norm{F_{i,j}}_\mathrm{Lip}\ll M^{O(1)}$ and $g_i\in\mathrm{poly}(\mathbb{Z},(H_i)_\bullet)$ with $\bigbrac{g_i(n)}_{n\in[N]}$ totally $M^{-cA+O(1)}$-equidistributed for some constant $c>0$. Besides, it can be seen from (\ref{wl1}) that \[ \mathbb{E}_f(N;W,b)=\frac{\phi(W)}{WN}\sum_{n\in[N]}f(Wn+b)\ll1.
\]
Therefore, using the fact that $(g_i(n))_{n\in[N]}$ is totally $M^{-cA+{O(1)}}$-equidistributed, and noting that $\norm{F_{i,j}}_{\mathrm{Lip}}\ll M^{O(1)}$ and $\int_{H_i/\Lambda_i}F_{i,j}=0$, we obtain
\begin{multline*}
\frac{\phi(W)}{WN}\sum_{i,j}|\mathbb{E}_f(N;W,b)|\cdot\Bigabs{\sum_{n\in P_{i,j}}F_{i,j}\bigbrac{g_i(n)\Lambda_i}}\ll\frac{\phi(W)}{WN}\sum_{i,j}\Bigabs{\sum_{n\in P_{i,j}}F_{i,j}\bigbrac{g_i(n)\Lambda_i}}\\
\ll M^{-cA+O(1)}\frac{\phi(W)}{WN}\sum_{i,j}|P_{i,j}|\ll M^{-cA+O(1)}.
\end{multline*}
In view of (\ref{finalre}) and the above inequality, we see that to prove Theorem \ref{main} it suffices to show that
\[
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}f(Wn+b)F_{i,j}\bigbrac{g_i(n)\Lambda_i}\ll \frac{1}{\log N}.
\]
This is the main business of the next two sections.
\section{Applying the Approach of Montgomery and Vaughan}\label{apply-mv}
For clarity and completeness, we restate our task. Let $A>1$ be a large number and $\log N\leq M\leq (\log N)^{O_A(1)}$. Let $G'\subseteq G$ be a subgroup of $G$, let $g'\in\mathrm{poly}(\mathbb{Z},G_\bullet)$, and let $\bigbrac{g'(n)}_{n\in[N]}$ be a totally $M^{-cA}$-equidistributed sequence for some constant $c>0$. Suppose that the $P_{i,j}$ are pairwise disjoint arithmetic progressions with common difference $q\leq M$ and of length $|P_{i,j}|\geq\frac{N}{qM^2}$, with $\sqcup_{i,j}P_{i,j}=[N]$. Suppose that $(\epsilon_{i,j})_{i,j}$ and $(\gamma_i)_i$ are sequences in $G$. Set $H_i=\gamma_i^{-1}G'\gamma_i$ and $\Lambda_i=\Gamma\cap H_i$, and let $\mathcal Y_i$ be an $M^{O(1)}$-rational Mal'cev basis adapted to the filtration $(H_i)_{\bullet}$. Suppose that $F_{i,j}(x\Lambda_i)=F(\epsilon_{i,j}\gamma_ix\Gamma)$ with $\norm{F_{i,j}}_{\mathrm{Lip}}\ll M^{O(1)}$ and $\int_{H_i/\Lambda_i}F_{i,j}=0$, and that $\bigbrac{g_i(n)\Lambda_i}_{n\in[N]}$ is totally $M^{-A}$-equidistributed with $g_i=\gamma_i^{-1}g'\gamma_i$. We focus on the expression
\[
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}f(Wn+b)F_{i,j}(g_i(n)\Lambda_i),
\]
where $f$ is an arbitrary function from $\mathcal M'$. Employing the approach of Montgomery and Vaughan, we insert a logarithmic factor into the above expression; the aim of this section is to reduce the problem to an equidistribution question in which one of the variables ranges over the primes.
\begin{lemma}\label{mv}
Suppose that $H_i/\Lambda_i$ are nilmanifolds, and that for each $i$ there is an $M^{O(1)}$-rational Mal'cev basis $\mathcal Y_i$ adapted to the filtration $(H_i)_{\bullet}$, where $\log N\leq M\leq (\log N)^C$. Suppose that the $P_{i,j}$ are disjoint arithmetic progressions with common difference $q\leq M$ and length $\Omega(\frac{N}{qM^2})$, whose union is the discrete interval $[N]$. Assume that $g_i\in\mathrm{poly}(\mathbb{Z},(H_i)_\bullet)$ are polynomial sequences and $F_{i,j}:H_i/\Lambda_i\to\mathbb{C}$ are bounded functions. Then for any multiplicative function $f\in\mathcal M'$, we have
\begin{multline*}
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}f(Wn+b)F_{i,j}(g_i(n)\Lambda_i)\\
\ll\frac{1}{\log N}\biggabs{\frac{\phi(W)}{WN}\sum_{i,j}\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}}+\frac{1}{\log N}.
\end{multline*}
\end{lemma}
\begin{proof}
We start the proof by inserting a suitable logarithmic factor into the target expression:
\[
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}f(Wn+b)F_{i,j}(g_i(n)\Lambda_i)\log\frac{WN+b}{Wn+b}.
\]
On the one hand, this equals
\[
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}\Bigset{\log(WN+b)-\log(Wn+b)}f(Wn+b)F_{i,j}(g_i(n)\Lambda_i).
\]
On the other hand, since the $F_{i,j}$ are bounded functions and the progressions $\set{P_{i,j}}_{i,j}$ form a partition of $[N]$, the Cauchy-Schwarz inequality and condition (\ref{wl2}) give
\begin{multline*}
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}f(Wn+b)F_{i,j}(g_i(n)\Lambda_i)\log\frac{WN+b}{Wn+b}\\
\leq\biggbrac{\frac{\phi(W)}{WN}\sum_{n\in[N]}|f(Wn+b)|^2}^{1/2}\biggbrac{\frac{\phi(W)}{WN}\sum_{n\in[N]}\bigbrac{\log(WN+b)-\log(Wn+b)}^2}^{1/2}\\
\ll \biggbrac{\frac{\phi(W)}{WN}\sum_{n\in[N]}\bigbrac{\log^2 (N+\frac{b}{W})-2\log (N+\frac{b}{W})\log (n+\frac{b}{W})+\log^2(n+\frac{b}{W})}}^{1/2}\ll1.
\end{multline*}
Combining the above two estimates, we find that
\begin{multline*}
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}f(Wn+b)F_{i,j}(g_i(n)\Lambda_i)\\
\ll\frac{1}{\log N}\biggabs{\frac{\phi(W)}{WN}\sum_{i,j}\sum_{n\in P_{i,j}}\log(Wn+b)f(Wn+b)F_{i,j}(g_i(n)\Lambda_i)}+\frac{1}{\log N}.
\end{multline*}
Writing $x=Wn+b$ with $n\in P_{i,j}$, one has $x\in W\cdot P_{i,j}+b$; renaming $x$ as $n$, we get
\[
\sum_{n\in P_{i,j}}\log(Wn+b)f(Wn+b)F_{i,j}\bigbrac{g_i(n)\Lambda_i}=\sum_{n\in W\cdot P_{i,j}+b}\log n\,f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{n-b}{W}}\Lambda_i}.
\]
The formula $\log n=\sum_{m|n}\Lambda(m)$, which can be viewed as a logarithmic form of the fundamental theorem of arithmetic, then shows that the above expression equals
\[
\sum_{mn\in W\cdot P_{i,j}+b}\Lambda(m)f(mn)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{mn-b}{W}}\Lambda_i}.
\]
Since $\Lambda(m)$ vanishes unless $m$ is a prime power, we may rewrite the above expression as
\begin{multline*}
\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\\
+\sum_{pn\in W\cdot P_{i,j}+b}\log p\,\bigset{f(pn)-f(p)f(n)}F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\\
+\sum_{p,k\geq2}\sum_{p^kn\in W\cdot P_{i,j}+b}\log p\,f(p^kn)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{p^kn-b}{W}}\Lambda_i}.
\end{multline*}
We then see that the lemma follows if we can prove that the total contribution of the latter two terms, summed over all progressions $P_{i,j}$ and multiplied by $\frac{\phi(W)}{WN}$, is $O(1)$. Since $f$ is multiplicative, $f(pn)-f(p)f(n)$ vanishes unless $p$ divides $n$. Recalling that $F_{i,j}$ is bounded and that the $P_{i,j}$ form a partition of $[N]$, one has
\begin{multline*}
\biggabs{\frac{\phi(W)}{WN}\sum_{i,j}\twosum{pn\in W\cdot P_{i,j}+b}{p|n}\log p\,\bigset{f(pn)-f(p)f(n)}F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}}\\
\ll\frac{\phi(W)}{WN}\sum_{p,k\geq2}\twosum{p^kn\in[WN+b]}{p^kn\equiv b\mrd{W}}\log p\,\Bigset{|f(p^k)||f(n)|+|f(p)||f(p^{k-1})||f(n)|}.
\end{multline*}
We deal with the first term first. As $(b,W)=1$ and $1\leq b\leq W\ll(\log N)^C$ for some constant $C>0$, the number $b$ is much smaller than $N$, and thus the first summation term is bounded by
\[
\frac{\phi(W)}{WN}\twosum{p,k\geq2}{(p,W)=1}\log p\,|f(p^k)|\twosum{n\leq WN/p^k}{n\equiv b\bar{p^k}\mrd{W}}|f(n)|\ll\twosum{p,k\geq2}{(p,W)=1}\frac{\log p\,|f(p^k)|}{p^k},
\]
where we have applied inequality (\ref{wl1}) to the inner summation.
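To spell out this application of (\ref{wl1}) (a routine step which we record for the reader's convenience): writing $n=Wt+b'$ with $b'\equiv b\bar{p^k}\mrd{W}$ and applying (\ref{wl1}) at scale $N/p^k$, the inner sum satisfies
\[
\twosum{n\leq WN/p^k}{n\equiv b\bar{p^k}\mrd{W}}|f(n)|\ll\frac{W}{\phi(W)}\cdot\frac{N}{p^k},
\]
and multiplying by the prefactor $\frac{\phi(W)}{WN}$ leaves exactly the factor $p^{-k}$ appearing on the right-hand side above.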
Applying the Cauchy-Schwarz inequality once more, one can deduce from condition (\ref{lp2}) that
\[
\twosum{p,k\geq2}{(p,W)=1}\frac{\log p\,|f(p^k)|}{p^k}\leq\Bigbrac{\sum_{p,k\geq2}\frac{(\log p)^2}{p^{3k/4}}}^{1/2}\Bigbrac{\sum_{p,k\geq2}\frac{|f(p^k)|^2}{p^{5k/4}}}^{1/2}\ll1.
\]
Besides, in the same manner one also finds that
\[
\biggabs{\frac{\phi(W)}{WN}\sum_{i,j}\sum_{p,k\geq2}\sum_{p^kn\in W\cdot P_{i,j}+b}\log p\,f(p^kn)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{p^kn-b}{W}}\Lambda_i}}\ll1.
\]
It therefore remains to estimate the following:
\begin{multline*}
\frac{\phi(W)}{WN}\twosum{p,k\geq2}{(p,W)=1}\log p\,|f(p)||f(p^{k-1})|\twosum{n\leq WN/p^k}{n\equiv b\bar{p^k}\mrd{W}}|f(n)|\\
\ll\sum_{p,k\geq 1}\frac{|f(p)|}{p^{1/3}}\frac{|f(p^k)|}{p^{k/3}}\log p\,p^{-2/3}p^{-2k/3}.
\end{multline*}
Using the inequality $ab\leq|a|^2+|b|^2$ together with condition (\ref{lp2}), this is bounded by
\[
\sum_{p\geq 1}\frac{|f(p)|^2}{p^{4/3}}\log p\sum_{k\geq 1}p^{-2k/3}+\sum_{p,k\geq1}\frac{|f(p^k)|^2}{p^{4k/3}}\log p\, p^{-2/3}\ll1 .
\]
This finishes the proof of the lemma.
\end{proof}
The next step is to decompose the summation range of the prime $p$ into two domains, according to whether or not $p$ is close to $N$. Let
\begin{align}\label{uv}
U=N^{2/3}
\end{align}
be a cutoff parameter. Clearly,
\begin{multline}\label{decom}
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\\
=\frac{\phi(W)}{WN}\sum_{i,j}\biggset{\sum_{p\leq U}+\sum_{U<p\leq N}}\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}.
\end{multline}
We handle these two summations separately in the next section. The idea of the proof can be explained as follows. Suppose that $g$ is a highly equidistributed polynomial. When $p$ is small, $g_p=g(p\,\cdot)$ should also be equidistributed, and thus the summation $\sum_n F\bigbrac{g_p(n)\Gamma}$ should exhibit some cancellation; when $p$ is large, since $pn$ is at most $WN$ with $W\ll(\log N)^C$, the variable $n$ must be small, and in this case the sum $\sum_p F\bigbrac{g_n(p)\Gamma}$ also exhibits cancellation.
\section{Equidistribution of Product Nilsequences}
In this section, we handle the two sums in (\ref{decom}) in turn.
\subsection{When $p$ is small}
\begin{lemma}\label{pissmall}
\[
\frac{\phi(W)}{WN}\sum_{i,j}\sum_{p\leq U}\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\ll1,
\]
where all parameters are as in Section \ref{apply-mv}.
\end{lemma}
\begin{proof}
To begin with, we split $p\leq U$ into dyadic ranges $p\sim N_k$ with $N_k=2^{-k}U$ and $0\leq k\leq\frac{2\log N}{3\log 2}$. Let $P_{i,j}=a_{i,j}+q\cdot[X]$, where $a_{i,j}+q$ is the starting point of the progression $P_{i,j}$ and, by assumption, $X\geq\frac{N}{qM^2}$. After further decomposing the sum according to the congruence condition $pn\equiv Wa_{i,j}+b\, (\mathrm{mod}\, Wq)$, we arrive at
\begin{align}\label{total}
\frac{\phi(W)}{WN}\sum_{i,j,k}\twosum{a_1,a_2\mrd{Wq}}{a_1a_2\equiv Wa_{i,j}+b\mrd{Wq}}\twosum{p\sim N_k}{p\equiv a_1\mrd{Wq}}\sum_{n}\log p\,f(p)f(n) F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i},
\end{align}
where $n$ ranges over $\frac{Wa_{i,j}+b}{p}\leq n\leq\frac{WqX+Wa_{i,j}+b}{p}$ and obeys the congruence condition $n\equiv a_2\mrd{Wq}$.
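As a quick sanity check on the dyadic decomposition above (a side remark we add for concreteness): the ranges $p\sim N_k$ are non-empty only while $N_k=2^{-k}U\geq1$, that is,
\[
2^k\leq U=N^{2/3},\qquad\text{i.e.}\qquad k\leq\frac{2\log N}{3\log 2},
\]
which matches the stated range of the index $k$.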
Switching the order of summation in $p$ and $n$ and using the Cauchy-Schwarz inequality, one has
\begin{multline}\label{pn}
\twosum{p\sim N_k}{p\equiv a_1\mrd{Wq}}\twosum{\frac{Wa_{i,j}+b}{p}\leq n\leq\frac{WqX+Wa_{i,j}+b}{p}}{n\equiv a_2\mrd{Wq}}\log p\,f(p)f(n) F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\\
\leq\biggbrac{\sum_{n}|f(n)|^2}^{1/2} \biggbrac{\twosum{n}{n\equiv a_2\mrd{Wq}}\biggabs{\twosum{p\sim N_k}{p\equiv a_1\mrd{Wq}}\log p\,f(p)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}}^2}^{1/2} ,
\end{multline}
where $n$ ranges over the interval $\frac{Wa_{i,j}+b}{N_k}\leq n\leq\frac{WqX+Wa_{i,j}+b}{N_k}$. The first factor is easy to estimate. Indeed, condition (\ref{fl2}) gives
\begin{align}\label{f2}
\biggbrac{\sum_{n\leq\frac{WqX+Wa_{i,j}+b}{N_k}}|f(n)|^2-\sum_{n\leq\frac{Wa_{i,j}+b}{N_k}}|f(n)|^2}^{1/2}\ll\bigbrac{\frac{WqX}{N_k}}^{1/2}.
\end{align}
For the second factor, we expand the square to duplicate the variable $p$ and then swap the order of summation, obtaining
\begin{align}\label{pp'}
\biggbrac{\twosum{p,p'\sim N_k}{p\equiv p'\equiv a_1\mrd{Wq}}\log p\log p'f(p)f(p')\sum_{n}F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\bar{F_{i,j}\Bigbrac{g_i\bigbrac{\frac{p'n-b}{W}}\Lambda_i}}}^{1/2},
\end{align}
where $n$ ranges over $Wa_{i,j}+b\leq pn,p'n\leq WqX+Wa_{i,j}+b$ and also lies in the residue class $n\equiv a_2\mrd{Wq}$. Let $n=a_2+Wqm$; taking note that $p\equiv a_1\mrd{Wq}$, $a_1a_2\equiv Wa_{i,j}+b\mrd{Wq}$, as well as $1\leq a_1\leq Wq$, we can conclude that there must be integers $|a_p|,|a_{p'}|\leq qN_k$ such that $\frac{pn-b}{W}=qpm+a_p$ and $\frac{p'n-b}{W}=qp'm+a_{p'}$ respectively. Thus, the above inner summation over $n$ is in fact
\begin{align}\label{fij}
\sum_{m\in \frac{I}{\max\set{p,p'}}}F_{i,j}\bigbrac{g_i(qpm+a_p)\Lambda_i}\bar{F_{i,j}\bigbrac{g_i(qp'm+a_{p'})\Lambda_i}},
\end{align}
where $I\subseteq[N/q]$ is a discrete subinterval of length at least $X$. Since the sequences $g_i$ are totally $M^{-A}$-equidistributed, one can hope to show that for almost all pairs of primes $(p,p')\in(N_k,2N_k]^2$ the product sequence $\bigbrac{g_i(qp\cdot+a_p),g_i(qp'\cdot+a_{p'})}\in\mathrm{poly}(\mathbb{Z},(H_i)_\bullet\times (H_i)_\bullet)$ is equidistributed whenever it is not too short. The following result is similar to \cite[Proposition 8.1]{Mat}, and we hope its proof may serve as an accessible exposition of that argument.
\begin{lemma}[Equidistribution of product nilsequences]\label{prose}
Suppose that $0<\delta<1/2$ is a number. There are two small numbers $0<c<c'<1$ such that the following statement holds uniformly for integers $K$ with $1\leq K\leq \delta^{c'}N$. Suppose that $H/\Lambda$ is a finite-dimensional nilmanifold and $H_\bullet$ is a filtration of finite degree. Suppose that $g\in\mathrm{poly}(\mathbb{Z},H_\bullet)$ is a polynomial sequence and the sequence $(g(n)\Lambda)_{n\in[N]}$ is totally $\delta$-equidistributed. Suppose that $q\ll\delta^{-O(1)}$, that $a_p$ and $a_{p'}$ are integers satisfying $|a_p|,|a_{p'}|\leq qK$, and that $I\subseteq[N/q]$ is an interval of length $|I|\geq\delta^{O(1)}\frac{N}{q}$. Assume further that $F:H/\Lambda\to\mathbb{C}$ is a Lipschitz function with $\int_{H/\Lambda}F=0$. Write $\mathcal E_K$ for the set of pairs $(p,p')\in(K,2K]^2$ for which
\[
\#\bigset{m:m\in \frac{I}{\max\set{p,p'}}}\gg\delta^{O(c)}N/K,
\]
and
\[
\Bigabs{\sum_{m\in \frac{I}{\max\set{p,p'}}}F(g(qpm+a_p)\Lambda)\bar{F(g(qp'm+a_{p'})\Lambda)}}>(1+\norm{F}_\mathrm{Lip})\delta^{O(c)} N/K.
\]
Then we have
\[
\#\mathcal E_K\ll\delta^{O(c)}K^2.
\]
\end{lemma}
\begin{proof}
In the following, we only consider those pairs $(p,p')\in(K,2K]^2$ such that
\[
\#\bigset{m:m\in \frac{I}{\max\set{p,p'}}}\gg\delta^{O(c)}N/K.
\]
Now assume for contradiction that $\#\mathcal E_K\gg\delta^{O(c)}K^2$, which means that there are $\gg\delta^{O(c)}K^2$ pairs of primes $(p,p')\in(K,2K]^2$ such that
\[
\Bigabs{\sum_{m\in \frac{I}{\max\set{p,p'}}}F(g(qpm+a_p)\Lambda)\bar{F(g(qp'm+a_{p'})\Lambda)}}>(1+\norm{F}_\mathrm{Lip})\delta^{O(c)} N/K.
\]
For such a pair $(p,p')\in(K,2K]^2$, we define the new polynomial sequence $g_{p,p'}(n)=\bigbrac{g(qpn+a_p),g(qp'n+a_{p'})}$. Then we have $g_{p,p'}\in\mathrm{poly}\bigbrac{\mathbb{Z},H_\bullet\times H_\bullet}$. Besides, define a Lipschitz function $\tilde F:H/\Lambda\times H/\Lambda\to\mathbb{C}$ via $\tilde F(\gamma,\gamma')=F(\gamma)\bar{F(\gamma')}$; then $\int_{H/\Lambda\times H/\Lambda}\tilde F=0$ and $\|\tilde F\|_\mathrm{Lip}\ll\norm{F}_\mathrm{Lip}$. Moreover, the above inequality can be rewritten as follows: there are $\gg\delta^{O(c)}K^2$ pairs $(p,p')\in(K,2K]^2$ such that
\[
\biggabs{\sum_{m\in \frac{I}{\max\set{p,p'}}}\tilde F(g_{p,p'}(m)\Lambda\times\Lambda)}\gg(1+\|\tilde F\|_\mathrm{Lip})\delta^{O(c)} N/K.
\]
It can then be deduced from Definition \ref{almost-equidistribution} and the assumption $\#\bigset{m:m\in \frac{I}{\max\set{p,p'}}}\gg\delta^{O(c)}N/K$ that there are $\gg\delta^{O(c)}K^2$ pairs $(p,p')\in(K,2K]^2$ for which the corresponding polynomial sequence $\bigbrac{g_{p,p'}(n)}_{n\in\frac{I}{\max\set{p,p'}}}$ fails to be $\delta^{O(c)}$-equidistributed. It then follows from Theorem \ref{leibman} that for each such pair of primes $(p,p')$ there is a nontrivial horizontal character $0<|\psi_{p,p'}|\ll\delta^{-O(c)}$ such that
\[
\norm{\psi_{p,p'}\circ g_{p,p'}}_{C^\infty\frac{I}{\max\set{p,p'}}}\ll\delta^{-O(c)}.
\]
On the other hand, for any nontrivial horizontal character $\psi:H\times H\to\mathbb{T}$ of modulus $\ll\delta^{-O(c)}$, define a set
\[
S_\psi=\set{(p,p')\in(K,2K]^2:\psi_{p,p'}=\psi}.
\]
Since the total number of characters $\psi_{p,p'}$ is $\Omega(\delta^{O(c)}K^2)$ while each has modulus $\ll\delta^{-O(c)}$, the pigeonhole principle yields a nontrivial horizontal character $0<|\psi|\ll\delta^{-O(c)}$ such that $|S_{\psi}|\gg \delta^{O(c)}K^2$. Fixing a character $\psi$ with $| S_{\psi}|\gg \delta^{O(c)}K^2$, we thus have
\[
\norm{\psi\circ g_{p,p'}}_{C^\infty\frac{I}{\max\set{p,p'}}}\ll\delta^{-O(c)}
\]
for $\gg\delta^{O(c)}K^2$ pairs $(p,p')\in(K,2K]^2$. Write $\psi=\psi_1\oplus\psi_2$, where $\psi_1,\psi_2:H\to\mathbb{T}$ and (say) $\psi_1$ is non-trivial, and let
\begin{eqnarray*}
(\psi_1\circ g)(n)=\alpha_dn^d+\dots+\alpha_1n+\alpha_0;\\
(\psi_2\circ g)(n)=\alpha_d'n^d+\dots+\alpha_1'n+\alpha_0'.
\end{eqnarray*}
Hence,
\begin{multline*}
(\psi\circ g_{p,p'})(n)=\alpha_d(qpn+a_p)^d+\dots+\alpha_0+\alpha_d'(qp'n+a_{p'})^d+\dots+\alpha_0'\\
=\sum_{1\leq j\leq d}n^j\sum_{j\leq i\leq d}\binom i jq^j\bigbrac{p^ja_p^{i-j}\alpha_i+p'^ja_{p'}^{i-j}\alpha_i'}+\tilde\alpha_0.
\end{multline*}
From the definition of the smoothness norm (Definition \ref{smooth-norm}), one finds that there are $\gg\delta^{O(c)}K^2$ pairs $(p,p')\in(K,2K]^2$ such that
\begin{align}\label{induction}
\norm{\sum_{j\leq i\leq d}\binom i jq^j\bigbrac{p^ja_p^{i-j}\alpha_i+p'^ja_{p'}^{i-j}\alpha_i'}}\ll\delta^{-O(c)}\Bigbrac{\frac{K}{N}}^j
\end{align}
for all $1\leq j\leq d$, since $\#\bigset{m:m\in \frac{I}{\max\set{p,p'}}}\gg\delta^{O(c)}N/K$.
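As a concrete sanity check of this coefficient computation (an illustrative remark we add; it is not part of the original argument), take $d=2$ and look at the first component only:
\[
\alpha_2(qpn+a_p)^2+\alpha_1(qpn+a_p)+\alpha_0=q^2p^2\alpha_2\,n^2+\bigbrac{qp\alpha_1+2qpa_p\alpha_2}n+\bigbrac{\alpha_2a_p^2+\alpha_1a_p+\alpha_0},
\]
so the coefficient of $n^j$ is indeed $\sum_{j\leq i\leq 2}\binom i jq^jp^ja_p^{i-j}\alpha_i$, in agreement with the general formula above.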
We claim that there is a non-zero integer $0<q'\ll\delta^{-O(c)}$ such that
\begin{align}\label{claim}
\norm{q'\alpha_j}\ll\delta^{-O(c)}N^{-j}\qquad\text{for all } 1\leq j\leq d.
\end{align}
When $j=d$, by (\ref{induction}), there are $\gg\delta^{O(c)}K^2$ pairs $(p,p')\in(K,2K]^2$ such that $\norm{q^dp^d\alpha_d+q^dp'^d\alpha_d'}\ll\delta^{-O(c)}\bigbrac{\frac{K}{N}}^{d}$. By the pigeonhole principle there is some $p'\sim K$ such that for $\gg\delta^{O(c)}K$ primes $p\sim K$ the point $q^dp^d\alpha_d\mrd{\mathbb{Z}}$ stays in an interval of length $O(\delta^{-O(c)}\bigbrac{\frac{K}{N}}^{d})$. Here we assume that $\delta^{-O(c)}\bigbrac{\frac{K}{N}}^{d}\ll\delta^{O(1)}$; this can be arranged by taking $0<c<c'<1$ with $c$ sufficiently small, recalling that $K\leq\delta^{c'}N$. An application of \cite[Lemma 8.4]{Mat} yields that there are $\gg\delta^{O(c)}K^d$ integers $n\leq2^{3d}K^d$ such that $q^dn\alpha_d\mrd{\mathbb{Z}}$ stays in an interval of length $O(\delta^{-O(c)}\bigbrac{\frac{K}{N}}^{d})$. Then \cite[Lemma 1.1.14]{Tao}, together with the fact that $q\ll\delta^{-O(c)}$, ensures that there is $0<q_d\ll\delta^{-O(c)}$ such that $\norm{q_d\alpha_d}\ll\delta^{-O(c)}N^{-d}$. We now assume that there are integers $0<q_{j+1},\dots,q_d\ll\delta^{-O(c)}$ such that $\norm{q_i\alpha_i}\ll\delta^{-O(c)}N^{-i}$ holds for all $j<i\leq d$, and consider the case $j$. Pigeonholing again gives some $p'\sim K$ such that for at least a $\delta^{O(c)}$ proportion of $p\sim K$, the point $(\sum_{j\leq i\leq d}\binom i jq^jp^ja_p^{i-j}\alpha_i)\mrd{\mathbb{Z}}$ lies in an interval of length at most $O\bigbrac{\delta^{-O(c)}\bigbrac{\frac{K}{N}}^{j}}$. Meanwhile, setting $Q=q_{j+1}\cdots q_d\ll\delta^{-O(c)}$, the inductive assumption gives, for each $i>j$,
\[
\norm{Qq^jp^ja_p^{i-j}\alpha_i}\leq \frac{Q}{q_i}\,q^jp^ja_p^{i-j}\norm{q_i\alpha_i}\ll\delta^{-O(c)}\Bigbrac{\frac{K}{N}}^i,
\]
since $|a_p|\leq qK$, $p\sim K$ and $\norm{q_i\alpha_i}\ll\delta^{-O(c)}N^{-i}$. This means that, after multiplying through by $Q$, the points $Q\binom i jq^jp^ja_p^{i-j}\alpha_i\mrd\mathbb{Z}$ with $i>j$ move very slowly, and thus for at least a $\delta^{O(c)}$ proportion of $p\sim K$ the point $Qq^jp^j\alpha_j\mrd{\mathbb{Z}}$ stays in an interval of length at most $O\bigbrac{\delta^{-O(c)}\bigbrac{\frac{K}{N}}^{j}}$. Another application of \cite[Lemma 8.4]{Mat} and \cite[Lemma 1.1.14]{Tao} (now with $Qq\ll\delta^{-O(c)}$ in place of $q$) gives an integer $0<q_j\ll\delta^{-O(c)}$ such that $\norm{q_j\alpha_j}\ll\delta^{-O(c)}N^{-j}$. Therefore, the claim follows on taking $q'$ to be the least common multiple of $q_1,\dots,q_d$. Now write $\psi'=q'\cdot \psi_1$; since $(\psi_1\circ g)(n)=\alpha_dn^d+\dots+\alpha_1n+\alpha_0$, the claim (\ref{claim}) gives $\norm{\psi'\circ g(n)-q'\alpha_0}\ll\delta^{-O(c)}\frac{n}{N}$ for every positive integer $n\leq N$. Let $N'=\delta^{O(cC)}N$ for some large $C\geq1$ such that when $n\in[N']$ we have
\[
\norm{\psi'\circ g(n)-q'\alpha_0}\leq\frac{1}{10}.
\]
We then set $F':H/\Lambda\to\mathbb{C}$ to be the function $F'=\eta\bigbrac{\psi'(\cdot)-q'\alpha_0}$, where $\eta:\mathbb{T}\to\mathbb{C}$ is a function with mean zero and bounded Lipschitz norm which equals 1 on the interval $[-\frac{1}{10},\frac{1}{10}]$. Then we have $\int_{H/\Lambda}F'=0$, $\norm{F'}_\mathrm{Lip}\ll\delta^{-O(c)}$ and
\[
\bigabs{\mathbb{E}_{n\in[N']}F'\bigbrac{g(n)\Lambda}}\geq\delta\norm{F'}_\mathrm{Lip},
\]
provided that $c$ is sufficiently small.
This contradicts the assumption that $\bigbrac{g(n)\Lambda}_{n\in[N]}$ is totally $\delta$-equidistributed, and the proof is complete.
\end{proof}
We now return to the representation (\ref{pn}) and utilize the full strength of Lemma \ref{prose}. Recalling that the sequence $\bigbrac{g_i(n)}_{n\in[N]}$ is totally $M^{-A}$-equidistributed for each $i$, where
\begin{align}\label{m}
\log N\leq M\ll(\log N)^{O_A(1)},
\end{align}
$q\leq M$ and $|I|\geq\frac{N}{qM^2}$, one can invoke Lemma \ref{prose} with $\delta=M^{-A}$. To begin with, we split the range $p,p'\sim N_k$ into two cases according to whether or not $\#\bigset{m:m\in \frac{I}{\max\set{p,p'}}}\gg M^{-O(cA)}N/N_k$. In the case where $\bigset{m:m\in \frac{I}{\max\set{p,p'}}}$ does not contain many elements (fewer than $CM^{-O(cA)}N/N_k$), one can show that the total contribution to $(\ref{total})$ is negligible. Indeed, the fact that $F_{i,j}$ is a bounded function leads to
\[
\sum_{m\in \frac{I}{\max\set{p,p'}}}F_{i,j}\bigbrac{g_i(qpm+a_p)\Lambda_i}\bar{F_{i,j}\bigbrac{g_i(qp'm+a_{p'})\Lambda_i}}\ll\#\bigset{m:m\in \frac{I}{\max\set{p,p'}}},
\]
which, by assumption, is $O(M^{-O(cA)}N/N_k)$. Substituting this into (\ref{pp'}), we obtain that (\ref{pp'}) is at most
\begin{multline*}
\Bigbrac{M^{-O(cA)}\frac{N}{N_k}\sum_{p,p'\sim N_k}\log p\log p'|f(p)||f(p')|}^{1/2}=M^{-O(cA)}\Bigbrac{\frac{N}{N_k}}^{1/2}\sum_{p\sim N_k}\log p\,|f(p)|\\\ll M^{-O(cA)}(NN_k)^{1/2},
\end{multline*}
where the last inequality follows from condition (\ref{lp2}) and the Cauchy-Schwarz inequality. Substituting this inequality and (\ref{f2}) into (\ref{pn}) yields that $(\ref{pn})$ is bounded by $O(M^{-O(cA)}W^{1/2}N)$, since $X\ll\frac{N}{qM\log N}$. Thus, the contribution to (\ref{total}) is bounded by
\[
qW^{1/2}M^{-O(cA)}\sum_{k\leq \log N}\#\bigset{(a_1,a_2)\in[0,W-1)^2:a_1a_2\equiv b\mrd{W}}\ll1,
\]
using the facts that $W\leq(\log N)^C$ and $q\leq M$, together with (\ref{m}), and enlarging $A>1$ if necessary. Hence, we only need to consider those pairs $(p,p')$ for which $\#\bigset{m:m\in\frac{I}{\max\set{p,p'}}}\gg M^{-O(cA)}N/N_k$. Decomposing the summation over prime pairs $(p,p')$ in (\ref{pp'}) according to whether or not $(p,p')$ lies in the exceptional set $\mathcal E_{N_k}$, we conclude that (\ref{pp'}) is the square root of the following expression:
\begin{multline*}
\Bigset{\sum_{(p,p')\in\mathcal E_{N_k}}+\sum_{(p,p')\in(N_k,2N_k]^2\backslash\mathcal E_{N_k}}}\log p\log p'|f(p)||f(p')|\\
\times\biggabs{ \sum_{m\in \frac{I}{\max\set{p,p'}}}F_{i,j} \bigbrac{g_i(qpm+a_p)\Lambda_i}\bar{F_{i,j}\bigbrac{g_i(qp'm+a_{p'})\Lambda_i}}}.
\end{multline*}
The application of Lemma \ref{prose} shows that $\#\mathcal E_{N_k}\ll M^{-O(cA)} N_k^2$, and thus the first summation term above is bounded by
\[
\sum_{(p,p')\in(N_k,2N_k]^2}\log p\log p'|f(p)||f(p')|1_{(p,p')\in\mathcal E_{N_k}}\cdot\frac{|I|}{\max\set{p,p'}},
\]
since $F_{i,j}$ is a bounded function. Making use of the Cauchy-Schwarz inequality and condition (\ref{lp2}), and noting that $|I|=\Theta\bigbrac{\frac{N}{qM^2}}$ up to powers of $M$ that can be absorbed into $M^{-O(cA)}$, one finds that it is bounded by
\begin{multline*}
\frac{N}{qM^2N_k}\Bigbrac{\sum_{p\sim N_k}(\log p\,|f(p)|)^2}\Bigbrac{\#\mathcal E_{N_k}}^{1/2}\\\ll q^{-1}M^{-O(cA)}N\sum_{p\sim N_k} \log p\,|f(p)|^2\ll M^{-O(cA)}q^{-1}NN_k.
\end{multline*}
Substituting the above inequality and (\ref{f2}) into the representation (\ref{total}) and taking $A\geq1$ large enough, the total contribution to (\ref{total}) can be bounded by $O(1)$.
It therefore remains to treat the case where $\#\bigset{m:m\in\frac{I}{\max\set{p,p'}}}\gg M^{-O(cA)}N/N_k$ and $(p,p')\not\in\mathcal E_{N_k}$. Another application of Lemma \ref{prose} gives
\begin{multline*}
\sum_{(p,p')\not\in\mathcal E_{N_k}}\log p\log p'|f(p)||f(p')|\biggabs{\sum_{m\in \frac{I}{\max\set{p,p'}}}F_{i,j}\bigbrac{g_i(qpm+a_p)\Lambda_i}\bar{F_{i,j}\bigbrac{g_i(qp'm+a_{p'})\Lambda_i}}}\\
\ll M^{-O(cA)}\frac{N}{N_k}\biggbrac{\sum_{p\sim N_k}\log p\,|f(p)|}^2\ll M^{-O(cA)}\frac{N}{N_k}\sum_{p\sim N_k}\log p\sum_{p\sim N_k}\log p|f(p)|^2\\
\ll M^{-O(cA)} NN_k.
\end{multline*}
In this case, too, the contribution to (\ref{total}) is bounded by $O(1)$. To sum up, in order to prove Lemma \ref{pissmall} we split the prime pairs $(p,p')\in(N_k,2N_k]^2$ in formula (\ref{pp'}) into three cases: the interval $\frac{I}{\max\set{p,p'}}$ is short; the interval $\frac{I}{\max\set{p,p'}}$ contains many elements but $(p,p')$ belongs to the exceptional set $\mathcal E_{N_k}$; and the interval $\frac{I}{\max\set{p,p'}}$ contains many elements and $(p,p')$ lies outside $\mathcal E_{N_k}$, so that (\ref{fij}) exhibits $M^{-O(cA)}$ decay. In each case the contribution to (\ref{total}) is bounded by $O(1)$, which completes the proof.
\end{proof}
\subsection{When $p$ is large}
The task of this subsection is to prove the following lemma; clearly, Theorem \ref{main} follows from it together with Lemma \ref{pissmall}.
\begin{lemma}\label{pislarge}
With all parameters as in (\ref{decom}), we have
\[
\frac{\phi(W)}{WN}\sum_{U<p\leq N}\sum_{i,j}\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\bigbrac{g_i(\frac{pn-b}{W})\Lambda_i}\ll1.
\]
\end{lemma}
\begin{proof}
The proof is, to some extent, similar to that of Lemma \ref{pissmall}. We again start by splitting the interval $U<p\leq N$ into dyadic ranges, say $p\sim U_k$ with $U_k=2^kU$ and $0\leq k\leq\frac{\log N}{3\log 2}$. For a fixed number $0\leq k\leq\frac{\log N}{3\log 2}$ and a fixed pair $(i,j)$, consider the following representation:
\[
\twosum{p\sim U_k}{p\equiv a_1\mrd{Wq}}\log p\,f(p)\twosum{\frac{Wa_{i,j}+b}{p}\leq n\leq\frac{WqX+Wa_{i,j}+b}{p}}{n\equiv a_2\mrd{Wq}}f(n)F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}.
\]
Here, as before, we write $P_{i,j}=a_{i,j}+q\cdot[X]$, and both $a_1$ and $a_2$ run over the residue classes modulo $Wq$ subject to $a_1a_2\equiv Wa_{i,j}+b\mrd{Wq}$. The Cauchy-Schwarz inequality shows that this is bounded by
\begin{multline}\label{pnn'}
\biggbrac{\twosum{p\sim U_k}{p\equiv a_1\mrd{Wq}}\log p\,|f(p)|^2}^{1/2}\biggbrac{\twosum{\frac{Wa_{i,j}+b}{U_k}\leq n,n'\leq\frac{WqX+Wa_{i,j}+b}{U_k}}{n\equiv n'\equiv a_2\mrd{Wq}}f(n)f(n')\times \\\times\twosum{p\sim U_k}{p\equiv a_1\mrd{Wq}}\log p\,F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn-b}{W}}\Lambda_i}\bar{F_{i,j}\Bigbrac{g_i\bigbrac{\frac{pn'-b}{W}}\Lambda_i}}}^{1/2}.
\end{multline}
Condition (\ref{lp2}) implies that the first factor is bounded by $O(U_k^{1/2})$. After a change of variables, we see that the sum over $p$ in the second factor above is bounded by
\[
\sum_{m\sim\frac{U_k}{Wq}}\Lambda(Wqm+a_1)F_{i,j}\bigbrac{g_i(qmn+c_n)\Lambda_i}\bar{F_{i,j}\bigbrac{g_i(qmn'+c_{n'})\Lambda_i}},
\]
since the contribution of higher prime powers is $O(U_k^{1/2+\varepsilon})$, which is admissible in view of the estimate (\ref{bound}) for the expression above.
When $(a_1,Wq)\neq1$, since $F_{i,j}$ is a bounded function, the above expression is bounded by $\log U_k\cdot\#\set{p\sim U_k:p\equiv a_1\mrd{Wq}}\ll\log U_k$, so the contribution is negligible. From now on we assume that $(a_1,Wq)=1$. A possible way to deal with this expression is to follow the idea of \cite[Lemma 9.5]{Mat} and transfer $|\mathbb{E}_{m}\Lambda(Wqm+a_1)F(g'(m)\Gamma)|$ to $|\mathbb{E}_{m}F(g'(m)\Gamma)|$ with some error term. In \cite[Lemma 9.5]{Mat}, however, this error term has only doubly logarithmic decay, which is far too weak for our purpose: one can check that for Theorem \ref{main} to have logarithmic decay, this error term must itself decay logarithmically. This might seem achievable, because \cite[Theorem 2.7]{TT} shows that the correlation of $\Lambda-\Lambda_{\mathrm{Siegel}}$ with polynomial nilsequences has pseudopolynomial decay. It would then remain to estimate the average $\mathbb{E}_{n\in[N]}(\Lambda_{\mathrm{Siegel}}-1)(Wn+b)F(g'(n)\Gamma)$, and \cite[Proposition 7.1]{TT} tells us that it is bounded by $O\bigbrac{M^{O(1)}q_{\mathrm{Siegel}}^{-O(1)}}$. Because in our case $M\ll(\log N)^{O(1)}$, if $q_{\mathrm{Siegel}}\gg_A(\log N)^{A}$ for every $A$, we would have a chance to show that $\mathbb{E}_{n\in[N]}(\Lambda_{\mathrm{Siegel}}-1)(Wn+b)F(g'(n)\Gamma)$ has logarithmic decay. However, such a lower bound on $q_{\mathrm{Siegel}}$ is never guaranteed in the case $W\ll(\log N)^{O(1)}$, as one sees from the analysis surrounding formula (5.6) in \cite[P. 19]{TT}.\footnote{We would like to thank Joni Ter\"av\"ainen for explaining their paper to us.} Our strategy is as follows. When $p$ is large, that is $p>U=N^{2/3}$, then since $pn,pn'\leq WN$, both $n$ and $n'$ must be small. Inspired by the proof of Lemma \ref{pissmall}, we expect that when the polynomial $g$ is highly equidistributed, for almost all pairs $(n,n')$ the product polynomial $\bigbrac{g(n\cdot),g(n'\cdot)}$ is equidistributed. The problem therefore reduces to the correlation of an equidistributed nilsequence with the shifted von Mangoldt function, and this can be estimated by the bilinear form method adapted to polynomial nilsequences; see \cite[Section 3]{GT12b} for an example. For technical reasons, we decompose the summation range of $n,n'$ in the following representation into dyadic ranges, say $n,n'\sim V_l$ with $V_l=2^l\frac{Wa_{i,j}}{U_k}$ and $0\leq l\ll\log N$:
\begin{align}\label{splitnn'}
\sum_{n,n'} f(n)f(n')\sum_{m\sim\frac{U_k}{Wq}}\Lambda(Wqm+a_1)F_{i,j}\bigbrac{g_i(qmn+c_n)\Lambda_i}\bar{F_{i,j}\bigbrac{g_i(qmn'+c_{n'})\Lambda_i}},
\end{align}
where both $n$ and $n'$ are supported in the interval $[\frac{Wa_{i,j}+b}{U_k},\frac{WqX+Wa_{i,j}+b}{U_k}]$ and subject to $n\equiv n'\equiv a_2\mrd{Wq}$. Let $g_{i;n,n'}:\mathbb{Z}\to H_i/\Lambda_i\times H_i/\Lambda_i$ be the polynomial defined by
\[
g_{i;n,n'}(m)=\bigbrac{g_i(qnm+c_n),g_i(qn'm+c_{n'})}.
\]
\subsection*{Claim}
There is a small constant $0<c<1$ such that the following statement holds uniformly for $V_l$ with $0\leq l\ll\log N$. Suppose that $\bigbrac{g(m)}_{m\sim\frac{U_k}{Wq}}$ is totally $M^{-A}$-equidistributed. Whenever the pair $(n,n')\in(V_l,2V_l]^2$ lies outside a subset of size $O(M^{-O(cA)}V_l^2)$, the corresponding product polynomial sequence $\bigbrac{g_{i;n,n'}(m)}_{m\sim\frac{U_k}{Wq}}$ is totally $M^{-O(cA)}$-equidistributed. Since the treatment is the same for every $i$, in what follows we fix one such $i$ and omit the subscript $i$.
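Before turning to the proof of the claim, let us record the size constraint that drives this strategy (a small computation included here for concreteness): since $p>U=N^{2/3}$ and $pn\leq WN+b$ with $W\ll(\log N)^C$, we have
\[
n\leq\frac{WN+b}{U}\ll N^{1/3}(\log N)^{C},
\]
and likewise for $n'$; this is why the dyadic parameters $V_l$ above only reach size about $N^{1/3}$ up to logarithmic factors.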
As in the proof of Lemma \ref{prose}, assume that the sequence $\bigbrac{g_{n,n'}(m)}_{m\sim\frac{U_k}{Wq}}$ fails to be $M^{-O(cA)}$-equidistributed. Then by the quantitative Leibman theorem (Theorem \ref{leibman}), there is a non-trivial horizontal character of modulus $0<|\psi|\ll M^{O(cA)}$ such that
\[
\norm{\psi\circ g_{n,n'}}_{C^\infty[\frac{U_k}{Wq},\frac{2U_k}{Wq}]}\ll M^{O(cA)}.
\]
One can see from the proof of Lemma \ref{prose} that this contradicts the assumption that $\bigbrac{g(m)}_{m\sim\frac{U_k}{Wq}}$ is $M^{-A}$-equidistributed, once $0<c<1$ is taken sufficiently small. One point deserves attention: the prime pairs $(p,p')$ of Lemma \ref{prose} are replaced by integer pairs $(n,n')$ here; this causes no trouble, as it only requires replacing \cite[Lemma 8.4]{Mat} by \cite[Lemma 3.3]{GT12b} in the proof. The claim then follows from an application of \cite[Lemma 7.2]{Mat}, which asserts that if $(g(m))_{m\in[N]}$ is $\delta$-equidistributed, then there is a number $0<c<1$ such that $(g(m))_{m\in[N]}$ is totally $\delta^c$-equidistributed. Now write $\widetilde F_{i,j}(\gamma,\gamma')=F_{i,j}(\gamma)\bar{F_{i,j}(\gamma')}$, a Lipschitz function on $H_i/\Lambda_i\times H_i/\Lambda_i$. Then $\int_{H_i/\Lambda_i\times H_i/\Lambda_i}\widetilde F_{i,j}=0$ and $\|\widetilde F_{i,j}\|_\mathrm{Lip}\ll M^{O(1)}$. Next, our aim is to prove that when $\bigbrac{g_{i;n,n'}(m)}_{m\sim\frac{U_k}{Wq}}$ is totally $M^{-O(cA)}$-equidistributed, the following inequality holds:
\begin{align}\label{lastone}
\sum_{m\sim\frac{U_k}{Wq}}\Lambda(Wqm+a_1)\widetilde F_{i,j}(g_{i;n,n'}(m)\Lambda_i\times\Lambda_i)\ll M^{-O(cA)}\frac{U_k}{Wq}.
\end{align}
Here $a_{1}$ is less than $Wq$ and coprime to $Wq$. This follows from the next lemma, a brief proof of which is given after the proof of Lemma \ref{pislarge}.
\begin{lemma}[The von Mangoldt function does not correlate with equidistributed nilsequences]\label{Mangoldt}
Let $0<\delta<1/2$ be a parameter. Suppose that $G/\Gamma$ is a nilmanifold of finite dimension $m_G$, that $G_\bullet$ is a filtration of $G$ of degree $d$, and that $\mathcal Y$ is a $\delta^{-O(1)}$-rational Mal'cev basis adapted to $G_\bullet$. Suppose that $g\in\mathrm{poly}(\mathbb{Z},G_\bullet)$ is a polynomial sequence and $(g(n))_{n\in[N]}$ is totally $\delta$-equidistributed. Then for any coprime pair $1\leq b\leq W\ll\delta^{-O(1)}$ and any Lipschitz function $F:G/\Gamma\to\mathbb{C}$ with $\int_{G/\Gamma}F=0$ and $\norm{F}_\mathrm{Lip}\ll\delta^{-O(1)}$, we have
\[
\mathbb{E}_{n\in[N]}\Lambda(Wn+b)F(g(n)\Gamma)\ll\delta^{O(1)}.
\]
\end{lemma}
We now assemble all the ingredients to finish the proof of Lemma \ref{pislarge}. For a fixed number $0\leq l\ll\log N$, divide the pairs $(n,n')\in(V_l,2V_l]^2$ into two sets: one on which (\ref{lastone}) holds for every pair $(n,n')$, and the other, which we call the exceptional set $\mathcal E_l$. In view of the claim, the size of the exceptional set $\mathcal E_l$ is at most $O(M^{-O(cA)}V_l^2)$. Moreover, for $(n,n')\in\mathcal E_l$ we notice that
\begin{align}\label{bound}
\sum_{m\sim\frac{U_k}{Wq}}\Lambda(Wqm+a_1)\widetilde F_{i,j}(g_{i;n,n'}(m)\Lambda_i\times\Lambda_i)\ll\sum_{m\sim\frac{U_k}{Wq}}\Lambda(Wqm+a_1)\ll\frac{U_k}{\phi(Wq)},
\end{align}
since $\widetilde F_{i,j}$ is bounded and $(Wq,a_1)=1$.
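The final estimate in (\ref{bound}) is a standard Brun-Titchmarsh-type bound for primes in arithmetic progressions (recorded here as a remark; the constant is immaterial): with $x=\frac{U_k}{Wq}$,
\[
\sum_{m\sim x}\Lambda(Wqm+a_1)\ll\frac{Wqx}{\phi(Wq)}=\frac{U_k}{\phi(Wq)},
\]
where we use that $Wq\ll(\log N)^{O(1)}$ is far smaller than $U_k$ and that higher prime powers contribute negligibly.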
One can now see from (\ref{lastone}), together with the above information on the exceptional set, that (\ref{splitnn'}) is bounded by
\[
\sum_{0\leq l\ll\log N}\Bigset{M^{-O(cA)}\frac{U_k}{Wq}\sum_{n,n'\sim V_l}|f(n)||f(n')|+\frac{U_k}{\phi(Wq)}\sum_{(n,n')\in(V_l,2V_l]^2}|f(n)||f(n')|1_{\mathcal E_l}(n,n')}.
\]
We now estimate these two terms in turn. Summing over the dyadic intervals $(V_l,2V_l]$ and then using the Cauchy-Schwarz inequality and condition (\ref{fl2}), the first term is bounded by
\begin{align*}
&\qquad M^{-O(cA)}\frac{U_k}{Wq}\biggbrac{\sum_{\frac{Wa_{i,j}+b}{U_k}\leq n\leq\frac{WqX+Wa_{i,j}+b}{U_k}}|f(n)|}^2\\
&\ll M^{-O(cA)}\frac{U_k}{Wq}\biggbrac{\sum_{n\leq\frac{WqX+Wa_{i,j}+b}{U_k}}|f(n)|-\sum_{n\leq\frac{Wa_{i,j}+b}{U_k}}|f(n)|}^2\ll M^{-O(cA)}\frac{WqX^2}{U_k}.
\end{align*}
Similarly, the Cauchy-Schwarz inequality, condition (\ref{fl2}) and summation over the dyadic intervals $(V_l,2V_l]$ show that the second term is bounded by
\begin{multline*}
\frac{U_k}{\phi(Wq)}\Bigbrac{\sum_{l\ll\log N}\sum_{n\sim V_l}|f(n)|^2}\Bigbrac{\sum_{l\ll\log N}\sum_{(n,n')\in(V_l,2V_l]^2}1_{\mathcal E_l}(n,n')^2}^{1/2}\\
\ll \frac{U_k}{\phi(Wq)}\biggbrac{\sum_{n\leq\frac{WqX+Wa_{i,j}+b}{U_k}}|f(n)|^2-\sum_{n\leq\frac{Wa_{i,j}+b}{U_k}}|f(n)|^2}(\log N)^{1/2}|\mathcal E_l|^{1/2}\\\ll M^{-O(cA)}(\log N)^{1/2}\frac{NX}{U_k}.
\end{multline*}
Recalling that $X\gg\frac{N}{qM^2}$ and $\log N\leq M\ll(\log N)^{O_A(1)}$, and taking $A>1$ sufficiently large, the factors $(\log N)^{1/2}$ and $N/X\ll qM^2$ can be absorbed into $M^{-O(cA)}$; thus both of the above terms are bounded by $O\bigbrac{M^{-O(cA)}\frac{X^2}{U_k}}$. Substituting this into (\ref{pnn'}) and recalling that the first factor in (\ref{pnn'}) is at most $O(U_k^{1/2})$, we conclude that (\ref{pnn'}) is bounded by $O(M^{-O(cA)}X)$. Summing over the intervals $(U_k,2U_k]$ with $0\leq k\leq\frac{\log N}{3\log2}$, one therefore has
\begin{multline*}
\frac{\phi(W)}{WN}\sum_{U<p\leq N}\sum_{i,j}\sum_{pn\in W\cdot P_{i,j}+b}\log p\,f(p)f(n)F_{i,j}\bigbrac{g_i(\frac{pn-b}{W})\Lambda_i}\\
\ll \frac{\phi(W)}{WN}\sum_{k\ll\log N}\sum_{i,j}\twosum{a_1,a_2\mrd{Wq}}{a_1a_2\equiv Wa_{i,j}+b\mrd{Wq}}M^{-O(cA)}X\ll1,
\end{multline*}
since $X\ll\frac{N}{qM\log N}$ and $A>1$ is taken sufficiently large. This completes the proof of the lemma.
\end{proof}
\noindent\emph{A brief proof of Lemma \ref{Mangoldt}.} Recall the well-known identity of Vaughan \cite{Vau}:
\[
\Lambda(n)=\Lambda(n) 1_{n \leq N^{1 / 3}}-\sum_{d \leq N^{2 / 3}} a_{d} 1_{d \mid n}+\sum_{d \leq N^{1 / 3}} \mu(d) 1_{d \mid n} \log \frac{n}{d}+\sum_{d, w>N^{1 / 3}} \Lambda(d) b_{w} 1_{d w=n},
\]
where $a_d=\sum_{bc=d:b,c\leq N^{1/3}}\mu(b)\Lambda(c)$ and $b_w=\sum_{c|w:c>N^{1/3}}\mu(c)$. The first term is negligible; we call the second and fourth terms the Type I and Type II sums respectively. The third term can be expressed as a convex combination of Type I sums by using the identity
\[
\log \frac{n}{d}=\log N-\int_{1}^{N} 1_{t>n} \frac{d t}{t}-\log d,
\]
and then absorbing the various logarithmic factors into the divisor-bounded coefficients.
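As a one-line check of the last identity (included for the reader's convenience): for $1\leq n\leq N$,
\[
\log N-\int_{1}^{N} 1_{t>n}\,\frac{dt}{t}-\log d=\log N-(\log N-\log n)-\log d=\log\frac{n}{d}.
\]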
Therefore, the expression $\sum_{n\in[N]}\Lambda(Wn+b)F(g(n)\Gamma)$ can be written as follows:
\[
\twosum{n\in P}{n\leq N^{1/3}}\Lambda(n)F\Bigbrac{g\bigbrac{\frac{n-b}{W}}\Gamma}+\sum_{n\in P}\twosum{d\leq N^{2/3}}{d|n}a_dF\Bigbrac{g\bigbrac{\frac{n-b}{W}}\Gamma}+\twosum{dw\in P}{d,w\geq N^{1/3}}a_db_wF\Bigbrac{g\bigbrac{\frac{dw-b}{W}}\Gamma},
\]
where $P=b+W\cdot[N]$ is the underlying arithmetic progression, $a_d\ll (\log N)^{O(1)}\tau^{O(1)}(d)$, and $b_w\ll (\log N)^{O(1)}\tau^{O(1)}(w)$. Plainly, the first term is bounded by $O(N^{1/2})$ (say) and is thus negligible, while the latter two terms can be dealt with in the same manner as in \cite[Section 3]{GT12b}; see also \cite[Section 7]{TT}.
\vspace{2mm}
\qed
\section{Proof of Theorem \ref{lfunction} and Theorem \ref{mulfunction}}
We first give some standard facts about $L$-functions, which can be found in \cite[Section 2]{RS} and \cite[Chapter 5]{IK}. Let $m\geq 2$ be an integer and let $\pi=\otimes_p \pi_p$ be a normalized irreducible cuspidal automorphic representation of $GL_m$ over $\mathbb{Q}$; here normalized means that $\pi$ has unitary central character. The $L$-function $L(s,\pi)$ associated to $\pi$ is defined by
\[L(s,\pi)=\sum_{n=1}^{\infty}\frac{\lambda_\pi(n)}{n^s}=\prod_p\prod_{j=1}^{m}\left(1-\frac{\alpha_{j,\pi}(p)}{p^s}\right)^{-1},\]
for suitable complex numbers $\alpha_{j,\pi}(p)$. The Dirichlet series converges absolutely for $\Re(s)>1$. There is also an archimedean local factor $L(s, \pi_\infty)$: in terms of the $m$ complex Langlands parameters $\mu_\pi(j)$, it is defined by
\[L(s, \pi_\infty)=\pi^{-\frac{m s}{2}} \prod_{j=1}^{m} \Gamma\left(\frac{s+\mu_{\pi}(j)}{2}\right).\]
The generalized Ramanujan conjecture and the Selberg conjecture assert that
\[|\alpha_{j,\pi}(p)|\leq 1 \ \textrm{and} \ \Re(\mu_{\pi}(j))\geq 0.\]
The Ramanujan conjecture was proved by Deligne \cite{Del} for holomorphic cusp forms on $GL_2$. By Luo, Rudnick and Sarnak \cite{LRS} and M\"uller and Speh \cite{MS}, we know that
\begin{equation}\label{bound-alpha}
|\alpha_{j,\pi}(p)|\leq p^{\frac{1}{2}-\frac{1}{m^2+1}} \ \textrm{and} \ \Re(\mu_{\pi}(j))\geq -(\frac{1}{2}-\frac{1}{m^2+1}).
\end{equation}
In order to state the functional equation for $L(s,\pi)$, we need the contragredient $\widetilde{\pi}$ of $\pi$, which is also an irreducible cuspidal automorphic representation of $GL_m$ over $\mathbb{Q}$. For each $p\leq \infty$, we have
\[\big\{\alpha_{j, \widetilde{\pi}}(p): 1 \leq j \leq m\big\}=\big\{\overline{\alpha_{j, \pi}(p)}: 1 \leq j \leq m\big\}\]
and
\[\big\{\mu_{\widetilde{\pi}}(j): 1 \leq j \leq m\big\}=\big\{\overline{\mu_{\pi}(j)}: 1 \leq j \leq m\big\}.\]
Define the completed $L$-function
\[\Lambda(s, \pi)=N_{\pi}^{s / 2} L(s, \pi) L\left(s, \pi_{\infty}\right),\]
where $N_\pi$ is the conductor of $\pi$. Then $\Lambda(s, \pi)$ extends to an entire function and satisfies the functional equation
\[\Lambda(s, \pi)=\xi (\pi) \Lambda(1-s, \widetilde{\pi}),\]
where $\xi(\pi)$ is a complex number of modulus 1. Let $\chi$ be a primitive Dirichlet character modulo $q$.
Then the twisted $L$-function is defined by
\[L(s, \pi \times \chi)=\prod_{p} \prod_{j=1}^{m}\left(1-\frac{\alpha_{j, \pi \times \chi}(p)}{p^{s}}\right)^{-1}.\]
When $p\nmid q$, we have
\[\left\{\alpha_{j, \pi \times \chi}(p): 1 \leq j \leq m\right\}=\left\{\alpha_{j, \pi}(p) \chi(p): 1 \leq j \leq m\right\}.\]
Thus we get
\begin{equation}\label{Lspichi}
\begin{aligned}
\sum_{n=1}^{\infty} \frac{\lambda_{\pi}(n) \chi(n)}{n^{s}} &=\prod_{p} \prod_{j=1}^{m}\left(1-\frac{\alpha_{j, \pi}(p) \chi(p)}{p^{s}}\right)^{-1} \\
&=L(s, \pi \times \chi) \prod_{p \mid q} \prod_{j=1}^{m}\left(1-\frac{\alpha_{j, \pi \times \chi }(p)}{p^{s}}\right).
\end{aligned}
\end{equation}
We also need the following convexity bound for $L(s,\pi\times\chi)$, see \cite[Lemma 3.2]{JLW}:
\begin{equation}\label{convbound-l}
L(\frac{1}{2}+it,\pi\times\chi)\ll (q(1+|t|))^{\frac{m}{4}+\varepsilon}.
\end{equation}
Next, we prove that $\lambda_\pi (n)\in \mathcal M^\prime$ with $W=1$; Theorem \ref{lfunction} then follows directly from Theorem \ref{main}. Conditions (\ref{fl2}) and (\ref{lp2}) can easily be deduced from Rankin-Selberg theory and the prime number theorem for Rankin-Selberg $L$-functions, see \cite[(5.2) and Page 631]{JLW}, namely
\begin{equation}\label{lampi-l2}
\sum_{n \leq N}\left|\lambda_{\pi}(n)\right|^{2} \ll N,
\end{equation}
and
\begin{equation}\label{bound-lam-p}
\sum_{p \leq N}\left|\lambda_{\pi}(p)\right|^{2} \log p \ll N.
\end{equation}
It remains to prove that $\lambda_\pi(n)$ satisfies condition (\ref{w-equi}) with $W=1$. Taking $P=\{n\leq N: n\equiv b\ (\bmod q)\}$ with $q\leq (\log N)^C$, we consider the sum
\[ \sum_{n\leq N \atop n\equiv b(\bmod q)}\lambda_{\pi}(n).\]
We consider two cases, according to whether $(b,q)=1$ or $(b,q)>1$.

Case I: $(b,q)=1.$ Applying the orthogonality of Dirichlet characters, we have
\begin{equation}\label{lam-arithprog}\sum_{n\leq N \atop n\equiv b(\bmod q)}\lambda_{\pi}(n)=\frac{1}{\varphi(q)}\sum_{\chi\bmod q}\bar{\chi}(b)\sum_{n\leq N }\lambda_{\pi}(n)\chi(n).\end{equation}
It suffices to estimate the sum
\begin{equation}\label{pichi}
\sum_{n \leq N} \lambda_{\pi}(n) \chi(n)
\end{equation}
for any $\chi(\bmod q)$. This is the same sum as in \cite[(5.4)]{JLW}, where it is proved that
\begin{equation}\label{bound-lamchi}
\sum_{n\leq N}\lambda_{\pi}(n)\chi(n)\ll q^{\frac{m}{2m+4}+\varepsilon}N^{\frac{m+1}{m+2}+\varepsilon}.
\end{equation}
Inserting (\ref{bound-lamchi}) into (\ref{lam-arithprog}), we get
\begin{equation}\label{bound-lambdarith}
\sum_{n\leq N \atop n\equiv b(\bmod q)}\lambda_{\pi}(n)\ll q^{\frac{m}{2m+4}+\varepsilon}N^{\frac{m+1}{m+2}+\varepsilon}.
\end{equation}
This also holds for $q=1$. Since $q\leq(\log N)^C$, the right-hand side is $\ll N^{1-\delta}$ for some $\delta>0$, so $\lambda_{\pi}(n)$ satisfies condition (\ref{w-equi}) with $W=1$ in this case.

Case II: $(b,q)>1$. First we consider the sum restricted to square-free $n$. Let $d=(b,q)$, $b=b^\prime d$ and $q=q^\prime d$; then $(b^\prime,q^\prime)=1$ and, for square-free $n\equiv b\ (\bmod q)$, we have $d\mid n$ and $(d,n/d)=1$. Let $\chi_d$ denote the principal character modulo $d$.
Then we have
\begin{equation}\label{equ-squarefree}
\begin{aligned}
\threesum{n \leq N}{n \equiv b(\bmod q)}{n \ \textrm{square-free}} \lambda_{\pi}(n)&=\lambda_{\pi}(d) \threesum{l \leq N / d }{l\equiv b^\prime (\bmod q^\prime)} {l \ \textrm{square-free}} \lambda_{\pi}(l)\chi_d(l)\\
&=\frac{\lambda_{\pi}(d)}{\varphi(q^\prime)}\sum_{\chi\bmod q^\prime }\bar{\chi}(b^\prime)\twosum{l \leq N / d}{l \ \textrm{square-free}} \lambda_{\pi}(l)(\chi\chi_d)(l).
\end{aligned}
\end{equation}
Since $\chi\chi_{d}$ is a character modulo $q^\prime d=q$, applying (\ref{bound-lamchi}) to the inner sum gives
\begin{equation}\label{bound-squarefree}
\threesum{n\leq N}{ n\equiv b(\bmod q)}{n \ \textrm{square-free}}\lambda_\pi(n)\ll q^{\frac{m}{2m+4}+\varepsilon}N^{\frac{m+1}{m+2}+\varepsilon}.
\end{equation}
Here we used $\lambda_{\pi}(d)\ll d^{\frac{1}{2}-\frac{1}{m^2+1}+\varepsilon}.$ Next we remove the restriction that $n$ is square-free. Every positive integer $n$ can be represented in a unique way as
\[n=k l, \quad k \text { is square-free, } \quad l \text { is square-full, } \quad(k, l)=1\]
(for instance, $720=2^4\cdot3^2\cdot5$ has $l=2^4\cdot3^2=144$ and $k=5$). Hence we have
\begin{equation*}
\begin{aligned}
\sum_{n \leq N \atop n \equiv b(\bmod q)} \lambda_{\pi}(n)&=\threesum{l k \leq N}{lk \equiv b(\bmod q)}{l \ \textrm{square-full}, \ k \ \textrm{square-free}} \lambda_{\pi}(l) \lambda_{\pi}(k)=\sum_{l \leq N \atop l \textrm{ square-full }} \lambda_{\pi}(l) \threesum{k \leq N / l }{l k \equiv b(\bmod q)}{k \ \textrm{square-free}} \lambda_{\pi}(k)\\
&=\sum_{l \leq N\atop l \ \textrm{square-full}} \lambda_{\pi}(l) \threesum{k \leq N / l }{ k \equiv \frac{b}{(l, q)} \overline{\frac{l}{(l, q)}}\left(\bmod \frac{q}{(l, q)}\right)}{k \ \textrm{square-free}} \lambda_{\pi}(k).
\end{aligned}
\end{equation*}
We split the sum over $l$ into two parts: $1\leq l\leq N^{4/5}$ and $N^{4/5}< l\leq N$. If $1\leq l\leq N^{4/5}$, then $N/l\geq N^{1/5}$, and applying (\ref{bound-squarefree}) to the inner sum gives
\begin{equation}\label{lsmall}
\threesum{n \leq N}{n \equiv b(\bmod q)}{l\leq N^{4/5}, \ l \ \textrm{square-full}} \lambda_{\pi}(n)\ll q^{\frac{m}{2m+4}+\varepsilon}N^{\frac{m+1}{m+2}+\varepsilon}\sum_{l \leq N^{4/5}\atop l\ \textrm{square-full}} \frac{|\lambda_{\pi}(l)|}{l^{\frac{m+1}{m+2}+\varepsilon}}\ll q^{\frac{m}{2m+4}+\varepsilon}N^{\frac{5m+9}{5m+10}+\varepsilon}.
\end{equation}
Here we used (\ref{lampi-l2}) and the Cauchy-Schwarz inequality to get
\[\sum_{l \leq N^{4/5}\atop l \ \textrm{ square-full}} \frac{|\lambda_{\pi}(l)|}{l^{\frac{m+1}{m+2}+\varepsilon}}\ll N^{\frac{4}{5(m+2)}}\left(\sum_{l\leq N^{4/5}}\frac{1}{l}\right)^\frac{1}{2}\left(\sum_{l\leq N^{4/5}}\frac{|\lambda_{\pi}(l)|^2}{l}\right)^\frac{1}{2}\ll N^{\frac{4}{5(m+2)}}\log N. \]
If $N^{4/5}<l\leq N$, we use (\ref{lampi-l2}) and the Cauchy-Schwarz inequality to obtain
\begin{equation}\label{llarge}
\begin{aligned}
\threesum{n \leq N}{n \equiv b(\bmod q)}{N^{4/5}<l\leq N, \ l\ \textrm{square-full}} \lambda_{\pi}(n)&\ll \sum_{N^{4/5}<l \leq N \atop l \ \textrm{square-full}}|\lambda_{\pi}(l)|\twosum{k\leq N^{1/5}}{k\ \textrm{square-free}}|\lambda_{\pi}(k)|\\
&\ll \left(\sum_{l \leq N \atop l \ \textrm{square-full}}1\right)^{1/2}\left(\sum_{l \leq N}|\lambda_{\pi}(l)|^2\right)^{1/2}\left(\sum_{k \leq N^{1/5} }1\right)^{1/2}\left(\sum_{k \leq N^{1/5} }|\lambda_{\pi}(k)|^2\right)^{1/2}\\
&\ll N^{19/20}.
\end{aligned}
\end{equation}
From (\ref{lsmall}) and (\ref{llarge}), we have
\[\sum_{n \leq N \atop n \equiv b(\bmod q)} \lambda_{\pi}(n)\ll q^{\frac{m}{2m+4}+\varepsilon}N^{\frac{5m+9}{5m+10}+\varepsilon}.\]
Hence $\lambda_{\pi}(n)$ satisfies condition (\ref{w-equi}) in this case as well. This completes the proof of Theorem \ref{lfunction}.

In order to prove Theorem \ref{mulfunction}, we prove that $\mu(n)\lambda_\pi (n)\in \mathcal M^\prime$ with $W=1$. From (\ref{lampi-l2}) and (\ref{bound-lam-p}), we know that
\begin{equation}
\sum_{n \leq N}\left|\mu(n)\lambda_{\pi}(n)\right|^{2}\leq \sum_{n \leq N}\left|\lambda_{\pi}(n)\right|^{2} \ll N,
\end{equation}
and
\begin{equation}
\sum_{p \leq N}\left|\mu(p)\lambda_{\pi}(p)\right|^{2} \log p \ll N.
\end{equation}
This says that $\mu(n)\lambda_{\pi}(n)$ satisfies conditions (\ref{fl2}) and (\ref{lp2}). Next we consider the sum
\[ \sum_{n\leq N \atop n\equiv b(\bmod q)}\mu(n)\lambda_{\pi}(n),\]
with $q\leq (\log N)^C$. We use a recent result of Jiang, L\"u and Wang \cite[Section 5]{JLW21}: for $\pi$ self-dual with $\pi \ncong \pi\otimes\chi$ for any quadratic primitive character $\chi$, they prove that
\begin{equation}\label{bound-mupichi}
\sum_{n \leqslant N} \mu(n) \lambda_{\pi}(n) \chi(n) \ll N \exp \left( -c \sqrt{\log N}\right)
\end{equation}
for any Dirichlet character $\chi(\bmod \ q)$, where $c>0$ is a constant. Let $d=(b,q)$, $b=b^\prime d$ and $q=q^\prime d$; then $(b^\prime,q^\prime)=1$. Let $\chi_d$ denote the principal character modulo $d$. Then we have
\begin{equation}
\begin{aligned}
\sum_{n\leq N\atop n\equiv b(\bmod q)} \mu(n)\lambda_{\pi}(n) &=\sum_{n\leq N/d \atop n \equiv b^\prime\left(\bmod q^\prime\right)} \mu(dn)\lambda_{\pi}\left(d n\right) \\
&= \mu(d)\lambda_{\pi}(d) \sum_{n\leq N/d\atop n\equiv b^\prime (\bmod q^\prime)} \mu(n)\lambda_{\pi}\left(n\right) \chi_{d}\left(n\right) \\
&=\frac{\mu(d)\lambda_{\pi}(d)}{\varphi\left(q^\prime\right)} \sum_{\chi\left(\bmod q^\prime\right)} \bar{\chi}\left(b^\prime\right) \sum_{n \leqslant N / d}\mu(n) \lambda_{\pi}\left(n\right) \left(\chi \chi_{d}\right)\left(n\right).
\end{aligned}
\end{equation}
Since $\chi\chi_d$ is a character modulo $q^\prime d=q$, applying (\ref{bound-mupichi}) to the inner sum yields
\[\sum_{n\leq N\atop n\equiv b(\bmod q)} \mu(n)\lambda_{\pi}(n)\ll N\exp \left( -c \sqrt{\log (N/q)}\right),\]
where we again used $\lambda_{\pi}(d)\ll d^{\frac{1}{2}-\frac{1}{m^2+1}+\varepsilon}$. Hence $\mu(n)\lambda_{\pi}(n)$ satisfies condition (\ref{w-equi}) with $W=1$. This completes the proof of Theorem \ref{mulfunction}.

\bibliographystyle{plain}
\newcommand{\mathieu}[1]{\textcolor{magenta}{\texttt{mathieu:} #1}}
\newcommand{\pierre}[1]{\textcolor{olive}{\texttt{pierre:} #1}}
\newcommand{\mic}[1]{\textcolor{blue}{\texttt{michael:} #1}}

\begin{document}

\twocolumn[
\aistatstitle{Sinkformers: Transformers with Doubly Stochastic Attention}
\aistatsauthor{ Michael E. Sander \And Pierre Ablin \And Mathieu Blondel \And Gabriel Peyré}
\aistatsaddress{ENS and CNRS \And ENS and CNRS \And Google Research, Brain team \And ENS and CNRS}]

\begin{abstract}
Attention-based models such as Transformers involve pairwise interactions between data points, modeled with a learnable attention matrix. Importantly, this attention matrix is normalized with the SoftMax operator, which makes it row-wise stochastic. In this paper, we propose instead to use Sinkhorn's algorithm to make attention matrices doubly stochastic. We call the resulting model a Sinkformer. We show that the row-wise stochastic attention matrices in classical Transformers get close to doubly stochastic matrices as the number of epochs increases, justifying the use of Sinkhorn normalization as an informative prior. On the theoretical side, we show that, unlike the SoftMax operation, this normalization makes it possible to understand the iterations of self-attention modules as a discretized gradient-flow for the Wasserstein metric. We also show in the infinite number of samples limit that, when rescaling both attention matrices and depth, Sinkformers operate a heat diffusion. On the experimental side, we show that Sinkformers enhance model accuracy in vision and natural language processing tasks. In particular, on 3D shapes classification, Sinkformers lead to a significant improvement.
\end{abstract}

\section{Introduction}
%
The Transformer \citep{vaswani2017attention}, an architecture that relies entirely on attention mechanisms \citep{bahdanau_2014}, has achieved state-of-the-art empirical success in natural language processing (NLP) \citep{brown2020language, radford2019language, wolf2019huggingface} as well as in computer vision \citep{dosovitskiy2020image, zhao2020point, zhai2021scaling, lee2019set}. As the key building block of the Transformer, the self-attention mechanism takes the following residual form \citep{yun2019transformers}, given an $n$-sequence $(x_1 , x_2 , ... , x_n)$ embedded in dimension $d$:
\begin{equation}\label{eq:attention}
x_i \leftarrow x_i +\sum^{n}_{j=1} K^{1}_{i,j} W_V x_j,
\end{equation}
where ${K^{1}} \coloneqq \texttt{SoftMax}(C)$ with $C_{i,j} \coloneqq ({W_Qx_i})^{\top}{W_Kx_j}$ $= x_i^\top W_Q^\top W_K x_j$. Here, $W_Q, W_K \in \RR^{m\times d}$ and $ W_V \in \RR^{d\times d}$ are the query, key and value matrices. The SoftMax operator can be seen as a normalization of the matrix $K^0 \coloneqq \exp(C)$ as follows: $K^1_{ij} \coloneqq K^0_{ij}/ \sum_{l=1}^n K^0_{il}$ for all $i$ and $j$. Importantly, the matrix $K^{1}$ is row-wise stochastic: its rows all sum to $1$. In this work, we propose to take the normalization process further by successively normalizing the rows and columns of $K^{0}$. This process is known to provably converge to a doubly stochastic matrix (i.e., whose rows and columns both sum to $1$) and is called Sinkhorn's algorithm \citep{sinkhorn1964relationship, cuturi2013sinkhorn, peyre2019computational}. We denote the resulting doubly stochastic matrix $K^{\infty}$.
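To make the three normalizations concrete, the following is a minimal NumPy sketch (our own illustration, not the authors' released implementation; the names \texttt{attention\_scores}, \texttt{sinkhorn} and the parameter \texttt{n\_iter} are ours). With \texttt{n\_iter=1}, \texttt{sinkhorn} reduces to the row-wise SoftMax giving $K^1$; as \texttt{n\_iter} grows, its output approaches the doubly stochastic $K^\infty$.
\begin{verbatim}
import numpy as np

def attention_scores(X, W_Q, W_K):
    # C[i, j] = <W_Q x_i, W_K x_j>: raw attention scores.
    return (X @ W_Q.T) @ (X @ W_K.T).T

def sinkhorn(C, n_iter=10):
    # Normalize K0 = exp(C) by alternately scaling rows and
    # columns to sum to 1 (Sinkhorn's algorithm).
    # n_iter = 1 is exactly the row-wise SoftMax.
    K = np.exp(C - C.max())  # subtract max for stability
    for it in range(n_iter):
        K = K / K.sum(axis=1, keepdims=True)      # rows
        if it < n_iter - 1:
            K = K / K.sum(axis=0, keepdims=True)  # columns
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))       # n = 10 points, d = 4
W_Q = rng.normal(size=(3, 4))      # m = 3
W_K = rng.normal(size=(3, 4))
C = attention_scores(X, W_Q, W_K)
K1 = sinkhorn(C, n_iter=1)         # row-stochastic (SoftMax)
Kinf = sinkhorn(C, n_iter=100)     # nearly doubly stochastic
print(K1.sum(axis=1))              # all ones
print(Kinf.sum(axis=0))            # all close to one
\end{verbatim}
In a Sinkformer, this (finitely iterated) $K^{\infty}$ simply replaces $K^{1}$ in the residual update \eqref{eq:attention}.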
Intuitively, such a normalization relies on a democratic principle where all points are matched one to another with different degrees of intensity, so that more interactions are considered than with the SoftMax normalization, as shown in Figure~\ref{fig:different_norm}.
\vspace{-1em}
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{figures/first_fig_4-crop.pdf}
\caption{\textbf{Illustration of the different normalizations of attention matrices.} We form two point clouds $(W_Qx_i)_{1\leq i \leq 10}$ (green) and $(W_Kx_j)_{1\leq j \leq 10}$ (red). For $k \in \{0, 1, \infty\}$, the width of the line connecting $x_i$ to $x_j$ is $K^{k}_{i,j}$. We only display connections with $K^{k}_{i,j} \geq 10^{-12}$. For $K^{0}$, one interaction dominates. For $K^{1}$ (SoftMax), one cluster is ignored. For $K^{\infty}$ (Sinkhorn), all points are involved in an interaction.
}\label{fig:different_norm}
\vspace{-1em}
\end{figure}
We call our Transformer variant where the SoftMax is replaced by Sinkhorn a \textbf{Sinkformer}. Since Sinkhorn's first iteration coincides exactly with the SoftMax, Sinkformers include Transformers as a special case. Our modification is differentiable, easy to implement using deep learning libraries, and can be executed on GPUs for fast computation. Because the set of row-wise stochastic matrices contains the set of doubly stochastic matrices, the use of doubly stochastic matrices can be interpreted as a prior. On the experimental side, we confirm that doubly stochastic attention leads to better accuracy in several learning tasks. On the theoretical side, doubly stochastic matrices also give a better understanding of the mathematical properties of self-attention maps. To summarize, we make the following contributions.
\begin{itemize}[topsep=0pt,itemsep=2pt,parsep=2pt,leftmargin=10pt]
\item We show empirically that row-wise stochastic matrices seem to converge to doubly stochastic matrices during the learning process in several classical Transformers (Figure \ref{fig:sum_training}). Motivated by this finding, we then introduce the Sinkformer, an extension of the Transformer in which the SoftMax is replaced by the output of Sinkhorn's algorithm. In practice, our model is parametrized by the number of iterations in the algorithm, therefore interpolating between the Transformer and the Sinkformer.
\item On the theoretical side, we show that Transformers and Sinkformers can be viewed as models acting on discrete distributions, and we show under a symmetry assumption that Sinkformers can be seen in the infinite depth limit as a Wasserstein gradient flow for an energy minimization (Proposition \ref{prop:gradient_flows}). We also show that the classical Transformer with the SoftMax operator cannot be interpreted as such a flow (Proposition \ref{prop:not_a_gradient}). To the best of our knowledge, this is the first time such a connection is established. We also prove that in the infinite number of particles limit (when $n$ goes to infinity), the iterations of Sinkformers converge to the heat equation (Theorem \ref{thm:diffusion}), while the corresponding equation for Transformers is nonlinear and nonlocal (Proposition \ref{prop:soft}).
\item On the experimental side, we show that Sinkformers lead to a significant accuracy gain compared to Transformers on the ModelNet 40 3D shapes classification task. We then demonstrate better performance of Sinkformers on the NLP IMDb dataset for sentiment analysis and IWSLT'14 German to English neural machine translation tasks.
Sinkformers also achieve better accuracy than Vision Transformers on image classification tasks. The proposed method is therefore capable of enhancing the performance of Transformers in a wide range of applications. \end{itemize} \section{Background and related work}\label{sec:background} \paragraph{Transformers.} Proposed by \cite{vaswani2017attention}, the Transformer is a fully attention-based architecture. Originally designed to process sequences for natural language processing (NLP), it has since been extended in many variants, such as Vision Transformers \citep{dosovitskiy2020image, zhai2021scaling}, Set Transformers \citep{lee2019set} or Point Cloud Transformers \citep{zhao2020point}. The Transformer and its variants are based on an encoder-decoder structure, where the decoder can have a more or less complex form. The encoder is fully \textit{self}-attention based. After embedding the original input sequence and concatenating it with a positional encoding, the encoder applies a series of residual blocks that iterate relation \eqref{eq:attention}, each followed by a feed-forward neural network applied to each $x_i$ independently. In its most complex form, such as in neural machine translation, the decoder combines a \textit{self}-attention mechanism and a \textit{cross}-attention one, meaning that it is given access to the encoder via another multi-head attention block. \paragraph{Sinkhorn and Attention.} To the best of our knowledge, Sinkhorn's algorithm has been used in Transformers only once before, in a different context \citep{tay2020sparse}. The authors propose to learn efficient and sparse attention using a differentiable algorithm for sorting and rearranging elements in the input sequence. For this purpose, they introduce a sorting network to generate a doubly stochastic matrix (which can be seen as a relaxed version of a permutation matrix) and use it to sort the sequence in a differentiable fashion. \citet{mialon2021trainable} propose an embedding for sets of features in $\RR^d$ based on Sinkhorn's algorithm, using the regularized optimal transport plan between data points and a reference set. \citet{niculae_2018} use doubly stochastic attention matrices in LSTM-based encoder-decoder networks, but they use Frank-Wolfe or active-set methods to compute the attention matrix. None of these works uses Sinkhorn on self-attention maps in Transformers or provides its theoretical analysis, as we do. \paragraph{Impact of bi-normalization.} Theoretical properties of kernels $\Kk$, of which attention is an instance, can also be studied through the operator $f \mapsto f - \Kk f$. Bi-normalization of kernels over manifolds has already been studied in the literature, for uniform measures \citep{singer2006graph}, weighted measures \citep{hein2007graph} and in a more general setup with associated diffusion operators \citep{ting2011analysis}. \citet{milanfar2013symmetrizing} proposes to approximate smoothing operators by doubly stochastic matrices using Sinkhorn's updates, leading to better performance in data analysis and signal processing. Importantly, the works of \cite{marshall2019manifold} and \cite{wormell2021spectral} introduce a normalization that is precisely based on Sinkhorn's algorithm. They prove that this method models a Langevin diffusion and leads to the approximation of a symmetric operator. They also show that convergence to this operator is faster with the Sinkhorn normalization than with the SoftMax normalization.
In Section \ref{sec:laplacians}, we adopt a similar point of view with a parametrized cost and show that different normalizations result in different partial differential equations (PDEs) in the infinite number of particles limit. \paragraph{Infinite depth limit.} Studying deep residual neural networks (ResNets) \citep{he2016deep} in the infinitesimal step-size regime (or infinite depth limit) has recently emerged as a new framework for analyzing their theoretical properties. The ResNet equation \begin{equation}\label{eq:resnet} x_i \leftarrow x_i + T(x_i) \end{equation} can indeed be seen as a discretized Euler scheme with unit step size for the ordinary differential equation (ODE) $\dot{x_i}=T(x_i)$ \citep{E_2017,chen2018neural,dupont2019augmented,sun2018stochastic,E_2018,lu2017finite,ruthotto2018deep, pmlr-v139-sander21a}. In Section \ref{sec:gradient_flows}, we adopt this point of view on residual attention layers in order to get a better theoretical understanding of attention mechanisms. This is justified by the fact that, for instance, GPT-3 \citep{brown2020language} has 96 layers. % \paragraph{Neural networks on measures.} The self-attention mechanism \eqref{eq:attention} acts on sets $\{ x_i \}_{i}$ where the ordering of the elements does not matter. An equivalent way to model such invariant architectures is to consider them as acting on probability measures or point clouds of varying cardinality \citep{de2019stochastic, vuckovic2021regularity, zweig2021functional}. Specifically, a collection of points $(x_i)_{1\leq i \leq n}$, where $x_i \in \RR^{d}$, can also be seen as a discrete measure on $\RR^d$: $\mu \coloneqq \frac{1}{n} \sum_{i=1}^{n} \delta_{x_i} \in \Mm(\RR^d)$, where $\Mm(\RR^d)$ is the set of probability measures on $\RR^d$. A map $T_\mu$ then acts on $\mu$ through $F(\mu) \coloneqq \frac{1}{n} \sum_{i=1}^{n} \delta_{T_\mu(x_i)}$. One notable interest of such a point of view is to consider the evolution of non-ordered sets of points. Another is to consider the mean-field (or large sample) limit, that is, when $n \to \infty$, to conduct theoretical analysis \citep{zweig2021functional}, as is done when analyzing the properties of SGD in the mean-field limit \citep{song2018mean}. \section{Sinkformers} We now introduce Sinkformers, a modification of any Transformer obtained by replacing the SoftMax operator in the attention modules by Sinkhorn's algorithm. \paragraph{Attention matrices during training.} In Transformers, attention matrices are row-wise stochastic. A natural question is how their column sums evolve during training. On $3$ different models and $3$ different learning tasks, we computed the column sums of attention matrices in Transformers. We find that the learning process makes the attention matrices more and more doubly stochastic, as shown in Figure \ref{fig:sum_training}. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figures/softmax_learning-crop.pdf} \caption{\textbf{Sum over columns} of attention matrices at different training epochs (color) when training, from left to right, a ViT on MNIST (Section \ref{sec:vit}), a \texttt{fairseq} Transformer on IWSLT'14 (Section \ref{sec:fair}), and a Point Cloud Transformer on ModelNet 40 (Section \ref{sec:modelnet}). \textbf{The majority of columns naturally sum close to $1$.} }\label{fig:sum_training} \vspace{-1em} \end{figure} Thus, row-wise stochastic attention matrices seem to approach doubly stochastic matrices during the learning process in classical Transformers.
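For concreteness, the quantity tracked in Figure \ref{fig:sum_training} can be computed in a few lines; the following PyTorch sketch (ours, illustrative) takes the pre-normalization cost matrix $C$ of one attention head.
\begin{verbatim}
import torch

def column_sums(C):
    # K^1 = SoftMax(C): rows sum to 1 by construction.
    K1 = torch.softmax(C, dim=-1)
    # Columns sum to 1 only if K1 is doubly stochastic;
    # Figure 2 tracks these sums over training epochs.
    return K1.sum(dim=0)
\end{verbatim}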
Therefore, it seems natural to impose double stochasticity as a prior and to study the resulting model theoretically and experimentally. A process that extends the SoftMax and produces such matrices is Sinkhorn's algorithm. \paragraph{Sinkhorn's algorithm.} Given a matrix $C \in \RR^{n \times n}$, and denoting $K^{0} \in \RR^{n \times n}$ such that $K^{0} = \exp(C)$, Sinkhorn's algorithm \citep{sinkhorn1964relationship, cuturi2013sinkhorn, peyre2019computational} iterates, starting from $K^{0}$: \begin{equation}\label{eq:sinkhorn_alg} K^{l+1} = \left\{ \begin{array}{r@{\hspace{1mm}}l} &N_R(K^{l})\quad\text{if $l$ is even} \\ &N_C(K^{l})\quad\text{if $l$ is odd}, \end{array} \right. \end{equation} where $N_R$ and $N_C$ correspond to row-wise and column-wise normalizations: $(N_R(K))_{i,j} \coloneqq \frac{K_{i,j}}{\sum_{l=1}^{n} K_{i,l}}$ and $(N_C(K))_{i,j} \coloneqq \frac{K_{i,j}}{\sum_{l=1}^{n} K_{l,j}}$. We denote by $K^{\infty} \coloneqq \texttt{Sinkhorn}(C)$ the resulting limit. It is doubly stochastic in the sense that $K^{\infty} \mathbb{1}_n = \mathbb{1}_n$ and ${K^{\infty}}^{\top} \mathbb{1}_n = \mathbb{1}_n$. The operations in \eqref{eq:sinkhorn_alg} are perfectly suited for execution on GPUs \citep{charlier2021kernel, cuturi2013sinkhorn}. \paragraph{Sinkformers.} For simplicity, we consider a one-head attention block that iterates equation \eqref{eq:attention}. Note that $K^1 \coloneqq \texttt{SoftMax}(C)$ is precisely the output of Sinkhorn's algorithm \eqref{eq:sinkhorn_alg} after $1$ iteration. In this paper, we propose to take Sinkhorn's algorithm several steps further, until it approximately converges to a doubly stochastic matrix $K^{\infty}$. This process can easily be implemented in practice, simply by plugging Sinkhorn's algorithm into the self-attention modules of existing architectures, without changing the overall structure of the network. We call the resulting drop-in replacement of a Transformer a Sinkformer. It iterates \vspace{-1em} \begin{equation}\label{eq:attention_particles_sinkhorn} x_i \leftarrow x_i + \sum^{n}_{j=1} K^{\infty}_{i,j} W_V x_j. \end{equation} In Sections \ref{sec:gradient_flows} and \ref{sec:laplacians}, we investigate the theoretical properties of Sinkformers. We exhibit connections with energy minimization in the space of measures and with the heat equation, thereby proposing a new framework for understanding attention mechanisms. All our experiments are described in Section \ref{sec:experiments} and show the benefits of using Sinkformers in a wide variety of applications. \paragraph{Computational cost and differentiation.} Turning a Transformer into a Sinkformer simply relies on replacing the SoftMax by Sinkhorn, i.e., substituting $K^1$ with $K^\infty$. In practice, we use a finite number of Sinkhorn iterations and therefore use $K^{l}$, where $l$ is large enough that $K^{l}$ is almost doubly stochastic. Doing $l$ iterations of Sinkhorn takes $l$ times longer than the SoftMax. However, this is not a problem in practice, because Sinkhorn is not the main computational bottleneck and because only a few iterations (typically $3$ to $5$) are sufficient to converge to a doubly stochastic matrix. As a result, the practical training time of Sinkformers is comparable to that of regular Transformers, as detailed in our experiments. Sinkhorn is perfectly suited for backpropagation (automatic differentiation), by differentiating through the operations of \eqref{eq:sinkhorn_alg}.
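These operations take only a few lines; a minimal PyTorch sketch of iteration \eqref{eq:sinkhorn_alg} is given below (ours, illustrative; our experiments use the log-domain variant discussed in Section \ref{sec:experiments}).
\begin{verbatim}
import torch

def sinkhorn(C, n_iter=3):
    # K^0 = exp(C); n_iter = 1 recovers the SoftMax exactly,
    # larger values approach a doubly stochastic matrix.
    K = torch.exp(C)
    for it in range(n_iter):
        if it % 2 == 0:
            # N_R: row-wise normalization
            K = K / K.sum(dim=1, keepdim=True)
        else:
            # N_C: column-wise normalization
            K = K / K.sum(dim=0, keepdim=True)
    return K
\end{verbatim}
Substituting \texttt{sinkhorn(C)} for the SoftMax in the sketch of the introduction yields iteration \eqref{eq:attention_particles_sinkhorn}.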
The Jacobian of the solution of an optimization problem can also be computed using the implicit function theorem \citep{griewank_2008, krantz_2012, blondel2021efficient} instead of backpropagation if the number of iterations becomes a memory bottleneck. Together with Sinkhorn, implicit differentiation has been used by \cite{luise_2018} and \cite{cuturi_2020}. \paragraph{Invariance to the cost function.} Recall that in practice one has $C_{i,j} = ({W_Qx_i})^{\top}{W_Kx_j}$. An important aspect of Sinkformers is that their output is unchanged if the cost is modified with non-interacting terms, as the next proposition shows. \begin{prop}\label{prop:modularity} Let $C \in \RR^{n\times n}$. Consider, for $(f, g) \in \RR^n \times \RR^n$, the modified cost function $\Tilde{C}_{i,j} \coloneqq C_{i,j} + f_i + g_j$. Then $\texttt{Sinkhorn}(C) = \texttt{Sinkhorn}(\Tilde{C})$. \end{prop} A proof is available in Appendix \ref{app:proofs}. A consequence of this result is that one can consider the cost $\Tilde{C}_{i,j} \coloneqq - \frac{1}{2}\| {W_Qx_i} - {W_Kx_j}\|^2$ instead of $C_{i,j} = ({W_Qx_i})^{\top}{W_Kx_j}$ without affecting $K^\infty$. A Transformer using the cost $\Tilde{C}$ is referred to as using L2 self-attention; it is Lipschitz under some assumptions \citep{kim2021lipschitz} and can therefore be used as an invertible model \citep{behrmann2019invertible}. For instance, we use $\tilde{C}$ in Proposition \ref{prop:soft}. \section{Attention and gradient flows}\label{sec:gradient_flows} In this section, we draw a parallel between self-attention modules in Sinkformers and gradient flows in the space of measures. We denote by $\Mm(\RR^d)$ the set of probability measures on $\RR^d$ and by $\Cc(\RR^d)$ the set of continuous functions on $\RR^d$. We denote by $\nabla$ the gradient operator, by $\mathrm{div}$ the divergence, and by $\Delta$ the Laplacian, that is, $\Delta = \mathrm{div}(\nabla)$. \paragraph{Residual maps for attention.} We consider a one-head attention block operating with different normalizations, and the continuous counterparts of the attention matrices seen in the previous section. We denote $c(x, x') \coloneqq {(W_Qx)}^{\top}W_Kx'$ and $k^{0} \coloneqq \exp(c)$. For some measure $\mu \in \Mm(\RR^d)$, we define the $\texttt{SoftMax}$ operator on the cost $c$ by $k^{1}(x, x') = \texttt{SoftMax}(c)(x, x') \coloneqq \frac{k^{0}(x, x')}{\int k^{0}(x, y) d \mu(y)} $. Similarly, we define Sinkhorn's algorithm as the following iterations, starting from $k^{0} = \exp(c)$: \begin{equation}\label{eq:sinkhorn_alg_measure} k^{l+1}(x,x') = \left\{ \begin{array}{r@{\hspace{2mm}}l} &\frac{k^{l}(x, x')}{\int k^{l}(x, y) d \mu(y)}\quad\text{if $l$ is even} \\ &\frac{k^{l}(x, x')}{\int k^{l}(y, x') d \mu(y)}\quad\text{if $l$ is odd}. \end{array} \right. \end{equation} We denote by $k^{\infty} \coloneqq \texttt{Sinkhorn}(c)$ the resulting limit. Note that if $\mu$ is a discrete measure supported on an $n$-sequence of particles $(x_1 , x_2 , ... , x_n)$, $\mu =\frac{1}{n} \sum_{i=1}^{n}\delta_{x_i}$, then for all $(i,j)$, $k^{0}(x_i, x_j) = K^0_{i,j}$, $k^{1}(x_i, x_j) = K^{1}_{i,j}$ and $k^{\infty}(x_i, x_j) = K^{\infty}_{i,j}$, so that $k^0$, $k^1$ and $k^\infty$ are indeed the continuous equivalents of the matrices $K^0$, $K^1$ and $K^\infty$, respectively. \paragraph{Infinitesimal step-size regime.} In order to better understand the theoretical properties of attention matrices in Transformers and Sinkformers, we omit the feed-forward neural networks acting after each attention block.
We consider a succession of attention blocks with tied weights between layers and study the infinite depth limit, where the output is given by solving a neural ODE \citep{chen2018neural}. In this framework, iterating the Transformer equation \eqref{eq:attention}, the ResNet equation \eqref{eq:resnet} or the Sinkformer equation \eqref{eq:attention_particles_sinkhorn} corresponds to an Euler discretization with step size $1$ of the ODEs \vspace{-1em} \begin{equation} \dot x_i = T_{\mu}(x_i) ~ \text{for all } i, \end{equation} where $x_i(t)$ is the position of $x_i$ at time $t$. For an arbitrary measure $\mu \in \Mm(\RR^d)$, these ODEs can equivalently be written as a continuity equation \citep{renardy_2006} \begin{equation}\label{eq:partial} \partial_t\mu + \mathrm{div}(\mu T_{\mu}) = 0. \end{equation} When $T_\mu$ is defined by the ResNet equation \eqref{eq:resnet}, $T_{\mu} = T$ does not depend on $\mu$. It defines an advection equation where the particles do not interact and evolve independently. When $T_{\mu}$ is defined by the Transformer equation~\eqref{eq:attention} or the Sinkformer equation~\eqref{eq:attention_particles_sinkhorn}, $T_{\mu}$ depends on $\mu$ and the particles interact: the local vector field depends on the positions of the other particles. More precisely, in this case we have $T^1_{\mu}(x) = \int k^1(x,x')W_V x' d \mu(x')$ for the Transformer and $ T^\infty_{\mu}(x) = \int k^{\infty}(x,x')W_V x' d \mu(x')$ for the Sinkformer. It is easily seen that when $\mu$ is discrete we recover the operators in equations~\eqref{eq:attention} and~\eqref{eq:attention_particles_sinkhorn}. % \paragraph{Wasserstein gradient flows.} A particular case of equation \eqref{eq:partial} is when $T_{\mu}$ is a gradient with respect to the Wasserstein metric $W_2$. Let $\Ff$ be a function on $\Mm(\RR^d)$. As is standard, we suppose that $\Ff$ admits a first variation at all $\mu$: there exists a function $\frac{\delta \Ff}{\delta \mu}(\mu)$ such that $\frac{d}{d \varepsilon} \Ff(\mu + \varepsilon \rho)_{|\varepsilon = 0} = \int \frac{\delta \Ff}{\delta \mu}(\mu) d \rho $ for every perturbation $\rho$ \citep{santambrogio2017euclidean}. The Wasserstein gradient of $\Ff$ at $\mu$ is then $\nabla_W \Ff (\mu) \coloneqq \nabla (\frac{\delta \Ff}{\delta \mu}(\mu))$. The minimization of $\Ff$ on the space of measures corresponds to the PDE~\eqref{eq:partial} with $T_\mu=- \nabla_W \Ff (\mu)$. This PDE can be interpreted as governing the evolution of the measure $\mu$ of particles initially distributed according to some measure $\mu_0$, whose positions $x(t)$ follow the flow $\dot{x} = - \nabla_W \Ff (\mu)(x)$, which minimizes the global energy $\Ff$. It corresponds to a steepest descent in Wasserstein space \citep{jordan1998variational}. In Proposition \ref{prop:gradient_flows}, we show, in the symmetric kernel case, that Sinkformers correspond to a Wasserstein gradient flow for some functional $\Ff^\infty$, while Transformers do not. \paragraph{Particular case.} An example is when $T_\mu$ does not depend on $\mu$ and can be written as $T_\mu = - \nabla E$ where $E : \RR^d \to \RR$. Under regularity assumptions, a solution of \eqref{eq:partial} then converges to a local minimum of $E$. This fits in the implicit deep learning framework \citep{bai2019deep}, where a neural network is seen as solving an optimization problem.
A typical benefit of implicit models is that the iterates $x_i$ do not need to be stored during the forward pass of the network, because gradients can be calculated using the implicit function theorem: this bypasses the GPU memory storage issue \citep{wang2018superneurons,peng2017large,zhu2017unpaired} during automatic differentiation. Another application is to consider neural architectures that include an argmin layer, for which the output is also formulated as the solution of a nested optimization problem \citep{agrawal2019differentiable, gould2016differentiating, gould2019deep}. % \paragraph{Flows for attention.} Our goal is to determine the PDEs \eqref{eq:partial} defined by the proposed attention maps. We consider the symmetric case, summarized by the following assumption: \begin{asp}\label{asp:sym} ${W_K}^{\top} W_Q = {W_Q}^{\top} W_K = -W_V$ \end{asp} Assumption \ref{asp:sym} means that we consider symmetric kernels (by imposing ${W_K}^{\top} W_Q = {W_Q}^{\top} W_K$), and that when differentiating $x \mapsto \exp(c(x, x'))$, we obtain $-\exp(c)W_V$. We show that, under this assumption, the PDEs defined by $k^{0}$ and $k^{\infty}$ correspond to Wasserstein gradient flows, whereas this is not the case for $k^{1}$. A particular case of imposing ${W_K}^{\top} W_Q = {W_Q}^{\top} W_K$ is when $W_Q = W_K$. This equality setting is studied by \cite{kim2021lipschitz}, where the authors show that it leads to similar performance for Transformers. Since imposing ${W_K}^{\top} W_Q = {W_Q}^{\top} W_K$ is less restrictive, it seems to be a natural assumption. Imposing $W_Q^\top W_K = -W_V$ is more restrictive, and we detail the expressions for the PDEs associated to $k^0, k^1, k^{\infty}$ without this assumption in Appendix \ref{app:proofs}. We have the following result. \begin{prop}[PDEs associated to $k^0, k^1, k^{\infty}$]\label{prop:gradient_flows} Suppose Assumption \ref{asp:sym}. Let $\Ff^0, \Ff^\infty : \Mm(\RR^d) \to \RR$ be such that $\Ff^0(\mu) \coloneqq \frac{1}{2} \int k^{0} d (\mu\otimes \mu) $ and $\Ff^\infty(\mu) \coloneqq -\frac{1}{2}\int k^{\infty} \log(\frac{k^{\infty}}{k^{0}})d (\mu\otimes \mu)$. Then $k^0$, $k^1$ and $k^{\infty}$ respectively generate the PDEs $\frac{\partial \mu}{\partial t} + \mathrm{div}(\mu T_{\mu}^{k}) = 0$ with $T_{\mu}^{0} \coloneqq - \nabla_W \Ff^0 (\mu)$, $T_{\mu}^{1} \coloneqq - \nabla[\log(\int k^{0}(\cdot, x') d \mu(x'))]$ and $T_{\mu}^{\infty} \coloneqq - \nabla_W \Ff^\infty (\mu)$. \end{prop} A proof is given in Appendix \ref{app:proofs}. Proposition \ref{prop:gradient_flows} shows that $k^0$ and $k^{\infty}$ correspond to Wasserstein gradient flows. In contrast, the PDE defined by $k^1$ does not correspond to such a flow. More precisely, we have the following result. \begin{prop}[The SoftMax normalization does not correspond to a gradient flow]\label{prop:not_a_gradient} One has that $T_{\mu}^{1} = -\nabla[\log(\int k^{0}(\cdot, x') d \mu(x'))]$ is not a Wasserstein gradient. \end{prop} A proof, based on the lack of symmetry of $T_\mu^1$, is given in Appendix \ref{app:proofs}. As a consequence of these results, we believe this variational formulation of attention mechanisms for Sinkformers (Proposition \ref{prop:gradient_flows}) provides a perspective for analyzing the theoretical properties of attention-based mechanisms in light of Wasserstein gradient flow theory \citep{santambrogio2017euclidean}.
Moreover, it makes it possible to interpret Sinkformers as argmin layers, which is promising in terms of theoretical and experimental investigations, and which is not possible for Transformers, according to Proposition \ref{prop:not_a_gradient}. % \textcolor{black}{Our results are complementary to those of \cite{dong2021attention}, where the authors show that, \textbf{with no skip connections} and without the feed-forward neural network acting after each attention block, the output of a Transformer converges doubly exponentially with depth to a rank-1 matrix. In contrast, we propose a complementary analysis that takes skip connections into account, as is standard in Transformers. Precisely because we consider such connections, we end up with very different behaviors. Indeed, as shown in the next section, our analysis reveals that the relative signs of $W_K$, $W_Q$ and $W_V$ imply very different behaviors, such as aggregation or diffusion. The dynamics obtained when considering skip connections are therefore richer than a rank collapse phenomenon.} \section{Attention and diffusion}\label{sec:laplacians} In this section, we use the same notations as in Section \ref{sec:gradient_flows}. We consider the mean-field limit, where the measure $\mu$ has a density with respect to the Lebesgue measure. We are interested in how the density of particles evolves for an infinite depth self-attention network with tied weights between layers. We adopt Assumption \ref{asp:sym} and suppose that $W_K^\top W_Q$ is positive semi-definite. For a bandwidth $\varepsilon > 0$, let ${k^{\infty}_{\varepsilon}} = \texttt{Sinkhorn}({c}/{\varepsilon})$, that is, the attention kernel of the Sinkformer with the cost ${c}/{\varepsilon}$. The mapping ${T}^{\infty}_{\mu, \varepsilon} : x \mapsto \frac{1}{\varepsilon}\int {k^{\infty}_{\varepsilon}}(x,x')W_V x'd\mu(x')$ corresponds to the continuous version of the Sinkformer where we rescale ${W_Q}^{\top} W_K = -W_V$ by ${\varepsilon}$. To better understand the dynamics of attention, we study the asymptotic regime in which the bandwidth $\varepsilon \to 0$. In this regime, one can show that $\forall x \in \RR^d$, $\varepsilon T^{\infty}_{\mu,\varepsilon}(x) \to W_V x$ (details in Appendix \ref{app:proofs}). Thus, to go beyond first order, we study the modified map $\overline{T}^{\infty}_{\mu, \varepsilon} = {T}^{\infty}_{\mu, \varepsilon} - \frac{1}{\varepsilon} W_V.$ A natural question is the limit of this quantity when $\varepsilon \to 0$, and what the PDE defined by this limit is. We have the following theorem. \begin{thm}[Sinkformer's PDE] \label{thm:diffusion} Let $\mu \in \Mm(\RR^d)$. Suppose that $\mu$ is supported on a compact set and has a density $\rho \in \Cc^{3}(\RR^d)$. Suppose Assumption \ref{asp:sym} and that $W_K^\top W_Q$ is positive semi-definite. Then one has, in $L^2$ norm as $\varepsilon \to 0$, $$\overline{T}^{\infty}_{\mu, \varepsilon} \to \overline{T}^{\infty}_{\mu, 0} \coloneqq - \frac{\nabla \rho}{\rho}.$$ In this limit, the PDE $\partial_t\rho + \mathrm{div}(\rho \overline{T}^{\infty}_{\mu, 0}) = 0$ can be rewritten as \begin{equation}\label{eq:_laplacian_sink} \partial_t \rho = \Delta \rho. \end{equation} \end{thm} A proof, making use of Theorem 1 from \citet{marshall2019manifold}, is available in Appendix \ref{app:proofs}. We recover in Equation \eqref{eq:_laplacian_sink} the well-known \textbf{heat equation}. We now compare this result with the one obtained with the SoftMax normalization.
In order to carry out a similar analysis, we make use of a Laplace expansion result \citep{tierney1989fully, singer2006graph}. However, the kernel ${k}^{1}_{\varepsilon} = \texttt{SoftMax}(c/\varepsilon)$ is not suited to Laplace's method, because it does not always have a limit when $\varepsilon \to 0$. Thus, we consider the modified cost of Proposition~\ref{prop:modularity}, $\tilde{c}(x, x') = -\frac{1}{2}\|W_Qx - W_K x'\|^{2}$. The kernel $\tilde{k}^{1}_{\varepsilon} =\texttt{SoftMax}(\tilde{c}/\varepsilon)$, to which we can now apply the Laplace expansion result, corresponds to the L2 self-attention formulation \citep{kim2021lipschitz}. Note that, thanks to Proposition \ref{prop:modularity}, $\tilde{k}^{\infty}_{\varepsilon} = {k}^{\infty}_{\varepsilon}$: Sinkhorn's algorithm has the same output for both costs. To simplify the derived expressions, we assume that $W_Q$ and $W_K$ are in $\RR^{d\times d}$ and are invertible. Similarly to the analysis conducted for Sinkformers, we consider the mapping ${T}^{1}_{\mu, \varepsilon} : x \mapsto \frac{1}{\varepsilon}\int {\tilde{k}^{1}_{\varepsilon}}(x,x')W_V x'd\mu(x')$. When $\varepsilon \to 0$, we show that $\forall x \in \RR^d$, $\varepsilon T^{1}_{\mu,\varepsilon}(x) \to - W_Q^{\top} W_Q x$ (details in Appendix \ref{app:proofs}). Thus, we consider $\overline{T}^{1}_{\mu, \varepsilon} = {T}^{1}_{\mu, \varepsilon} + \frac{1}{\varepsilon} W_Q^{\top} W_Q$. We have the following result. \begin{prop}[Transformer's PDE] \label{prop:soft} Let $\mu \in \Mm(\RR^d)$. Suppose that $\mu$ is supported on a compact set and has a density $\rho \in \Cc^{1}(\RR^d)$. Suppose Assumption \ref{asp:sym} and that $W_Q$ and $W_K$ are in $\RR^{d\times d}$ and are invertible. Then one has, $\forall x \in \RR^d$, $$\overline{T}^{1}_{\mu, \varepsilon}(x) \to \overline{T}^{1}_{\mu, 0}(x) \coloneqq - W^{\top}_Q W^{-1} _K \frac{\nabla \rho}{\rho}(W^{-1}_K W_Q x ).$$ In this limit, the PDE $\partial_t\rho+ \mathrm{div}(\rho \overline{T}^{1}_{\mu, 0}) = 0$ can be rewritten as \begin{equation}\label{eq:_laplacian_softmax} \partial_t \rho = \mathrm{div}\left(W^{\top}_Q W^{-1} _K \frac{\nabla \rho}{\rho}(W^{-1}_K W_Q \cdot ) \rho\right). \end{equation} \end{prop} A proof is given in Appendix \ref{app:proofs}. While equation \eqref{eq:_laplacian_sink} corresponds to the heat equation, equation \eqref{eq:_laplacian_softmax} is different. First, it is nonlinear in $\rho$. Second, it is nonlocal, since the evolution of the density at $x$ depends on the value of this density at location $W^{-1}_K W_Qx$. Note that the linear and local aspect of the Sinkformer's PDE on the one hand, and the nonlinear and nonlocal aspect of the Transformer's PDE on the other hand, remain true without assuming ${W_Q}^\top W_K = -W_V$ (details in Appendix \ref{app:proofs}). \section{Experiments}\label{sec:experiments} We now demonstrate the applicability of Sinkformers on a large variety of experiments with different modalities. We use PyTorch \citep{paszke2017automatic} and Nvidia Tesla V100 GPUs. Our code is open-sourced and available at \url{https://github.com/michaelsdr/sinkformers}. All the experimental details are given in Appendix \ref{app:exp_details}. \paragraph{Practical implementation.} In all our experiments, we use existing Transformer architectures and replace the SoftMax operator in the attention modules with Sinkhorn's algorithm, which we implement in the $\log$ domain for stability (details in Appendix \ref{app:imp_details}).
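For completeness, a log-domain version of the sketch of Section 3 reads as follows (again illustrative): it manipulates $\log K$ and replaces each normalization by a \texttt{logsumexp}, avoiding overflow in $\exp(C/\varepsilon)$.
\begin{verbatim}
import torch

def sinkhorn_log(C, n_iter=3):
    # log K^0 = C; alternate row / column normalizations
    # in the log domain for numerical stability.
    log_K = C
    for it in range(n_iter):
        dim = 1 if it % 2 == 0 else 0
        log_K = log_K - torch.logsumexp(log_K, dim=dim,
                                        keepdim=True)
    return torch.exp(log_K)
\end{verbatim}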
\subsection{ModelNet 40 classification}\label{sec:modelnet} The ModelNet 40 dataset \citep{wu20153d} is composed of 40 popular object categories in 3D. Transformers for point clouds and sets have been applied to ModelNet 40 classification in several works, such as Set Transformers \citep{lee2019set} or Point Cloud Transformers \citep{guo2021pct}. \paragraph{Set Sinkformers.} Set Transformers \citep{lee2019set} also have an encoder-decoder structure, with different possibilities for defining attention-based set operations. We focus on the architecture that uses \textit{Induced Self Attention Blocks} (ISAB), which bypass the quadratic time complexity of Self Attention Blocks (SAB). More details about this architecture can be found in \citet{lee2019set}. We reproduce the ModelNet 40 classification experiment using $5000$ uniformly sampled points for each shape, and use a Set Transformer and a Set Sinkformer with two ISAB layers in the encoder and a decoder composed of a SAB and a Pooling by Multihead Attention (PMA) module. While the reported test accuracy of the Set Transformer is $87.8 \%$, our Set Sinkformer reaches a best accuracy of $89.1 \%$ when performing $21$ iterations of Sinkhorn's algorithm. Results are summarized in Table \ref{tab:results_MODELNET}. \begin{figure}[H] \includegraphics[width=\columnwidth]{figures/compare_result_modelnet-crop.pdf} \caption{\textbf{Classification error and loss on ModelNet 40} when training a Set Transformer and a Set Sinkformer with different numbers of iterations in Sinkhorn's algorithm.}\label{fig:model_net_lc} \vspace{-1em} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figures/compare_sinkhorn_classif_40k_10k_8_tries-crop.pdf} \vspace{-2em} \caption{\textbf{Learning curves} when training a Transformer and a Sinkformer on the sentiment analysis task on the IMDb dataset.} \label{fig:sentiment_imbd} \vspace{-1em} \end{figure*} Moreover, we show in Figure \ref{fig:model_net_lc} the learning curves corresponding to this experiment. Interestingly, increasing the number of iterations in Sinkhorn's algorithm increases the accuracy of the model. Note that we only consider an odd number of iterations, since we always want row-wise stochastic attention matrices, to be consistent with the properties of the SoftMax. \paragraph{Point Cloud Transformers.} We also train Point Cloud Transformers \citep{guo2021pct} on ModelNet 40. This architecture achieves accuracy comparable to the state of the art on this dataset. We compare best and median test accuracy over $4$ runs. Results are reported in Table \ref{tab:results_MODELNET}, where we see that while the best test accuracy is narrowly achieved by the Transformer, the Sinkformer has a slightly better median accuracy.
\begin{table}[h] \vskip -0.15in \centering \caption{\label{tab:results_MODELNET}\textbf{Test accuracy for ModelNet 40} over 4 runs for each model.} \vskip 0.15in \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{|l|l|l|l|l|} \hline \textbf{Model} & \textbf{Best} & \textbf{Median} & \textbf{Mean} & \textbf{Worst} \\ \Xhline{5\arrayrulewidth} {Set Transformer} & {$87.8 \%$} & {$86.3 \%$} & {$85.8 \%$} & {$84.7 \%$} \\ \hline {Set Sinkformer} & {$\mathbf{89.1} \%$} & {$\mathbf{88.4} \%$} & {$\mathbf{88.3} \%$} & {$\mathbf{88.1} \%$} \\ \Xhline{3\arrayrulewidth} {Point Cloud Transformer} & {$\mathbf{93.2} \%$} & {$92.5 \%$} & {${92.5} \%$} & {$92.3 \%$} \\ \hline {Point Cloud Sinkformer} & {$93.1 \%$} & {$\mathbf{92.8} \%$} & {$\mathbf{92.7} \%$} & {$\mathbf{92.5} \%$} \\ \hline \end{tabular} \end{adjustbox} \end{table} \subsection{Sentiment Analysis} We train a Transformer (composed of an attention-based encoder followed by a max-pooling layer) and a Sinkformer on the IMDb movie review dataset \citep{maas-EtAl:2011:ACL-HLT2011} for sentiment analysis. This text classification task consists of predicting whether a movie review is positive or negative. The learning curves are shown in Figure \ref{fig:sentiment_imbd}, with a gain in accuracy when using a Sinkformer. In this experiment, Sinkhorn's algorithm converges perfectly in 3 iterations (the resulting attention matrices are doubly stochastic), which corresponds to the green curve. The Sinkformer only adds a small computational overhead: the training time per epoch is $4$m $02$s for the Transformer against $4$m $22$s for the Sinkformer. \subsection{Neural Machine Translation}\label{sec:fair} We train a Transformer and its Sinkformer counterpart using the \texttt{fairseq} \citep{ott2019fairseq} sequence modeling toolkit on the IWSLT'14 German to English dataset \citep{cettolo2014report}. The architecture used is composed of an encoder and a decoder, both of depth $6$. We plug Sinkhorn's algorithm only into the encoder part. \textcolor{black}{Indeed, in the decoder, attention can only be paid to previous positions in the output sequence, and the corresponding mask prevents a straightforward application of Sinkhorn's algorithm.} We demonstrate that, even when using the hyper-parameters tuned to optimally train the Transformer, we achieve a similar BLEU score~\citep{papineni2002bleu} over $6$ runs. We first train a Transformer for $30$ epochs. On the evaluation set, we obtain a BLEU score of $34.43$. We then consider a Sinkformer with the weights of the trained Transformer. Interestingly, even this un-adapted Sinkformer provides a median BLEU score of $33.81$. We then divide the learning rate by $10$ and retrain both the Transformer and the Sinkformer for $5$ additional epochs, obtaining median BLEU scores of $34.68$ and $34.73$, respectively (Table \ref{tab:nmt}). Importantly, the runtime for one training epoch is almost the same for both models: $2$m $48$s (Transformer) against $2$m $52$s (Sinkformer). \begin{table}[h] \vskip -0.15in \centering \caption{\textbf{Median BLEU score} over 6 runs on the IWSLT'14 German to English dataset.
The score ${}^{\mathbf {\star}}$ is obtained when evaluating the Sinkformer with the weights of the trained Transformer.} \vskip 0.15in \begin{adjustbox}{width=0.8\columnwidth,center} \begin{tabular}{|l|l|l|} \hline \textbf{Model} & Epoch 30 & Epoch 35 \\ \Xhline{3\arrayrulewidth} {Transformer} & {$34.43$} & {$34.68$} \\ \hline {Sinkformer} & {$33.81^{\mathbf {\star}}$} & {$\mathbf{34.73}$} \\ \hline \end{tabular}\label{tab:nmt} \end{adjustbox} \end{table} \subsection{Vision Transformers}\label{sec:vit} Vision Transformers (ViT) \citep{dosovitskiy2020image} have recently emerged as a promising architecture for achieving state-of-the-art performance on computer vision tasks \citep{zhai2021scaling}, using only attention-based mechanisms: patches of fixed size are selected in images and fed into an attention mechanism. \begin{figure}[H] % \begin{minipage}[c]{0.56\linewidth} \includegraphics[width=\textwidth]{figures/compare_sinkhorn_cats_and_dogs_6_tries-crop.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.37\linewidth} \caption{\textbf{Train} (dotted) and \textbf{test} (plain) \textbf{accuracy} as a function of the number of epochs when training a ViT and its Sinkformer counterpart on the cats and dogs classification task (median over 5 runs). }\label{fig:vit} \end{minipage} \end{figure} \paragraph{Cats and dogs classification.} We train a ViT and its Sinkformer counterpart on a binary cats and dogs image classification task. The evolution of the train and test accuracy is displayed in Figure \ref{fig:vit}. The \textit{median} test accuracy is $79.0 \%$ for the Transformer against $79.5\%$ for the Sinkformer, whereas the \textit{maximum} test accuracy is $80.0 \%$ for the Transformer against $80.5\%$ for the Sinkformer. \textcolor{black}{We use $3$ iterations in Sinkhorn's algorithm, which leads to a negligible computational overhead (training time per epoch of 3m 25s for the Sinkformer against 3m 20s for the Transformer).} \paragraph{Impact of the patch size on the final accuracy.} We consider a one-layer, one-head self-attention module on the MNIST dataset, with no additional layer. The purpose is to isolate the self-attention module and study how its accuracy is affected by the choice of the patch size. Results are displayed in Figure \ref{fig:patches}. We recall that an MNIST image is of size $28 \times 28$. When taking only one patch of size $28$, both models are equivalent, because the attention matrix is of size $1$. However, when the patch size gets smaller, the two models differ and the Sinkformer outperforms the Transformer. \begin{figure}[h] % \begin{minipage}[c]{0.50\linewidth} \includegraphics[width=\textwidth]{figures/compare_patches-crop.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.4\linewidth} \caption{\textbf{Final test accuracy} when training a one-layer, one-head self-attention module on the MNIST dataset, with no feed-forward neural network, when varying the patch size (median over 5 runs).}\label{fig:patches} \end{minipage} \end{figure} \vspace{-0.3cm} \section*{Conclusion} In this paper, we presented the Sinkformer, a variant of the Transformer in which the SoftMax, which leads to row-wise stochastic attention, is replaced by Sinkhorn's algorithm, which leads to doubly stochastic attention. This new model is motivated by the empirical finding that attention matrices in Transformers get closer and closer to doubly stochastic matrices during the training process.
This modification is easily implemented in practice, by simply replacing the SoftMax in the attention modules of existing Transformers, without changing any parameter of the network. It also provides a new framework for theoretically studying attention-based mechanisms, such as the interpretation of Sinkformers as Wasserstein gradient flows in the infinitesimal step-size regime or as diffusion operators in the mean-field limit. On the experimental side, Sinkformers lead to better accuracy in a variety of experiments: classification of 3D shapes, sentiment analysis, neural machine translation, and image classification. \section*{Acknowledgments} This work was granted access to the HPC resources of IDRIS under the allocation 2020-[AD011012073] made by GENCI. This work was supported in part by the French government under the management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). This work was supported in part by the European Research Council (ERC project NORIA). We thank Marco Cuturi and D. Sculley for their comments on a draft of the paper. We thank Scott Pesme, Pierre Rizkallah, Othmane Sebbouh, Thibault Séjourné and the anonymous reviewers for their helpful feedback.
\section{Introduction} In real-world applications, objects can often be represented by information from multiple sources, i.e., multiple modalities~\cite{BaltrusaitisAM19,DebieRFBKAGA21}. For example, a news article usually contains both image and text information, and a video can be divided into image, audio and text information. Along this line, the study of cross-modal learning has emerged to bridge the connections among different modalities, so as to better perform downstream tasks; image captioning is one of its important research directions. Specifically, image captioning aims to automatically generate natural language descriptions for images, and has emerged as a prominent research problem in both academia and industry~\cite{KarpathyL15,XuBKCCSZB15,SammaniE19,BinYSXSL19}. For example, it can automatically broadcast road conditions from visual input to assist driving, and can also help visually impaired users read more conveniently. In fact, the challenge of image captioning is to learn a generator between two heterogeneous modalities (i.e., the image and text modalities), which needs to recognize salient objects in an image using computer vision techniques and to generate coherent descriptions using natural language processing. To solve this problem, researchers first explored neural encoder-decoder models~\cite{KarpathyL15,YangYWCS16}, which are composed of a CNN encoder and an LSTM (or Transformer) decoder. In detail, these methods first encode the image into a set of feature vectors using a CNN-based model, each of which captures semantic information about an image region, and then decode these feature vectors into words sequentially via an LSTM-based or Transformer-based network. Furthermore, \cite{XuBKCCSZB15,LuXPS17,HuangWCW19} adopted single or hierarchical attention mechanisms that enable the model to focus on particular image regions during the decoding process. To mitigate incorrect or repetitive content, several works consider editing inputs independently of generating them~\cite{HashimotoGOL18,SammaniE19}. However, note that all these methods require full image-sentence pairs in advance, i.e., all the images need to be described manually, which is hard to accomplish in real-world applications. A more general scenario is shown in Figure \ref{fig:data}: we have a limited number of described images with corresponding label ground-truths, and a large number of undescribed images. The resulting challenge is ``\emph{Semi-Supervised Image Captioning}'', which aims to conduct the captioning task by reasonably exploiting the huge number of undescribed images together with the limited supervised data. The key difficulty of semi-supervised image captioning is to design pseudo supervision for the generated sentences. Actually, there have been some preliminary attempts recently. For example, \cite{Feng00L19a,GuJCZYW19} proposed unsupervised captioning methods, which combine adversarial learning~\cite{GoodfellowPMXWOCB14} with traditional encoder-decoder models to evaluate the quality of generated sentences. In detail, based on traditional encoder-decoder models, these approaches employ adversarial training to generate sentences that are indistinguishable from the sentences within an auxiliary corpus. To ensure that the generated captions contain the visual concepts, they additionally distill the knowledge provided by a visual concept detector into the image captioning model.
However, the domain discriminator and the visual concept distiller do not fundamentally evaluate the matching degree and structural rationality of the generated sentences, so the captioning performance is poor. As for semi-supervised image captioning, a straightforward way is to directly utilize the undescribed images together with their machine-generated sentences~\cite{MithunPPR18,HuangKLCH19} as pseudo image-sentence pairs to fine-tune the model. However, a limited amount of parallel data can hardly establish a proper initial generator for producing precise pseudo descriptions, which may negatively affect the fine-tuning of the visual-semantic mapping function. To circumvent this issue, we attempt to utilize the raw image itself as pseudo supervision. However, the heterogeneity gap between modalities makes supervision difficult if we directly constrain the consistency between the global embeddings of the image and the sentence. We therefore turn to the broader and more effective semantic prediction information, rather than directly utilizing the embeddings, and introduce a novel approach, dubbed \emph{semi-supervised image captioning by exploiting the Cross-modal Prediction and Relation Consistency} (CPRC). In detail, there are two common approaches in traditional semi-supervised learning: 1) pseudo labeling, which minimizes the entropy of unlabeled data using predictions; and 2) consistency regularization, which transforms the raw unlabeled images using data augmentation techniques and then constrains the consistency of the transformed instances' outputs. Different from these two techniques, we design cross-modal prediction and relation consistency by comprehensively considering informativeness and representativeness: 1) prediction consistency, where we utilize the soft label of the image to distill effective supervision for the generated sentence; 2) relation consistency, where we encourage the generated sentences to have a relational distribution similar to that of the augmented image inputs. The central tenet is that relations among learned representations can express consistency better than individual data instances~\cite{ParkKLC19}. Consequently, CPRC can effectively assess the generated sentences from both the prediction confidence and distribution alignment perspectives, and thereby learn a more robust mapping function. \emph{Note that CPRC can be implemented with any current captioning model}, and we adopt several typical approaches for verification~\cite{RennieMMRG17,ZhouWLHZ20}. Source code is available at \url{https://github.com/njustkmg/CPRC}. In summary, the contributions of this paper can be summarized as follows: \begin{itemize} \item We propose a novel semi-supervised image captioning framework for processing undescribed images, which is universal for any captioning model; \item We design the cross-modal prediction and relation consistency to supervise the undescribed images, which maps the raw image and the corresponding generated sentence into a shared semantic space, and supervises the generated sentence by distilling the soft label from the image prediction and constraining the cross-modal relational consistency; \item In experiments, our approach improves the performance under the semi-supervised scenario, which validates that the knowledge hidden in content and relations is effective for enhancing the generator.
\end{itemize} \section{Related Work} \subsection{Image Captioning} Image captioning approaches can be roughly divided into three categories: 1) Template-based methods, which manually design slotted captioning templates and then fill the templates with detected keywords~\cite{YaoYLLZ10}; their expressive power is limited by the need to design templates manually. 2) Encoder-decoder based methods, which are inspired by neural machine translation~\cite{ChoMGBBSB14}. For example, \cite{VinyalsTBE15} proposed an end-to-end framework with a CNN encoding the image into a feature vector and an LSTM decoding it into a caption; \cite{HuangWCW19} added an attention-on-attention module after both the LSTM and the attention mechanism, which measures the relevance between the attention result and the query. 3) Editing-based methods, which consider editing inputs independently of generating them. For example, \cite{HashimotoGOL18} learned a retrieval model that embeds the input in a task-dependent way for code generation; \cite{SammaniE19} introduced a framework that learns to modify existing captions by modeling the residual information. However, all these methods need a huge amount of supervised image-sentence pairs for training, whereas the scenario with a large amount of undescribed images is more common in real applications. To handle undescribed images, several attempts propose unsupervised image captioning approaches. \cite{Feng00L19a} distilled the knowledge of a visual concept detector into the captioning model to recognize the visual concepts, and adopted a sentence corpus to teach the captioning model; \cite{GuJCZYW19} developed an unsupervised feature alignment method with adversarial learning that maps scene graph features from the image to the sentence modality. Nevertheless, these methods mainly depend on a domain discriminator for learning plausible sentences, which makes it difficult to generate matched sentences. On the other hand, for semi-supervised image captioning, \cite{MithunPPR18,HuangKLCH19} proposed to extract regional semantics from un-annotated images as additional weak supervision to learn visual-semantic embeddings. However, the generated pseudo sentences are often of too low quality to fine-tune the generator in real experiments. \begin{figure*}[t] \centering \includegraphics[width = 170mm]{./figure/framework.pdf}\\ \caption{Diagram of the proposed framework. For example, three weakly-augmented images and the raw image are fed into the encoder to obtain image region embeddings; four corresponding sentences are then generated by the decoder. The embeddings of the image inputs and the generated sentences are fed into the shared classifier to obtain the predictions. The model is trained by considering two objectives: 1) the \emph{supervised loss} includes the generation cross-entropy and the prediction cross-entropy for described images. In detail, the generation cross-entropy measures the quality of the generated sentence sequence, and the prediction cross-entropy considers the multi-label prediction loss of the generated sentence. 2) The \emph{unsupervised loss} includes the prediction consistency and relation consistency for undescribed images.
In detail, prediction consistency utilizes the image's prediction as a pseudo label for the corresponding generated sentence, and relation consistency aligns the relational distribution of the generated sentences with that of the image inputs.}\label{fig:framework} \end{figure*} \subsection{Semi-Supervised Learning} Recently, deep networks have achieved strong performance via supervised learning, which requires a large amount of labeled data. However, labeling by human labor comes at a significant cost, especially when domain experts are needed. To this end, semi-supervised learning, which combines supervised and unsupervised learning techniques to perform certain learning tasks and permits harnessing large amounts of unlabeled data in combination with typically smaller sets of labeled data, has attracted more and more attention. Existing semi-supervised learning mainly considers two aspects: 1) Self-training~\cite{GrandvaletB04}. The general idea of self-training is to use a model's predictions to obtain artificial labels for unlabeled data. A specific variant is pseudo-labeling, which converts the model predictions on unlabeled data into hard labels for calculating the cross-entropy. Besides, pseudo-labeling is often used along with confidence thresholding, which retains only sufficiently confident unlabeled instances. Pseudo-labeling results in entropy minimization, which has been used as a component of many semi-supervised algorithms and has been validated to produce better results~\cite{ArazoOAOM20}. 2) Consistency regularization~\cite{BachmanAP14}. Early extensions include the exponential moving average of model parameters~\cite{TarvainenV17} or the use of previous model checkpoints~\cite{LaineA17}. Recently, data augmentation, which integrates these techniques into the self-training framework, has shown better results~\cite{XieDHL020,BerthelotCCKSZR20}. A mainstream technique is to produce random perturbations with data augmentation~\cite{FrenchMF18} and then enforce consistency between the augmentations. For example, \cite{XieDHL020} proposed unsupervised data augmentation with distribution alignment and augmentation anchoring, which encourages each output to be close to that of the weakly-augmented version of the same input; \cite{BerthelotCCKSZR20} used a weakly-augmented example to generate an artificial label and enforce consistency against strongly-augmented examples. Furthermore, \cite{Kihyuk2020} combined pseudo labeling and consistency regularization into a unified framework, which generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images and constrains the prediction consistency between the weakly-augmented and strongly-augmented versions. Note that the targets in previous semi-supervised methods are uniform and simple, i.e., the label ground-truths. However, cross-modal semi-supervised learning is more complicated, e.g., each image has a corresponding sentence as well as a label ground-truth. Building a cross-modal generator with limited supervised data is more difficult than building a single-modal classifier, so directly employing traditional semi-supervised techniques on the generated sentences may cause noise accumulation. The remainder of this paper is organized as follows. Section \ref{sec:s1} presents the proposed method, including the model, solution, and extension. Section \ref{sec:s2} shows the experimental results on the COCO dataset under different semi-supervised settings. Section \ref{sec:s3} concludes this paper.
\section{Proposed Method}\label{sec:s1} \subsection{Notations} Without loss of generality, we define the semi-supervised image-sentence set as $\mathcal{D} = \{\{{\bf v}_i,{\bf w}_i,{\bf y}_i\}_{i=1}^{N_l}, \{{\bf v}_j\}_{j=1}^{N_u}\}$, where ${\bf v}_i \in \mathcal{R}^{d_v}$ denotes the $i$-th image instance, ${\bf w}_i \in \mathcal{R}^{d_w}$ represents the aligned sentence instance, and ${\bf y}_i \in \mathcal{R}^{C}$ denotes the instance label; ${\bf y}_{i,k} = 1$ if the $i$-th instance belongs to the $k$-th label, and ${\bf y}_{i,k} = 0$ otherwise. ${\bf v}_j$ is the $j$-th undescribed image. $N_l$ and $N_u$ ($N_l \ll N_u$) are the numbers of described and undescribed instances, respectively. \begin{defn}\label{def:d1} \textbf{Semi-Supervised Image Captioning.} Given limited parallel image-sentence pairs $\{{\bf v}_i,{\bf w}_i,{\bf y}_i\}_{i=1}^{N_l}$ and a huge number of undescribed images $\{{\bf v}_j\}_{j=1}^{N_u}$, we aim to construct a generator $G$ for image captioning by reliably utilizing the undescribed images. \end{defn} \subsection{The Framework} Note that CPRC focuses on employing the undescribed images and is a general semi-supervised framework; the image-sentence generator, i.e., $G: {\bf v} \rightarrow {\bf w}$, can thus be any state-of-the-art captioning model. In this paper, considering effectiveness and reproducibility, we adopt an attention model, i.e., AoANet~\cite{HuangWCW19}, as the base model for $G$. In detail, $G$ is an encoder-decoder based captioning model, which includes an image encoder and a text decoder. Given an image ${\bf v}$, the target of $G$ is to generate a natural language sentence $\hat{{\bf w}}$ describing the image. The formulation can be represented as $\hat{{\bf w}} = D(E({\bf v}))$, where the encoder $E$ is usually a convolutional neural network~\cite{HeZRS16,RenHG017} that extracts the embedding of the raw image input. Note that $E$ usually includes a refining module such as an attention mechanism~\cite{BahdanauCB14}, which dynamically refines the visual embedding to suit language generation. The decoder $D$ is a widely used RNN-based model that produces the sequence prediction $\hat{{\bf w}}$. The learning process of CPRC is shown in Figure \ref{fig:framework}. Specifically, CPRC first samples a mini-batch of images from the dataset $\mathcal{D}$ (including described and undescribed images) and applies data augmentation to each undescribed image (i.e., each image yields $K$ augmented variants). We then acquire the generated sentences for both the augmented images and the raw image using $G$, and compute the predictions for the image inputs and the generated sentences using the shared prediction classifier $f$. The model is trained through two main objectives: 1) the \emph{supervised loss}, which is designed for described images, i.e., supervised image-sentence pairs. In detail, the supervised loss considers both label and sentence predictions, including: a) the \emph{generation cross-entropy}, which employs the cross-entropy loss or a reinforcement learning based reward~\cite{RennieMMRG17} between the generated sentence sequence and the ground-truth sentence; b) the \emph{prediction cross-entropy}, which calculates the multi-label loss between the image/sentence prediction and the label ground-truth. 2) The \emph{unsupervised loss}, which is designed for undescribed images.
In detail, the unsupervised loss considers both informativeness and representativeness: a) \emph{prediction consistency}, which uses the image's prediction as a pseudo label to distill effective information for the generated sentence, so as to measure the instance's informativeness; b) \emph{relation consistency}, which adopts the relational structure of the augmented images as the supervision distribution for the generated sentences, so as to measure the instance's representativeness. Therefore, in addition to the traditional loss for described images, we constrain the sentences generated from undescribed images by comprehensively using the raw image inputs as pseudo supervision. The details are described as follows. \subsection{Supervised Loss} \subsubsection{Generation Loss} Given an image ${\bf v}$, the decoder (Figure \ref{fig:framework}) generates a sentence sequence $\hat{{\bf w}} = \{w_{1}, w_{2}, \cdots , w_{T}\}$ describing the image, where $T$ is the length of the sentence. Then, we can minimize the cross-entropy loss (i.e., $\ell_{XE}$) or maximize a reinforcement learning based reward~\cite{RennieMMRG17} (i.e., minimize $\ell_{RL}$), according to the ground-truth caption ${\bf w}$: \begin{equation}\label{eq:e0} \begin{split} \ell_{XE}& = - \sum_{t = 1}^T \log p({\bf w}_{t} | {\bf w}_{1:t-1}), \\ \ell_{RL}&= - \mathbb{E}_{{\bf w}_{1:T} \sim p} [r({\bf w}_{1:T})], \\ \end{split} \end{equation} where ${\bf w}_{1:T}$ denotes the target ground-truth sequence and $p(\cdot)$ is the prediction probability. The reward $r(\cdot)$ is a sentence-level metric between the sampled sentence and the ground-truth, which is usually the score of some evaluation metric (e.g., CIDEr-D~\cite{VedantamZP15}). In detail, as introduced in~\cite{RennieMMRG17}, captioning approaches traditionally train the models using the cross-entropy loss. On the other hand, to directly optimize NLP metrics and address the exposure bias issue, \cite{RennieMMRG17} casts generative models in reinforcement learning terminology, following~\cite{RanzatoCAZ15}. In detail, the traditional decoder (i.e., an LSTM) can be viewed as an ``agent'' that interacts with the ``environment'' (i.e., words and image features). The parameters of the network define a policy, which results in an ``action'' (i.e., the prediction of the next word). After each action, the agent updates its internal ``state'' (i.e., parameters of the LSTM, attention weights, etc.). Upon generating the end-of-sequence (EOS) token, the agent observes a ``reward'', e.g., the CIDEr score of the generated sentence. \subsubsection{Prediction Loss} On the other hand, we can measure the generation via a classification task using the label ground-truth ${\bf y}$. We extract the embeddings of the image input and the generated sentence from the representation output layer. Considering that the image and the corresponding sentence share the same semantic representations, the embeddings of the image input and the generated sentence can be fed into the shared classifier $f$ for prediction. The forward prediction process can be represented as: \begin{equation} \begin{split} {\bf p}^v = f(E_e({\bf v})), \quad {\bf p}^w = f(D_e(E({\bf v}))), \nonumber \end{split} \end{equation} where ${\bf p}^v$ and ${\bf p}^w$ are the normalized prediction distributions of the image input and the generated sentence, and $f(\cdot)$ denotes the classification model shared between the text and image modalities. Without loss of generality, we utilize a three-layer fully connected network here.
The commonly used image captioning dataset (i.e., the COCO dataset) is a multi-label dataset: different from a multi-class dataset, where each instance has exactly one ground-truth class, each instance here has multiple labels. Therefore, we utilize the binary cross-entropy loss (BCELoss): \begin{equation}\label{eq:e} \begin{split} \ell_{p} = & \sum_{m \in \{v,w\}} H({\bf p}^m, {\bf y}^m)\\ H({\bf p}^m, {\bf y}^m) = & - \sum_j \big(y_j^m \log p_j^m + (1-y_j^m)\log (1-p_j^m)\big), \end{split} \end{equation} where $H(\cdot)$ denotes the BCELoss for multi-label prediction; the model's predictions are thereby encouraged to be low-entropy (i.e., high-confidence) on supervised data. \subsection{Unsupervised Loss} \subsubsection{Prediction Consistency} First, we introduce the augmentation technique for transforming the images. Existing methods usually leverage two kinds of augmentations: a) weak augmentation, a standard flip-and-shift strategy that does not significantly change the content of the input; b) strong augmentation, which usually refers to AutoAugment~\cite{Cubuk20} and its variants, and uses reinforcement learning to find an augmentation strategy comprising transformations from the Python Imaging Library\footnote{https://www.pythonware.com/products/pil/}. Considering that ``strongly'' augmented (i.e., heavily-augmented) instances are almost certainly outside the data distribution, which degrades the quality of the generated sentences, we adopt ``weak'' augmentation instead. As a result, each image can be expanded to $K+1$ variants, i.e., $\Psi({\bf v}) = \{{\bf v}_{0},{\bf v}_{1},\cdots,{\bf v}_{K}\}$, where index $0$ denotes the raw input. Then, we feed the augmented image set to the image-sentence generator $G$ and extract the embeddings of the generated sentences from the representation output layer. The embeddings are further fed into the shared classifier for prediction (see the sketch below): \begin{equation}\label{eq:e1} \begin{split} {\bf p}_{k}^w = f(D_e(E({\bf v}_k))), \quad k \in \{0,1,\cdots,K\}, \\ \end{split} \end{equation} where $f(\cdot)$ denotes the classification model shared by the text and image modalities, and $D_e(E({\bf v}_k)) \in \mathcal{R}^d$ represents the embedding of the generated sentence. Similarly, we can acquire the predictions of the image inputs: ${\bf p}_{k}^v = f(E_e({\bf v}_k)), \; k \in \{0,1,\cdots,K\}$, where $E_e({\bf v}_k) \in \mathcal{R}^d$ represents the embedding of the image.
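A minimal sketch of the weak augmentation and the per-variant predictions is shown below. The flip-and-shift parameters and the helper methods \texttt{encode\_embed} and \texttt{decode\_embed} (standing for $E_e(\cdot)$ and $D_e(E(\cdot))$, respectively) are illustrative assumptions; the sketch also assumes raw images are augmented before feature extraction.
\begin{verbatim}
import torch
import torchvision.transforms as T

# Weak (flip-and-shift) augmentation, per the description above;
# the exact transform parameters are assumptions.
weak_aug = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomAffine(degrees=0, translate=(0.125, 0.125)),
])

def augmented_predictions(v, G, f, K=3):
    """Build Psi(v) = {v_0, ..., v_K} (index 0 is the raw input) and
    return classifier outputs for images and generated sentences."""
    variants = [v] + [weak_aug(v) for _ in range(K)]
    p_v = [f(G.encode_embed(x)) for x in variants]  # p^v_k = f(E_e(v_k))
    p_w = [f(G.decode_embed(x)) for x in variants]  # p^w_k = f(D_e(E(v_k)))
    return torch.stack(p_v), torch.stack(p_w)       # (K+1, C) each
\end{verbatim}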
As noted above, COCO is a multi-label dataset, so traditional pseudo-labeling that leverages ``hard'' labels (i.e., the $\arg\max$ of the model's output) is inappropriate, because it is difficult to determine the number of ``hard'' labels for each instance. As a consequence, we directly utilize the prediction of the image for knowledge distillation~\cite{LinWCS21} in the multi-label BCELoss: \begin{equation}\label{eq:e2} \begin{split} \ell_{pc} = &\sum_{k \in \{0,1,\cdots,K\}} H({\bf p}_{k}^v,{\bf p}_{k}^w)\\ H({\bf p}_{k}^v,{\bf p}_{k}^w) = & - \sum_j \big(p_{k_j}^v \log p_{k_j}^w + (1-p_{k_j}^v)\log (1-p_{k_j}^w)\big), \end{split} \end{equation} where $H(\cdot)$ denotes the binary cross-entropy loss (BCELoss); the model's predictions are thereby encouraged to be low-entropy (i.e., high-confidence) on unsupervised data. \begin{figure}[htb] \centering \includegraphics[width = 80mm]{./figure/rcl.pdf}\\ \caption{The relational consistency. The blue and orange rectangles represent the image domain and the text domain, respectively. Any point inside the rectangles represents a specific instance in that domain. For example, given a tuple of image instances $\{{\bf v}_{0}, {{\bf v}}_{1}, {{\bf v}}_{2}, {{\bf v}}_{3}, {{\bf v}}_{4}\}$, the relational consistency loss requires that the generated sentences, $\{{\bf w}_{0}, {{\bf w}}_{1}, {{\bf w}}_{2}, {{\bf w}}_{3}, {{\bf w}}_{4}\}$, share a similar relational structure with the raw inputs.}\label{fig:consistency} \end{figure} \subsubsection{Relation Consistency} Inspired by linguistic structuralism~\cite{Matthews2001A}, which holds that relations convey knowledge better than individual examples, we assume that the primary information actually lies in the structure of the data space. Therefore, we define a new relation consistency loss, $\ell_{rc}$, using a metric learning-based constraint, which computes the KL divergence between the similarity vectors of the image inputs and those of the generated sentences. Relation consistency aims to preserve the structural knowledge expressed by the mutual relations of data examples in the raw inputs. Specifically, each image input can be denoted as a bag of $K+1$ instances, i.e., $\Psi({\bf v})$, while the corresponding generated sentences can also be represented as a bag of instances, i.e., $G(\Psi({\bf v}))$. With the shared classifier, the image and sentence predictions can be formulated as: \begin{equation}\label{eq:e3} \begin{split} {\bf p}_{k}^v = &f(E_e({\bf v}_k)), \quad k \in \{0,1,\cdots,K\} \\ {\bf p}_{k}^w = &f(D_e(E({\bf v}_k))), \quad k \in \{0,1,\cdots,K\}. \nonumber \end{split} \end{equation} With the predictions of the image inputs and the generated sentences, the objective of relational consistency can be formulated as: \begin{equation}\label{eq:e4} \begin{split} \ell_{rc} = & KL\big(\Phi({\bf p}_{0}^v,{\bf p}_{1}^v, \cdots,{\bf p}_{K}^v) \,\|\, \Phi({\bf p}_{0}^w,{\bf p}_{1}^w,\cdots,{\bf p}_{K}^w)\big), \end{split} \end{equation} where $KL(a\|b) = \sum_i a_i \log \frac{a_i}{b_i}$ is the KL divergence, which penalizes differences between the similarity distribution of the image inputs and that of the generated sentences, and $\Phi$ is a relation prediction function, which measures the relation energy of a given tuple.
In detail, $\Phi$ measures the similarities formed by the examples in the semantic prediction space: \begin{equation}\label{eq:e5} \begin{split} \Phi({\bf p}_{0}^v,{\bf p}_{1}^v,\cdots,{\bf p}_{K}^v) &= [q_{{mn}}^v]_{m,n \in \{0,\cdots,K\}} \\ \Phi({\bf p}_{0}^w,{\bf p}_{1}^w,\cdots,{\bf p}_{K}^w) &= [q_{{mn}}^w]_{m,n \in \{0,\cdots,K\}} \\ q_{{mn}}^v &= \frac{\exp(d_{{mn}}^v)}{\sum_{m,n} \exp(d_{{mn}}^v)} \\ q_{{mn}}^w &= \frac{\exp(d_{{mn}}^w)}{\sum_{m,n} \exp(d_{{mn}}^w)}, \\ \end{split} \end{equation} where $d_{{mn}}^v = -Dist({\bf p}_{m}^v,{\bf p}_{n}^v)$ and $d_{{mn}}^w = -Dist({\bf p}_{m}^w,{\bf p}_{n}^w)$ measure the distances between $({\bf p}_{m}^v, {\bf p}_{n}^v)$ and $({\bf p}_{m}^w, {\bf p}_{n}^w)$ respectively, with $Dist({\bf p}_{m}^v,{\bf p}_{n}^v) = \| {\bf p}_{m}^v - {\bf p}_{n}^v\|_2$ and $Dist({\bf p}_{m}^w,{\bf p}_{n}^w) = \| {\bf p}_{m}^w - {\bf p}_{n}^w\|_2$. $q_{{mn}}^v$ and $q_{{mn}}^w$ denote the relative instance-wise similarities. Finally, we flatten $[q_{{mn}}^v]$ and $[q_{{mn}}^w]$ into vector form. As a result, the relation consistency loss can deliver the relationships among examples by penalizing structural differences. Since the structure has higher-order properties than a single output, it transfers knowledge more effectively and is more suitable as a consistency measure. \subsection{Overall Function} In summary, with the limited amount of parallel image-sentence pairs and the large amount of undescribed images, we define the total loss by combining Eq. \ref{eq:e0}, Eq. \ref{eq:e}, Eq. \ref{eq:e2} and Eq. \ref{eq:e4}: \begin{equation}\label{eq:e6} \begin{split} L = &\sum_{i=1}^{N_l} \ell_s({\bf v}_i,{\bf w}_i,{\bf y}_i) + \sum_{j=1}^{N_u} \big(\lambda_1 \ell_{pc}({\bf v}_j) + \lambda_2 \ell_{rc}({\bf v}_j)\big)\\ \ell_s({\bf v}_i,{\bf w}_i,{\bf y}_i) = & \ell_c ({\bf v}_i,{\bf w}_i) + \ell_p({\bf v}_i,{\bf w}_i,{\bf y}_i) \end{split} \end{equation} where $\ell_c$ denotes the captioning loss, which can be either $\ell_{XE}$ or $\ell_{RL}$ in Eq. \ref{eq:e0}. Note that $\ell_c$ and $\ell_p$ are of the same order of magnitude, so we do not add a hyper-parameter there. $\lambda_1$ and $\lambda_2$ are scalar values that control the weights of the different losses. In $\ell_{s}$, we use labeled images and sentences to jointly train the shared classifier $f$, which increases the amount of training data and adjusts the classifier to better suit the subsequent prediction of augmented images and generated sentences. Furthermore, considering that the pseudo labels ${\bf p}^v,{\bf p}^w$ may be noisy, we can also adopt a confidence threshold that retains only confident generated sentences. Eq. \ref{eq:e6} can then be reformulated as: \begin{equation}\label{eq:e7} \begin{split} L = &\sum_{i=1}^{N_l} \ell_s({\bf v}_i,{\bf w}_i,{\bf y}_i) + \sum_{j=1}^{N_u} {\bf 1}(\max({\bf p}_{j_0}^v) \ge \tau ) \\ & \big( \lambda_1 \ell_{pc}({\bf v}_j) + \lambda_2 \ell_{rc}({\bf v}_j)\big) \\ \ell_s({\bf v}_i,{\bf w}_i,{\bf y}_i) = & \ell_{XE}({\bf v}_i,{\bf w}_i) + \ell_p({\bf v}_i,{\bf w}_i,{\bf y}_i) \end{split} \end{equation} where ${\bf p}_{j_0}^v$ denotes the prediction probability of the $j$-th raw image input, and $\tau$ is a scalar hyper-parameter denoting the threshold above which we retain the generated sentences. The details are shown in Algorithm \ref{alg:alg1}.
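To make the unsupervised objective concrete, a minimal PyTorch sketch of $\ell_{pc}$, $\ell_{rc}$ and the confidence gate of Eq.~\ref{eq:e7} is given below. It assumes the per-variant predictions are stacked into $(K+1) \times C$ tensors \texttt{p\_v} and \texttt{p\_w}, and it is a sketch of the formulation rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def prediction_consistency(p_v, p_w):
    # ell_pc: soft-target BCE, distilling the image prediction p^v_k
    # into the sentence prediction p^w_k for every variant k
    # (averaged instead of summed: a scaling choice).
    return F.binary_cross_entropy(p_w, p_v.detach())

def relation_dist(p):
    # Phi: softmax over negative pairwise L2 distances, flattened
    # into one distribution over all (m, n) pairs.
    d = -torch.cdist(p, p)            # d_mn = -||p_m - p_n||_2
    return F.softmax(d.flatten(), dim=0)

def relation_consistency(p_v, p_w):
    # ell_rc = KL(Phi(p^v) || Phi(p^w)); kl_div takes log-probs first.
    q_v, q_w = relation_dist(p_v.detach()), relation_dist(p_w)
    return F.kl_div(q_w.log(), q_v, reduction="sum")

def unsupervised_loss(p_v, p_w, lam1, lam2, tau):
    # Confidence gate: keep the image only if the raw input's most
    # confident label exceeds tau (p_v[0] is the raw variant).
    if p_v[0].max() < tau:
        return torch.zeros((), device=p_v.device)
    return (lam1 * prediction_consistency(p_v, p_w)
            + lam2 * relation_consistency(p_v, p_w))
\end{verbatim}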
{\begin{algorithm}[htb] \caption{The CPRC Algorithm} \label{alg:alg1} \textbf{Input}:\\ Data: $\mathcal{D} = \{\{{\bf v}_i,{\bf w}_i,{\bf y}_i\}_{i=1}^{N_l}, \{{\bf v}_j\}_{j=1}^{N_u}\}$ \\ Parameters: $\lambda_1$, $\lambda_2$ \\ \textbf{Output}:\\ Image captioning mapping function: $G$ \\ \begin{algorithmic}[1]{ \STATE Initialize $G$ and $f$ randomly; \WHILE {stop condition is not triggered} \FOR{mini-batch sampled from $\mathcal{D}$} \STATE Calculate $\ell_{s}$ according to Eq. \ref{eq:e0} and Eq. \ref{eq:e}; \STATE Calculate $\ell_{pc}$ according to Eq. \ref{eq:e2}; \STATE Calculate $\ell_{rc}$ according to Eq. \ref{eq:e4}; \STATE Calculate $L$ according to Eq. \ref{eq:e6} or Eq. \ref{eq:e7}; \STATE Update the model parameters of $G,f$ using SGD; \ENDFOR \ENDWHILE } \end{algorithmic} \end{algorithm}} \begin{table*}[!htb]{ \centering \caption{Performance of the comparison methods on the MS-COCO “Karpathy” test split, where B$@$N, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores. } \label{tab:tab1} \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}@{}l|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c|@{}c} \toprule \multirow{2}{*}{Methods} & \multicolumn{8}{c|}{Cross-Entropy Loss} & \multicolumn{8}{c}{CIDEr-D Score Optimization} \\ \cmidrule(l){2-17} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S\\ \midrule SCST &56.8&38.6&25.4&16.3&16.0 &42.4&38.9&9.3&59.4 &39.5&25.3 &16.3&17.0&42.9&43.7&9.9\\ AoANet &67.9&49.8&34.7 &23.2&20.9&49.2&69.2&14.3&66.8 &48.6&34.1&23.6&21.8 &48.7&70.4&15.2\\ AAT &63.2&45.8&31.7 &21.3&19.0&47.6&58.0&12.4&66.7 &48.1&33.3&22.7&20.4 &47.8&63.5&13.2\\ ORT &63.6&45.8&31.7 &21.4&19.4&46.9&61.1&12.6&65.3 &46.5&31.9&21.3&20.3 &47.2&62.0&13.3\\ GIC &63.0&46.8&33.2 &20.0&19.2&50.3&50.5&12.3&64.7 &46.9&32.0&20.7&19.0 &47.8&55.7&12.5\\ \midrule Graph-align &-&-&-&-&-&-&-&-&67.1 &47.8&32.3&21.5&20.9&47.2&69.5&15.0\\ UIC &-&-&-&-&-&-&-&-&41.0 &22.5&11.2&5.6&12.4&28.7&28.6&8.1\\ \midrule A3VSE &68.0&50.0&34.9 &23.3&20.8&49.3&69.6&14.5&67.6 &49.6&35.2&24.5&22.1 &49.3&72.4&15.3\\ \midrule AoANet+P &67.4&49.7&35.2 &24.3&22.3&49.1&71.7&14.9&67.2 &49.5&35.9&24.4&21.6 &50.1&74.2&15.7\\ AoANet+C &67.1&49.4&35.2 &24.5&22.7&49.5&71.5&14.9&67.8 &49.4&35.5&24.7&22.0 &50.0&73.9&15.6\\ PL &67.8&49.6&35.2&24.2&22.0&50.4&74.7&15.6&67.9 &50.0&35.6&24.3&22.2&49.7&76.6&16.1\\ AC &67.8&48.8&34.6&23.7&21.9&49.1&69.7&14.5&67.9 &50.0&25.3&24.1&22.1&49.7&73.0&15.5\\ Embedding+ &65.1&46.4&31.9 &21.5&20.7&47.6&65.1&14.1&65.6 &47.1&32.3&22.6&20.8 &47.8&69.1&14.5\\ Semantic+ &68.3&49.9&34.9 &23.8&21.5&49.9&70.3&14.7&69.3 &50.8&35.5&24.1&21.6 &50.0&72.7&14.9\\ Strong+ &68.4&50.8&35.4&24.8&22.5&\bf 50.6&77.8& 16.2&69.5 &51.5& 36.7& 25.5&23.3&50.6&78.6&16.7\\ w/o Prediction &68.3&49.6&35.3 &24.4&22.2&49.6&70.5&15.0&68.2 &50.4&35.8&24.8&22.5 &50.1&73.6&15.6\\ w/o Relation &68.1&50.0& 35.5 &24.8&22.4&50.5&75.2&15.8&68.3 &50.5&35.8&24.9&22.7 &50.4&76.9&16.3\\ w/o $\tau$ &66.9&49.8&34.5&24.2&21.5&49.5&76.2&15.4&68.5 &50.8&36.2&25.0&22.5&49.8&77.5&16.2\\ \midrule CPRC &\bf 68.8&\bf 51.1&\bf 35.5 &\bf 24.9&\bf 22.8&50.4&\bf 77.9&\bf 16.2&\bf 69.9 &\bf 51.8&\bf 36.7&\bf 25.5&\bf 23.4 &\bf 50.7&\bf 78.8&\bf 16.8\\ \bottomrule \end{tabular*}} \end{table*} \begin{figure*}[t] \begin{center} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_b1.pdf}\\ \mbox{ \;\;\;\; ({\it a}) {BLEU$@$1}} \end{minipage} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_b2.pdf}\\ \mbox{
\;\;\;\; ({\it b}) {BLEU$@$2}} \end{minipage} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_b3.pdf}\\ \mbox{ \;\;\;\; ({\it c}) {BLEU$@$3}} \end{minipage} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_b4.pdf}\\ \mbox{ \;\;\;\; ({\it d}) {BLEU$@$4}} \end{minipage}\\ \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_meteor.pdf}\\ \mbox{ \;\;\;\; ({\it e}) {METEOR}} \end{minipage} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_rouge.pdf}\\ \mbox{ \;\;\;\; ({\it f}) {ROUGE-L}} \end{minipage} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_cider.pdf}\\ \mbox{ \;\;\;\; ({\it g}) {CIDEr-D}} \end{minipage} \begin{minipage}[h]{43mm} \centering \includegraphics[width=43mm]{./figure/c_spice.pdf}\\ \mbox{ \;\;\;\; ({\it h}) {SPICE}} \end{minipage} \end{center} \caption{Captioning performance for different ratios of supervised data.}\label{fig:f1} \end{figure*} \begin{figure*}[htb] \begin{center} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_b1.pdf}\\ \mbox{ \;\;\;\; ({\it a}) {BLEU$@$1}} \end{minipage} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_b2.pdf}\\ \mbox{ \;\;\;\; ({\it b}) {BLEU$@$2}} \end{minipage} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_b3.pdf}\\ \mbox{ \;\;\;\; ({\it c}) {BLEU$@$3}} \end{minipage} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_b4.pdf}\\ \mbox{ \;\;\;\; ({\it d}) {BLEU$@$4}} \end{minipage}\\ \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_meteor.pdf}\\ \mbox{ \;\;\;\; ({\it e}) {METEOR}} \end{minipage} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_rouge.pdf}\\ \mbox{ \;\;\;\; ({\it f}) {ROUGE-L}} \end{minipage} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_cider.pdf}\\ \mbox{ \;\;\;\; ({\it g}) {CIDEr-D}} \end{minipage} \begin{minipage}[h]{42mm} \centering \includegraphics[width=42mm]{./figure/cr_spice.pdf}\\ \mbox{ \;\;\;\; ({\it h}) {SPICE}} \end{minipage} \end{center} \caption{Captioning performance for different ratios of supervised data (Cross-Entropy Loss).}\label{fig:f11} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=180mm]{./figure/demo.pdf}\\ \caption{Examples of captions generated by CPRC and the baseline models, together with the corresponding ground truths.}\label{fig:f3} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=180mm]{./figure/augmentation.pdf}\\ \caption{(Best viewed in color) Examples of captions generated from augmented images.}\label{fig:f4} \end{figure*} \begin{figure}[t] \begin{center} \begin{minipage}[h]{85mm} \centering \includegraphics[width=85mm]{./figure/cider_spice_1.pdf}\\ \end{minipage} \end{center} \caption{Captioning performance for different ratios of unsupervised data (CIDEr-D Score Optimization).}\label{fig:f2} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=180mm]{./figure/demo-p.pdf}\\ \includegraphics[width=180mm]{./figure/demo-p1.pdf}\\ \caption{Examples of captions generated by CPRC and the baseline models, together with the corresponding ground truths (GT1-GT5 are the 5 given ground-truth sentences).}\label{fig:f12} \end{figure*} \section{Experiments}\label{sec:s2} \subsection{Datasets} We adopt the popular MS COCO dataset~\cite{LinMBHPRDZ14} for evaluation, as
most related methods are evaluated almost exclusively on this dataset~\cite{HuangWCW19,HuangWXC19,HerdadeKBS19,ZhouWLHZ20,RennieMMRG17}. The MS COCO dataset contains 123,287 images (82,783 training images and 40,504 validation images), each labeled with 5 captions. The popular test sets fall into two categories: online evaluation and offline evaluation. Considering that all methods are evaluated under the semi-supervised scenario, online evaluation cannot be used, so we only use offline evaluation. The offline “Karpathy” data split~\cite{KarpathyF17} contains 5,000 images for validation, 5,000 images for testing, and the rest for training. To construct the semi-supervised scenario, we randomly select examples with artificially set proportions as supervised data from the training set; the rest are treated as unsupervised data. \subsection{Implementation Details} The target of CPRC is to train the generator $G$. In detail, we employ the AoANet~\cite{HuangWCW19} structure as the base model for $G$. Meanwhile, we adopt a fully connected network for $f$ with three fully connected layers (with 1024 dimensions for the hidden layers). The dimension of the original image vectors is 2048, and we project them to a new space of dimension 1024 following~\cite{HuangWCW19}. We set $K=3$, i.e., each image has three augmentations obtained with a random occlusion technique. As for the training process, we train AoANet for $40$ epochs with a mini-batch size of 16; the ADAM~\cite{KingmaB14} optimizer is used with a learning rate initialized to $10^{-4}$ and annealed by $0.8$ every $3$ epochs. The parameters $\lambda_1$ and $\lambda_2$ are tuned in $\{0.01, 0.1, 1, 10\}$, and $\tau = 0.1$. The entire network is trained on an Nvidia TITAN X GPU. \subsection{Baselines and Evaluation Protocol} The comparison models fall into three categories: 1) state-of-the-art supervised captioning methods: SCST~\cite{RennieMMRG17}, AoANet~\cite{HuangWCW19}, AAT~\cite{HuangWXC19}, ORT~\cite{HerdadeKBS19} and GIC~\cite{ZhouWLHZ20}; note that these methods can only utilize the supervised image-sentence pairs. 2) State-of-the-art unsupervised captioning methods: Graph-align~\cite{GuJCZYW19} and UIC~\cite{Feng00L19a}; these approaches utilize independent image and corpus sets for training. 3) A state-of-the-art semi-supervised method: A3VSE~\cite{HuangKLCH19}. Moreover, we conduct extra ablation studies to evaluate each term of the proposed CPRC: 1) AoANet+P, which combines the label prediction consistency with the original AoANet generation loss as a multi-task loss (only using the supervised data); 2) AoANet+C, which combines the relation consistency loss with the original AoANet generation loss as a multi-task loss (only using the supervised data); 3) PL, which replaces the prediction consistency with pseudo labeling, as in traditional semi-supervised methods; 4) AC, which replaces the relation consistency with augmentation consistency, as in traditional semi-supervised methods; 5) Embedding+, which replaces the relational consistency loss with an embedding consistency loss that minimizes the difference between the embeddings of the image inputs and the generated sentences; 6) Semantic+, which replaces the relational consistency loss with a prediction consistency loss that minimizes the difference between the predictions of the image inputs and the generated sentences; 7) Strong+, which replaces the weak augmentation with strong augmentation; 8) w/o Prediction, which only retains the relation consistency loss in Eq. \ref{eq:e7}; 9) w/o Relation, which only retains the prediction consistency in Eq. \ref{eq:e7}; and 10) w/o $\tau$, which removes the confidence threshold, as in Eq. \ref{eq:e6}.
For evaluation, we use different metrics, including BLEU~\cite{PapineniRWZ02}, METEOR~\cite{BanerjeeL05}, ROUGE-L~\cite{HuangWCW19}, CIDEr-D~\cite{VedantamZP15} and SPICE~\cite{AndersonFJG16}, to evaluate the proposed method and the comparison methods. All the metrics are computed with the publicly released code\footnote{https://github.com/tylin/coco-caption}. In fact, the CIDEr-D and SPICE metrics are the most suitable for the image captioning task~\cite{AndersonFJG16,VedantamZP15}. One of the problems with metrics such as BLEU, ROUGE, CIDEr and METEOR is that they are primarily sensitive to n-gram overlap; however, n-gram overlap is neither necessary nor sufficient for two sentences to convey the same meaning~\cite{GimenezM07a}. As shown in the example provided by~\cite{AndersonFJG16}, consider the following two captions (a,b) from the MS COCO dataset: \begin{itemize} \item[(a)] A young girl standing on top of a tennis court. \item[(b)] A giraffe standing on top of a green field. \end{itemize} The captions describe two different images; however, the aforementioned n-gram metrics produce a high similarity score due to the presence of the long 5-gram phrase ``standing on top of a'' in both captions. Meanwhile, consider the following captions (c,d) obtained from the same image: \begin{itemize} \item[(c)] A shiny metal pot filled with some diced veggies. \item[(d)] The pan on the stove has chopped vegetables in it. \end{itemize} These captions convey almost the same meaning, yet exhibit low n-gram similarity, as they have no words in common. To address this problem, SPICE~\cite{AndersonFJG16} estimates caption quality by transforming both candidate and reference captions into a graph-based semantic representation (i.e., a scene graph). The scene graph explicitly encodes the objects, attributes and relationships found in image captions, abstracting away most of the lexical and syntactic idiosyncrasies of natural language in the process. CIDEr-D~\cite{VedantamZP15} measures the similarity of a candidate sentence to the consensus of how most people describe the image (i.e., the reference sentences).
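As an illustration, the scores can be computed with the released toolkit roughly as follows; the example reuses the captions discussed above and assumes the inputs have already been tokenized (the toolkit normally applies a PTB tokenizer first, and SPICE additionally requires its bundled Java scorer).
\begin{verbatim}
# Scoring sketch with the coco-caption code (pycocoevalcap).
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.spice.spice import Spice

gts = {0: ["a giraffe standing on top of a green field"]}  # references
res = {0: ["a giraffe stands in a grassy field"]}          # candidate

cider_score, _ = Cider().compute_score(gts, res)
spice_score, _ = Spice().compute_score(gts, res)
print(cider_score, spice_score)
\end{verbatim}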
\subsection{Quantitative Analysis} Table \ref{tab:tab1} presents the quantitative comparison with state-of-the-art methods (using 1$\%$ supervised and 99$\%$ unsupervised data in the training set). Note that supervised captioning methods can only develop mapping functions from the supervised data and leave out the unsupervised data. For fairness, all the models are first trained under the cross-entropy loss and then optimized for the CIDEr-D score as in~\cite{HuangWCW19}; ``-'' indicates results not given in the original paper. The results reveal that: 1) AoANet achieves the best scores on most metrics among the existing supervised methods; therefore, CPRC adopts AoANet as the base image-sentence mapping function. 2) The unsupervised approach, i.e., UIC, achieves the worst performance on all metrics under both losses. This verifies that the generated sentence may mismatch the image with high probability when only a domain discriminator is considered. Graph-align performs better than the supervised approaches, but worse than A3VSE on most metrics, because it ignores specific example matching. 3) The semi-supervised method, i.e., A3VSE, has little effect on improving the captioning performance: under cross-entropy loss/CIDEr-D score optimization it only improves the CIDEr-D and SPICE scores by 0.4/2.0 and 0.2/0.1, respectively, compared with AoANet, because it is more difficult to ensure the quality of the generated sentences. 4) CPRC achieves the highest scores among all compared methods in terms of all metrics, in both the cross-entropy loss and the CIDEr-D score optimization stages, except ROUGE-L under cross-entropy loss. For example, CPRC achieves a state-of-the-art performance of 77.9/78.8 (CIDEr-D score) and 16.2/16.8 (SPICE score) under the two losses (cross-entropy and CIDEr-D score), i.e., improvements of 8.7/8.4 and 1.9/1.6 compared with AoANet. These phenomena indicate that, with a limited amount of supervised data, existing methods cannot construct a good mapping function, whereas CPRC can reliably utilize the undescribed images to enhance the model. 5) CPRC performs better than w/o $\tau$ on all metrics, which indicates the effectiveness of the confidence threshold. \subsection{Ablation Study} To quantify the impact of the proposed CPRC modules, we compare CPRC against ablated models with various settings. The bottom half of Table \ref{tab:tab1} presents the results: 1) AoANet+P and AoANet+C achieve better performance than AoANet, which indicates that the prediction loss and the relation consistency loss can improve generator learning, because the labels provide extra semantic information; meanwhile, AoANet+P performs better than AoANet+C on most metrics, which indicates that the prediction loss is more significant than the relation consistency; 2) PL and AC perform worse than w/o Prediction and w/o Relation, which verifies that traditional semi-supervised techniques based on pseudo labeling are not as good as cross-modal semi-supervised techniques that take the raw image as pseudo supervision; 3) Embedding+ performs worse than Semantic+, which reveals that embeddings are more difficult to compare than predictions, since image and text have heterogeneous representations; 4) Strong+ performs worse than CPRC, which validates that strong augmentation may degrade the generated sentence and further affect the prediction, causing noise accumulation; 5) both w/o Prediction and w/o Relation improve the captioning performance on most criteria, especially on the important ones, i.e., CIDEr-D and SPICE; this indicates that both the prediction and relation consistencies provide effective supervision to ensure the quality of the generated sentences; 6) the effect of w/o Relation is more obvious, which shows that the prediction loss can further improve the scores by comprehensively considering the semantic information; and 7) CPRC achieves the best scores on most metrics, which indicates that it is better to combine content and relation information. \subsection{CPRC with Different Captioning Models} \begin{table}[htb]{ \centering \caption{Performance of CPRC with different captioning models on the MS-COCO “Karpathy” test split, where B$@$N, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.
} \label{tab:tab2} \begin{tabular}{@{}l@{}|@{}c|@{}c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{Cross-Entropy Loss} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule SCST &56.8&38.6&25.4&16.3&16.0 &42.4&38.9&9.3\\ GIC &63.0&46.8&33.2 &20.0&19.2&50.3&50.5&12.3\\ \midrule SCST+CPRC &\bf 63.5&\bf 45.9&\bf 31.7&\bf 21.6&\bf 19.4&\bf 45.8& \bf48.1&\bf 10.2\\ GIC+CPRC &\bf 66.8&\bf 47.5&\bf 34.5&\bf 21.4&\bf 19.2&\bf 50.8&\bf 57.7&\bf 13.4\\ \midrule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{CIDEr-D Score Optimization} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule SCST &59.4&39.5&25.3 &16.3&17.0&42.9&43.7&9.9\\ GIC &64.7&46.9&32.0&20.7&19.0 &47.8&55.7&12.5\\ \midrule SCST+CPRC &\bf66.5&\bf48.0&\bf33.7&\bf22.7&\bf20.4&\bf47.9&\bf48.7&\bf10.7\\ GIC+CPRC &\bf66.9&\bf47.9&\bf34.8&\bf21.8&\bf19.8&\bf48.2&\bf58.9&\bf13.6\\ \bottomrule \end{tabular}} \end{table} To explore the generality of CPRC, we conduct further experiments by incorporating CPRC into different supervised captioning approaches, i.e., SCST (an encoder-decoder based model) and GIC (an attention based model). Note that we did not adopt editing-based methods, for reproducibility reasons. The results are recorded in Table \ref{tab:tab2}. We find that all the methods, i.e., SCST, GIC and AoANet (see Table \ref{tab:tab1}), improve their performance after being combined with the CPRC framework. This phenomenon validates that CPRC can effectively exploit the undescribed images for existing supervised captioning models. \subsection{Influence of the Supervised and Unsupervised Images} To explore the influence of supervised data, we tune the ratio of supervised data; the results are recorded in Figure \ref{fig:f1} and Figure \ref{fig:f11} for different metrics. We find that, as the percentage of supervised data increases, the performance of CPRC improves faster than that of the other state-of-the-art methods. This indicates that CPRC can reasonably utilize the undescribed images to improve the learning of the generator. Furthermore, we validate the influence of unsupervised data: we fix the supervised ratio to 1$\%$ and tune the ratio of unsupervised data in $\{10\%, 40\%, 70\%, 100\%\}$; the results are recorded in Figure \ref{fig:f2}. Note that one of the problems with metrics such as BLEU, ROUGE, CIDEr-D and METEOR for evaluating captions is that they are primarily sensitive to n-gram overlap~\cite{HuangWCW19,AndersonFJG16}. Therefore, we only give the results for CIDEr-D and SPICE here (refer to the supplementary material for more details). We find that, as the percentage of unsupervised data increases, the performance of CPRC also improves. This indicates that CPRC can make full use of the undescribed images for positive training. \begin{table}[htb]{ \centering \caption{Performance of CPRC with different augmentation numbers on the MS-COCO “Karpathy” test split, where B$@$N, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.
} \label{tab:tab3} \begin{tabular}{@{}l@{}|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{Cross-Entropy Loss} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule K=1 &67.5&48.9&34.6&22.5&21.1&48.4&74.7&15.5\\ K=2 &67.8&49.5&34.9&23.4&21.7&49.5&75.9&15.8\\ K=3 &\bf 68.8&\bf51.1&\bf35.5&\bf24.9&\bf22.8&\bf50.4&\bf77.9&\bf16.2\\ K=4 &67.9&49.8&34.8&24.2&22.2&50.1&76.8&16.0\\ K=5 &67.6&49.7&34.5&23.8&22.0&49.8&76.2&16.0\\ \midrule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{CIDEr-D Score Optimization} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule K=1 &68.0&50.1&35.7&24.8&22.0&49.5&77.1&16.1\\ K=2 &68.3&50.5&35.9&25.3&22.1&49.7&77.7&16.5\\ K=3 &\bf69.9&\bf51.8&\bf36.7&\bf25.5&\bf23.4&\bf50.7&\bf78.8&\bf16.8\\ K=4 &68.7&51.4&36.5&25.2&22.8&49.7&77.4&16.3\\ K=5 &68.3&50.8&35.9&25.1&22.7&49.4&77.3&16.2\\ \bottomrule \end{tabular}} \end{table} \subsection{Influence of the Augmentation Number} To explore the influence of the augmentation number $K$, we conduct further experiments: we tune $K$ in $\{1,2,3,4,5\}$ and record the results in Table \ref{tab:tab3}. The results reveal that CPRC achieves its best performance with $K = 3$; beyond that, additional inconsistency noise between image and sentence may be introduced as the number of augmentations increases. \begin{table}[htb]{ \centering \caption{Performance of CPRC with different $\tau$ on the MS-COCO “Karpathy” test split, where B$@$N, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores. } \label{tab:tab4} \begin{tabular}{@{}l|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{Cross-Entropy Loss} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule $\tau = 0$ &66.9&49.8&34.5&24.2&21.5&49.5&76.2&15.4\\ $\tau = 0.1$ &\bf68.8&\bf51.1&\bf35.5&\bf24.9&\bf22.8&\bf50.4&\bf77.9&\bf16.2\\ $\tau = 0.4$ &66.4&49.5&34.3&24.0&21.1&48.8&75.8&15.2\\ $\tau = 0.7$ &64.2&48.1&33.4&22.9&20.4&46.5&73.3&15.0\\ \midrule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{CIDEr-D Score Optimization} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule $\tau = 0$ &68.5 &50.8&36.2&25.0&22.5&49.8&77.5&16.2\\ $\tau = 0.1$ &\bf69.9&\bf51.8&\bf36.7&\bf25.5&\bf23.4&\bf50.7&\bf78.8&\bf16.8\\ $\tau = 0.4$ &68.4&50.2&36.1&24.8&22.1&49.5&77.1&16.1\\ $\tau = 0.7$ &64.8&48.6&34.2&23.5&20.8&47.3&73.7&15.1\\ \bottomrule \end{tabular}} \end{table} \subsection{Influence of the Confidence Threshold} To explore the influence of the confidence threshold $\tau$, we conduct further experiments: we tune $\tau$ in $\{0, 0.1, 0.4, 0.7\}$ and record the results in Table \ref{tab:tab4}. The results reveal that the performance of CPRC first increases and then decreases as $\tau$ grows. The reason is that fewer undescribed images are used as $\tau$ increases, so the generator training cannot fully exploit the unsupervised data. \subsection{Visualization and Analysis} Figure~\ref{fig:f3} shows a few examples with captions generated by our CPRC and two baselines, A3VSE and AoANet, as well as the human-annotated ground truths. From these examples, we find that the captions generated by the baseline models lack linguistic coherence and are inaccurate with respect to the image content, while CPRC generates accurate, high-quality captions. Figure~\ref{fig:f4} shows an example of augmented images and the corresponding generated captions.
From these examples, we find that the generated captions basically convey similar semantic information, which helps the prediction and relation consistencies for the undescribed images. \section{Influence of Label Prediction} To explore the effect of the prediction loss, we conduct further experiments and exhibit several cases. Figure~\ref{fig:f12} shows a few examples with captions generated by our CPRC and two baselines, A3VSE and AoANet, as well as the human-annotated ground truths. From these examples, we find that the captions generated by the baseline models lack linguistic coherence and are inaccurate with respect to the image content, while CPRC generates accurate, high-quality captions. Meanwhile, the red parts of the sentences generated by CPRC clearly show that the label prediction helps the generator to understand the image; for example, in Figure~\ref{fig:f12} (a), the content of the image is complicated and the bird is not salient, which causes the sentences generated by AoANet and A3VSE to be inconsistent with the ground-truths, while CPRC generates a good description of the ``bird'' and the ``umbrella'' by incorporating the label prediction information. \begin{figure*}[htb] \begin{center} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_b1.pdf}\\ \mbox{ \;\;\;\; ({\it a}) {BLEU$@$1}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_b2.pdf}\\ \mbox{ \;\;\;\; ({\it b}) {BLEU$@$2}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_b3.pdf}\\ \mbox{ \;\;\;\; ({\it c}) {BLEU$@$3}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_b4.pdf}\\ \mbox{ \;\;\;\; ({\it d}) {BLEU$@$4}} \end{minipage}\\ \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_meteor.pdf}\\ \mbox{ \;\;\;\; ({\it e}) {METEOR}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_rouge.pdf}\\ \mbox{ \;\;\;\; ({\it f}) {ROUGE-L}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_cider.pdf}\\ \mbox{ \;\;\;\; ({\it g}) {CIDEr-D}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lcr_spice.pdf}\\ \mbox{ \;\;\;\; ({\it h}) {SPICE}} \end{minipage} \end{center} \caption{Parameter sensitivity of $\lambda_1$ and $\lambda_2$ with Cross-Entropy Loss.}\label{fig:f13} \begin{center} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_b1.pdf}\\ \mbox{ \;\;\;\; ({\it a}) {BLEU$@$1}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_b2.pdf}\\ \mbox{ \;\;\;\; ({\it b}) {BLEU$@$2}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_b3.pdf}\\ \mbox{ \;\;\;\; ({\it c}) {BLEU$@$3}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_b4.pdf}\\ \mbox{ \;\;\;\; ({\it d}) {BLEU$@$4}} \end{minipage}\\ \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_meteor.pdf}\\ \mbox{ \;\;\;\; ({\it e}) {METEOR}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_rouge.pdf}\\ \mbox{ \;\;\;\; ({\it f}) {ROUGE-L}} \end{minipage} \begin{minipage}[h]{40mm} \centering \includegraphics[width=40mm]{./figure/lc_cider.pdf}\\ \mbox{ \;\;\;\; ({\it g}) {CIDEr-D}} \end{minipage} \begin{minipage}[h]{40mm} \centering
\includegraphics[width=40mm]{./figure/lc_spice.pdf}\\ \mbox{ \;\;\;\; ({\it h}) {SPICE}} \end{minipage} \end{center} \caption{Parameter sensitivity of $\lambda_1$ and $\lambda_2$ with CIDEr-D Score Optimization.}\label{fig:f14} \end{figure*} \subsection{Sensitivity to Parameters} The main parameters are $\lambda_1$ and $\lambda_2$ in Eq. \ref{eq:e6}. We vary them in $\{0.01, 0.1, 1, 10\}$ to study their sensitivity with respect to the different metrics, and record the results in Figure \ref{fig:f13} and Figure \ref{fig:f14}. We find that CPRC always achieves the best performance with a small $\lambda_1$ (i.e., $\lambda_1=0.01$) and a large $\lambda_2$ (i.e., $\lambda_2=10$) in terms of all metrics, under both cross-entropy and CIDEr-D score optimization. This phenomenon also validates that the relation consistency loss plays an important role in enhancing the generator. \begin{table}[!htb]{ \centering \caption{Performance with different ratios of unsupervised data (the supervised ratio is fixed at $1\%$) on the MS-COCO “Karpathy” test split, where B$@$N, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores. } \label{tab:tab11} \begin{tabular}{@{}l|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{Cross-Entropy Loss} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule 10$\%$ &68.3&49.5&34.9&23.3&21.4&49.6&71.7&14.6\\ 40$\%$ &66.9&48.7&34.2 &23.4& 22.9&49.6&72.9&15.6\\ 70$\%$ &68.4&50.6&35.6 &24.4& 22.9&\bf 50.5&74.4&15.9\\ 100$\%$ &\bf 68.8&\bf 51.1&\bf 35.7&\bf 24.9&\bf 22.9&50.4&\bf 77.9&\bf 16.2\\ \midrule \multirow{2}{*}{Methods} & \multicolumn{8}{c}{CIDEr-D Score Optimization} \\ \cmidrule(l){2-9} & B$@$1 & B$@$2 & B$@$3 & B$@$4 & M & R & C & S \\ \midrule 10$\%$ &68.7&51.0&25.6&23.9&22.4&50.6&74.1&14.9\\ 40$\%$ &69.2&50.2&35.6 &24.1&22.9&\bf 50.8&75.7&15.9\\ 70$\%$ &69.4&51.3&36.5 &24.8&22.8&50.7&76.5&16.2\\ 100$\%$ &\bf 69.9&\bf 51.8&\bf 36.7&\bf 25.5&\bf 23.4&50.7&\bf 78.8&\bf 16.8\\ \bottomrule \end{tabular}} \end{table} \section{Conclusion}\label{sec:s3} Since traditional image captioning methods usually work on fully supervised multi-modal data, in this paper we investigated how to use undescribed images for semi-supervised image captioning. Specifically, we proposed Cross-modal Prediction and Relation Consistency (CPRC): CPRC employs prediction distillation for the predictions of sentences generated from undescribed images, and develops a novel relation consistency between augmented images and generated sentences to retain the important relational knowledge. As demonstrated by the experiments on the MS-COCO dataset, CPRC outperforms state-of-the-art methods in various complex semi-supervised scenarios. \appendices \section{Influence of Unsupervised Data} Furthermore, we explore the influence of unsupervised data: we fix the supervised ratio to 1$\%$ and tune the ratio of unsupervised data in $\{10\%, 40\%, 70\%, 100\%\}$; the results are recorded in Table \ref{tab:tab11}. We find that, as the percentage of unsupervised data increases, the performance of CPRC also improves in terms of all metrics. This indicates that CPRC can make full use of the undescribed images for positive training. However, the growth rate slows down in the later stage (i.e., after $70\%$), probably owing to the interference of pseudo-label noise.
\section*{Acknowledgment} This research was supported by NSFC (62006118), the Natural Science Foundation of Jiangsu Province of China under Grant BK20200460, the NSFC-NRF Joint Research Project under Grant 61861146001, the CCF-Baidu Open Fund (CCF-BAIDU OF2020011), and the Baidu TIC Open Fund. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtranN}\small
\section{Introduction} \label{sec:introduction} Information disorder~\cite{CouncilofEurope,UnescoHandbook}, a general term that covers the different ways in which the information environment can be polluted, is a cause for concern for many national and international institutions; in fact, incorrect beliefs propagated by malicious actors are believed to have important consequences on politics, finance and public health. The European Commission, for instance, called the mix of pervasive and polluted information that spread online during the COVID-19 pandemic an ``infodemic''~\cite{EuropeanCommission}, which risks jeopardising the efficacy of interventions against the outbreak. In particular, social media users are continuously flooded with information shared or liked by the accounts they follow, and the news diffused there is not necessarily controlled by the editorial boards that, in professional journalism, are in charge of ruling out poor, inaccurate or unwanted reporting. However, the emergence of novel news consumption behaviours is not negligible: according to a study by the Pew Research Center~\cite{PewJournalism}, 23\% of United States citizens stated that they often get their information from social media, and 53\% at least sometimes. Unfortunately, even if social media may in principle increase media pluralism, this novelty comes with drawbacks, such as misinformation and disinformation spreading at a remarkably high rate and scale~\cite{Lazer1094,Allcott2017,Vosoughi1146,Grinberg374}. If the reliability of the source is an important issue to be addressed, the other side of the coin is the individual's intention when spreading low-quality information; in fact, although the distinction between misinformation and disinformation might seem blurry, the main difference between the two concepts lies in the individual's intention: misinformation is false or inaccurate information deceiving its recipients, who may spread it even unintentionally, while disinformation refers to a piece of manipulative information that has been diffused purposely to mislead and deceive. Focusing on misinformation, we may wonder how and why users can be deceived and pushed to share inaccurate information. The reasons for such behaviour are usually explained in terms of the inherent attractiveness of fake news, which is more novel and emotionally captivating than regular news~\cite{Vosoughi1146}, the repeated exposure to the same information, amplified by filter bubbles and echo chambers~\cite{Flaxman2016FilterBubbles}, or the ways all of us are affected by psycho-social and cognitive biases, e.g., the bandwagon effect~\cite{Nadeau1993}, confirmation bias~\cite{ConfirmationBias}, and the vanishing hyper-correction effect~\cite{butler2011}, which apparently make debunking a very difficult task to achieve. However, much remains to be understood about how a single user decides whether some news is true or false, how many individuals fact-check information before making such a decision, how the interaction design of the Web site publishing the news can assist or deceive the user, and to what extent social influence and peer-pressure mechanisms can push a person into believing something against all the evidence to the contrary. Our main objective is to investigate the latent cognitive and device-induced processes that may prevent us from distinguishing true from false news.
We think that understanding the basic mechanisms that lead us to make such choices may help us to identify, and possibly fix, deceiving factors, to improve the design of forthcoming social media web-based platforms, and to increase the accuracy of fake news detection systems, which are traditionally trained on datasets where news has previously been annotated by humans as true or false. To this purpose, we created an artificial online news outlet where 7,298 volunteers were asked to review 20 different news items and to decide whether they are true or false. The target of the experiment is manifold: we aim to assess whether individuals make these kinds of decisions independently, or whether the ``wisdom of the crowd'' may play a role; whether decisions made on limited information (the headline and a short excerpt of the article) lead to better or worse results than choices made upon richer data (i.e., the full text and the source of the article); and whether users' greater familiarity with the Web, or other self-provided information, can be linked to better annotations. The results of the experiment and their relationships with related work are described in the rest of the paper. \section{Related Work} \label{sec:relatedworks} Telling whether a news item is true or false is somehow a forced simplification of a multi-faceted problem, since we must encompass a substantial number of different bad information practices, ranging from fake news to propaganda, lies, conspiracies, rumours, hoaxes, hyper-partisan content, falsehoods and manipulated media. Information pollution is therefore not entirely made of totally fabricated contents, but rather of many shades of truth stretching. This complexity adds to the variety of styles and reputations of the writing sources, making it hard to tell misleading contents from accurate news. Given that users' time and attention are limited~\cite{qiu2017limited}, and extensive fact-checking and source comparison are impossible on a large scale, we must rely on heuristics to quickly evaluate a news piece even before reading it. Karnowski et al.~\cite{KARNOWSKI201742} showed that online users decide on the credibility of a news article by evaluating contextual clues. An important factor is the source of the article, as pointed out by Sterrett et al.~\cite{Sterrett2019}, who found that both the source (i.e., the website an article is taken from) and the author of the social media post that shares the article have a heavy impact on readers' judgement of the news. Pennycook et al.~\cite{Pennycook2521} also investigated the role of the website the news is taken from, finding that most people are capable of telling a mainstream news website from a low-credibility one just by looking at them. Attribution of credibility based on the source is a well-documented practice in the scientific literature. Due to the impossibility of fact-checking every news article, researchers have often relied on blacklists of low-credibility websites, curated by debunking organisations, in order to classify disinformation contents~\cite{Vargo2018TheAP,Allcott2019,guess2018selective,Tacchini2017}. Lazer et al.~\cite{Lazer1094} state that they ``advocate focusing on the original sources — the publishers — rather than individual stories, because we view the defining element of fake news to be the intent and processes of the publisher''. In fact, a malicious intention of the publisher is a distinctive characteristic of disinformation activities~\cite{CouncilofEurope}.
Perception of the source can also depend on personal biases: Carr et al.~\cite{Carr2014} showed that sceptical and cynical users tended to attribute more credibility to citizen journalism websites than to mainstream news websites. Go et al.~\cite{GOEun2014358} also pointed out that the credibility of an article is influenced by the believed expertise of the source, which in turn is affected by pre-existing psychological biases of the reader, such as in-group identity and the bandwagon effect. In fact, other people's judgement has also been shown to produce an observable effect on the individual perception of a news item~\cite{Houston2011, Rojas2010}. Lee et al.~\cite{Lee2010NeedForCognition} showed that people's judgement can be influenced by other readers' reactions to news, and that others' comments significantly affect one's personal opinion. Colliander et al.~\cite{COLLIANDER2019202} found that posts with negative comments from other people discourage users from sharing them. As noted by Winter et al.~\cite{Winter2015}, while negative comments can make an article appear less convincing to readers, the same does not apply to a high or low number of likes alone. For an up-to-date review of the existing literature on `fake news' related research problems, see~\cite{ruffo2021surveying}. \noindent \textbf{Our contribution:} In our experiment we aimed to capture how the reader's decision process on the credibility of online news is modified by some of the above-mentioned contextual features, which of these features is more effective on readers' minds, and on whom. We launched \texttt{Fakenewslab} (available at http://www.fakenewslab.it), an artificial online interactive news outlet designed to test different scenarios of credibility evaluation. We presented a sequence of 20 articles to volunteers, showing the articles to each user in one of five distinct ways. Some users see only a title and a short description, simulating the textual information that would appear in the preview of the same article on social media. Additional information is presented to other users: some can read the full text of the article, others see the source (i.e., a reference to the media outlet of the news), and others are told the percentage of other users that classified the news as true. Also, to better assess the significance of this last piece of information, we showed a randomly generated percentage to a fifth group of users. The view mode is selected randomly for each volunteer when they enter the platform and start the survey. Everyone was asked to mark every single news item as true or false, and this requirement guided us in the preliminary selection of the articles, filtering out all those news items for which such a rigid distinction could not be made. Several other works similarly asked online users to rate true or false news~\cite{Pennycook2019, micallef2021fakey, pennycook2018prior}, but to the best of our knowledge ours is the first with the aim of quantifying how much a small twist in the interface of an online post can impact people's evaluation of a news article. Our platform is inspired by MusicLab by Salganik et al.~\cite{salganik2006experimental}, an artificial cultural market that was used to study success unpredictability and inequality dynamics, and that marked a milestone in assessing the impact of social influence on individuals' choices.
In particular, we adopted the idea of dispatching users into parallel virtual Rooms, where it is possible to control the kind and presentation of the information they read, in order to compare different behavioural patterns that may show high variability even for identical article presentation modes. Our results have implications for the way we design social media interfaces, for how we should conduct news annotation tasks intended to train machine learning models that detect ``fake news'', and also for forthcoming debunking strategies. Interestingly, false news is correctly identified more frequently than true news, but showing the full article instead of just the title, surprisingly, does not increase overall accuracy. Also, displaying the original source of the news may contribute to misleading the user in some cases, while social influence can positively assist individuals' ability to classify correctly, even if the wisdom of the crowd can be a slippery slope. Better performance was observed for those participants who autonomously opened an additional browser tab while compiling the survey, suggesting that searching the Web for debunking actually helps. Finally, users who declare themselves as young adults are also those who open a new tab more often, suggesting more familiarity with the Web. \section{Data and Methodology} \label{sec:methodology} \texttt{Fakenewslab} is presented as a test accessible via the Web, where volunteers are exposed to 20 news articles that were actually published online in Italian news outlets; for each item, users must say whether it is true or false. Every user reads the same 20 news items in a random order, and is randomly redirected to one out of five different experimental environments, which we call ``virtual rooms'', in which the news is presented quite differently: \begin{itemize} \item \textbf{Room 1} shows, for every news item, the bare headline and a short excerpt, as they would appear on social media platforms, but devoid of any contextual clue. \item \textbf{Room 2} also shows the full text of the articles, as they were presented on the original website, again without any clear reference to the source. \item \textbf{Room 3} shows the headline with a short excerpt and the source of the article, as it would appear on social media but devoid of social features. \item \textbf{Room 4} shows the headline with a short excerpt, plus the percentages of users that classified the news as true or false. We will refer to this information as ``social ratings'' from now on. \item \textbf{Room 5} is similar to Room 4, but with randomly generated percentages. We will refer to these ratings as ``random ratings'' from now on. \end{itemize} All the rooms are variations of Room 1, which is designed as an empirical baseline for comparative purposes. Room 5 has been introduced as a ``null model'' to evaluate the social influence effect in comparison with Room 4, which displays actual values. During the test, we monitored some activities in order to gain better insight into the news evaluation process. We collected data such as the timestamp of each answer (which allowed us to reconstruct how much time each answer took), the room the user was redirected to, and the social or random ratings they were seeing in Room 4 and Room 5 (a sketch of such an answer record is given below).
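As an illustration, each collected answer can be thought of as a record like the following; the schema is hypothetical and merely summarises the fields described above (the last field anticipates the tab-switching signal discussed next).
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class AnswerEvent:
    """Hypothetical schema for one collected answer; field names do
    not reflect the actual Fakenewslab database layout."""
    user_id: str         # anonymous session identifier
    room: int            # 1-5, assigned uniformly at random
    news_id: int         # 1-20
    rated_true: bool     # the user's true/false rating
    timestamp: float     # allows reconstructing per-answer time
    shown_rating: float  # social/random percentage (Rooms 4-5 only)
    left_tab: bool       # opened another browser tab (see next paragraph)
\end{verbatim}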
We also measured whether users left the test tab of the browser to open another tab during the test, an activity we interpreted as a signal of attempted fact-checking: searching online in order to assess the veracity of an article was neither explicitly encouraged nor forbidden by the test rules, and we left users the chance to behave as they would during everyday web browsing. We are aware that the mindset of a test respondent may have discouraged many users from fact-checking the articles, as it may have been perceived as cheating on the test, but our argument is that the very same mindset led users to pay an amount of attention to the news that they would not have paid otherwise, thus raising an alert about possible deception hiding in each news item: explicitly allowing web searches during the test would probably have led to a disproportionate, unrealistic fact-checking activity. We noted that eventually 18.3\% of users fact-checked the news at least once on their own initiative, a percentage in accordance with the existing literature~\cite{Hassan2017, Robertson2020}. However, estimating how many users would have fact-checked the articles is outside the goals of this experiment: we only looked at those users who possibly did it, and at how fact-checking impacted their perception and rating of the news. Along with information related to the test, we asked participants to fill in an optional questionnaire about personal demographic information. We collected users' gender, age, job, education, political alignment, and time and number of daily newspapers read, each of them on an optional basis; 64\% of users responded to at least one of these questions. This personal information helped us to link many relevant patterns in news judgement to demographic segments of the population. Details about the relationships between test scores and some demographic information are discussed in Appendix~\ref{app:demography}. All the users' data were anonymous: we neither asked for nor collected family and given names, email addresses, or any other information that could be used to identify the persons who participated in the task. We used session cookies to prevent returning users from taking the test more than once. The participants were nevertheless informed about the kind of data we were collecting and why. The test has been taken more than 10,000 times, but not everyone completed the task of classifying the 20 articles; in fact, although users were redirected to each room with uniform probability, we observed ex-post that some of them abandoned the test before the end, especially in Room 2 and partially in Room 5, as reported in Table~\ref{table:completion}. The higher abandonment rate in these two rooms could be explained by the length of the articles in Room 2, which may have burdened the reader more than the short excerpts of the other settings, and by the unreliable variety of ``other users' ratings'' in Room 5, which were actually random percentages and may have induced some users to question the test's integrity. It should also be noticed that users redirected to Room 1 finished the test with the highest completion rate. \begin{table}[h!]
\begin{center} \begin{tabular}{| c | c | c |} \hline Room & Share of finished tests & Completion rate \\ [0.5ex] \hline\hline \multicolumn{1}{|l|}{1: Headline} & 21.72\% & 81.76\% \\ \hline \multicolumn{1}{|l|}{2: Full text} & 17.42\% & 70.00\% \\ \hline \multicolumn{1}{|l|}{3: Source} & 20.81\% & 79.18\% \\ \hline \multicolumn{1}{|l|}{4: Social ratings} & 20.50\% & 79.97\% \\ \hline \multicolumn{1}{|l|}{5: Random ratings} & 19.55\% & 79.01\% \\ [1ex] \hline \end{tabular} \caption{Test completion percentages grouped by Room. Users were distributed uniformly at random across the five Rooms, but completed tests in Room 2 account for only 17.42\% of the total. Users in Room 2 left the test unfinished almost 10\% more often than users in the other Rooms.} \label{table:completion} \end{center} \end{table} After some necessary data cleaning (i.e., leaving out aborted sessions and second attempts by the same user), we ended up with 7,298 unique participants who completed the 20-question test (145,960 answers in total). The answers were collected over a two-week period in June 2020 from Italian-speaking users. \subsection{News selection} \label{sub:news_selection} As mentioned in Sec.~\ref{sec:relatedworks}, the information disorders spectrum is wide, and it includes mis-/dis-/malinformation practices, which means that reducing the general problem to a ``true or false'' game is overly simplistic. In fact, professional fact checkers debunk a wide variety of malicious pseudo-journalistic activities, flagging news articles in many different ways: reported news include not only articles that are entirely fabricated, but also articles that cherry-pick verifiable information to nudge a malicious narrative, or that omit or twist an important detail, thus telling a story substantially different from reality. For our experimental setting, we selected 20 articles that we were able to divide into two equally sized groups of true news (identified from now on with a number from 1 to 10) and false news (from 11 to 20); see Appendix~\ref{app:news} for the complete list and additional information. This division was straightforward in the majority of cases: we classified news as true when they had not been debunked on any professional fact-checking site and we could also confirm their veracity ourselves. To challenge the user a little, we selected less-known facts, sometimes exploited by some outlets as click-bait, and sometimes written in a poor style, with the exception of news 9 and 10, which we used as control questions. Marking a news item as false can be much more ambiguous. However, we selected 10 stories that were dismissed on at least one debunking site. In particular, article n. 14 was particularly difficult to classify as false, for the reasons explained in App.~\ref{app:news}, but we kept it anyhow: in fact, when evaluating the accuracy scores of \texttt{Fakenewslab} users, we are not calling out the gullibility of some of them, but only monitoring activities and reactions that may depend on the environmental setting they were subjected to. Thus it is important to stress here once again that we are more interested in the observed differences between Rooms' outcomes than in assessing people's ability to guess the veracity of a news item.
Hence, even if in the following sections we make use of terms such as ``correct'', ``wrong'', ``false/true positives'', or ``false/true negatives'' for the sake of simplicity, we must recall that we are measuring the alignment of users' decisions with our own classification only to assess those differences w.r.t. some given baseline. \section{Results} \label{sec:results} Overall, users who completed \texttt{Fakenewslab} showed a very suspicious attitude. Fig.~\ref{fig:overall_scores} and~\ref{fig:confmatrix} plot, respectively, the score distribution and the confusion matrix of all completed tests, regardless of the setting (Room). Users marked news in agreement with our own classification on average 14.79 times out of 20 questions, with a standard deviation of 2.23. % \begin{figure}[h!] \centering \includegraphics[width = 0.7\linewidth]{img/general_results.png} % \caption[]{Score distribution regardless of the test setting. Users marked news in agreement with our own classification on average 14.79 times out of 20 questions. The most common score was 15/20.} \label{fig:overall_scores} \end{figure} This generally ``good'' performance is unbalanced: 60\% of the news items were rated false, even though the selected true and false news were actually equally represented. False news, on the contrary, was rarely mistaken. False negatives (true news rated as false) were therefore the most common kind of error (see Fig.~\ref{fig:confmatrix}). It appears that users' high suspicion during the test led to overall results similar to a random guessing game in which each answer is correct about three times out of four: in fact, a Mann-Whitney test does not reject the hypothesis that the overall distribution of correct decisions (i.e., accurate ratings of news) is a binomial with probability $p \approx 0.74$. \begin{figure}[h!] \centering \includegraphics[width = 0.6\linewidth]{img/confusion_overall.png} \caption[]{Confusion matrix of users' ratings. Users rated a news item as false 60\% of the time, i.e., marking as false even some true items. True news was often mistaken for false.} \label{fig:confmatrix} \end{figure} Interestingly, there are differences depending on what kind of information the users were exposed to. Fig.~\ref{fig:results_by_room} shows the score distributions by Room. Users who saw the article's headline and its short excerpt (Room 1) scored an average of 14.79 correct attempts, exactly matching the overall average. \begin{figure*}[h!] \centering \includegraphics[width = 0.9\textwidth]{img/results_by_room.png} \caption{Score distribution by Room. Social influence is the feature with the highest positive/negative impact on users' scores: it can support the reader's judgement (Room 4), but it can also deceive it (Room 5).} \label{fig:results_by_room} \end{figure*} At first sight, all the other distributions and their characterising averages are similar to each other. Even so, if we keep Room 1 as a baseline, a Mann-Whitney test rejects the hypothesis that the distributions for Rooms 2, 4, and 5 are equivalent to Room 1's (i.e., they are statistically significantly different, with p-value $<10^{-5}$), while the same hypothesis for Room 3 could not be rejected. On closer inspection, we observed that the article's source had the greatest impact on users' judgement; in fact, if we compute the average absolute difference between the users' decisions on each question in Room 3 and in the other rooms, Room 3 is the one that produced the greatest gap.
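As an illustration, this per-question gap can be computed along the following lines; the sketch below is ours and assumes a hypothetical long-format \texttt{answers} table with one row per answer and illustrative column names.
\begin{verbatim}
import pandas as pd

# Hypothetical long-format table: one row per answer, with columns
# room (1-5), question (1-20), rated_true (0/1).
answers = pd.read_csv("answers.csv")  # illustrative file name

# Fraction of users rating each question "true", per room.
true_rate = (answers
             .groupby(["room", "question"])["rated_true"]
             .mean()
             .unstack("question"))  # rows: rooms, columns: questions

# Mean absolute per-question gap between each room and the others.
for room in true_rate.index:
    others = true_rate.drop(index=room).mean(axis=0)
    gap = (true_rate.loc[room] - others).abs().mean()
    print(f"Room {room}: mean absolute gap = {gap:.3f}")
\end{verbatim}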
The effect of the source was sometimes a positive, sometimes a negative contribution to users' decisions, depending on the specific news item and source, which is why the score distributions of Rooms 1 and 3 end up being equivalent. This effect is shown in Fig.~\ref{fig:accuracy_question}, which displays how many times users made a correct decision on each news item, grouped by Room. News items are sorted here with true news first (blue x-axis labels) and false news after (red x-axis labels) to improve readability. While other users' ratings were the most helpful feature (Room 4, see Subsection~\ref{subsec:social_influence}), Room 3 users' decisions are the ones that diverge the most from the others, especially for questions 1, 4, 6 and 8. Quite interestingly, articles 1, 4 and 8 reported true news, but their presentation had a click-bait tendency, and they were moreover taken from online blogs and pseudo-newspapers (see Table~\ref{tab:newstags} in App.~\ref{app:news}), which probably induced many users to believe them false. Question 6, instead, reported a case of an assault on policemen committed in a Roma camp, a narrative usually nudged by racist and chauvinist online pseudo-newspapers. The news was actually true, and the source mainstream (Il Messaggero is the eighth best-selling newspaper in Italy\footnote{Source: https://www.adsnotizie.it, accessed on April 29, 2021.}): users who saw the source attributed to the news a much higher chance of being true. Rooms 2, 4 and 5 deserve a deeper discussion in the next Subsections. \begin{figure} \centering \includegraphics[width = \linewidth]{img/accuracy_by_question.png} \caption{Accuracy for each question, grouped by Room. The source of the news (Room 3) is the feature that can influence users' judgement the most on a single question, either improving or worsening their scores, especially in questions 1, 4, 5, 8.} \label{fig:accuracy_question} \end{figure} \subsection{Individual decisions and social influence} \label{subsec:social_influence} The aim of Rooms 4 and 5 is to evaluate the impact of the so-called ``Wisdom of the Crowd'' on users' decisions. Room 4 shows the actual percentages of users marking every single news item as true or false: here we observed the highest average score, 15.31. Room 5 was created for validation purposes, since it displays percentages corresponding to alleged decisions by other users that are actually random. This led to the lowest average score, 14.43 (see Fig.~\ref{fig:results_by_room}). Our hypothesis about this important difference is that the random ratings were sometimes highly inconsistent with the content of the news, perhaps suggesting that an overwhelming share of readers had rated as true some blatantly false news, or vice versa. This inconsistency triggered suspicions about the test's integrity, which may be the primary cause of the slightly higher abandonment rate we observed in Room 5 (see Table~\ref{table:completion}). When users did not abandon the test, they were still likely to keep a suspicious attitude. In line with our argument, Fig.~\ref{fig:agreement} shows the agreement of users with the ratings they saw, for both Room 4 and Room 5. Overall, Room 4 users conformed with the social ratings 79.28\% of the time, while Room 5 users agreed with the random ratings only 54.66\% of the time. This low value is still slightly higher than 50\%, i.e., the value expected if users conformed with the crowd at random. As a result, there is a clear signal that their judgement was deceived, as they performed worse than the users in the other rooms.
Users who saw the random ratings but did not follow the crowd performed exactly like users in Room 1, which is our baseline setting. It should also be observed that randomly generated percentages displayed with true news were taken into account more often (55.8\% of the time) than random percentages shown with false news (53.5\% of the time), probably because we selected true news articles that were more deceptive owing to their style and borderline content (see App.~\ref{app:news}). Room 5 users may have followed the crowd when unsure about the answer, and decided independently otherwise, when the percentages contradicted their own prior beliefs. On the other hand, users in Room 4, who saw real decisions from other users, may have followed the crowd much less on true news than they did when deciding on false news (70.8\% against 87.7\%). Our hypothesis is that the social ratings were perfectly consistent with strong personal opinions about false news, while the signal from other users about true news was somewhat noisy for their judgement. \begin{figure} \centering \includegraphics[width = \linewidth]{img/agreement.png} \caption{Agreement of users with the social ratings (Room 4, blue) and random ratings (Room 5, orange). Users conform to social ratings far more than to random, inconsistent ones. Random ratings are likely trusted slightly more for true news, where users are probably more hesitant, than for false news. Similarly, social ratings were likely more helpful for false news than for true ones.} \label{fig:agreement} \end{figure} It is worth noting that the majority of respondents answered questions 2, 5 and 8 incorrectly (see Fig.~\ref{fig:accuracy_question}), probably posing a dilemma to other users: whether to follow a crowd that was consistent and helpful at other times, even while disagreeing with the majority's decision. For questions 5 and 8 the agreement with the social ratings is similar to that in Room 5 with random ratings (see Fig.~\ref{fig:agreement}), while the majority's behaviour on question 2 was accepted 64.5\% of the time, even though the related decisions were wrong. \begin{figure}% \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width = \linewidth]{img/normative_influence_1.png} % \caption[]{Social ratings in Room 4 likely led users to correct decisions, compared to those who did not have any additional information to use (Room 1).} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width = \linewidth]{img/normative_influence_2.png} \caption[]{Random ratings in Room 5 probably affected users' judgement negatively, often deceiving them compared to those in Room 1.} \end{subfigure} \caption{Comparison of percentages of users that labelled news as ``true'' in Room 1 vs Room 4 (top) and Room 1 vs Room 5 (bottom).} \label{fig:normative_influence}% \end{figure} Fig.~\ref{fig:normative_influence} shows the percentage of users that rated news as true in Room 1 vs Room 4 (top) and in Room 1 vs Room 5 (bottom). Users with no additional information tended to rate true news correctly fewer times, and rated false news as true more times, than users in Room 4. On the contrary, users in Room 5 were led to take true news as false, and vice versa. Social ratings and random ratings had a strong impact on individuals' judgement.
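For concreteness, the agreement percentages reported above can be computed along the following lines; as before, this is our own sketch over the hypothetical \texttt{answers} table, whose \texttt{shown\_true\_pct} column (the percentage displayed to the user) is an assumption of ours.
\begin{verbatim}
import pandas as pd

answers = pd.read_csv("answers.csv")  # illustrative file name

# A user "agrees" with the displayed rating when their true/false
# answer matches the side towards which the shown percentage leans.
rooms_45 = answers[answers["room"].isin([4, 5])].copy()
rooms_45["crowd_says_true"] = rooms_45["shown_true_pct"] > 50
rooms_45["agrees"] = (rooms_45["rated_true"].astype(bool)
                      == rooms_45["crowd_says_true"])

# Expected output: about 0.79 for Room 4 and 0.55 for Room 5.
print(rooms_45.groupby("room")["agrees"].mean())
\end{verbatim}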
The strong impact of social and random ratings may be explained by a combination of the bandwagon effect~\cite{nadeau1993new, MyersBandwagon}, which suggests that the rate of individual adoption of a belief increases with the number of individuals that have already adopted that belief, and the social bias known as normative social influence~\cite{cialdini2004social, aronson2005social, COLLIANDER2019202}, which suggests that users tend to conform to the collective behaviour, seeking social acceptance, even when they privately disagree. \texttt{Fakenewslab} users, when redirected to Rooms 4 and 5, could have interpreted the social ratings and random ratings as a genuine form of wisdom of the crowd~\cite{Kremer2014}, and conformed to them, while users in Room 1 simply answered according to their personal judgement. \subsection{More information is not better information} \label{subsec:full_text} Articles' headlines and short excerpts are designed to catch the reader's attention, but the full details of the story can only be found by delving into the body of the article. The body of an article is supposed to report details and elements that add information to the story: location, people involved, dates, related events, commentary. However, when the users could read the full text of the news in Room 2 instead of merely a short headline, their judgement was not helped by the additional details but instead hindered: average scores in Room 2 are lower than in the overall case, with 14.51 correct decisions per user (see Fig.~\ref{fig:results_by_room}). Fatigue could be an explanation. Table~\ref{table:completion} reports lower completion percentages for users in Room 2, suggesting that many of them could have been annoyed while reading longer stories, thus abandoning the test prematurely. Also, users who completed the test usually took less time to answer compared with the other Rooms: the median answer time is about one second less than the overall median answering time. \begin{figure}% \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width = 0.8 \linewidth]{img/text_length2.png} \caption[]{Length of texts in Room 2. Articles' bodies are sorted from the shortest to the longest.}% \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width = 0.8 \linewidth]{img/text_length1.png} \caption[]{Difference between the median answering time taken by users in Room 2 and in all the other rooms on each article, with articles sorted by length. Apparently, there is no linear dependence of answering times on text length. Also, the decision on the longest articles shown in Room 2 took less time in 5 cases out of 7.} \end{subfigure} \caption{Articles' full body length is not correlated with longer answering times.} \label{fig:text_length}% \end{figure} In fact, reading time does not depend on the length of the text, as shown in Fig.~\ref{fig:text_length}: on the top we show the articles' length in characters, which ranges from a few hundred to more than 8,000. The longest articles are far longer than the shortest ones, and they would require much more time to be read entirely; however, this was not observed. In the bar-plot on the bottom of Fig.~\ref{fig:text_length}, we plot the difference between the median answering time in Room 2 and in all the rooms taken together, grouped by question. A bar is blue when the difference is positive, i.e., users in Room 2 took more time to answer than the other users, and red otherwise.
Questions in Fig.~\ref{fig:text_length} are sorted from the shortest to the longest. Answering time does not appear related to the text length, but rather to the news item itself. However, from the \nth{14} position onward, bars are more likely red (negative difference): users decided more quickly on longer texts when they had the opportunity to read them, suggesting that they did not read the texts entirely, possibly because they had made up their minds from the headline anyway, or because the writing style of the full article simply confirmed their preliminary guesses. \begin{figure}% \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width = 0.8 \linewidth]{img/fatigue1.png} % \caption[]{Accuracy in Room 2 does not drop progressively. However, it is constantly lower than the overall accuracy.} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width = 0.8 \linewidth]{img/fatigue2.png} \caption[]{Users in Room 2 start by reading the full text on the first questions, but they quickly lose interest and the reading time falls off. During the second half of the test, Room 2 users answer even more rapidly than users in the other rooms.}% \end{subfigure} \caption{Average accuracy (top) and median answering times (bottom) of users for questions in order of presentation to the users. Values in Room 2 are compared to the corresponding values in all the rooms taken together.} \label{fig:fatigue}% \end{figure} Moreover, users in Room 2 tended to answer the last half of the test more quickly, compared to users who only saw a shorter text. In Fig.~\ref{fig:fatigue}, we report the median answering time both in Room 2 and overall, with questions ordered as the articles were actually presented to the reader. On the first questions of the test, users in Room 2 took more time to answer, apparently reading the long texts. However, their attention collapsed quickly, and from the \nth{7} question onward they spent even less time reading the articles than the users in the other Rooms. The relationship between the users' fatigue and the average scores is not clear, however. The top panel of Fig.~\ref{fig:fatigue} shows the average accuracy of users by question, where questions are again ordered according to the reader's perspective. If a cumulative fatigue were burdening the users of Room 2 more than the others, the average accuracy on the last questions should progressively fall. This effect is not evident, even if a slight degradation of accuracy is visible. The interplay between fatigue, text length and user behaviour must be properly addressed in a dedicated experiment, but these preliminary findings suggest that the toll a long read takes on attention may discourage users from actually reading and processing more informative texts, and that this may lead to even worse decisions. \subsection{Web familiarity, fact checking and young users} \label{subsec:opened_tabs} One common way to assess the veracity of a piece of news is online fact-checking, i.e., searching for external sources that confirm or disprove the news. In Room 3 the users could see the sources of the articles, and a link to the original article was displayed: we observed that 26.5\% of the users clicked at least once on such a link during the test.
However, fact-checking requires the user to check news against external sources; retrieving the original web article can only confirm its existence and give some clues about the publisher (for instance, by reading other news from the same news outlet, or from technological and stylistic features of the website~\cite{Chung2012}). True fact-checking would have required the user to temporarily leave the test and browse other websites in search of confirmation~\cite{Wineburg2017LateralRR}. As anticipated in Sec.~\ref{sec:methodology}, we captured information about the users who left the test for a while, coming back to it a few seconds later, by monitoring when the test tab in the user's browser was no longer the active one. It is worth recalling that we had no way to see what the users were searching for in other tabs, or whether they were actively fact-checking the test articles or just taking a break from the test. Also, in our guidelines we did not explicitly mention the possibility of searching for the answers online. However, estimating how many users fact-check online news is outside the scope of this work, and it is a research problem that must be addressed in its own right. We only acknowledge that opening a new tab and coming back to the test later could be a hint of external research; that we neither encouraged nor discouraged such activity; and that in the end 18.25\% of the users did open another tab at least once during the test. \begin{figure} \centering \includegraphics[width =0.8\linewidth]{img/tab_result.png} \caption{Distribution of scores for those users that opened another tab during the test, and returned to the test a while later, at least once. The score average is higher here than in the overall outcomes (cf. Fig.~\ref{fig:overall_scores}). Tab opening could be a hint of online fact-checking.} \label{fig:tab_user_scores} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{img/who_tabs.png} \caption{Percentage of users that opened a tab at least once, grouped by age. Lollipop size is proportional to the subgroup population in our dataset. On average, 18.25\% of users opened another tab at least once. Teenagers and young adults in the groups 10-20, 20-30, 30-40 opened a tab more often than the others.} \label{fig:who_tabs} \end{figure} Users who opened a tab answered correctly 78.23\% of the time, against 73.82\% for those who did not. Fig.~\ref{fig:tab_user_scores} reports the distribution of scores for those users who opened a tab at least once. The mean of the distribution (15.19 correct guesses out of 20) is close to the one in Room 4, and significantly higher than in Room 1. Opening a tab to search for an answer online is therefore an activity that we associate with possible fact-checking, given the positive effect it has on users' tests. It is also an activity that correlates with the age of the respondents. Fig.~\ref{fig:who_tabs} shows the percentage of users that opened another browser tab, by age. Each age category is represented by the corresponding lollipop, whose length is proportional to the tab opening rate and whose size reflects the number of respondents in that age category. The vertical dashed red bar marks the percentage of users who opened a tab, regardless of their age. Young adults and adults in the 20-30 through 50-60 age groups are the most represented in the data, but their tab opening rates differ.
Users in the age groups 20-30 and 30-40 showed a higher propensity to open another browser tab, while users in the 50-60 age group did so significantly less than average. The other categories are under-represented, but they still confirm the general trend: overall, the chances of a user opening a new tab while responding to the test are higher for teenagers and young adults, and they decrease substantially with age. Incidentally, young adults and teenagers are the users often considered most familiar with technologies such as web browsers, and the ones that could most easily think of opening a parallel search in another tab. Wineburg et al.~\cite{Wineburg2017LateralRR} noted that the users most experienced with online media, when asked to tell reliable online news outlets from unreliable ones under test conditions, tended to open a new tab. This result suggests that users with higher familiarity with the medium are more prone to fact-check dubious information online, carrying out a simultaneous and independent search on the topic on their own, even when not encouraged to do so. Young adults are also the age group with the highest average scores. \section{Conclusions} \label{sec:conclusion} Via \texttt{Fakenewslab}, volunteers were asked to mark as true or false 20 news items, previously selected from online news outlets. We monitored users' activities under five different test environments, each showing the articles with distinct information. Participants able to see the news source did not classify articles better than those who only saw the headlines; rather, the source misled many users, probably because they stopped focusing on the news itself, distracted by the (good or bad) reputation of the outlet. Users who could read the full text performed even worse. Volunteers exposed to others' decisions were helped by the ``wisdom of the crowd'', though this could be a slippery slope: in fact, manipulated random feedback induced a negative herding effect, leading to more misclassifications. Last, young adults were prone to momentarily interrupt the test, coming back to it with more accurate answers, a possible hint of an underlying fact-checking activity. These results may support the drawing up of forthcoming guidelines for annotation tasks to train fake news classification systems, and for properly designing web and social media platforms. Furthermore, although low-credibility online articles posted on social media are often singled out as responsible for mis-/dis-information spreading, high-reputation news outlets are accountable too, because their inaccurate publications can have a greater impact on readers' minds. Further research is needed: e.g., the correlation between reading long texts and poor decisions has to be better addressed. Social influence is an important asset, but we are also aware that informational cascades are vulnerable to manipulation: studying the underlying mechanisms of how users evaluate news online, individually or influenced by other factors, is crucial in fighting information disorders online. \section{Supplementary Materials} \label{app:suppmaterials} \subsection{Ethical and Reproducibility Statement} All the users' data were anonymous: we neither asked for nor collected sensitive information, such as family and given names, email and IP addresses, or any other data that could be used to identify the people who participated in the task. We used session cookies to prevent returning users from taking the test more than once.
In any case, the participants in the test were informed about the kind of data we were collecting and why. All the collected information will be distributed in aggregated form in a dedicated Github repository, for reproducibility purposes. \subsection{Demography of users} \label{app:demography} Along with their evaluation of the news, we asked \texttt{Fakenewslab} users to respond to an optional questionnaire about their personal information. This questionnaire was taken by 64\% of users. In Fig.~\ref{fig:scores_anag} we report the average scores of users depending on some details they declared, such as their age, gender, the number of newspapers they read per day, and their education. Population segments are represented by lollipops. Each lollipop's length is proportional to the accuracy of users in that segment, while its size is proportional to their number. Due to the word-of-mouth spreading of \texttt{Fakenewslab} on social networks, the population shows some biases, as men are over-represented in comparison to women, and the same applies to college degrees (or even PhDs) over high school (and lower) diplomas. Even with these limitations, it is worth noting that mean scores are higher for young adults and adults from 20 to 50 years old; for users who declared reading at least three newspapers per day, over those who read one, two or none; and for users with higher school diplomas and degrees. Although under-represented, it is interesting that users who declared holding a PhD performed slightly worse than users who declared holding a college degree or a high school diploma. \begin{figure}[h] \centering \includegraphics[width = 0.5\textwidth]{img/anag_score.png} \caption{Score distributions by four pieces of personal information that users filled into the optional questionnaire. Each lollipop is sized proportionally to the population.} \label{fig:scores_anag} \end{figure} \subsection{News} \label{app:news} We presented \texttt{Fakenewslab} users with 20 news items, and asked them to decide whether each was true or false. However, as described in Sec.~\ref{sub:news_selection}, the ``true or false'' framing is an oversimplification of a faceted problem. News flagged by fact checkers includes click-bait, fabricated news, news with signs of cherry-picking, crucial omissions that frame a story into a new (malicious) narrative, and other subtle stretches of the truth. On the other hand, true news can also be borderline, open to a broader discussion. In order to offer a complex landscape of debatable information, we did not choose only plausible news from famously reliable sources, which would probably have been easier to spot by contrast with false news, but also selected a few borderline news items, true news taken from blogs, some with even a slight tendency to click-bait. The final sample therefore included some borderline and ambiguous pieces of information to process. In Table~\ref{tab:newslist} we list the 20 news items proposed in \texttt{Fakenewslab}, and in Table~\ref{tab:newstags} we add some information on the sources and the debunking. \begin{table*}[h!]
\begin{center} \begin{tabular}{|c|c|l|} \hline & Id & Headline \\ \hline \hline \parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{True news}}} & 1 & A ``too violent arrest'': policeman sentenced on appeal trial to compensate the criminal\\ \cline{2-3} & 2 & The Islamic headscarf will be part of the Scottish police uniform\\ \cline{2-3} & 3 & Savona, drunk and drug addict policeman runs over and kills an elderly man\\ \cline{2-3} & 4 & Erdogan: The West is more concerned about gay rights than Syria\\ \cline{2-3} & 5 & \# Cop20: the ancient Nazca lines damaged by Greenpeace activists\\ \cline{2-3} & 6 & Rome, policemen attacked and stoned in the Roma camp\\ \cline{2-3} & 7 & Thief tries to break into a house but the dog bites him: he asks for damages\\ \cline{2-3} & 8 & These flowers killed my kitty - don't keep them indoors if you have them\\ \cline{2-3} & 9 & Climate strike, Friday for future Italy launches fundraising\\ \cline{2-3} & 10 & Paris, big fire devastates Notre-Dame: roof and spire collapsed | The firefighters: ``The structure is safe''\\ \hline \hline \parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{False news}}} & 11 & Carola Rackete: ``The German government ordered me to bring migrants to Italy'' \\ \cline{2-3} & 12 & With the agreement of Caen Gentiloni sells Italian waters (and oil) to France \\ \cline{2-3} & 13 & Vinegar eliminated from school canteens because prohibited by the Koran\\ \cline{2-3} & 14 & He kills an elderly Jewish woman at the cry of Allah Akbar: acquitted because he was drugged\\ \cline{2-3} & 15 & The measles virus defeats cancer. But we persist in defeating the measles virus!\\ \cline{2-3} & 16 & 193 million from the EU to free children from the stereotypes of father and mother\\ \cline{2-3} & 17 & Astonishing: parliament passes the law to check our Facebook profiles\\ \cline{2-3} & 18 & Italy. The first illegal immigrant mayor elected: ``This is how I will change Italian politics''\\ \cline{2-3} & 19 & INPS: 60,000 IMMIGRANTS IN RETIREMENT WITHOUT HAVING EVER WORKED\\ \cline{2-3} & 20 & EU: 700 million on 5G, but no risk controls\\ \hline \end{tabular} \end{center} \caption{List of news headlines. In our experiment, we adopted 20 articles corresponding to 10 true and 10 false news items. They were actually published by Italian online outlets, mainstream or otherwise. In all the rooms we reported the headlines and short excerpts in full, but the complete articles were available only in Room 2. In this table, we show the (translated) headlines.} \label{tab:newslist} \end{table*} \begin{table*}[h!]
\begin{center} \begin{tabular}{|c|c|l|c|l|} \hline & Id & Source & Type & Tagged as \\ \hline \hline \parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{True news}}} & 1 & Sostenitori delle Forze dell'Ordine & blacklisted & \\ \cline{2-4} & 2 & Il Giornale & mainstream & \\ \cline{2-4} & 3 & Today & mainstream & \\ \cline{2-4} & 4 & L'antidiplomatico & online newspaper & \\ \cline{2-4} & 5 & Greenme & online newspaper & \\ \cline{2-4} & 6 & Il Messaggero & online newspaper & \\ \cline{2-4} & 7 & CorriereAdriatico & mainstream & \\ \cline{2-4} & 8 & PostVirale & blog & \\ \cline{2-4} & 9 & Adnkronos & mainstream & \\ \cline{2-4} & 10 & TgCom & mainstream & \\ \hline \hline \parbox[t]{2mm}{\multirow{10}{*}{\rotatebox[origin=c]{90}{False news}}} & 11 & IlGiornale & mainstream & Wrong Translation -- Pseudo-Journalism \\ \cline{2-5} & 12 & Diario Del Web & online newspaper & Hoax -- Alarmism\\ \cline{2-5} & 13 & ImolaOggi & blacklisted & Hoax \\ \cline{2-5} & 14 & La Voce del Patriota & blog & Clarifications Needed \\ \cline{2-5} & 15 & Il Sapere è Potere & blacklisted & Disinformation \\ \cline{2-5} & 16 & Jeda News & blacklisted & Well Poisoning \\ \cline{2-5} & 17 & Italiano Sveglia & blacklisted & Hoax -- Disinformation \\ \cline{2-5} & 18 & Il Fatto Quotidaino & blacklisted & Hoax \\ \cline{2-5} & 19 & VoxNews & blacklisted & Unsubstantiated -- Disinformation \\ \cline{2-5} & 20 & Oasi Sana & blog & Well Poisoning -- Pseudo-Journalism \\ \hline \end{tabular} \end{center} \caption{Additional information on the articles. We show the source of each article of Table~\ref{tab:newslist}; in fact, even though the same news item may have been reported by several different outlets, in Room 3 we showed only the source that published the article in the form presented to our volunteers. Moreover, we give some further information, such as the type of news outlet (i.e., mainstream, online newspaper, blog, or blacklisted by some debunking sites), and the tags used by fact-checkers upon correction. Such information was not explicitly disclosed to the users.} \label{tab:newstags} \end{table*} Every news item shown in Table~\ref{tab:newslist} has been fact-checked by at least one outlet among the debunking sites and mainstream newspapers in the following list: \texttt{bufale.net}, \texttt{butac.it}, \texttt{ilpost.it}, \texttt{open.online}. According to the tags adopted by fact-checkers, we classified news items 11 to 19 as false (see~Table~\ref{tab:newstags}). News n. 20 is somewhat of an exception: we found it on an online website focused on health, openly against 5G technology. Apparently, the news did not become viral, and it was ignored by mainstream journalism and also by debunkers. The article suggests that the European Union has not assessed the health risks associated with 5G technology, which is clearly false\footnote{e.g., see the ``Health impact on 5G'' review supported by the European Parliament \texttt{https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690012/\-EPRS\_STU(2021)690012\_EN.pdf}.}, and is full of misunderstandings, to say the least, about how the European Commission's funding processes really work. It should be noticed that the most controversial classification among the ``false'' news is probably article n. 14. The source, ``La Voce del Patriota'', is a nationalist blog. The article refers to a factual event: a drugged Muslim man killed an elderly Jewish woman, and the reported homicide is framed mainly as racial, serving a broader narrative that criminalises immigration.
When the article was published, the trial was only at its first stage, and the acquittal was provisional, pending appeal. However, a correction of the news was available on some debunking sites at the time of our experiment; in fact, further investigations dismissed the racial aggravating factor, stressing the drug-altered state of the murderer. This is an example of how difficult it can be to answer a simple ``true or false'' game. The fact was true, but it was reported incorrectly. Nevertheless, we should recall here that our objective is not to judge people's ability to tell true from false, but to study how a social media platform's contextual information may influence such decisions. Hence, the results we observed by comparing users' activities in the different rooms are the core of our motivation.
\section{Introduction} That measurements generally disturb quantum systems is one of the fundamental aspects of quantum mechanics. The consequences of this effect range from the foundational to the applied, sometimes entering in the guise of measurement ``back-action'', playing a key role in quantum metrology, computation, and information processing \cite{Busch1990, Ozawa2003, Busch2009, Heinosaari2010, Tsang2010, Tsang2012, Rozema2012, Groen2013, Hatridge2013, Busch2013a, Busch2014a, Kaneda2014, Blok2014, Shitara2016, Moller2017, Hamamura-Miyadera, Carmeli2019, Wu2020, DAriano2020, Ipsen2021}. Measurement disturbance can be clearly seen when two observables are measured in succession, and the statistics of the second measurement depend on the first. The conditions under which measurement disturbance arises have now been well studied, and are intimately connected with the theory of \emph{incompatibility} \cite{Heinosaari2015, Guhne2021}. The disturbance caused by measurement can be analysed in operational terms within the framework of the quantum theory of measurement \cite{Busch2016a}, which allows for the most general description of observables, state changes and measurement interactions, modelled as positive-operator-valued measures (POVMs), instruments, and channels respectively. In this paper, we investigate various constraints on the measurement interaction and the impact these have on the disturbance, focussing most attention on the constraint imposed by a conservation law, which we phrase here in purely operational terms. From here we are able to prove a sweeping generalisation of the famous Wigner-Araki-Yanase (WAY) theorem \cite{E.Wigner1952,Busch2010,Araki1960}, which continues to inspire research in a variety of directions (some recent examples are \cite{Miyadera2006a,Kimura2008,Busch2011, Busch2013, uczak2016, Tukiainen2017, Tajima2019b, Sotan2021}), having impact also in other fields of research: for instance in quantum computing \cite{Ozawa2002a,Karasawa2007,Karasawa2009}, the resource theories of asymmetry \cite{Ahmadi2013b} and coherence \cite{Aberg2014,Tajima2019}, the theory of quantum reference frames \cite{Loveridge2017a, Loveridge2020a}, quantum clocks \cite{Gisin2018}, and quantum thermodynamics \cite{Navascues2014a, Mohammady2017, Mohammady2019c, Chiribella2021a, Mohammady2021}. In \sect{sec:pre}, we present the elements of operational quantum theory pertinent to our investigation, introducing also a sesquilinear mapping \cite{Janssens2017} which is utilised frequently throughout the paper. Conservation laws are examined in operational terms, and a distinction is drawn between an ``average'' and a ``full'' conservation law, which plays an important role in interpreting our findings. \sect{sect:quantifying-disturbance} considers disturbance in sequential measurements when the first measurement obeys a conservation law. Independent of the conservation law, we provide quantitative bounds for the minimum disturbance that results when the observables do not commute. Such bounds recover the necessary conditions for non-disturbance for a pair of non-commuting observables previously reported in Ref. \cite{Heinosaari2010}. By contrast, in the presence of a conservation law we obtain additional bounds that depend on whether the conservation law is full or average.
In the full case, we give conditions under which an apparatus preparation with a large coherence in the conserved quantity, quantified by the quantum Fisher information \cite{PETZ2011}, is required for non-disturbance. In \sect{sect:first-kind-measurements} we prove a theorem which generalises WAY in several respects: we do not assume that the observable to be measured is sharp (projection-valued), or that the apparatus pointer observable is sharp, or that the measurement interaction is unitary, or that the apparatus is prepared in a pure state. Here the theorem is given in the form of a single, quantitative bound, capturing many essential features of the original WAY theorem. Finally, in \sect{sect:faithful-fixed-points} and \sect{sect:non-faithful-fixed-points} we consider how the structure of the set of fixed states of the measurement channel imposes restrictions on non-disturbance. Here, a quantitative bound for non-disturbance is presented which further generalises the WAY theorem and indicates that, while necessary, a large coherence in the apparatus preparation is not sufficient for non-disturbance. \section{Preliminaries}\label{sec:pre} In this section we first introduce the elements of operational quantum theory, including the theory of quantum measurement (see, e.g., \cite{PaulBuschMarianGrabowski1995,Busch1996, Heinosaari2011, Busch2016a}). Additionally, we present an operationally motivated framework for describing conservation laws in quantum theory, prising apart two distinct notions of conservation---full and average---whose difference manifests for general channels and which plays a key conceptual role in interpreting our generalisation of the Wigner-Araki-Yanase theorem in the sequel. \subsection{Operators on Hilbert space, operations, and channels} Let ${\mathcal{H}}$ be a complex separable Hilbert space, with ${\mathcal{L}}({\mathcal{H}}) \supset {\mathcal{L}_{s}}({\mathcal{H}}) \supset {\mathcal{L}_{p}}({\mathcal{H}})$ the algebra of bounded (linear) operators, the real vector space of self-adjoint operators, and the (cone of) positive operators on ${\mathcal{H}}$, respectively. We shall denote by $\mathds{1}$ and $\mathds{O}$ the identity and zero operators of ${\mathcal{L}}({\mathcal{H}})$, respectively, and an operator $A \in {\mathcal{L}_{p}}({\mathcal{H}})$ satisfying $\mathds{O} \leqslant A \leqslant \mathds{1}$ will be called an \emph{effect}. We define by ${\mathcal{T}}({\mathcal{H}}) \subseteq {\mathcal{L}}({\mathcal{H}})$ the two-sided ideal of trace-class operators in ${\mathcal{L}}({\mathcal{H}})$. The (normal) state space is the space of positive, unit-trace operators ${\mathcal{S}}({\mathcal{H}}) \subset {\mathcal{T}}({\mathcal{H}})$, and a state $\rho \in {\mathcal{S}}({\mathcal{H}})$ is called faithful if for all $A \in {\mathcal{L}}({\mathcal{H}})$, $\mathrm{tr}[A^*A \rho] = 0 \implies A = \mathds{O}$, which implies that all of the eigenvalues of $\rho$ are non-zero. Transformations of quantum systems are called \emph{operations}, defined as completely positive (CP), trace non-increasing linear maps $\Phi: {\mathcal{T}}({\mathcal{H}}) \to {\mathcal{T}}({\mathcal{K}})$. Among the operations are the \emph{channels}, which preserve the trace.
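As a concrete numerical illustration of these notions (our own sketch, not part of the formal development), the following checks that a qubit map given in the standard Kraus form of a completely positive map is trace-preserving, and hence a channel; the amplitude-damping Kraus operators used here are an arbitrary textbook example.
\begin{verbatim}
import numpy as np

# Amplitude-damping Kraus operators with decay parameter g = 0.3.
g = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
kraus = [K0, K1]

def Phi(rho):
    # The map T -> sum_i K_i T K_i^* is completely positive;
    # it preserves the trace iff sum_i K_i^* K_i equals the identity.
    return sum(K @ rho @ K.conj().T for K in kraus)

completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, np.eye(2))  # trace-preserving: a channel

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # the state |+><+|
print(np.trace(Phi(rho)).real)  # prints 1.0: the trace is preserved
\end{verbatim}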
For any operation $\Phi : {\mathcal{T}}({\mathcal{H}}) \to {\mathcal{T}}({\mathcal{K}})$, there is an associated (``Heisenberg picture'') dual operation $\Phi^* : {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$, defined via the duality $\mathrm{tr}[\Phi^*(A) T] = \mathrm{tr}[A \Phi(T)]$ for all $A\in {\mathcal{L}}({\mathcal{K}})$ and $T \in {\mathcal{T}}({\mathcal{H}})$. $\Phi^*$ is completely positive and sub-unital, and unital exactly when $\Phi$ is trace-preserving. Unital operations $\Phi^*$ will also be referred to as channels. Operations allow for the construction of an ``operator-valued inner product'', which will be frequently used in this paper. For an operation $\Phi^*: {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$, we define the sesquilinear mapping $\langle\langle \cdot |\cdot \rangle \rangle : {\mathcal{L}}({\mathcal{K}}) \times {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$ by \begin{align}\label{eq:sesquilinear-map-defn} \langle \langle A|B \rangle \rangle := \Phi^*(A^*B) - \Phi^*(A^*) \Phi^*(B), \end{align} to hold for all $A, B \in {\mathcal{L}}({\mathcal{K}})$. The following lemma shows that such a map mimics several important properties of an inner product. \begin{lemma}\label{lemma:cauchy-schwarz} The sesquilinear mapping defined in \eq{eq:sesquilinear-map-defn} satisfies: (i) $\<\< A | B + \lambda C\>\> = \<\<A|B\>\> + \lambda \<\<A| C\>\>$ for all $\lambda \in \mathds{C}$; (ii) $\<\< A| B \>\> = \<\< B| A \>\>^*$; (iii) $\<\< A| A \>\> \geqslant \mathds{O}$; and (iv) the Cauchy-Schwarz inequality \begin{align*} \langle \langle A|B\rangle \rangle \langle \langle B|A\rangle \rangle \leqslant \Vert \langle \langle B | B \rangle \rangle \Vert \langle \langle A|A \rangle \rangle . \end{align*} \end{lemma} \begin{proof} (i) trivially follows from linearity of operations, while (ii) follows from the fact that an operation preserves the involution, i.e., $\Phi^*(A)^* = \Phi^*(A^*)$. (iii) follows from Kadison's inequality, or the two-positivity of CP maps \cite{Kadison1952,Choi1974}. To show this, note that by Stinespring's dilation theorem we may write $\Phi^*(A) = V^*(A \otimes \mathds{1}\sub{{\mathcal{K}}'}) V$, where $V : {\mathcal{H}} \to {\mathcal{K}}\otimes {\mathcal{K}}'$ is a linear operator. Since $\Phi^*$ is sub-unital, it must hold that $\Phi^*(\mathds{1}\sub{{\mathcal{K}}}) = V^*(\mathds{1}\sub{{\mathcal{K}}}\otimes \mathds{1}\sub{{\mathcal{K}}'} )V \equiv V^* V \leqslant \mathds{1}\sub{{\mathcal{H}}}$, with equality if $\Phi^*$ is a channel, in which case $V$ is an isometry. By the C* identity we therefore have $\|V V^*\| = \|V^* V\| \leqslant 1$, which implies that $\mathds{O} \leqslant V V^* \leqslant \mathds{1}\sub{{\mathcal{K}}}\otimes\mathds{1}\sub{{\mathcal{K}}'}$. By \eq{eq:sesquilinear-map-defn} we may therefore write \begin{align}\label{eq:stinespring-sesquilinear} \<\< A| B \>\> &= V^*(A^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \pi (B \otimes \mathds{1}\sub{{\mathcal{K}}'})V, \end{align} where $\pi = \pi^* := \sqrt{\mathds{1}\sub{{\mathcal{K}}}\otimes\mathds{1}\sub{{\mathcal{K}}'} - V V^*}$. That $\<\<A|A\>\> \geqslant \mathds{O}$ trivially follows. Finally, we prove the Cauchy-Schwarz inequality which, for the case of channels, was proven by Janssens in Lemma 1 of Ref. \cite{Janssens2017}.
The proof for the case of general operations is identical; by \eq{eq:stinespring-sesquilinear} we may write \begin{align*} \langle \langle A|B\rangle \rangle \langle \langle B|A\rangle \rangle &= V^*(A^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \pi (B \otimes \mathds{1}\sub{{\mathcal{K}}'})V V^*(B^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \pi (A \otimes \mathds{1}\sub{{\mathcal{K}}'})V \nonumber \\ & \leqslant \| \pi (B \otimes \mathds{1}\sub{{\mathcal{K}}'})V V^*(B^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \| V^*(A^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \pi (A \otimes \mathds{1}\sub{{\mathcal{K}}'})V \nonumber \\ & = \|V^*(B^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \pi (B \otimes \mathds{1}\sub{{\mathcal{K}}'})V \| V^*(A^* \otimes \mathds{1}\sub{{\mathcal{K}}'})\pi^* \pi (A \otimes \mathds{1}\sub{{\mathcal{K}}'})V \nonumber \\ & = \|\<\< B| B\>\> \| \<\<A|A \>\>. \end{align*} In the second line we have used the fact that for any self-adjoint operator $A \in {\mathcal{L}_{s}}({\mathcal{H}})$, it holds that $B^* A B \leqslant \| A\| B^* B$ for all $B \in {\mathcal{L}}({\mathcal{H}})$, while in the third line we have used the C* identity $\|A A^* \| = \| A^* A\|$ for all $A \in {\mathcal{L}}({\mathcal{H}})$. \end{proof} Note that the sesquilinear mapping in \eq{eq:sesquilinear-map-defn} does not satisfy the positive definiteness property in general, that is, $\langle \langle A| A\rangle \rangle = \mathds{O} $ does not imply $A=\mathds{O}$. This plays an important role in the multiplicability theorem \cite{Choi1974}, which can be seen as a consequence of \lemref{lemma:cauchy-schwarz}: \begin{corollary}\label{corollary:multiplicability} Let $\Phi^*: {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$ be an operation, and consider an operator $B \in {\mathcal{L}}({\mathcal{K}})$. The following hold: \begin{enumerate}[(i)] \item If $\Phi^*(B^* B) = \Phi^*(B^*) \Phi^*(B)$, then $\Phi^*(A B) = \Phi^*(A)\Phi^*(B)$ for all $A \in {\mathcal{L}}({\mathcal{K}})$. \item If $\Phi^*(B B^*) = \Phi^*(B) \Phi^*(B^*)$, then $\Phi^*(B A) = \Phi^*(B)\Phi^*(A)$ for all $A \in {\mathcal{L}}({\mathcal{K}})$. \end{enumerate} \end{corollary} \begin{proof} Let us first prove (i). If $ \<\< B| B\>\> = \Phi^*(B^* B) - \Phi^*(B^*) \Phi^*(B)=\mathds{O}$, then $\|\<\< B|B \>\>\|=0$. Therefore, by \lemref{lemma:cauchy-schwarz} we have for all $A \in {\mathcal{L}}({\mathcal{K}})$ the following: \begin{align*} \mathds{O} \leqslant \<\< A^*| B\>\> \<\<A^* | B \>\>^* = \<\< A^*| B\>\> \<\<B | A^* \>\> \leqslant \mathds{O}. \end{align*} This implies that $\<\< A^*| B\>\> = \Phi^*(A B) - \Phi^*(A)\Phi^*(B) = \mathds{O}$. Similarly for (ii), $ \<\< B^*| B^*\>\> = \Phi^*(B B^*) - \Phi^*(B) \Phi^*(B^*)=\mathds{O}$ implies that for all $A \in {\mathcal{L}}({\mathcal{K}})$ we have \begin{align*} \mathds{O} \leqslant \<\< B^*| A\>\>^* \<\< B^*| A\>\> = \<\< A| B^*\>\> \<\<B^* | A \>\> \leqslant \mathds{O}, \end{align*} which implies that $\<\< B^*| A\>\> = \Phi^*(B A) - \Phi^*(B)\Phi^*(A) = \mathds{O}$. \end{proof} \lemref{lemma:cauchy-schwarz} also has the following useful consequence: \begin{corollary}\label{corollary:Norm-commutator-inequality} Let $\Phi^*: {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$ be an operation. 
Given the sesquilinear mapping defined in \eq{eq:sesquilinear-map-defn}, for all $A,B \in {\mathcal{L}}({\mathcal{K}})$ it holds that \begin{align}\label{eq:norm-channel-commutator} \|[\Phi^*(A), \Phi^*(B)] - \Phi^*([A,B]) \| \leqslant &\| \<\<A|A\>\> \|^{\frac{1}{2}} \| \<\<B^*|B^*\>\> \|^{\frac{1}{2}} + \|\<\<A^*|A^*\>\> \|^{\frac{1}{2}} \| \<\<B|B\>\> \|^{\frac{1}{2}}. \end{align} \end{corollary} \begin{proof} Let us first write \begin{align*} [\Phi^*(A), \Phi^*(B)] - \Phi^*([A,B]) = \<\<B^*|A\>\> - \<\<A^*|B\>\>, \end{align*} which gives \begin{align}\label{eq:norm-channel-commutator-1} \| [\Phi^*(A), \Phi^*(B)] - \Phi^*([A,B])\| \leqslant \|\<\<B^*|A\>\>\| + \|\<\<A^*|B\>\>\|. \end{align} By \lemref{lemma:cauchy-schwarz} and the C* identity $\| A^* \| = \| A \| = \| A^* A\|^{\frac{1}{2}} = \| A A^* \|^{\frac{1}{2}}$ for all $A \in {\mathcal{L}}({\mathcal{H}})$, we therefore have \begin{align*} \|\<\<B^*|A\>\>\| & = \|\<\<B^*|A\>\>\<\<A|B^*\>\> \|^{\frac{1}{2}} \leqslant \| \<\<A|A\>\> \|^{\frac{1}{2}} \| \<\<B^*|B^*\>\> \|^{\frac{1}{2}},\nonumber \\ \|\<\<A^*|B\>\>\| &= \|\<\<B|A^*\>\>\<\<A^*|B\>\> \|^{\frac{1}{2}} \leqslant \|\<\<A^*|A^*\>\> \|^{\frac{1}{2}} \| \<\<B|B\>\> \|^{\frac{1}{2}}. \end{align*} Inserting the above inequalities in \eq{eq:norm-channel-commutator-1} gives the bound in \eq{eq:norm-channel-commutator}. \end{proof} Finally, we present the following useful properties of operations: \begin{lemma}\label{lemma:unsharp-disturbance-bound} Let $\Phi^* : {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$ be an operation. Consider the effects $A\in {\mathcal{L}_{p}}({\mathcal{K}})$ and $B \in {\mathcal{L}_{p}}({\mathcal{H}})$. It holds that \begin{align*} \|\Phi^*(A^2) - \Phi^*(A)^2 \| \leqslant 2 \|\Phi^*(A) - B \| + \|B - B^2 \|. \end{align*} \end{lemma} \begin{proof} This inequality (for channels) was given as Eq.(4) in Ref. \cite{Miyadera2008}; the proof below follows Theorem 2 of Ref. \cite{Miyadera2015e}. Let us first define $C:= \Phi^*(A) - B$ for notational simplicity. Now, given that $\mathds{O} \leqslant A \leqslant \mathds{1}\sub{{\mathcal{K}}}$ implies $A^2 \leqslant A$, we may write \begin{align*} \Phi^*(A^2) - \Phi^*(A)^2 &\leqslant \Phi^*(A) - \Phi^*(A)^2 \nonumber \\ & = [C, B] + C\big(\mathds{1}\sub{{\mathcal{H}}} - \Phi^*(A) - B \big) + B - B^2, \end{align*} and so we have the bound \begin{align*} \|\Phi^*(A^2) - \Phi^*(A)^2 \| & \leqslant \| [C , B]\| + \|C\big(\mathds{1}\sub{{\mathcal{H}}} - \Phi^*(A) - B \big)\| + \|B - B^2\| \nonumber \\ &\leqslant \| [C , B]\| + \|C\| \|\mathds{1}\sub{{\mathcal{H}}} - \Phi^*(A) - B\| + \|B - B^2\|\nonumber \\ & \leqslant \| [C , B]\| + \| C\| + \|B - B^2\| \nonumber \\ & \leqslant 2 \|C\| + \|B - B^2\|. \end{align*} In the third line we use the fact that $A$ and $B$ are effects which, given that $\Phi^*$ is an operation, gives $\mathds{O} \leqslant \Phi^*(A) + B \leqslant 2\mathds{1}\sub{{\mathcal{H}}} $. This in turn implies that $\|\mathds{1}\sub{{\mathcal{H}}} - \Phi^*(A) - B\|\leqslant 1$. The inequality in the final line follows from Robertson's uncertainty relation, by which we have \begin{align*} \| [C , B]\| = \sup_{\|\phi\|=1} |\<\phi | \mathfrak{i}[C, B] \phi \>| & \leqslant 2 \sqrt{\<\phi|C^2 \phi\> - \<\phi| C\phi\>^2} \sqrt{\<\phi| B^2 \phi\> - \<\phi| B \phi\>^2} \nonumber \\ & \leqslant 2 \| C\| \sqrt{\<\phi| B^2 \phi\> - \<\phi| B \phi\>^2}\nonumber \\ & \leqslant \| C\|. 
\end{align*} The final line follows from the fact that $\mathds{O} \leqslant B \leqslant \mathds{1}\sub{{\mathcal{H}}}$ implies $\sqrt{\<\phi| B^2 \phi\> - \<\phi| B \phi\>^2} \leqslant 1/2$. \end{proof} \begin{lemma}\label{lemma:operation-annihilation} Let $\Phi^* : {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$ be an operation. Assume that $\Phi^*(A)= \mathds{O}$ for some $A \in {\mathcal{L}_{p}}({\mathcal{K}})$. It holds that \begin{align*} \Phi^*(A B) = \Phi^*(B A) = \mathds{O} \end{align*} for all $B \in {\mathcal{L}}({\mathcal{K}})$. \end{lemma} \begin{proof} First, let us note that for any $B \in {\mathcal{L}_{p}}({\mathcal{K}})$, we have \begin{align*} \mathds{O} \leqslant \Phi^*(A B A ) \leqslant \| B\| \Phi^*(\sqrt{A}A\sqrt{A}) \leqslant \| B\| \| A\| \Phi^*(A ) = \mathds{O}, \end{align*} and so $\Phi^*(A B A )=\mathds{O}$. By the two-positivity of CP maps, it follows that for any $B \in {\mathcal{L}}({\mathcal{K}})$ we have \begin{align*} \mathds{O} = \Phi^*(A B^* B A ) \geqslant \Phi^*(A B^*) \Phi^*(B A) \geqslant \mathds{O}. \end{align*} The claim immediately follows. \end{proof} \subsection{Fixed points of channels} For channels $\Phi : {\mathcal{T}}({\mathcal{H}}) \to {\mathcal{T}}({\mathcal{H}})$, and their duals $\Phi^* : {\mathcal{L}}({\mathcal{H}}) \to {\mathcal{L}}({\mathcal{H}})$, we define the fixed-point sets as \begin{align*} &{\mathcal{F}}(\Phi) := \{T \in {\mathcal{T}}({\mathcal{H}}) : \Phi(T) = T\}, &{\mathcal{F}}(\Phi^*) := \{A \in {\mathcal{L}}({\mathcal{H}}) : \Phi^*(A) = A\}. \end{align*} Linearity of $\Phi^*$ ensures that ${\mathcal{F}}(\Phi^*)$ is closed under linear combinations, and because $\Phi^*$ preserves the involution, ${\mathcal{F}}(\Phi^*)^* = {\mathcal{F}}(\Phi^*)$. In general, ${\mathcal{F}}(\Phi^*)$ is not closed under multiplication. However, the following lemma provides a useful criterion under which multiplicative closure is satisfied, in which case ${\mathcal{F}}(\Phi^*)$ is a $*$-algebra. In fact, as we shall soon see, it is a von Neumann algebra \cite{Bratteli-operator-algebras-1}. \begin{lemma}[Lindblad]\label{lemma:Lindblad} Assume that $ {\mathcal{F}}(\Phi)$ contains a faithful state. Then ${\mathcal{F}}(\Phi^*)$ is a von Neumann algebra. \end{lemma} \begin{proof} Suppose $ B \in {\mathcal{F}}(\Phi^*)$, and define the operator $\Phi^*(B^*B)- \Phi^*(B^*) \Phi^*(B)=\Phi^*(B^*B)-B^*B$, which is positive due to the two-positivity of CP maps. Let ${\mathcal{F}}(\Phi)$ contain a faithful state $\omega$. Then we have \begin{eqnarray*} \mathrm{tr}[\omega(\Phi^*(B^*B)-B^*B)]=\mathrm{tr}[\omega(B^*B-B^*B)]=0, \end{eqnarray*} which implies that $\Phi^*(B^*B) = B^*B$. \corref{corollary:multiplicability} therefore implies that for all $A\in{\mathcal{L}}({\mathcal{H}})$, \begin{eqnarray*} \Phi^*( A B) = \Phi^*(A) B. \end{eqnarray*} Therefore, if $A\in {\mathcal{F}}(\Phi^*)$, then $\Phi^*( A B) = A B$, and so ${\mathcal{F}}(\Phi^*)$ is closed under multiplication and therefore a $*$-algebra. Finally, if ${\mathcal{F}}(\Phi^*)$ is an algebra, then ${\mathcal{F}}(\Phi^*) = \{K_i, K_i^*\}' := \{A \in {\mathcal{L}}({\mathcal{H}}) : [K_i, A] = [K_i^*, A] = \mathds{O} \, \forall i\}$, with $\{K_i\}$ any Kraus representation of $\Phi$ \cite{Kraus1983}, making ${\mathcal{F}}(\Phi^*)$ a von Neumann algebra (as the commutant of a self-adjoint subset of ${\mathcal{L}}({\mathcal{H}})$) \cite{Bratteli1998}. 
\end{proof} If ${\mathcal{F}}(\Phi^*)$ is a von Neumann algebra, it holds that for any self-adjoint operator $A \in {\mathcal{F}}(\Phi^*)$, the spectral measure of $A$ is also contained in ${\mathcal{F}}(\Phi^*)$. In the case that $A$ has a discrete spectrum, i.e., $A = \sum_n \lambda_n P_n$, this implies that $\{P_n\} \subset {\mathcal{F}}(\Phi^*)$. \subsection{Conservation laws in quantum mechanics} Here, we give two distinct notions of what it could mean operationally for a channel to obey a conservation law: ``average'', in which expectation values are preserved, and ``full'', in which all moments are preserved; these will be seen to agree in the unitary case but not in general. In the first case, a conservation law can be defined by equality of expectation values before and after the action of the channel, i.e., average conservation: \begin{definition}\label{defn:average-conservation} A channel $\Phi: {\mathcal{T}}({\mathcal{H}}) \to {\mathcal{T}}({\mathcal{H}})$ conserves a self-adjoint operator $N\in {\mathcal{L}_{s}}({\mathcal{H}})$ on average if for all $\rho \in {\mathcal{S}}({\mathcal{H}})$, \begin{align*} \mathrm{tr}[N \Phi(\rho)] = \mathrm{tr}[N \rho], \end{align*} i.e., $N \in {\mathcal{F}}(\Phi^*)$. \end{definition} However, this does not rule out the higher moments of the ``conserved'' quantity changing their values. Thus we may strengthen the definition in the following way: \begin{definition}\label{defn:conservation-law} A channel $\Phi: {\mathcal{T}}({\mathcal{H}}) \to {\mathcal{T}}({\mathcal{H}})$ fully conserves a self-adjoint operator $N \in {\mathcal{L}_{s}}({\mathcal{H}})$ if for all $\rho \in {\mathcal{S}}({\mathcal{H}})$ and $k \in \nat$, \begin{align*} \mathrm{tr}[N^k \Phi(\rho)] = \mathrm{tr}[N^k \rho], \end{align*} i.e., $N^k \in {\mathcal{F}}(\Phi^*)$ for all $k \in \nat$. \end{definition} We now show that full conservation is in fact equivalent to just the first two moments being conserved: \begin{prop}\label{prop:conservation-multiplication} Let $\Phi : {\mathcal{T}}({\mathcal{H}}) \to {\mathcal{T}}({\mathcal{H}})$ be a channel, and let $N \in {\mathcal{L}_{s}}({\mathcal{H}})$ be a self-adjoint operator. The following statements are equivalent: \begin{enumerate}[(i)] \item $\Phi$ fully conserves $N$. \item $\Phi^*(N^k) = N^k$ for $k=1,2$. \end{enumerate} \end{prop} \begin{proof} (i) $\implies$ (ii) is trivial, and so we now show that (ii) $\implies$ (i). Assume that $\Phi^*(N^k) = N^k$ for $k=1,2$. For any $k \geqslant 2$, \corref{corollary:multiplicability} implies that $\Phi^*(N^{k+1}) = \Phi^*(N^{k} N) = \Phi^*(N^k) N$. Hence, if $\Phi^*(N^k) = N^k$ for some $k \geqslant 2$, it follows that $\Phi^*(N^{k+1}) = N^{k+1}$. Since $\Phi^*(N^2) = N^2$ holds by assumption, the claim follows by induction. \end{proof} We note that the condition $\Phi^*(N^k) = N^k$ for $k=1,2$ was taken as a potential definition of conservation \emph{simpliciter} in Ref. \cite{Luo2007}. However, the authors of that work conjectured that, in finite dimensions, the condition $\Phi^*(N^2) = N^2$ may be dropped, and that (in our formulation) both average and full conservation are equivalent. We shall now address this issue: by a simple counter-example, we shall show that average and full conservation are in fact not equivalent for general channels, even in finite dimensions. However, in the special case of unitary channels, such equivalence holds even in infinite dimensions. Let us consider a system ${\mathcal{H}} \simeq \mathds{C}^3$ with an orthonormal basis $\{|-1\>, |0\>, |1\>\}$.
Now consider $N=\sum_n n |n\rangle \langle n| \equiv |1\rangle \langle 1| - |-1\rangle \langle -1|$, and a channel $\Phi$ whose dual map $\Phi^*$ is defined by \begin{align*} \Phi^*(A) = \<1|A|1\> |1\> \<1| +\< -1|A|-1\> |-1\>\<-1| + \frac{1}{2}\mathrm{tr}[(|1\rangle\langle 1|+ |-1\rangle \langle -1|) A]|0\>\< 0|, \end{align*} to hold for all $A \in {\mathcal{L}}({\mathcal{H}})$. It is simple to verify that $\Phi^*(N)=N$, that is, $\Phi$ conserves $N$ on average. However, $\Phi^*(N^2) = \mathds{1} \ne N^2$, and so $\Phi$ does not fully conserve $N$. We shall now show that in the special case of unitary channels, average conservation and full conservation are equivalent: \begin{lemma}\label{lemma:conservation-unitary} Let $\Phi(\cdot) := U(\cdot) U^*$ be a unitary channel, with $U \in {\mathcal{L}}({\mathcal{H}})$ a unitary operator, and let $N\in {\mathcal{L}_{s}}({\mathcal{H}})$ be a self-adjoint operator. The following statements are equivalent: \begin{enumerate}[(i)] \item $[U, N]=\mathds{O}$. \item $\Phi$ conserves $N$ on average. \item $\Phi$ fully conserves $N$. \end{enumerate} \end{lemma} \begin{proof} (i) $\iff$ (ii), (i) $\implies$ (iii), and (iii) $\implies$ (ii) are trivial. To show (iii) $\implies$ (i), let us first write \begin{align*} [U, N]^*[U,N] = \Phi^*(N^2) + N^2 - \Phi^*(N)N - N \Phi^*(N). \end{align*} If $\Phi$ fully conserves $N$, then the right hand side vanishes. But since the left hand side is a positive operator, it holds that $[U, N]=\mathds{O}$. Finally, we shall show that (ii) $\implies$ (iii). Since $\Phi$ is unitary, $\Phi^*(A^*) \Phi^*(B) = U^* A^* U U^* B U = U^* A^* B U = \Phi^*(A^* B)$ holds for all $A, B\in {\mathcal{L}}({\mathcal{H}})$. If $\Phi^*(N) = N$, it follows that $\Phi^*(N^2) = N^2$. The claim follows from \propref{prop:conservation-multiplication}. \end{proof} \subsection{Observables, instruments, and measurement schemes} In general, the state changes, or \emph{disturbance}, caused by measurement are captured by the notion of an \emph{instrument}, or \emph{operation valued measure} \cite{Davies1970, Ozawa2000, Ozawa2001, Pellonpaa2013, Pellonpaa2013a}. In this subsection, we provide some background on instruments, observables and measurement schemes, as part of the quantum theory of measurement. Given a quantum system ${\mathcal{S}}$, with Hilbert space ${\mathcal{H}\sub{\s}}$, an \emph{observable} of ${\mathcal{S}}$ is represented by a \emph{normalised positive operator valued measure} (POVM) $\mathsf{E}: \Sigma\to {\mathcal{L}_{p}}({\mathcal{H}\sub{\s}})$, where $\Sigma$ is a $\sigma$-algebra of subsets of some value space ${\mathcal{X}}$, representing possible outcomes of a measurement of $\mathsf{E}$. For any $X\in \Sigma$, the positive operator $\mathds{O} \leqslant \mathsf{E}(X) \leqslant \mathds{1}\sub{\s}$ is referred to as an effect of $\mathsf{E}$. $\mathsf{E}$ is $\sigma$-additive on disjoint elements of $\Sigma$, and normalisation implies that $\mathsf{E}({\mathcal{X}})$ is the identity operator on ${\mathcal{H}\sub{\s}}$. {\it Discrete observables} are those for which ${\mathcal{X}} = \{x_1, x_2, \dots\}$ is countable. In such a case, $\mathsf{E}$ can be identified with the set of effects $\{\mathsf{E}(x) \equiv \mathsf{E}(\{x\}) \in {\mathcal{L}_{p}}({\mathcal{H}\sub{\s}}) : x \in {\mathcal{X}}\}$. Unless stated otherwise, observables will be assumed to be discrete.
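To make these definitions concrete, the following minimal numerical sketch (an informal Python/NumPy aside; the smeared observable and the value $\lambda = 0.7$ are our own illustrative choices) constructs a discrete unsharp qubit observable with effects $\lambda |\pm\>\<\pm| + (1-\lambda)\frac{\mathds{1}}{2}$, of the kind that reappears in an example later in this section, and checks positivity, normalisation, and the degree of unsharpness of its effects:
\begin{verbatim}
# Illustrative check that a family of effects forms a discrete observable
# (POVM): each effect satisfies O <= E(x) <= 1, and the effects sum to the
# identity. Input: a smeared ("unsharp") two-outcome qubit observable.
import numpy as np

lam = 0.7                                     # smearing parameter, 0 <= lam <= 1
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
I2 = np.eye(2)

effects = [lam * np.outer(v, v) + (1 - lam) * I2 / 2 for v in (plus, minus)]

for E in effects:
    assert np.all(np.linalg.eigvalsh(E) >= -1e-12)        # E(x) >= O
    assert np.all(np.linalg.eigvalsh(I2 - E) >= -1e-12)   # E(x) <= 1
assert np.allclose(sum(effects), I2)                      # normalisation
print("unsharpness ||E - E^2|| =",
      max(np.linalg.norm(E - E @ E, 2) for E in effects))
\end{verbatim}
For any $0 < \lambda < 1$ the reported unsharpness is strictly positive, consistent with the bound $0 \leqslant \|E - E^2\| \leqslant 1/4$ quoted below.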
Combined with states, observables give rise to the probabilities \begin{align*} p^{\mathsf{E}}_\rho(x) := \mathrm{tr}[\mathsf{E}(x) \rho], \end{align*} holding for all $\rho \in \mathcal{S}({\mathcal{H}\sub{\s}})$ and all $x \in {\mathcal{X}}$, interpreted as the probability of observing outcome $x$ when the observable $\mathsf{E}$ is measured in the state $\rho$. If $\mathsf{E}$ is a POVM acting in ${\mathcal{H}\sub{\s}}$, the {\it commutant} of $\mathsf{E}$ is denoted by $\mathsf{E}' := \{A \in {\mathcal{L}}({\mathcal{H}\sub{\s}}) : [\mathsf{E}(x), A]=\mathds{O} \, \forall \, x\in {\mathcal{X}}\}$. Since $\mathsf{E} =\mathsf{E}^*$ is a self-adjoint set, $\mathsf{E}'$ is a von Neumann algebra, and $\mathsf{E}'' \equiv (\mathsf{E}')'$ is the smallest von Neumann algebra containing $\mathsf{E}$ (i.e., it is the von Neumann algebra generated by $\mathsf{E}$). For any $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ such that $A \in \mathsf{E}'$, we write $[\mathsf{E}, A]=\mathds{O}$. Similarly, for any observable $\mathsf{F} :=\{\mathsf{F}(y): y \in {\mathcal{Y}}\}$ such that $\mathsf{F} \subset \mathsf{E}'$, we shall write $[\mathsf{E}, \mathsf{F}]=\mathds{O}$. Among the observables are those that are \emph{commutative}, meaning that $\mathsf{E} \subset \mathsf{E}'$ (that is, all the effects $\mathsf{E}(x)$ mutually commute). Among the commutative observables are the {\it sharp} observables, which satisfy the additional condition that for all $x, y \in {\mathcal{X}}$, $\mathsf{E}(x) \mathsf{E}(y) = \delta_{x,y}\mathsf{E}(x)$, i.e., $\mathsf{E}(x)$ are mutually orthogonal projection operators. These observables correspond to self-adjoint operators through the spectral theorem. Observables that are not sharp will be called {\it unsharp}, and similarly any effect $E$ which is not a projection will be called unsharp. The unsharpness of $E$ can be quantified through the operator norm as $0\leqslant \| E - E^2\|\leqslant 1/4$, which vanishes exactly when $E$ is a projection. Finally, an observable $\mathsf{E}$ is defined as ``norm-1'' if $\| \mathsf{E}(x)\|=1$ for every $x$ for which $\mathsf{E}(x) \ne \mathds{O}$, and we note that while sharp observables are trivially norm-1, this property may also be enjoyed by some unsharp observables. \begin{figure}[htbp!] \begin{center} \includegraphics[width=0.5\textwidth]{Instrument} \vspace*{-0.2cm} \caption{An instrument measures an observable $\mathsf{E}$ of the system ${\mathcal{S}}$, and also transforms the system conditional on registering a given outcome. The system, initially prepared in an arbitrary state $\rho$, enters the instrument which then registers outcome $x$ with probability $p^\mathsf{E}_\rho(x) := \mathrm{tr}[\mathsf{E}(x) \rho] = \mathrm{tr}[{\mathcal{I}}_x(\rho)]$. Subsequently, the instrument transforms the system to the (non-normalised) state ${\mathcal{I}}_x(\rho)$. }\label{fig:Instrument} \vspace*{-0.5cm} \end{center} \end{figure} Though states and observables together describe the totality of the measurement statistics, they do not suffice to determine other interesting properties of a measurement, for instance the form of the associated state changes. To this end, we make use of the notion of an \emph{instrument} \cite{Davies1970}. A discrete instrument is a collection of operations $ {\mathcal{I}} := \{{\mathcal{I}}_x \equiv {\mathcal{I}}_{\{x\}} : x\in {\mathcal{X}}\}$ such that ${\mathcal{I}}_{\mathcal{X}}(\cdot) := \sum_{x\in {\mathcal{X}}} {\mathcal{I}}_x(\cdot)$ is a channel.
Throughout, we shall always assume that ${\mathcal{I}}$ acts in ${\mathcal{H}\sub{\s}}$, that is, ${\mathcal{I}}_x: {\mathcal{T}}({\mathcal{H}\sub{\s}}) \to {\mathcal{T}}({\mathcal{H}\sub{\s}})$. Each instrument is associated with a unique observable $\mathsf{E}$ via ${\mathcal{I}}^*_x(\mathds{1}\sub{\s}) = \mathsf{E}(x)$, which implies that $p^{\mathsf{E}}_\rho(x) = \mathrm{tr}[\mathsf{E}(x)\rho] = \mathrm{tr}[{\mathcal{I}}_x(\rho)]$. We refer to ${\mathcal{I}}$ as an $\mathsf{E}$-compatible instrument, or an $\mathsf{E}$-instrument for short, and to ${\mathcal{I}}_{\mathcal{X}}(\cdot)$ as the associated $\mathsf{E}$-channel. ${\mathcal{I}}_x(\rho)$ is interpreted as the non-normalised state after a measurement of $\mathsf{E}$ has taken place and the outcome $x$ has been registered, and ${\mathcal{I}}_{\mathcal{X}}(\rho)$ is the normalised state after a non-selective measurement. A schematic representation of an instrument is given in \fig{fig:Instrument}. We note that for every discrete observable $\mathsf{E}$, there are infinitely many $\mathsf{E}$-compatible instruments; every $\mathsf{E}$-instrument ${\mathcal{I}}$ can be constructed as the set of operations $\{\Phi_x \circ{\mathcal{I}}^L_x : x\in {\mathcal{X}}\}$ \cite{Ozawa2001, Pellonpaa2013a}, where $\Phi_x : {\mathcal{T}}({\mathcal{H}\sub{\s}}) \to {\mathcal{T}}({\mathcal{H}\sub{\s}})$ are arbitrary channels, and ${\mathcal{I}}^L$ is the L\"uders instrument for $\mathsf{E}$ \cite{Luders2006}, defined as \begin{align}\label{eq:luders} &{\mathcal{I}}^L_x(T) := \sqrt{\mathsf{E}(x)} T \sqrt{\mathsf{E}(x)}, &{{\mathcal{I}}^L_x}^*(A) := \sqrt{\mathsf{E}(x)} A \sqrt{\mathsf{E}(x)}, \end{align} to hold for all $x\in {\mathcal{X}}$, $T\in {\mathcal{T}}({\mathcal{H}\sub{\s}})$, and $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$. \begin{figure}[htbp!] \begin{center} \includegraphics[width=0.4\textwidth]{Measurement-Scheme} \vspace*{-0.2cm} \caption{An $\mathsf{E}$-instrument ${\mathcal{I}}$ is implemented on the system ${\mathcal{S}}$ via a measurement scheme. The system, initially prepared in an arbitrary state $\rho$, and a measurement apparatus $\aa$, prepared in a fixed state $\xi$, undergo a joint evolution by the channel ${\mathcal{E}}$. Subsequently, a pointer observable $\mathsf{Z}$ of the apparatus is measured. With probability $p^\mathsf{E}_\rho(x) := \mathrm{tr}[\mathsf{E}(x) \rho] = \mathrm{tr}[{\mathcal{I}}_x(\rho)]$ the apparatus registers outcome $x$, thereby transforming the system to the non-normalised state ${\mathcal{I}}_x(\rho)$. }\label{fig:Measurement-Scheme} \vspace*{-0.5cm} \end{center} \end{figure} An even more comprehensive description of the measurement process involves the modelling of a measuring apparatus $\aa$ and a specification of how it couples to the system under investigation. 
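As an informal aside before describing measurement schemes, we note that the L\"uders instrument of \eq{eq:luders} is straightforward to realise numerically. The sketch below (using the same illustrative smeared qubit observable as before) verifies that the outcome probabilities reproduce $\mathrm{tr}[\mathsf{E}(x)\rho] = \mathrm{tr}[{\mathcal{I}}^L_x(\rho)]$, and that the non-selective operation is trace preserving:
\begin{verbatim}
# Illustrative Luders instrument I^L_x(rho) = sqrt(E(x)) rho sqrt(E(x))
# for a discrete qubit observable; NumPy only.
import numpy as np

def psd_sqrt(A):
    # Square root of a positive semi-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

lam, I2 = 0.7, np.eye(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
effects = [lam * np.outer(v, v) + (1 - lam) * I2 / 2 for v in (plus, minus)]
roots = [psd_sqrt(E) for E in effects]

def luders(rho, x):
    # Non-normalised conditional state after registering outcome x.
    return roots[x] @ rho @ roots[x]

rho = np.array([[0.8, 0.3], [0.3, 0.2]])      # an arbitrary qubit state
for x, E in enumerate(effects):
    assert np.isclose(np.trace(luders(rho, x)), np.trace(E @ rho))
assert np.isclose(np.trace(sum(luders(rho, x) for x in range(2))), 1.0)
\end{verbatim}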
A \emph{measurement scheme} for an $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ is characterised by the tuple ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ where: ${\mathcal{H}\sub{\aa}}$ is the Hilbert space for the measurement apparatus $\aa$ and $\xi \in {\mathcal{S}}({\mathcal{H}\sub{\aa}})$ is a state of the apparatus; ${\mathcal{E}}: {\mathcal{T}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}) \to {\mathcal{T}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})$ is a channel which serves to correlate system and apparatus; and $\mathsf{Z} := \{\mathsf{Z}(x) : x \in {\mathcal{X}}\}$ is a ``pointer'' observable of the apparatus, which we choose to have the same value space ${\mathcal{X}}$ as the system observable $\mathsf{E}$. The operations of the instrument implemented by ${\mathcal{M}}$ can be written as \begin{align}\label{eq:instrument-dilation} {\mathcal{I}}_x(T) = \mathrm{tr}\sub{\aa}[(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x) ) {\mathcal{E}}(T \otimes \xi)] \end{align} for all $T \in {\mathcal{T}}({\mathcal{H}\sub{\s}})$ and $x\in {\mathcal{X}}$, where $\mathrm{tr}\sub{\aa}: {\mathcal{T}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})\to {\mathcal{T}}({\mathcal{H}\sub{\s}})$ is the partial trace channel over the apparatus, defined as $\mathrm{tr}[A \mathrm{tr}\sub{\aa}[T]] = \mathrm{tr}[(A \otimes \mathds{1}\sub{\aa}) T]$ for all $A\in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ and $T \in {\mathcal{T}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})$. The channel implemented by ${\mathcal{M}}$ may thus be written as ${\mathcal{I}}_{\mathcal{X}}(T) = \mathrm{tr}\sub{\aa}[{\mathcal{E}}(T \otimes \xi)]$. A schematic representation of a measurement scheme is given in \fig{fig:Measurement-Scheme}. We note that every $\mathsf{E}$-compatible instrument admits infinitely many \emph{normal} measurement schemes, where $\xi$ is a pure state, ${\mathcal{E}}(\cdot) = U(\cdot) U^*$ is a unitary channel, and $\mathsf{Z}$ is a sharp observable \cite{Ozawa1984}. However, unless stated otherwise, we shall consider the more general situation where $\xi$ may be mixed, ${\mathcal{E}}$ may be non-unitary, and $\mathsf{Z}$ may be unsharp. We now introduce the unital, completely positive normal conditional expectation $\Gamma_\xi: {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}) \to {\mathcal{L}}({\mathcal{H}\sub{\s}})$. $\Gamma_\xi$, called a {\it restriction map} for $\xi$, is defined as the dual of the isometric embedding (or the {\it preparation map}) $T \mapsto T \otimes \xi$, and satisfies $\mathrm{tr}[\Gamma_\xi(B)T] = \mathrm{tr}[B (T\otimes \xi)]$ for all $B \in {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})$ and $T \in {\mathcal{T}}({\mathcal{H}\sub{\s}})$. We may use the restriction map to define the channel $\Gamma_\xi^{\mathcal{E}} : {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}) \to {\mathcal{L}}({\mathcal{H}\sub{\s}})$ as \begin{align}\label{eq:Gamma-U} \Gamma_\xi^{\mathcal{E}}(\cdot) := \Gamma_\xi\circ {\mathcal{E}}^*(\cdot). \end{align} Using \eq{eq:Gamma-U}, we may express the duals of the operations defined in \eq{eq:instrument-dilation} as \begin{align*} {\mathcal{I}}_x^*(A) = \Gamma_\xi^{\mathcal{E}}( A \otimes \mathsf{Z}(x) ), \end{align*} to hold for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ and $x\in {\mathcal{X}}$.
In particular, we may write $\mathsf{E}(x) = \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))$ and ${\mathcal{I}}^*_{\mathcal{X}}(A) = \Gamma_\xi^{\mathcal{E}}(A \otimes \mathds{1}\sub{\aa})$. We may also be interested in asking how the apparatus is transformed as a result of the measurement interaction. To this end, we introduce the channel $\Lambda: {\mathcal{T}}({\mathcal{H}\sub{\s}}) \to {\mathcal{T}}({\mathcal{H}\sub{\aa}})$ and its dual $\Lambda^*: {\mathcal{L}}({\mathcal{H}\sub{\aa}}) \to {\mathcal{L}}({\mathcal{H}\sub{\s}})$, referred to as \emph{conjugate} channels to ${\mathcal{I}}_{\mathcal{X}}$ and ${\mathcal{I}}_{\mathcal{X}}^*$, respectively, defined as \begin{align}\label{eq:conjugate-channel} &\Lambda(T) := \mathrm{tr}\sub{{\mathcal{S}}}[{\mathcal{E}}(T \otimes \xi)], &\Lambda^*(A) := \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes A), \end{align} to hold for all $T \in {\mathcal{T}}({\mathcal{H}\sub{\s}})$ and $A \in {\mathcal{L}}({\mathcal{H}\sub{\aa}})$. That is, $\Lambda(\rho)$ is the state of the apparatus after it has interacted with the system, when the system is initially prepared in state $\rho$. On the other hand, for the initial system state $\rho$ and $A \in {\mathcal{L}}({\mathcal{H}\sub{\aa}})$, the expected value of $A$ in the state of the apparatus after the measurement interaction can be obtained by evaluating the expected value of $\Lambda^*(A)$ in $\rho$. As such, we may write $\mathsf{E}(x) = \Lambda^*(\mathsf{Z}(x))$. We now prove a useful result regarding the fixed-point structure of the $\mathsf{E}$-channel ${\mathcal{I}}^*_{\mathcal{X}}$, which we shall use in several places in this paper. \begin{lemma}\label{lemma:fixed-points-instrument} Let ${\mathcal{I}}$ be an instrument compatible with an observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}}$. The following hold: \begin{enumerate} [(i)] \item If $\mathsf{E}$ is sharp, then ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) \subset {\mathcal{I}}_{\mathcal{X}}^*({\mathcal{L}}({\mathcal{H}\sub{\s}})) \subset \mathsf{E}'$. \item If ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra, then ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) \subset \mathsf{E}'$. \item If ${\mathcal{I}}_{\mathcal{X}}$ fully conserves a self-adjoint operator $A \in {\mathcal{L}_{s}}({\mathcal{H}\sub{\s}})$, then $A \in \mathsf{E}'$. \end{enumerate} \end{lemma} \begin{proof} All $\mathsf{E}$-compatible instruments ${\mathcal{I}}$ admit a measurement scheme ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$. Therefore, by the channel $\Gamma_\xi^{\mathcal{E}}$ defined in \eq{eq:Gamma-U}, we may write $[\mathsf{E}(x), {\mathcal{I}}^*_{\mathcal{X}}(A)] = [\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)), \Gamma_\xi^{\mathcal{E}}(A \otimes \mathds{1}\sub{\aa})]$.
Since $[\mathds{1}\sub{\s} \otimes \mathsf{Z}(x), A \otimes \mathds{1}\sub{\aa}]=\mathds{O}$, and $\mathsf{Z}(x)$ are positive operators, then by the sesquilinear mapping $\langle \langle A|B \rangle \rangle := \Gamma_\xi^{\mathcal{E}}(A^*B) - \Gamma_\xi^{\mathcal{E}}(A^*) \Gamma_\xi^{\mathcal{E}}(B)$ and \corref{corollary:Norm-commutator-inequality} we obtain \begin{align}\label{eq:lemma-fixed-point-intro-1} \|[\mathsf{E}(x), {\mathcal{I}}^*_{\mathcal{X}}(A)] \| &\leqslant \| \<\< \mathds{1}\sub{\s} \otimes \mathsf{Z}(x)| \mathds{1}\sub{\s} \otimes \mathsf{Z}(x)\>\>\|^{\frac{1}{2}} \bigg( \|\<\<A \otimes \mathds{1}\sub{\aa}|A \otimes \mathds{1}\sub{\aa}\>\>\|^{\frac{1}{2}} + \|\<\<A^* \otimes \mathds{1}\sub{\aa}|A^* \otimes \mathds{1}\sub{\aa}\>\>\|^{\frac{1}{2}}\bigg). \end{align} Since $\mathds{O} \leqslant \mathsf{Z}(x) \leqslant \mathds{1}\sub{\aa}$, it follows that $\mathsf{Z}(x)^2 \leqslant \mathsf{Z}(x)$, and hence \begin{align*} \<\<\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)|\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)\>\> = \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^2) - \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2 \leqslant \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)) - \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2= \mathsf{E}(x) - \mathsf{E}(x)^2. \end{align*} On the other hand, we have $\<\<A \otimes \mathds{1}\sub{\aa}|A \otimes \mathds{1}\sub{\aa}\>\> = {\mathcal{I}}^*_{\mathcal{X}}(A^* A ) - {\mathcal{I}}^*_{\mathcal{X}}( A^*){\mathcal{I}}^*_{\mathcal{X}}(A) $ and $\<\<A^* \otimes \mathds{1}\sub{\aa}|A^* \otimes \mathds{1}\sub{\aa}\>\> = {\mathcal{I}}^*_{\mathcal{X}}(A A^*) - {\mathcal{I}}^*_{\mathcal{X}}(A) {\mathcal{I}}^*_{\mathcal{X}}( A ^*)$. We thus obtain from \eq{eq:lemma-fixed-point-intro-1} the bound \begin{align}\label{eq:commutation-effect-fixed-point-sharp-or-algebra} \|[\mathsf{E}(x), {\mathcal{I}}^*_{\mathcal{X}}(A)] \| \leqslant \| \mathsf{E}(x) - \mathsf{E}(x)^2\|^{\frac{1}{2}}\bigg( \|{\mathcal{I}}^*_{\mathcal{X}}(A^* A) - {\mathcal{I}}^*_{\mathcal{X}}(A^*){\mathcal{I}}^*_{\mathcal{X}}(A) \|^{\frac{1}{2}} + \|{\mathcal{I}}^*_{\mathcal{X}}(A A^*) - {\mathcal{I}}^*_{\mathcal{X}}(A){\mathcal{I}}^*_{\mathcal{X}}(A^*) \|^{\frac{1}{2}} \bigg). \end{align} Now we may prove (i). If $\mathsf{E}$ is sharp, then the upper bound of \eq{eq:commutation-effect-fixed-point-sharp-or-algebra} vanishes and so for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, ${\mathcal{I}}_{\mathcal{X}}^*(A) \in \mathsf{E}'$. As such, ${\mathcal{I}}_{\mathcal{X}}^*({\mathcal{L}}({\mathcal{H}\sub{\s}}))\subset \mathsf{E}'$. That ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) \subset {\mathcal{I}}_{\mathcal{X}}^*({\mathcal{L}}({\mathcal{H}\sub{\s}}))$ is trivial. Now we prove (ii). Assume that $A \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, which implies that $A^* \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. But if ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra, this implies that $A^*A, A A^* \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, and so the upper bound of \eq{eq:commutation-effect-fixed-point-sharp-or-algebra} vanishes. Consequently, we see that for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, $A \in {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) \implies A \in \mathsf{E}'$, which implies that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) \subset \mathsf{E}'$. Finally, let us prove (iii). 
Let $A$ be a self-adjoint operator, and assume that ${\mathcal{I}}_{\mathcal{X}}$ fully conserves $A$. By \defref{defn:conservation-law} it holds that ${\mathcal{I}}_{\mathcal{X}}^*(A^k) = A^k$ for $k=1,2$, and so once again the upper bound of \eq{eq:commutation-effect-fixed-point-sharp-or-algebra} vanishes, implying that $A \in \mathsf{E}'$. \end{proof} \section{Quantifying measurement disturbance in the presence of additive conservation laws}\label{sect:quantifying-disturbance} In this section we present an operational quantification of measurement disturbance, obtaining a novel lower bound on the disturbance that results when two observables are measured in succession. From this result we recover necessary conditions for non-disturbance previously reported in Ref. \cite{Heinosaari2010}. We then see that the extra constraint imposed by a conservation law yields additional restrictions on the possibility of non-disturbance, which depend on whether the conservation law is full or average. In the full case, non-disturbance is seen to require a large coherence in the initial state of the apparatus, in the spirit of the original Wigner-Araki-Yanase theorem. \begin{figure}[htbp!] \begin{center} \includegraphics[width=0.5\textwidth]{Sequential-Measurement} \vspace*{-0.2cm} \caption{In a sequential measurement, an observable $\mathsf{E}$ is measured by an instrument ${\mathcal{I}}$, and subsequently a second observable $\mathsf{F}$ is measured. ${\mathcal{I}}$ does not disturb $\mathsf{F}$ if for all input states $\rho$, the statistics of $\mathsf{F}$ do not depend on whether an $\mathsf{E}$-measurement took place or not. ${\mathcal{I}}$ does not disturb $\mathsf{F}$ precisely when $\mathsf{F}$ is contained in the fixed-point set of the $\mathsf{E}$-channel ${\mathcal{I}}^*_{\mathcal{X}}$. }\label{fig:Sequential-Measurement} \vspace*{-0.5cm} \end{center} \end{figure} \subsection{Disturbance, compatibility, and commutativity} Let $\mathsf{E} := \{\mathsf{E}(x) : x \in {\mathcal{X}}\}$ and $\mathsf{F}:= \{\mathsf{F}(y) : y \in {\mathcal{Y}}\}$ be two observables acting in ${\mathcal{H}\sub{\s}}$. Consider the sequential measurement of these observables, as depicted in \fig{fig:Sequential-Measurement}, where at first $\mathsf{E}$ is measured by the instrument ${\mathcal{I}}$, and subsequently $\mathsf{F}$ is measured. For any initial state $\rho \in {\mathcal{S}}({\mathcal{H}\sub{\s}})$, the probability of observing outcome $y$ of $\mathsf{F}$ after a non-selective measurement of $\mathsf{E}$ is given as \begin{align*} \mathrm{tr}[\mathsf{F}(y) {\mathcal{I}}_{\mathcal{X}}(\rho)] \equiv \mathrm{tr}[{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)) \rho]. \end{align*} That is, the prior $\mathsf{E}$-measurement implies that we perform a measurement of the disturbed observable $\{{\mathcal{I}}_{\mathcal{X}}^*(\mathsf{F}(y)) : y \in {\mathcal{Y}}\}$ in the state $\rho$. We may define the ``difference'' between the disturbed and non-disturbed effects of $\mathsf{F}$ by \begin{align}\label{eq:disturbance-quantification} \delta(y) &:= {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)) - \mathsf{F}(y).
\end{align} The disturbance of $\mathsf{F}(y)$ by ${\mathcal{I}}$ may be quantified as $\| \delta(y) \|$, which has the operational meaning of the least upper bound on the discrepancy between the probabilities for outcome $y$ arising from measurements of the disturbed and non-disturbed observables: \begin{align*} \|\delta(y) \| = \sup_{\rho\in {\mathcal{S}}({\mathcal{H}\sub{\s}})} \bigg|\mathrm{tr}[ {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)) \rho] - \mathrm{tr}[\mathsf{F}(y) \rho]\bigg|. \end{align*} A global quantification of the disturbance of $\mathsf{F}$ can then be defined as the largest disturbance among the effects, \begin{align*} \delta := \max_{y \in {\mathcal{Y}}} \| \delta(y)\|, \end{align*} and $\mathsf{F}$ is non-disturbed by ${\mathcal{I}}$ exactly when $\delta = 0$, which is the case when \begin{align}\label{eq:measurement-non-disturbance} {\mathcal{I}}_{\mathcal{X}}^*(\mathsf{F}(y)) = \mathsf{F}(y) \qquad \forall \, y \in {\mathcal{Y}}. \end{align} In other words, ${\mathcal{I}}$ does not disturb $\mathsf{F}$ exactly when each $\mathsf{F}(y)$ is a fixed point of the $\mathsf{E}$-channel ${\mathcal{I}}^*_{\mathcal{X}}$, i.e., $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$. In such a case, for any initial state $\rho$ the measurement statistics of $\mathsf{F}$ will not depend on whether an $\mathsf{E}$-measurement took place. Non-disturbance of $\mathsf{F}$ by the $\mathsf{E}$-instrument ${\mathcal{I}}$ implies that $\mathsf{E}$ and $\mathsf{F}$ are \emph{compatible} (or \emph{jointly measurable}) \cite{Heinosaari2015}. The pair of observables $\mathsf{E}$ and $\mathsf{F}$ are compatible if they admit a joint observable $\mathsf{G} := \{\mathsf{G}(x,y) : (x,y) \in {\mathcal{X}}\times {\mathcal{Y}}\}$ so that \begin{align}\label{eq:joint-observable-compatible} &\sum_{y \in {\mathcal{Y}}} \mathsf{G}(x,y) = \mathsf{E}(x), &\sum_{x\in {\mathcal{X}}} \mathsf{G}(x,y) = \mathsf{F}(y) \qquad \forall \, x\in {\mathcal{X}}, y \in {\mathcal{Y}}. \end{align} If $\mathsf{E}$ and $\mathsf{F}$ do not admit a joint observable, then they are \emph{incompatible}. But if $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, we may choose $\mathsf{G}$ as $\mathsf{G}(x,y) = {\mathcal{I}}^*_x(\mathsf{F}(y))$, which satisfies \eq{eq:joint-observable-compatible}. We may therefore conclude that for two incompatible observables $\mathsf{E}$ and $\mathsf{F}$, no $\mathsf{E}$-instrument ${\mathcal{I}}$ exists that satisfies $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. Note that while non-disturbance requires compatibility, compatibility does not guarantee non-disturbance. For instance, while any observable is compatible with itself, for every informationally complete observable, the fixed-point set of any compatible channel is trivial. Indeed, the size of the fixed-point set of an $\mathsf{E}$-channel is strongly related to the amount of information given by $\mathsf{E}$, as shown in Ref. \cite{Hamamura-Miyadera}. Furthermore, as shown in Ref. \cite{Heinosaari2010}, there exist pairs of compatible observables $\mathsf{E}$ and $\mathsf{F}$ where $\mathsf{E}$ admits an instrument that does not disturb $\mathsf{F}$, but all possible $\mathsf{F}$-instruments necessarily disturb $\mathsf{E}$. This further demonstrates that unlike compatibility, non-disturbance is not symmetric. As shown in Ref.
\cite{Miyadera2008}, the pair of observables $\mathsf{E}$ and $\mathsf{F}$ are compatible only if \begin{align}\label{eq:compatibility-necessary-commutativity} \|[\mathsf{E}(x), \mathsf{F}(y)] \| \leqslant 2 \|\mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} \| \mathsf{F}(y) - \mathsf{F}(y)^2 \|^{\frac{1}{2}} \qquad \forall \, x\in {\mathcal{X}}, y \in {\mathcal{Y}}. \end{align} Commutativity is a sufficient condition for compatibility; if $\mathsf{E}$ commutes with $\mathsf{F}$, then there is a joint observable $\mathsf{G}$ with effects $\mathsf{G}(x,y) = \mathsf{E}(x) \mathsf{F}(y) $, which are positive since they can be written as $\mathsf{G}(x,y) = (\sqrt{\mathsf{E}(x)} \sqrt{\mathsf{F}(y)})^* (\sqrt{\mathsf{E}(x)} \sqrt{\mathsf{F}(y)})$. On the other hand, if either $\mathsf{E}$ or $\mathsf{F}$ is sharp, in which case the upper bound of \eq{eq:compatibility-necessary-commutativity} vanishes, then commutativity is a necessary condition for compatibility \cite{Lahti2003}. For two non-commuting observables to be compatible, therefore, their effects must be sufficiently unsharp. We now provide a bound for the commutation between the effects of $\mathsf{E}$ and $\mathsf{F}$ in terms of the disturbance of $\mathsf{F}$ by ${\mathcal{I}}$. \begin{prop}\label{prop:quantitative-bound-disturbance} Consider the observables $\mathsf{E}$ and $\mathsf{F}$ acting in ${\mathcal{H}\sub{\s}}$, and let $\|\delta(y)\|$ be the disturbance of the effects of $\mathsf{F}$ caused by an $\mathsf{E}$-instrument ${\mathcal{I}}$. Then for all $x\in {\mathcal{X}}$ and $y \in {\mathcal{Y}}$ \begin{align}\label{eq:disturbance-inequality} \| [\mathsf{E}(x), \mathsf{F}(y)] \| &\leqslant \|\delta(y)\| + 2\| \mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} \| {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) - {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))^2 \|^{\frac{1}{2}}. \end{align} If $\mathsf{F}$ is non-disturbed by ${\mathcal{I}}$, that is, if $\delta = 0$, then for all $x\in {\mathcal{X}}$ and $y \in {\mathcal{Y}}$ \begin{align}\label{eq:non-disturbance-inequality} \| [\mathsf{E}(x), \mathsf{F}(y)] \| &\leqslant 2\| \mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} \|{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) - \mathsf{F}(y)^2 \|^{\frac{1}{2}}. \end{align} \end{prop} \begin{proof} By \eq{eq:disturbance-quantification}, we may write \begin{align}\label{eq:non-disturbance-equality-1} [\mathsf{E}(x), \mathsf{F}(y)] = [\delta(y), \mathsf{E}(x)] + [\mathsf{E}(x),{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))]. \end{align} Every $\mathsf{E}$-instrument ${\mathcal{I}}$ admits a measurement scheme ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$. Using the channel $\Gamma_\xi^{\mathcal{E}}$ defined in \eq{eq:Gamma-U}, we may therefore write $ [\mathsf{E}(x),{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))] = [\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)), \Gamma_\xi^{\mathcal{E}}(\mathsf{F}(y) \otimes \mathds{1}\sub{\aa})]$. 
Given that $[\mathds{1}\sub{\s} \otimes \mathsf{Z}(x), \mathsf{F}(y) \otimes \mathds{1}\sub{\aa}]=\mathds{O}$, by the sesquilinear mapping $\<\< A| B\>\> := \Gamma_\xi^{\mathcal{E}}( A^* B) - \Gamma_\xi^{\mathcal{E}}(A^*) \Gamma_\xi^{\mathcal{E}}(B)$ and \corref{corollary:Norm-commutator-inequality}, we obtain from \eq{eq:non-disturbance-equality-1} the bound \begin{align}\label{eq:non-disturbance-equality-2} \| [\mathsf{E}(x), \mathsf{F}(y)]\| &\leqslant \|[\delta(y), \mathsf{E}(x)]\| + 2 \|\<\< \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) | \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) \>\> \|^{\frac{1}{2}} \|\< \< \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} | \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} \>\> \|^{\frac{1}{2}}. \end{align} Since $\mathsf{E}(x)$ is an effect, as shown in the proof of \lemref{lemma:unsharp-disturbance-bound} we have $ \|[\delta(y), \mathsf{E}(x)]\| \leqslant \| \delta(y) \|$. As shown in the proof of \lemref{lemma:fixed-points-instrument}, we have $ \<\< \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) | \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) \>\> \leqslant \mathsf{E}(x) - \mathsf{E}(x)^2$ and $\< \< \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} | \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} \>\> = {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) - {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))^2$. We therefore obtain from \eq{eq:non-disturbance-equality-2} the bound given in \eq{eq:disturbance-inequality}. If $\mathsf{F}$ is non-disturbed by ${\mathcal{I}}$, then $\|\delta(y)\| = 0$ and ${\mathcal{I}}_{\mathcal{X}}^*(\mathsf{F}(y))^2 = \mathsf{F}(y)^2$ for all $y$. We thus arrive at \eq{eq:non-disturbance-inequality}. \end{proof} We see that when $\mathsf{E}$ commutes with $\mathsf{F}$ the lower bound of \eq{eq:non-disturbance-inequality} vanishes, in which case \propref{prop:quantitative-bound-disturbance} does not prohibit non-disturbance. Indeed, in the case of commuting observables there always exists a non-disturbing instrument; since $\mathsf{E}' \subset {\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*)$ always holds, where ${\mathcal{I}}^L$ is the L\"uders $\mathsf{E}$-instrument defined in \eq{eq:luders}, then a L\"uders measurement of $\mathsf{E}$ is guaranteed not to disturb any $\mathsf{F}$ commuting with $\mathsf{E}$ \cite{Busch1998}. On the other hand, if $\mathsf{E}$ does not commute with $\mathsf{F}$, then \propref{prop:quantitative-bound-disturbance} allows us to obtain a lower bound for the disturbance that results given any $\mathsf{E}$-compatible instrument, determined only by the unsharpness and non-commutativity of $\mathsf{E}$ and $\mathsf{F}$: \begin{corollary}\label{corollary:minimum-error-bound-sharpness} Consider the setup of \propref{prop:quantitative-bound-disturbance}. For all $x\in {\mathcal{X}}$ and $y\in {\mathcal{Y}}$, it also holds that \begin{align}\label{eq:disturbance-inequality-observables} \| [\mathsf{E}(x), \mathsf{F}(y)] \| & \leqslant \|\delta(y)\| + 2\| \mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} \left(2 \|\delta(y)\| + \|\mathsf{F}(y) - \mathsf{F}(y)^2 \|\right)^{\frac{1}{2}} . \end{align} \end{corollary} \begin{proof} Since $\mathsf{F}(y)$ are effects and ${\mathcal{I}}^*_{\mathcal{X}}$ is a channel, then by \lemref{lemma:unsharp-disturbance-bound} we have $\|{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) - {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))^2 \| \leqslant 2 \|\delta(y)\| + \|\mathsf{F}(y) - \mathsf{F}(y)^2 \|$. As such, \eq{eq:disturbance-inequality-observables} is obtained directly from \eq{eq:disturbance-inequality}.
\end{proof} Note that while \corref{corollary:minimum-error-bound-sharpness} provides a lower bound for the disturbance, which is strictly positive whenever either $\mathsf{E}$ or $\mathsf{F}$ is sharp and these observables do not commute, such a lower bound will differ depending on whether $\mathsf{E}$ or $\mathsf{F}$ is sharp; if $\mathsf{E}$ is sharp, we have $\delta \geqslant \max_{x,y} \|[\mathsf{E}(x), \mathsf{F}(y)] \|$, whereas if $\mathsf{F}$ is sharp but $\mathsf{E}$ is unsharp, the lower bound for the disturbance may be smaller. Let us illustrate this with the following example. Consider a system ${\mathcal{H}\sub{\s}} \simeq \mathds{C}^2$, with the orthonormal basis $\{|0\>, |1\>\}$, and define $|\pm\> := \frac{1}{\sqrt{2}}(|0\> \pm |1\>)$. Now consider a pair of binary observables $\mathsf{A}=\{\mathsf{A}(a): a=0,1\}$ and $\mathsf{B}_{\lambda}= \{\mathsf{B}_{\lambda}(b) : b = \pm\}$ acting in ${\mathcal{H}\sub{\s}}$, defined by $\mathsf{A}(a)=|a\rangle \langle a|$ and $\mathsf{B}_{\lambda} (b)= \lambda |b\rangle \langle b | + (1-\lambda) \frac{\mathds{1}}{2}$ for some $0\leqslant \lambda \leqslant 1$. It is simple to verify that $\|[\mathsf{A}(a), \mathsf{B}_{\lambda}(b)]\| = \frac{\lambda}{2}$ for any $a=0,1$ and $b=\pm$. Now we may evaluate the disturbance of one of these observables caused by a L\"uders measurement of the other. The disturbance of $\mathsf{B}_\lambda(b)$ by a L\"uders measurement of $\mathsf{A}$ reads $\| \delta(b)\| = \frac{\lambda}{2}$ for each $b$. Since $\mathsf{A}$ is sharp, by setting $\mathsf{E} = \mathsf{A}$ and $\mathsf{F} = \mathsf{B}_\lambda$ we see that the inequality in \eq{eq:disturbance-inequality-observables} is tight. On the other hand, the disturbance of $\mathsf{A}(a)$ by a L\"uders measurement of $\mathsf{B}_\lambda$ reads $\| \delta(a)\| = \frac{1-\sqrt{1-\lambda^2}}{2}$ for each $a$, which is smaller than $\frac{\lambda}{2}$ for $0<\lambda <1$. Let us now consider the case of non-disturbance more carefully. First, let us note that when we set $\|\delta(y)\|=0$, \eq{eq:disturbance-inequality-observables} reduces to the compatibility bound of \eq{eq:compatibility-necessary-commutativity}, and states that, for non-disturbance to be possible when $\mathsf{E}$ and $\mathsf{F}$ do not commute, both observables must be sufficiently unsharp so as to be compatible. To be sure, compatibility is a necessary condition for non-disturbance, and the fact that \eq{eq:disturbance-inequality-observables} does not contradict the compatibility bound is not surprising. On the other hand, in the case of non-disturbance this bound is also not very informative---it is possible for two observables to be compatible, while a measurement of one still disturbs the other. To gain a better understanding of non-disturbance, let us consider instead \eq{eq:non-disturbance-inequality}, the upper bound of which is no larger than the upper bound in \eq{eq:disturbance-inequality-observables} when we set $\|\delta(y)\|=0$, and vanishes if both $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ and $\mathsf{F}^2 := \{\mathsf{F}(y)^2 : y \in {\mathcal{Y}}\} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ hold. We immediately see that while unsharpness of both $\mathsf{E}$ and $\mathsf{F}$ is necessary for non-disturbance when $\mathsf{E}$ and $\mathsf{F}$ do not commute, it is not sufficient; as shown in Ref.
\cite{Heinosaari2010}, there are at least two classes of unsharp observables $\mathsf{F}$ for which, given \emph{any} instrument ${\mathcal{I}}$ (including instruments that measure an unsharp observable $\mathsf{E}$ that does not commute with $\mathsf{F}$ but is still compatible with it), $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ guarantees $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$: if $\mathsf{F}$ is a rank-1 observable, or if $\mathsf{F}$ is an ``informationally equivalent coarse-graining'' of a sharp observable. Let us consider the first option. If $\mathsf{F}$ is a rank-1 observable, then all the effects of $\mathsf{F}$ may be written as $\mathsf{F}(y) = \lambda_y P_y$, where $P_y$ is a rank-1 projection operator and $\lambda_y \in (0,1]$. As shown in \cite{Pellonpaa2014b}, all observables $\mathsf{E}$ that are compatible with a rank-1 observable $\mathsf{F}$ are the post-processings of $\mathsf{F}$, that is, the effects of $\mathsf{E}$ may be written as $\mathsf{E}(x) = \sum_y p(x|y) \mathsf{F}(y)$, where $\{p(x|y)\}$ is a family of non-negative numbers satisfying $\sum_x p(x|y) =1$ for all $y$. It follows that, so long as $\mathsf{F}$ is a non-commutative rank-1 observable, there exists an unsharp observable $\mathsf{E}$ that is compatible with $\mathsf{F}$ but does not commute with $\mathsf{F}$. But note that ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)) = \mathsf{F}(y)$ if and only if ${\mathcal{I}}^*_{\mathcal{X}}(P_y) = P_y$. As such, ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) = \lambda_y^2{\mathcal{I}}^*_{\mathcal{X}}(P_y) = \lambda_y^2 P_y = \mathsf{F}(y)^2$. It follows that $\mathsf{F}$ will be non-disturbed by an $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ only if $\mathsf{E}$ commutes with $\mathsf{F}$. Let us now consider the second option. We say that $\mathsf{F}$ is an informationally equivalent coarse-graining of a sharp observable $\mathsf{G}:= \{\mathsf{G}(z) : z \in {\mathcal{Z}}\}$ if there exists an invertible stochastic matrix $M$ such that \begin{align*} &\mathsf{F}(y) = \sum_z M_{y, z} \mathsf{G}(z), & \mathsf{G}(z) = \sum_y M_{z, y}^{-1} \mathsf{F}(y). \end{align*} $\mathsf{F}$ and $\mathsf{G}$ are informationally equivalent because a measurement of $\mathsf{F}$ produces different probability distributions for two states $\rho_1$ and $\rho_2$ if and only if these states produce different probability distributions given a measurement of $\mathsf{G}$. Since $\mathsf{G}$ is sharp, then $\mathsf{F}(y)^2 = \sum_z M_{y,z}^2 \mathsf{G}(z)$. Now assume that $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. It is simple to verify that this implies $\mathsf{G} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. Therefore, we have ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) = \sum_z M_{y,z}^2 {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{G}(z)) = \sum_z M_{y,z}^2 \mathsf{G}(z) = \mathsf{F}(y)^2$. Once again, $\mathsf{F}$ will be non-disturbed by an $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ only if $\mathsf{E}$ commutes with $\mathsf{F}$. Both of the above examples offer a very simple interpretation in terms of compatibility. If $\mathsf{F}$ is a rank-1 observable, then non-disturbance of $\mathsf{F}$ implies non-disturbance of the sharp rank-1 effects $P_y$. Since non-disturbance requires compatibility, this implies that $\mathsf{E}$ must commute with all $P_y$, and hence with $\mathsf{F}$.
On the other hand, if $\mathsf{F}$ is a classical coarse-graining of a sharp observable $\mathsf{G}$, then non-disturbance of $\mathsf{F}$ implies non-disturbance of $\mathsf{G}$, and by compatibility $\mathsf{E}$ must commute with $\mathsf{G}$. Since the effects of $\mathsf{F}$ are constructed as a mixture of the (projective) effects of $\mathsf{G}$, we conclude that $\mathsf{E}$ must commute with $\mathsf{F}$. \subsection{Imposing conservation laws on the measurement interaction} Let us now examine the sequential measurement of $\mathsf{E}$ followed by $\mathsf{F}$ more closely. Specifically, let us consider the measurement scheme ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ that implements the $\mathsf{E}$-instrument ${\mathcal{I}}$, as depicted in \fig{fig:Sequential-Measurement-Scheme}. Any restrictions we impose on ${\mathcal{M}}$ will in turn restrict the types of instruments that can be implemented, and hence the class of observables that may be non-disturbed. One such restriction is given by conservation laws---for example, the interaction between system and apparatus may be restricted so that the total energy, charge, or angular momentum must be conserved. We shall now investigate how such conservation laws impose further necessary conditions that must be fulfilled, in addition to those dictated by \propref{prop:quantitative-bound-disturbance}, for an observable $\mathsf{F}$ to be non-disturbed. \begin{figure}[htbp!] \begin{center} \includegraphics[width=0.4\textwidth]{Sequential-Measurement-Scheme} \vspace*{-0.2cm} \caption{Consider again the case where the observables $\mathsf{E}$ and $\mathsf{F}$ are measured in succession. Here, $\mathsf{E}$ is first measured by a measurement scheme ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ where ${\mathcal{E}}$ conserves an additive quantity $N = {N\sub{\s}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s} \otimes {N\sub{\aa}}$. The conservation law imposes further restrictions on the possibility of non-disturbance for the observable $\mathsf{F}$.}\label{fig:Sequential-Measurement-Scheme} \vspace*{-0.5cm} \end{center} \end{figure} A quantity of the compound of system and apparatus is \emph{additive} if it can be written as $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$, where $N\sub{{\mathcal{S}}} \in {\mathcal{L}_{s}}({\mathcal{H}\sub{\s}})$ and $N\sub{\aa} \in {\mathcal{L}_{s}}({\mathcal{H}\sub{\aa}})$ are respectively quantities of the system and apparatus alone. We may therefore say that the measurement interaction channel ${\mathcal{E}}$ obeys an additive conservation law if ${\mathcal{E}}$ conserves an additive quantity $N$. By \defref{defn:average-conservation}, we say that ${\mathcal{E}}$ conserves $N$ on average if ${\mathcal{E}}^*(N) = N$. On the other hand, by \defref{defn:conservation-law} and \propref{prop:conservation-multiplication} we say that ${\mathcal{E}}$ fully conserves $N$ if ${\mathcal{E}}^*(N) = N$ and ${\mathcal{E}}^*(N^2) = N^2$. Clearly, full conservation of $N$ is strictly stronger than average conservation of $N$ in general. But recall from \lemref{lemma:conservation-unitary} that if ${\mathcal{E}}(\cdot) := U(\cdot)U^*$ is a unitary channel, then full and average conservation are equivalent, being captured by the commutation relation $[U, N]=\mathds{O}$. In the case of normal measurement schemes, therefore, there is no distinction to be drawn between the two notions of conservation law.
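As a concrete instance (an informal numerical aside; the choice of qubits and of a partial-swap interaction is ours, purely for illustration), let the system and apparatus be qubits with local excitation numbers defining an additive quantity $N$. Any unitary commuting with $N$ then conserves it fully, in accordance with \lemref{lemma:conservation-unitary}; writing ${\mathcal{E}}^*(\cdot) = U^*(\cdot)U$, the checks below confirm ${\mathcal{E}}^*(N^k) = N^k$ for $k = 1, 2$:
\begin{verbatim}
# Full conservation for a unitary interaction commuting with an additive N.
# S and A are qubits; n = diag(0, 1) is the local excitation number.
import numpy as np

n, I2 = np.diag([0.0, 1.0]), np.eye(2)
N = np.kron(n, I2) + np.kron(I2, n)           # N = n_S (x) 1 + 1 (x) n_A

# Partial-swap unitary acting in the span of |01>, |10> (illustrative).
th = 0.37
c, s = np.cos(th), np.sin(th)
U = np.array([[1, 0, 0, 0],
              [0, c, -1j * s, 0],
              [0, -1j * s, c, 0],
              [0, 0, 0, 1]])

assert np.allclose(U.conj().T @ U, np.eye(4))        # unitarity
assert np.allclose(U.conj().T @ N @ U, N)            # average conservation
assert np.allclose(U.conj().T @ N @ N @ U, N @ N)    # second moment: full
\end{verbatim}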
But note that conservation of $N$ by ${\mathcal{E}}$ does not imply conservation of ${N\sub{\s}}$ by the $\mathsf{E}$-channel ${\mathcal{I}}_{\mathcal{X}}$, since the channel ${\mathcal{E}}$ may allow for an ``exchange'' of the conserved quantity between system and apparatus; specifically, we have \begin{align*} \mathrm{tr}[{N\sub{\s}} ({\mathcal{I}}_{\mathcal{X}}(\rho) - \rho)] = \mathrm{tr}[{N\sub{\aa}} (\xi - \Lambda(\rho))] \end{align*} for all $\rho \in {\mathcal{S}}({\mathcal{H}\sub{\s}})$, where $\Lambda$ is the conjugate channel of ${\mathcal{I}}_{\mathcal{X}}$ defined in \eq{eq:conjugate-channel}. We can see that it is possible for the expected value of ${N\sub{\s}}$ to increase (decrease), provided that the expected value of ${N\sub{\aa}}$ decreases (increases) by an equal amount. As we show below, non-disturbance of an observable $\mathsf{F}$, when the measurement interaction for $\mathsf{E}$ conserves an additive quantity on average, imposes constraints on the commutation between the effects of $\mathsf{F}$ and the system part of the conserved quantity, ${N\sub{\s}}$. Subsequently, we shall refine these results for the case where ${\mathcal{E}}$ fully conserves $N$, indicating conditions under which a large coherence in the apparatus preparation will be necessary for non-disturbance to be possible. \begin{theorem} \label{theorem:quantitative-bound-disturbance-WAY} Consider the observables $\mathsf{E}$ and $\mathsf{F}$ acting in ${\mathcal{H}\sub{\s}}$. Let ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. Let $\|\delta(y)\|$ be the disturbance of the effects of $\mathsf{F}$ caused by ${\mathcal{I}}$. Then for all $y \in {\mathcal{Y}}$ \begin{align}\label{eq:disturbance-inequality-WAY} \| [\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| & \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \|{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) ^2) - {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))^2 \|^{\frac{1}{2}} . \end{align} If $\mathsf{F}$ is non-disturbed by ${\mathcal{I}}$, that is, if $\delta = 0$, then for all $y \in {\mathcal{Y}}$ \begin{align}\label{eq:non-disturbance-inequality-WAY} \| [\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| & \leqslant 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \|{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) ^2) - \mathsf{F}(y)^2 \|^{\frac{1}{2}} . \end{align} \end{theorem} \begin{proof} By \eq{eq:disturbance-quantification}, we may write \begin{equation}\label{eq:WAY-disturbance-equality-1} [\mathsf{F}(y) ,{N\sub{\s}} ] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) = [N\sub{{\mathcal{S}}} ,\delta(y) ] + [{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) ),N\sub{{\mathcal{S}}} ] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]). \end{equation} Since $N = {N\sub{\s}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s} \otimes {N\sub{\aa}}$ is additive, we have $\Gamma_\xi(N) = {N\sub{\s}} + \mathrm{tr}[{N\sub{\aa}} \xi] \mathds{1}\sub{\s}$.
Average conservation of $N$ by ${\mathcal{E}}$ implies that $\Gamma_\xi(N) = \Gamma_\xi^{\mathcal{E}}(N)$, where $\Gamma_\xi^{\mathcal{E}}$ is the channel defined in \eq{eq:Gamma-U}. It follows that for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, $[A, N\sub{{\mathcal{S}}} ] = [A, \Gamma_\xi(N)] = [A, \Gamma_\xi^{\mathcal{E}}(N)]$. We may therefore write $[{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) ),N\sub{{\mathcal{S}}} ] = [\Gamma_\xi^{\mathcal{E}}(\mathsf{F}(y) \otimes \mathds{1}\sub{\aa}), \Gamma_\xi^{\mathcal{E}}(N)]$. Note that $[\mathsf{F}(y) \otimes \mathds{1}\sub{\aa}, N] = [\mathsf{F}(y) , N\sub{{\mathcal{S}}} ]\otimes \mathds{1}\sub{\aa}$, and that $\Gamma_\xi^{\mathcal{E}}([\mathsf{F}(y) , N\sub{{\mathcal{S}}} ]\otimes \mathds{1}\sub{\aa}) = {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}])$. Then by the sesquilinear mapping $\<\< A | B\>\> := \Gamma_\xi^{\mathcal{E}}(A^* B) - \Gamma_\xi^{\mathcal{E}}(A^*) \Gamma_\xi^{\mathcal{E}}(B)$ and \corref{corollary:Norm-commutator-inequality} we obtain from \eq{eq:WAY-disturbance-equality-1} the bound \begin{align}\label{eq:equality-disturbance-commutation} \| [\mathsf{F}(y) ,N\sub{{\mathcal{S}}} ] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y) , N\sub{{\mathcal{S}}} ])\| & \leqslant \|[N\sub{{\mathcal{S}}} ,\delta(y) ]\| + 2\| \<\<N | N\>\> \|^{\frac{1}{2}} \|\<\< \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} | \mathsf{F}(y) \otimes \mathds{1}\sub{\aa}\>\> \|^{\frac{1}{2}}. \end{align} As in \propref{prop:quantitative-bound-disturbance}, we have $\<\< \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} | \mathsf{F}(y) \otimes \mathds{1}\sub{\aa} \>\> = {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) ^2) - {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) )^2$. Similarly, $\<\<N | N\>\> = \Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2$. Noting that $\|[{N\sub{\s}}, \delta(y) ] \|\leqslant 2 \|N\sub{{\mathcal{S}}} \|\|\delta(y) \| $, we thus obtain from \eq{eq:equality-disturbance-commutation} the bound given in \eq{eq:disturbance-inequality-WAY}. If $\mathsf{F}$ is non-disturbed by ${\mathcal{I}}$, then $\|\delta(y)\| = 0$ and ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) )^2 = \mathsf{F}(y)^2$ for all $y$. We thus arrive at \eq{eq:non-disturbance-inequality-WAY}. \end{proof} \begin{corollary}\label{corollary:WAY-disturbance-bound-unsharpness} Consider the set-up of \thmref{theorem:quantitative-bound-disturbance-WAY}. For all $y \in {\mathcal{Y}}$ it also holds that \begin{align*} \| [\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| & \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \big( 2 \|\delta(y)\| + \|\mathsf{F}(y) - \mathsf{F}(y)^2 \|\big)^{\frac{1}{2}} . \end{align*} \end{corollary} \begin{proof} Since $\mathsf{F}(y)$ are effects and ${\mathcal{I}}^*_{\mathcal{X}}$ is a channel, then by \lemref{lemma:unsharp-disturbance-bound} we have $\|{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) ^2) - {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) )^2 \|\leqslant 2 \|\delta(y)\| + \|\mathsf{F}(y) - \mathsf{F}(y)^2 \|$. The claim immediately follows from \eq{eq:disturbance-inequality-WAY}. 
\end{proof} If the commutators $[\mathsf{F}(y), {N\sub{\s}}]$ are fixed points of the $\mathsf{E}$-channel ${\mathcal{I}}^*_{\mathcal{X}}$, then the left hand side of \eq{eq:non-disturbance-inequality-WAY} vanishes, and so \thmref{theorem:quantitative-bound-disturbance-WAY} does not impose any additional constraints on non-disturbance beyond those imposed by \propref{prop:quantitative-bound-disturbance}. Note that if $\mathsf{F}$ commutes with ${N\sub{\s}}$, then $[\mathsf{F}(y), {N\sub{\s}}]$ will necessarily be fixed points of ${\mathcal{I}}^*_{\mathcal{X}}$. But if $[\mathsf{F}(y), {N\sub{\s}}]$ are not fixed points of ${\mathcal{I}}^*_{\mathcal{X}}$, implying that $\mathsf{F}$ does not commute with ${N\sub{\s}}$, then non-disturbance is only permitted if $\mathsf{F}^2 \not\subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ and $\|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|$ is sufficiently large. We shall address the first condition in the next section, and for now focus only on the second---in the special case where $N$ is fully conserved by ${\mathcal{E}}$, this condition can be tightened further: \begin{lemma}\label{lemma:conservation-variance-condition} If the channel ${\mathcal{E}}$ fully conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$, then $\|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \| = \var{{N\sub{\aa}}, \xi}$, where $\var{{N\sub{\aa}}, \xi}:= \mathrm{tr}[N\sub{\aa}^2 \xi] - \mathrm{tr}[N\sub{\aa} \xi]^2$ denotes the variance of $N\sub{\aa}$ in the state $\xi$. \end{lemma} \begin{proof} If $N$ is fully conserved by ${\mathcal{E}}$, then by \defref{defn:conservation-law} we have ${\mathcal{E}}^*(N^k) = N^k$ for $k=1,2$. It follows that $\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 = \Gamma_\xi(N^2) - \Gamma_\xi(N)^2$. Recall that $\Gamma_\xi(N) = {N\sub{\s}} + \mathrm{tr}[{N\sub{\aa}} \xi]\mathds{1}\sub{\s}$, and note that $N^2 = N\sub{{\mathcal{S}}}^2\otimes \mathds{1}\sub{\aa} + 2 {N\sub{\s}} \otimes {N\sub{\aa}} + \mathds{1}\sub{\s} \otimes N\sub{\aa}^2$. Therefore, \begin{align*} \Gamma_\xi(N^2) &= N\sub{{\mathcal{S}}}^2 + 2 \mathrm{tr}[{N\sub{\aa}} \xi] {N\sub{\s}} + \mathrm{tr}[N\sub{\aa}^2 \xi] \mathds{1}\sub{\s}, \nonumber \\ \Gamma_\xi(N)^2 &= N\sub{{\mathcal{S}}}^2 + 2 \mathrm{tr}[{N\sub{\aa}} \xi] {N\sub{\s}} + \mathrm{tr}[N\sub{\aa} \xi]^2 \mathds{1}\sub{\s}, \end{align*} which gives \begin{align*} \Gamma_\xi( N^2) - \Gamma_\xi(N)^2 & = \mathrm{tr}[N\sub{\aa}^2 \xi] \mathds{1}\sub{\s} - \mathrm{tr}[{N\sub{\aa}} \xi]^2 \mathds{1}\sub{\s} = \var{{N\sub{\aa}}, \xi} \mathds{1}\sub{\s}. \end{align*} \end{proof} We see that if ${\mathcal{E}}$ fully conserves $N$, then an observable $\mathsf{F}$ for which $[\mathsf{F}(y), {N\sub{\s}}]$ are not fixed points of ${\mathcal{I}}^*_{\mathcal{X}}$ (and hence for which $[\mathsf{F}(y), {N\sub{\s}}] \ne \mathds{O}$) will be non-disturbed only if the apparatus state $\xi$ has a sufficiently large variance in ${N\sub{\aa}}$. If $\xi$ is a pure state, this implies that the apparatus must be prepared in a state with a large coherence in ${N\sub{\aa}}$. Of course, if $\xi$ is a mixed state then it may still be the case that $\var{{N\sub{\aa}}, \xi}$ is large even if $\xi$ commutes with ${N\sub{\aa}}$, and hence has zero coherence in the conserved quantity.
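The following informal sketch (the apparatus quantity and states are our own illustrative choices) makes the last point concrete: an incoherent mixture can match the variance of a pure superposition while commuting with ${N\sub{\aa}}$, and hence carrying no coherence in the conserved quantity:
\begin{verbatim}
# Variance of N_A in mixed vs pure apparatus states (illustrative).
import numpy as np

NA = np.diag([0.0, 1.0, 2.0])                 # qutrit apparatus quantity

def variance(N, xi):
    return (np.trace(N @ N @ xi) - np.trace(N @ xi) ** 2).real

xi_mix = np.diag([0.5, 0.0, 0.5])             # incoherent mixture
phi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
xi_pure = np.outer(phi, phi)                  # coherent superposition

assert np.allclose(NA @ xi_mix, xi_mix @ NA)  # [N_A, xi] = O: no coherence
print(variance(NA, xi_mix), variance(NA, xi_pure))   # both equal 1.0
\end{verbatim}
Only in the pure case does this variance reflect genuine coherence; the quantum Fisher information introduced next vanishes for the mixture.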
A quantifier of coherence (or asymmetry) for general states is given by the quantum Fisher information \cite{PETZ2011, Streltsov2016a, Takagi2018, Marvian2021}, which is equal to four times the convex roof of the variance \cite{Toth2013, QFI-variance}: \begin{align}\label{eq:QFI-defn} {\mathcal{Q}}({N\sub{\aa}}, \xi) = 4 \inf_{\{q_i, \phi_i\}} \bigg(\sum_i q_i \var{{N\sub{\aa}}, \phi_i} \bigg). \end{align} Here, $\{q_i, \phi_i\}$ is an arbitrary ensemble of (not necessarily orthogonal) unit vectors $\phi_i \in {\mathcal{H}\sub{\aa}}$, with $\{q_i\}$ a probability distribution, satisfying $\xi = \sum_i q_i \pr{\phi_i}$ where $\pr{\psi}\equiv |\psi\>\<\psi|$ denotes the projection on $\psi$. Note that we use the short-hand notation $\var{{N\sub{\aa}}, \phi_i} \equiv \var{{N\sub{\aa}}, \pr{\phi_i}}$. It is clear that ${\mathcal{Q}}({N\sub{\aa}}, \xi)=0$ if $[ {N\sub{\aa}}, \xi]=\mathds{O}$, while ${\mathcal{Q}}({N\sub{\aa}}, \xi) = 4 \var{{N\sub{\aa}}, \xi}$ if $\xi$ is a pure state. The following proposition demonstrates that a large coherence of the conserved quantity in the initial state of the apparatus, when such a state may be mixed, is a necessary condition for non-disturbance in the presence of a full conservation law: \begin{prop}\label{prop:quantitative-bound-disturbance-WAY-Fisher} Consider the observables $\mathsf{E}$ and $\mathsf{F}$ acting in ${\mathcal{H}\sub{\s}}$. Let ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$, and assume that ${\mathcal{E}}$ fully conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$. Let $\|\delta(y)\|$ be the disturbance of the effects of $\mathsf{F}$ caused by ${\mathcal{I}}$. Then for all $y \in {\mathcal{Y}}$ \begin{align}\label{eq:WAY-bound-Fisher} \| [\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| & \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + \frac{1}{2}{\mathcal{Q}}({N\sub{\aa}}, \xi)^{\frac{1}{2}}, \end{align} where ${\mathcal{Q}}({N\sub{\aa}}, \xi)$ denotes the quantum Fisher information of ${N\sub{\aa}}$ in the state $\xi$. Additionally, if ${\mathcal{I}}$ is an extremal instrument, then for all $y\in {\mathcal{Y}}$ \begin{align}\label{eq:WAY-bound-Fisher-extremal} \| [\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| & \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + {\mathcal{Q}}({N\sub{\aa}}, \xi)^{\frac{1}{2}} \|{\mathcal{I}}_{\mathcal{X}}^*(\mathsf{F}(y)^2) - {\mathcal{I}}_{\mathcal{X}}^*(\mathsf{F}(y))^2 \|^{\frac{1}{2}}. \end{align} \end{prop} \begin{proof} Let $\{q_i, \phi_i\}$ be an arbitrary ensemble of unit vectors that satisfies $\xi = \sum_i q_i \pr{\phi_i}$. We may thus write $\Gamma_\xi^{\mathcal{E}}(\cdot) = \sum_i q_i \Gamma_{\phi_i}^{\mathcal{E}}(\cdot)$.
By the conservation law and additivity of $N$, we may therefore rewrite \eq{eq:WAY-disturbance-equality-1} as \begin{equation*}\label{eq:WAY-disturbance-equality-pure-dec} [\mathsf{F}(y) ,{N\sub{\s}} ] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) = [N\sub{{\mathcal{S}}} ,\delta(y) ] + \sum_i q_i \bigg([\Gamma_{\phi_i}^{\mathcal{E}}(\mathsf{F}(y)\otimes \mathds{1}\sub{\aa} ), \Gamma_{\phi_i}^{\mathcal{E}}(N) ] - \Gamma_{\phi_i}^{\mathcal{E}}([\mathsf{F}(y)\otimes \mathds{1}\sub{\aa}, N])\bigg), \end{equation*} which, by the sesquilinear mappings $\<\<A|B\>\>_i := \Gamma_{\phi_i}^{\mathcal{E}}(A^*B) - \Gamma_{\phi_i}^{\mathcal{E}}(A^*) \Gamma_{\phi_i}^{\mathcal{E}}(B)$, \corref{corollary:Norm-commutator-inequality}, and \lemref{lemma:conservation-variance-condition}, gives the bound \begin{align*} \|[\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + 2 \sum_i q_i \var{{N\sub{\aa}}, \phi_i}^{\frac{1}{2}} \| {\mathcal{I}}_{\mathcal{X}}^{(i)*}(\mathsf{F}(y)^2) - {\mathcal{I}}_{\mathcal{X}}^{(i) *}(\mathsf{F}(y))^2 \|^{\frac{1}{2}}. \end{align*} Here, we define ${\mathcal{I}}_{\mathcal{X}}^{(i)*}(A) := \Gamma_{\phi_i}^{\mathcal{E}}(A \otimes \mathds{1}\sub{\aa})$. Since both $\mathsf{F}(y)$ and ${\mathcal{I}}_{\mathcal{X}}^{(i) *}(\mathsf{F}(y))$ are effects, we have \begin{align*} \| {\mathcal{I}}_{\mathcal{X}}^{(i)*}(\mathsf{F}(y)^2) - {\mathcal{I}}_{\mathcal{X}}^{(i) *}(\mathsf{F}(y))^2 \|^{\frac{1}{2}} \leqslant \| {\mathcal{I}}_{\mathcal{X}}^{(i)*}(\mathsf{F}(y)) - {\mathcal{I}}_{\mathcal{X}}^{(i) *}(\mathsf{F}(y))^2 \|^{\frac{1}{2}} \leqslant \frac{1}{2}. \end{align*} We thus arrive at the bound \begin{align*} \|[\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| &\leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + \sum_i q_i \var{{N\sub{\aa}}, \phi_i}^{\frac{1}{2}} \nonumber \\ & \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + \left(\sum_i q_i \var{{N\sub{\aa}}, \phi_i}\right)^{\frac{1}{2}}, \end{align*} where the second line follows from the concavity of the square root. By choosing the ensemble $\{q_i, \phi_i\}$ that gives the quantum Fisher information as in \eq{eq:QFI-defn}, we arrive at \eq{eq:WAY-bound-Fisher}. Now assume that ${\mathcal{I}}$ is an extremal instrument. This implies that for any pair of instruments ${\mathcal{I}}^{(1)}$ and ${\mathcal{I}}^{(2)}$, and any $\lambda \in [0,1]$, the operations of ${\mathcal{I}}$ can be decomposed as ${\mathcal{I}}_x(\cdot) = \lambda \, {\mathcal{I}}_x^{(1)}(\cdot) + (1-\lambda) {\mathcal{I}}_x^{(2)}(\cdot)$ only if ${\mathcal{I}} = {\mathcal{I}}^{(1)} = {\mathcal{I}}^{(2)}$. Each pure state $\phi_i$ defines a measurement scheme $({\mathcal{H}\sub{\aa}}, \pr{\phi_i}, {\mathcal{E}}, \mathsf{Z})$ implementing an instrument ${\mathcal{I}}^{(i)}$, and the operations of ${\mathcal{I}}$ admit the convex decomposition ${\mathcal{I}}_x(\cdot) = \sum_i q_i \, {\mathcal{I}}^{(i)}_x(\cdot)$. Extremality therefore implies that ${\mathcal{I}}^{(i)} = {\mathcal{I}}$, and hence ${\mathcal{I}}_{\mathcal{X}}^{(i)*} = {\mathcal{I}}_{\mathcal{X}}^*$, for all $i$, and so we obtain \begin{align*} \|[\mathsf{F}(y), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) \| \leqslant 2\|{N\sub{\s}}\| \|\delta(y)\| + 2 \left(\sum_i q_i \var{{N\sub{\aa}}, \phi_i}\right)^{\frac{1}{2}} \| {\mathcal{I}}_{\mathcal{X}}^{*}(\mathsf{F}(y)^2) - {\mathcal{I}}_{\mathcal{X}}^{*}(\mathsf{F}(y))^2 \|^{\frac{1}{2}}. \end{align*} Once again, by choosing the ensemble that gives the quantum Fisher information, we arrive at \eq{eq:WAY-bound-Fisher-extremal}.
\end{proof} \subsection{Necessary conditions for non-disturbance beyond coherence} While the above results indicate that, in the presence of a conservation law, a large coherence in the apparatus preparation is a necessary condition for non-disturbance when $[\mathsf{F}(y), {N\sub{\s}}]$ are not fixed points of the $\mathsf{E}$-channel ${\mathcal{I}}_{\mathcal{X}}^*$, it is clearly not sufficient. By \thmref{theorem:quantitative-bound-disturbance-WAY} we may infer that if $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, then the only observables that may be non-disturbed are those for which $[\mathsf{F}(y), {N\sub{\s}}]$ are fixed points of ${\mathcal{I}}^*_{\mathcal{X}}$, irrespective of how much coherence is initially present in the apparatus---recall that if $\mathsf{F}$ is either sharp, a rank-1 observable, or a coarse-graining of a sharp observable, then non-disturbance of $\mathsf{F}$ implies that $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. In fact, in such cases we may obtain a more refined necessary condition for non-disturbance in terms of commutation of $\mathsf{F}$ with some ``shifted'' variant of ${N\sub{\s}}$. \begin{lemma}\label{lemma:non-disturbance-WAY-multiplicability} Let ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. Consider an observable $\mathsf{F}$ that is non-disturbed by ${\mathcal{I}}$, and assume that $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. Then $\mathsf{F}$ commutes with both $\mathsf{E}$ and $\Delta {N\sub{\s}} := {\mathcal{I}}^*_{\mathcal{X}}({N\sub{\s}}) - {N\sub{\s}}$. If $[\mathsf{F}, {N\sub{\s}}]=\mathds{O}$, then $[\mathsf{F}, \Delta {N\sub{\s}}]=\mathds{O}$ necessarily holds. \end{lemma} \begin{proof} Non-disturbance implies that $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. If $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ also holds, then by \eq{eq:non-disturbance-inequality} $\mathsf{F}$ must commute with $\mathsf{E}$, and by \eq{eq:non-disturbance-inequality-WAY} we must have $[\mathsf{F}(y), {N\sub{\s}}] = {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}])$ for all $y$. Now, given that ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y)^2) = {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y))^2 = \mathsf{F}(y)^2$, by \corref{corollary:multiplicability} we have ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{F}(y) A) = \mathsf{F}(y) {\mathcal{I}}^*_{\mathcal{X}}(A)$ and ${\mathcal{I}}^*_{\mathcal{X}}( A \mathsf{F}(y)) = {\mathcal{I}}^*_{\mathcal{X}}(A) \mathsf{F}(y)$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$. As such, \begin{align*} {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{F}(y), {N\sub{\s}}]) & = [\mathsf{F}(y), {\mathcal{I}}^*_{\mathcal{X}}({N\sub{\s}})]. \end{align*} This implies that $[\mathsf{F}(y), {N\sub{\s}}] = [\mathsf{F}(y), {\mathcal{I}}^*_{\mathcal{X}}({N\sub{\s}})] \implies [\mathsf{F}(y), \Delta {N\sub{\s}}]=\mathds{O}$ for all $y$. That $[\mathsf{F}, {N\sub{\s}}]=\mathds{O}$ implies $[\mathsf{F}, \Delta {N\sub{\s}}] = \mathds{O}$ trivially follows from the above and the fact that ${\mathcal{I}}_{\mathcal{X}}^*(\mathds{O}) = \mathds{O}$.
\end{proof} \lemref{lemma:non-disturbance-WAY-multiplicability} implies that for any observable $\mathsf{F}$ satisfying $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ and $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, if $\mathsf{F}$ does not commute with ${N\sub{\s}}$ then it must fail to commute with ${\mathcal{I}}^*_{\mathcal{X}}({N\sub{\s}})$ to an equal extent. Interestingly, however, if it holds that ${\mathcal{I}}_{\mathcal{X}}^*({N\sub{\s}}) = {N\sub{\s}}$, so that $\Delta {N\sub{\s}} = \mathds{O}$, then \lemref{lemma:non-disturbance-WAY-multiplicability} does not rule out non-disturbance so long as $\mathsf{F}$ commutes with $\mathsf{E}$. In the special case where $\mathsf{E}$ is a sharp observable, the above result has the following consequence: \begin{corollary} Let ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for a sharp observable $\mathsf{E}$ with the instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. Let $\mathsf{F}$ be either sharp, rank-1, or a coarse-graining of a sharp observable. If $\mathsf{E}$ does not commute with ${N\sub{\s}}$, then $\mathsf{F}$ is non-disturbed by ${\mathcal{I}}$ only if $\mathsf{F}$ commutes with both $\mathsf{E}$ and a non-trivial quantity $\Delta {N\sub{\s}} \ne \mathds{O}$. \end{corollary} \begin{proof} Given that $\mathsf{F}$ is either sharp, rank-1, or a coarse-graining of a sharp observable, then non-disturbance implies that $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$. Moreover, since $\mathsf{E}$ is sharp and does not commute with ${N\sub{\s}}$, then by \lemref{lemma:fixed-points-instrument} it holds that ${\mathcal{I}}_{\mathcal{X}}^*({N\sub{\s}}) \ne {N\sub{\s}}$, and so $\Delta {N\sub{\s}} \ne \mathds{O}$. The claim follows from \lemref{lemma:non-disturbance-WAY-multiplicability}. \end{proof} \section{Repeatability, Measurability, and the Wigner-Araki-Yanase theorem}\label{sect:first-kind-measurements} The Wigner-Araki-Yanase theorem as formulated in Ref. \cite{Loveridge2011} states that for any (discrete) sharp observable (represented as a self-adjoint operator $A$) not commuting with the system part of a (bounded) conserved quantity, either the measurement (described by a normal measurement scheme) cannot be repeatable or cannot be ``accurate'', in the sense that $A$ is not measured by the scheme. It does not rule out approximate measurements with approximate repeatability properties, where approximate measurement is understood to mean that $A$ can be made statistically close to some unsharp observable which is actually measured. Therefore, WAY comprises both a strict impossibility part and a provision of conditions under which approximate measurements may be possible. As shown in Ref. \cite{Loveridge2011}, accurate measurements must also violate the Yanase condition: that the pointer observable commutes with the conserved quantity. While the WAY theorem has been developed further over the years, its full scope is still not known. In particular, while much of the previous work around the WAY theorem has focused on the ``measurability question''---which observables cannot be measured, or can only be measured approximately, given the conservation law---the role of disturbance has been examined much less thoroughly.
The measurability question can be tackled independently of the repeatability question, and will be analysed below. We begin by formulating a number of conditions that characterise the repeatability of an instrument, which are called upon at various points. We then address the measurability of observables independently of disturbance but in the presence of conservation, where some quantitative bounds on the error are given, which provide necessary conditions for measurability. We then prove the WAY theorem in the new setting, i.e., without assuming sharpness of the measured observable, sharpness of the pointer observable, unitarity of the interaction, or purity of the apparatus preparation. Both impossibility and possibility parts are contained in a single quantitative bound, which is referred to as the generalised WAY theorem here. The section concludes with a demonstration that the measurability part of WAY can be recovered without the Yanase condition, by imposing conservation laws on pointer objectification in addition to the measurement interaction between system and apparatus. \subsection{Measurements of the first kind, and repeatability}\label{sub:first} A special instance of a non-disturbing measurement is when an $\mathsf{E}$-instrument does not disturb $\mathsf{E}$ itself, i.e., when the statistics of a measurement of $\mathsf{E}$ are not affected by a prior non-selective measurement of $\mathsf{E}$. Such measurements are referred to as \emph{measurements of the first kind}, and the $\mathsf{E}$-instrument ${\mathcal{I}}$ corresponds to a measurement of the first kind exactly when $\mathsf{E} \subset {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ \cite{Lahti1991}. Necessary conditions for first-kindness can be obtained from \propref{prop:quantitative-bound-disturbance} and \thmref{theorem:quantitative-bound-disturbance-WAY}, by identifying $\mathsf{F}$ with $\mathsf{E}$. A subclass of measurements of the first kind are those which are \emph{repeatable}. Though repeatability is a standard assumption in many textbook treatments of quantum mechanics, the recognition that it is a property which a measurement may or may not enjoy appeared already in Wigner's 1952 contribution on the WAY theorem. However, within the general framework presented thus far, repeatability corresponds to a very special form of state change, possible only for a privileged class of observables---an observable $\mathsf{E}$ admits a repeatable measurement only if it is discrete \cite{Ozawa1984}, and all the effects have at least one eigenvector with eigenvalue 1 \cite{Busch1995}---and arising only for very special measurement implementations. ${\mathcal{I}}$ is a repeatable $\mathsf{E}$-instrument if \begin{align*} \mathrm{tr}[{\mathcal{I}}_y \circ {\mathcal{I}}_x(\rho) ] = \delta_{x,y}\mathrm{tr}[{\mathcal{I}}_x(\rho)] \qquad \forall \, \rho \in {\mathcal{S}}({\mathcal{H}\sub{\s}}), \, x,y \in {\mathcal{X}}, \end{align*} which implies that \begin{align}\label{eq:measurement-repeatable} {\mathcal{I}}_x^*(\mathsf{E}(y)) = \delta_{x,y}\mathsf{E}(x)\qquad \forall \, x,y\in {\mathcal{X}}. \end{align} The above definition is equivalent to ${\mathcal{I}}^*_x(\mathsf{E}(x)) = \mathsf{E}(x)$ for all $x$, since if this holds then ${\mathcal{I}}_x^*(\mathds{1}\sub{\s} - \mathsf{E}(x)) = \mathsf{E}(x) - \mathsf{E}(x) = \mathds{O}$, and hence, by positivity, ${\mathcal{I}}_x^*(\mathsf{E}(y)) = \mathds{O}$ for all $y \ne x$ \cite{Busch1990}. In other words, if ${\mathcal{I}}$ is a repeatable instrument, then repeated measurements by ${\mathcal{I}}$ are guaranteed (with probability one) to produce the same result.
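As a concrete contrast between the two notions, the following sketch (Python with NumPy; the qubit observables are toy choices of ours) verifies that the L\"uders instrument of a sharp qubit observable is repeatable, while the L\"uders instrument of a commutative unsharp observable is of the first kind but not repeatable, anticipating the remark that follows:
\begin{verbatim}
import numpy as np

# Sharp qubit observable: the Lueders instrument I_x^*(A) = P(x) A P(x)
# satisfies the repeatability condition I_x^*(E(y)) = delta_{xy} E(x).
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
repeatable_sharp = all(
    np.allclose(P[x] @ P[y] @ P[x], (x == y) * P[x])
    for x in range(2) for y in range(2))

# Commutative unsharp observable: first kind but not repeatable.
E = [np.diag([0.7, 0.3]), np.diag([0.3, 0.7])]
sqE = [np.sqrt(Ex) for Ex in E]   # elementwise sqrt suffices: E is diagonal
first_kind = all(
    np.allclose(sum(sqE[x] @ E[y] @ sqE[x] for x in range(2)), E[y])
    for y in range(2))
repeatable_unsharp = np.allclose(sqE[0] @ E[0] @ sqE[0], E[0])
print(repeatable_sharp, first_kind, repeatable_unsharp)   # True True False
\end{verbatim}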
It is straightforward to verify that if a measurement of $\mathsf{E}$ is repeatable, then it is also of the first kind, since \begin{align*} {\mathcal{I}}_{\mathcal{X}}^*(\mathsf{E}(x)) = {\mathcal{I}}^*_x(\mathsf{E}(x)) = \mathsf{E}(x). \end{align*} While the converse relation does not hold in general---a measurement can be of the first kind and not repeatable, such as is the case for a L\"uders instrument compatible with a commutative but unsharp observable---in the special case of sharp observables repeatability and first-kindness coincide (Theorem 1 in Ref. \cite{Lahti1991}). We now prove a useful result regarding the structure of repeatable instruments, and the measurement schemes that implement them. \begin{prop}\label{prop:repeatable-instrument-identity} Let ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-compatible instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$. If ${\mathcal{I}}$ is repeatable, then the following hold: \begin{enumerate}[(i)] \item For all $x\in {\mathcal{X}}$ and $A\in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, it holds that ${\mathcal{I}}_x^*(A) = {\mathcal{I}}_x^*(\mathsf{E}(x) A) = {\mathcal{I}}_x^*(A \mathsf{E}(x)) = {\mathcal{I}}_x^*(\mathsf{E}(x) A \mathsf{E}(x))$. \item For all $x\in {\mathcal{X}}$ and $A\in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, it holds that ${\mathcal{I}}_{\mathcal{X}}^*(\mathsf{E}(x) A) = {\mathcal{I}}_{\mathcal{X}}^*(A \mathsf{E}(x)) = {\mathcal{I}}_{\mathcal{X}}^*(\mathsf{E}(x) A \mathsf{E}(x)) = {\mathcal{I}}_x^*(A)$. \item For all $x\in {\mathcal{X}}$ and $n \in \nat$, it holds that $\mathsf{E}(x) = \Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)^n \otimes \mathds{1}\sub{\aa}) =\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^n)$. \item For all $x\in {\mathcal{X}}$, it holds that $\mathsf{E}(x)$ and $ \mathsf{Z}(x)$ have $1$ as an eigenvalue, and so there exist projection operators $\P(x) \in {\mathcal{L}_{p}}({\mathcal{H}\sub{\s}})$ and $\mathsf{Q}(x) \in {\mathcal{L}_{p}}({\mathcal{H}\sub{\aa}})$ which project onto the eigenvalue-1 eigenspaces of $\mathsf{E}(x)$ and $\mathsf{Z}(x)$, respectively. \item For all $x, y \in {\mathcal{X}}$, it holds that $\P(x) \mathsf{E}(y) = \P(x) \P(y) = \delta_{x,y} \P(x)$ and $\mathsf{Q}(x) \mathsf{Z}(y) = \mathsf{Q}(x) \mathsf{Q}(y) = \delta_{x,y} \mathsf{Q}(x)$. \item For all $x \in {\mathcal{X}}$ and $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, it holds that ${\mathcal{I}}_x^*(A) = {\mathcal{I}}_x^*(\P(x) A) = {\mathcal{I}}_x^*(A \P(x)) = {\mathcal{I}}_x^*(\P(x) A \P(x))$. \item For all $A \in {\mathcal{L}}({\mathcal{H}\sub{\aa}})$, it holds that $\Lambda^*(A) = \Lambda^*(\mathsf{Q}({\mathcal{X}}) A) = \Lambda^*(A \mathsf{Q}({\mathcal{X}})) = \Lambda^*(\mathsf{Q}({\mathcal{X}}) A \mathsf{Q}({\mathcal{X}}))$, where $\Lambda^*$ is the conjugate channel to ${\mathcal{I}}_{\mathcal{X}}^*$ defined in \eq{eq:conjugate-channel}, and $\mathsf{Q}({\mathcal{X}}) := \sum_{x\in {\mathcal{X}}} \mathsf{Q}(x)$. \item For all $x \in {\mathcal{X}}$ and $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, it holds that ${\mathcal{I}}_x^*(A) = \Gamma_\xi^{\mathcal{E}}(A \otimes \mathsf{Q}(x))$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate}[(i):] \item For each $x$, define $\mathsf{E}(x)^\perp := \mathds{1}\sub{\s} - \mathsf{E}(x)$. Note that by \eq{eq:measurement-repeatable}, repeatability implies that ${\mathcal{I}}_x^*(\mathsf{E}(x)^\perp) = \mathds{O}$. 
By \lemref{lemma:operation-annihilation} it holds that ${\mathcal{I}}_x^*(\mathsf{E}(x)^\perp A) = {\mathcal{I}}_x^*(A \mathsf{E}(x)^\perp) = \mathds{O}$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$. The claim immediately follows by noting that we may write $A = (\mathsf{E}(x) + \mathsf{E}(x)^\perp) A = A (\mathsf{E}(x) + \mathsf{E}(x)^\perp) = (\mathsf{E}(x) + \mathsf{E}(x)^\perp) A (\mathsf{E}(x) + \mathsf{E}(x)^\perp)$. \item For each $x$, we may write ${\mathcal{I}}_{\mathcal{X}}^*(\cdot) = {\mathcal{I}}_x^*(\cdot) + {\mathcal{I}}_{\overline{x}}^*(\cdot)$, where $\overline{x} := {\mathcal{X}} \backslash \{x\}$ is the complement of $x \equiv \{x\}$ in ${\mathcal{X}}$, and it holds that ${\mathcal{I}}_{\overline{x}}^*(\mathds{1}\sub{\s}) = \mathsf{E}(\overline{x}) \equiv \mathsf{E}(x)^\perp$. Moreover, note that by the same arguments as (i), for all $A\in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ it holds that ${\mathcal{I}}_{\overline{x}}^*(\mathsf{E}(x) A) = {\mathcal{I}}_{\overline{x}}^*(\mathsf{E}(\overline{x})^\perp A) = \mathds{O}$. We thus obtain the following: \begin{align*} {\mathcal{I}}_{\mathcal{X}}^*(\mathsf{E}(x) A) &= {\mathcal{I}}_x^*(\mathsf{E}(x) A) + {\mathcal{I}}_{\overline{x}}^*(\mathsf{E}(x) A)={\mathcal{I}}_x^*(\mathsf{E}(x) A) = {\mathcal{I}}_x^*(A). \end{align*} Similarly, it holds that ${\mathcal{I}}_{\mathcal{X}}^*( A \mathsf{E}(x)) = {\mathcal{I}}_{\mathcal{X}}^*(\mathsf{E}(x) A \mathsf{E}(x)) = {\mathcal{I}}_x^*(A)$. The claim immediately follows. \item The repeatability condition implies that for all $x\in {\mathcal{X}}$, it holds that $\mathsf{E}(x) = {\mathcal{I}}_x^*(\mathsf{E}(x)) = \Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)\otimes \mathsf{Z}(x))$. It follows that for any state $\rho \in {\mathcal{S}}({\mathcal{H}\sub{\s}})$, we have \begin{align*} \mathrm{tr}[\rho \mathsf{E}(x)] &=\mathrm{tr}[\rho \Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)\otimes \mathsf{Z}(x))] \\ &\leqslant \mathrm{tr}[\rho \Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)^2\otimes \mathds{1}\sub{\aa})]^{\frac{1}{2}} \mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s}\otimes \mathsf{Z}(x)^2)]^{\frac{1}{2}} \\ &\leqslant \mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)\otimes \mathds{1}\sub{\aa})]^{\frac{1}{2}} \mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))]^{\frac{1}{2}} \\ &=\mathrm{tr}[\rho \mathsf{E}(x)]. \end{align*} Here, the second line follows from the Cauchy-Schwarz inequality, the third line follows from the fact that $\mathsf{E}(x)$ and $\mathsf{Z}(x)$ are effects and so $\mathsf{E}(x)^2 \leqslant \mathsf{E}(x)$ and $\mathsf{Z}(x)^2 \leqslant \mathsf{Z}(x)$, and the final line follows from the fact that repeatability implies first-kindness and that ${\mathcal{M}}$ is a measurement scheme for $\mathsf{E}$. As the second inequality must be an equality, we thus have $\mathsf{E}(x) = \Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)^n \otimes \mathds{1}\sub{\aa}) =\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^n)$ for $n=1,2$. 
To show that the relations hold for all $n\in \nat$, note that for all $\rho$, the Cauchy-Schwarz inequality and the above arguments imply \begin{align*} 0 \leqslant \mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}}((\mathsf{E}(x)^n - \mathsf{E}(x)^{n+1})\otimes \mathds{1}\sub{\aa} )] &\leqslant \mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)^{2(n-1)}\otimes \mathds{1}\sub{\aa})]^{\frac{1}{2}}\mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}}((\mathsf{E}(x) -\mathsf{E}(x)^2)^2 \otimes \mathds{1}\sub{\aa})]^{\frac{1}{2}} \\ &\leqslant \mathrm{tr}[\rho\Gamma_\xi^{\mathcal{E}} (\mathsf{E}(x)^{2(n-1)}\otimes \mathds{1}\sub{\aa})]^{\frac{1}{2}} \mathrm{tr}[\rho \Gamma_\xi^{\mathcal{E}} ((\mathsf{E}(x)-\mathsf{E}(x)^2)\otimes \mathds{1}\sub{\aa})]^{\frac{1}{2}} = 0, \end{align*} and so it holds that $\Gamma_\xi^{\mathcal{E}}((\mathsf{E}(x)^n - \mathsf{E}(x)^{n+1})\otimes \mathds{1}\sub{\aa} )=\mathds{O}$. Similar steps show that $\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes (\mathsf{Z}(x)^n - \mathsf{Z}(x)^{n+1})) = \mathds{O}$. The claims are thus obtained by induction. \item Note that for any operation $\Phi^* : {\mathcal{L}}({\mathcal{K}}) \to {\mathcal{L}}({\mathcal{H}})$, it holds that $\|\Phi^*(A)\| \leqslant \| A\|$ for all $A \in {\mathcal{L}}({\mathcal{K}})$. As such, by (iii) we have $\|\mathsf{E}(x) \| = \|\Gamma_\xi^{\mathcal{E}}(\mathsf{E}(x)^2\otimes \mathds{1}\sub{\aa})\| \leqslant \| \mathsf{E}(x)^2\| = \| \mathsf{E}(x) \|^2$. But since $\mathsf{E}(x)$ is an effect it also holds that $\| \mathsf{E}(x)\| \geqslant \| \mathsf{E}(x)\| ^2$. It follows that $\|\mathsf{E}(x)\|$ is either zero or one. As we assume that $\mathsf{E}(x)$ is non-vanishing, $\| \mathsf{E}(x)\| =1$ follows. Similarly, we have $1 = \| \mathsf{E}(x) \| = \|\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))\| \leqslant \| \mathsf{Z}(x)\|$, and since $\mathsf{Z}(x)$ is an effect, then it must hold that $\|\mathsf{Z}(x)\|=1$. Now we shall show that $\mathsf{E}(x)$ has $1$ as an eigenvalue, i.e., there exists a unit-vector $\psi \in {\mathcal{H}\sub{\s}}$ such that $\mathsf{E}(x) \psi = \psi$. If this were not so, then we would have $\lim_{n \to \infty} \mathsf{E}(x)^n = \mathds{O} $, which would contradict (iii). Therefore, there exists a projection operator $\P(x)$ that projects onto the eigenvalue-1 eigenspace of $\mathsf{E}(x)$. Similar arguments hold for $\mathsf{Z}(x)$ and $\mathsf{Q}(x)$. \item For each $x$, define $\P^c(x) := \mathsf{E}(x) - \P(x)$. Since $\mathsf{E}(x)$ is an effect and $\P(x)$ projects onto the eigenvalue-1 eigenspace of $\mathsf{E}(x)$, it trivially holds that $\psi \in \supp(\P(x)) \implies \psi \in \ker(\P^c(x))$. Now, given that $\psi \in \supp(\P(x))$ implies that $\P(x) \psi = \psi$, and denoting the null vector in ${\mathcal{H}\sub{\s}}$ as $\emptyset$, we have \begin{align*} \emptyset& = (\mathds{1}\sub{\s} - \P(x)) \psi \nonumber \\ &= (\mathds{1}\sub{\s} - \mathsf{E}(x)) \psi + \P^c(x) \psi \nonumber \\ & =(\mathds{1}\sub{\s} - \mathsf{E}(x)) \psi \nonumber \\ & = \sum_{y \ne x} \mathsf{E}(y) \psi . \end{align*} By positivity of $\mathsf{E}(y)$, the above equation implies that \begin{align*} \sum_{y \ne x} \<\psi| \mathsf{E}(y) \psi\> = \sum_{y \ne x} \<\sqrt{\mathsf{E}(y)} \psi| \sqrt{\mathsf{E}(y)} \psi\> = 0, \end{align*} which can only be satisfied if $\sqrt{\mathsf{E}(y)} \psi = \emptyset \implies \mathsf{E}(y) \psi = \emptyset$ for all $y \ne x$.
We thus have $\psi \in \supp(\P(x)) \implies \psi \in \ker(\mathsf{E}(y)) \, \forall \, y \ne x$, and so the support of $\P(x)$ must be orthogonal to the support of $\mathsf{E}(y)$ for all $y \ne x$. That $\P(x)$ and $\P(y)$ for $x\ne y$ have orthogonal supports follows trivially. Similar arguments hold for $\mathsf{Q}(x)$, $\mathsf{Z}(y)$, and $\mathsf{Q}(y)$. \item Consider again $\P^c(x) := \mathsf{E}(x) - \P(x)$ as defined above. It trivially holds that $\mathds{O} \leqslant \P^c(x) < \mathds{1}\sub{\s}$, and so $\| \P^c(x)\| < 1$. We thus have $\lim_{n \to \infty } \P^c(x)^n = \mathds{O}$. Now note that since the supports of $\P(x)$ and $\P^c(x)$ are orthogonal, it holds that $\P(x) \P^c(x) = \P^c(x) \P(x) = \mathds{O}$. As such, for all $n\in \nat$ we have $\mathsf{E}(x)^n = \P(x) + \P^c(x)^n$. By (i)-(iii), it follows that $\mathsf{E}(x) = {\mathcal{I}}_x^*(\mathsf{E}(x)^n) = {\mathcal{I}}_x^*(\P(x)) + {\mathcal{I}}_x^*(\P^c(x)^n)$ for all $n \in \nat$, and so it must hold that ${\mathcal{I}}_x^*(\P^c(x)) = \lim_{n \to \infty } {\mathcal{I}}_x^*(\P^c(x)^n) = {\mathcal{I}}_x^*(\mathds{O}) = \mathds{O}$. By \lemref{lemma:operation-annihilation} it holds that ${\mathcal{I}}_x^*(\P^c(x) A) = {\mathcal{I}}_x^*(A \P^c(x)) = \mathds{O}$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, and so the claims follow by noting that we may write $\mathsf{E}(x) A = (\P(x) + \P^c(x)) A$, and so forth. \item By the same arguments as (vi), by defining $\mathsf{Q}^c(x) := \mathsf{Z}(x) - \mathsf{Q}(x)$, we may show that $\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Q}^c(x))=\mathds{O}$, which implies that $\Gamma_\xi^{\mathcal{E}}((\mathds{1}\sub{\s} \otimes \mathsf{Q}^c(x))B)= \Gamma_\xi^{\mathcal{E}}(B(\mathds{1}\sub{\s} \otimes \mathsf{Q}^c(x))) = \mathds{O}$ for all $B \in {\mathcal{L}}({\mathcal{H}\sub{\s}} \otimes {\mathcal{H}\sub{\aa}})$. By noting that $\mathds{1}\sub{\aa} = \mathsf{Q}({\mathcal{X}}) + \mathsf{Q}({\mathcal{X}})^\perp$ where $\mathsf{Q}({\mathcal{X}})^\perp = \sum_x \mathsf{Q}^c(x)$, it trivially follows that $\Lambda^*(A) = \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes A) = \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Q}({\mathcal{X}})A) = \Lambda^*(\mathsf{Q}({\mathcal{X}}) A)$ and so forth. \item By the same argument as (vii) it trivially follows that ${\mathcal{I}}_x^*(A) = \Gamma_\xi^{\mathcal{E}}(A \otimes \mathsf{Z}(x)) = \Gamma_\xi^{\mathcal{E}}(A \otimes \mathsf{Q}(x))$. \end{enumerate} \end{proof} Let us highlight one interesting property of repeatable instruments: if ${\mathcal{I}}$ is repeatable, then for all input states $\rho$, the output states will be perfectly distinguishable. For the input state $\rho$, we define the normalised post-measurement states as $\rho_x:= {\mathcal{I}}_x(\rho)/\mathrm{tr}[{\mathcal{I}}_x(\rho)]$ for any $x$ satisfying $\mathrm{tr}[{\mathcal{I}}_x(\rho)]>0$. By item (vi) of the above proposition, the Schr\"odinger picture operations of a repeatable instrument satisfy ${\mathcal{I}}_x(T) = \P(x) {\mathcal{I}}_x(T) \P(x)$ for all $x$ and $T \in {\mathcal{T}}({\mathcal{H}\sub{\s}})$, and so $\rho_x$ will only have support in the eigenvalue-1 eigenspace of $\mathsf{E}(x)$. But by item (v), such eigenvalue-1 eigenspaces are orthogonal, and so it holds that $\rho_x \rho_y = \rho_y \rho_x = \mathds{O}$ for all $x\ne y$. 
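The structure established in \propref{prop:repeatable-instrument-identity}, and the perfect distinguishability of the outputs, can be checked explicitly in perhaps the simplest repeatable scheme: a CNOT premeasurement of sharp $\sigma_z$ with a sharp pointer. The sketch below (Python with NumPy; the model is an illustrative choice of ours) verifies \eq{eq:measurement-repeatable}, item (iii), and the orthogonality of the normalised output states:
\begin{verbatim}
import numpy as np

dS = dA = 2
ket = lambda k: np.eye(2, dtype=complex)[:, k]
xi = np.outer(ket(0), ket(0).conj())                     # apparatus state
U = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
              [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)  # CNOT, system = control
Z = [np.outer(ket(x), ket(x).conj()) for x in range(2)]  # sharp pointer

def Gamma(M):
    # Gamma_xi(M) = tr_A[(1_S (x) xi) M]
    return np.einsum('ac,icja->ij', xi, M.reshape(dS, dA, dS, dA))

GammaE = lambda B: Gamma(U.conj().T @ B @ U)
E = [GammaE(np.kron(np.eye(2), Zx)) for Zx in Z]         # measured effects
Istar = lambda x, A: GammaE(np.kron(A, Z[x]))            # Heisenberg operations

# repeatability: I_x^*(E(y)) = delta_{xy} E(x)
assert all(np.allclose(Istar(x, E[y]), (x == y) * E[x])
           for x in range(2) for y in range(2))
# item (iii): E(x) = Gamma(1 (x) Z(x)^2); item (iv): ||E(x)|| = 1
assert all(np.allclose(E[x], GammaE(np.kron(np.eye(2), Z[x] @ Z[x])))
           for x in range(2))

# the remark above: output states are perfectly distinguishable
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)  # arbitrary input
def Iop(x, rho):   # Schroedinger operation I_x(rho)
    M = np.kron(np.eye(2), Z[x]) @ U @ np.kron(rho, xi) @ U.conj().T
    return np.einsum('iaja->ij', M.reshape(dS, dA, dS, dA))
r0, r1 = Iop(0, rho), Iop(1, rho)
print(np.allclose(r0 @ r1, 0))                           # True
\end{verbatim}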
\subsection{Measurability} Consider a measurement scheme ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ which measures an observable with effects $\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)) \equiv \Lambda^*(\mathsf{Z}(x))$, where $\Lambda^*$ is the conjugate channel to ${\mathcal{I}}_{\mathcal{X}}^*$ defined in \eq{eq:conjugate-channel}. Now, let $\mathsf{E}$ be the \emph{target} observable, i.e., the observable we wish to measure, which may differ from the measured observable, but has the same value space ${\mathcal{X}}$. For each outcome $x$, we define the difference between the effects of the measured observable and the effects of the target observable as \begin{align}\label{eq:error-equality} \epsilon(x) := \Lambda^*(\mathsf{Z}(x)) - \mathsf{E}(x). \end{align} The measurement error for outcome $x$ can be quantified as $\| \epsilon(x) \|$, whose operational meaning is the least upper bound of the discrepancy between the probabilities of that outcome arising for the measured and target observables: \begin{align*} \|\epsilon(x) \| = \sup_{\rho \in {\mathcal{S}}({\mathcal{H}\sub{\s}})} \bigg|\mathrm{tr}[\Lambda^*(\mathsf{Z}(x)) \rho] - \mathrm{tr}[\mathsf{E}(x) \rho] \bigg|. \end{align*} A global quantification of measurement error is thus \begin{align*} \epsilon := \max_{x\in {\mathcal{X}}} \|\epsilon(x)\|, \end{align*} and ${\mathcal{M}}$ is a measurement scheme for $\mathsf{E}$ if $\epsilon = 0$, that is, if the target observable is what is actually measured, so that $\Lambda^*(\mathsf{Z}(x)) = \mathsf{E}(x)$ for all $x$. We shall now show that conservation laws impose constraints on the commutation of the target observable with the system part of the conserved quantity, independently of any disturbance that may result as a consequence of the measurement. \begin{theorem}\label{theorem:measurability} Let ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an observable acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = {N\sub{\s}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s} \otimes {N\sub{\aa}}$ on average. Let $\|\epsilon(x)\|$ quantify the error in measuring the effects of the target observable $\mathsf{E}$. Then for all $x\in {\mathcal{X}}$ \begin{align}\label{eq:measurability-error-bound} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| \leqslant 2 \|{N\sub{\s}}\|\|\epsilon(x)\| + 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \big(2 \|\epsilon(x) \| + \| \mathsf{E}(x) - \mathsf{E}(x)^2 \|\big)^{\frac{1}{2}}. \end{align} \end{theorem} \begin{proof} By \eq{eq:error-equality}, we have \begin{align}\label{eq:error-quantity-equality-1} [\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) = [{N\sub{\s}}, \epsilon(x)] + [\Lambda^*(\mathsf{Z}(x)), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]). \end{align} Given that $N$ is additive while ${\mathcal{E}}$ conserves $N$ on average we may write $[\Lambda^*(\mathsf{Z}(x)), {N\sub{\s}}] = [\Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)), \Gamma_\xi^{\mathcal{E}}(N)]$.
Noting that $[\mathds{1}\sub{\s} \otimes \mathsf{Z}(x), N] = \mathds{1}\sub{\s} \otimes[ \mathsf{Z}(x), {N\sub{\aa}}]$, and that $\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) = \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes [\mathsf{Z}(x), {N\sub{\aa}}])$, then by the sesquilinear mapping $\<\<A|B\>\> = \Gamma_\xi^{\mathcal{E}}(A^* B) - \Gamma_\xi^{\mathcal{E}}(A^*)\Gamma_\xi^{\mathcal{E}}(B)$ and \corref{corollary:Norm-commutator-inequality} we obtain from \eq{eq:error-quantity-equality-1} the bound \begin{align*} \| [\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}])\| &\leqslant 2\| {N\sub{\s}}\| \|\epsilon(x)\| + 2\|\<\< N | N \>\>\|^{\frac{1}{2}} \|\<\< \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) | \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) \>\>\|^{\frac{1}{2}}. \end{align*} Now note that $\<\< N | N \>\> = \Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2$ by definition. Given that $\mathds{1}\sub{\s}\otimes \mathsf{Z}(x)$ and $\mathsf{E}(x)$ are effects and $\Gamma_\xi^{\mathcal{E}}$ is a channel, \lemref{lemma:unsharp-disturbance-bound} gives $\|\<\< \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) | \mathds{1}\sub{\s} \otimes \mathsf{Z}(x) \>\>\| \leqslant 2 \|\epsilon(x) \| + \| \mathsf{E}(x) - \mathsf{E}(x)^2 \|$. We thus obtain the bound given in \eq{eq:measurability-error-bound}. \end{proof} We see that the measurability bounds given in \thmref{theorem:measurability} are structurally very similar to the disturbance bounds given in \corref{corollary:WAY-disturbance-bound-unsharpness} (when we identify $\mathsf{F}$ with $\mathsf{E}$), with the only difference being that the terms ${\mathcal{I}}_{\mathcal{X}}^*([\mathsf{E}(x), {N\sub{\s}}])$ appearing in the disturbance bounds are replaced by $\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}])$ in the measurability bounds. Of course, this difference leads to a subtle distinction regarding the conditions under which measurement and non-disturbance can be achieved. Recall that if $\mathsf{E}$ commutes with ${N\sub{\s}}$, then \corref{corollary:WAY-disturbance-bound-unsharpness} does not impose any constraints on non-disturbance. But commutation of $\mathsf{E}$ with ${N\sub{\s}}$ does not in general make the left-hand side of \eq{eq:measurability-error-bound} vanish, since it may be the case that $[\mathsf{E}(x), {N\sub{\s}}]=\mathds{O}$ while $\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}])\ne \mathds{O}$. Indeed, it is simple to infer that if $\mathsf{E}$ is sharp then it is measurable only if $[\mathsf{E}(x), {N\sub{\s}}]=\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}])$, and so unless $\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}])= \mathds{O}$, even an $\mathsf{E}$ that commutes with ${N\sub{\s}}$ cannot be measured. If $[\mathsf{E}(x), {N\sub{\s}}] \ne \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}])$, then an observable $\mathsf{E}$ is measurable only if it is unsharp, and $\| \Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|$ is large. In the special case where ${\mathcal{E}}$ fully conserves $N$, then as shown in \lemref{lemma:conservation-variance-condition} the variance of the conserved quantity in the state of the apparatus must be large.
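As a numerical sanity check of \thmref{theorem:measurability}, the sketch below (Python with NumPy; the interaction, interaction time, apparatus state, and pointer are illustrative choices of ours) evaluates both sides of \eq{eq:measurability-error-bound} for a two-qubit scheme in which the unitary interaction fully conserves $N = \sigma_z\otimes\mathds{1} + \mathds{1}\otimes\sigma_z$ and the pointer is sharp $\sigma_z$, so that $[\mathsf{Z}(x), {N\sub{\aa}}] = \mathds{O}$ and hence $\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) = \mathds{O}$; the target observable is taken to be the measured one, so $\epsilon(x) = 0$:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expmi(H, t):
    # exp(-i t H) for Hermitian H
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def Gamma(xi, M, d=2):
    # Gamma_xi(M) = tr_A[(1_S (x) xi) M]
    return np.einsum('ac,icja->ij', xi, M.reshape(d, d, d, d))

N = np.kron(sz, I2) + np.kron(I2, sz)
U = expmi(np.kron(sx, sx) + np.kron(sy, sy), 0.4)   # fully conserves N
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
xi = np.outer(plus, plus.conj())
GammaE = lambda B: Gamma(xi, U.conj().T @ B @ U)

Z = [np.diag([1.0, 0.0]).astype(complex),
     np.diag([0.0, 1.0]).astype(complex)]           # sharp sigma_z pointer
E = [GammaE(np.kron(I2, Zx)) for Zx in Z]           # measured = target effects
mob = np.linalg.norm(GammaE(N @ N) - GammaE(N) @ GammaE(N), 2) ** 0.5
for Ex in E:
    lhs = np.linalg.norm(Ex @ sz - sz @ Ex, 2)      # epsilon(x) = 0, Yanase holds
    rhs = 2 * mob * np.linalg.norm(Ex - Ex @ Ex, 2) ** 0.5
    print(lhs <= rhs + 1e-12, round(lhs, 4), round(rhs, 4))
\end{verbatim}
The measured effects here are unsharp and do not commute with ${N\sub{\s}} = \sigma_z$, and the bound is satisfied strictly.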
Indeed, similarly to \propref{prop:quantitative-bound-disturbance-WAY-Fisher}, a large coherence in the apparatus preparation is a necessary condition for measurability: \begin{prop}\label{prop:measurability-Fisher} Let ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an observable acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ fully conserves an additive quantity $N = {N\sub{\s}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s} \otimes {N\sub{\aa}}$. Let $\|\epsilon(x)\|$ quantify the error in measuring the effects of the target observable $\mathsf{E}$. Then for all $x\in {\mathcal{X}}$ \begin{align}\label{eq:measurability-error-bound-Fisher} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| \leqslant 2 \|{N\sub{\s}}\|\|\epsilon(x)\| + \frac{1}{2} {\mathcal{Q}}({N\sub{\aa}}, \xi)^{\frac{1}{2}}, \end{align} where ${\mathcal{Q}}({N\sub{\aa}}, \xi)$ denotes the quantum Fisher information of ${N\sub{\aa}}$ in the state $\xi$. Additionally, if $\mathsf{E}$ is an extremal observable and ${\mathcal{M}}$ is a measurement scheme for $\mathsf{E}$, then for all $x\in {\mathcal{X}}$ \begin{align}\label{eq:measurability-bound-Fisher-extremal} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| \leqslant {\mathcal{Q}}({N\sub{\aa}}, \xi)^{\frac{1}{2}} \|\mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}}. \end{align} \end{prop} \begin{proof} Let $\{q_i, \phi_i\}$ be an arbitrary ensemble of unit vectors that satisfies $\xi = \sum_i q_i \pr{\phi_i}$. We may thus write $\Gamma_\xi^{\mathcal{E}}(\cdot) = \sum_i q_i \Gamma_{\phi_i}^{\mathcal{E}}(\cdot)$. Given the additivity of $N$ and the conservation law, we may rewrite \eq{eq:error-quantity-equality-1} as \begin{align*} [\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) = [{N\sub{\s}}, \epsilon(x)] + \sum_i q_i \bigg([\Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)), \Gamma_{\phi_i}^{\mathcal{E}}(N)] - \Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes[ \mathsf{Z}(x), {N\sub{\aa}}])\bigg). \end{align*} By the sesquilinear mappings $\<\<A|B\>\>_i := \Gamma_{\phi_i}^{\mathcal{E}}(A^* B) - \Gamma_{\phi_i}^{\mathcal{E}}(A^*) \Gamma_{\phi_i}^{\mathcal{E}}(B)$, \corref{corollary:Norm-commutator-inequality} and \lemref{lemma:conservation-variance-condition}, we obtain the bounds \begin{align*} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| & \leqslant 2\|{N\sub{\s}}\| \|\epsilon(x)\| + 2 \sum_i q_i \var{{N\sub{\aa}}, \phi_i} ^{\frac{1}{2}} \|\Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^2) - \Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2\|^{\frac{1}{2}}. \end{align*} Using the inequalities $\|\Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^2) - \Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2\|^{\frac{1}{2}} \leqslant 1/2$, and the concavity of the square root, we obtain the bound \begin{align*} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| & \leqslant 2\|{N\sub{\s}}\| \|\epsilon(x)\| + \left( \sum_i q_i \var{{N\sub{\aa}}, \phi_i} \right)^{\frac{1}{2}}. \end{align*} By choosing the ensemble $\{q_i, \phi_i\}$ that gives the quantum Fisher information as in \eq{eq:QFI-defn}, we arrive at the bound in \eq{eq:measurability-error-bound-Fisher}. Now assume that $\mathsf{E}$ is an extremal observable.
This implies that for any pair of observables $\mathsf{E}^{(1)}$ and $\mathsf{E}^{(2)}$, and any $\lambda \in [0,1]$, the effects of $\mathsf{E}$ can be decomposed as $\mathsf{E}(x) = \lambda \mathsf{E}^{(1)}(x) + (1-\lambda) \mathsf{E}^{(2)}(x)$ only if $\mathsf{E} = \mathsf{E}^{(1)} = \mathsf{E}^{(2)}$. It follows that if ${\mathcal{M}}$ is a measurement scheme for $\mathsf{E}$, that is, if $\epsilon = 0$, then $\Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)) = \mathsf{E}(x)$ for all $i$. Consequently, we obtain the bounds $\|\Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^2) - \Gamma_{\phi_i}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2\|^{\frac{1}{2}} \leqslant \| \mathsf{E}(x) - \mathsf{E}(x)^2\|^{\frac{1}{2}}$ for all $i$, which gives \begin{align*} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| & \leqslant 2\left( \sum_i q_i \var{{N\sub{\aa}}, \phi_i} \right)^{\frac{1}{2}}\| \mathsf{E}(x) - \mathsf{E}(x)^2\|^{\frac{1}{2}}. \end{align*} Once again choosing the ensemble that gives the quantum Fisher information, we arrive at \eq{eq:measurability-bound-Fisher-extremal}. \end{proof} \subsection{The generalised Wigner-Araki-Yanase theorem} The classic theorem connecting measurement, conservation, and disturbance (in the form of repeatability) was formulated by Araki and Yanase in 1960 \cite{Araki1960}, capturing in a fairly general setting an observation due to Wigner given in 1952 \cite{E.Wigner1952, Busch2010} regarding spin measurements in the presence of angular momentum conservation. This theorem has undergone several refinements and generalisations in the intervening years; here we give a formulation which goes beyond existing work in several respects. \begin{theorem}[Generalised WAY theorem]\label{theorem:Generalized-WAY} Let ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average, where ${N\sub{\s}} \in {\mathcal{L}_{s}}({\mathcal{H}\sub{\s}})$ and ${N\sub{\aa}} \in {\mathcal{L}_{s}}({\mathcal{H}\sub{\aa}})$. If either ${\mathcal{I}}$ is repeatable, or the Yanase condition $[\mathsf{Z}, {N\sub{\aa}}]=\mathds{O}$ is satisfied, then for all $x\in {\mathcal{X}}$ \begin{align}\label{eq:WAY-bound} \| [\mathsf{E}(x), {N\sub{\s}}] \| \leqslant 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \|\mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} . \end{align} \end{theorem} \begin{proof} Let us first assume that ${\mathcal{I}}$ is repeatable. Since repeatability implies first-kindness, which is a specific instance of non-disturbance, then by \thmref{theorem:quantitative-bound-disturbance-WAY} and identifying $\mathsf{F}$ with $\mathsf{E}$ we must have \begin{align*} \| [\mathsf{E}(x), {N\sub{\s}}] - {\mathcal{I}}^*_{\mathcal{X}}([\mathsf{E}(x), {N\sub{\s}}]) \|& \leqslant 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \|{\mathcal{I}}^*_{\mathcal{X}}(\mathsf{E}(x)^2) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} \qquad \qquad \qquad \forall \, x \in {\mathcal{X}}. 
\end{align*} By item (ii) of \propref{prop:repeatable-instrument-identity}, we have ${\mathcal{I}}^*_{\mathcal{X}}(\mathsf{E}(x)^2) = \mathsf{E}(x)$ and ${\mathcal{I}}^*_{\mathcal{X}}([\mathsf{E}(x), {N\sub{\s}}]) = {\mathcal{I}}^*_{\mathcal{X}}(\mathsf{E}(x){N\sub{\s}}) - {\mathcal{I}}^*_{\mathcal{X}}({N\sub{\s}} \mathsf{E}(x)) =\mathds{O}$. We thus obtain \eq{eq:WAY-bound}. Now let us abandon the requirement of repeatability. By \thmref{theorem:measurability}, ${\mathcal{M}}$ is a measurement scheme for $\mathsf{E}$ only if \begin{align*} \|[\mathsf{E}(x), {N\sub{\s}}] - \Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) \| \leqslant 2 \|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|^{\frac{1}{2}} \| \mathsf{E}(x) - \mathsf{E}(x)^2 \|^{\frac{1}{2}} \qquad \qquad \qquad \forall \, x \in {\mathcal{X}}. \end{align*} If the Yanase condition is satisfied, then we have $\Lambda^*([\mathsf{Z}(x), {N\sub{\aa}}]) = \Lambda^*(\mathds{O}) = \mathds{O}$, and so once again we obtain \eq{eq:WAY-bound}. \end{proof} This theorem goes beyond the original WAY theorem (and its descendants) in the setting of bounded conserved quantities in the following respects: it holds for general interaction channels (with average conservation), unsharp target observables, unsharp pointer observables, and mixed probe states. It also provides an operationally motivated quantitative bound from which the original theorem can be obtained as a special case. If $\mathsf{E}$ is a sharp observable, the upper bound of \eq{eq:WAY-bound} vanishes, in which case an additive conservation law together with either repeatability or the Yanase condition necessitates commutation of $\mathsf{E}$ with the system part of the conserved quantity. Moreover, the above theorem allows us to prise apart the different roles played by the notions of average and full conservation as regards the impossibility and possibility statements of the WAY theorem. Specifically, that a sharp observable not commuting with the conserved quantity cannot be accurately or repeatably measured arises under average conservation alone. In contradistinction, that an apparatus with a large coherence in the conserved quantity is necessary for accurate or repeatable measurements of an (unsharp) observable not commuting with the conserved quantity is borne out only in the case of full conservation, which can be immediately inferred from \lemref{lemma:conservation-variance-condition}, \propref{prop:quantitative-bound-disturbance-WAY-Fisher}, and \propref{prop:measurability-Fisher}, while noting that in the case of average conservation it may be the case that $\|\Gamma_\xi^{\mathcal{E}}(N^2) - \Gamma_\xi^{\mathcal{E}}(N)^2 \|$ is large even though ${\mathcal{Q}}({N\sub{\aa}}, \xi)$ as defined in \eq{eq:QFI-defn} is small. That such a difference was not noticed until now is a consequence of the fact that only normal measurement schemes were considered, and that for unitary channels full and average conservation coincide. \subsection{The Wigner-Araki-Yanase theorem without the Yanase condition} Let us consider the Yanase condition \cite{Yanase1961, Ozawa2002} in more detail.
Traditionally, the Yanase condition is justified by applying the repeatability part of the WAY theorem to the pointer observable; if the pointer observable is sharp, and we consider its measurement as being implemented by a conservative interaction between one measurement apparatus and another, then the pointer observable will admit a repeatable measurement only if it commutes with the conserved quantity. Repeatability of the measurement of the pointer observable is deemed a natural requirement for the possibility of measurement, since an experimenter should be able to confirm the measurement outcome by repeated observations of the apparatus: there must be a stable record of the measurement outcomes. However, such an argument suffers from two drawbacks. Firstly, it only applies to sharp pointer observables. Secondly, it runs into the problem of infinite regress, since we have now shifted the role of the ultimate pointer observable from the first apparatus to the second; repeatability of the first pointer observable can be abandoned if the second admits a repeatable measurement, in which case the experimenter may continue to verify the measurement outcomes. Below, we shall show that the measurability part of the WAY theorem can be justified without an appeal to the Yanase condition, but rather by imposing a conservation law on the full measurement process (including pointer objectification). Thus far, we have only considered the case where the measurement interaction ${\mathcal{E}}$ between system and apparatus conserves an additive quantity $N$. However, pointer objectification will also result in state changes, and it may be the case that the expected value of $N$ will change as a result. Now let us provide a generalised prescription of measurement schemes that also captures the state changes of the apparatus. Recall that ${\mathcal{M}}:=({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ is a measurement scheme for an observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}}$ if $\mathsf{E}(x) = \Gamma_\xi\circ {\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))$. Now consider the tuple $\tilde {\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{J}})$, where ${\mathcal{J}} := \{{\mathcal{J}}_x : x\in {\mathcal{X}}\}$ is an instrument acting in ${\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}$. $\tilde {\mathcal{M}}$ is also a measurement scheme for $\mathsf{E}$ if $\mathsf{E}(x) = \Gamma_\xi\circ {\mathcal{J}}_x^*(\mathds{1}\sub{\s} \otimes \mathds{1}\sub{\aa})$. It is straightforward to show that this is satisfied if ${\mathcal{J}}$ is compatible with the ``Heisenberg-evolved'' pointer observable \begin{align}\label{eq:heisenberg-pointer} \mathsf{Z}^\tau(x) := {\mathcal{E}}^*(\mathds{1}\sub{\s}\otimes \mathsf{Z}(x)), \end{align} that is, if ${\mathcal{J}}_x^*(\mathds{1}\sub{\s} \otimes \mathds{1}\sub{\aa}) = \mathsf{Z}^\tau(x)$. We say that $\tilde {\mathcal{M}}$ obeys a full (average) conservation law if the channel ${\mathcal{J}}_{\mathcal{X}}$ fully (on average) conserves a quantity $N$. The operations ${\mathcal{J}}_x$ can be constructed as a sequential application of the channel ${\mathcal{E}}$ followed by the operations of some $\mathsf{Z}$-compatible instrument acting in ${\mathcal{H}\sub{\aa}}$, the latter of which provides a physical characterisation of the pointer objectification process.
In such a case, a sufficient condition for conservation of $N$ by ${\mathcal{J}}_{\mathcal{X}}$ is the conservation of $N$ by both ${\mathcal{E}}$ and the $\mathsf{Z}$-channel. But it may be the case that ${\mathcal{E}}$ fully conserves $N$ while the $\mathsf{Z}$-channel only conserves $N$ on average, and vice versa. In such cases, the channel ${\mathcal{J}}_{\mathcal{X}}$ will only conserve $N$ on average. By \lemref{lemma:fixed-points-instrument}, it holds that if ${\mathcal{J}}_{\mathcal{X}}$ conserves $N$ on average, and if either $\mathsf{Z}^\tau$ is sharp or if ${\mathcal{J}}_{\mathcal{X}}$ also fully conserves $N$, then \begin{align}\label{eq:weak-Yanase} [\mathsf{Z}^\tau, N]=\mathds{O}. \end{align} This commutation relation is known as the weak Yanase condition \cite{Tukiainen2017}. We note that if ${\mathcal{E}}$ conserves $N$ on average and if either $\mathsf{Z}^\tau$ is sharp or if ${\mathcal{E}}$ also fully conserves $N$, then the Yanase condition implies the weak Yanase condition. First, let us assume that $\mathsf{Z}^\tau$ is sharp. Since $\mathsf{Z}(x)$ is an effect, by two-positivity of CP maps we have $\mathsf{Z}^\tau(x)= {\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)) \geqslant {\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^2) \geqslant {\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2 = \mathsf{Z}^\tau(x)$, and so we have ${\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)^2) = {\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))^2$. On the other hand, if ${\mathcal{E}}$ fully conserves $N$ then ${\mathcal{E}}^*(N^2) = {\mathcal{E}}^*(N)^2 = N^2$. In either case, by \corref{corollary:multiplicability} we have \begin{align*} [\mathsf{Z}^\tau(x), N] = [{\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)), {\mathcal{E}}^*(N)] = {\mathcal{E}}^*([\mathds{1}\sub{\s} \otimes \mathsf{Z}(x), N]) = {\mathcal{E}}^*(\mathds{1}\sub{\s} \otimes[ \mathsf{Z}(x), {N\sub{\aa}}]), \end{align*} and so if $[ \mathsf{Z}(x), {N\sub{\aa}}]=\mathds{O}$, then $[\mathsf{Z}^\tau(x), N]=\mathds{O}$. Moreover, if ${\mathcal{E}}(\cdot) = U(\cdot)U^*$ is a unitary channel, and ${\mathcal{E}}$ conserves $N$, then $ [\mathsf{Z}^\tau(x), N] = [U^*(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))U,N] = U^*(\mathds{1}\sub{\s} \otimes[\mathsf{Z}(x), {N\sub{\aa}} ])U$. In such a case the weak Yanase condition is equivalent to the Yanase condition; multiplying both sides of the equality $U^*(\mathds{1}\sub{\s} \otimes[\mathsf{Z}(x), {N\sub{\aa}} ])U = \mathds{O}$ by $U$ from the left and by $U^*$ from the right shows that $[\mathsf{Z}^\tau(x), N]= \mathds{O} \iff [\mathsf{Z}(x), {N\sub{\aa}} ]=\mathds{O}$. However, in general it may be the case that the weak Yanase condition is satisfied but the Yanase condition is violated. The following proposition shows that if the weak Yanase condition is satisfied, then the measurability part of the WAY theorem will hold. Moreover, we see that there are cases where a large coherence of the conserved quantity in the apparatus is necessary for good measurements even without a full conservation law---for example, if either the interaction channel ${\mathcal{E}}$ or the $\mathsf{Z}$-channel only conserves $N$ on average, but $\mathsf{Z}^\tau$ is sharp, in which case the weak Yanase condition is guaranteed to hold.
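The relation between the two conditions for unitary conserving channels can be illustrated numerically; the sketch below (Python with NumPy; all model choices are ours) confirms that a pointer commuting with ${N\sub{\aa}}$ yields a Heisenberg-evolved pointer satisfying the weak Yanase condition, while a pointer violating the Yanase condition fails it, in line with the equivalence derived above:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expmi(H, t):
    # exp(-i t H) for Hermitian H
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

N = np.kron(sz, I2) + np.kron(I2, sz)
U = expmi(np.kron(sx, sx) + np.kron(sy, sy), 0.9)    # conserving unitary

def weak_yanase(Z_effects):
    ok = True
    for Zx in Z_effects:
        Ztau = U.conj().T @ np.kron(I2, Zx) @ U      # Heisenberg-evolved pointer
        ok = ok and np.allclose(Ztau @ N - N @ Ztau, 0)
    return ok

Z_good = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # [Z, N_A] = 0
Z_bad = [(I2 + s * sx) / 2 for s in (+1, -1)]        # [Z, N_A] != 0
print(weak_yanase(Z_good), weak_yanase(Z_bad))       # True False
\end{verbatim}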
\begin{prop}\label{prop:weak-Yanase-WAY} Let ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an observable acting in ${\mathcal{H}\sub{\s}}$, and let $\| \epsilon(x)\|$ quantify the error in measuring the effects of the target observable $\mathsf{E}$. If ${\mathcal{M}}$ satisfies the weak Yanase condition $[\mathsf{Z}^\tau, N]=\mathds{O}$, where $\mathsf{Z}^\tau$ is the Heisenberg-evolved pointer observable defined in \eq{eq:heisenberg-pointer} and $N = {N\sub{\s}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s} \otimes {N\sub{\aa}}$, then for all $x\in {\mathcal{X}}$ \begin{align}\label{eq:weak-Yanase-WAY-variance} \| [\mathsf{E}(x), {N\sub{\s}}] \| \leqslant 2 \| {N\sub{\s}} \|\| \epsilon(x)\| + 2 \var{{N\sub{\aa}}, \xi}^{\frac{1}{2}} \big(2 \| \epsilon(x)\| + \|\mathsf{E}(x) - \mathsf{E}(x)^2 \|\big)^{\frac{1}{2}}, \end{align} and \begin{align}\label{eq:weak-Yanase-WAY-QFI} \| [\mathsf{E}(x), {N\sub{\s}}] \| \leqslant 2 \| {N\sub{\s}} \|\| \epsilon(x)\| + \frac{1}{2} {\mathcal{Q}}({N\sub{\aa}}, \xi)^{\frac{1}{2}}, \end{align} where $\var{{N\sub{\aa}}, \xi}$ is the variance of ${N\sub{\aa}}$ in $\xi$, and ${\mathcal{Q}}({N\sub{\aa}}, \xi)$ is the quantum Fisher information of ${N\sub{\aa}}$ in $\xi$. Additionally, if $\mathsf{E}$ is an extremal observable, and ${\mathcal{M}}$ is a measurement scheme for $\mathsf{E}$, then for all $x\in {\mathcal{X}}$ \begin{align}\label{eq:weak-Yanase-WAY-QFI-extremal} \| [\mathsf{E}(x), {N\sub{\s}}] \| \leqslant {\mathcal{Q}}({N\sub{\aa}}, \xi)^{\frac{1}{2}}\|\mathsf{E}(x) - \mathsf{E}(x)^2\|^{\frac{1}{2}}. \end{align} \end{prop} \begin{proof} By \eq{eq:error-equality}, we may write $ \epsilon(x) := \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x)) - \mathsf{E}(x) \equiv \Gamma_\xi(\mathsf{Z}^\tau(x)) - \mathsf{E}(x)$. By additivity of $N$ we have $\Gamma_\xi(N) = {N\sub{\s}} + \mathrm{tr}[{N\sub{\aa}} \xi] \mathds{1}\sub{\s}$, and so we may write \begin{align*} [\mathsf{E}(x), {N\sub{\s}}] = [{N\sub{\s}}, \epsilon(x)] + [\Gamma_\xi(\mathsf{Z}^\tau(x)), \Gamma_\xi(N)] , \end{align*} which gives us the bound \begin{align*} \|[\mathsf{E}(x), {N\sub{\s}}]\| \leqslant 2 \|{N\sub{\s}}\| \| \epsilon(x)\| + \|[\Gamma_\xi(\mathsf{Z}^\tau(x)), \Gamma_\xi(N)] \|. \end{align*} Since $\Gamma_\xi$ is a channel, and the weak Yanase condition $[\mathsf{Z}^\tau, N]=\mathds{O}$ holds, then by \corref{corollary:Norm-commutator-inequality} we obtain \begin{align*} \|[\Gamma_\xi(\mathsf{Z}^\tau(x)), \Gamma_\xi(N)] \| \leqslant 2 \| \Gamma_\xi(N^2) - \Gamma_\xi(N)^2 \|^{\frac{1}{2}} \| \Gamma_\xi(\mathsf{Z}^\tau(x)^2) - \Gamma_\xi(\mathsf{Z}^\tau(x))^2\|^{\frac{1}{2}}. \end{align*} As shown in \lemref{lemma:conservation-variance-condition}, additivity of $N$ implies that $\|\Gamma_\xi(N^2) - \Gamma_\xi(N)^2\| = \var{{N\sub{\aa}}, \xi}$. On the other hand, by \lemref{lemma:unsharp-disturbance-bound} we obtain $\| \Gamma_\xi(\mathsf{Z}^\tau(x)^2) - \Gamma_\xi(\mathsf{Z}^\tau(x))^2\| \leqslant 2 \| \epsilon(x) \| + \|\mathsf{E}(x) - \mathsf{E}(x)^2 \|$. As such, we obtain the bound in \eq{eq:weak-Yanase-WAY-variance}. The bounds given in \eq{eq:weak-Yanase-WAY-QFI} and \eq{eq:weak-Yanase-WAY-QFI-extremal} are trivially obtained by adapting the arguments in \propref{prop:measurability-Fisher} to the above.
\end{proof} \section{Faithful fixed states and measurement disturbance}\label{sect:faithful-fixed-points} In \sect{sect:quantifying-disturbance} we obtained quantitative bounds giving necessary conditions for non-disturbance of an observable $\mathsf{F}$ by an $\mathsf{E}$-instrument ${\mathcal{I}}$ implemented in the presence of an additive conservation law, whether the conservation law is full or average. We saw from \lemref{lemma:non-disturbance-WAY-multiplicability} that if $\mathsf{F}^2$ is contained in the fixed-point set of the $\mathsf{E}$-channel ${\mathcal{I}}^*_{\mathcal{X}}$---guaranteed by non-disturbance if $\mathsf{F}$ is either sharp, a rank-1 observable, or a coarse-graining of a sharp observable---then independently of the amount of coherence initially present in the apparatus it must hold that $\mathsf{F}$ commutes with both $\mathsf{E}$ and $\Delta {N\sub{\s}} := {\mathcal{I}}_{\mathcal{X}}^*({N\sub{\s}}) - {N\sub{\s}}$. Now we shall see that if the fixed-point set of the $\mathsf{E}$-channel ${\mathcal{I}}_{\mathcal{X}}^*$ is a von Neumann algebra, which is guaranteed if ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ contains a faithful state, then similar constraints on non-disturbance will hold for arbitrary observables. First, let us prove a useful lemma. \begin{lemma}\label{lemma:faithful-fixedpoint-commutant} Let ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$. Assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average, and that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a von Neumann algebra. Then for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, the following implication holds: $A\in {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) \implies [A, N\sub{{\mathcal{S}}} ] \in {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) \implies [A, N\sub{{\mathcal{S}}} ] \in \mathsf{E}'$. \end{lemma} \begin{proof} Recall that for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, we have ${\mathcal{I}}^*_{\mathcal{X}}(A) = \Gamma_\xi^{\mathcal{E}}(A \otimes \mathds{1}\sub{\aa})$, where $\Gamma_\xi^{\mathcal{E}}$ is the channel defined in \eq{eq:Gamma-U}. Average conservation of $N$ by ${\mathcal{E}}$ implies that \begin{align*} {N\sub{\s}} + \mathrm{tr}[{N\sub{\aa}} \xi] \mathds{1}\sub{\s} = \Gamma_\xi^{\mathcal{E}}( {N\sub{\s}}\otimes \mathds{1}\sub{\aa}) + \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s}\otimes {N\sub{\aa}} ). \end{align*} Since ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra, it follows that for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, $A \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) \implies A^*A, A A^* \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ which, by \corref{corollary:multiplicability}, implies that $A\Gamma_\xi^{\mathcal{E}}(B) = \Gamma_\xi^{\mathcal{E}}((A \otimes \mathds{1}\sub{\aa})B)$ and $ \Gamma_\xi^{\mathcal{E}}(B)A = \Gamma_\xi^{\mathcal{E}}(B(A \otimes \mathds{1}\sub{\aa}))$ for all $B\in {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})$. 
Therefore, for all $A \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ we have \begin{align*} [A, {N\sub{\s}}] &= [A, \Gamma_\xi^{\mathcal{E}}({N\sub{\s}} \otimes \mathds{1}\sub{\aa})] + [A, \Gamma_\xi^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes {N\sub{\aa}})] \nonumber \\ & = \Gamma_\xi^{\mathcal{E}}([A\otimes \mathds{1}\sub{\aa}, {N\sub{\s}} \otimes \mathds{1}\sub{\aa} ]) + \Gamma_\xi^{\mathcal{E}}([A \otimes \mathds{1}\sub{\aa}, \mathds{1}\sub{\s} \otimes {N\sub{\aa}}]) \nonumber \\ & = \Gamma_\xi^{\mathcal{E}}([A, {N\sub{\s}} ]\otimes \mathds{1}\sub{\aa}) = {\mathcal{I}}^*_{\mathcal{X}}([A, {N\sub{\s}}]). \end{align*} Consequently, we see that $A \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) \implies [A, {N\sub{\s}}] \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. But as shown in \lemref{lemma:fixed-points-instrument}, if ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a von Neumann algebra then ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) \subset \mathsf{E}'$. It follows that $A \in {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) \implies [A, {N\sub{\s}}] \in \mathsf{E}'$. \end{proof} We are now ready to prove the following: \begin{theorem}\label{theorem:faithful-fixedpoint-WAY} Let ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$. Assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average, and that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a von Neumann algebra. The following hold: \begin{enumerate}[(i)] \item ${\mathcal{I}}$ does not disturb an observable $\mathsf{F}$ only if $\mathsf{F}$ commutes with both $\mathsf{E}$ and $\Delta {N\sub{\s}} := {\mathcal{I}}_{\mathcal{X}}^*({N\sub{\s}}) - {N\sub{\s}}$, with $[\mathsf{F}, {N\sub{\s}}]= \mathds{O}$ being sufficient for $[\mathsf{F}, \Delta {N\sub{\s}}]=\mathds{O}$. \item ${\mathcal{I}}$ is a measurement of the first kind only if $\mathsf{E}$ is a commutative observable that commutes with $N\sub{{\mathcal{S}}}$. \item ${\mathcal{I}}$ is repeatable only if $\mathsf{E}$ is sharp and commutes with ${N\sub{\s}}$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[(i):] \item ${\mathcal{I}}$ does not disturb $\mathsf{F}$ only if $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. Given that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a von Neumann algebra then it also holds that $\mathsf{F}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, and so by \lemref{lemma:non-disturbance-WAY-multiplicability} $\mathsf{F}$ must commute with both $\mathsf{E}$ and $\Delta {N\sub{\s}} := {\mathcal{I}}^*_{\mathcal{X}}({N\sub{\s}}) - {N\sub{\s}}$. But note that if $[\mathsf{F}, {N\sub{\s}}]=\mathds{O}$ and $\mathsf{F}$ is non-disturbed, then it is guaranteed that $[\mathsf{F}, \Delta {N\sub{\s}}]=\mathds{O}$. \item ${\mathcal{I}}$ is a measurement of the first kind only if $\mathsf{E} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. But by \lemref{lemma:fixed-points-instrument} ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) \subset \mathsf{E}'$, and so $\mathsf{E}$ must be commutative. Now, let us define ${N\sub{\s}}(t):=e^{\mathfrak{i} t \mathsf{E}(x)} {N\sub{\s}} e^{-\mathfrak{i} t \mathsf{E}(x)}$. 
We may write \begin{align*} \frac{d}{dt}{N\sub{\s}}(t) =\mathfrak{i} [\mathsf{E}(x), {N\sub{\s}}(t)], \end{align*} from which we obtain \begin{align*} {N\sub{\s}}(t)&={N\sub{\s}} + \mathfrak{i} \int^t_0 dt_1 [\mathsf{E}(x), {N\sub{\s}}(t_1)] \\ &= {N\sub{\s}} + \mathfrak{i} \int^t_0 d t_1 [\mathsf{E}(x), {N\sub{\s}}] - \int^t_0 dt_1 \int^{t_1}_0 d t_2 \, e^{\mathfrak{i} t_2 \mathsf{E}(x)}[\mathsf{E}(x), [\mathsf{E}(x), {N\sub{\s}}]] e^{- \mathfrak{i} t_2 \mathsf{E}(x)} \\ &= {N\sub{\s}} + \mathfrak{i} t[\mathsf{E}(x), {N\sub{\s}}]. \end{align*} In the final line we have used the conservation law, together with \lemref{lemma:faithful-fixedpoint-commutant}, which implies that $[[\mathsf{E}(x), {N\sub{\s}}], \mathsf{E}(x)]=\mathds{O}$ holds for all $x$. We thus obtain the inequality \begin{eqnarray*} 2\| {N\sub{\s}} \| \geqslant \| {N\sub{\s}}(t) - {N\sub{\s}} \|= |t| \| [\mathsf{E}(x), {N\sub{\s}}]\| \end{eqnarray*} for all $t$. Given that ${N\sub{\s}}$ is a bounded operator, this is clearly satisfied only if $[\mathsf{E}(x), {N\sub{\s}}]=\mathds{O}$. \item Since repeatability implies first-kindness, then by (ii) $\mathsf{E}$ must commute with ${N\sub{\s}}$. Sharpness of $\mathsf{E}$ follows from \propref{prop:repeatable-instrument-identity} which gives ${\mathcal{I}}_{\mathcal{X}}^*(\mathsf{E}(x)^2) = \mathsf{E}(x)$ for a repeatable ${\mathcal{I}}$, and the fact that if ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra then $\mathsf{E} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ implies that $\mathsf{E}^2 \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. \end{enumerate} \end{proof} We shall now give two examples where ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra, and so the implications of \thmref{theorem:faithful-fixedpoint-WAY} hold. \begin{lemma}\label{lemma:Luders-algebra} Consider the L\"uders $\mathsf{E}$-instrument ${\mathcal{I}}^L$ acting in ${\mathcal{H}\sub{\s}}$, defined in \eq{eq:luders}. If either (i) $\dim({\mathcal{H}\sub{\s}}) <\infty$, or (ii) $\mathsf{E}$ is commutative, then ${\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*)$ is a von Neumann algebra. \end{lemma} \begin{proof} Let us first consider (i). Define the complete mixture $\omega:= \mathds{1}\sub{\s}/ \dim({\mathcal{H}\sub{\s}})$, which is faithful. It follows trivially that ${\mathcal{I}}^L_{\mathcal{X}}(\omega) = \omega$, and so ${\mathcal{F}}({\mathcal{I}}^L_{\mathcal{X}})$ contains a faithful state $\omega$. By \lemref{lemma:Lindblad} ${\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*)$ is a von Neumann algebra. Now let us consider (ii). Recall that ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra if ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) = \{K_i, K_i^*\}'$, with $\{K_i\}$ any Kraus representation of ${\mathcal{I}}_{\mathcal{X}}$ \cite{Bratteli1998}. But for a L\"uders instrument, we have $\{K_i, K_i^*\}' = \{\sqrt{\mathsf{E}(x)}\}' = \mathsf{E}'$. While $\mathsf{E}' \subset {\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*)$ always holds, it was observed that in infinite-dimensional systems there exists $\mathsf{E}$ for which $ {\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*) \not\subset \mathsf{E}'$ \cite{Arias2002,Weihua2009}. However, it was shown in \cite{Busch1998} that for two-valued observables, it always holds that ${\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*) = \mathsf{E}'$. 
Since two-valued observables are commutative, this led to the conjecture that the fixed-point set of the L\"uders $\mathsf{E}$-channel is the commutant of $\mathsf{E}$ for all commutative observables \cite{Arias2002}, making ${\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*)$ a von Neumann algebra, which was later proven to be the case \cite{Weihua2010, Prunaru2011}. \end{proof} Let us highlight an interesting consequence of \lemref{lemma:Luders-algebra}. \begin{corollary}\label{Luders-WAY} Let ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for a commutative observable $\mathsf{E}$ with the L\"uders instrument ${\mathcal{I}}^L$ acting in ${\mathcal{H}\sub{\s}}$. If ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average, then $\mathsf{E}$ commutes with ${N\sub{\s}}$. \end{corollary} \begin{proof} Since $\mathsf{E}$ is commutative we have $\mathsf{E} \subset \mathsf{E}'$, and by \lemref{lemma:Luders-algebra} ${\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*) = \mathsf{E}'$ is a von Neumann algebra. Moreover, it holds that $\mathsf{E} \subset {\mathcal{F}}({{\mathcal{I}}^L_{\mathcal{X}}}^*)$, so that the L\"uders instrument for a commutative observable is a measurement of the first kind. It follows from item (ii) of \thmref{theorem:faithful-fixedpoint-WAY} that $\mathsf{E}$ must commute with ${N\sub{\s}}$. \end{proof} \begin{lemma}\label{lemma:fuzzy-sharp} Assume that $\dim({\mathcal{H}\sub{\s}}) <\infty$, and let $\mathsf{G}:= \{\mathsf{G}(z) : z = 1, \dots , \dim({\mathcal{H}\sub{\s}})\}$ be a rank-1 sharp observable acting in ${\mathcal{H}\sub{\s}}$. Let $\mathsf{F}$ be an observable and $M$ be an invertible stochastic matrix such that \begin{align*} &\mathsf{F}(y) = \sum_z M_{y, z} \mathsf{G}(z), & \mathsf{G}(z) = \sum_y M_{z, y}^{-1} \mathsf{F}(y). \end{align*} An instrument ${\mathcal{I}}$ does not disturb $\mathsf{F}$ only if ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra. \end{lemma} \begin{proof} Non-disturbance of $\mathsf{F}$ implies that $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ and hence, by linearity and the invertibility of $M$, that $\mathsf{G} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. Since $\mathsf{G}(z)$ is a rank-$1$ projection, it follows that \begin{align*} \mathrm{tr}[\mathsf{G}(z) {\mathcal{I}}_{\mathcal{X}}(\mathsf{G}(z))] = \mathrm{tr}[{\mathcal{I}}_{\mathcal{X}}^*(\mathsf{G}(z)) \mathsf{G}(z)] = \mathrm{tr}[\mathsf{G}(z) \mathsf{G}(z)] = \mathrm{tr}[\mathsf{G}(z)] = 1 , \end{align*} and so ${\mathcal{I}}_{\mathcal{X}}(\mathsf{G}(z)) =\mathsf{G}(z)$. Consequently, we may construct the faithful state $\omega = \sum_z p_z \mathsf{G}(z) $ with $p_z >0$ and $\sum_z p_z =1$, so that $\omega \in {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$. By \lemref{lemma:Lindblad}, ${\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$ is a von Neumann algebra. \end{proof} \section{Non-faithful fixed states and measurement disturbance}\label{sect:non-faithful-fixed-points} In this section we analyse the structure of the fixed-point set of arbitrary channels, which need not contain a faithful state. From here, the results of the previous section are generalised. We then provide novel quantitative bounds for first-kind measurements which, in particular, complement the impossibility part of WAY by imposing additional conditions that must be satisfied for repeatable measurements. Due to the Schauder–Tychonoff fixed point theorem \cite{Fixed-points-appl}, all channels $\Phi : {\mathcal{T}}({\mathcal{H}\sub{\s}}) \to {\mathcal{T}}({\mathcal{H}\sub{\s}})$ have at least one fixed state.
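To make the notion of a fixed state concrete, the following Python/NumPy sketch (an illustration of ours, not part of the formal development) computes the fixed state of a simple qubit channel as the eigenvalue-$1$ eigenvector of its transfer matrix. We choose amplitude damping as the example channel precisely because its unique fixed state is pure.

\begin{verbatim}
import numpy as np

# Amplitude damping channel on a qubit, with Kraus operators K0, K1.
g = 0.3  # damping parameter; an arbitrary illustrative choice
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])

# Transfer matrix on vectorised (row-major) density operators:
# vec(Phi(rho)) = T vec(rho), with T = sum_i K_i (x) conj(K_i).
T = sum(np.kron(K, K.conj()) for K in (K0, K1))

# Every channel has a fixed state: an eigenvector of T with eigenvalue 1.
vals, vecs = np.linalg.eig(T)
k = np.argmin(np.abs(vals - 1.0))
rho = vecs[:, k].reshape(2, 2)
rho = rho / np.trace(rho)           # normalise to unit trace
rho = (rho + rho.conj().T) / 2      # remove numerical anti-Hermitian noise

print(np.round(rho.real, 6))
# -> [[1. 0.]
#     [0. 0.]]
# The unique fixed state is |0><0|: pure, and hence not faithful.
\end{verbatim}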
However, it may be that none of these are faithful. In such a case, the fixed-point set of the dual channel is not necessarily a von Neumann algebra, but rather forms an operator space \cite{Intro-Operator-Space}. This setting has been much less investigated, and its analysis forms the first part of this section. While the discussion thus far has been applicable to infinite-dimensional systems---except in some examples---in this section we shall always assume that $d:= \dim({\mathcal{H}\sub{\s}}) < \infty$. \subsection{Fixed-point structure of arbitrary channels} Consider a channel $\Phi : {\mathcal{T}}({\mathcal{H}\sub{\s}}) \to {\mathcal{T}}({\mathcal{H}\sub{\s}})$, and its dual in the Heisenberg picture $\Phi^*: {\mathcal{L}}({\mathcal{H}\sub{\s}}) \to {\mathcal{L}}({\mathcal{H}\sub{\s}})$. We may define the channels \begin{align}\label{eq:Phi-av-defn} & \Phi_\mathrm{av}(\cdot):= \lim_{N\to \infty} \frac{1}{N}\sum_{n=1}^N \Phi^{n}(\cdot), &\Phi^*_\mathrm{av}(\cdot):= \lim_{N\to \infty} \frac{1}{N}\sum_{n=1}^N \Phi^{*n}(\cdot), \end{align} where $\Phi^{n}$ denotes $n$ consecutive applications of $\Phi$. Note that these limits exist since $d <\infty$. According to the Jordan decomposition theorem, $\Phi^*$ can be written as a sum of eigenvalue-weighted projections onto its generalised eigenspaces, together with nilpotent parts leaving those subspaces invariant; $\Phi^*_\mathrm{av}$ corresponds to the projection onto the subspace with eigenvalue $1$. The fixed-point set ${\mathcal{F}}(\Phi^*)$ forms an operator space, i.e., a norm-closed vector subspace of ${\mathcal{L}}({\mathcal{H}\sub{\s}})$, and $\Phi^*_\mathrm{av}$ is a CP projection onto ${\mathcal{F}}(\Phi^*)$. \begin{lemma}\label{lemma:Phi-av-prop} Consider the channels $\Phi_\mathrm{av}^*$ and $\Phi_\mathrm{av}$ defined in \eq{eq:Phi-av-defn}. These have the following properties: \begin{enumerate}[(i)] \item $\Phi^* \circ \Phi^*_\mathrm{av}=\Phi^*_\mathrm{av} \circ \Phi^* = \Phi^*_\mathrm{av} \circ \Phi^*_\mathrm{av}=\Phi^*_\mathrm{av}$ and $\Phi \circ \Phi_\mathrm{av} = \Phi_\mathrm{av} \circ \Phi = \Phi_\mathrm{av}\circ \Phi_\mathrm{av} = \Phi_\mathrm{av}$. \item $\Phi^*_\mathrm{av}({\mathcal{L}}({\mathcal{H}\sub{\s}}))={\mathcal{F}}(\Phi^*_\mathrm{av}) = {\mathcal{F}}(\Phi^*)$ and $\Phi_\mathrm{av}({\mathcal{T}}({\mathcal{H}\sub{\s}})) = {\mathcal{F}}(\Phi_\mathrm{av}) = {\mathcal{F}}(\Phi)$. \end{enumerate} \end{lemma} \begin{proof} (i) is trivial, and so we shall only prove (ii). Let us first consider the Heisenberg picture channel $\Phi_\mathrm{av}^*$. That ${\mathcal{F}}(\Phi^*) \subset {\mathcal{F}}(\Phi^*_\mathrm{av})$ is trivial. Conversely, for any $A\in {\mathcal{F}}(\Phi^*_\mathrm{av})$, by (i) we have $\Phi^*(A) = \Phi^*\circ \Phi^*_\mathrm{av}(A)=\Phi^*_\mathrm{av}(A) = A$, and therefore ${\mathcal{F}}(\Phi^*_\mathrm{av}) \subset {\mathcal{F}}(\Phi^*)$. It follows that ${\mathcal{F}}(\Phi^*_\mathrm{av}) = {\mathcal{F}}(\Phi^*)$. Similarly, for all $A\in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ it holds that $\Phi^*_\mathrm{av}\circ \Phi^*_\mathrm{av}(A) =\Phi^*_\mathrm{av}(A)$, and thus $\Phi^*_\mathrm{av}({\mathcal{L}}({\mathcal{H}\sub{\s}})) \subset {\mathcal{F}}(\Phi^*_\mathrm{av})$. That ${\mathcal{F}}(\Phi^*_\mathrm{av}) \subset \Phi^*_\mathrm{av}({\mathcal{L}}({\mathcal{H}\sub{\s}}))$ is trivial, and so we also have $\Phi^*_\mathrm{av}({\mathcal{L}}({\mathcal{H}\sub{\s}})) = {\mathcal{F}}(\Phi^*_\mathrm{av})$.
The relations in (ii) for the Schr\"odinger picture channel $\Phi_\mathrm{av}$ follow from similar arguments. \end{proof} Now consider the state \begin{align}\label{eq:rho-0} \rho_0:= \Phi_\mathrm{av}\left(\frac{1}{d} \mathds{1}\sub{\s}\right). \end{align} By \lemref{lemma:Phi-av-prop}, it holds that $\rho_0 \in {\mathcal{F}}(\Phi_\mathrm{av})= {\mathcal{F}}(\Phi)$. We denote by $P$ the minimal support projection of $\rho_0$: \begin{align}\label{eq:P-defn} P:= \min \{Q : Q \text{ is a projection, } \rho_0 = Q \rho_0 Q \}. \end{align} In other words, for all projections $Q$ such that $\rho_0 = Q \rho_0 Q$, it holds that $Q \geqslant P$. Note that if $P=\mathds{1}\sub{\s}$ then ${\mathcal{F}}(\Phi)$ contains a faithful state. The following lemma provides some useful properties of $P$. \begin{lemma}\label{lemma:P-properties} Consider the state $\rho_0$ defined in \eq{eq:rho-0}, with the minimal support projection $P$ as defined in \eq{eq:P-defn}, and $P^\perp := \mathds{1}\sub{\s} - P$ its orthogonal complement. The following hold: \begin{enumerate}[(i)] \item $\Phi^*_\mathrm{av}(P)=\mathds{1}\sub{\s}$ and $\Phi_\mathrm{av}^*(P^\perp) = \mathds{O}$. \item For all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$, $\Phi^*_\mathrm{av}(A) = \Phi^*_\mathrm{av}(P A P)$. \item $P=\min \{Q: Q\mbox{ is a projection, } \rho = Q \rho Q \ \forall \, \rho \in {\mathcal{F}}(\Phi)\}$. \item $P=\min \{Q: Q\mbox{ is a projection, } \Phi^*_\mathrm{av}(Q)=\mathds{1}\sub{\s}\}$. \item $\Phi^*(P)\geqslant P$ and $\Phi^*(P^{\perp}) \leqslant P^{\perp}$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i):] \item Since $\Phi_\mathrm{av}^*$ is a channel, and $\mathds{O} < P \leqslant \mathds{1}\sub{\s}$, it follows that $\mathds{O} \leqslant \Phi_\mathrm{av}^*(P) \leqslant \mathds{1}\sub{\s}$. But by \eq{eq:rho-0} $\mathrm{tr}[ \Phi^*_\mathrm{av}(P)] = d \, \mathrm{tr}[\rho_0 P] =d$, and so $\Phi_\mathrm{av}^*(P) = \mathds{1}\sub{\s}$. It trivially follows that $\Phi_\mathrm{av}^*(P^\perp) = \Phi_\mathrm{av}^*(\mathds{1}\sub{\s}) - \Phi_\mathrm{av}^*(P) = \mathds{1}\sub{\s} - \mathds{1}\sub{\s} = \mathds{O}$. \item Since $P$ is positive, then by (i) and \lemref{lemma:operation-annihilation} it holds that $ \Phi^*_\mathrm{av}(P^{\perp}A) = \Phi^*_\mathrm{av}(AP^{\perp}) = \mathds{O}$ for all $A$. The claim follows by noting that we may write $A = (P + P^\perp) A (P + P^\perp)$. \item By \lemref{lemma:Phi-av-prop} and (ii), for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ and $\rho \in {\mathcal{F}}(\Phi)$ we have $\mathrm{tr}[A \rho] = \mathrm{tr}[\Phi_\mathrm{av}^*(A) \rho] = \mathrm{tr}[\Phi_\mathrm{av}^*(P A P) \rho] = \mathrm{tr}[A P \rho P]$, and so $\rho \in {\mathcal{F}}(\Phi) \implies \rho = P \rho P$. Since $P$ is the minimal support projection on $\rho_0 \in {\mathcal{F}}(\Phi)$, the claim follows. \item By (i), $\Phi^*_\mathrm{av}(P)=\mathds{1}\sub{\s}$ holds. Suppose another projection $Q$ satisfies $\Phi^*_\mathrm{av}(Q)=\mathds{1}\sub{\s}$. Then (ii) implies that $ \Phi^*_\mathrm{av}(PQP) = \Phi^*_\mathrm{av}(Q) =\mathds{1}\sub{\s}$. As we have $1=\mathrm{tr}[(\mathds{1}\sub{\s}/d)\Phi^*_\mathrm{av}(PQP)] =\mathrm{tr}[\rho_0 PQP]=\mathrm{tr}[\rho_0 Q]$, it follows that $Q\geqslant P$.
\item Since $P$ is the smallest projection satisfying $\rho_0 = P \rho_0 P$, while $\mathrm{tr}[\rho_0 \Phi^*(P)] =\mathrm{tr}[\Phi(\rho_0) P]=\mathrm{tr}[\rho_0 P]=1$, it follows that $\Phi^*(P) \geqslant P$, and hence $\Phi^*(P^\perp) = \Phi^*(\mathds{1}\sub{\s}) - \Phi^*(P) \leqslant \mathds{1}\sub{\s} - P = P^\perp$. \end{enumerate} \end{proof} Let us now define the operations \begin{align}\label{eq:Phi-P-av} &\Phi^*_{\mathrm{av}, P}(\cdot) := P\Phi^*_\mathrm{av}(\cdot)P, &\Phi^*_P(\cdot) := P\Phi^*(\cdot)P. \end{align} Note that $\Phi_{\mathrm{av},P}^* : {\mathcal{L}}({\mathcal{H}\sub{\s}}) \to {\mathcal{L}}( {\mathcal{H}\sub{\s}})$ is not necessarily unital, since $\Phi^*_{\mathrm{av},P}(\mathds{1}\sub{\s})=P \leqslant \mathds{1}\sub{\s}$. However, the restriction of $\Phi^*_{\mathrm{av},P}$ to ${\mathcal{L}}(P{\mathcal{H}\sub{\s}}) \to {\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, which is also denoted by $\Phi^*_{\mathrm{av},P}$, is unital and hence a channel, since $P$ is the identity in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$ and $\Phi^*_{\mathrm{av},P}(P) = P$. The same holds for $\Phi_{P}^*$. \begin{lemma}\label{lemma:fav1} Consider the operations defined in \eq{eq:Phi-P-av}. The following hold: \begin{enumerate}[(i)] \item $\Phi^*_P(A) = \Phi^*_P(P A P)$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$. \item $\Phi^*_{\mathrm{av}, P}(A) = \Phi^*_{\mathrm{av}, P}(PAP)$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$. \item $\Phi^*_{\mathrm{av}, P}$ is a completely positive projection ${\mathcal{L}}({\mathcal{H}\sub{\s}}) \to {\mathcal{L}}(P{\mathcal{H}\sub{\s}})$. \item ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})= \Phi^*_{\mathrm{av},P}({\mathcal{L}}(P{\mathcal{H}\sub{\s}}))$. \item $\Phi^*_\mathrm{av}$ is a bijection from ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})$ to ${\mathcal{F}}(\Phi^*)$. \item The inverse of $\Phi^*_\mathrm{av}: {\mathcal{F}}(\Phi^*_{\mathrm{av},P}) \to {\mathcal{F}}(\Phi^*)$ is $\mathrm{Ad}_P: A \mapsto P A P$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i):] \item By item (v) of \lemref{lemma:P-properties}, it holds that $\mathds{O} \leqslant \Phi^*_P(P^\perp) = P \Phi^*(P^\perp) P \leqslant P P^\perp P =\mathds{O} $, and so $\Phi^*_P(P^\perp) = \mathds{O}$. By \lemref{lemma:operation-annihilation}, it holds that $\Phi_P^*(P^\perp A) = \Phi_P^*(A P^\perp)= \mathds{O}$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$. Since $A = (P + P^\perp) A (P + P^\perp)$, the claim follows. \item By item (ii) of \lemref{lemma:P-properties}, $\Phi_{\mathrm{av}}^*(A) = \Phi_{\mathrm{av}}^*(P A P)$. The claim immediately follows. \item By item (i) of \lemref{lemma:Phi-av-prop} and (ii) above, we have \begin{align*} \Phi^*_{\mathrm{av}, P} \circ \Phi^*_{\mathrm{av}, P}(A) = \Phi^*_{\mathrm{av}, P} (P \Phi^*_{\mathrm{av}}(A) P) = \Phi^*_{\mathrm{av}, P} \circ \Phi^*_{\mathrm{av}}(A) =P\Phi^*_\mathrm{av}\circ\Phi^*_\mathrm{av}(A)P = P\Phi^*_\mathrm{av}(A)P =\Phi^*_{\mathrm{av}, P}(A). \end{align*} \item By (iii), it follows that for any $A \in {\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, $\Phi^*_{\mathrm{av},P}(A) \in {\mathcal{F}}(\Phi^*_{\mathrm{av},P})$ and so $\Phi^*_{\mathrm{av},P}({\mathcal{L}}(P{\mathcal{H}\sub{\s}})) \subset {\mathcal{F}}(\Phi^*_{\mathrm{av},P})$. The converse is trivial. \item For all $A \in {\mathcal{F}}(\Phi^*) = {\mathcal{F}}(\Phi^*_\mathrm{av})$, there exists an operator $P A P \in {\mathcal{F}}(\Phi_{\mathrm{av},P}^*)$ such that $\Phi_\mathrm{av}^*(P A P) = \Phi_\mathrm{av}^*(A) = A$. 
Therefore, $\Phi_\mathrm{av}^*$ is surjective. Now assume that there exists $A \in {\mathcal{F}}(\Phi_{\mathrm{av}, P}^*)$ such that $\Phi^*_\mathrm{av}(A)=\mathds{O}$. This implies that $A = \Phi_{\mathrm{av}, P}^*(A) = P\Phi_\mathrm{av}^*(A)P = \mathds{O}$. Therefore, $\Phi_\mathrm{av}^*$ is injective. \item Follows from above. \end{enumerate} \end{proof} The above results have the following useful consequence: \begin{prop}\label{prop:fav2} Consider the operations $\Phi_{\mathrm{av},P}^*$ and $\Phi^*_P$ defined in \eq{eq:Phi-P-av}. It holds that \begin{align*} {\mathcal{F}}(\Phi^*_{\mathrm{av},P})= {\mathcal{F}}(\Phi^*_P) = P {\mathcal{F}}(\Phi_\mathrm{av}^*)P = P{\mathcal{F}}(\Phi^*)P \end{align*} is a von Neumann algebra in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$. \end{prop} \begin{proof} Recall that the operation $\Phi_{\mathrm{av},P}^*: {\mathcal{L}}(P{\mathcal{H}\sub{\s}}) \to {\mathcal{L}}(P{\mathcal{H}\sub{\s}})$ is unital, where the unit in ${\mathcal{L}}(P{\mathcal{H}\sub{\s}})$ is $P$. Moreover, $\rho_0$ as defined in \eq{eq:rho-0} is a faithful fixed point of $\Phi_{\mathrm{av},P}$ in ${\mathcal{T}}(P {\mathcal{H}\sub{\s}})$. By \lemref{lemma:Lindblad}, the fixed-point set ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})$ is a von Neumann algebra in ${\mathcal{L}}(P{\mathcal{H}\sub{\s}})$. Now we need only show that ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})= {\mathcal{F}}(\Phi^*_P) = P {\mathcal{F}}(\Phi_\mathrm{av}^*)P = P{\mathcal{F}}(\Phi^*)P$. That $ P {\mathcal{F}}(\Phi_\mathrm{av}^*)P = P{\mathcal{F}}(\Phi^*)P$ trivially follows from \lemref{lemma:Phi-av-prop}, which gives ${\mathcal{F}}(\Phi^*) = {\mathcal{F}}(\Phi_\mathrm{av}^*)$. That ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})= P{\mathcal{F}}(\Phi^*)P$ follows from item (vi) of \lemref{lemma:fav1}, since the map $\mathrm{Ad}_P : A \mapsto PAP$ is a bijection from ${\mathcal{F}}(\Phi^*)$ to ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})$. To show that ${\mathcal{F}}(\Phi^*_{\mathrm{av},P})= {\mathcal{F}}(\Phi^*_P) $, let us first define the operation $\Phi_{P, \mathrm{av}}^*:{\mathcal{L}}({\mathcal{H}\sub{\s}}) \to {\mathcal{L}}(P{\mathcal{H}\sub{\s}})$ as \begin{align*} \Phi_{P, \mathrm{av}}^*(\cdot) &:= \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N \Phi_P^{*n}(\cdot). \end{align*} But by item (i) of \lemref{lemma:fav1}, $\Phi^*_P(A) = \Phi^*_P(P A P)$, and so $\Phi_P^{*n}(A) = P \Phi^{*n}(A) P$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ and $n \in \nat$. It follows that \begin{align*} \Phi_{P, \mathrm{av}}^*(\cdot) & = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N P\Phi^{*n}(\cdot)P = P \Phi_{\mathrm{av}}^*(\cdot) P =: \Phi^*_{\mathrm{av}, P}(\cdot). \end{align*} Since $\Phi_{\mathrm{av}, P}^* = \Phi_{P, \mathrm{av}}^*$, it trivially follows that $\Phi_P^* \circ \Phi_{\mathrm{av}, P}^* = \Phi_{\mathrm{av}, P}^* \circ \Phi_P^* = \Phi_{\mathrm{av}, P}^* \circ \Phi_{\mathrm{av}, P}^* = \Phi_{\mathrm{av}, P}^*$. Therefore, ${\mathcal{F}}(\Phi^*_{\mathrm{av},P}) = {\mathcal{F}}(\Phi^*_P)$ can be shown by the same arguments as in \lemref{lemma:Phi-av-prop}. \end{proof} \subsection{Measurement disturbance revisited} We are now ready to address the question of measurement disturbance, generalising the observations of \sect{sect:faithful-fixed-points}. As before, let ${\mathcal{I}}:= \{{\mathcal{I}}_x: x\in {\mathcal{X}}\}$ be an $\mathsf{E}$-compatible instrument, with ${\mathcal{I}}_{\mathcal{X}}(\cdot) := \sum_x {\mathcal{I}}_x(\cdot)$ the corresponding $\mathsf{E}$-channel.
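As a concrete running example (ours, and not required for any of the proofs below), the following Python/NumPy sketch constructs the L\"uders $\mathsf{E}$-channel of an unsharp binary qubit observable and checks numerically that each effect is a fixed point of ${\mathcal{I}}^*_{\mathcal{X}}$, i.e., that the measurement is of the first kind, while a non-commuting observable is disturbed.

\begin{verbatim}
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Unsharp binary qubit observable: E(1) = (1 + 0.8 sx)/2, E(2) = 1 - E(1).
E1 = (I2 + 0.8 * sx) / 2
E = [E1, I2 - E1]

def psd_sqrt(A):
    """Square root of a positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

roots = [psd_sqrt(Ex) for Ex in E]

def I_dual(A):
    """Heisenberg-picture Luders E-channel:
    I*_X(A) = sum_x sqrt(E(x)) A sqrt(E(x))."""
    return sum(r @ A @ r for r in roots)

# First-kindness: every effect of E is a fixed point of I*_X.
assert all(np.allclose(I_dual(Ex), Ex) for Ex in E)

# An observable failing to commute with E is disturbed: here I*_X(sz) = 0.6 sz.
print(np.allclose(I_dual(sz), sz))   # -> False
\end{verbatim}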
By \eq{eq:Phi-av-defn} and \eq{eq:Phi-P-av} we define \begin{align*} {\mathcal{I}}_\mathrm{av}^*(\cdot) &:= \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N {\mathcal{I}}_{\mathcal{X}}^{*n}(\cdot), \qquad {\mathcal{I}}_{\mathrm{av},P}^*(\cdot) := P {\mathcal{I}}_\mathrm{av}^*(\cdot) P, \qquad {\mathcal{I}}_P^*(\cdot) := P {\mathcal{I}}_{\mathcal{X}}^*(\cdot) P, \end{align*} where as in \eq{eq:P-defn}, $P$ is the minimal support projection of $\rho_0 := {\mathcal{I}}_\mathrm{av}(\frac{1}{d} \mathds{1}\sub{\s})$, which corresponds to the minimal projection on the support of ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$. By \propref{prop:fav2}, ${\mathcal{F}}({\mathcal{I}}_{\mathrm{av},P}^*) = {\mathcal{F}}({\mathcal{I}}_P^*) =P {\mathcal{F}}({\mathcal{I}}^*_\mathrm{av})P = P {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})P$ is a von Neumann algebra in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$. We define by $P \mathsf{E} P := \{P \mathsf{E}(x) P : x\in {\mathcal{X}}\}$ the restriction of $\mathsf{E}$ to an observable in $P{\mathcal{H}\sub{\s}}$, which satisfies $\sum_x P \mathsf{E}(x) P = P$, and $(P \mathsf{E} P) ' := \{A \in {\mathcal{L}}(P {\mathcal{H}\sub{\s}}) : [P \mathsf{E}(x) P, A] = \mathds{O} \, \forall x\}$ denotes the commutant of $P \mathsf{E} P$ in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$. $P \mathsf{F} P$ is similarly defined. Before generalising \thmref{theorem:faithful-fixedpoint-WAY} for the case where ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ may not contain any faithful states, and thus ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ may not necessarily be a von Neumann algebra, let us first prove a generalisation of \lemref{lemma:faithful-fixedpoint-commutant}. \begin{lemma}\label{lemma:non-faithful-fixedpoint-commutant} Let ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$. It holds that ${\mathcal{F}}({\mathcal{I}}_P^*) \subset (P \mathsf{E} P) '$. Additionally, if ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average, then for all $A\in {\mathcal{L}}(P {\mathcal{H}\sub{\s}})$ the following implications hold: $A\in {\mathcal{F}}({\mathcal{I}}_P^*) \implies [A, P N\sub{{\mathcal{S}}} P ] \in {\mathcal{F}}({\mathcal{I}}_P^*) \implies [A, P N\sub{{\mathcal{S}}} P ] \in (P\mathsf{E} P)'$. \end{lemma} \begin{proof} By \eq{eq:Gamma-U}, let us define the operation $\Gamma_{\xi,P}^{\mathcal{E}} : {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}) \to {\mathcal{L}}(P {\mathcal{H}\sub{\s}}), B \mapsto P \Gamma_\xi^{\mathcal{E}}(B) P$. It is easily verified that ${\mathcal{I}}_P^*(A) = \Gamma_{\xi,P}^{\mathcal{E}}(A \otimes \mathds{1}\sub{\aa})$ for all $A \in {\mathcal{L}}(P{\mathcal{H}\sub{\s}})$, and $ P \mathsf{E}(x) P = \Gamma_{\xi,P}^{\mathcal{E}}(\mathds{1}\sub{\s} \otimes \mathsf{Z}(x))$. Note that by the same arguments as item (i) of \lemref{lemma:fav1}, it can easily be shown that $\Gamma_{\xi,P}^{\mathcal{E}}(B) = \Gamma_{\xi,P}^{\mathcal{E}}((P \otimes \mathds{1}\sub{\aa})B (P \otimes \mathds{1}\sub{\aa}))$ holds for all $B \in {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})$.
It follows that $\Gamma_{\xi, P}^{\mathcal{E}}$ is unital when restricted to ${\mathcal{L}}(P{\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}}) \to {\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, and we may equivalently write $ P \mathsf{E}(x) P = \Gamma_{\xi,P}^{\mathcal{E}}(P \otimes \mathsf{Z}(x))$. By \propref{prop:fav2}, ${\mathcal{F}}({\mathcal{I}}_P^*) $ is a von Neumann algebra in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, and so for all $A \in {\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, if $A \in {\mathcal{F}}({\mathcal{I}}_P^*)$, then $A^*A, A A^* \in {\mathcal{F}}({\mathcal{I}}_P^*)$. By \corref{corollary:multiplicability} it holds that for all $A \in {\mathcal{F}}({\mathcal{I}}_P^*)$ and $B \in {\mathcal{L}}(P {\mathcal{H}\sub{\s}} \otimes {\mathcal{H}\sub{\aa}})$ we have $ A\Gamma_{\xi, P}^{\mathcal{E}} (B) = \Gamma_{\xi, P}^{\mathcal{E}} ((A \otimes \mathds{1}\sub{\aa})B)$ and $ \Gamma_{\xi, P}^{\mathcal{E}} (B)A = \Gamma_{\xi, P}^{\mathcal{E}} (B(A \otimes \mathds{1}\sub{\aa}))$. It follows that for all $A \in {\mathcal{F}}({\mathcal{I}}_P^*)$ and $x\in {\mathcal{X}}$ we have \begin{align*}[ A, P \mathsf{E}(x) P] = [A, \Gamma_{\xi,P}^{\mathcal{E}}(P \otimes \mathsf{Z}(x))]= \Gamma_{\xi,P}^{\mathcal{E}}([A \otimes \mathds{1}\sub{\aa}, P \otimes \mathsf{Z}(x)]) = \mathds{O}, \end{align*} and so $ {\mathcal{F}}({\mathcal{I}}^*_P) \subset (P \mathsf{E} P) '$. Now let us assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. This implies that $P \Gamma_\xi(N) P = \Gamma_{\xi,P}^{\mathcal{E}}(N) = \Gamma_{\xi,P}^{\mathcal{E}}((P \otimes \mathds{1}\sub{\aa})N(P \otimes \mathds{1}\sub{\aa}))$, and so \begin{eqnarray*} P {N\sub{\s}} P + \mathrm{tr}[{N\sub{\aa}} \xi] P = \Gamma_{\xi,P}^{\mathcal{E}}(P {N\sub{\s}} P \otimes \mathds{1}\sub{\aa}) + \Gamma_{\xi,P}^{\mathcal{E}}(P \otimes {N\sub{\aa}}). \end{eqnarray*} Since $ {\mathcal{F}}({\mathcal{I}}^*_P)$ is a von Neumann algebra in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, then by \corref{corollary:multiplicability} and the arguments above, it follows that for all $A \in {\mathcal{F}}({\mathcal{I}}_P^*)$ we have \begin{align*} [A, P {N\sub{\s}} P] &= [ A, \Gamma_{\xi,P}^{\mathcal{E}}(P {N\sub{\s}} P \otimes \mathds{1}\sub{\aa})] + [ A,\Gamma_{\xi,P}^{\mathcal{E}}(P \otimes {N\sub{\aa}})] \nonumber \\ & = \Gamma_{\xi,P}^{\mathcal{E}}([A \otimes \mathds{1}\sub{\aa}, P{N\sub{\s}} P \otimes \mathds{1}\sub{\aa}]) + \Gamma_{\xi,P}^{\mathcal{E}}([A \otimes \mathds{1}\sub{\aa}, P \otimes {N\sub{\aa}} ]) \nonumber \\ & = \Gamma_{\xi,P}^{\mathcal{E}}([A, P {N\sub{\s}} P]\otimes \mathds{1}\sub{\aa}) = {\mathcal{I}}^*_P([A, P {N\sub{\s}} P]). \end{align*} We thus have $A \in {\mathcal{F}}({\mathcal{I}}^*_P) \implies [A, P{N\sub{\s}} P] \in {\mathcal{F}}({\mathcal{I}}^*_P)$, and since $ {\mathcal{F}}({\mathcal{I}}^*_P) \subset (P \mathsf{E} P) '$, it follows that $[A, P{N\sub{\s}} P] \in (P \mathsf{E} P)'$. \end{proof} We are now ready to generalise \thmref{theorem:faithful-fixedpoint-WAY}. \begin{theorem}\label{theorem:non-faithful-fixedpoint-WAY} Let ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. 
The following hold: \begin{enumerate}[(i)] \item ${\mathcal{I}}$ does not disturb an observable $\mathsf{F}$ only if $P \mathsf{F} P$ commutes with both $P \mathsf{E} P$ and $\Delta P {N\sub{\s}} P := {\mathcal{I}}_P^*(P {N\sub{\s}} P) - P {N\sub{\s}} P$, with $[P \mathsf{F} P, P {N\sub{\s}} P] = \mathds{O}$ being sufficient for $[P \mathsf{F} P, \Delta P {N\sub{\s}} P] = \mathds{O}$. \item ${\mathcal{I}}$ is a measurement of the first kind only if $P\mathsf{E} P$ is commutative and commutes with $P {N\sub{\s}} P$. \item ${\mathcal{I}}$ is repeatable only if $P \mathsf{E} P$ is sharp and commutes with $P {N\sub{\s}} P$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[(i):] \item Non-disturbance of $\mathsf{F}$ implies that $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, and it follows from \propref{prop:fav2} that $P \mathsf{F} P \subset {\mathcal{F}}({\mathcal{I}}^*_P)$. By \lemref{lemma:non-faithful-fixedpoint-commutant}, it holds that $P \mathsf{F} P$ must commute with $P \mathsf{E} P$, and that $[ P \mathsf{F}(y) P, P{N\sub{\s}} P] = {\mathcal{I}}_P^*([ P \mathsf{F}(y) P, P{N\sub{\s}} P])$. The second condition is guaranteed to hold if $[ P \mathsf{F}(y) P, P{N\sub{\s}} P]=\mathds{O}$. Given that ${\mathcal{F}}({\mathcal{I}}^*_P)$ is a von Neumann algebra, we have ${\mathcal{I}}_P^*((P \mathsf{F}(y) P)^2) = {\mathcal{I}}_P^*(P \mathsf{F}(y) P)^2 = (P \mathsf{F}(y) P)^2$. By \corref{corollary:multiplicability} it follows that $[ P \mathsf{F}(y) P, P{N\sub{\s}} P] = [ P \mathsf{F}(y) P, {\mathcal{I}}_P^*(P{N\sub{\s}} P)]$, and so $P \mathsf{F} P$ must commute with $\Delta P {N\sub{\s}} P$. \item If ${\mathcal{I}}$ is a measurement of the first kind, then $\mathsf{E} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, and it follows from \propref{prop:fav2} that $P \mathsf{E} P \subset {\mathcal{F}}({\mathcal{I}}^*_P)$. By \lemref{lemma:non-faithful-fixedpoint-commutant} $P \mathsf{E} P $ must be commutative, and since ${\mathcal{F}}({\mathcal{I}}^*_P)$ is a von Neumann algebra, then for all spectral projections $R \in {\mathcal{L}}(P {\mathcal{H}\sub{\s}})$ of $P \mathsf{E}(x) P$ it must hold that $[R, P {N\sub{\s}} P]$ commutes with $P \mathsf{E} P$. But this implies that $[[R, P {N\sub{\s}} P], R^\perp]=\mathds{O}$, where we define $R^\perp := P - R$. As such, given that $P R = R P = R$, it holds that $[R {N\sub{\s}} P -P {N\sub{\s}} R,R^\perp] = R {N\sub{\s}} R^\perp + R^\perp {N\sub{\s}} R = \mathds{O}$. Multiplying from the left by $R$, we thus have $R {N\sub{\s}} R^\perp = R {N\sub{\s}} (P - R) = \mathds{O}$, and so $R {N\sub{\s}} P = R {N\sub{\s}} R$. Since the right hand side is self-adjoint, and $R {N\sub{\s}} P = R P {N\sub{\s}} P$, it follows that $[R, P {N\sub{\s}} P ]=\mathds{O}$. Since this relation holds for all spectral projections $R$ of all effects of $P \mathsf{E} P$, it follows that $P \mathsf{E} P$ must commute with $P {N\sub{\s}} P$. \item Commutativity of $P \mathsf{E} P$ with $P {N\sub{\s}} P$ follows from (ii) and the fact that repeatability implies first-kindness. Sharpness of $P \mathsf{E} P$ follows from the fact that the fixed points of a repeatable instrument can only have support in the eigenvalue-1 eigenspaces of $\mathsf{E}$, as shown in \propref{prop:repeatable-instrument-identity}.
\end{enumerate} \end{proof} Note that if $P=\mathds{1}\sub{\s}$, implying that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ contains a faithful state so that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a von Neumann algebra, then the above theorem reduces to \thmref{theorem:faithful-fixedpoint-WAY}. Interestingly, in the case of qubits such an equivalence will always hold, even if ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ does not contain a faithful state. We demonstrate this by an alternative proof for Proposition 6 in Ref. \cite{Heinosaari2010}. \begin{corollary}\label{corollary:first-kind-WAY-qubit} If $\dim({\mathcal{H}\sub{\s}}) = 2$, then \thmref{theorem:non-faithful-fixedpoint-WAY} reduces to \thmref{theorem:faithful-fixedpoint-WAY}. \end{corollary} \begin{proof} Let us consider the minimal support projection $P$ on the fixed-point set ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$. As all channels must have at least one fixed state, then when $\dim({\mathcal{H}\sub{\s}})=2$ it holds that, for any instrument ${\mathcal{I}}$, either $P= \mathds{1}\sub{\s}$ or $P$ is a rank-1 projection. If $P=\mathds{1}\sub{\s}$ then ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ contains a faithful state, so that by \lemref{lemma:Lindblad} ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a von Neumann algebra, and \thmref{theorem:non-faithful-fixedpoint-WAY} reduces to \thmref{theorem:faithful-fixedpoint-WAY}. Now assume that $ P $ is a rank-1 projection, so that for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}})$ it holds that $P A P = \lambda P$, with some $\lambda \in \mathds{C}$. Recall from \lemref{lemma:Phi-av-prop} that ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*) = {\mathcal{F}}({\mathcal{I}}_\mathrm{av}^*)$. By item (ii) of \lemref{lemma:P-properties}, for all $A\in {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ it holds that $A = {\mathcal{I}}_\mathrm{av}^*(A)={\mathcal{I}}_\mathrm{av}^*(P A P) = \lambda {\mathcal{I}}_\mathrm{av}^*( P) = \lambda \mathds{1}\sub{\s}$, and so ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ is a trivial von Neumann algebra containing only operators proportional to the identity. In this case, the only non-disturbed observables are trivial, and will clearly commute with all of ${\mathcal{L}}({\mathcal{H}\sub{\s}})$. In such a case we may simply replace $P$ with $\mathds{1}\sub{\s}$ in items (i)-(iii) of \thmref{theorem:non-faithful-fixedpoint-WAY}, so that it reduces to \thmref{theorem:faithful-fixedpoint-WAY}. \end{proof} \subsection{Non-disturbance and distinguishability} Here, we present some novel results regarding the structure of non-disturbing measurements that go beyond those in the preceding sections, indicating an intimate relationship between non-disturbance and distinguishability. These results hold for general instruments, are independent of conservation laws, and do not explicitly depend on the support projection $P$ on the fixed-point set of the specific measurement channel ${\mathcal{I}}_{\mathcal{X}}$. First, let us show that if an instrument ${\mathcal{I}}$ does not disturb a non-trivial observable, then there exists a family of distinguishable states that remain distinguishable after a non-selective measurement by ${\mathcal{I}}$. \begin{prop}[Nondisturbance implies distinguishability]\label{prop:nondisturbance-distinguishability} Consider an instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{I}}$ does not disturb a non-trivial observable $\mathsf{F}$. 
Then there exists a norm-1 observable $\mathsf{G}:=\{\mathsf{G}(z) : z\in {\mathcal{Z}}\}$ acting in ${\mathcal{H}\sub{\s}}$ that is non-disturbed by ${\mathcal{I}}$, so that for every family of states $\{\rho_z : z\in {\mathcal{Z}}\}$ that are perfectly distinguishable by a $\mathsf{G}$ measurement, $\{{\mathcal{I}}_{\mathcal{X}}(\rho_z) : z \in {\mathcal{Z}}\}$ remain perfectly distinguishable by a $\mathsf{G}$ measurement. Moreover, if ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ contains a faithful state, then $\mathsf{G}$ can be taken as a sharp observable. \end{prop} \begin{proof} Suppose that a non-trivial observable $\mathsf{F}:= \{\mathsf{F}(y) : y \in {\mathcal{Y}}\}$ is non-disturbed by ${\mathcal{I}}$, i.e., $\mathsf{F} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) = {\mathcal{F}}({\mathcal{I}}_\mathrm{av}^*)$. Given that ${\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P}) = P {\mathcal{F}}({\mathcal{I}}^*_\mathrm{av}) P $ (see \propref{prop:fav2}), this implies that $P \mathsf{F}(y) P \in {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})$. That $\mathsf{F}$ is non-trivial implies that there must be a $y$ for which $P \mathsf{F}(y) P$ is not proportional to $P$: if this were not the case, every $P \mathsf{F}(y)P$ could be written as $P\mathsf{F}(y)P=c_y P$ with some $c_y\geqslant 0$, which would imply that $\mathsf{F}(y)={\mathcal{I}}^*_\mathrm{av}(\mathsf{F}(y)) = {\mathcal{I}}_\mathrm{av}^*(P \mathsf{F}(y) P)=c_y {\mathcal{I}}_\mathrm{av}^*(P) = c_y \mathds{1}\sub{\s}$, and so $\mathsf{F}$ would be a trivial observable. Therefore ${\mathcal{F}}({\mathcal{I}}^*_{ \mathrm{av}, P}) = P{\mathcal{F}}({\mathcal{I}}^*_\mathrm{av})P$ is a nontrivial von Neumann algebra in ${\mathcal{L}}(P {\mathcal{H}\sub{\s}})$, and there exists a family of projections $\mathsf{R}:= \{\mathsf{R}(z):z \in {\mathcal{Z}}\}\subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})$ satisfying $\mathsf{R}(z) \mathsf{R}(y) = \delta_{z,y} \mathsf{R}(z)$ and $\sum_z \mathsf{R}(z) = P$. We may consider $\mathsf{R}$ as a sharp observable acting in $P {\mathcal{H}\sub{\s}}$. Using $\mathsf{R}$, we may define a (generally unsharp) observable $\mathsf{G}$ acting in ${\mathcal{H}\sub{\s}}$ by \begin{align*} \mathsf{G} := \{\mathsf{G}(z) = {\mathcal{I}}^*_\mathrm{av}(\mathsf{R}(z)): z \in {\mathcal{Z}}\}, \end{align*} where $\sum_z \mathsf{G}(z) = {\mathcal{I}}_\mathrm{av}^*(P) = \mathds{1}\sub{\s}$. Given that ${\mathcal{I}}_{\mathcal{X}}^* \circ {\mathcal{I}}_\mathrm{av}^* = {\mathcal{I}}_\mathrm{av}^*$ (see \lemref{lemma:Phi-av-prop}), it follows that $\mathsf{G} \subset {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$, i.e., $\mathsf{G}$ is non-disturbed by ${\mathcal{I}}$. Moreover, since $ P \mathsf{G}(z) P = {\mathcal{I}}^*_{\mathrm{av},P}(\mathsf{R}(z))= \mathsf{R}(z)$ holds, each (non-zero) effect of $\mathsf{G}$ has at least one eigenvector with eigenvalue $1$, and so $\| \mathsf{G}(z)\|=1$. Moreover, if ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ contains a faithful state, then $P=\mathds{1}\sub{\s}$, and so we have $\mathsf{G}(z) = P \mathsf{G}(z) P = \mathsf{R}(z)$, implying that $\mathsf{G} \equiv \mathsf{R}$ is sharp. Now let us note that the family of states $\{\rho_z \}$ are perfectly distinguishable given a measurement of $\mathsf{G}$ if and only if $\rho_z = \P(z) \rho_z \P(z)$, where $\P(z) \geqslant \mathsf{R}(z)$ projects onto the eigenvalue-1 eigenspace of $\mathsf{G}(z)$. In such a case it trivially holds that $\mathrm{tr}[\mathsf{G}(y) \rho_z] = \delta_{z,y}$.
But since $\mathsf{G} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$, we also have \begin{align*} \mathrm{tr}[\mathsf{G}(y) {\mathcal{I}}_{\mathcal{X}}(\rho_z)] = \mathrm{tr}[{\mathcal{I}}_{\mathcal{X}}^*(\mathsf{G}(y))\rho_z] = \mathrm{tr}[\mathsf{G}(y) \rho_z] = \delta_{z,y}, \end{align*} and so $\{{\mathcal{I}}_{\mathcal{X}}(\rho_z)\}$ continues to be perfectly distinguishable by a $\mathsf{G}$ measurement. \end{proof} In the special case where ${\mathcal{I}}$ is a measurement of the first kind, we may strengthen the above result as follows: \begin{theorem}\label{thm:first-kind-post-processing-norm-1} Let ${\mathcal{I}}$ be an instrument compatible with a non-trivial observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}}$. If ${\mathcal{I}}$ is a measurement of the first kind, then $\mathsf{E}$ is described by a classical post-processing of a norm-1 observable $\mathsf{G} :=\{\mathsf{G}(z) : z \in {\mathcal{Z}}\}$ with properties given in \propref{prop:nondisturbance-distinguishability}, i.e., \begin{align}\label{eq:post-processing-first-kind} \mathsf{E}(x) = \sum_z p(x|z) \mathsf{G}(z), \end{align} where $\{p(x|z)\}$ is a family of non-negative numbers that satisfy $\sum_x p(x|z)=1$ for each $z$. \end{theorem} \begin{proof} Assume that the $\mathsf{E}$-instrument ${\mathcal{I}}$ is a measurement of the first kind, that is, $\mathsf{E} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}})$. It follows that $P \mathsf{E} P \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})$. In fact, we can show that $P \mathsf{E} P \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P}) \cap {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})'$. First, recall from \propref{prop:nondisturbance-distinguishability} that if $\mathsf{E}$ is non-trivial then there exists a family of projections $\mathsf{R} := \{\mathsf{R}(z) : z\in {\mathcal{Z}}\} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})$, satisfying $\mathsf{R}(z) \mathsf{R}(y) = \delta_{z,y} \mathsf{R}(z)$ and $\sum_z \mathsf{R}(z) = P$. If $P \mathsf{E} P \not\subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P}) \cap {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})'$, then $\mathsf{R}$ can be chosen so that $[P\mathsf{E}(x)P, \mathsf{R}(z)]\neq \mathds{O}$ for some $x$ and $z$. But note that $\mathsf{R} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P}) = P {\mathcal{F}}({\mathcal{I}}^*_{\mathcal{X}}) P$ implies that the observable $\{P{\mathcal{I}}^*_x(\mathsf{R}(z))P, P {\mathcal{I}}^*_x(\mathds{1}\sub{\s} - \mathsf{R}(z)) P : x\in {\mathcal{X}}\}$ is a joint measurement for $P \mathsf{E} P$ and the sharp observable $\{\mathsf{R}(z), P - \mathsf{R}(z)\}$. By compatibility, it follows that $[P\mathsf{E}(x)P, \mathsf{R}(z)]= \mathds{O}$ must hold for all $x$ and $z$, and so $P \mathsf{E} P$ must be contained in the Abelian algebra ${\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P}) \cap {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})'$. Consequently, $\mathsf{R} \subset {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P}) \cap {\mathcal{F}}({\mathcal{I}}^*_{\mathrm{av},P})'$ can be chosen so as to simultaneously diagonalise all $P \mathsf{E}(x) P$, that is, we may write $P \mathsf{E}(x) P=\sum_z p(x|z) \mathsf{R}(z)$. Recalling that $\mathsf{E}(x) = {\mathcal{I}}_\mathrm{av}^*(\mathsf{E}(x)) = {\mathcal{I}}_\mathrm{av}^*(P \mathsf{E}(x) P)$, then defining the observable $\mathsf{G}$ by $\mathsf{G}(z) = {\mathcal{I}}^*_\mathrm{av}(\mathsf{R}(z))$ gives us \eq{eq:post-processing-first-kind}.
As in \propref{prop:nondisturbance-distinguishability}, it holds that $\mathsf{G} \subset {\mathcal{F}}({\mathcal{I}}_{\mathcal{X}}^*)$ and that $\mathsf{G}$ is a norm-1 observable; if ${\mathcal{F}}({\mathcal{I}}_{\mathcal{X}})$ contains a faithful state then $\mathsf{G}$ is also sharp, and if $\{\rho_z\}$ are perfectly distinguishable by a $\mathsf{G}$ measurement then so are $\{{\mathcal{I}}_{\mathcal{X}}(\rho_z)\}$. \end{proof} Finally, we present the following implication of the above theorem: \begin{corollary}\label{corollary:first-kind-fidelity-norm} Let ${\mathcal{I}}$ be an instrument compatible with a non-trivial observable $\mathsf{E}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{I}}$ is a measurement of the first kind. For any outcome $x$ associated with a non-trivial effect $\mathsf{E}(x)$, and for any pair of unit vectors $\psi, \phi \in {\mathcal{H}\sub{\s}}$ satisfying $ \mathsf{E}(x)\psi =\| \mathsf{E}(x)\| \psi$ and $(\mathds{1}\sub{\s} - \mathsf{E}(x))\phi =\| \mathds{1}\sub{\s} - \mathsf{E}(x)\| \phi$, respectively, it holds that $\psi$ and $\phi$ are orthogonal, and that $F({\mathcal{I}}_{\mathcal{X}}(\pr{\psi}), {\mathcal{I}}_{\mathcal{X}}(\pr{\phi}))=0$, where $F(\rho, \sigma):= \mathrm{tr}[\sqrt{\sqrt{\rho} \sigma \sqrt{\rho}}]$ is the fidelity between states $\rho$ and $\sigma$. \end{corollary} \begin{proof} For each outcome $x$, we may coarse-grain $\mathsf{E}$ into a binary observable $\{\mathsf{E}(x), \mathsf{E}(\overline{x}) := \mathds{1}\sub{\s} - \mathsf{E}(x)\}$. By \thmref{thm:first-kind-post-processing-norm-1}, first-kindness implies that $\mathsf{E}(x) = \sum_z p(x|z) \mathsf{G}(z)$ and $\mathsf{E}(\overline{x}) = \sum_z p(\overline{x}|z) \mathsf{G}(z)$, with $p(\overline{x}|z) := 1 - p(x|z)$, where $\{\mathsf{G}(z) : z \in {\mathcal{Z}}\}$ is a norm-1 observable with properties given in \propref{prop:nondisturbance-distinguishability}, while $\{p(x|z)\}$ is a family of non-negative numbers satisfying $\sum_x p(x|z)=1$ for each $z$. Now let us define $p_{\max}(x) := \max\{p(x|z)\}$ and $p_{\min}(x) := \min\{p(x|z)\}$ as the maximum and minimum values of the set $\{p(x|z)\}$. We may thus define the sets $Z_{\max} := \{z \in {\mathcal{Z}}: p(x|z) = p_{\max}(x)\}$ and $Z_{\min} := \{z \in {\mathcal{Z}}: p(x|z) = p_{\min}(x)\}$. Using such sets, we may define $\mathsf{G}(Z_{\max}) := \sum_{z\in Z_{\max}} \mathsf{G}(z)$ and $\mathsf{G}(Z_{\min}) := \sum_{z\in Z_{\min}} \mathsf{G}(z)$. Since $\mathsf{G}$ is norm-1, we may also define $\P(Z_{\max}) := \sum_{z\in Z_{\max}} \P(z)$ and $\P(Z_{\min}) := \sum_{z\in Z_{\min}} \P(z)$, where $\P(z)$ is the projection onto the eigenvalue-1 eigenspace of $\mathsf{G}(z)$. Since $\mathsf{E}(x)$ is assumed to be non-trivial, it must hold that $Z_{\max} \cap Z_{\min} = \emptyset$: if this were not so, all $p(x|z)$ would be the same, in which case $\mathsf{E}(x) \propto \mathds{1}\sub{\s}$. Consequently, $\P(Z_{\max}) \P(Z_{\min}) = \mathds{O}$. Now let us note that \begin{align*} \| \mathsf{E}(x) \| = \sup_{\|\psi\|=1} \<\psi| \mathsf{E}(x) \psi\> = \sup_{\|\psi\|=1} \sum_{z} p(x|z) \<\psi| \mathsf{G}(z) \psi\>. \end{align*} We may now show that a unit vector $\psi$ satisfies $\mathsf{E}(x) \psi = \| \mathsf{E}(x) \| \psi$ if and only if $\P(Z_{\max}) \psi = \psi$. Let us first prove the only if statement.
For any unit vector $\psi$, it holds that $\<\psi| \mathsf{E}(x) \psi\> \leqslant p_{\max}(x)$, which follows from the fact that the $p(x|z)$ are non-negative numbers and that $\{\<\psi| \mathsf{G}(z) \psi\>\}$ is a probability distribution, with the upper bound being saturated when $\<\psi| \mathsf{G}(Z_{\max}) \psi\> = 1$. But this in turn is satisfied only if $\<\psi| \P(Z_{\max}) \psi\> = 1$, in which case $\P(Z_{\max}) \psi = \psi$. As such, it follows that $\|\mathsf{E}(x)\| = p_{\max}(x)$, and the unit vector $\psi$ satisfies $\mathsf{E}(x) \psi = \| \mathsf{E}(x)\| \psi$ only if $\P(Z_{\max}) \psi = \psi$. The if statement is trivial. By arguments similar to those above, we may show that $\|\mathds{1}\sub{\s} - \mathsf{E}(x) \| = \| \mathsf{E}(\overline{x}) \| = p_{\max}(\overline{x}) =1 - p_{\min}(x)$, and that the unit vector $\phi$ satisfies $(\mathds{1}\sub{\s} - \mathsf{E}(x))\phi =\| \mathds{1}\sub{\s} - \mathsf{E}(x)\| \phi$ if and only if $\P(Z_{\min})\phi = \phi$. Since $\mathsf{E}(x)$ is non-trivial, then as argued above $\psi$ and $\phi$ are orthogonal, and perfectly distinguishable by a $\mathsf{G}$ measurement. By \propref{prop:nondisturbance-distinguishability} it holds that ${\mathcal{I}}_{\mathcal{X}}(\pr{\psi})$ and ${\mathcal{I}}_{\mathcal{X}}(\pr{\phi})$ are also perfectly distinguishable by a $\mathsf{G}$ measurement, that is, $F({\mathcal{I}}_{\mathcal{X}}(\pr{\psi}), {\mathcal{I}}_{\mathcal{X}}(\pr{\phi}))=0$. \end{proof} \subsection{Measurements of the first kind, distinguishability, and the Wigner-Araki-Yanase theorem} We shall now use the results in the preceding section to obtain quantitative bounds that complement those of \thmref{theorem:quantitative-bound-disturbance-WAY} and \thmref{theorem:Generalized-WAY}, providing further necessary conditions that must be fulfilled for an observable if it is to admit a first-kind or repeatable measurement in the presence of a conservation law. To this end, let us first provide a generalisation of Theorem 2 in Ref. \cite{Miyadera2006a}, which we shall use in the sequel: \begin{lemma}\label{lemma:WAY-distinguishability-inequality} Let ${\mathcal{M}}:= ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ be a measurement scheme for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = {N\sub{\s}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s} \otimes {N\sub{\aa}}$ on average. For any pair of orthogonal unit vectors $\psi, \phi \in {\mathcal{H}\sub{\s}}$, the following holds: \begin{align}\label{eq:WAY-distinguishability-inequality} |\<\psi| {N\sub{\s}} \phi\>| \leqslant \| {N\sub{\aa}} \| F({\mathcal{I}}_{\mathcal{X}}(\pr{\psi}), {\mathcal{I}}_{\mathcal{X}}(\pr{\phi})) + \|{N\sub{\s}}\| F(\Lambda(\pr{\psi}), \Lambda(\pr{\phi})), \end{align} where $\Lambda$ is the conjugate channel to ${\mathcal{I}}_{\mathcal{X}}$ defined in \eq{eq:conjugate-channel}, and $F(\rho, \sigma)$ is the fidelity between states $\rho$ and $\sigma$. \end{lemma} \begin{proof} Let us consider the augmented Hilbert space ${\mathcal{H}\sub{\aa}} \otimes {\mathcal{K}}$ so that $\xi\in {\mathcal{S}}({\mathcal{H}\sub{\aa}})$ admits the purification $\xi = \mathrm{tr}\sub{{\mathcal{K}}}[\pr{\varphi}]$, with the unit vector $\varphi \in {\mathcal{H}\sub{\aa}} \otimes {\mathcal{K}}$.
Moreover, if ${\mathcal{K}}$ is sufficiently large, then by Stinespring's dilation theorem the channel ${\mathcal{E}}^*$ can be expressed as ${\mathcal{E}}^*(A) = V^*(A \otimes \mathds{1}\sub{{\mathcal{K}}})V$ for all $A \in {\mathcal{L}}({\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}})$, where $V: {\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}} \to {\mathcal{H}\sub{\s}}\otimes {\mathcal{H}\sub{\aa}} \otimes {\mathcal{K}}$ is an isometry. By additivity of $N$, and orthogonality of $\psi, \phi$, we have \begin{align*} \<\psi \otimes \varphi| N \phi \otimes \varphi\> = \<\psi| {N\sub{\s}} \phi\>. \end{align*} On the other hand, average conservation of $N$ by ${\mathcal{E}}$ implies that \begin{align*} N = {\mathcal{E}}^*(N) = V^*({N\sub{\s}} \otimes \mathds{1}\sub{\aa} \otimes \mathds{1}\sub{{\mathcal{K}}} )V + V^*(\mathds{1}\sub{\s} \otimes {N\sub{\aa}} \otimes \mathds{1}\sub{{\mathcal{K}}} )V. \end{align*} We therefore have \begin{align}\label{eq:inner-prod-WAY-dist-1} |\<\psi| {N\sub{\s}} \phi\>| \leqslant |\<\psi\otimes \varphi|V^*({N\sub{\s}} \otimes \mathds{1}\sub{\aa} \otimes \mathds{1}\sub{{\mathcal{K}}} )V\phi \otimes \varphi\>| + |\<\psi \otimes \varphi|V^*(\mathds{1}\sub{\s} \otimes {N\sub{\aa}} \otimes \mathds{1}\sub{{\mathcal{K}}} )V \phi \otimes \varphi\>|. \end{align} For any observable $\mathsf{G}:= \{\mathsf{G}(z)\}$ acting in ${\mathcal{H}\sub{\aa}}$, we may write \begin{align*} &|\<\psi\otimes \varphi|V^*({N\sub{\s}} \otimes \mathds{1}\sub{\aa} \otimes \mathds{1}\sub{{\mathcal{K}}} )V\phi \otimes \varphi\>| \nonumber \\ & \qquad = |\sum_z \<\psi\otimes \varphi|V^*({N\sub{\s}} \otimes \mathsf{G}(z) \otimes \mathds{1}\sub{{\mathcal{K}}} )V\phi \otimes \varphi\>| \nonumber \\ &\qquad \leqslant \|{N\sub{\s}}\| \sum_z |\<\psi\otimes \varphi|V^*(\mathds{1}\sub{\s} \otimes \mathsf{G}(z) \otimes \mathds{1}\sub{{\mathcal{K}}} )V\psi \otimes \varphi\>|^{\frac{1}{2}} |\<\phi\otimes \varphi|V^*(\mathds{1}\sub{\s} \otimes \mathsf{G}(z) \otimes \mathds{1}\sub{{\mathcal{K}}} )V\phi \otimes \varphi\>|^{\frac{1}{2}} \nonumber \\ & \qquad = \|{N\sub{\s}}\| \sum_z \mathrm{tr}[\mathds{1}\sub{\s} \otimes \mathsf{G}(z) {\mathcal{E}}(\pr{\psi} \otimes \xi)]^{\frac{1}{2}} \mathrm{tr}[\mathds{1}\sub{\s} \otimes \mathsf{G}(z) {\mathcal{E}}(\pr{\phi} \otimes \xi)]^{\frac{1}{2}} \nonumber \\ & \qquad = \|{N\sub{\s}}\| \sum_z \mathrm{tr}[ \mathsf{G}(z) \Lambda(\pr{\psi})]^{\frac{1}{2}} \mathrm{tr}[\mathsf{G}(z) \Lambda(\pr{\phi})]^{\frac{1}{2}}. \end{align*} In the third line we have used the Cauchy-Schwarz inequality, after writing ${N\sub{\s}} \otimes \mathsf{G}(z) \otimes \mathds{1}\sub{{\mathcal{K}}} = (\mathds{1}\sub{\s} \otimes \sqrt{\mathsf{G}(z)} \otimes \mathds{1}\sub{{\mathcal{K}}})({N\sub{\s}} \otimes \mathds{1}\sub{\aa} \otimes \mathds{1}\sub{{\mathcal{K}}})(\mathds{1}\sub{\s} \otimes \sqrt{\mathsf{G}(z)} \otimes \mathds{1}\sub{{\mathcal{K}}})$; in the fourth line we used Stinespring's dilation theorem together with the fact that $\varphi$ is a purification of $\xi$, and in the final line we use the definitions of the partial trace and the conjugate channel $\Lambda$. Now, note that the fidelity satisfies $F(\rho, \sigma) = \min_{\mathsf{G}} \sum_z \mathrm{tr}[\mathsf{G}(z) \rho]^{\frac{1}{2}}\mathrm{tr}[\mathsf{G}(z) \sigma]^{\frac{1}{2}}$, where the minimisation is over all observables. Therefore, choosing $\mathsf{G}$ so as to obtain the fidelity, we have \begin{align*} |\<\psi\otimes \varphi|V^*({N\sub{\s}} \otimes \mathds{1}\sub{\aa} \otimes \mathds{1}\sub{{\mathcal{K}}} )V\phi \otimes \varphi\>| \leqslant \|{N\sub{\s}}\| F(\Lambda(\pr{\psi}), \Lambda(\pr{\phi})).
\end{align*} Using similar steps, we may also write \begin{align*} |\<\psi\otimes \varphi|V^*(\mathds{1}\sub{\s} \otimes {N\sub{\aa}} \otimes \mathds{1}\sub{{\mathcal{K}}} )V\phi \otimes \varphi\>| \leqslant \|{N\sub{\aa}}\| F({\mathcal{I}}_{\mathcal{X}}(\pr{\psi}), {\mathcal{I}}_{\mathcal{X}}(\pr{\phi})). \end{align*} By \eq{eq:inner-prod-WAY-dist-1}, we thus obtain \eq{eq:WAY-distinguishability-inequality}. \end{proof} We are now ready to prove our main result in this section: \begin{theorem}\label{theorem:first-kind-WAY-distinguishability} Consider a measurement scheme ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ for a nontrivial observable $\mathsf{E}$ with the instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$. Assume that ${\mathcal{I}}$ is a measurement of the first kind, and that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. For each outcome $x$ associated with a non-trivial effect $\mathsf{E}(x)$, let $\mathcal{K}_{\max}(x)$ and $\mathcal{K}_{\min}(x)$ be subspaces of ${\mathcal{H}\sub{\s}}$ defined by \begin{align*} &\mathcal{K}_{\max}(x):=\{\psi \in {\mathcal{H}\sub{\s}} : \ \mathsf{E}(x)\psi =\| \mathsf{E}(x)\| \psi\}, &\mathcal{K}_{\min}(x):=\{\phi \in {\mathcal{H}\sub{\s}}:\ (\mathds{1}\sub{\s} - \mathsf{E}(x))\phi =\| \mathds{1}\sub{\s} - \mathsf{E}(x)\| \phi\}. \end{align*} For all unit vectors $\psi \in \mathcal{K}_{\max}(x)$ and $\phi \in \mathcal{K}_{\min}(x)$, it holds that \begin{align}\label{eq:distinuishability-WAY-bound} |\langle \psi |{N\sub{\s}} \phi\rangle | \leqslant \|{N\sub{\s}}\| \left( \| \mathsf{E}(x)\|^{\frac{1}{2}} (1 - \|\mathds{1}\sub{\s} - \mathsf{E}(x) \|)^{\frac{1}{2}} + (1 - \| \mathsf{E}(x) \|)^{\frac{1}{2}} \|\mathds{1}\sub{\s} - \mathsf{E}(x) \|^{\frac{1}{2}}\right). \end{align} \end{theorem} \begin{proof} For each outcome $x$ associated with a non-trivial effect $\mathsf{E}(x)$, we may coarse-grain $\mathsf{E}$ into a binary observable $\{\mathsf{E}(x), \mathsf{E}(\overline{x}) := \mathds{1}\sub{\s} - \mathsf{E}(x)\}$. By \corref{corollary:first-kind-fidelity-norm}, it holds that for any unit vectors $\psi \in \mathcal{K}_{\max}(x)$ and $\phi \in \mathcal{K}_{\min}(x)$, $\psi$ and $\phi$ are orthogonal, and $F({\mathcal{I}}_{\mathcal{X}}(\pr{\psi}), {\mathcal{I}}_{\mathcal{X}}(\pr{\phi}))=0$. As such, given the average conservation of $N$ by the interaction channel ${\mathcal{E}}$, \lemref{lemma:WAY-distinguishability-inequality} implies that the following inequality must hold: \begin{align*} |\langle \psi | {N\sub{\s}} \phi \rangle | &\leqslant \| {N\sub{\s}} \| F\left(\Lambda(\pr{\psi}), \Lambda(\pr{\phi})\right) \\ & \leqslant \| {N\sub{\s}} \| \sum_{a=x,\overline{x}} \mathrm{tr}[\mathsf{Z}(a) \Lambda(\pr{\psi})]^{\frac{1}{2}}\mathrm{tr}[\mathsf{Z}(a) \Lambda(\pr{\phi})]^{\frac{1}{2}} \\ & = \| {N\sub{\s}} \| \sum_{a=x,\overline{x}} \langle \psi |\mathsf{E}(a) \psi\rangle^{\frac{1}{2}} \langle \phi|\mathsf{E}(a) \phi\rangle^{\frac{1}{2}} \\ & = \|{N\sub{\s}}\| \left( \| \mathsf{E}(x)\|^{\frac{1}{2}} (1 - \|\mathds{1}\sub{\s} - \mathsf{E}(x) \|)^{\frac{1}{2}} + (1 - \| \mathsf{E}(x) \|)^{\frac{1}{2}} \|\mathds{1}\sub{\s} - \mathsf{E}(x) \|^{\frac{1}{2}}\right).
\end{align*}
The second line uses the fact that for any states $\rho, \sigma$, it holds that $F(\rho, \sigma) \leqslant \sum_a \mathrm{tr}[\mathsf{F}(a) \rho]^{\frac{1}{2}}\mathrm{tr}[\mathsf{F}(a) \sigma]^{\frac{1}{2}}$ for any observable $\mathsf{F}$. The third line uses the fact that $\Lambda$ is the conjugate channel to ${\mathcal{I}}_{\mathcal{X}}$ defined in \eq{eq:conjugate-channel}, and so it holds that $\mathrm{tr}[\mathsf{Z}(a)\Lambda(\rho)] = \mathrm{tr}[\mathsf{E}(a) \rho]$ for all $\rho$ and $a=x,\overline{x}$. To see how the final line is obtained, note that we have $\langle \psi |\mathsf{E}(x) \psi\rangle = \| \mathsf{E}(x)\|$ and $\langle \phi| (\mathds{1}\sub{\s} - \mathsf{E}(x) ) \phi\rangle = \| \mathds{1}\sub{\s} - \mathsf{E}(x) \|$ by construction. For the first term, i.e., $a = x$, we obtain $\langle \psi |\mathsf{E}(a) \psi\rangle = \| \mathsf{E}(x)\|$ and $\langle \phi|\mathsf{E}(a) \phi\rangle = \langle \phi|(\mathds{1}\sub{\s} - (\mathds{1}\sub{\s} - \mathsf{E}(x)) ) \phi\rangle = 1 - \langle \phi| (\mathds{1}\sub{\s} - \mathsf{E}(x) ) \phi\rangle = 1 - \| \mathds{1}\sub{\s} - \mathsf{E}(x) \|$. The second term for $a = \overline{x}$ is obtained in a similar manner.
\end{proof}

Let us note that if $\mathsf{E}$ commutes with ${N\sub{\s}}$, then \thmref{theorem:first-kind-WAY-distinguishability} imposes no restrictions on first-kindness. To see this, let us note that for any (possibly trivial) effect $\mathsf{E}(x)$, and for any unit vectors $\psi, \phi$ satisfying $\mathsf{E}(x)\psi = \|\mathsf{E}(x)\| \psi$ and $(\mathds{1}\sub{\s} - \mathsf{E}(x)) \phi = \|\mathds{1}\sub{\s} - \mathsf{E}(x)\| \phi$, it holds that $\<\psi| \mathsf{E}(x) {N\sub{\s}} \phi\> = \|\mathsf{E}(x)\| \<\psi| {N\sub{\s}} \phi\>$ and $\<\psi| {N\sub{\s}} \mathsf{E}(x) \phi\> = (1 - \|\mathds{1}\sub{\s} - \mathsf{E}(x)\|) \<\psi| {N\sub{\s}} \phi\>$. It follows that if $[\mathsf{E}(x),{N\sub{\s}}]= \mathds{O}$ then either (i) $\<\psi| {N\sub{\s}} \phi\> = 0$, or (ii) $ \|\mathsf{E}(x)\| + \|\mathds{1}\sub{\s} - \mathsf{E}(x)\|=1$. Condition (i) implies that the left-hand side of \eq{eq:distinuishability-WAY-bound} vanishes, and so no constraint is imposed. On the other hand, condition (ii) implies that $\|\mathsf{E}(x)\| + \|\mathds{1}\sub{\s} - \mathsf{E}(x)\| = 1 + (p_{\max}(x) - p_{\min}(x)) = 1$, where we recall from \corref{corollary:first-kind-fidelity-norm} that $p_{\max}(x) = \| \mathsf{E}(x)\|$ and $p_{\min}(x) = 1 - \| \mathds{1}\sub{\s} - \mathsf{E}(x)\|$ are the largest and smallest values from the set $\{p(x|z)\}$ given by \thmref{thm:first-kind-post-processing-norm-1}. Such equality is satisfied if and only if $p_{\max}(x) = p_{\min}(x) = \lambda$, in which case by \eq{eq:post-processing-first-kind} it follows that $\mathsf{E}(x) = \lambda \mathds{1}\sub{\s}$ is a trivial effect. But \eq{eq:distinuishability-WAY-bound} only applies to non-trivial effects, and so no constraints are imposed in such a case. On the other hand, if for any non-trivial $\mathsf{E}(x)$ not commuting with ${N\sub{\s}}$ it holds that $\<\psi| {N\sub{\s}} \phi\>\ne 0$ for some $\psi \in \mathcal{K}_{\max}(x)$ and $\phi \in \mathcal{K}_{\min}(x)$, then $\mathsf{E}$ admits a first-kind measurement only if either (a) $\|\mathsf{E}(x)\| < 1$ or (b) $\|\mathds{1}\sub{\s} - \mathsf{E}(x)\|<1$. This is so because if both (a) and (b) are violated, i.e., if $\|\mathsf{E}(x)\| = \|\mathds{1}\sub{\s} - \mathsf{E}(x)\| = 1$, then the upper bound of \eq{eq:distinuishability-WAY-bound} vanishes.
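To get a feel for the quantitative strength of \eq{eq:distinuishability-WAY-bound}, consider a minimal qubit illustration; the specific family of effects below is chosen by us for concreteness and does not appear elsewhere in the text. Let ${N\sub{\s}} = \sigma_x$, and consider the unsharp binary observable with effects $\mathsf{E}(x) = \frac{1}{2}(\mathds{1}\sub{\s} + \lambda \sigma_z)$ and $\mathsf{E}(\overline{x}) = \frac{1}{2}(\mathds{1}\sub{\s} - \lambda \sigma_z)$, where $\lambda \in (0,1]$ quantifies the sharpness. Here $\|\mathsf{E}(x)\| = \|\mathds{1}\sub{\s} - \mathsf{E}(x)\| = (1+\lambda)/2$, and $\mathcal{K}_{\max}(x)$ and $\mathcal{K}_{\min}(x)$ are spanned by the $\sigma_z$-eigenvectors $|0\>$ and $|1\>$, respectively. Since $|\<0|\sigma_x |1\>| = 1 = \|{N\sub{\s}}\|$, \eq{eq:distinuishability-WAY-bound} reads
\begin{align*}
1 \leqslant 2 \left(\frac{1+\lambda}{2}\right)^{\frac{1}{2}} \left(\frac{1-\lambda}{2}\right)^{\frac{1}{2}} = \sqrt{1 - \lambda^2},
\end{align*}
which fails for every $\lambda > 0$. Hence no observable in this family admits a first-kind measurement with an interaction conserving such an $N$ on average, however unsharp (but non-trivial) the observable is; note that this is a strictly stronger restriction than conditions (a) and (b) alone, which are satisfied whenever $\lambda < 1$.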
More generally, such an $\mathsf{E}$ admits a first-kind measurement only if no effect has both $1$ and $0$ as eigenvalues; in particular, at most one effect can have $1$ as an eigenvalue. This implies that such an observable must be highly unsharp. Moreover, we obtain the following result as an immediate consequence:

\begin{corollary}\label{corollary:repeatable-WAY-distinguishability}
Consider a measurement scheme ${\mathcal{M}} := ({\mathcal{H}\sub{\aa}}, \xi, {\mathcal{E}}, \mathsf{Z})$ for an $\mathsf{E}$-instrument ${\mathcal{I}}$ acting in ${\mathcal{H}\sub{\s}}$, and assume that ${\mathcal{E}}$ conserves an additive quantity $N = N\sub{{\mathcal{S}}} \otimes \mathds{1}\sub{\aa} + \mathds{1}\sub{\s}\otimes N\sub{\aa}$ on average. If ${\mathcal{I}}$ is repeatable, then
\begin{align*}
[\mathsf{E}, \P({\mathcal{X}}) {N\sub{\s}} \P({\mathcal{X}})] = \mathds{O},
\end{align*}
where we define $\P({\mathcal{X}}):= \sum_{x\in {\mathcal{X}}} \P(x)$, with $\P(x)$ the projection onto the eigenvalue-1 eigenspace of $\mathsf{E}(x)$.
\end{corollary}
\begin{proof}
By \propref{prop:repeatable-instrument-identity}, repeatability implies that $\mathsf{E}$ is a norm-1 observable, with $\{\P(x)\}$ mutually orthogonal projection operators that project onto the eigenvalue-1 eigenspaces of $\mathsf{E}$. For every $x$, define $\P(x)^\perp:= \P({\mathcal{X}}) - \P(x)$. It is trivial to verify that $\supp(\P(x)) \equiv {\mathcal{K}}_{\max}(x)$ and $\supp(\P(x)^\perp) \equiv {\mathcal{K}}_{\min}(x)$ as defined in \thmref{theorem:first-kind-WAY-distinguishability}. Any $\varphi \in \supp(\P({\mathcal{X}}))$ may be written as $\varphi = \alpha \psi + \beta \phi$, where $\psi \in {\mathcal{K}}_{\max}(x)$ and $\phi \in {\mathcal{K}}_{\min}(x)$ are unit vectors and $\alpha, \beta \in \mathds{C}$. Since repeatability implies first-kindness, and $\|\mathsf{E}(x)\|=1$ for all $x$, then by \thmref{theorem:first-kind-WAY-distinguishability} it follows that $\<\varphi| \P(x) {N\sub{\s}} \P(x)^\perp \varphi\> = \alpha^* \beta \<\psi | {N\sub{\s}} \phi\> = 0$ for all $\varphi \in \supp(\P({\mathcal{X}}))$. It follows that $ \P(x) {N\sub{\s}} \P(x)^\perp = \mathds{O} $, which implies that $\P(x) {N\sub{\s}} \P({\mathcal{X}}) = \P(x) {N\sub{\s}} \P(x)$. Since the right-hand side is self-adjoint, and $\P(x) = \P(x)\P({\mathcal{X}}) = \P({\mathcal{X}}) \P(x)$, it follows that $[\P(x), \P({\mathcal{X}}) {N\sub{\s}} \P({\mathcal{X}})]=\mathds{O}$. But since $\P(x) = \mathsf{E}(x) \P({\mathcal{X}}) = \P({\mathcal{X}}) \mathsf{E}(x)$, we have $[\mathsf{E}(x), \P({\mathcal{X}}) {N\sub{\s}} \P({\mathcal{X}})]=\mathds{O}$. This completes the proof.
\end{proof}

If $\mathsf{E}$ is sharp, then $\P({\mathcal{X}}) = \mathds{1}\sub{\s}$, in which case \corref{corollary:repeatable-WAY-distinguishability} reduces to the repeatability part of the original WAY theorem. When $\mathsf{E}$ is unsharp, however, so that $\P({\mathcal{X}}) < \mathds{1}\sub{\s}$, then the above result can be seen as a strengthening of our generalisation of WAY in \thmref{theorem:Generalized-WAY}, in the finite-dimensional setting, providing further necessary conditions for repeatability beyond unsharpness and coherence. Moreover, note that the condition for repeatability given by \thmref{theorem:non-faithful-fixedpoint-WAY} was that $P \mathsf{E} P$ must commute with $P{N\sub{\s}} P$, where $P$ is the minimal support projection on the fixed points of the $\mathsf{E}$-channel ${\mathcal{I}}_{\mathcal{X}}$.
Clearly, such $P$ is contingent on the specific repeatable instrument under consideration, whereas the condition given in \corref{corollary:repeatable-WAY-distinguishability} depends only on the observable, and is hence much more general. For example, consider the case where for each outcome $x$, the eigenvalue-1 eigenspace of $\mathsf{E}(x)$ has dimension larger than $1$. We may construct a repeatable instrument with operations satisfying ${\mathcal{I}}_x(\rho) = \mathrm{tr}[\mathsf{E}(x) \rho] \pr{\psi_x}$ for all states $\rho$ and outcomes $x$, with $\psi_x$ some specific eigenvalue-1 eigenstate of $\mathsf{E}(x)$. It is simple to verify that, in such a case, $P = \sum_x \pr{\psi_x} < \P({\mathcal{X}})$. Since $P \mathsf{E}(x) P = \pr{\psi_x}$ and $P {N\sub{\s}} P = \sum_{x,y} \<\psi_x| {N\sub{\s}} \psi_y\> |\psi_x\>\<\psi_y|$, the only necessary condition given by \thmref{theorem:non-faithful-fixedpoint-WAY} is that $\<\psi_x | {N\sub{\s}} \psi_y\>=0$ must hold for all $x\ne y$, which is clearly much weaker than the conditions given by \thmref{theorem:first-kind-WAY-distinguishability} and hence \corref{corollary:repeatable-WAY-distinguishability}.

\section{Conclusions}

We have provided a number of general and operational bounds which capture measurement disturbance, with emphasis on the setting in which there is a conservation law. We obtained a new, quantitative version of the WAY theorem, which generalises previous work in several respects, going beyond normal measurement schemes and not assuming that the observable to be measured is sharp. We saw that large apparatus coherence plays a key role for measurability and non-disturbance, pointing to the requirement of a ``large'' apparatus. This points further to possible deep connections between the WAY theorem and the rapidly developing theory of quantum reference frames, analysed so far only when the conserved quantity has a conjugate phase \cite{Loveridge2020a,Loveridge2017a}. We also extended the analysis surrounding the WAY theorem to the novel setting of sequential measurements of general pairs of observables. The quantitative bounds are further refined by the analysis of the fixed-point structure of the measurement channel in settings which have received scant attention. Such analysis indicates that, though necessary, large coherence in the conserved quantity is not sufficient for non-disturbance. We provide further quantitative bounds indicating scenarios in which non-disturbance cannot be achieved irrespective of the apparatus coherence. A limitation of our work is that many physically arising conserved quantities are unbounded; this case should also be studied systematically. It is a technically challenging endeavour, which we leave for future work.

\acknowledgments
M.H.M. acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 801505, as well as from the Slovak Academy of Sciences under MoRePro project OPEQ (19MRP0027). T.M. acknowledges financial support from JSPS KAKENHI Grant No. JP20K03732.

\bibliographystyle{apsrev4-2}
\section{Introduction}

Local public goods are important phenomena in our society. For example, individuals often acquire, either personally or through their peers, information on alternatives whose advantages they do not know. Since people benefit from their neighbors' investment, personal acquisition of information is a local public good. Information can be exchanged via in-person communication \citep{udry,feick}, in online forums, and on social platforms. For instance, on Twitter, people write posts and see the posts of their connections on their wall. Another example of local public goods is constituencies offering goods or services that can also be enjoyed by the citizens of nearby constituencies, such as public parks, a pedestrian city center, cultural activities, and the like. Importantly, the resulting pattern of spillovers can be represented as a directed network in which establishing a link to a player allows one to free ride on her contribution.

In the aforementioned examples, not only is contributing costly, but so is free riding, as individuals need to incur the opportunity cost of travelling to nearby constituencies, or of searching whom to follow and reading their posts on Twitter. These opportunity costs differ across individuals; for example, they are higher for those who receive higher wages, as their time is more valuable. Additionally, while the most important driver of contributors is a specific need for the public good, agents might have different resources and motivations to provide it. For example, some may post on Twitter because they have a lot of spare time, and others to become influencers. Some constituencies might invest a lot in culture because they are rich, others because of a pronounced interest of their inhabitants.

Heterogeneities in wages yield opposing distributional effects of free riding in networks: on the one hand, everyone can free ride, and this should reduce inequality. On the other hand, the richer might benefit more from free riding if it is costly. As a result, new technologies that changed the opportunity cost of free riding might have affected who benefits the most from free riding. This raises the question of how to design policies to increase welfare and redistribute resources. Indeed, the planner might want to subsidize the public good provision of certain individuals via monetary or non-monetary subsidies. For example, YouTube redistributes some of the revenue generated by user activity by rewarding its contributors with many initiatives, ranging from awards to fan fests.

To study these considerations, we propose a local public good game in which players allocate their time or budget across a public good, a private good, and links that allow them to free ride on others' provision. Introducing a budget constraint captures that individuals have different amounts of resources and/or time to dedicate to public good provision. We also allow for heterogeneous preferences and heterogeneous relative prices of the private good with respect to the public good and linking. Thereby, we capture that individuals might have different motivations, derive a different utility, and pay a different price for an identical bundle.

The paper makes two main contributions. First, we derive sufficient conditions for equilibrium existence and characterize equilibrium networks in a general framework that embeds well-known models of private provision of (local) public goods.
Second, we use these results to analyze the impact of the income distribution on inequality and derive policy implications.

Regarding the first contribution, we show that, if both the public and the private good are normal and neighbors' contributions sufficiently crowd out own contribution, a Nash equilibrium in which players establish weakly profitable connections exists; furthermore, these networks are core-periphery graphs in which the largest contributors are linked among themselves. Indeed, any two contributors are connected if they produce more than the linking cost, as this frees resources for additional public or private good consumption. Moreover, everyone prefers to link to the largest contributors, as this generates the largest spillovers. Therefore, any strict Nash network is a nested split graph, in which periphery players have nested neighborhoods. This equilibrium characterization is consistent with empirical evidence.\footnote{For example, both on Twitter and in an online forum where users ask and respond to queries about Java, there is a core of players who actively collaborate with each other, while most users only free ride \citep{bastos,adamic}.} Our characterization is also robust to several extensions (Section \ref{extensions}).

The framework we propose allows us to revisit classical results in the literature on global public goods \citep{BBV,warr}. In our model, this case corresponds to linking costs being zero. Then, when both the public and the private good are normal, a unique Nash equilibrium exists, in which total provision is neutral to income redistribution among contributors. If additionally players have identical preferences, the richest contribute and their contributions reduce inequality. To the contrary, when the linking costs are larger, contributors are not necessarily the richest players even when preferences are homogeneous; moreover, poorer players can be better off than richer ones.

To resolve the indeterminacy arising from multiple equilibria, we study inequality in large societies, for which we derive the so-called law of the few \citep{gg}, i.e., the proportion of players in the core converges to zero. In this case, we can analyze welfare focusing on free riders. In particular, the impact of free riding in networks on inequality depends on the initial wealth distribution. When players have similar wealth levels, they can afford the same links and enjoy the same spillovers, thereby reducing inequality. If, to the contrary, a society is very unequal, some can afford more links and hence free ride more than others. As a result, in large societies, networks increase inequality when the economy is relatively unequal to begin with. The mechanism we highlight implies that new technologies that make it cheaper to free ride on others will reduce inequality.

Our model has important implications for policy interventions. In contrast to models with an exogenous network \citep{nizar}, here income redistribution affects linking incentives, giving the planner an additional constraint, but also an additional instrument, to improve welfare. Indeed, in an exogenous core-periphery graph, it is optimal to transfer income to core players. However, when players can change their links as a result of the intervention, it is better to transfer income to players who are not central only because of limited resources, provided they value the public good a lot, as this translates into a larger provision.
Beyond improving welfare, the efficient solution can be implemented if the social planner is able to charge personalized prices for the public good. This could be relevant for online social platforms, where an individual's public good provision can be subsidized via sponsorship, while other users can be taxed via digital services taxes that collect some of the revenue generated by user activity. For example, social platforms such as YouTube have a host of programs to reward their largest contributors in order to incentivize the creation of new content, from awards to fan fests.

The novel policy implications of our model are due to the following main innovations: players face a budget constraint, and both the network as well as the demand for the public good are endogenous. The latter assumption contrasts with existing models of games on endogenous networks that assume a fixed demand for the public good \citep{gg}. This paper is then the first to study free riding in endogenous networks under general preferences and heterogeneous budgets. When the budget is not relevant for the demand of the public good, players who value it more provide more and occupy more central positions in the network \citep{KM}. However, when the budget set matters, novel insights emerge on how poorer players can have more central positions or benefit more from free riding, thereby affecting both inequality and the policy implications.\footnote{\cite{leeuwenJEEA} introduce competition for status in such a framework. Status rents foster public good provision in the repeated game. Our model accommodates that spillovers increase consumption also in a static game by allowing for richer preferences. Our characterization is also reminiscent of that of \cite{ramos}, but the two models are very different. Indeed, in their paper homogeneous players establish costly links to see other players' signals, and then play a beauty contest game. In \cite{dotan}, exogenously determined high types form the core, which provides low types in the periphery with cheap, low-quality indirect connections.}

We allow for best replies to be non-linear in neighbors' provision and income, which is empirically relevant (e.g., \citealp{quadratic}). Previous analysis with similarly general preferences has been limited to games on fixed networks; e.g., \cite{nizar} shows that the equilibrium is unique if the public good is sufficiently normal. For endogenous networks, very strong normality not only does not yield uniqueness, but may even conflict with equilibrium existence. Here uniqueness arises for very low linking costs.\footnote{Most of the literature assumes only one good and linear best replies, both when the network is fixed \citep{keyplayer,brkran,brkrandam} or endogenous \citep{baetz,hiller}. In particular, \cite{ktz} study a dynamic potential game of strategic complements that predicts nested split graphs as stochastically stable in a myopic best reply dynamics. However, the nature of the strategic interaction is very different, as we focus on substitutes. We also allow for heterogeneous players and games that do not admit a potential. More importantly, we study a static and simultaneous game with rational players where nestedness obtains only in strict Nash equilibria.} \cite{golub} characterize the Pareto efficient outcomes and Lindahl solution in a local public good game on an exogenously fixed network in terms of well-studied network statistics.
In our paper, to the contrary, we are interested in understanding whether policies aimed at raising welfare would have the intended effects when players can change with whom to interact after the policy intervention.\footnote{\cite{belhaj2016} show that nested split graphs are also efficient networks in a network game with local complementarities. Our results on second-best implementation complement theirs, as we assume strategic substitutes.} In our benchmark model, spillovers flow one-way, i.e., towards the player establishing the link.\footnote{Some papers study network formation with one-way flow of spillovers abstracting from strategic interactions on the network (e.g., \citealp{balagoyal}, \citealp{galeotti2006one}, \citealp{billand2008existence}).} To account for situations in which the interaction is face-to-face, we extend our analysis to two-way flow of spillovers (see the Online Appendix). While most of our findings are robust, we also derive a sufficient condition for a stricter version of the law of the few.

The remainder of the paper is organized as follows. Section \ref{model} introduces the model. Section \ref{mainsec} provides results on the existence and characterization of equilibria. Section \ref{sec:homogeneous} focuses on economies with agents with homogeneous preferences to compare our results to those of \cite{BBV}. Section \ref{inequality} studies the impact of income (re)distribution and personalized prices on inequality. Section \ref{extensions} discusses some extensions. Section \ref{conclusions} concludes. All proofs are in the appendix.

\section{Model}\label{model}

We introduce a local public good game, in which players spend their budget on private good consumption, public good provision and connections.

\smallskip \noindent \textbf{Players.} There is a set of players $N = \{1, ..., n\}$; $i$ denotes a typical player.

\smallskip \noindent \textbf{Network.} We denote the directed network of social connections by $g$. Player $i$'s linking strategy is denoted by a row vector $g_i =( g_{i1}, ..., g_{in} ) \in G_i = \{0, 1\}^{n}$, where $g_{ii}=0$ and $g_{ij} \in \{0,1\}$ for all $i,j \in N$, $i \neq j$. We say that player $i$ links to player $j$ if $g_{ij}=1$. Then, $g=(g_1,...,g_n)^{T}$. Linking decisions are one-sided: the player proposing a link pays $k>0$ and the link is established. Let $N_i(g) = \{j \in N: g_{ij} = 1\}$ be the set of players to which $i$ links and $\eta_i(g) = \left\vert N_i(g) \right\vert$ the number of links that $i$ sponsors. In a \textit{core-periphery graph}, there are two groups of players, the \textit{periphery} $\mathcal{P}(g)$ and the \textit{core} $\mathcal{C}(g)$, such that, \textit{(i)} for every $i,j \in \mathcal{P}(g)$, $g_{ij}=g_{ji}=0$, \textit{(ii)} for every $l,m \in \mathcal{C}(g)$, $g_{lm}=g_{ml}=1$, and \textit{(iii)} for any $i\in\mathcal{P}(g)$, there is $l\in\mathcal{C}(g)$ such that $g_{il}=1$. Hence, all links in the core are reciprocated. A \textit{complete core-periphery} network is such that $N_i(g)=\mathcal{C}(g)$ for all $i \in \mathcal{P}(g)$ and there are no isolated players. Nodes in $\mathcal{C}(g)$ are referred to as \textit{hubs}. We write $\mathcal{C}$ and $\mathcal{P}$ instead of $\mathcal{C}(g)$ and $\mathcal{P}(g)$, respectively, when no confusion arises. A core-periphery network with a single hub is referred to as a \textit{star}. A core-periphery network in which the sets of players' neighbors are nested is a \emph{nested split graph}.
Formally, a nested split graph is a core-periphery graph where, if $\eta_j(g)\leq \eta_i(g)$, then $N_j(g) \subseteq N_i(g)$ for any $i,j\in \mathcal{P}(g)$.\footnote{We extend here in a natural way some graph theoretic notions usually defined on undirected networks to our model of directed network formation. The corresponding definitions for the two-way flow model are in the Online Appendix. In particular, the notion of core-periphery graph we use here is more specific than the ones defined on the closure of $g$ (see below), as it embeds that core players reciprocate links and periphery players link to the core.} Denote by $\overline{g}$ the closure of $g$, such that $\overline{g}_{ij} = \max \{g_{ij}, g_{ji}\}$, for each $i,j \in N$; that is, each directed link in $g$ is replaced by an undirected one. Let $N_i(\overline{g}) = \{j \in N: \overline{g}_{ij} = 1\}$ be the set of players to which $i$ is linked in $\overline{g}$, and let $\eta_i(\overline{g}) = \left\vert N_i(\overline{g})\right\vert$ be $i$'s degree, i.e., the number of $i$'s neighbors in $\overline{g}$. There is a path in $\overline{g}$ from $i$ to $j$ if either $\overline{g}_{ij}=1$, or there are $m$ different players $j_1,...,j_m$ distinct from $i$ and $j$, such that $\overline{g}_{ij_1}=\overline{g}_{j_1j_2}=...=\overline{g}_{j_m j}=1$. A \textit{component} of network $\overline{g}$ is a set of players such that there is a path connecting every two players in the set and no path to players outside the set. Define a cell $h$ of a nested split graph as the set of players $i \in N$ with $h = \eta_i(\overline{g})$ links.

\smallskip \noindent \textbf{Consumption.} We denote by $x_i \in \mathcal{R}^+$ and $y_i \in \mathcal{R}^+$ the amount of public and private good acquired by player $i$, respectively, where $\mathcal{R}^+ \equiv [0, + \infty)$. Given $g,$ we denote by $\overline{x}_{-i}= \sum_{j \in N} g_{ij} x_j$ player $i$'s spillovers and by $\overline{x}_i = x_i + \overline{x}_{-i}$ player $i$'s public good consumption, given by the sum of her provision and the spillovers she receives from her neighbors in network $g$. Hence, we assume that spillovers flow one-way, i.e., towards the player sponsoring the link. In Online Appendix A, we study the model with two-way flow of spillovers.

\smallskip \noindent \textbf{Strategies.} Player $i$'s set of strategies is $S_i = \mathcal{R}^+\times\mathcal{R}^+\times G_i$, and the set of all players' strategies is $S = \prod_{i \in N} S_i $. A strategy profile $s = (x,y, g) \in S$ specifies provision of the public good $x = (x_1,... ,x_n)$, consumption of the private good $y = (y_1,...,y_n)$, as well as links $g = (g_1, ... ,g_n)^T$ for each player. In order to emphasize player $i$'s role, $s$ is sometimes written as $(s_i, s_{-i})$, where $s_{-i}\in \prod_{j\neq i} S_j.$

\smallskip \noindent \textbf{Payoffs.} Each player $i$ faces the following maximization problem:
\begin{eqnarray}\label{max}
\max_{(x_i,y_i,g_i)\in S_i} && U_i(\overline{x}_i,y_i) \\ \notag
\text{s.t.} & & x_i+p_i y_i+\eta_i(g) k=w_i,
\end{eqnarray}
where $x_i$ and $y_i$ are the respective amounts of public and private good personally acquired by $i$, while $\overline{x}_i$ is $i$'s public good consumption, $p_i > 0$ is the price of the private good paid by player $i$, $k > 0$ is the cost of linking and $w_i >0$ her wealth. We assume $U_i(\cdot,\cdot)$ is a twice continuously differentiable, strictly concave and increasing function in its arguments for each $i \in N$.
Under these assumptions, there is a unique and non-negative optimal investment in the public and private good for every $i$, which depends on own wealth, the links sponsored and the spillovers received from neighbors. Denote player $i$'s optimal consumption in isolation by $(x_i^I, y_i^I)$. By analyzing heterogeneous prices for the private good, we capture that the relative price of the private good with respect to that of the public good and the linking costs can differ across players. Additionally, some people might like the public good more than others. Hence, we allow players to have different preferences.

The utility maximization problem can be rewritten with player $i$ choosing her local public good consumption $\overline{x}_i$, rather than her public good provision $x_i$:
\begin{eqnarray}\label{max2}
\max_{( \overline{x}_i,y_i,g_i)\in S_i} && U_i(\overline{x}_i,y_i) \\ \notag
\text{s.t.} && \overline{x}_i+p_i y_i= w_i-\eta_i(g) k+ \overline{x}_{-i}, \text{ and } \overline{x}_i \geq \overline{x}_{-i}.
\end{eqnarray}
Ignoring the inequality constraint, we can express player $i$'s demand function for the public good as $\overline{x}_i=\gamma_i(w_i-\eta_i(g) k+ \overline{x}_{-i})$, where $\gamma_i:\mathcal{R}\rightarrow\mathcal{R}$ is a continuous function. We call $\overline{w}_i(g)=w_i-\eta_i(g) k+ \overline{x}_{-i}$ player $i$'s \textit{net social income} and $\gamma_i$ $i$'s Engel curve. Hence, $x_i=\overline{x}_i-\overline{x}_{-i}=\max\{\gamma_i(\overline{w}_i(g))-\overline{x}_{-i},0\}$. Since the network is endogenous, player $i$'s demand for the public good is net of the budget she spends on links. We assume that both the public and the private good are normal for all players.\footnote{Note that, when $i$'s links change, $i$'s net social income typically changes discontinuously. However, $\gamma$ \textit{per se} is a continuously differentiable function and jumps in net social income do not pose a problem for our analysis.}

\begin{assumption}\label{normality}
For each $i\in N$, $\gamma_i(w)$ is continuously differentiable with respect to $w$ with derivative $\gamma_i^{\prime}\in[0,1]$.
\end{assumption}

Let us stress that this assumption is mild and commonly made in public good games. In particular, our model embeds the global public good game of \cite{BBV} as $k\rightarrow 0$. Then, contributors form a core, periphery players link to all contributors and there are no spillovers among inactive players.\footnote{When $k=0$, a complete network with links among inactive players is also an equilibrium. Note that this is also a core-periphery graph, and still there are no spillovers among inactive players. Also note that we allow for $\gamma_i^{\prime}\in\{0,1\}$. As in \cite{BBV}, these extremes do not pose a problem for existence, but they do for uniqueness of the equilibrium (Corollary \ref{ex-bbv}).}

\smallskip \noindent \textbf{Equilibrium.} A strategy profile $s^\ast = (x^\ast, y^\ast, g^\ast)$ is a \emph{Nash equilibrium} if for all $i \in N$, $s_i^{\ast}$ is a solution to the maximization problem \eqref{max} given $s_{-i}^\ast$. To characterize the equilibrium networks, we will use two refinements. We say a Nash equilibrium is \emph{sociable} if any player $i$ who is indifferent between establishing a link or not establishes this link.
More formally, a Nash equilibrium $(x^\ast,y^\ast,g^\ast)$ is sociable if, whenever there is an alternative best reply $(x_i^\prime,y_i^\prime,g_i^\prime)$ of some player $i\in N$ such that $U_i(\bar{x}_i^\ast,y_i^\ast)=U_i(\bar{x}_i^\prime,y^\prime_i\,|\,\bar{x}^\ast_{-i},y^\ast_{-i})$, then $g^{\ast}_{ij}\geq g^{\prime}_{ij}$ for any $j\in N\setminus \{i\}$, with strict inequality for some $j$. This refinement allows us to discard equilibrium networks that are not robust to small changes in players' wealth and preferences. An equilibrium is \emph{strict} if no player can unilaterally change her strategy without reducing her payoff. Hence, any strict equilibrium is sociable.

\smallskip \noindent \textbf{Social Welfare.} We define social welfare as the sum of individual payoffs.

\smallskip \noindent \textbf{Discussion and interpretation.} The presence of a budget set and the fact that players pay to free ride are the key innovations of our model. The budget constraint stems from players having limited wealth or time to devote to these activities; links are one-sided and the flow of spillovers is one-way. These assumptions capture situations in which one needs to exert some effort to enjoy the public good provided by others, such as reading a post on Twitter. In this example, players decide how to allocate their time between working and leisure. During leisure, they either collect information directly and post it on Twitter, or read the posts of the users they follow. Heterogeneous wages result in heterogeneous relative prices for the private good. This generates novel trade-offs between using resources for free riding by linking, public good provision and private good consumption. Most importantly, a higher net social income translates into a higher demand for the public good whenever $\gamma_i^{\prime}>0$.

In the next sections, we characterize the sociable Nash equilibria of this game, and discuss who the largest contributors are. Then, we will discuss how the degree of heterogeneity in the initial endowment or in wages (if the constraint is a time constraint) shapes the distributional effect of free riding in networks. We will then analyze the policy implications for income redistribution and the design of personalized prices.

\section{Main Analysis}\label{mainsec}

Let us first discuss two lemmas that highlight the novel aspects of our model. First, differently from \cite{gg} and \cite{KM}, spillovers do not completely crowd out own contributions if $\gamma_i^{\prime}>0$. As we show later, this has an impact on the existence and characterization of equilibrium, and in particular, on who the largest contributors are.

\begin{lemma}\label{contribution}
If Assumption \ref{normality} holds, contributions are decreasing in spillovers.
\end{lemma}

This result follows immediately from the fact that $x_{i}=\max\{\gamma_i(\overline{w}_i(g))-\overline{x}_{-i},0\}$ and $\gamma_i^{\prime}\in[0,1]$ (Assumption \ref{normality}), so that the demand for the public good is endogenous and increasing in net social income. Second, despite heterogeneity in preferences, the introduction of a budget constraint implies that the largest contributors are linked.

\begin{lemma}\label{almost_core}
Given a Nash equilibrium, suppose there are players $i,j\in N$ with $x^{\ast}_i, x^{\ast}_j>k$. Then, if Assumption \ref{normality} holds, $g_{ij}^{\ast} = 1$.
\end{lemma}

When the budget set is irrelevant \citep{KM}, the gains from a connection determine who links with whom; here instead, all players who produce more than the linking cost are linked to each other even if they can have very different gains from being connected. Indeed, if two such players were not neighbors, they could increase their net social income by free riding on each other's provision. This increases their demand for the public good if it is normal. This novel insight drives the following proposition.

\begin{proposition}\label{characterization}
If $k>\overline{k}=\max_{i\in N}\{x_i^I\}$, the unique equilibrium network is empty. If $k\leq \overline{k}$ and Assumption \ref{normality} holds, any sociable Nash equilibrium network is a core-periphery graph, while any strict Nash equilibrium network is a nested split graph.
\end{proposition}

This sharp characterization emerges from two equilibrium requirements. First, by Lemma \ref{almost_core}, players who produce strictly more than the linking cost $k$ form a core of interconnected players. The refinement of sociable Nash equilibrium ensures that the largest contributors form a core. Indeed, if there were several players providing an amount identical to $k$, then free riders would be indifferent between linking and providing an amount $k$ of the public good themselves. Networks without a core could be sustained in equilibrium due to these kinds of indifference. Second, conditional on linking, a player links to the largest contributors. Together with the existence of the core, this generates core-periphery structures. When players are indifferent between linking to players providing the same amount of public good, but do not want to link to all of them, they could link to different ones. The refinement of strict Nash equilibrium ensures that nested split graphs emerge in such situations, as everybody links to larger contributors first.\footnote{Note that core players in a strict Nash equilibrium can have identical provisions, as long as no periphery player links to only one of them.}

The limited role of preferences in Proposition \ref{characterization} is due to the main novelty of our model: the introduction of the budget constraint. Indeed, when it does not constrain the demand for the public good, players who value the public good more have more links, and produce more if they are in the core (Theorem 2, \citealp{KM}). This happens because players with a high valuation not only have a larger demand, but also have larger gains from a connection, as in Figure \ref{1aej}. In the example, player $i$'s demand for the public good is $\min\{b_i^2/4,w_i\}$. Since in \ref{1aej} $w_i>b_i^2/4$ for all players, those who value the public good more provide more. In panel \ref{2aej}, instead, $w_1 < b_1^2/4$: player $1$, who values the public good the most, is in the periphery because she is too poor to contribute enough to be in the core of a sociable equilibrium network. A similar phenomenon can arise because of heterogeneity in the price of the private good.\footnote{For example, the network in Figure \ref{1aej} is an equilibrium also when $U_i(\overline{x}_i,y_i) =(\sqrt{\overline{x}_i}+\sqrt{y_i})^2$ for all $i\in N$, $k=2$, $w=(10,6,6)$, $p_1=1$ and $p_2=p_3=1.5$. In that case, $x^\ast=(3,0,0)$, even if $2$ and $3$ face a higher price for the private good, which is a substitute for the public good.} As we show later, taking into account these phenomena is particularly relevant in the design of policy interventions.

\begin{figure}[htbp!]
\begin{center}
\begin{minipage}{.48\linewidth}
\subfigure[$g^\ast$]{\label{1aej}
\begin{minipage}{.5\linewidth}
\resizebox{2.5cm}{!} {
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}]
\node[main node,fill=black!20] (1) {1};
\node[main node] (2) [below left of=1] {2};
\node[main node] (3) [below right of=1] {3};
\path[every node/.style={font=\sffamily\small}]
(2) edge [->] node [right] {} (1)
(3) edge [->] node [right] {} (1);
\end{tikzpicture}
}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{center}
\resizebox{2.7cm}{!} {
\begin{tabular}{c|c|c|c} Player & 1 & 2 & 3 \\ \hline \hline $b_i^2/4$& 4 & 3 & 3 \\ \hline $w_i$& 5 & 5 & 5 \\ \hline $x_i^{\ast}$& 4 & 0 & 0 \\ \hline $y_i^{\ast}$& 1 & 4 & 4 \end{tabular}
}
\end{center}
\end{minipage}
}
\end{minipage}
\begin{minipage}{.48\linewidth}
\subfigure[$g^{\ast\ast}$]{\label{2aej}
\begin{minipage}{.5\linewidth}
\resizebox{2.5cm}{!} {
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}]
\node[main node] (1) {1};
\node[main node,fill=black!20] (2) [below left of=1] {2};
\node[main node,fill=black!20] (3) [below right of=1] {3};
\path[every node/.style={font=\sffamily\small}]
(2) edge [<-] node [right] {} (1)
(2) edge [<->] node [right] {} (3) ;
\end{tikzpicture}
}
\end{minipage}
\begin{minipage}{.5\linewidth}
\begin{center}
\resizebox{3.4cm}{!} {
\begin{tabular}{c|c|c|c} Player & 1 & 2 & 3 \\ \hline \hline $b_i^2/4$& 4 & 3 & 3 \\ \hline $w_i$& 1.5 & 5 & 5 \\ \hline $x_i^{\ast\ast}$ & .5 & 1.5 & 1.5 \\ \hline $y_i^{\ast\ast}$ & 0 & 3 & 3 \end{tabular}
}
\end{center}
\end{minipage}
}
\end{minipage}
\caption{Two examples with $U_i(\overline{x}_i,y_i) =\sqrt{b_i\sqrt{\overline{x}_i}+y_i}$, $b_1=4$, $b_2=b_3=2\sqrt{3}$, $k=1$, and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{comparison_aej}
\end{center}
\end{figure}

Since we do not have linear best replies, the few conditions we impose to derive Proposition \ref{characterization} do not ensure equilibrium existence for two reasons: on the one hand, free riding decreases contribution (Lemma \ref{contribution}); on the other, it increases the demand for the public good. To see why this matters, suppose there is a player $i$ in the periphery who produces enough public good for other players to profitably link to $i$.\footnote{Note that, if $i$ is best-responding, $i$ is already linked to every player in the core; otherwise, establishing such links would constitute a profitable deviation.} Clearly, this is not an equilibrium, so consider moving $i$ into the core. In that case, players who remain in the periphery free ride also on $i$, thereby increasing their demand for the public good. Hence, if their own provision is not reduced enough, they might now themselves produce more than $k$, which is again not an equilibrium. In sum, if players' Engel curves are very steep, it might be possible to construct cycles of deviations in which players produce too much when in the periphery, but not enough when in the core. Yet, it is sufficient for equilibrium existence to bound the slope of the Engel curve, $\gamma^{\prime}$. This bounds the increase in players' demand for the public good when they free ride on others. Moreover, it ensures that spillovers substantially crowd out contributions of free riders.
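To see the role of the slope concretely, consider a Cobb--Douglas parametrization, in line with the examples used in our figures (the specific computation below is a sketch of ours): let $U_i(\overline{x}_i,y_i) = \overline{x}_i^a y_i^{1-a}$ with $a\in(0,1)$ and $p_i=1$. Then $\gamma_i(\overline{w}_i) = a\,\overline{w}_i$, so $\gamma_i^{\prime}=a$, and
\begin{align*}
x_i = \max\{a\,(w_i - \eta_i(g) k) - (1-a)\,\overline{x}_{-i},\, 0\},
\end{align*}
i.e., spillovers crowd out own provision at rate $1-a$. Hence, the steeper the Engel curve (the closer $a$ is to $1$), the weaker the crowding out, which is precisely the case the following existence result guards against.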
\begin{proposition}\label{existence}
If Assumption \ref{normality} holds, there is $\overline{\gamma}\leq 1$ such that a sociable Nash equilibrium exists if $\gamma_i^{\prime} \leq \overline{\gamma}$ for all $i\in N$ for whom $x_i^I \geq k$.
\end{proposition}

Proposition \ref{existence} complements existing results for exogenous networks, which show existence of a unique equilibrium when $\gamma_i^{\prime}$ is sufficiently large \citep{nizar}. The game we study instead has potentially a very large set of equilibria. However, a very large $\gamma^\prime$ does not guarantee uniqueness.\footnote{As usual, we define uniqueness up to permutations in the labels of otherwise identical players.} Indeed, Figure \ref{multiple_equilibria} presents an economy with three players with linear Engel curves with slope $.99$; while the assumption for uniqueness of \cite{nizar} is satisfied, all three networks are equilibria. Rather, in our framework the equilibrium is unique when the linking cost is sufficiently small, as then the ranking of players' demand for the public good is not affected by their linking strategies. In other words, if $k$ is sufficiently small, free riding opportunities are the same for all players, so that the set of large contributors is unique as in \cite{BBV}.\footnote{In the example of Figure \ref{multiple_equilibria}, $g^{\ast\ast}$ is the unique equilibrium if $2.63<k<3.93$, and the complete network if $k\leq 2.63$.}

\begin{figure}[htbp!]
\begin{center}
\begin{minipage}{.34\linewidth}
\centering
\resizebox{2.5cm}{!}{
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}]
\node[main node,fill=black!20] (1) {1};
\node[main node] (2) [below right of=1] {2};
\node[main node] (3) [below left of=1] {3};
\path[every node/.style={font=\sffamily\small}]
(2) edge [->] node [right] {} (1)
(3) edge [->] node [right] {} (1);
\end{tikzpicture}
}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\resizebox{2.5cm}{!}{
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}]
\node[main node,fill=black!20] (1) {1};
\node[main node,fill=black!20] (2) [below right of=1] {2};
\node[main node] (3) [below left of=1] {3};
\path[every node/.style={font=\sffamily\small}]
(3) edge [->] node [right] {} (1)
(2) edge [<->] node [right] {} (1)
(3) edge [->] node [right] {} (2);
\end{tikzpicture}
}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\resizebox{2.5cm}{!} {
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}]
\node[main node] (1) {1};
\node[main node,fill=black!20] (2) [below right of=1] {2};
\node[main node,fill=black!20] (3) [below left of=1] {3};
\path[every node/.style={font=\sffamily\small}]
(2) edge [<-] node [right] {} (1)
(2) edge [<->] node [right] {} (3)
(3) edge [<-] node [right] {} (1);
\end{tikzpicture}
}
\end{minipage}
\vspace{.3cm}
\begin{minipage}{.32\linewidth}
\centering
\subfigure[$g^\ast$]{\label{1eq}
\resizebox{3cm}{!}{
\begin{tabular}{c|c|c|c} Player & 1& 2 & 3 \\ \hline $w_i$& 10& 8 & 8 \\ \hline $x_i^{\ast}$& 9.9 & 3.91 & 3.91 \\ \hline $y_i^{\ast}$& .1 & .14 & .14 \end{tabular}
}}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\subfigure[$g^{\ast\ast}$]{\label{2eq}
\resizebox{3cm}{!} {
\begin{tabular}{c|c|c|c} Player & 1& 2 & 3 \\ \hline $w_i$& 10& 8 & 8 \\ \hline $x_i^{\ast\ast}$& 5.95 & 3.95 & 0 \\ \hline $y_i^{\ast\ast}$& .1 & .1 & .1 \end{tabular}
}}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\subfigure[$g^{\ast\ast\ast}$]{\label{3eq}
\resizebox{3cm}{!} {
\begin{tabular}{c|c|c|c} Player & 1& 2 & 3 \\ \hline $w_i$& 10& 8 & 8 \\ \hline $x_i^{\ast\ast\ast}$& 2 & 3.97 & 3.97 \\ \hline $y_i^{\ast\ast\ast}$& .1 & .08 & .08 \end{tabular}
}}
\end{minipage}
\caption{Multiple equilibria for $k=3.95$, $U_i(\overline{x}_i,y_i) = \overline{x}_i^a y_i^{1-a}$, $\gamma_i^{\prime}(\overline{w}_i)=a=.99$ and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{multiple_equilibria}
\end{center}
\end{figure}

Furthermore, the unique equilibrium on a fixed network is not necessarily an equilibrium when the network is endogenous, because it might prescribe that periphery players be large contributors, which is impossible once links are chosen.\footnote{Consider the third example in Figure 3 in \citet[p.~541]{nizar}. The equilibrium is a star where the hub free rides on the contributions of the periphery. This is no equilibrium with endogenous links as contributors would free ride on each other's effort, thereby reducing their provision.} This result is stated in the following corollary.\footnote{We focus on sociable equilibria in the following corollary because there are many equilibria when $k=0$, as individuals are indifferent between deleting and adding links to inactive players. Note also that there is trivially a unique equilibrium when the linking cost is very high: a star with the largest contributor as hub, and eventually, the empty network as the linking cost increases further.}

\begin{corollary}\label{ex-bbv}
There exists $\tilde{k}$ such that if $0\leq k<\tilde{k}$ and $\gamma_i^{\prime} \in (0,1)$ for all $i\in N$, there exists a unique sociable Nash equilibrium, entailing a core-periphery network.
\end{corollary}

Intuitively, the game we study has potentially a very large set of equilibria because different positions in the network imply very different net social incomes, thereby affecting demand and provision of the public good. In particular, a player's budget available for provision decreases as more links are established. However, as the linking costs decrease, a player's demand is less sensitive to the number of links established. Hence, in the limit as $k\rightarrow 0$, the classical uniqueness result of \cite{BBV} is reestablished.

\subsection{Large Societies}\label{large_soc}

The law of the few predicts that, as the number of players increases, the proportion of active players in non-empty strict equilibrium networks goes to zero \citep{gg}. In our model, this holds only for the proportion of players in the core. Let $\omega$ be the maximal wealth that players possess.

\begin{proposition}\label{lotf}
Suppose Assumption \ref{normality} holds. Given a scalar $\omega > 0$, if $w_i \leq \omega$ for all $i \in N$ as $n \rightarrow \infty$, then in every sociable Nash equilibrium $\lim_{n\rightarrow\infty}|\mathcal{C}(\overline{g}^{\ast})|/n=0$.
\end{proposition}

Intuitively, if there were many active players providing more than $k$ of the public good, they would all be connected. However, this is not possible as we impose the restriction that all players have a finite budget bounded by $\omega$. In other words, most of them need to free ride. Proposition \ref{lotf} allows us to focus on periphery players to understand the welfare effects of networks. Indeed, the welfare effects on core players are less clear, since different players are in the core in different equilibria.
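The intuition above can be made precise with a simple budget count (a sketch of ours, not the formal proof in the appendix): in a sociable equilibrium, core players reciprocate links with all other core members, so each $i \in \mathcal{C}(\overline{g}^{\ast})$ sponsors at least $|\mathcal{C}(\overline{g}^{\ast})|-1$ links. Since all terms in her budget constraint are non-negative,
\begin{align*}
\left(|\mathcal{C}(\overline{g}^{\ast})|-1\right) k \leq \eta_i(g^{\ast})\, k \leq w_i \leq \omega \qquad \Longrightarrow \qquad |\mathcal{C}(\overline{g}^{\ast})| \leq 1 + \frac{\omega}{k},
\end{align*}
a bound independent of $n$, so that $|\mathcal{C}(\overline{g}^{\ast})|/n \rightarrow 0$ as $n \rightarrow \infty$.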
\section{Homogeneous Preferences and Prices}\label{sec:homogeneous}

We next study an economy in which players have identical preferences and face an identical price for the private good.

\begin{assumption}\label{homogeneous}
Every player has the same preferences and pays the same price for the private good.
\end{assumption}

When preferences are homogeneous and the linking costs are sufficiently low, as in \cite{BBV}, there is a unique threshold of wealth such that players whose wealth is above this threshold are in the core and consume the same amount of the private as well as of the public good, while the others are in the periphery and free ride on the contributors. When linking costs are higher, but the budget constraint is not binding for public good provision, differences in wealth instead are irrelevant. In that case, players with the highest demand for the public good are in the core, while the others are in the periphery or isolated \citep{KM}. As the next proposition shows, results are more nuanced in our model, as poorer players can be in the core if richer players free ride a lot, thereby reducing their provision until it is not profitable to link to them.

\begin{proposition}\label{threshold}
If Assumptions \ref{normality} and \ref{homogeneous} hold, a sociable Nash equilibrium $(x^\ast,y^\ast,g^\ast)$ always exists. Furthermore:\\
\textit{(i)} For any $i,j \in \mathcal{C}(g^\ast),$ $\overline{x}_i^* = \overline{x}_j^*$ and $y_i^* = y_j^*$, while $x_i^* > x_j^*$ if and only if $w_i > w_j.$\\
\textit{(ii)} If there is $j \in \mathcal{P}(g^*)$ such that $g_{ji}^{\ast}=1$ for all $i \in \mathcal{C}(g^*)$, then $\overline{x}_j^* \geq \overline{x}_i^*$ for all $i \in \mathcal{C}(g^*)$.\\
\textit{(iii)} For any $i,j\in \mathcal{P}(g^*)$ such that $N_i(g^*) = N_j(g^*)$, $\overline{x}_i^\ast \geq \overline{x}_j^\ast$, $x_i^\ast \geq x_j^\ast$ and $y_i^\ast \geq y_j^\ast$ if and only if $w_i\geq w_j$.\\
\textit{(iv)} Take $i, j \in \mathcal{P}(g^*)$. If $w_i\geq w_j$, then $\eta_i(g^*) \geq \eta_j(g^*)$.\\
\textit{(v)} There is $w^\prime$ such that $i\in \mathcal{P}(g^*)$ if $w_i\leq w^\prime$.
\end{proposition}

Part \textit{(i)} shows that in our model only players in the core consume the same bundle, while this is not true for players who contribute less. Part \textit{(ii)} shows that a periphery player who connects to all players in the core consumes more than them, while she contributes less. Indeed, such a player free rides the most, while her wealth can be larger or smaller than that of core players. Larger spillovers increase her net social income which, due to normality, translates into a higher demand for the public good. This does not arise in models without income effect \citep{KM}.\footnote{This result is not due to the assumption of one-way flow of spillovers; indeed, we show in the Online Appendix that it also holds under the alternative assumption of two-way flow. A similar effect arises when the largest contributors are not connected on a fixed network \citep{brkran,nizar}. However, these configurations are not an equilibrium when the network is endogenous, as these contributors would like to free ride on each other.} While the monotonicity of provision in wealth holds for all players in global public good games, parts \textit{(iii)} and \textit{(iv)} show that here this holds among players with the same neighbors. Hence, it depends on who free rides on whom.
Similarly, within the periphery, a player with more neighbors has to be richer to afford more links, free ride more and thereby possibly contribute less. Indeed, periphery players who sponsor fewer links may provide more public good: as they have fewer resources to sponsor links, they free ride less, which increases their contributions.

A consequence of part \textit{(iv)} is that periphery players tend to afford a different number of links the larger wealth inequality is. Therefore, the number of cells, that is, the sets of players having the same links in $\overline{g}$, weakly increases when wealth dispersion increases. In particular, complete core-periphery graphs emerge when players have similar wealth levels, since then all players can afford the same links.

\begin{corollary}
If Assumptions \ref{normality} and \ref{homogeneous} hold, let $\tilde{w} = \min_{j \in \mathcal{P}} w_j$. Then, in any sociable Nash equilibrium, as $\tilde{w}$ decreases, the number of cells increases.
\end{corollary}

Last, but not least, part \textit{(v)} shows that we can identify those players who are poor enough so that they are always in the periphery. However, there is neither a threshold of wealth above which players are contributors nor one above which they belong to the core. Hence, while the richest players are the largest contributors when the linking costs are sufficiently low (Corollary \ref{ex-bbv}), when $k$ increases, poorer players can be in more central positions.

When multiple equilibria exist, it is interesting to understand how they compare in terms of welfare. To fix ideas, consider Figure \ref{multiple_welfare}. In this example, there are two equilibria: in \ref{1eqw}, the richest players are in the core; in \ref{2eqw}, player $3$ is in the core, despite being poorer than $2$. Total public good provision is higher in \ref{1eqw}. However, in \ref{2eqw}, player $1$ contributes more to the public good to compensate for the lower contribution of $3$. As a result, player $4$, who can afford only one link, is better off in \ref{2eqw}. Additionally, if there are at least three players like $4$ (with a wealth of $4$ and linking only to $1$), equilibria like \ref{2eqw} are associated with a higher welfare.

\begin{figure}[htbp!]
\begin{center}
\begin{minipage}{.48\linewidth}
\subfigure[$g^\ast$]{\label{1eqw}
\resizebox{2.8cm}{!}{
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}]
\node[main node,fill=black!20] (1) {1};
\node[main node,fill=black!20] (2) [right of=1, xshift=20mm] {2};
\node[main node] (3) [below of=1] {3};
\node[main node] (4) [below of=2] {4};
\path[every node/.style={font=\sffamily\small}]
(2) edge [<->] node [right] {} (1)
(3) edge [->] node [right] {} (1)
(3) edge [->] node [right] {} (2)
(4) edge [->] node [right] {} (1);
\end{tikzpicture}
}
\resizebox{4cm}{!} {
\begin{tabular}{c|c|c|c|c} Player & 1& 2 & 3 &4\\ \hline $w_i$& 10 & 9 & 8 & 4 \\ \hline \rule{0pt}{12pt} $x_i^{\ast}$& $3.\overline{3}$ & $2.\overline{3}$ & $.1\overline{6}$ & $0$ \\ \hline \rule{0pt}{12pt} $y_i^{\ast}$& $5.\overline{6}$ & $5.\overline{6}$ & $5.8\overline{3}$ & $3$ \end{tabular}
}
}
\end{minipage}
\begin{minipage}{.48\linewidth}
\subfigure[$g^{\ast\ast}$]{\label{2eqw}
\resizebox{2.8cm}{!}{
\begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}]
\node[main node,fill=black!20] (1) {1};
\node[main node] (2) [right of=1, xshift=20mm] {2};
\node[main node,fill=black!20] (3) [below of=1] {3};
\node[main node] (4) [below of=2] {4};
\path[every node/.style={font=\sffamily\small}]
(2) edge [->] node [right] {} (3)
(1) edge [<->] node [right] {} (3)
(1) edge [<-] node [right] {} (2)
(4) edge [->] node [right] {} (1);
\end{tikzpicture}
}
\resizebox{4cm}{!} {
\begin{tabular}{c|c|c|c|c} Player & 1& 2 & 3 &4\\ \hline $w_i$& 10 & 9 & 8 & 4 \\ \hline \rule{0pt}{12pt} $x_i^{\ast\ast}$& $3.\overline{6}$ & $.8\overline{3}$ & $1.\overline{6}$ & $0$ \\ \hline \rule{0pt}{12pt} $y_i^{\ast\ast}$& $5.\overline{3}$ & $6.1\overline{6}$ & $5.\overline{3}$ & $3$ \end{tabular}
}
}
\end{minipage}
\caption{Comparing welfare between two equilibria for $k=1$, $U_i(\overline{x}_i,y_i) =\sqrt{\overline{x}_i y_i}$ and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{multiple_welfare}
\end{center}
\end{figure}

\noindent This example highlights a general result: if players have similar wealth (so that they can afford the same links) and the linking cost is sufficiently high (so that adding players to the core is too costly), an equilibrium with few and richer players in the core yields a higher welfare. Denote this equilibrium by $(x^h,y^h,g^h)$.\footnote{This equilibrium is the one we constructed in the proof of Proposition \ref{threshold} to show existence.} Then:

\begin{corollary}\label{richer}
Suppose that there exists a sociable equilibrium $(x^\prime,y^\prime,g^\prime)$ different from $(x^h,y^h,g^h)$. If $\gamma^{\prime}\in(0,1)$ and Assumption \ref{homogeneous} holds, there exist $\omega>0$ and $K$ such that welfare is higher in $g^h$ than in $g^\prime$ if $\max_{i\in N}{w_i}-\min_{i\in N}{w_i}<\omega$ and $k>K$.
\end{corollary}

To conclude, we have just shown how, with homogeneous preferences, our model yields precise predictions regarding the type of networks that emerge in equilibrium. We now derive further implications of these findings on welfare and inequality.

\section{Income (Re)Distribution}

In this section, we first describe how the impact of free riding in networks depends on the initial income distribution.
After characterizing the efficient solution, we derive policy implications on how a planner can redistribute income to increase welfare when agents can change their linking strategies as a result of the intervention. Finally, we show how the efficient solution can be implemented by using personalized prices. \subsection{Income Distribution and Inequality} When the equilibrium of the local public good game is a complete core-periphery network with inactive players in the periphery, spillovers flow as when the public good is global (i.e., among contributors and from contributors to free riders). However, there can be multiple equilibria, whose welfare properties can differ considerably across players depending on their network position. All players in the core, for example, consume the same bundle, while a richer player in the periphery can consume more than them and thus also obtain a larger utility (Proposition \ref{threshold}.\textit{(i)} and \textit{(ii)}). For players in the periphery, the key is how many links they can afford. Each link frees additional resources the player can then spend on consuming more of both goods. Therefore, periphery players with more links benefit more from the network than those with fewer links: as richer players have more links, inequality increases. By focusing on large societies, we can abstract in the next proposition from the welfare of core players, who represent an infinitesimal proportion of the population. \begin{proposition}\label{inequality} If Assumptions \ref{normality} and \ref{homogeneous} hold, for any sociable Nash equilibrium, there exists $\underline{w}$ such that\\ (i) if $\max_{i\in N}{w_i}-\min_{i\in N}{w_i}\leq \underline{w}$, the network reduces inequality in utility among any two players for a proportion of players that converges to 1 as $n\rightarrow\infty$;\\ (ii) otherwise, the network increases inequality in net social income among any two players for a proportion of players that converges to 1 as $n\rightarrow\infty$.\\ Furthermore, $\underline{w}$ is inversely related to $k$. \end{proposition} This result shows how the initial wealth distribution shapes inequality in payoffs if free riding in networks is costly. When societies are sufficiently homogeneous, networks are conducive to equality, since all periphery players can afford the same links, thereby enjoying the same spillovers. As utility is concave, richer players gain less marginal utility than poorer ones from these spillovers, and the difference in utility between any two periphery players is lower than in the absence of the network. If, to the contrary, the society is sufficiently unequal to begin with, then some players can afford more links, thereby free riding more. As a result, networks will only exacerbate the initial level of wealth inequality in the sense that the difference in net social income between periphery players is larger than the initial difference in wealth. Moreover, as in large societies the proportion of periphery players converges to 1, Proposition \ref{inequality} follows. This result hinges on the presence of a budget constraint and heterogeneity in income. When the budget constraint is not relevant, all players have the same free riding opportunities. In this case, as shown in the example in Figure \ref{comparison_aej}, the network always increases inequality when players have different valuations of the public good (\citealp{KM}, Proposition 2). Finally, as $k$ decreases, the network reduces inequality even for a larger initial difference in wealth levels.
The main reason for this is that net social income is increasing as $k$ decreases, and so poorer players can also take advantage of more free-riding opportunities in the network. This result highlights how lowering communication and transportation costs can increase welfare by allowing more people to take advantage of free-riding opportunities. \subsection{Efficiency} We now characterize the efficient solution. \begin{proposition}\label{efficiency} If Assumption \ref{normality} holds, the efficient network is either empty or a star in which only the hub is active. A player is isolated if her valuation of the public good is too low. \end{proposition} The social planner faces two trade-offs: \textit{(i)} between consumption and links, since each additional link increases spillovers, but linking is costly; and \textit{(ii)} between private and public good consumption of the different players. It is then quite intuitive that a star with only the hub contributing to the public good is the optimal solution as it minimizes linking costs while maximizing spillovers. While there is only one player in the core of the optimal solution, the model can accommodate less stark results. If, for example, there were capacity constraints in links, the optimal solution would feature more than one player in the core. The reasoning on how to derive the optimal solution is the same. Whether and which players are isolated from the star depends on preferences and total income. Indeed, excluding a player from the star is more beneficial the less she values the public good. Moreover, the number of links that the planner can afford depends on the total income available in the economy. If all players have identical preferences and face identical prices, their marginal rate of substitution between public and private good consumption has to be equal. This implies that their ratios of marginal utilities per income are equal as well. \begin{corollary}\label{eff_homo} If Assumptions \ref{normality} and \ref{homogeneous} hold, in the efficient solution, all players' consumption of the private and the public good is identical. \end{corollary} \subsection{Income Redistribution} As decentralized equilibria are not efficient, we now ask which policies the social planner can introduce to increase welfare. We know from classical results that small transfers of income among contributors are neutral \citep{BBV}. However, this is true only under very specific assumptions when we consider local public goods \citep{nizar}. In our context, with endogenous networks, a transfer scheme is neutral only when the set of contributors does not change and, in addition, the transfer is confined to the largest contributors in the network and their provision does not change much, so that links are not affected. We next show how a planner can redistribute income to increase welfare. Income redistribution is a budget-balanced transfer scheme $t = (t_1, ..., t_n)$ such that $\sum_{i \in N} t_i = 0$. Players then choose their optimal consumption and links according to the utility maximization problem \eqref{max} given $w+t$. \begin{proposition}\label{improving_welfare} Suppose a sociable equilibrium $(x^\ast,y^\ast,g^\ast)$ exists with non-empty $g^\ast$ and $\gamma^{\prime}_i\in(0,1]$ for all $i\in N$. Then, there is an income redistribution $t$ that yields an equilibrium with a star network $g^{star}$ in which both public good consumption and welfare are higher than in $(x^\ast,y^\ast,g^\ast)$.
\end{proposition} In the proposition, we show that redistributing income towards the player with the most links eventually increases both public good consumption and welfare. This holds both for small transfers, which do not change the equilibrium network, and for larger ones, which eventually induce a star as the equilibrium network. Notably, Proposition \ref{improving_welfare} allows for heterogeneous preferences, as long as the public good is strictly normal, which is a slightly stronger assumption than Assumption \ref{normality}. This result builds on our characterization of equilibrium networks. Indeed, since sociable equilibrium networks are core-periphery graphs, spillovers among core players flow as in a complete network. Hence, a neutrality result similar to \cite{BBV}'s applies to this part of the network. In other words, consider the following thought experiment. First, fix the network; given this graph, we can transfer some resources from other core players to the player with the most links---who is also the largest contributor---in a way that each player's net social income is unchanged. Hence, their demand for the public good is unchanged. While the total provision of the public good is then constant, the unique contributor in the core is the player who received the transfer. As a result, periphery players' access to the public good increases, as they might not have been linked to all core players. However, it is possible to further increase welfare and income by transferring resources to the player with the most links until all the other core players are inactive and it is no longer profitable to link to them. A social planner can then collect the resources saved from deleting these links and use them to increase welfare. In particular, as $\gamma^{\prime}_i>0$ for all $i\in N$, part of these resources will be assigned to the largest contributor to further increase everyone's public good consumption. Yet, the transfers we propose do not necessarily maximize welfare, for two reasons. First, when the planner transfers resources to the largest contributor, her net social income increases, but that of the player whose resources are being taken away decreases. So the optimal transfers need to trade off the increase in the hub's public good provision against the decrease in periphery players' private consumption. Second, the player who becomes the hub of the star when applying the transfers described in Proposition \ref{improving_welfare} might not be the ideal recipient. To fix ideas, consider the example in Figure \ref{ex_transfers}. Panel (a) represents the initial allocation, while panel (b) the optimal allocation given that player 1 is the hub of the star. Panel (c) instead represents the welfare-maximizing allocation, where resources are transferred to player 4, who has the highest valuation for the public good. Indeed, player 4 is initially in the periphery because of a limited initial budget.\footnote{A similar situation can arise when players pay different prices for the private good.} This subtle difference is very important, as it implies that repeatedly applying the redistribution policies characterized by \cite{nizar} to the initial equilibrium would not lead to the welfare-maximizing outcome. \begin{figure}[htbp!]
\begin{center} \begin{minipage}{.33\linewidth} \centering \resizebox{3cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node,fill=black!20] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [<->] node [right] {} (1) (3) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (2) (4) edge [->] node [right] {} (1); \end{tikzpicture} } \end{minipage} \begin{minipage}{.33\linewidth} \centering \resizebox{3cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (1) (4) edge [->] node [right] {} (1); \end{tikzpicture} } \end{minipage} \begin{minipage}{.32\linewidth} \centering \resizebox{3cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node] (1) {1}; \node[main node] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node,fill=black!20] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (4) (3) edge [->] node [right] {} (4) (1) edge [->] node [right] {} (4); \end{tikzpicture} } \end{minipage} \vspace{.3cm} \begin{minipage}{.33\linewidth} \centering \subfigure[$ W(x^\ast,y^\ast,g^\ast)= 28.384$]{\label{1transf} \resizebox{3.5cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1 & 2 & 3 & 4 \\ \hline \hline $a_i$ & \multicolumn{3}{c|}{.5} & .8 \\ \hline $w_i$ & 15 & 15 & 10 & 4\\ \hline \rule{0pt}{12pt} $x_i^{\ast}$& $4.\overline{3}$ & $4.\overline{3}$ & $0$ & $.73$ \\ \hline \rule{0pt}{12pt} $y_i^{\ast}$& $8.\overline{6}$ & $8.\overline{6}$ & $6$ & $1.2\overline{6}$ \end{tabular} } } \end{minipage} \begin{minipage}{.33\linewidth} \centering \subfigure[$ W(x^\prime,y^\prime,g^\prime)= 38.5063$]{\label{2transf} \resizebox{3.5cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1 & 2 & 3 & 4 \\ \hline \hline $a_i$ & \multicolumn{3}{c|}{.5} & .8 \\ \hline $w_i$ & $29.02$ & \multicolumn{2}{c|}{$5.54$} & 3.91 \\ \hline $x_i^{\prime}$& $14.51$ & \multicolumn{3}{c}{0}\\ \hline \rule{0pt}{12pt} $y_i^{\prime}$& $14.51$ & \multicolumn{2}{c|}{$3.54$} & $1.91$ \end{tabular} } } \end{minipage} \begin{minipage}{.32\linewidth} \centering \subfigure[$ W(x^{\prime\prime},y^{\prime\prime},g^{\prime\prime})= 43.128$]{\label{3transf} \resizebox{3.5cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1 & 2 & 3 & 4 \\ \hline \hline $a_i$ & \multicolumn{3}{c|}{$.5$} & $.8$ \\ \hline $w_i$& \multicolumn{3}{c|}{$6.03$} & $25.92$ \\ \hline $x_i^{\prime\prime}$& \multicolumn{3}{c|}{0} & $20.74$ \\ \hline \rule{0pt}{12pt} $y_i^{\prime\prime}$& \multicolumn{3}{c|}{$4.03$} & $5.18$ \end{tabular} } } \end{minipage} \caption{Welfare-improving vs. welfare-maximizing transfers for $k=2$, $U_i(\overline{x}_i,y_i) ={\overline{x}}_i^{a_i} y_i^{1-a_i}$ and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{ex_transfers} \end{center} \end{figure} The following proposition addresses these concerns.
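Before stating it, note that the allocations in Figure \ref{ex_transfers} can be checked directly, since Cobb-Douglas preferences yield linear Engel curves, $\gamma_i(\overline{w}_i)=a_i\overline{w}_i$. In panel (c), for instance, the hub provides
\[
x_4^{\prime\prime}=0.8\times 25.92\approx 20.74, \qquad y_4^{\prime\prime}=0.2\times 25.92\approx 5.18,
\]
while each other player, after sponsoring one link to the hub, has net social income $6.03-2+20.74=24.77$ and a demand of $0.5\times 24.77\approx 12.39$, below the spillovers $20.74$ she receives; hence players $1$, $2$ and $3$ are inactive and consume $y_i^{\prime\prime}=6.03-2=4.03$, as in the table.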
We define welfare-maximizing transfers as those yielding the Nash equilibrium with the maximal welfare among all Nash equilibria arising from all budget-balanced transfers. We now characterize the welfare-maximizing transfer scheme, taking into account network endogeneity.\footnote{As shown in Proposition \ref{efficiency}, a star is the efficient solution. However, the welfare-maximizing transfer scheme is associated with a lower public good provision than in the efficient solution whenever $\gamma^{\prime}_i<1$, where $i$ is the hub of the star network.} \begin{proposition}\label{2ndBest} Suppose there exists a non-empty sociable equilibrium network $g^*$. Then, there is $\Gamma>0$ such that welfare-maximizing transfers exist if $\gamma^{\prime}_i\in (0,\Gamma]$ for all $i\in N$. \end{proposition} This result is obtained with few assumptions on players' preferences except for a condition on how steep each player's Engel curve is at the second best. Intuitively, this condition requires that the demand of periphery players cannot be too high when they are poor and needs to be relatively low if they are rich, as players who are in the periphery in the second best would provide less public good if they were the hub. Hence, each player's Engel curve cannot be too concave. To see that this requirement is not too strict, we show in the next corollary that the welfare-maximizing transfer scheme can always be Nash implemented if players have identical and linear Engel curves. \begin{corollary}\label{linear} Suppose players have homogeneous preferences with linear Engel curves such that $\gamma^{\prime}_i\in(0,1]$ for all $i\in N$, and a sociable equilibrium $(x^\ast,y^\ast,g^\ast)$ exists with non-empty $g^\ast$. Then, the second best can always be implemented as a Nash equilibrium. \end{corollary} Finally, by Corollary \ref{ex-bbv}, the welfare-maximizing transfer scheme induces a unique equilibrium if the linking cost is sufficiently low. \subsection{Personalized Prices} We have shown so far that income redistribution can go a long way in increasing welfare. However, the resulting allocation is not efficient. Indeed, the planner cannot dictate how players use the resources allocated to them, which results in a much higher consumption of the private good than the efficient one, as derived in Proposition \ref{efficiency}. We now go one step further and show that a personalized lump-sum tax scheme can instead achieve the efficient outcome. Indeed, the optimal policy here subsidizes the price of the public good for the player selected as hub in Proposition \ref{efficiency}. The personalized price is set equal to this player's marginal rate of substitution in the efficient solution. This induces her to choose the socially optimal public good provision and all other players to free ride on her provision. The lump-sum taxation scheme is designed to be budget balanced. \begin{proposition}\label{personalized prices} Suppose $\gamma^{\prime}_i\in(0,1)$ for all $i\in N$ and a sociable equilibrium $(x^\ast,y^\ast,g^\ast)$ with non-empty $g^\ast$ exists. Then, the efficient solution can be implemented by fixing a personalized price of the public good $p_x \in (0,1)$ for the player selected as hub in Proposition \ref{efficiency}, financed by lump-sum taxes from all players. \end{proposition} Since only one player provides the public good, this situation is analogous to a direct government provision of a public good that crowds out private contributions. However, two additional features are relevant.
First, crowding-out is incomplete, so that a public intervention can increase welfare by increasing public good provision. Most importantly, an additional increase in consumption can be achieved by inducing a star, thereby re-directing resources from linking to consumption. Proposition \ref{personalized prices} is relevant whenever the social planner is able to affect the price of the public good. In the context of online social platforms, this could take the form of subsidizing the content of the ``super-star'' users who contribute much of the platform's content. Reward schemes in line with this idea are indeed implemented, for example, by YouTube, whose main contributors receive awards and are sponsored to participate in various promotional activities. \section{Robustness and Extensions}\label{extensions} Our characterization is robust to the introduction of selfish motives in the private provision of public goods due to warm glow giving, e.g., because of the private benefits of becoming an influencer on Twitter. Our characterization is robust also to more general linking technologies, to an imperfect degree of substitutability between neighbors' public good provision, and to players benefiting from global contributions to the public good, for example, because they can see the posts of others they are not linked to using hashtags on Twitter. Additionally, the characterization of strict Nash equilibria is robust to the introduction of some complementarity in players' efforts, to heterogeneity in the costs of forming a connection and to indirect spillovers, e.g., re-tweeting posts. To capture situations where a link between two players implies that both access each other's public good provision, we also extend the model to \textit{two-way flow of spillovers}. Some examples are the acquisition and exchange of information about new products and technologies. In \cite{gg} and \cite{KM}, the demand for the public good is exogenous. In those models, when players are homogeneous, strict Nash networks are complete core-periphery networks, while, when players are heterogeneous, the prediction depends on how the gains from a connection change across players. Furthermore, the law of the few holds. We extend their models to a more general class of preferences and heterogeneous budgets. While most of our results carry over to the two-sided model, the key difference from the model presented above is that core players benefit ``for free'' from the public good provided by those linking to them. In this case, strict Nash equilibria need not be complete core-periphery networks even when players are homogeneous. Additionally, players in the core have the same consumption only if they have the same neighbors, and a richer player in the periphery is worse off than all core players. When spillovers flow two-way, a stronger version of the law of the few of Proposition \ref{lotf} holds: the proportion of players significantly contributing to the public good (even in the periphery) goes to zero as the population size grows to infinity. Importantly, we highlight the necessary condition for this result to hold, i.e., that the slope of the contributors' Engel curve for the public good is strictly below one. In that case, contributors' provision strictly decreases with spillovers from neighbors. As a result, the amount of spillovers received by contributors cannot be infinitely large. All of these results are formally derived in the Online Appendix.
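To see the logic of this condition in the notation of the baseline model: for an interior contributor $i$, provision is $x_i=\gamma_i(\overline{w}_i(g))-\overline{x}_{-i}$, with $\overline{w}_i(g)=w_i-\eta_i(g)k+\overline{x}_{-i}$, so that, holding links fixed,
\[
\frac{\partial x_i}{\partial \overline{x}_{-i}}=\gamma_i^{\prime}\big(\overline{w}_i(g)\big)-1,
\]
which is strictly negative exactly when $\gamma_i^{\prime}<1$: each unit of spillovers crowds out own provision at rate $1-\gamma_i^{\prime}$, so the spillovers a contributor can absorb while remaining active are bounded.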
\section{Conclusions}\label{conclusions} In this paper, we introduce a model in which players decide how to allocate their budget between links, a local public good and a private good. A player links to another in order to free ride on her public good provision. Under standard assumptions, Nash equilibrium networks are core-periphery graphs, a prediction that is robust to several extensions. Importantly, since the demand for the public good is endogenous, poorer players can attain more central positions in the network or consume more public good. Hence, the relationship between wealth, provision, consumption and, therefore, utility is not as straightforward as when the public good is global. We have shown that the type of equilibrium that emerges has an impact on welfare and that income redistribution can alleviate inefficiencies. Knowledge of players' preferences is key in the design of optimal transfers. Yet, as stating a high taste for the public good would translate into receiving more resources, future research should investigate how to incentivize people to truthfully state their own taste for the public good and that of their neighbors. Indeed, players observe their neighbors' preferences better than the social planner does. Recent literature has shown that there is scope for eliciting information from agents in a fixed network \citep{bloch2021friend,baumann2018self}. It would be interesting, however, to develop similar insights when the mechanism needs to take link formation into account as well. \section{Introduction} Local public goods are important phenomena in our society. For example, individuals often acquire information on alternatives whose advantages they do not know, either personally or through their peers. Since people benefit from their neighbors' investment, personal acquisition of information is a local public good. Information can be exchanged via in-person communication \citep{udry,feick}, in online forums, and on social platforms. For instance, on Twitter, people write posts and see the posts of their connections on their wall. Another example of local public goods is constituencies offering some goods or services that can be enjoyed also by the citizens of nearby constituencies, such as public parks, a pedestrian city center, cultural activities, and the like. Importantly, the resulting pattern of spillovers can be represented as a directed network in which establishing a link to a player allows one to free ride on her contribution. In the aforementioned examples, not only is contributing costly, but so is free riding, as individuals need to incur the opportunity cost of travelling to nearby constituencies or searching whom to follow and reading their posts on Twitter. These opportunity costs are different across individuals; for example, they are higher for those who receive higher wages, as their time is more valuable. Additionally, while the most important driver of contributors is a specific need for the public good, agents might have different resources and motivations to provide it. For example, some may post on Twitter because they have a lot of spare time, and others to become influencers. Some constituencies might invest a lot in culture because they are rich, others because of a pronounced interest of their inhabitants. Heterogeneities in wages yield opposing distributional effects of free riding in networks: on the one hand, everyone can free ride, and this should reduce inequality.
On the other hand, the richer might benefit more from free riding if this is costly. As a result, new technologies that changed the opportunity cost of free riding might have affected who benefits the most from free riding. This raises the question of how to design policies to increase welfare and redistribute resources. Indeed, the planner might want to subsidize the public good provision of certain individuals via monetary or non-monetary subsidies. For example, YouTube redistributes some of the revenue generated by user activity by rewarding its contributors with many initiatives, ranging from awards to fan fests. To study these considerations, we propose a local public good game in which players allocate their time or budget to a public good, a private good, and to free ride on others' provision. Introducing a budget constraint captures that individuals have different amounts of resources and/or time to dedicate to public good provision. We also allow for heterogeneous preferences and relative prices of the private good with respect to the public good and linking. Thereby, we capture that individuals might have different motivations, derive a different utility and pay a different price for an identical bundle. The paper makes two main contributions. First, we derive sufficient conditions for equilibrium existence and characterize equilibrium networks in a general framework that embeds well-known models of private provision of (local) public goods. Second, we use these results to analyze the impact of the income distribution on inequality and derive policy implications. Regarding the first contribution, we show that, if both the public and the private good are normal and neighbors' contributions sufficiently crowd out own contribution, a Nash equilibrium in which players establish weakly profitable connections exists; furthermore, these networks are core-periphery graphs in which the largest contributors are linked among themselves. Indeed, any two contributors are connected if they produce more than the linking cost, as this frees resources for additional public or private good consumption. Moreover, everyone prefers to link to the largest contributors, as this generates the largest spillovers. Therefore, any strict Nash network is a nested split graph, in which periphery players have nested neighborhoods. This equilibrium characterization is consistent with empirical evidence.\footnote{For example, both on Twitter and in an online forum where users ask and respond to queries about Java, there is a core of players who actively collaborate with each other, while most users only free ride \citep{bastos,adamic}.} Our characterization is also robust to several extensions (Section \ref{extensions}). The framework we propose allows us to revisit classical results in the literature on global public goods \citep{BBV,warr}. In our model, this corresponds to linking costs being zero. Then, when both the public and the private good are normal, a unique Nash equilibrium exists, in which total provision is neutral to income redistribution among contributors. If, additionally, players have identical preferences, the richest contribute and their contributions reduce inequality. To the contrary, when the linking costs are larger, contributors are not necessarily the richest players even when preferences are homogeneous; moreover, poorer players can be better off than richer ones.
To resolve the indeterminacy arising from multiple equilibria, we study inequality in large societies, for which we derive the so-called law of the few \citep{gg}, i.e., the proportion of players in the core converges to zero. In this case, we can analyze welfare focusing on free riders. In particular, the impact of free riding in networks on inequality depends on the initial wealth distribution. When players have similar wealth levels, they can afford the same links and enjoy the same spillovers, thereby reducing inequality. If, to the contrary, a society is very unequal, some can afford more links and hence free ride more than others. As a result, in large societies, networks increase inequality when the economy is relatively unequal to begin with. The mechanism we highlight implies that new technologies that make it cheaper to free ride on others will reduce inequality. Our model has important implications for policy interventions. In contrast to models with an exogenous network \citep{nizar}, here income redistribution affects linking incentives, giving the planner an additional constraint, but also an additional instrument, to improve welfare. Indeed, in an exogenous core-periphery graph, it is optimal to transfer income to core players. However, when players can change their links as a result of the intervention, it is better to transfer income to players who are not central only because of limited resources, if they value the public good a lot, as this translates into a larger provision. Beyond improving welfare, the efficient solution can be implemented if the social planner is able to charge personalized prices for the public good. This could be relevant for online social platforms, where an individual's public good provision can be subsidized via sponsorship, while other users can be taxed via digital services taxes that collect some of the revenue generated by user activity. For example, social platforms such as YouTube have a host of programs to reward their largest contributors in order to incentivize the creation of new content, from awards to fan fests. The novel policy implications of our model are due to the following main innovations: players face a budget constraint, and both the network as well as the demand for the public good are endogenous. The latter assumption contrasts with existing models of games on endogenous networks that assume a fixed demand for the public good \citep{gg}. This paper is then the first to study free riding in endogenous networks under general preferences and heterogeneous budgets. When the budget is not relevant for the demand for the public good, players who value it more provide more and are in more central positions in the network \citep{KM}. However, when the budget set matters, novel insights emerge on how poorer players can have more central positions or benefit more from free riding, thereby affecting both inequality and the policy implications.\footnote{\cite{leeuwenJEEA} introduce competition for status in such a framework. Status rents foster public good provision in the repeated game. Our model accommodates that spillovers increase consumption also in a static game by allowing for richer preferences. Our characterization is also reminiscent of that of \cite{ramos}, but the two models are very different. Indeed, in their paper homogeneous players establish costly links to see other players' signals, and then play a beauty contest game.
In \cite{dotan}, exogenously determined high types are in the core and provide low types in the periphery with cheap, low-quality indirect connections.} We allow for best replies to be non-linear in neighbors' provision and income, which is empirically relevant (e.g., \citealp{quadratic}). Previous analysis with similarly general preferences has been limited to games on fixed networks; e.g., \cite{nizar} shows that the equilibrium is unique if the public good is sufficiently normal. For endogenous networks, very strong normality not only does not yield uniqueness, but may even conflict with equilibrium existence. Here uniqueness arises for very low linking costs.\footnote{Most of the literature assumes only one good and linear best replies, whether the network is fixed \citep{keyplayer,brkran,brkrandam} or endogenous \citep{baetz,hiller}. In particular, \cite{ktz} study a dynamic potential game of strategic complements that predicts nested split graphs as stochastically stable under myopic best-reply dynamics. However, the nature of the strategic interaction is very different, as we focus on substitutes. We also allow for heterogeneous players and games that do not admit a potential. More importantly, we study a static and simultaneous game with rational players where nestedness obtains only in strict Nash equilibria.} \cite{golub} characterize the Pareto efficient outcomes and Lindahl solution in a local public good game on an exogenously fixed network in terms of well-studied network statistics. In our paper, to the contrary, we are interested in understanding whether policies aimed at raising welfare would have the intended effects when players can change with whom to interact after the policy intervention.\footnote{\cite{belhaj2016} show that nested split graphs are also efficient networks in a network game with local complementarities. Our results on second-best implementation complement theirs, as we assume strategic substitutes.} In our benchmark model, spillovers flow one-way, i.e., towards the player establishing the link.\footnote{Some papers study network formation with one-way flow of spillovers abstracting from strategic interactions on the network (e.g., \citealp{balagoyal}, \citealp{galeotti2006one}, \citealp{billand2008existence}).} To account for situations in which the interaction is face-to-face, we extend our analysis to two-way flow of spillovers (see the Online Appendix). While most of our findings are robust, we also derive the sufficient condition for a stricter version of the law of the few. The remainder of the paper is organized as follows. Section \ref{model} introduces the model. Section \ref{mainsec} provides results on the existence and characterization of equilibria. Section \ref{sec:homogeneous} focuses on economies with agents with homogeneous preferences to compare our results to those of \cite{BBV}. Section \ref{redistribution} studies the impact of income (re)distribution and personalized prices on inequality. Section \ref{extensions} discusses some extensions. Section \ref{conclusions} concludes. All proofs are in the appendix. \section{Model}\label{model} We introduce a local public good game, in which players spend their budget on private good consumption, public good provision and connections. \smallskip \noindent \textbf{Players.} There is a set of players $N = \{1, ..., n\}$; $i$ denotes a typical player. \smallskip \noindent \textbf{Network.} We denote the directed network of social connections by $g$.
Player $i$'s linking strategy is denoted by a row vector $g_i =( g_{i1}, ..., g_{in} ) \in G_i = \{0, 1\}^{n}$, where $g_{ii}=0$ and $g_{ij} \in \{0,1\}$ for all $i,j \in N$, $i \neq j$. We say that player $i$ links to player $j$ if $g_{ij}=1$. Then, $g=(g_1,...,g_n)^{T}$. Linking decisions are one-sided: the player proposing a link pays $k>0$ and the link is established. Let $N_i(g) = \{j \in N: g_{ij} = 1\}$ be the set of players to which $i$ links and $\eta_i(g) = \left\vert N_i(g) \right\vert$ the number of links that $i$ sponsors. In a \textit{core-periphery graph}, there are two groups of players, the \textit{periphery} $\mathcal{P}(g)$ and the \textit{core} $\mathcal{C}(g)$, such that, \textit{(i)} for every $i,j \in \mathcal{P}(g)$, $g_{ij}=g_{ji}=0$, \textit{(ii)} for every $l,m \in \mathcal{C}(g)$, $g_{lm}=g_{ml}=1$, and \textit{(iii)} for any $i\in\mathcal{P}(g)$, there is $l\in\mathcal{C}(g)$ such that $g_{il}=1$. Hence, all links in the core are reciprocated. A \textit{complete core-periphery} network is such that $N_i(g)=\mathcal{C}(g)$ for all $i \in \mathcal{P}(g)$ and there are no isolated players. Nodes in $\mathcal{C}(g)$ are referred to as \textit{hubs}. We write $\mathcal{C}$ and $\mathcal{P}$ instead of $\mathcal{C}(g)$ and $\mathcal{P}(g)$, respectively, when no confusion arises. A core-periphery network with a single hub is referred to as a \textit{star}. A core-periphery network in which the sets of players' neighbors are nested is a \emph{nested split graph}. Formally, a nested split graph is a core-periphery graph where, if $\eta_j(g)\leq \eta_i(g)$, then $N_j(g) \subseteq N_i(g)$ for any $i,j\in \mathcal{P}(g)$.\footnote{We extend here in a natural way some graph theoretic notions usually defined on undirected networks to our model of directed network formation. The corresponding definitions for the two-way flow model are in the Online Appendix. In particular, the notion of core-periphery graph we use here is more specific than the ones defined on the closure of $g$ (see below), as it embeds that core players reciprocate links and periphery players link to the core.} Denote by $\overline{g}$ the closure of $g$, such that $\overline{g}_{ij} = \max \{g_{ij}, g_{ji}\}$, for each $i,j \in N$; that is, each directed link in $g$ is replaced by an undirected one. Let $N_i(\overline{g}) = \{j \in N: \overline{g}_{ij} = 1\}$ be the set of players to which $i$ is linked in $\overline{g}$, and let $\eta_i(\overline{g}) = \left\vert N_i(\overline{g})\right\vert$ be $i$'s degree, i.e., the number of $i$'s neighbors in $\overline{g}$. There is a path in $\overline{g}$ from $i$ to $j$ if either $\overline{g}_{ij}=1$, or there are $m$ different players $j_1,...,j_m$ distinct from $i$ and $j$, such that $\overline{g}_{ij_1}=\overline{g}_{j_1j_2}=...=\overline{g}_{j_m j}=1$. A \textit{component} of network $\overline{g}$ is a set of players such that there is a path connecting every two players in the set and no path to players outside the set. Define a cell $h$ of a nested split graph as the set of players $i \in N$ with $h = \eta_i(\overline{g})$ links. \smallskip \noindent \textbf{Consumption.} We denote by $x_i \in \mathcal{R}^+$ and $y_i \in \mathcal{R}^+$ the amount of public and private good acquired by player $i$, respectively, where $\mathcal{R}^+ \equiv [0, + \infty)$.
Given $g$, we denote by $\overline{x}_{-i}= \sum_{j \in N} g_{ij} x_j$ player $i$'s spillovers and by $\overline{x}_i = x_i + \overline{x}_{-i}$ player $i$'s public good consumption, given by the sum of her provision and the spillovers she receives from her neighbors in network $g$. Hence, spillovers flow one-way, i.e., towards the player sponsoring the link. In Online Appendix A, we study the model with two-way flow of spillovers. \smallskip \noindent \textbf{Strategies.} Player $i$'s set of strategies is $S_i = \mathcal{R}^+\times\mathcal{R}^+\times G_i$, and the set of all players' strategies is $S = \prod_{i \in N} S_i $. A strategy profile $s = (x,y, g) \in S$ specifies provision of the public good $x = (x_1,... ,x_n)$, consumption of the private good $y = (y_1,...,y_n)$, as well as links $g = (g_1, ... ,g_n)^T$ for each player. In order to emphasize player $i$'s role, $s$ is sometimes written as $(s_i, s_{-i})$, where $s_{-i}\in \prod_{j\neq i} S_j$. \smallskip \noindent \textbf{Payoffs.} Each player $i$ faces the following maximization problem: \begin{eqnarray}\label{max} \max_{(x_i,y_i,g_i)\in S_i} && U_i(\overline{x}_i,y_i) \\ \notag \text{s.t.} & & x_i+p_i y_i+\eta_i(g) k=w_i, \end{eqnarray} where $x_i$ and $y_i$ are the respective amounts of public and private good personally acquired by $i$, while $\overline{x}_i$ is $i$'s public good consumption, $p_i > 0$ is the price of the private good paid by player $i$, $k > 0$ is the cost of linking and $w_i >0$ her wealth. We assume $U_i(\cdot,\cdot)$ is a twice continuously differentiable, strictly concave and increasing function in its arguments for each $i \in N$. Under these assumptions, there is a unique and non-negative optimal investment in the public and private good for every $i$, which depends on own wealth, the links sponsored and the spillovers received from neighbors. Denote player $i$'s optimal consumption in isolation by $(x_i^I, y_i^I)$. By analyzing heterogeneous prices for the private good, we capture that the relative price of the private good with respect to that of the public good and the linking costs can differ across players. Additionally, some people might like the public good more than others. Hence, we allow players to have different preferences. The utility maximization problem can be rewritten with player $i$ choosing her local public good consumption $\overline{x}_i$, rather than her public good provision $x_i$: \begin{eqnarray}\label{max2} \max_{( \overline{x}_i,y_i,g_i)\in S_i} && U_i(\overline{x}_i,y_i) \\ \notag \text{s.t.} && \overline{x}_i+p_i y_i= w_i-\eta_i(g) k+ \overline{x}_{-i}, \text{ and } \overline{x}_i \geq \overline{x}_{-i}. \end{eqnarray} Ignoring the inequality constraint, we can express player $i$'s demand function for the public good as $\overline{x}_i=\gamma_i(w_i-\eta_i(g) k+ \overline{x}_{-i})$, where $\gamma_i:\mathcal{R}\rightarrow\mathcal{R}$ is a continuous function. We call $\overline{w}_i(g)=w_i-\eta_i(g) k+ \overline{x}_{-i}$ player $i$'s \textit{net social income} and $\gamma_i$ player $i$'s Engel curve. Hence, $x_i=\overline{x}_i-\overline{x}_{-i}=\max\{\gamma_i(\overline{w}_i(g))-\overline{x}_{-i},0\}$. Since the network is endogenous, player $i$'s demand for the public good is net of the budget she spends on links. We assume that both the public and the private good are normal for all players.\footnote{Note that, when $i$'s links change, $i$'s net social income typically changes discontinuously.
However, $\gamma$ \textit{per se} is a continuously differentiable function, and jumps in net social income do not pose a problem for our analysis.} \begin{assumption}\label{normality} For each $i\in N$, $\gamma_i(w)$ is continuously differentiable with respect to $w$ with derivative $\gamma_i^{\prime}\in[0,1]$. \end{assumption} Let us stress that this assumption is mild and commonly made in public good games. In particular, our model embeds the global public good game of \cite{BBV} as $k\rightarrow 0$. Then, contributors form a core, periphery players link to all contributors and there are no spillovers among inactive players.\footnote{When $k=0$, a complete network with links among inactive players is also an equilibrium. Note that this is also a core-periphery graph, and still there are no spillovers among inactive players. Also note that we allow for $\gamma_i^{\prime}\in\{0,1\}$. As in \cite{BBV}, these extremes do not pose a problem for existence, but they do for uniqueness of the equilibrium (Corollary \ref{ex-bbv}).} \smallskip \noindent \textbf{Equilibrium.} A strategy profile $s^\ast = (x^\ast, y^\ast, g^\ast)$ is a \emph{Nash equilibrium} if for all $i \in N$, $s_i^{\ast}$ is a solution to the maximization problem \eqref{max} given $s_{-i}^\ast$. To characterize the equilibrium networks, we will use two refinements. We say a Nash equilibrium is \emph{sociable} if any player $i$ who is indifferent between establishing a link or not establishes this link. More formally, a Nash equilibrium $(x^\ast,y^\ast,g^\ast)$ is sociable if, when there is $(x_i^\prime,y_i^\prime,g_i^\prime)$ for some $i\in N$ such that $U_i(\bar{x}_i^\ast,y_i^\ast)=U_i(\bar{x}_i^\prime,y^\prime_i|\bar{x}^\ast_{-i},y^\ast_{-i})$, then, for any $j\in N\setminus \{i\}$, $g^{\ast}_{ij}\geq g^{\prime}_{ij}$, with strict inequality for some $j$. This refinement allows us to discard equilibrium networks that are not robust to small changes in players' wealth and preferences. An equilibrium is \emph{strict} if no player can unilaterally change her strategy without reducing her payoff. Hence, any strict equilibrium is sociable. \smallskip \noindent \textbf{Social Welfare.} We define social welfare as the sum of individual payoffs. \smallskip \noindent \textbf{Discussion and interpretation.} The key innovations of our model are the presence of a budget set and the fact that players pay to free ride. The budget constraint stems from players having limited wealth or time to devote to these activities; links are one-sided and the flow of spillovers is one-way. These assumptions capture situations in which one needs to exert some effort to enjoy the public good provided by others, such as reading a post on Twitter. In this example, players decide how to allocate their time between working and leisure. During leisure, they either collect information directly and post it on Twitter, or read the posts of the users they follow. Heterogeneous wages result in heterogeneous relative prices for the private good. This generates novel trade-offs between using resources for free riding by linking, public good provision and private good consumption. Most importantly, a higher net social income translates into a higher demand for the public good whenever $\gamma_i^{\prime}>0$. In the next sections, we characterize the sociable Nash equilibria of this game, and discuss who the largest contributors are.
Then, we will discuss how the degree of heterogeneity in the initial endowment or in wages (if the constraint is a time constraint) shapes the distributional effect of free riding in networks. We will then analyze the policy implications for income redistribution and the design of personalized prices. \section{Main Analysis}\label{mainsec} Let us first discuss two lemmas that highlight the novel aspects of our model. First, differently from \cite{gg} and \cite{KM}, spillovers do not completely crowd out own contributions if $\gamma_i^{\prime}>0$. As we show later, this has an impact on the existence and characterization of equilibrium, and in particular, on who the largest contributors are. \begin{lemma}\label{contribution} If Assumption \ref{normality} holds, contributions are decreasing in spillovers. \end{lemma} This result follows immediately from the fact that $x_{i}=\max\{\gamma_i(\overline{w}_i(g))-\overline{x}_{-i},0\}$ and $\gamma_i^{\prime}\in[0,1]$ (Assumption \ref{normality}), so that demand for the public good is endogenous and increasing in net social income. Second, despite heterogeneity in preferences, the introduction of a budget constraint implies that the largest contributors are linked. \begin{lemma}\label{almost_core} Given a Nash equilibrium, suppose there are players $i,j\in N$ with $x^{\ast}_i, x^{\ast}_j>k$. Then, if Assumption \ref{normality} holds, $g_{ij}^{\ast} = 1$. \end{lemma} When the budget set is irrelevant \citep{KM}, the gains from a connection determine who links with whom; here instead, all players who produce more than the linking cost are linked to each other even if they can have very different gains from being connected. Indeed, if two such players were not neighbors, they could increase their net social income by free riding on each other's provision. This increases their demand for the public good if it is normal. This novel insight drives the following proposition. \begin{proposition}\label{characterization} If $k>\overline{k}=\max_{i\in N}\{x_i^I\}$, the unique equilibrium network is empty. If $k\leq \overline{k}$ and Assumption \ref{normality} holds, any sociable Nash equilibrium network is a core-periphery graph, while any strict Nash equilibrium network is a nested split graph. \end{proposition} This sharp characterization emerges from two equilibrium requirements. First, by Lemma \ref{almost_core}, players who produce strictly more than the linking cost $k$ form a core of interconnected players. The refinement of sociable Nash equilibrium ensures that the largest contributors form a core. Indeed, if there were several players providing an amount identical to $k$, then free riders would be indifferent between linking and providing an amount $k$ of the public good themselves. Networks without a core could be sustained in equilibrium due to these kinds of indifference. Second, conditional on linking, a player links to the largest contributors. Together with the existence of the core, this generates core-periphery structures. When players are indifferent between linking to players providing the same amount of public good, but do not want to link to all of them, they could link to different ones.
The refinement of strict Nash equilibrium ensures that nested split graphs emerge in such situations, as everybody links to larger contributors first.\footnote{Note that core players in a strict Nash equilibrium can have identical provisions, as long as no periphery player links to only one of them.} The limited role of preferences in Proposition \ref{characterization} is due to the main novelty of our model: the introduction of the budget constraint. Indeed, when it does not constrain the demand for the public good, players who value the public good more have more links, and produce more if they are in the core (Theorem 2, \citealp{KM}). This happens because players with a high valuation not only have a larger demand, but also have larger gains from a connection, as in Figure \ref{1aej}. In the example, player $i$'s demand for the public good is $\min\{b_i^2/4,w_i\}$. Since in \ref{1aej} $w_i>b_i^2/4$ for all players, those who value the public good more provide more. In panel \ref{2aej}, instead, $w_1 < b_1^2/4$: player $1$, who values the public good the most, is in the periphery because she is too poor to contribute enough to be in the core of a sociable equilibrium network. A similar phenomenon can arise because of heterogeneity in the price of the private good.\footnote{For example, the network in Figure \ref{1aej} is an equilibrium also when $U_i(\overline{x}_i,y_i) =(\sqrt{\overline{x}_i}+\sqrt{y_i})^2$ for all $i\in N$, $k=2$, $w=(10,6,6)$, $p_1=1$ and $p_2=p_3=1.5$. In that case, $x^\ast=(3,0,0)$, even though $2$ and $3$ face a higher price for the private good, which is a substitute for the public good.} As we show later, taking into account these phenomena is particularly relevant in the design of policy interventions. \begin{figure}[htbp!] \begin{center} \begin{minipage}{.48\linewidth} \subfigure[$g^\ast$]{\label{1aej} \begin{minipage}{.5\linewidth} \resizebox{2.5cm}{!} { \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node] (2) [below left of=1] {2}; \node[main node] (3) [below right of=1] {3}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (1); \end{tikzpicture} } \end{minipage} \begin{minipage}{.5\linewidth} \begin{center} \resizebox{2.7cm}{!} { \begin{tabular}{c|c|c|c} Player & 1 & 2 & 3 \\ \hline \hline $b_i^2/4$& 4 & 3 & 3 \\ \hline $w_i$& 5 & 5 & 5 \\ \hline $x_i^{\ast}$& 4 & 0 & 0 \\ \hline $y_i^{\ast}$& 1 & 4 & 4 \end{tabular} } \end{center} \end{minipage} } \end{minipage} \begin{minipage}{.48\linewidth} \subfigure[$g^{\ast\ast}$]{\label{2aej} \begin{minipage}{.5\linewidth} \resizebox{2.5cm}{!} { \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}] \node[main node] (1) {1}; \node[main node,fill=black!20] (2) [below left of=1] {2}; \node[main node,fill=black!20] (3) [below right of=1] {3}; \path[every node/.style={font=\sffamily\small}] (2) edge [<-] node [right] {} (1) (2) edge [<->] node [right] {} (3) ; \end{tikzpicture} } \end{minipage} \begin{minipage}{.5\linewidth} \begin{center} \resizebox{3.4cm}{!} { \begin{tabular}{c|c|c|c} Player & 1 & 2 & 3 \\ \hline \hline $b_i^2/4$& 4 & 3 & 3 \\ \hline $w_i$& 1.5 & 5 & 5 \\ \hline $x_i^{\ast\ast}$ & .5 & 1.5 & 1.5 \\ \hline $y_i^{\ast\ast}$ & 0 & 2.5 & 2.5 \end{tabular} } \end{center} \end{minipage} } \end{minipage} \caption{Two examples with $U_i(\overline{x}_i,y_i)
=\sqrt{b_i\sqrt{\overline{x}_i}+y_i}$, $b_1=4$, $b_2=b_3=2\sqrt{3}$, $k=1$, and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{comparison_aej} \end{center} \end{figure} Since we do not have linear best replies, the few conditions we impose to derive Proposition \ref{characterization} do not ensure equilibrium existence for two reasons: on the one hand, free riding decreases contribution (Lemma \ref{contribution}); on the other, it increases the demand for the public good. To see why this matters, suppose there is a player $i$ in the periphery who produces enough public good for other players to profitably link to $i$.\footnote{Note that, if $i$ is best-responding, $i$ is already linked to every player in the core; otherwise, doing so would constitute a profitable deviation.} Clearly, this is not an equilibrium, so consider moving $i$ into the core. In that case, players who remain in the periphery free ride also on $i$, thereby increasing their demand for the public good. Hence, if their own provision is not reduced enough, they might now themselves produce more than $k$, which is not an equilibrium either. In sum, if players' Engel curves are very steep, it might be possible to construct cycles of deviations where players might produce too much when in the periphery, but not enough when in the core. Yet, it is sufficient for equilibrium existence to bound the slope of the Engel curve, $\gamma^{\prime}$. This bounds the increase in players' demand for the public good when they free ride on others. Moreover, it ensures that spillovers substantially crowd out contributions of free riders. \begin{proposition}\label{existence} If Assumption \ref{normality} holds, there is $\overline{\gamma}\leq 1$ such that a sociable Nash equilibrium exists if $\gamma_i^{\prime} \leq \overline{\gamma}$ for all $i\in N$ for whom $x_i^I \geq k$. \end{proposition} Proposition \ref{existence} complements existing results for exogenous networks, which show existence of a unique equilibrium when $\gamma_i^{\prime}$ is sufficiently large \citep{nizar}. The game we study instead has potentially a very large set of equilibria. However, a very large $\gamma^\prime$ does not guarantee uniqueness.\footnote{As usual, we define uniqueness up to permutations in the labels of otherwise identical players.} Indeed, Figure \ref{multiple_equilibria} presents an economy with three players with linear Engel curves of slope $.99$; while the assumption for uniqueness of \cite{nizar} is satisfied, all three networks are equilibria. Rather, in our framework the equilibrium is unique when the linking cost is sufficiently small, as then the ranking of players' demand for the public good is not affected by their linking strategies. In other words, if $k$ is sufficiently small, free riding opportunities are the same for all players, so that the set of large contributors is unique, as in \cite{BBV}.\footnote{In the example of Figure \ref{multiple_equilibria}, $g^{\ast\ast}$ is the unique equilibrium if $2.63<k<3.93$, and the complete network if $k\leq 2.63$.} \begin{figure}[htbp!]
\begin{center} \begin{minipage}{.34\linewidth} \centering \resizebox{2.5cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node] (2) [below right of=1] {2}; \node[main node] (3) [below left of=1] {3}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (1); \end{tikzpicture} } \end{minipage} \begin{minipage}{.32\linewidth} \centering \resizebox{2.5cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node,fill=black!20] (2) [below right of=1] {2}; \node[main node] (3) [below left of=1] {3}; \path[every node/.style={font=\sffamily\small}] (3) edge [->] node [right] {} (1) (2) edge [<->] node [right] {} (1) (3) edge [->] node [right] {} (2); \end{tikzpicture} } \end{minipage} \begin{minipage}{.32\linewidth} \centering \resizebox{2.5cm}{!} { \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2.4cm and 2.8cm, main node/.style={circle,draw}] \node[main node] (1) {1}; \node[main node,fill=black!20] (2) [below right of=1] {2}; \node[main node,fill=black!20] (3) [below left of=1] {3}; \path[every node/.style={font=\sffamily\small}] (2) edge [<-] node [right] {} (1) (2) edge [<->] node [right] {} (3) (3) edge [<-] node [right] {} (1); \end{tikzpicture} } \end{minipage} \vspace{.3cm} \begin{minipage}{.32\linewidth} \centering \subfigure[$g^\ast$]{\label{1eq} \resizebox{3cm}{!}{ \begin{tabular}{c|c|c|c} Player & 1& 2 & 3 \\ \hline $w_i$& 10& 8 & 8 \\ \hline $x_i^{\ast}$& 9.9 & 3.91 & 3.91 \\ \hline $y_i^{\ast}$& .1 & .14 & .14 \end{tabular} }} \end{minipage} \begin{minipage}{.32\linewidth} \centering \subfigure[$g^{\ast\ast}$]{\label{2eq} \resizebox{3cm}{!} { \begin{tabular}{c|c|c|c} Player & 1& 2 & 3 \\ \hline $w_i$& 10& 8 & 8 \\ \hline $x_i^{\ast\ast}$& 5.95 & 3.95 & 0 \\ \hline $y_i^{\ast\ast}$& .1 & .1 & .1 \end{tabular} }} \end{minipage} \begin{minipage}{.32\linewidth} \centering \subfigure[$g^{\ast\ast\ast}$]{\label{3eq} \resizebox{3cm}{!} { \begin{tabular}{c|c|c|c} Player & 1& 2 & 3 \\ \hline $w_i$& 10& 8 & 8 \\ \hline $x_i^{\ast\ast\ast}$& 2 & 3.97 & 3.97 \\ \hline $y_i^{\ast\ast\ast}$& .1 & .08 & .08 \end{tabular} }} \end{minipage} \caption{Multiple equilibria for $k=3.95$, $U_i(\overline{x}_i,y_i) = \overline{x}_i^a y_i^{1-a}$, $\gamma_i^{\prime}(\overline{w}_i)=a=.99$ and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{multiple_equilibria} \end{center} \end{figure} Furthermore, the unique equilibrium on a fixed network is not necessarily an equilibrium when the network is endogenous, because it might predict periphery players to be large contributors; this is impossible when the network is endogenous.\footnote{Consider the third example in Figure 3 in \citet[p.~541]{nizar}. The equilibrium is a star where the hub free rides on the contributions of the periphery. This is no equilibrium with endogenous links as contributors would free ride on each others' effort, thereby reducing their provision.} This result is stated in the following corollary.\footnote{We focus on sociable equilibria in the following corollary because there are many equilibria when $k=0$ as individuals are indifferent to delete or add links to inactive players. 
Note also that there is trivially a unique equilibrium when the linking cost is very high: a star with the largest contributor as hub, and eventually, the empty network as the linking cost increases further.} \begin{corollary}\label{ex-bbv} There exists $\tilde{k}$ such that if $0\leq k<\tilde{k}$ and $\gamma_i^{\prime} \in (0,1)$ for all $i\in N$, there always exists a unique sociable Nash equilibrium entailing a core-periphery network. \end{corollary} Intuitively, the game we study has potentially a very large set of equilibria because different positions in the network imply very different net social incomes, thereby affecting demand and provision of the public good. In particular, a player's budget available for provision decreases as more links are established. However, as the linking costs decrease, a player's demand is less sensitive to the number of links established. Hence, in the limit as $k\rightarrow 0$, the classical uniqueness result of \cite{BBV} is re-established. \subsection{Large Societies}\label{large_soc} The law of the few predicts that, as the number of players increases, the proportion of active players in non-empty strict equilibrium networks goes to zero \citep{gg}. In our model, this holds only for the proportion of players in the core. Let $\omega$ be the maximal wealth that players possess. \begin{proposition}\label{lotf} Suppose Assumption \ref{normality} holds and $w_i \leq \omega$ for all $i \in N$, for some scalar $\omega > 0$ that is independent of $n$. Then, in every sociable Nash equilibrium, $\lim_{n\rightarrow\infty}|\mathcal{C}(\overline{g}^{\ast})|/n=0$. \end{proposition} Intuitively, if there were many active players providing more than $k$ of the public good, they would all be connected. However, this is not possible as we impose the restriction that all players have a finite budget bounded by $\omega$. In other words, most of them need to free ride. Proposition \ref{lotf} allows us to focus on periphery players to understand the welfare effects of networks. Indeed, the welfare effects on core players are less clear, since different players are in the core in different equilibria. \section{Homogeneous Preferences and Prices}\label{sec:homogeneous} We next study an economy in which players have identical preferences and face an identical price for the private good. \begin{assumption}\label{homogeneous} Every player has the same preferences and pays the same price for the private good. \end{assumption} When preferences are homogeneous and the linking costs are sufficiently low, as in \cite{BBV}, there is a unique threshold of wealth such that players whose wealth is above this threshold are in the core and consume the same amount of the private as well as of the public good, while the others are in the periphery and free ride on the contributors. When linking costs are higher, but the budget constraint is not binding for public good provision, differences in wealth are instead irrelevant. In that case, players with the highest demand for the public good are in the core, while the others are in the periphery or isolated \citep{KM}. As the next proposition shows, results are more nuanced in our model, as poorer players can be in the core if richer players free ride a lot, thereby reducing their provision until it is not profitable to link to them. \begin{proposition}\label{threshold} If Assumptions \ref{normality} and \ref{homogeneous} hold, a sociable Nash equilibrium $(x^\ast,y^\ast,g^\ast)$ always exists.
Furthermore:\\ \textit{(i)} For any $i,j \in \mathcal{C}(g^\ast),$ $\overline{x}_i^* = \overline{x}_j^*$ and $y_i^* = y_j^*$, while $x_i^* > x_j^*$ if and only if $w_i > w_j.$\\ \textit{(ii)} If there is $j \in \mathcal{P}(g^*)$ such that $g_{ji}^{\ast}=1$ for all $i \in \mathcal{C}(g^*)$, then $\overline{x}_j^* \geq \overline{x}_i^*$ for all $i \in \mathcal{C}(g^*)$.\\ \textit{(iii)} For any $i,j\in \mathcal{P}(g^*)$ such that $N_i(g^*) = N_j(g^*)$, $\overline{x}_i^\ast \geq \overline{x}_j^\ast$, $x_i^\ast \geq x_j^\ast$ and $y_i^\ast \geq y_j^\ast$ if and only if $w_i\geq w_j$.\\ \textit{(iv)} Take $i, j \in \mathcal{P}(g^*)$. If $w_i\geq w_j$, then $N_i(g^*) \geq N_j(g^*)$.\\ \textit{(v)} There is $w^\prime$ such that $i\in \mathcal{P}(g^*)$ if $w_i\leq w^\prime$. \end{proposition} Part \textit{(i)} shows that in our model only players in the core consume the same bundle, while this is not true for players who contribute less. Part \textit{(ii)} shows that a periphery player who connects to all players in the core consumes more than them, while she contributes less. Indeed, such a player free rides the most, while her wealth can be larger or smaller than that of core players. Larger spillovers increase her net social income which, due to normality, translates into a higher demand for the public good. This does not arise in models without income effects \citep{KM}.\footnote{This result is not due to the assumption of one-way flow of spillovers; indeed, we show in the Online Appendix that it also holds under the alternative assumption of two-way flow. A similar effect arises when the largest contributors are not connected on a fixed network \citep{brkran,nizar}. However, these configurations are not equilibria when the network is endogenous, as these contributors would like to free ride on each other.} While the monotonicity of provision in wealth holds for all players in global public good games, parts \textit{(iii)} and \textit{(iv)} show that here this holds among players with the same neighbors. Hence, it depends on who free rides on whom. Similarly, within the periphery, a player with more neighbors has to be richer to afford more links, free ride more and thereby possibly contribute less. Indeed, periphery players who sponsor fewer links may provide more public good: as they have fewer resources with which to sponsor links, they free ride less, which increases their contributions. A consequence of part \textit{(iv)} is that periphery players tend to afford a different number of links the larger wealth inequality is. Therefore, the number of cells, that is, the sets of players having the same links in $\overline{g}$, weakly increases when wealth dispersion increases. In particular, complete core-periphery graphs emerge when players have similar wealth levels, since then all players can afford the same links. \begin{corollary} If Assumptions 1 and 2 hold, let $\tilde{w} = \min_{j \in \mathcal{P}} w_j$. Then, in any sociable Nash equilibrium, as $\tilde{w}$ decreases, the number of cells increases. \end{corollary} Last, but not least, part \textit{(v)} shows that we can identify those players who are poor enough so that they are always in the periphery. However, there is neither a threshold of wealth above which players are contributors nor one above which they belong to the core. Hence, while the richest players are the largest contributors when the linking costs are sufficiently low (Corollary \ref{ex-bbv}), when $k$ increases, poorer players can be in more central positions.
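
The equilibrium values reported in the tables of Figure \ref{multiple_equilibria} can be reproduced numerically by iterating best replies on a fixed network. The following minimal sketch (in Python; the function and variable names are ours and purely illustrative) assumes the Cobb--Douglas specification used in that figure, under which the Engel curve is linear, $\overline{x}_i=a\overline{w}_i$, and net social income equals wealth minus linking costs plus received spillovers:
\begin{verbatim}
import numpy as np

def cobb_douglas_equilibrium(w, a, k, links, tol=1e-12, max_iter=1000):
    # links[i] lists the players to whom i sponsors a link, so that i
    # pays k per link and free rides on their provision.  Cobb-Douglas
    # utility U = xbar^a * y^(1-a) gives the demand xbar_i = a*wbar_i,
    # where wbar_i = w_i - k*len(links[i]) + spillovers.
    n = len(w)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            spill = sum(x[j] for j in links[i])
            wbar = w[i] - k * len(links[i]) + spill
            x_new[i] = max(0.0, a * wbar - spill)  # own provision, corner at 0
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    y = np.array([w[i] - k * len(links[i]) - x[i] for i in range(n)])
    return x, y

# Network g* of the three-player example: 2 and 3 each link to player 1.
x, y = cobb_douglas_equilibrium(w=[10.0, 8.0, 8.0], a=0.99, k=3.95,
                                links=[[], [0], [0]])
print(np.round(x, 2), np.round(y, 2))  # [9.9  3.91 3.91] [0.1  0.14 0.14]
\end{verbatim}
The resulting provisions match the table for $g^\ast$: here $x_1^\ast=9.9>k$ while $x_2^\ast,x_3^\ast<k=3.95$, consistent with players linking only to player $1$.
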
When multiple equilibria exist, it is interesting to understand how they compare in terms of welfare. To fix ideas, consider Figure \ref{multiple_welfare}. In this example, there are two equilibria: in \ref{1eqw}, the richest players are in the core; in \ref{2eqw}, player $3$ is in the core, despite being poorer than $2$. Total public good provision is higher in \ref{1eqw}. However, in \ref{2eqw}, player $1$ contributes more to the public good to compensate for the lower contribution of $3$. As a result, player $4$, who can afford only one link, is better off in \ref{2eqw}. Additionally, if there are at least three players like $4$ (with a wealth of $4$ and linking only to $1$), equilibria like \ref{2eqw} are associated with a higher welfare. \begin{figure}[htbp!] \begin{center} \begin{minipage}{.48\linewidth} \subfigure[$g^\ast$]{\label{1eqw} \resizebox{2.8cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node,fill=black!20] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [<->] node [right] {} (1) (3) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (2) (4) edge [->] node [right] {} (1); \end{tikzpicture} } \resizebox{4cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1& 2 & 3 &4\\ \hline $w_i$& 10 & 9 & 8 & 4 \\ \hline \rule{0pt}{12pt} $x_i^{\ast}$& $3.\overline{3}$ & $2.\overline{3}$ & $.1\overline{6}$ & $0$ \\ \hline \rule{0pt}{12pt} $y_i^{\ast}$& $5.\overline{6}$ & $5.\overline{6}$ & $5.8\overline{3}$ & $3$ \end{tabular} } } \end{minipage} \begin{minipage}{.48\linewidth} \subfigure[$g^{\ast\ast}$]{\label{2eqw} \resizebox{2.8cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node] (2) [right of=1, xshift=20mm] {2}; \node[main node,fill=black!20] (3) [below of=1] {3}; \node[main node] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (3) (1) edge [<->] node [right] {} (3) (1) edge [<-] node [right] {} (2) (4) edge [->] node [right] {} (1); \end{tikzpicture} } \resizebox{4cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1& 2 & 3 &4\\ \hline $w_i$& 10 & 9 & 8 & 4 \\ \hline \rule{0pt}{12pt} $x_i^{\ast\ast}$& $3.\overline{6}$ & $.8\overline{3}$ & $1.\overline{6}$ & $0$ \\ \hline \rule{0pt}{12pt} $y_i^{\ast\ast}$& $5.\overline{3}$ & $6.1\overline{6}$ & $5.\overline{3}$ & $3$ \end{tabular} } } \end{minipage} \caption{Comparing welfare between two equilibria for $k=1$, $U_i(\overline{x}_i,y_i) =\sqrt{\overline{x}_i y_i}$ and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{multiple_welfare} \end{center} \end{figure} \noindent This example highlights a general result: if players have similar wealth---so that they can afford the same links---and the linking cost is sufficiently high---so that adding players to the core is too costly---an equilibrium with few and richer players in the core yields a higher welfare. Denote this equilibrium by $(x^h,y^h,g^h)$.\footnote{This equilibrium is the one we constructed in the proof of Proposition \ref{threshold} to show existence.} Then: \begin{corollary}\label{richer} Suppose that there exists a sociable equilibrium $(x^\prime,y^\prime,g^\prime)$ different from $(x^h,y^h,g^h)$.
If $\gamma^{\prime}\in(0,1)$ and Assumption 2 holds, there exist $\omega>0$ and $K$ such that welfare is higher in $g^h$ than in $g^\prime$ if $\max_{i\in N}{w_i}-\min_{i\in N}{w_i}<\omega$ and $k>K$. \end{corollary} To conclude, we have just shown how, with homogeneous preferences, our model yields precise predictions regarding the type of networks that emerge in equilibrium. We now derive further implications of these findings for welfare and inequality. \section{Income (Re)Distribution} In this section, we first describe how the impact of free riding in networks depends on the initial income distribution. After characterizing the efficient solution, we derive policy implications on how a planner can redistribute income to increase welfare when agents can change their linking strategies as a result of the intervention. Finally, we show how the efficient solution can be implemented by using personalized prices. \subsection{Income Distribution and Inequality} When the equilibrium of the local public good game is a complete core-periphery network with inactive players in the periphery, spillovers flow as when the public good is global (i.e., among contributors and from contributors to free riders). However, there can be multiple equilibria, whose welfare properties can differ substantially across players depending on their network position. All players in the core, for example, consume the same bundle, while a richer player in the periphery can consume more than them and thus also obtain a larger utility (Proposition \ref{threshold}.\textit{(i)} and \textit{(ii)}). For players in the periphery, it is key how many links they can afford. Each link frees additional resources the player can then spend on consuming more of both goods. Therefore, periphery players with more links benefit more from the network than those with fewer links: as richer players have more links, inequality increases. By focusing on large societies, the next proposition can abstract from the welfare of core players, who represent an infinitesimal proportion of the population. \begin{proposition}\label{inequality} If Assumptions \ref{normality} and \ref{homogeneous} hold, for any sociable Nash equilibrium, there exists $\underline{w}$ such that\\ (i) if $\max_{i\in N}{w_i}-\min_{i\in N}{w_i}\leq \underline{w}$, the network reduces inequality in utility among any two players for a proportion of players that converges to 1 as $n\rightarrow\infty$;\\ (ii) otherwise, the network increases inequality in net social income among any two players for a proportion of players that converges to 1 as $n\rightarrow\infty$.\\ Furthermore, $\underline{w}$ is inversely related to $k$. \end{proposition} This result shows how the initial wealth distribution shapes inequality in payoffs if free riding in networks is costly. When societies are sufficiently homogeneous, networks are conducive to equality, since all periphery players can afford the same links, thereby enjoying the same spillovers. As utility is concave, richer players gain less marginal utility than poorer ones from these spillovers, and the difference in utility between any two periphery players is lower than in the absence of the network. If, to the contrary, the society is sufficiently unequal to begin with, then some players can afford more links, thereby free riding more. As a result, networks will only exacerbate the initial level of wealth inequality, in the sense that the difference in net social income between periphery players is larger than the initial difference in wealth.
Moreover, as in large societies the proportion of periphery players converges to 1, Proposition \ref{inequality} follows. This result hinges on the presence of a budget constraint and heterogeneity in income. When the budget constraint is not relevant, all players have the same free riding opportunities. In this case, as shown in the example in Figure \ref{comparison_aej}, the network always increases inequality when players have different valuations of the public good (\citealp{KM}, Proposition 2). Finally, as $k$ decreases, the network reduces inequality even for a larger initial difference in wealth levels. The main reason for this is that net social income increases as $k$ decreases, and so poorer players can also take advantage of more free-riding opportunities in the network. This result highlights how lowering communication and transportation costs can increase welfare by allowing more people to take advantage of free riding opportunities. \subsection{Efficiency} We now characterize the efficient solution. \begin{proposition}\label{efficiency} If Assumption \ref{normality} holds, the efficient network is either empty or a star in which only the hub is active. A player is isolated if her valuation of the public good is too low. \end{proposition} The social planner faces two trade-offs: \textit{(i)} between consumption and links, since each additional link increases spillovers, but linking is costly; and \textit{(ii)} between private and public good consumption of the different players. It is then quite intuitive that a star with only the hub contributing to the public good is the optimal solution as it minimizes linking costs while maximizing spillovers. While there is only one player in the core of the optimal solution, the model can accommodate less stark results. If, for example, there were capacity constraints in links, the optimal solution would feature more than one player in the core. The reasoning on how to derive the optimal solution is the same. Whether and which players are isolated from the star depends on preferences and total income. Indeed, excluding a player from the star is more beneficial the less she values the public good. Moreover, the number of links that the planner can afford depends on the total income available in the economy. If all players have identical preferences and face identical prices, their marginal rate of substitution between public and private good consumption has to be equal. This implies that their ratios of marginal utilities per income are equal as well. \begin{corollary}\label{eff_homo} If Assumptions \ref{normality} and \ref{homogeneous} hold, then in the efficient solution all players' consumption of the private and the public good is identical. \end{corollary} \subsection{Income Redistribution} As decentralized equilibria are not efficient, we now ask which policies the social planner can introduce to increase welfare. We know from classical results that small transfers of income among contributors are neutral \citep{BBV}. However, this is true only under very specific assumptions when we consider local public goods \citep{nizar}. In our context, with endogenous networks, a transfer scheme is neutral only when the set of contributors does not change and, in addition, the transfer is confined to the largest contributors in the network and their provision does not change much, so that links are not affected. We next show how a planner can redistribute income to increase welfare.
Income redistribution is a budget-balanced transfer scheme $t = (t_1, ..., t_n)$ such that $\sum_{i \in N} t_i = 0$. Players then choose their optimal consumption and links according to the utility maximization problem \eqref{max} given $w+t$. \begin{proposition}\label{improving_welfare} Suppose a sociable equilibrium $(x^\ast,y^\ast,g^\ast)$ exists with non-empty $g^\ast$ and $\gamma^{\prime}_i\in(0,1]$ for all $i\in N$. Then, there is an income redistribution $t$ that yields an equilibrium with a star network $g^{star}$ in which both public good consumption and welfare are higher than in $(x^\ast,y^\ast,g^\ast)$. \end{proposition} In the proposition, we show that redistributing income towards the player with the most links eventually increases both public good consumption and welfare. This holds both for small transfers, which do not change the equilibrium network, and for larger ones, which eventually induce a star as an equilibrium network. Notably, Proposition \ref{improving_welfare} allows for heterogeneous preferences, as long as the public good is strictly normal, which is a slightly stronger assumption than Assumption \ref{normality}. This result builds on our characterization of equilibrium networks. Indeed, since sociable equilibrium networks are core-periphery graphs, spillovers among core players flow as in a complete network. Hence, a neutrality result similar to that of \cite{BBV} applies to this part of the network. In other words, consider the following thought experiment. First, fix the network; given this graph, we can transfer some resources from other core players to the player with the most links---who is also the largest contributor---in a way that each player's net social income is unchanged. Hence, their demand for the public good is unchanged. While the total provision of public good is then constant, the unique contributor in the core is the player who received the most links. As a result, periphery players' access to the public good increases, as they might not have been linked to all core players. However, it is possible to further increase welfare and income by transferring resources to the player with the most links until all the other core players are inactive and it is no longer profitable to link to them. A social planner can then collect the resources saved from deleting these links and use them to increase welfare. In particular, as $\gamma^{\prime}_i>0$ for all $i\in N$, part of these resources will be assigned to the largest contributor to further increase everyone's public good consumption. Yet, the transfers we propose do not necessarily maximize welfare, for two reasons. First, when the planner transfers resources to the largest contributor, her net social income increases, but that of the player whose resources are being taken away decreases. So the optimal transfers need to trade off the increase in the hub's public good provision against the decrease in periphery players' private consumption. Second, the player who becomes the hub of the star when applying the transfers described in Proposition \ref{improving_welfare} might not be the ideal recipient. To clarify ideas, consider the example in Figure \ref{ex_transfers}. Panel (a) represents the initial allocation, while panel (b) the optimal allocation given that player 1 is the hub of the star. Panel (c) instead represents the welfare-maximizing allocation, where resources are being transferred to player 4, who has the highest valuation for the public good.
Indeed, player 4 is initially in the periphery because of a limited initial budget.\footnote{A similar situation can arise when players pay different prices for the private good.} This subtle difference is very important, as it implies that repeatedly applying the redistribution policies characterized by \cite{nizar} to the initial equilibrium would not lead to the welfare-maximizing outcome. \begin{figure}[htbp!] \begin{center} \begin{minipage}{.33\linewidth} \centering \resizebox{3cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node,fill=black!20] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [<->] node [right] {} (1) (3) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (2) (4) edge [->] node [right] {} (1); \end{tikzpicture} } \end{minipage} \begin{minipage}{.33\linewidth} \centering \resizebox{3cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node,fill=black!20] (1) {1}; \node[main node] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (1) (3) edge [->] node [right] {} (1) (4) edge [->] node [right] {} (1); \end{tikzpicture} } \end{minipage} \begin{minipage}{.32\linewidth} \centering \resizebox{3cm}{!}{ \begin{tikzpicture}[-,>=stealth',shorten >=1pt,auto,node distance=2cm and 2cm, main node/.style={circle,draw}] \node[main node] (1) {1}; \node[main node] (2) [right of=1, xshift=20mm] {2}; \node[main node] (3) [below of=1] {3}; \node[main node,fill=black!20] (4) [below of=2] {4}; \path[every node/.style={font=\sffamily\small}] (2) edge [->] node [right] {} (4) (3) edge [->] node [right] {} (4) (1) edge [->] node [right] {} (4); \end{tikzpicture} } \end{minipage} \vspace{.3cm} \begin{minipage}{.33\linewidth} \centering \subfigure[$ W(x^\ast,y^\ast,g^\ast)= 28.384$]{\label{1transf} \resizebox{3.5cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1 & 2 & 3 & 4 \\ \hline \hline $a_i$ & \multicolumn{3}{c|}{.5} & .8 \\ \hline $w_i$ & 15 & 15 & 10 & 4\\ \hline \rule{0pt}{12pt} $x_i^{\ast}$& $4.\overline{3}$ & $4.\overline{3}$ & $0$ & $.73$ \\ \hline \rule{0pt}{12pt} $y_i^{\ast}$& $8.\overline{6}$ & $8.\overline{6}$ & $6$ & $1.2\overline{6}$ \end{tabular} } } \end{minipage} \begin{minipage}{.33\linewidth} \centering \subfigure[$ W(x^\prime,y^\prime,g^\prime)= 38.5063$]{\label{2transf} \resizebox{3.5cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1 & 2 & 3 & 4 \\ \hline \hline $a_i$ & \multicolumn{3}{c|}{.5} & .8 \\ \hline $w_i$ & $29.02$ & \multicolumn{2}{c|}{$5.54$} & 3.91 \\ \hline $x_i^{\prime}$& $14.51$ & \multicolumn{3}{c}{0}\\ \hline \rule{0pt}{12pt} $y_i^{\prime}$& $14.51$ & \multicolumn{2}{c|}{$3.54$} & $1.91$ \end{tabular} } } \end{minipage} \begin{minipage}{.32\linewidth} \centering \subfigure[$ W(x^{\prime\prime},y^{\prime\prime},g^{\prime\prime})= 43.128$]{\label{3transf} \resizebox{3.5cm}{!} { \begin{tabular}{c|c|c|c|c} Player & 1 & 2 & 3 & 4 \\ \hline \hline $a_i$ & \multicolumn{3}{c|}{$.5$} & $.8$ \\ \hline $w_i$& \multicolumn{3}{c|}{$6.03$} & $25.92$ \\ \hline $x_i^{\prime\prime}$& \multicolumn{3}{c|}{0} & $20.74$ \\ \hline \rule{0pt}{12pt} $y_i^{\prime\prime}$& \multicolumn{3}{c|}{$4.03$} & $5.18$ \end{tabular} } } 
\end{minipage} \caption{Welfare-improving vs.\ welfare-maximizing transfers for $k=2$, $U_i(\overline{x}_i,y_i) ={\overline{x}}_i^{a_i} y_i^{1-a_i}$ and $p_i=1$ for all $i\in N$. Players in the core are shaded in gray.}\label{ex_transfers} \end{center} \end{figure} The following proposition addresses these concerns. We define welfare-maximizing transfers as those yielding the Nash equilibrium with the maximal welfare among all Nash equilibria arising from all budget-balanced transfers. We now characterize the welfare-maximizing transfer scheme taking into account network endogeneity.\footnote{As shown in Proposition \ref{efficiency}, a star is the efficient solution. However, the welfare-maximizing transfer scheme is associated with a lower public good provision than in the efficient solution whenever $\gamma^{\prime}_i<1$, where $i$ is the hub of the star network.} \begin{proposition}\label{2ndBest} Suppose there exists a non-empty sociable equilibrium network $g^*$. Then, there is $\Gamma>0$ such that welfare-maximizing transfers exist if $\gamma^{\prime}_i\in (0,\Gamma]$ for all $i\in N$. \end{proposition} This result is obtained with few assumptions on players' preferences, except for a condition on how steep each player's Engel curve is at the second best. Intuitively, this condition requires that the demand of periphery players cannot be too high when they are poor and needs to be relatively low if they are rich, as players who are in the periphery in the second best would provide less public good if they were the hub. Hence, each player's Engel curve cannot be too concave. To demonstrate that this requirement is not too strict, we show in the next corollary that the welfare-maximizing transfer scheme can always be Nash implemented if players have identical and linear Engel curves. \begin{corollary}\label{linear} Suppose players have homogeneous preferences with linear Engel curves such that $\gamma^{\prime}_i\in(0,1]$ for all $i\in N$, and a sociable equilibrium $(x^\ast,y^\ast,g^\ast)$ exists with non-empty $g^\ast$. Then, the second best can always be implemented as a Nash equilibrium. \end{corollary} Finally, by Corollary \ref{ex-bbv}, the welfare-maximizing transfer scheme induces a unique equilibrium if the linking cost is sufficiently low. \subsection{Personalized Prices} We have shown so far that income redistribution can go a long way in increasing welfare. However, the resulting allocation is not efficient. Indeed, the planner cannot dictate how players use the resources allocated to them, which results in a much higher consumption of the private good than in the efficient solution derived in Proposition \ref{efficiency}. We now go one step further and show that personalized prices, financed by a lump-sum tax scheme, can instead achieve the efficient outcome. Indeed, the optimal policy here subsidizes the price of the public good for the player selected as hub in Proposition \ref{efficiency}. The personalized price is set equal to this player's marginal rate of substitution in the efficient solution. This induces her to choose the socially optimal public good provision and all other players to free ride on her provision. The lump-sum taxation scheme is designed to be budget balanced. \begin{proposition}\label{personalized prices} Suppose $\gamma^{\prime}_i\in(0,1)$ for all $i\in N$ and a sociable equilibrium $(x^\ast,y^\ast,g^\ast)$ with non-empty $g^\ast$ exists.
Then, the efficient solution can be implemented by fixing a personalized price of the public good $p_x \in (0,1)$ for the player selected as hub in Proposition \ref{efficiency}, financed by lump-sum taxes from all players. \end{proposition} Since only one player provides the public good, this situation is analogous to a direct government provision of a public good that crowds out private contributions. However, two additional features are relevant. First, crowding-out is incomplete, so that a public intervention can increase welfare by increasing public good provision. Most importantly, an additional increase in consumption can be achieved by inducing a star, thereby redirecting resources from linking to consumption. Proposition \ref{personalized prices} is relevant whenever the social planner is able to affect the price of the public good. In the context of online social platforms, this could take the form of subsidizing the content of ``super-star'' users who contribute a lot to the platform. Reward schemes in line with this idea are indeed implemented, for example, by YouTube, whose main contributors receive awards and are sponsored to participate in various promotional activities. \section{Robustness and Extensions}\label{extensions} Our characterization is robust to the introduction of selfish motives in the private provision of public goods due to warm glow giving, e.g., because of the private benefits of becoming an influencer on Twitter. Our characterization is also robust to more general linking technologies, to an imperfect degree of substitutability between neighbors' public good provision, and to players benefiting from global contributions to the public good, for example because they can see, through hashtags on Twitter, the posts of others they are not linked to. Additionally, the characterization of strict Nash equilibria is robust to the introduction of some complementarity in players' efforts, to heterogeneity in the costs of forming a connection, and to indirect spillovers, e.g., re-tweeting posts. To capture situations where a link between two players implies that both access each other's public good provision, we also extend the model to \textit{two-way flow of spillovers}. Some examples are the acquisition and exchange of information about new products and technologies. In \cite{gg} and \cite{KM}, the demand for the public good is exogenous. In those models, when players are homogeneous, strict Nash networks are complete core-periphery networks, while, when players are heterogeneous, the prediction depends on how the gains from a connection change across players. Furthermore, the law of the few holds. We extend their models to a more general class of preferences and heterogeneous budgets. While most of our results carry over to the two-sided model, the key difference from the model presented above is that core players benefit ``for free'' from the public good provided by those linking to them. In this case, strict Nash equilibria need not be complete core-periphery networks even when players are homogeneous. Additionally, players in the core have the same consumption only if they have the same neighbors, and a richer player in the periphery is worse off than all core players. When spillovers flow two-way, a stronger version of the law of the few of Proposition \ref{lotf} holds: the proportion of players significantly contributing to the public good (even in the periphery) goes to zero as the population size grows to infinity.
Importantly, we highlight the necessary condition for this result to hold, i.e., that the slope of the contributors' Engel curve for the public good is strictly below one. In that case, contributors' provision strictly decreases with spillovers from neighbors. As a result, the amount of spillovers received by contributors cannot be infinitely large. All of these results are formally derived in the Online Appendix. \section{Conclusions}\label{conclusions} In this paper, we introduce a model in which players decide how to allocate their budget between links, a local public good and a private good. A player links to another in order to free ride on her public good provision. Under standard assumptions, Nash equilibrium networks are core-periphery graphs, a prediction that is robust to several extensions. Importantly, since the demand for the public good is endogenous, poorer players can attain more central positions in the network or consume more public good. Hence, the relationship between wealth, provision, consumption and, therefore, utility is not as straightforward as when the public good is global. We have shown that the type of equilibrium that emerges has an impact on welfare and that income redistribution can alleviate inefficiencies. Knowledge of players' preferences is key in the design of optimal transfers. Yet, as stating a high taste for the public good would translate into receiving more resources, future research should investigate how to incentivize people to truthfully report their own taste for the public good and that of their neighbors. Indeed, players observe their neighbors' preferences better than the social planner does. Recent literature has shown that there is scope for eliciting information from agents in a fixed network \citep{bloch2021friend,baumann2018self}. It would be interesting, however, to develop similar insights when the mechanism must also take link formation into account.
\subsubsection{\@startsection{subsubsection}{3}% \z@{.5\linespacing\@plus.7\linespacing}{.1\linespacing}% {\normalfont\itshape}} \makeatother \newcommand{\marginpar}{\marginpar} \newcommand{\hfill$\Box$\bigskip}{\hfill$\Box$\bigskip} \newcommand{\operatorname{Trop}}{\operatorname{Trop}} \newcommand{$\quad\Box$}{$\quad\Box$} \newcommand{\smallskip\flushleft}{\smallskip\flushleft} \newcommand{\textrm{dim}}{\textrm{dim}} \newcommand{\textrm{deg}}{\textrm{deg}} \newcommand{\textrm{det}}{\textrm{det}} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\Lambda}{\Lambda} \newcommand{\hat{\alpha}_{i}}{\hat{\alpha}_{i}} \newcommand {\bC} {\mathbb {C}} \newcommand {\bR} {\mathbb R} \newcommand {\bZ} {\mathbb Z} \newcommand {\D} {\mathcal D} \newcommand {\cD} {\mathcal D} \newcommand {\Z} {\mathcal Z} \newcommand {\DD} {\mathbf D} \newcommand {\bCP} {\mathbb {CP}^1} \newcommand{\textrm{.}}{\textrm{.}} \newcommand{\textrm{,}}{\textrm{,}} \newcommand {\Ga} {\Gamma} \newcommand {\ga} {\gamma} \newcommand {\eps} {\epsilon} \newcommand {\de} {\delta} \newcommand {\supp} {\mathrm supp~} \newcommand {\HH} {\mathcal H} \newcommand {\CC} {\mathcal C} \newcommand {\ST} {\mathcal {ST}} \newcommand {\SD} {\mathcal {SD}} \newcommand{\mathbb N}{\mathbb N} \newcommand {\cA} {\mathcal A} \newcommand {\cM} {\mathcal M} \newcommand {\cH} {\mathcal H} \newcommand {\cC} {\mathcal C} \newcommand {\cP} {\mathcal P} \newcommand {\cS} {\mathcal S} \newcommand {\rd}{\mathbb R\text{deg}} \newcommand{\partial}{\partial} \newcommand {\cri} {\mathfrak {C}} \newcommand{\mathcal L}{\mathcal L} \newcommand{\mathcal O}{\mathcal O} \newcommand {\R}{\mathbb R} \newcommand {\RR}{\mathcal R} \DeclareMathOperator{\Eu}{Eu} \DeclareMathOperator{\Vol}{Vol} \DeclareMathOperator{\EVol}{EVol} \DeclareMathOperator{\M}{M} \DeclareMathOperator{\SM}{SM} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\Grass}{Grass} \DeclareMathOperator{\Conv}{Conv} \DeclareMathOperator{\Osc}{Osc} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\Ann}{Ann} \DeclareMathOperator{\Coker}{Coker} \DeclareMathOperator{\Ve}{Vert} \DeclareMathOperator{\Diam}{Diam} \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prp}[thm]{Proposition} \theoremstyle{definition} \newtheorem{eg}[thm]{Example} \newtheorem{rmk}[thm]{Remark} \numberwithin{equation}{section} \makeatother \begin{document} \title [Return of the plane evolute] {Return of the plane evolute} \author[R.~Piene]{Ragni Piene} \address{Department of Mathematics, University of Oslo, NO-0316 Oslo, Norway} \email{ragnip@math.uio.no} \author[C.~Riener]{Cordian Riener} \address{Department of Mathematics and Statistics, UiT The Arctic University of Norway, NO-9037 Troms\o, Norway} \email {cordian.riener@uit.no} \author[B.~Shapiro]{Boris Shapiro} \address{Department of Mathematics, Stockholm University, SE-106 91, Stockholm, Sweden} \email{shapiro@math.su.se} \dedicatory{``If I have seen further it is by standing on the shoulders of Giants." Isaac Newton, from a letter to Robert Hooke} \date{\today} \keywords{evolute, plane real algebraic curve} \subjclass[2000]{Primary 14H50, Secondary 51A50, 51M05} \begin{abstract} Below we consider the evolutes of plane real-algebraic curves and discuss some of their complex and real-algebraic properties. 
In particular, for a given degree $d\ge 2$, we provide lower bounds for the following four numerical invariants: 1) the maximal number of times a real line can intersect the evolute of a real-algebraic curve of degree $d$; 2) the maximal number of real cusps which can occur on the evolute of a real-algebraic curve of degree $d$; 3) the maximal number of (cru)nodes which can occur on the dual curve to the evolute of a real-algebraic curve of degree $d$; 4) the maximal number of (cru)nodes which can occur on the evolute of a real-algebraic curve of degree $d$. \end{abstract} \maketitle \section{Short historical account} \label{sec:int} As we usually tell our students in calculus classes, the evolute of a curve in the Euclidean plane is the locus of its centers of curvature. The following intriguing information about evolutes can be found on Wikipedia \cite{Wi}: ``Apollonius (c. 200 BC) discussed evolutes in Book V of his treatise Conics. However, Huygens is sometimes credited with being the first to study them, see \cite{HuyB}. Huygens formulated his theory of evolutes sometime around 1659 to help solve the problem of finding the tautochrone curve, which in turn helped him construct an isochronous pendulum. This was because the tautochrone curve is a cycloid, and cycloids have the unique property that their evolute is a cycloid of the same type. The theory of evolutes, in fact, allowed Huygens to achieve many results that would later be found using calculus, see \cite{Ar1, Yo}.'' Notice that \cite{Huy}, originally published in 1673 and freely available on the internet, contains a large number of beautiful illustrations, including those of evolutes. Further exciting pictures of evolutes can be found in the small book \cite{Ya}, written about a hundred years ago for high-school teachers. Among the several dozen books on (plane) algebraic curves available now, only very few \cites{Co, Hi, Ya, Sa} mention evolutes at all, the best of them being \cite{Sa}, first published more than one and a half centuries ago. Some properties of evolutes have been studied in connection with the so-called 4-vertex theorem of Mukhopadhyaya--Kneser as well as its generalizations, see e.g. \cites{Fu, Ta}. Their definition has been generalized from the case of plane curves to that of plane fronts, and also from the case of the Euclidean plane to that of the Poincar\'e disk, see e.g. \cite{FuTa}. Singularities of evolutes and involutes have been discussed in detail by V.~Arnold and his school, see e.g. \cite{Ar1, Ar2} and more recently in \cite{ST, ScT}. In recent years, the notion of the Euclidean distance degree of an algebraic variety, studied e.g. in \cite{DHOST} and earlier in \cites{CT, JoPe}, has attracted substantial attention in the algebraic geometry community. In the case when the variety under consideration is a plane curve, the ED-discriminant in this theory is exactly the standard evolute. In our opinion, this connection calls for further study of the classical evolute since, in spite of the more than three hundred years that have passed since its mathematical debut, the evolute of a plane algebraic curve is still far from being well understood. Below we attempt to develop some real algebraic geometry around the evolutes of real-algebraic curves and their duals, hoping to attract the attention of fellow mathematicians to this beautiful and classical topic.
\section{Initial facts about evolutes and problems under consideration} \label{sec:init} From the computational point of view the most useful presentation of the evolute of a plane curve is as follows. Using a local parametrization of a curve $\Ga$ in $\bR^2$, one can parameterize its evolute $E_\Ga$ as \begin{equation}\label{eq:basic} E_\Ga(t)=\Ga(t)+\rho(t) \bar n(t), \end{equation} where $\rho(t)$ is its curvature radius at the point $\Ga(t)$ (assumed non-vanishing) and $\bar n(t)$ is the unit normal at $\Ga(t)$ pointing towards the curvature center. In Euclidean coordinates, for $\Ga(t)=(x(t),y(t))$ and $E_\Ga(t)=(X(t),Y(t))$, one gets the following explicit expression \begin{equation}\label{eq:basic2} \left\{ X(t)=x(t)-\frac{y^\prime(t)(x^\prime(t)^2+y^\prime(t)^2)}{x^\prime(t)y^{\prime\prime}(t)-x^{\prime\prime}(t)y^\prime(t)},\; Y(t)=y(t)+\frac{x^\prime(t)(x^\prime(t)^2+y^\prime(t)^2)}{x^\prime(t)y^{\prime\prime}(t)-x^{\prime\prime}(t)y^\prime(t)} \right\}. \end{equation} If a curve $\Ga$ is given by an equation $f(x,y)=0$, then (as noticed in e.g., \cite{Hi}, Ch. 11, \S~2) the equation of its evolute can be obtained as follows. Consider the system \begin{equation}\label{eq:basic3} \begin{cases} f(x,y)=0\\ X=x+\frac{f^\prime_x((f^\prime_x)^2+(f^\prime_y)^2)}{2f^\prime_xf^\prime_yf^{\prime\prime}_{xy} -(f^\prime_y)^2f^{\prime\prime}_{xx}-(f^\prime_x)^2f^{\prime\prime}_{yy}}\\ Y=y+\frac{f^\prime_y((f^\prime_x)^2+(f^\prime_y)^2)}{2f^\prime_xf^\prime_yf^{\prime\prime}_{xy} -(f^\prime_y)^2f^{\prime\prime}_{xx}-(f^\prime_x)^2f^{\prime\prime}_{yy}}\\ \end{cases} \end{equation} defining the original curve and the family of centers of its curvature circles. Then eliminating the variables $(x,y)$ from \eqref{eq:basic3} one obtains a single equation defining the evolute in variables $(X,Y)$. For concrete bivariate polynomials $f(x,y)$ of lower degrees, such an elimination procedure can be carried out in, for example, Macaulay 2 \cite{M2}. \begin{example}\label{ex:basic}{\rm Two basic examples of evolutes are as follows. \begin{enumerate} \item For the parabola $\Ga=(t,t^2)$, its evolute is given by \[\textstyle E_\Ga(t)=(-4t^3, \frac{1}{2}+3t^2)\] which is a semicubic parabola satisfying the equation $27X^2=16(Y-\frac{1}{2})^3$, see Fig.~ \ref{fig:conic1a}. \item For the ellipse $\Ga=(a\cos t, b \sin t)$, the evolute is given by \[\textstyle E_\Ga(t)=(\frac{a^2-b^2}{a}\cos^3 t, \frac{b^2-a^2}{b}\sin^3t),\] which is an astroid satisfying the equation $(aX)^{2/3}+(bY)^{2/3}=(a^2-b^2)^{2/3}$, see Fig.~\ref{fig:conic1b}. \end{enumerate}} \end{example} \begin{figure} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.25]{fig1left.pdf} \caption{Evolute of a parabola.} \label{fig:conic1a} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.28]{fig1med.pdf} \caption{Evolute of an ellipse.} \label{fig:conic1b} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[scale=0.28]{fig1right.pdf} \caption[short]{Behavior of the evolute near a simple inflection point.} \label{fig:x^3} \end{subfigure} \caption{First examples of evolutes.} \label{fig:one} \end{figure} If $\Ga$ has an inflection point, then the curvature radius becomes infinite, which means that the evolute $E_\Ga$ goes to infinity with its asymptote being the straight line passing through the inflection point and orthogonal to $\Ga$ (i.e., the normal to $\Ga$ at the inflection point), see Fig.~\ref{fig:x^3}. 
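
This escape to infinity is easy to check symbolically, e.g., for the cubic $\Ga(t)=(t,t^3)$ with its simple inflection point at the origin. The following minimal sketch (in Python with the sympy library; the tool and the identifiers are our own choice and merely illustrative) implements the parametrization \eqref{eq:basic2}:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', real=True)
x, y = t, t**3                       # simple inflection point at t = 0

xp, yp = sp.diff(x, t), sp.diff(y, t)
xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)
den = xp*ypp - xpp*yp                # equals 6*t, vanishing at the inflection

# Evolute (X(t), Y(t)):
X = sp.simplify(x - yp*(xp**2 + yp**2)/den)  # = t/2 - 9*t**5/2
Y = sp.simplify(y + xp*(xp**2 + yp**2)/den)  # = t**3 + (1 + 9*t**4)/(6*t)

print(sp.limit(X, t, 0), sp.limit(Y, t, 0, '+'))  # prints: 0 oo
\end{verbatim}
As $t\to 0$ the evolute indeed runs off to infinity along the vertical line $X=0$, which is exactly the normal to $\Ga$ at its inflection point.
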
If the inflection point is of higher order, this point at infinity is a critical point of the curvature and gives a cusp on the evolute (see Example IX in Subsection \ref{subs:higher}). Observe that if $\Ga$ is a rational algebraic curve, then the above recipe provides a global parametrization of $E_\Ga$. \smallskip Given a plane curve $\Ga$, the alternative definition of its evolute $E_\Ga$ which will be particularly useful for us is that $E_\Ga$ is the envelope of the family of normals to $\Ga$, where a \emph{normal} is an affine line perpendicular to (the tangent line to) $\Ga$ at some point of $\Ga$. In other words, each normal to $\Ga$ is a tangent line to $E_\Ga$ and each tangent to $E_\Ga$ is a normal to (the analytic continuation of) $\Ga$. From this definition it follows that the evolute $E_\Ga\subset \bR^2$ is a \emph{caustic}, i.e., the critical locus of the Lagrangian projection of the cotangent bundle $T^\ast \Ga\subset T^\ast \bR^2$ of the initial curve $\Ga$ to the (phase) plane $\bR^2$. This circumstance explains, in particular, why typical singularities of evolutes behave differently from those of (generic) plane algebraic curves and why generic evolutes have no inflection points. \smallskip For an affine or projective real curve $\Ga$, we denote by $\Ga^\bC$ its complexification. For an affine real curve $\Ga \subset \bR^2$, we let $\tilde\Ga\subset \bR P^2$ denote its projective closure and $\tilde \Ga ^\bC \subset \bC P^2$ the complexification of $\tilde \Ga$ (equal to the projective closure of $\Ga^\bC \subset \bC^2$). \begin{definition} {\rm For a plane algebraic curve $\Ga\subset \bR^2 \subset \bR P^2$, define its \emph{curve of normals} $\tilde N_\Ga\subset (\bR P^2)^\vee$ as the curve in the dual projective plane whose points are the normals of $\Ga$. (We start with the quasiprojective curve $N_\Ga$ of all normals to the affine $\Ga$ and take its projective closure in $(\bR P^2)^\vee$.)} \end{definition} Similarly to the above, for a (locally) parameterized curve $\Ga(t)=(x(t),y(t))$ and $N_\Ga(t)=(u(t),v(t))$, one gets \begin{equation}\label{eq:basic4} \left\{ u(t)=\frac{x^\prime(t)}{y^\prime(t)},\;v(t)=-\frac{x(t)x^\prime(t)+y(t)y^\prime(t)}{y^\prime(t)}\right\}. \end{equation} (Here we assume that the equation of the normal line to $\Ga$ at the point $(x(t),y(t))$ is taken in the standard form $y+u(t)x+v(t)=0$.) \smallskip If a curve $\Ga$ is given by an equation $f(x,y)=0$, then the equation of its curve of normals can be obtained as follows. Consider the system \begin{equation}\label{eq:eqnorm} \begin{cases} f(x,y)=0\\ u=-\frac{f^\prime_y}{f^\prime_x}\\ v=\frac{xf^\prime_y-yf^\prime_x}{f^\prime_x}\\ \end{cases} \end{equation} defining the original curve and the coefficients of the family of normals. Eliminating the variables $(x,y)$ from \eqref{eq:eqnorm} one obtains a single algebraic equation defining the curve of normals in the variables $(u,v)$. By the above alternative definition, for a plane algebraic curve $\Ga$, we get that $(\tilde N_\Ga)^\vee=\tilde E_\Ga$, where $\vee$ stands for the usual projective duality of plane projective curves. \medskip Two types of real singularities of the evolute/curve of normals have natural classical interpretations in terms of the original curve $\Ga$.
Recall that real nodes of real-algebraic curves are subdivided into \emph{crunodes} and \emph{acnodes}, the former being transversal intersections of two real branches and the latter being transversal intersections of two complex conjugate branches. Now observe that a crunode of $N_\Ga$ (i.e., a real node with two real branches) corresponds to a \emph{diameter} of $\Ga$, which is a straight segment connecting a pair of points on $\Ga$ and which is perpendicular to the tangent lines to $\Ga$ at both endpoints. On the other hand, a real cusp of $E_\Ga$ (resp. an inflection point on $N_\Ga$) corresponds to a \emph{vertex} of $\Ga$ which is, by definition, a critical point of $\Ga$'s curvature. As we mentioned above, vertices of plane curves appear, for example, in the classical $4$-vertex theorem and its numerous generalizations. Beautiful lower bounds on the number of diameters of plane curves and plane wavefronts, as well as their higher-dimensional generalizations, have been obtained in symplectic geometry, see e.g. \cites{Pu1, Pu2}. \medskip To formulate the problems we consider below, let us recall the following notion, which deserves to be better known \cite[Def.~1]{LSS}. \begin{definition}{\rm Given a real algebraic hypersurface $H \subset \bR^n$, we define its \emph{$\R$-degree} as the supremum of the cardinality of $H \cap L$ taken over all lines $L \subset \bR^n$ such that $L$ intersects $H$ transversally.} \end{definition} In what follows, we denote the $\R$-degree of $H$ by $\R \deg(H)$. Obviously, $\R \deg(H)\le \deg(H)$, where $\deg(H)$ is the usual degree of $H$. \medskip In what follows, we discuss four real-algebraic questions related to the evolutes and curves of normals of plane real-algebraic curves. \begin{problem}\label{R-deg EG-NG} For a given positive integer $d$, what are the maximal possible $\R$-degrees of the evolute $E_\Ga$ and of the curve of normals $N_\Ga$ where $\Ga$ runs over the set of all real-algebraic curves of degree $d$? \end{problem} \begin{problem}\label{cusps EGa} For a given positive integer $d$, what is the maximal possible number of real cusps on $E_\Ga$ where $\Ga$ runs over the set of all real-algebraic curves of degree $d$? In other words, what is the maximal possible number of vertices a real-algebraic curve $\Ga$ of degree $d$ might have? \end{problem} To make Problem~\ref{cusps EGa} well-defined we have to assume that $\Ga$ does not have a circle as its irreducible component since in the latter case the answer is infinite. Note that since by \emph{vertices} of $\Ga$ we understand the critical points of the curvature, it is possible that a vertex corresponds to a real cusp of the projective closure of $E_\Ga$. This happens, in particular, when the curvature vanishes at a critical point (see Example IX in Subsection \ref{subs:higher}). \begin{problem}\label{nodesNGa} For a given positive integer $d$, what is the maximal possible number of crunodes on $N_\Ga$ where $\Ga$ runs over the set of all real-algebraic curves of degree $d$? In other words, what is the maximal possible number of diameters $\Ga$ might have? \end{problem} Here again we have to assume that $\Ga$ does not have a circle as its irreducible component since in this case the answer is infinite. \begin{problem}\label{nodesEGa} For a given positive integer $d$, what is the maximal possible number of crunodes on $E_\Ga$ where $\Ga$ runs over the set of all real-algebraic curves of degree $d$?
In other words, what is the maximal possible number of points in $\bR^2$ which are the centers of at least two distinct (real) curvature circles of $\Ga$? \end{problem} \begin{remark}{\rm As we mentioned above, questions similar to Problems~\ref{cusps EGa} and \ref{nodesNGa} have been studied in classical differential geometry and symplectic geometry/topology. They can also be connected to the study of plane curves of constant breadth, which has been carried out by such celebrities as L.~Euler, A.~Hurwitz, H.~Minkowski, and W.~Blaschke, see e.g. \cite{Ba} and references therein. To the best of our knowledge, Problem~\ref{nodesEGa} has not been previously discussed in the literature.} \end{remark} Our results related to Problems~1--4 can be found in Sections \ref{sec:rdeg}--\ref{sec:Ecru} below. They are mostly obtained by using small deformations of line arrangements in the plane. In all cases the lower bounds for the above quantities which we obtain are polynomials in $d$ of the same degree as their complex counterparts. However, in all cases but one their leading coefficients are smaller than those of the complex answers. At the moment we do not know whether our estimates are sharp even on the level of the leading coefficients. \section{Various preliminaries} \subsection{Basic complex algebraic facts} We first summarize some information about the evolute and the curve of normals, mainly borrowed from the classical treatise \cite{Sa}. \begin{proposition}[see Art. 111, 112, p.~94--96, in \cite{Sa}]\label{prop:1} For an affine real-algebraic curve $\Ga\subset \bR^2$ of degree $d$, which is in general position with respect to the line at infinity and has only $\delta$ nodes and $\kappa$ ordinary cusps as singularities, the curves $\tilde \Ga^\bC$, $\tilde E_\Ga^\bC$ and $\tilde N_\Ga^\bC$ are birationally equivalent. The degree of $\tilde E_\Ga$ equals $3d(d-1)-6\delta-8\kappa$, and the degree of $\tilde N_\Ga$ equals $d^2-2\delta-3\kappa$. \end{proposition} The genericity assumption for the birationality can be substantially weakened (but not completely removed). \begin{lemma}[see Art. 113, p.~96, in \cite{Sa}] For a generic affine real-algebraic curve $\Ga\subset \bR^2$ of degree $d$, $\tilde E_\Ga^\bC$ has no inflection points. \end{lemma} \begin{proposition}[see Art. 113, p.~97, of \cite{Sa}]\label{prop:cusps+nodes} For an affine real-algebraic curve $\Ga\subset \bR^2$ as in Proposition \ref{prop:1}, the only singularities of $\tilde E_\Ga^\bC$ and $\tilde N_\Ga^\bC$ are nodes and cusps, except that $\tilde N_\Ga^\bC$ has an ordinary $d$-uple point (corresponding to the line at infinity). The number $\kappa_E$ of cusps of $\tilde E_\Ga^\bC$ equals $3(d(2d-3)-4\delta-5\kappa)$. If $\Ga$ is nonsingular, the number $\delta_E$ of nodes of $\tilde E_\Ga^\bC$ equals $\frac{d}{2} (3d-5)(3d^2-d-6)$, and the number $\delta_N$ of nodes of $\tilde N_\Ga^\bC$ equals $\binom{d^2-1}{2}-\binom{d-1}{2}-\binom{d}{2}=(d^2+d-4)\binom{d}{2}$. The curve $\tilde N_\Ga^\bC$ has no cusps (since $\tilde E_\Ga^\bC$ has no inflection points). \end{proposition} For the sake of completeness and for the convenience of our readers, we include in Appendix~\ref{sec:Ragni} below (some) modern proofs of the above claims, further results, and discussions related to the enumerative geometry of envelopes and evolutes over the field of complex numbers. \subsection{Klein's formula} For further use, let us recall a classical real-algebraic result of F.~Klein, see \cites{Kl, Co}.
\begin{theorem}\label{th:Klein} If a real-algebraic curve $\Ga$ has no singularities except nodes, cusps, bitangents and inflection points, then $$d+2\tau^\prime+i^\prime=d^\vee+2\delta^\prime+\kappa^\prime,$$ where $d$ is the degree, $\tau^\prime$ the number of conjugate tangents, $i^\prime$ the number of real inflections, $d^\vee$ is the class, i.e., the degree of the dual curve, $\delta^\prime$ the number of real conjugate points, and $\kappa^\prime$ the number of real cusps of $\Ga$. \end{theorem} Klein's theorem was generalized by Schuh \cite{Sch}, see \cites{Vi, Wa}. In particular, the beautiful paper \cite{Wa} contains the following result (usually referred to as the Klein--Schuh theorem) together with detailed proofs and references to the earlier literature. \begin{theorem}\label{th:genKlein} For any real-algebraic plane curve $\Ga$, \[\textstyle d-d^\vee=\sum_p (m_p(\Ga)-r_{p}(\Ga)) -\sum_q (m_q(\Ga^\vee)-r_{q}(\Ga^\vee)), \] where $\Ga^\vee$ is the dual curve, $d$ is the degree and $d^\vee$ is the class of $\Ga$, $m_p(\Ga)$ (resp. $m_q(\Ga^\vee)$) is the multiplicity of a real singular point $p\in \Ga$ (resp. $q\in \Ga^\vee$), and $r_{p}(\Ga)$ (resp. $r_{q}(\Ga^\vee)$) is the number of local real branches of $\Ga$ at $p$ (resp. of $\Ga^\vee$ at $q$). \end{theorem} \begin{example}{\rm Let $\Gamma$ be the curve of normals of an ellipse. Then $\Gamma^\vee$ is the evolute of the ellipse. We have $d=4$ and $d^\vee =6$. The singularities of $\Gamma$ are two crunodes, corresponding to the two diameters of the ellipse, and an acnode, corresponding to the line at infinity. The evolute $\Gamma^\vee$ has four real cusps and no other real singular points. This gives $4-6=2-4$, which checks with the formula, see Fig.~\ref{fig:conic1b}.} \end{example} \subsection{Brusotti's theorem and small deformations of real line arrangements} Let $\Ga\subset \bC^2$ be any (possibly reducible) plane real-algebraic curve with only real and complex nodes as singularities. Denote by $\Ga_\bR\subset \bR^2$ the real part of $\Ga$. Observe that there are two topological types of small real deformations/smoothings of a crunode. Namely, being a transversal intersection of two smooth real branches, a crunode locally has four connected components in its complement. Under a small real deformation of a crunode, either one or the other pair of opposite connected components merges, forming a single component. Similarly, there exist two topological types of small real deformations/smoothings of an acnode. Either it disappears into the complex domain or a small real oval is created near the acnode. The following useful result was proved by Brusotti in \cite{Br}. \begin{theorem}\label{th:brusotti} Any real-algebraic curve $\Ga\subset \bC^2$ with only nodes as singularities admits a small real deformation of the same degree which realizes any collection of independently prescribed smoothing types of its real nodes. \end{theorem} Thus there are $2^m$ topological types of such small deformations, where $m$ is the number of real nodes of the real curve under consideration. The easiest way to think about different types of small deformations is to fix a real bivariate polynomial $G(x,y)$ of minimal degree defining $\Ga$ as its zero locus and to assign a $\pm$-sign to each real node. Then the sign pattern uniquely determines the way to resolve each crunode and acnode as follows. If $v$ is a crunode of $\Ga$, then $G(x,y)$ has alternating signs in the four local components of the complement to $\Ga$ near $v$.
If we assign the sign $+$ to the node $v$, then under this resolution the two opposite local components in which $G(x,y)$ is positive glue together, and if we assign the sign $-$, then the other two opposite components (in which $G(x,y)$ is negative) glue together. Analogously, if $G(x,y)$ has a local maximum at the acnode and we assign $+$, then we create a small oval; otherwise the acnode disappears. For a local minimum, the signs exchange their roles. The next statement follows from \cite{Br} and can also be found in \cite[p.~13]{Gu} and \cite{Gu2}. \begin{proposition} \label{prop:signs} For each plane real-algebraic curve $\Ga$ of a given degree $d$ with only nodes as singularities and a given sign pattern at its crunodes and acnodes, there exists a real bivariate polynomial of degree at most $d$ having the chosen sign at each crunode/acnode. \end{proposition} \begin{corollary}\label{cor:signs} In the above notation, if $\Ga$ is the zero locus of a real polynomial $G(x,y)$ and $H(x,y)$ is a polynomial realizing a chosen sign pattern for the crunodes/acnodes of $\Ga$ and such that $\Ga$ and the curve $H(x,y)=0$ have no common singularities (including the line at infinity), then there exists $\epsilon_{G,H}>0$ such that for any $0<\epsilon\le \epsilon_{G,H}$, the curve given by $G(x,y)+\epsilon H(x,y)=0$ is non-singular and realizes the prescribed smoothing type of all crunodes/acnodes of $\Ga$. \end{corollary} We will mainly be applying Brusotti's theorem to \emph{generic} real line arrangements in $\bR^2$, where ``generic'' means that no two lines are parallel and no three lines intersect at one point. In this case, one has only crunodes among the real nodes. The following claim will be useful in our considerations. \begin{proposition}\label{lm:smallcomp} Given a line arrangement $\cA\subset \bR^2$ of degree $d$, a vertex (crunode) $v$ and a real polynomial $H(x,y)$ of degree at most $d$ which does not vanish at $v$, consider a sufficiently small disk $\mathbb D\subset \bR^2$ centered at $v$ which intersects neither the real zero locus of $H(x,y)$ nor any of the lines of $\cA$ except for the two whose intersection point is $v$. Fix additionally a product $G(x,y)$ of linear forms whose zero locus is $\cA$. Then there exists $\beta(G,H,v)>0$ such that for any $0<\epsilon^2 \le \beta(G,H,v)$, the curve given by $G(x,y)+\epsilon^2 H(x,y)=0$ restricted to $\mathbb D$ has the following properties: \begin{itemize} \item[\rm (i)] it consists of two smooth real branches without inflection points; \item[\rm (ii)] each branch contains a unique vertex, i.e., a critical point of the curvature; at this point the curvature attains its maximum. \end{itemize} \end{proposition} \begin{figure} \begin{center} \includegraphics[scale=0.2]{Curv1.pdf}\hskip0.5cm \includegraphics[scale=0.2]{CurvStat.pdf}\hskip0.5cm \includegraphics[scale=0.2]{CurvQuot.pdf} \end{center} \caption{The leftmost plot shows the curvature of $y_\eps=\sqrt{x^2+\eps^2(1+x^4-x^6)}$ for $\eps=1/4$, the central plot shows the standard curvature for the same $\eps$, and the rightmost plot shows their quotient together with the theoretical asymptotic quotient $\Psi(x)$, see Lemma~\ref{lm:curvature}.} \label{fig:Curvature} \end{figure} The proof of Proposition~\ref{lm:smallcomp} is based on the next analytic lemma. Consider a family of functions $y_\eps(x)=\sqrt{x^2+\eps^2(1+O(x,\eps))}$, where $O(x,\eps)$ is a function vanishing at the origin and real-analytic in the variables $(x,\eps)$ in some open neighborhood of it.
Therefore $1+O(x,\eps)$ is positive in some sufficiently small fixed rectangle $x\in [-A,A],\; \eps\in [-e,e]$. Here $\eps$ is a small real parameter, and for each fixed $\eps$ we think of $y_\eps(x)$ as a function of $x\in [-A,A]$. Let $K_\eps(x)=\frac{y^{\prime\prime}_\eps(x)}{(1+(y^\prime_\eps(x))^2)^{3/2}}$ be the signed curvature of the function $y_\eps(x)$ and let $k_\eps(x)=\frac{\eps^2}{(\eps^2+2x^2)^{3/2}}$ be the signed curvature of the (upper branch of the) hyperbola $y= \sqrt{x^2+\eps^2}$, $x\in \bR$. (Notice that $k_\eps(x)>0$ for all $x\in \bR$ and that $\int_{-\infty}^\infty k_\eps(x)\,dx=\sqrt{2}$ for all $\eps\in \bR$ with $\eps\neq 0$.) \begin{lemma}\label{lm:curvature} In the above notation, when $\eps\to 0$, the quotient $\frac{K_\eps(x)}{k_\eps(x)}$ converges uniformly on the interval $[-A,A]$ to the function \[\textstyle \Psi(x)=1+O(x,0)-xO^\prime(x,0)+\frac {1}2 x^2 O^{\prime\prime}(x,0).\] \end{lemma} The statement of Lemma~\ref{lm:curvature} is illustrated in Fig.~\ref{fig:Curvature}. (For the function shown there, $O(x,\eps)=x^4-x^6$, so $\Psi(x)=1+3x^4-10x^6$.) \begin{proof} Set $\Phi(x):=x^2+\eps^2(1+O(x,\eps))$. With this notation we get \[y^\prime_\eps=\frac{\Phi^\prime}{2\Phi^{1/2}},\quad y^{\prime\prime}_\eps=\frac{2\Phi^{\prime\prime}\Phi-(\Phi^\prime)^2}{4\Phi^{3/2}},\quad K_\eps=\frac{y^{\prime\prime}_\eps}{(1+(y^\prime_\eps(x))^2)^{3/2}}=2\frac {2\Phi^{\prime\prime}\Phi-(\Phi^\prime)^2} {(4\Phi+(\Phi^\prime)^2)^{3/2}}. \] (Recall that all derivatives are taken with respect to the variable $x$.) Substituting the expressions for $\Phi, \Phi^\prime$ and $\Phi^{\prime\prime}$ in the third formula, we obtain \[K_\eps(x)=2\eps^2 \frac {4+4O+2x^2O^{\prime\prime}+2\eps^2O^{\prime\prime}+2\eps^2 O O^{\prime\prime}-4x O^\prime-\eps^2(O^\prime)^2}{(4\eps^2+8x^2+4\eps^2 O+4\eps^2x O^\prime+\eps^4(O^\prime)^2)^{3/2}}. \] Dividing by $k_\eps(x)$ we get \begin{equation}\label{eq:eqKeps} \frac{K_\eps(x)}{k_\eps(x)}=2\frac {(4+4O+2x^2O^{\prime\prime}+2\eps^2O^{\prime\prime}+2\eps^2 O O^{\prime\prime}-4x O^\prime-\eps^2(O^\prime)^2 )(\eps^2+2x^2)^{3/2}}{(4\eps^2+8x^2+4\eps^2 O+4\eps^2x O^\prime+\eps^4(O^\prime)^2)^{3/2}}. \end{equation} To prove the pointwise convergence, assume that $x\neq 0$ and let $\eps\to 0$. Then, using the real analyticity of $O(x,\eps)$ and $\Phi(x,\eps)$, one gets \begin{align*} \lim_{\eps\to 0} \frac{K_\eps(x)}{k_\eps(x)}=&2 \frac {(4+4O(x,0)-4xO^\prime(x,0)+2x^2O^{\prime\prime}(x,0)) (2x^2)^{3/2}} {(8x^2)^{3/2}}\\ =&1+O(x,0)-xO^\prime(x,0)+\frac{x^2O^{\prime\prime}(x,0)}{2}=\Psi(x). \end{align*} For $x=0$, the function $\Psi(x)$ extends by analyticity, and $\Psi(0)=1$ since $O(0,0)=0$. \smallskip To prove the uniform convergence, consider again \eqref{eq:eqKeps}. The factor \[(4+4O+2x^2O^{\prime\prime}+2\eps^2O^{\prime\prime}+2\eps^2 O O^{\prime\prime}-4x O^\prime-\eps^2(O^\prime)^2 )\] in the right-hand side is real analytic and non-vanishing at the origin. Thus if we can prove that the remaining factor \[\frac{(\eps^2+2x^2)^{3/2}}{(4\eps^2+8x^2+4\eps^2 O+4\eps^2x O^\prime+\eps^4(O^\prime)^2)^{3/2}}\] in the right-hand side of \eqref{eq:eqKeps} is real analytic and non-vanishing at the origin, the uniform convergence will follow. Let us consider the inverse $\left(\frac{4\eps^2+8x^2+4\eps^2 O+4\eps^2x O^\prime+\eps^4(O^\prime)^2}{\eps^2+2x^2}\right)^{3/2}$ of the latter expression.
To this end, it suffices to show that \[\kappa(x,\eps)=\frac{4\eps^2+8x^2+4\eps^2 O+4\eps^2x O^\prime+\eps^4(O^\prime)^2}{\eps^2+2x^2}=4\frac{\eps^2+2x^2+\eps^2 O+\eps^2x O^\prime+\eps^4(O^\prime)^2/4}{\eps^2+2x^2}\] has a non-vanishing limit at the origin. To prove this claim, consider \begin{multline*} \lim_{(x,\eps)\to (0,0)}\frac{\eps^2+2x^2+\eps^2 O+\eps^2x O^\prime+\eps^4(O^\prime)^2/4}{\eps^2+2x^2}\\ =\lim_{\sqrt{\eps^2+2x^2}\to 0}\left(1+\frac{\eps^2 O}{\eps^2+2x^2}+\frac{\eps^2x O^\prime}{\eps^2+2x^2}+\frac{\eps^4(O^\prime)^2/4}{\eps^2+2x^2}\right). \end{multline*} Since $O(x,\eps)$ and $x$ vanish at the origin, the limits at the origin of the second and third terms in the right-hand side of the latter expression are $0$. The last term also has limit $0$, since it contains $\eps^4$ in the numerator. Thus the limit of $\kappa(x,\eps)$ at the origin exists and equals $4$. The result follows. \end{proof} \begin{remark} \label{rm:rm2}{\rm A rather simple rescaling shows that a statement similar to Lemma~\ref{lm:curvature} holds in the more general family of curves $y_\eps(x)=a\sqrt{x^2+\eps^2(1+O(x,\eps))}$ for the quotient of the curvature $K_\eps(x)$ and the standard curvature $k_\eps(x)= \frac{a\eps^2} {(\eps^2+(1+a^2)x^2)^{3/2}}$ of (the branch of) the hyperbola $y=a\sqrt{x^2+\eps^2}$, where $a$ is a fixed positive constant. The original case corresponds to $a=1$. } \end{remark} \begin{proof}[Proof of Proposition~\ref{lm:smallcomp}] In the above notation, consider the family of real curves $\Upsilon_\eps: \{ G(x,y)+\eps^2 H(x,y)=0 \}$. In a small neighborhood $\mathbb D$ of the chosen vertex $v$ and for sufficiently small $\eps$, the restriction of $\Upsilon_\eps$ to $\mathbb D$ is a desingularization of the crunode at $v$ of a chosen type, since $\eps>0$ and $H(x,y)$ does not vanish at $v$. Let us choose affine coordinates $(\tilde x, \tilde y)$ centered at $v$ which are obtained from the initial coordinates $(x,y)$ by a translation and rotation such that the pair of lines belonging to $\cA$ and intersecting at $v$ will be given by $\tilde y=\pm a\tilde x$ for some positive $a$. These coordinates will be uniquely defined if we additionally require that in a small neighborhood of $v$ the family $\Upsilon_\eps$ be close to the family of hyperbolas $\tilde y=\pm a\sqrt {\tilde x^2+\eps^2}$. In these coordinates $(\tilde x, \tilde y)$, the curve $\Upsilon_\eps$ will be given by \[(\tilde y^2-a^2\tilde x^2)L(\tilde x, \tilde y)+\eps^2 H(\tilde x, \tilde y)=0.\] Here (after a possible rescaling of the equation and of the parameter $\eps$) we can assume that $L(\tilde x,\tilde y)=1+ \dots$ and $H(\tilde x, \tilde y)=1 + \dots$, where $\dots$ stand for higher order terms. Notice that $L(\tilde x,\tilde y)$ is the product of the linear forms defining the lines of $\cA$ other than the two given by $(\tilde y^2-a^2\tilde x^2)$. From the latter equation we obtain \begin{equation}\label{eq:extra} \tilde y^2=a^2\tilde x^2+\eps^2\frac {1 + C(\tilde x, \tilde y)}{1 + D(\tilde x, \tilde y)}, \end{equation} where $C(\tilde x, \tilde y)$ and $D(\tilde x, \tilde y)$ are real analytic functions vanishing at the origin. Expanding the functional coefficient of $\eps^2$ in \eqref{eq:extra} near the origin in the coordinates $(\tilde x, \tilde y)$ and using the implicit function theorem, we get that \eqref{eq:extra} determines the family of curves $\tilde y_\eps = \pm a \sqrt{ \tilde x^2+\eps^2(1+O(\tilde x,\eps))}$.
Here $O(\tilde x,\eps)$ is a well-defined function which vanishes at the origin and is real analytic in some neighborhood of it. Dropping the tildes, we see that the case $a=1$, i.e., $y_\eps = \sqrt{x^2+\eps^2(1+O(x,\eps))}$, is exactly the one treated in Lemma~\ref{lm:curvature}, while the case of an arbitrary $a>0$ is covered by Remark~\ref{rm:rm2}; the case $a<0$ is obtained from the case $a>0$ by a trivial change of variables. Furthermore, by Lemma~\ref{lm:curvature}, the curvature $K_\eps(x)$ of $\Upsilon_\eps$ is strictly positive for all sufficiently small $\eps$ and all $x$ lying in an a priori fixed small neighborhood of $v$. Thus $\Upsilon_\eps$ has no inflection points in this neighborhood. Finally, near the origin and for small $\eps$, $K_\eps(x)$ behaves very much like the standard curvature $k_\eps(x)$, since (i) their quotient tends to a positive constant at $0$, and (ii) for $\eps\to 0$, the standard curvature $k_\eps(x)$ tends to $\sqrt{2}\,\delta(0)$, where $\delta(0)$ is Dirac's delta function supported at the origin. Therefore, for all sufficiently small $\eps$, $K_\eps(x)$ has a unique maximum near the origin. For general $a>0$, the situation is completely parallel. \end{proof} To move further, we need more definitions. By an \emph{edge} of a real line arrangement $\cA\subset \bR^2$ we mean a connected component of $\cA\setminus V(\cA)$, where $V(\cA)$ is the set of its vertices (nodes). An edge is called \emph{bounded} if both its endpoints are vertices, and \emph{unbounded} otherwise. Given a small resolution $\RR$ of $\cA$ and a bounded edge $e\subset \cA$, we denote by $\RR_e\subset \RR$ the restriction of $\RR$ to a sufficiently small neighborhood $U_e\subset \bR^2$ of $e$. Obviously, for any small resolution $\RR$ of $\cA$, $\RR_e$ consists of three connected components, see Fig.~\ref{pic:edge}. We say that $\RR$ \emph{respects} $e$ if each of the three connected components of $\RR_e$ is close to the union of edges bounding a single connected domain of $U_e\setminus \cA$, see Fig.~\ref{pic:edge}\,(left); otherwise we say that $\RR$ \emph{twists} the edge $e$, see Fig.~\ref{pic:edge}\,(right). In the first case, the edge $e$ is called \emph{respected} by $\RR$ and in the second case \emph{twisted} by $\RR$. The three connected components of $\RR_e$ are divided into two \emph{short} ones and one \emph{long} one, where the long one is stretched along $e$ and each of the two short ones is completely located near the respective vertex of $e$. \smallskip Proposition~\ref{lm:smallcomp} has the following consequence.
\begin{figure}[t] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.35]{resolution1.pdf} \caption{The resolution respects the bounded edge.}\label{pic:edgeleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[scale=0.35]{resolution2.pdf} \caption{The resolution twists the bounded edge.}\label{pic:edgeright} \end{subfigure} \hfill \caption{Small resolutions of a line arrangement around the bounded edge $AB$.} \label{pic:edge} \end{figure} \begin{corollary}\label{lm:edges} Given a generic line arrangement $\cA\subset \bR^2$ and its sufficiently small real deformation $\RR$, the following facts hold: \begin{itemize} \item[\rm (i)] If $\RR$ respects the edge $e$, then the short components of $\RR_e$ have no inflection points, while the long component has an even number of inflection points (possibly none), see Fig.~\ref{pic:edge} (left); \item[\rm (ii)] If $\RR$ twists $e$, then the short components of $\RR_e$ have no inflection points, while the long component has an odd number of inflection points, see Fig.~\ref{pic:edge} (right); \item[\rm (iii)] If $\RR$ respects the edge $e$, then there are at least five extrema of the curvature on the three components of $\RR_e$; namely, one maximum on each of the short components, two maxima on the long component close to the vertices, plus an odd number of additional extrema on the long component; \item[\rm (iv)] If $\RR$ twists the edge $e$, then there are at least four extrema of the curvature on the three components of $\RR_e$; namely, one maximum on each short component and two maxima (of the absolute value of the curvature) on the long component close to the vertices. \end{itemize} \end{corollary} \begin{proof} To settle (i) and (ii), we notice that by Proposition~\ref{lm:smallcomp} neither the short nor the long components of $\RR_e$ have inflection points near the vertices of $e$. Furthermore, if we parameterize the long component as $(x(t),y(t))$ with $(x^\prime(t))^2 +(y^\prime(t))^2$ non-vanishing for all $t$, then the signed curvature \[k(t)=\frac{ x^\prime(t) y^{\prime\prime}(t)- y^\prime(t) x^{\prime\prime}(t)} {((x^\prime(t))^2+(y^\prime(t))^2)^{3/2}}\] will have the same sign near the two vertices if $\RR$ respects $e$ and will change sign if $\RR$ twists $e$. Therefore the long component acquires an even number of inflections, counted with multiplicities, in the first case and an odd number of inflections in the second case. To settle (iii) and (iv), observe that by Proposition~\ref{lm:smallcomp}, both in the case when $\RR$ respects $e$ and in the case when it twists $e$, $\RR_e$ will have four maxima of the absolute value of the curvature near the vertices of $e$: one on each short component and two on the long component. Additionally, if $\RR$ respects $e$, then the long component contains an even number of inflection points. If there are no inflection points at all (as in the case shown in Fig.~\ref{pic:edge} (left)), then there is at least one more minimum of the curvature on the long component between the two maxima. More generally, between any two consecutive inflection points on the long component there must be at least one maximum of the absolute value of the curvature. \end{proof} \section{On the maximal $\R$-degrees of the evolute and of the curve of normals}\label{sec:rdeg} Recall that the degree of the evolute of a generic curve of degree $d$ equals $3d(d-1)$, see Proposition~\ref{prop:1}.
Already the first non-trivial case, that of plane conics, shows that the question about the maximal $\bR$-degree of the evolute (the first part of Problem~\ref{R-deg EG-NG}) is non-trivial. Namely, if $\Ga$ is a generic real conic, then the usual degree of $E_\Ga$ equals $6$, while the $\bR$-degree of its evolute (which for an ellipse is a stretched astroid) equals $4$, see Lemma~\ref{lm:astroid} below, Fig.~\ref{fig:one} b) and Fig.~\ref{fig:hypRot}. Our initial result in this direction is as follows. \begin{proposition}\label{prop:Rdegevol} For any $d\ge 3$, the maximal $\bR$-degree for the evolutes of algebraic curves of degree $d$ is not less than $d(d-2)$. \end{proposition} \begin{proof} Recall that at each real inflection point of a real-algebraic curve $\Ga$, its evolute $E_\Ga$ goes to infinity, and its asymptote at infinity is the line normal to $\Ga$ passing through the respective inflection point. Notice that Klein's formula combined with the usual Pl\"ucker formula for non-singular plane curves implies that at most one third of the inflection points of a real-algebraic curve of degree $d$ are real, and that this bound is sharp, see \cite{Kl}. (The sharpness can be obtained by considering small deformations of real line arrangements.) The number of complex inflection points of a generic smooth plane curve of degree $d$ equals $3d(d-2)$. Thus there exists a smooth real-algebraic curve of degree $d$ with $d(d-2)$ real inflection points. The evolute of such a curve hits the line at infinity (transversally) at $d(d-2)$ real points. Therefore the evolute intersects any affine line sufficiently close to the line at infinity at $d(d-2)$ real points as well. Thus the maximal $\bR$-degree among the evolutes of plane curves of degree $d$ is at least $d(d-2)$. \end{proof} \begin{remark}{\rm At least for small $d$, the above lower bound is not sharp. Namely, for $d=2$, the maximal $\bR$-degree equals $4$, while the above bound is not applicable. For $d=3$, taking a usual cubic in the Weierstrass form, one obtains an example of an evolute whose $\bR$-degree is greater than or equal to $6$, see Fig.~\ref{fig:WeierNC}. (Notice that the number of real inflections of a nonsingular real cubic always equals $3$.)} \end{remark} Our second result solves the second part of Problem~\ref{R-deg EG-NG} about the maximal $\R$-degree of the curve of normals. \begin{proposition}\label{lm:Shustin} There exists a real-algebraic curve $\Ga$ of degree $d$ and a point $z\in \bR^2$ such that all $d^2$ complex normals to $\Ga$ passing through $z$ are, in fact, real. In other words, there exists $\Ga$ for which the $\R$-degree of $N_\Ga$ equals $d^2$, i.e., coincides with its complex degree. \end{proposition} \begin{proof} Recall that a crunode (which is a transversal intersection of two smooth real local branches) admits two types of real smoothing. Given a crunode $c$ and a point $z$ such that the straight segment $L$ connecting $z$ with $c$ is not tangent to the real local branches of the curve at $c$, there exists a smoothing of the curve at $c$ such that one obtains two real normals to this smoothing passing through $z$ and close to $L$, see the illustration in Fig.~\ref{pic:brusotti}. Now take an arrangement $\cA\subset \bR^2$ of $d$ real lines in general position and a point $z$ outside these lines. By Brusotti's theorem, smoothing all $d(d-1)/2$ nodes in the admissible way shown in Fig.~\ref{pic:brusotti}, we obtain $d(d-1)$ normals close to the straight segments joining $z$ with the nodes of $\cA$.
Additional $d$ normals are obtained by small deformations of the perpendiculars dropped from $z$ to each of the $d$ given lines. Thus, for a small deformation of $\cA$ that resolves all its nodes in the admissible way with respect to $z$, one gets $d^2$ real normals to the obtained curve through the point $z$, which implies that the $\R$-degree of its curve of normals is at least $d^2$. But the usual complex degree of this curve of normals is (at most) $d^2$. The result follows. \end{proof} \begin{figure} \centering \includegraphics[scale=0.44]{figure4.pdf} \caption{An admissible resolution (in red) of a crunode $c$ w.r.t. a point $z$ and two normals (in black) to the two local branches of the resolution.}\label{pic:brusotti} \end{figure} \begin{figure} \begin{tikzpicture}[scale=0.65] \draw [thick, blue] (0,0) arc (-30:30:2) ; \draw [thick, blue] (1.75,0) arc (30:150:1); \draw [thick, blue] (4.5,-1.) arc (-10:150:1.5) ; \draw [thick, blue] (7.4,-0.75) arc (20:170:1.5) ; \draw[thin, black] (-1.5,0.6) -- (9,-0.5); \draw[thin, red] (-.25,-0.4) -- (1.65,2.5); \node at (8,1.5) {L}; \node at (1.5,1.8) {T}; \draw[fill] (0.0,0) circle [radius=0.05]; \draw[fill] (1.75,0) circle [radius=0.05]; \draw[fill] (0.0,2) circle [radius=0.05]; \draw[fill] (4.52,-1) circle [radius=0.05]; \draw[fill] (7.4,-0.75) circle [radius=0.05]; \end{tikzpicture} \caption{Illustration of one possible case in the proof of Lemma~\ref{lm:astroid}.}\label{pic:illustr} \end{figure} \begin{lemma}\label{lm:astroid} The $\bR$-degree of the evolute of a non-empty generic real conic equals $4$. \end{lemma} \begin{proof} Any ellipse in $\bR^2$ can be reduced by a translation and rotation to the standard ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$, whose evolute is parameterized as $\{x = \frac{a^2-b^2}{a} \cos^3 t,\; y =\frac{b^2-a^2}{b} \sin^3 t\}$ and is given by the equation $(ax)^{2/3} + (by)^{2/3} = (a^2-b^2)^{2/3}$. Analogously, any hyperbola can be reduced by the same operations to the standard hyperbola $\frac{x^2}{a^2}-\frac{y^2}{b^2} = 1$, whose evolute is parameterized as $\{x = \frac{a^2+b^2}{a} \cosh^3 t,\; y =\frac{a^2+b^2}{b} \sinh^3 t\}$ and is given by the equation $(ax)^{2/3}- (by)^{2/3} = (a^2 + b^2)^{2/3}$. In both cases one can find a line in $\bR P^2$ such that in the affine chart obtained as the complement to this line, the evolute $E$ of the conic becomes a plane closed $4$-gon $\mathcal F$ with smooth and concave sides, see Fig.~\ref{fig:one} and Fig.~\ref{fig:hypRot}. An additional property, which is essential for our argument, is that no line can intersect any of the $4$ sides of $\mathcal F$ more than twice. To prove this in the case of the standard ellipse, observe that each side of the evolute can be considered as a convex/concave graph of a function $y(x)$ and therefore cannot intersect a real line more than twice, since otherwise there would exist an inflection point. As a consequence, we get that such a graph lies on one side of the tangent line at any of its points. The case of a hyperbola is similar. Let us show that no real line can intersect a closed $4$-gon $\mathcal F$ with smooth concave sides more than four times, provided that no side of $\mathcal F$ intersects a line more than twice. (Recall that we count intersection points without multiplicities.) Indeed, assume that there is a real line $L$ intersecting $\mathcal F$ at least six times transversally, see Fig.~\ref{pic:illustr}. (Notice that a line intersecting a closed simple curve transversally must intersect it an even number of times.)
Then $L$ intersects $\mathcal F$ either six or eight times, since it cannot intersect any side of $\mathcal F$ more than twice. Firstly, notice that by the concavity of $\mathcal F$, the line $L$ cannot intersect it $8$ times, since in this case the $4$-tuple of concave sides would not be able to close up forming a $4$-gon. A similar situation occurs in all cases when the intersection multiplicities of the line $L$ with the $4$ consecutive cyclically ordered sides of $\mathcal F$ are $2$-$2$-$1$-$1$. The final possibility is the intersection multiplicities $2$-$2$-$2$ with three consecutive sides of $\mathcal F$, which is illustrated in Fig.~\ref{pic:illustr}. Let us provide more details in this special case. Consider the leftmost of these three sides and draw the tangent line $T$ at its leftmost vertex, see Fig.~\ref{pic:illustr}. Then the last remaining side must lie in one halfplane of the complement to $T$, while the rightmost vertex of the rightmost of the three sides meeting $L$ twice must lie in the other halfplane of the complement to $T$. Thus again the $4$ sides cannot close up to form a $4$-gon. \end{proof} \section{On the maximal number of real vertices of a plane real-algebraic curve}\label{sec:vert} In this section we discuss Problem~\ref{cusps EGa}, providing a lower bound for the maximal number $\bR\Ve(d)$ of real vertices of a real-algebraic curve of degree $d$. Recall that by Proposition~\ref{prop:cusps+nodes}, the number $\bC\Ve(d)$ of complex vertices of a generic curve $\Ga$ of degree $d$ (i.e., the number of cusps of its evolute $\tilde E_\Ga^\bC$) equals $3d(2d-3)$. Below we obtain a lower bound for $\bR\Ve(d)$ via small deformations of real line arrangements. \begin{proposition}\label{prop:realcusps} \rm{(i)} The number of real cusps of the evolute of an arbitrary small deformation $\RR$ of any generic line arrangement $\cA\subset \bR^2$ of degree $d$ is at least $d(d-1)$ plus the number of bounded edges of $\cA$ respected by $\RR$. \noindent \rm{(ii)} If the line arrangement $\cA$ is given by the equation $\prod_{i=1}^d L_i=0$, where the $L_i$'s are linear forms describing the lines of $\cA$, then for any sufficiently small real number $\epsilon\neq 0$, the deformation $\RR$ given by $\prod_{i=1}^d L_i=\epsilon$ respects all the bounded edges of $\cA$. In this case the total number of real cusps on its evolute is greater than or equal to $d(2d-3)$. \end{proposition} \begin{remark} {\rm Apparently the number $d(2d-3)$ is the maximal possible number of cusps among the evolutes of small deformations of generic line arrangements of degree $d$. It is exactly equal to one third of the number $\bC\Ve(d)$ of complex cusps. } \end{remark} \begin{proof}[Proof of Proposition~\ref{prop:realcusps}] Given a generic line arrangement $\cA\subset \bR^2$, consider its complement $\bR^2\setminus \cA$. It consists of $\binom {d-1}{2}$ bounded and $2d$ unbounded convex polygons. Now take any small deformation $\RR$ of $\cA$ among real curves of degree $d$. By Proposition~\ref{lm:smallcomp}, in a sufficiently small neighborhood of each vertex $v$ of $\cA$ the smooth curve $\RR$ consists of two convex branches, on each of which the (absolute value of the) curvature attains a local maximum near $v$. These two local maxima correspond to two cusps on the evolute $E_{\RR}$ of $\RR$, which gives in total $2\binom {d}{2}=d(d-1)$ cusps corresponding to the local maxima of the absolute value of the curvature.
Additionally, by Corollary~\ref{lm:edges} (iii), every bounded edge of $\cA$ respected by $\RR$ supplies at least one additional vertex on $\RR$, which settles item (i). To settle (ii), observe that for a sufficiently small deformation $\RR$ of $\cA$ which twists a bounded edge $e$ of $\cA$, the long component of $\RR_e$ must intersect $e$, see Fig.~\ref{pic:edgeright}. On the other hand, the line arrangement $\cA$ given by $\prod_{i=1}^d L_i=0$ and its deformation $\prod_{i=1}^d L_i=\epsilon$ have no common points. Therefore $\RR$ respects every bounded edge of $\cA$ and, in particular, has no inflection points. The total number of bounded edges of any generic arrangement $\cA\subset \bR^2$ with $d$ lines equals $d(d-2)$. Thus we get at least $d(d-1)+d(d-2)=d(2d-3)$ extrema of the curvature on the smooth curve $\RR$ given by $\prod_{i=1}^d L_i=\epsilon$, for small real $\epsilon$. \end{proof} \begin{remark} {\rm Conjecturally, for any small deformation $\RR$ of a generic line arrangement $\cA$, the number of minima of the curvature on $\RR$ plus the number of inflection points on $\RR$ equals the number of bounded edges of $\cA$ (which is $d(d-2)$).} \end{remark} \section{On the maximal number of real diameters of a plane real-algebraic curve}\label{sec:diam} In this section we provide some information about Problem~\ref{nodesNGa}. Let us denote by $\bR\Diam(d)$ the maximal number of real diameters of real-algebraic curves of degree $d$ having no circles as irreducible components, and by $\bC\Diam(d)$ the number of complex diameters of a generic curve $\Ga$ of degree $d$. By definition, $\bC\Diam(d)$ equals the number $\delta_N$ of complex nodes of $N_\Ga$ not counting the special $d$-uple point at $\infty$, which by Proposition~\ref{prop:cusps+nodes} implies that \[\textstyle \bC\Diam(d)=\binom{d}{2}(d^2+d-4)=\frac{1}{2}d^4-\frac{5}{2}d^2+2d.\] In particular, $\bC\Diam(2)=2$, and the number of real diameters of an ellipse is also $2$. Further, $\bC\Diam(3)=24$. Based on our experiments, we conjecture that not all $24$ complex diameters can be made real. In other words, there is probably no (generic) real cubic $\Ga$ for which all $24$ complex nodes of its curve of normals $N_\Ga$ are crunodes. Below we provide a lower bound for $\bR\Diam(d)$ by using small deformations of real line arrangements. Assume that we have a generic arrangement $\cA$ of $d$ lines in $\bR^2$, meaning that no two lines are parallel and no three intersect at the same point. Again by Brusotti's theorem, we can find a small real deformation of $\cA$ within real curves of degree $d$ which resolves each of the $\binom{d}{2}$ crunodes of $\cA$ in a prescribed way. (For a generic $\cA\subset \bR^2$ of degree $d$, there exist $2^{\binom{d}{2}}$ topological types of its small real resolutions.) It turns out that under some additional generality assumptions, one can estimate the number of real diameters of any such small resolution $\RR$. To move further, we need to introduce more notions related to line arrangements and their small resolutions. \medskip \noindent{\bf Notation.} A line arrangement $\cA\subset \bR^2$ is called \emph{strongly generic} if, in addition to the conditions that no two lines are parallel and no three lines intersect at the same point, we require that no two lines are \emph{perpendicular}.
By an \emph{altitude} of a given line arrangement $\cA\subset \bR^2$ we mean a straight segment connecting a vertex $v$ of $\cA$ with a point on a line of $\cA$ not containing $v$ such that this segment is perpendicular to the chosen line, see Fig.~\ref{pic:altitude}. The line of $\cA$ to which an altitude $\alpha$ is orthogonal is called its \emph{base line}. (Notice that if $\cA\subset \bR^2$ is strongly generic, then no altitude of $\cA$ connects two of its vertices.) Finally, we call a segment of a line belonging to $\cA$ and connecting two of its vertices a \emph{side} of the arrangement $\cA$. (In particular, every bounded edge is a side, but a side can consist of several bounded edges.) Given two intersecting lines in $\bR^2$, we say that a pair of opposite sectors of their complement forms a \emph{cone}. Thus the complement to the union of two lines consists of two disjoint cones. (The closure of a cone will be called a \emph{closed cone}.) We will mainly be interested in cones in a small neighborhood of their respective vertices. Assume that we have chosen some type of a small real resolution $\RR$ of a given strongly generic arrangement $\cA$, which means that at each vertex $v$ of $\cA$ we have (independently of the other vertices) chosen which of the two local cones bounded by the lines whose intersection gives $v$ will merge, i.e., whose sectors will glue together after the deformation. (The two sectors of the other cone stay disjoint under a local deformation.) We will call the first cone \emph{merging} and the second one \emph{persisting}. For a given line $\ell\subset \bR^2$ and a point $p\in \ell$, denote by $\ell^\perp(p)$ the line passing through $p$ and orthogonal to $\ell$. For a cone bounded by two lines $\ell_1$ and $\ell_2$ intersecting at some vertex $v$, define its \emph{dual cone} as the union of all lines passing through $v$ which are orthogonal to some line passing through $v$ and lying in the initial cone. (The dual cone is bounded by $\ell^\perp_1(v)$ and $\ell^\perp_2(v)$.) Given a generic line arrangement $\cA\subset \bR^2$ of degree $d$, define its \emph{derived arrangement} $\cD\cA\subset \bR^2$ of degree $d(d-1)$ as follows. For any pair of lines $\ell_1$ and $\ell_2$ from $\cA$, let $v$ denote their intersection point. Then $\cD\cA$ consists of all lines $\ell_1^\perp(v)$ and $\ell_2^\perp(v)$, where $\ell_1$ and $\ell_2$ run over all pairs of distinct lines in $\cA$. If we choose some resolution $\RR$ of $\cA$, then at each vertex $v$ of $\cA$ we get the persistent cone $\cC_v(\RR)$ and its dual persistent cone $\cC_v^\perp(\RR)$, bounded by the two lines of $\cD\cA$ which are perpendicular to the lines of $\cA$ whose intersection gives $v$. If $\alpha$ is an altitude starting at $v$ and $\RR$ is some resolution, we say that $\alpha$ is \emph{admissible} with respect to $\RR$ if it lies inside $\cC_v(\RR)$ and \emph{non-admissible} otherwise. Finally, for a given strongly generic $\cA$, its two vertices $v_1$ and $v_2$ and any resolution $\RR$, we say that $v_1$ and $v_2$ \emph{see each other with respect to $\RR$} if $v_2 \in \cC_{v_1}^\perp(\RR)$ and $v_1\in \cC_{v_2}^\perp(\RR)$.
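For later reference, let us record the elementary counts hidden in these notions. A strongly generic arrangement $\cA$ of $d$ lines has $\binom{d}{2}$ vertices, and from each vertex one can drop exactly one altitude to each of the $d-2$ lines of $\cA$ not passing through it. Hence
\[
\#\{\text{vertices}\}=\binom{d}{2},\qquad \#\{\text{altitudes}\}=\binom{d}{2}(d-2),\qquad \#\{\text{pairs of vertices}\}=\binom{\binom{d}{2}}{2};
\]
these three quantities are exactly the ones entering Proposition~\ref{prop:Brus} and the conjecture following it.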
\begin{figure} \begin{tikzpicture}[scale=1.5] \draw[thin, blue] (0,1) -- (1.5,3); \draw[thin, blue] (-0.2,1.3) -- (1.7,2.7); \draw[fill] (0.75,2) circle [radius=0.05]; \node at (0.8,2.4) {v}; \draw[fill] (3.25,2) circle [radius=0.05]; \node at (3.2,2.4) {w}; \draw[thin, blue] (2.5,3) -- (4,1); \draw[thin, blue] (4,3) -- (2.5,1); \draw[thin, black,dashed] (1.5,1.0) -- (0,3); \draw[thin, black,dashed] (1.5,1.4) -- (0,2.6); \draw[thick, red, dashed] (0.75,2) -- (3.25,2); \draw[thin, black,dashed] (4,2.5) -- (2.5,1.5); \draw[thin, black,dashed] (4,1.3) -- (2.4,2.7); \draw[blue] (1.25,2.35) arc (40:44:3); \draw[blue] (0.1,1.56) arc (220:225:3); \draw[blue] (3.5,2.35) arc (85:95:3); \draw[blue] (3.55,1.6) arc (-82:-105:1.4); \end{tikzpicture} \caption{Two vertices $v$ and $w$ of a line arrangement $\cA$ which see each other (along the red dashed line) with respect to $\RR$. The lines of $\cA$ are shown in solid blue, while the dashed lines belong to the derived arrangement. The persistent cones are marked by the blue arcs.} \label{fig:Proper} \end{figure} \begin{lemma}\label{lm:local} Given a strongly generic line arrangement $\cA$, the following holds: \begin{itemize} \item[(i)] any small resolution $\RR$ of a vertex $v\in \cA$ has (at least) one short diameter near $v$; \item[(ii)] if an altitude $\alpha$ is admissible with respect to a small deformation $\RR$, then $\RR$ has (at least) two diameters close to $\alpha$; \item[(iii)] if $v_1$ and $v_2$ see each other with respect to a small deformation $\RR$, then $\RR$ has (at least) four diameters close to the straight segment $(v_1,v_2)$. \end{itemize} \end{lemma} \begin{proof} To settle (i), observe that a small deformation $\RR$ of $\cA$ restricted to a small neighborhood of a given vertex $v$ consists of two connected components which are convex ``towards each other''. For each point on one of the local branches, the distance to the other branch increases towards the endpoints of the branch. Therefore there exists a global minimum of the distance function between these branches, giving a diameter attained at a pair of points lying in the interiors of both branches (and not on their boundaries), see Fig.~\ref{pic:altitude}. To settle (ii), denote by $\ell_1$ and $\ell_2$ the pair of lines belonging to $\cA$ whose intersection gives the vertex $v$. Recall that the Gauss map sends a point of a curve to the normal line to the curve passing through this point. Observe that a small resolution $\RR$ creates near the vertex $v$ two short connected components such that the Gauss map on each of them moves from a line close to $\ell_1^\perp$ to a line close to $\ell_2^\perp$. The Gauss map of a small deformation of the base line of $\alpha$ (to which the altitude $\alpha$ is orthogonal) is close to the direction of the altitude. Thus, by continuity and admissibility, we will be able to find (at least) one normal on each of the two short components which is also orthogonal to the small deformation of the base line, see Fig.~\ref{pic:altitude}. To settle (iii), observe that the short components are very close to the corresponding hyperbolas (one for each crunode), see Fig.~\ref{pic:seeeach}. For the hyperbolas the statement is true, and the four diameters are located close to the segment connecting the two vertices. To prove this, notice that we have four pairs of branches of hyperbolas to consider, lying in four pairs of respective sectors. These pairs of sectors can be of three possible types.
We say that a pair of sectors ``look away from each other'' if each cone contains the apex of the other cone; ``look towards each other'' if their intersection is empty; and ``one looks after the other'' if the apex of one cone belongs to the other, but not the other way around. For a pair whose sectors ``look away from each other'', we get a maximum of the distance between the branches of the hyperbolas near their apices; for a pair whose sectors ``look towards each other'', we get a minimum of the distance between the branches of the hyperbolas near their apices; and for the pairs where ``one looks after the other'', we get a saddle point. Finally, the same phenomena persist for a sufficiently small deformation of the hyperbolas. To prove this, we consider the pairwise distance as a function on a small square, one side of which parameterizes the position of the point on one branch and the other side parameterizes the position of the point on the other branch. Then, if the sectors ``look away from each other'', the distance function is concave on each horizontal and each vertical segment of the square; if the sectors ``look towards each other'', the distance function is convex on each horizontal and each vertical segment of the square; and if ``one looks after the other'', the distance function is convex on each horizontal and concave on each vertical segment of the square. The result follows. \end{proof} \begin{figure}\includegraphics[scale=0.65]{fig7.pdf} \caption{A resolution for which the altitude (dashed) is admissible, creating two diameters (black). We also include a short diameter created near the vertex $v$ (in black).}\label{pic:altitude} \end{figure} \begin{figure} \includegraphics[scale=0.55]{4diameters2.pdf} \caption{A resolution of two vertices which see each other, resulting in four diameters (black).}\label{pic:seeeach} \end{figure} Lemma~\ref{lm:local} immediately implies the following claim. \begin{proposition} \label{prop:Brus} Given any small resolution $\RR$ of a strongly generic arrangement $\cA$ consisting of $d$ lines, the number of real diameters of $\RR$ is greater than or equal to $\binom{d}{2}$, plus twice the number of altitudes admissible with respect to $\RR$, plus four times the number of pairs of vertices which see each other with respect to $\RR$. \end{proposition} \begin{conjecture} The number of diameters of any real small resolution of any line arrangement of degree $d$ does not exceed \begin{equation}\label{eq:kappa} \binom{d}{2}+2\binom{d}{2}(d-2)+4\binom{\binom{d}{2}}{2}=\frac{1}{2}d^4-3d^2+\frac{5}{2}d. \end{equation} \end{conjecture} \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{figure9_1.pdf} \caption{Three lines (shown in blue), their altitudes (shown dotted in orange), and their derived arrangement (shown in black). Red segments indicate the resolution.}\label{fig:fig9left} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{figure9_2.pdf} \caption{A small resolution creating a smooth triangle-shaped oval and 3 infinite arcs, with 21 diameters, drawn in black.}\label{fig:fig9middle} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{figure9_3.pdf} \caption{The curve of normals corresponding to the resolution has 24 real singularities in the affine plane.
21 of these are crunodes, of which 18 are visible in this picture -- there are two more further out on the left and one more on the right.}\label{fig:fig9right} \end{subfigure}\hfill \caption{An example of a resolution of three lines with 21 diameters.} \label{fig:Triangle} \end{figure} \begin{example}{\rm Fig.~\ref{fig:Triangle} shows a special case of a line arrangement for $d=3$ and its small resolution, which results in $21$ diameters. The arrangement itself is shown by $3$ blue lines, its derived arrangement is shown by $6$ black lines, and the three altitudes are shown by dotted orange lines. The chosen resolution is indicated by $3$ short red segments at the vertices $A, B, C$, which show how we resolve the arrangement at its vertices. As a result of such a resolution we obtain a compact triangle-shaped oval together with $3$ infinite arcs. The three pairs of persistent sectors are $EAD - D'AE'$, $FBG - F'BG'$, and $HCI - H'CI'$. For this small resolution $\RR$, every two vertices see each other with respect to $\RR$, and each altitude is admissible. Therefore we get $3+2\cdot 3 + 4\cdot 3=21$ real diameters on this small resolution. We conjecture that 21 is the maximal number of real diameters which can be obtained by a small deformation of three real lines. }\end{example} Proposition~\ref{prop:Brus} implies the following lower bound for $\bR\Diam(d)$. \begin{proposition}\label{prop:thin} In the above notation, \begin{equation}\label{eq:newbound} \textstyle \bR\Diam(d)\ge \frac{1}{2}d^4-d^3+\frac{1}{2}d. \end{equation} \end{proposition} To settle Proposition~\ref{prop:thin}, we need to introduce a certain class of real line arrangements. We say that an arrangement $\cA$ is \emph{oblate} if the slopes of all lines in $\cA$ are close to each other. As a particular example, one can take $d$ lines tangent to the graph of $\arctan x$ at the $d$ values of the variable $x$ of the form $101, 102, 103, \dots , 100+d$. \begin{proof} Consider the special small resolution of an oblate arrangement for which at each vertex we choose the narrow cone to be the persistent one. Then the following diameters will be present. Every pair of vertices will see each other with respect to this resolution and will contribute $4$ diameters, and each vertex will contribute $1$ diameter. On the other hand, all altitudes will be non-admissible. Thus we get at least $4\binom{\binom{d}{2}}{2}+\binom{d}{2}=\frac{1}{2}d^4-d^3+\frac{1}{2}d$ diameters for this resolution. \end{proof} \begin{figure}[H] \begin{tikzpicture} \draw[thick, blue] (-0.7,2) -- (5.5,3); \draw[thick, blue] (-0.5,1.7) -- (5.5,3.5); \draw[thick, blue] (-0.3,2.4) -- (5.9,3); \draw[thick, blue] (-0.5,2.6) -- (4.7,2.6); \end{tikzpicture} \caption{Example of an oblate line arrangement.} \label{fig:oblate} \end{figure} \begin{remark} {\rm It might happen that using some other types of line arrangements and their special resolutions, one can improve the lower bound \eqref{eq:newbound} and get closer to $\frac{1}{2}d^4-3d^2+\frac{5}{2}d$. For $d=3$, Fig.~\ref{fig:Triangle} contains such a construction. However, for $d\ge 4$ it is not clear whether $\frac{1}{2}d^4-3d^2+\frac{5}{2}d$ is achievable by small deformations of line arrangements. It seems difficult to make all pairs of vertices see each other while simultaneously keeping all the altitudes admissible. } \end{remark} \section{On the maximal number of crunodes of the evolute}\label{sec:Ecru} In this section we discuss Problem \ref{nodesEGa}.
Recall that the number of complex nodes of the evolute of a generic curve of degree $d$ is given by \[ \textstyle \delta_E=\frac{1}{2}d(3d-5)(3d^2-d-6).\] Denote by $\delta^{\rm cru}_E(d)$ the maximal number of crunodes for the evolutes of real-algebraic curves of degree $d$. At the moment we have only a rather weak lower bound for this number of crunodes, which we again obtain by resolving a line arrangement. \begin{proposition}\label{prop:Ecru} We have $\delta^{\rm cru}_E(d)\ge\binom{\lfloor \frac{(d-2)^2}{2}\rfloor +d-2}{2}$. \end{proposition} \begin{proof} Take a line arrangement $\mathcal{A}$ and assume that in this arrangement we have a line which intersects two other lines at acute angles in such a way that there is a bounded edge $e$ between the two resulting nodes $A$ and $B$. By Brusotti's theorem there exists a resolution $\mathcal{R}$ in which $A$ and $B$ are resolved in such a way that the long component of the resulting local branch $\mathcal{R}_e$ has curvatures of different signs near the two nodes, the tangent direction changes by more than 90 degrees near each of the nodes, and $\mathcal{R}$ twists $e$. We call such a resolution a ``zig-zag''. By Corollary \ref{lm:edges} we know that the long component of $\mathcal{R}_e$ will have at least one inflection point as well as two maxima of the absolute value of the curvature close to the vertices. Considering the part of the evolute corresponding to $\mathcal{R}_e$, we see that these two maxima correspond to two cusps of the evolute, which, due to the different signs of the curvature, are oriented towards each other. Furthermore, the existence of the inflection point results in a line, perpendicular to the compact line segment, which is an asymptote of the evolute, in such a way that from each of the two cusps one branch of the evolute approaches this asymptote. Furthermore, since the resolution can be chosen in such a way as to ensure that the curvature radius becomes as large as necessary, the remaining two branches are guaranteed to intersect the other branches (the described situation can be seen in Fig.~\ref{fig:zigzac}). Now it follows that the two branches which tend to the asymptotic line are connected via the other two branches, and that the part of the evolute corresponding to $\mathcal{R}_e$ therefore contains a pseudo-line. Moreover, $\mathcal{R}$ can be chosen in such a way that every bounded edge that can be resolved in a zig-zag contributes a pseudo-line to the evolute. For $d\geq 3$, consider a set of $\lfloor \frac{d}{2}\rfloor$ parallel lines as well as another set of $\lfloor \frac{d+1}{2}\rfloor$ parallel lines which are almost parallel to the first set. The union of these two sets yields a line arrangement with $(d-2)+\lfloor \frac{(d-2)^2}{2}\rfloor$ bounded edges, and there exists a resolution of this arrangement which resolves each of the bounded edges in a zig-zag. The resulting evolute thus has $(d-2)+\lfloor \frac{(d-2)^2}{2}\rfloor$ pseudo-lines, which intersect pairwise, yielding the announced lower bound on the number of crunodes. \end{proof} \begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.62\columnwidth]{deg3_local_ziczag.pdf} \caption{An arrangement of three lines (in gray) together with a zig-zag deformation in red, with the inflection point on the compact segment between {\bf A} and {\bf B}.
The resulting evolute (in blue) has an asymptote (dashed in blue).} \label{fig:zigzac} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.62\columnwidth]{lines.pdf} \caption{Two sets of 3 lines. By resolving the 6 lines, changing the colours at every vertex and following the orientation indicated by the arrows, one obtains the desired resolution.} \label{fig:lines} \end{subfigure} \caption{Visualization of zig-zag resolutions.} \label{fig:zigtac1} \end{figure} \begin{remark}{\rm The above lower bound is obtained in a crude way, and it is surely possible to improve this construction. However, the bound is of the same degree as the complex bound.} \end{remark} \section{A florilegium of some real curves, their evolutes, and curves of normals}\label{sec:zoo} Here we present some classical examples to showcase the interplay between a real algebraic curve, its evolute, and its curve of normals. Since many of the equations defining these curves are quite lengthy, we collect most of them separately in Appendix \ref{subsec:formulae}. Many of our examples are known in the literature, see e.g. \cite[\S~9]{Sa}, but, to the best of our knowledge, their curves of normals have not been made explicit earlier. For the convenience of the readers, we collect the information about the standard invariants of the complexifications of the curves under consideration in Subsection~\ref{subs:tables}. Recall from Section \ref{sec:diam} that the real singular point (an ordinary $d$-uple point if the curve $\Ga$ is in general position) of the curve of normals corresponding to the line at infinity does not contribute to the count of the diameters of $\Ga$. \bigskip \subsection{Conics} \hfill\\ \noindent {\bf I.} As we saw in Example \ref{ex:basic} (2), for the standard ellipse $\Ga$ given by the equation \[\frac{x^2}{a^2}+\frac{y^2}{b^2}=1,\] its evolute $E_\Ga$ is a stretched astroid given by the equation \[(ax)^{2/3}+(by)^{2/3}=(a^2-b^2)^{2/3},\] see Fig.~\ref{fig:conic1b}. \begin{figure}[H] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example1_1.pdf} \caption{A rotated ellipse in blue and its evolute in red.}\label{fig:ellRotleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example1_2.pdf} \caption{The corresponding curve of normals shown in gold.}\label{fig:ellRotright} \end{subfigure} \caption{A rotated ellipse, its evolute, and its curve of normals.} \label{fig:ellRot} \end{figure} The evolute and the curve of normals are better seen in the case of the rotated ellipse $(x+y)^2+4(y-x)^2-1=0$, shown in Fig.~\ref{fig:ellRot}. The evolute is a sextic with 4 real cusps, 2 complex cusps, and 4 complex nodes. The curve of normals is a quartic with 2 crunodes (corresponding to the diameters of the ellipse), and its double point corresponding to the line at infinity $z=0$ has no real branches, i.e., is an acnode, see Fig.~\ref{fig:ellRotright}. Klein's formula for the evolute in this case reads as $6+2=4+4$, where the second $4$ on the right-hand side comes from the real cusps of the evolute, and the $2$ on the left-hand side comes from the acnode of the curve of normals. The ellipse has 4 vertices and 2 diameters.
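As a consistency check, these numbers agree with the complex counts of Propositions~\ref{prop:1} and \ref{prop:cusps+nodes} evaluated at $d=2$:
\[
\kappa_E=3d(2d-3)=6=4+2,\qquad \delta_E=\tfrac{1}{2}d(3d-5)(3d^2-d-6)=4,\qquad \delta_N=(d^2+d-4)\binom{d}{2}=2,
\]
i.e., the evolute has $4$ real and $2$ complex cusps and $4$ complex nodes, while the curve of normals has $2$ nodes besides the distinguished double point at infinity (which is not counted in $\delta_N$), here realized by the $2$ crunodes.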
\medskip \noindent {\bf II.} For the hyperbola $\Ga$ given by the equation \[\frac{x^2}{a^2}-\frac{y^2}{b^2}=1,\] its evolute $E_\Ga$ is a version of a Lam\'e curve given by the equation \[(ax)^{2/3}-(by)^{2/3}=(a^2+b^2)^{2/3}.\] \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example2_1.pdf} \caption{A rotated hyperbola in blue and its evolute in red.}\label{fig:hypRotleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example2_2.pdf} \caption{The same situation in a different affine chart.}\label{fig:hypRotmiddle} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example2_3.pdf} \caption{The corresponding curve of normals shown in gold.}\label{fig:hypRotright} \end{subfigure} \caption{A rotated hyperbola, its evolute, and its curve of normals.} \label{fig:hypRot} \end{figure} The evolute and the curve of normals are better seen in the case of the rotated hyperbola $(x+y)^2-4(y-x)^2-1=0$, see Fig.~\ref{fig:hypRot}. As in the case of an ellipse, the evolute of a hyperbola is a sextic with 4 real cusps, 2 complex cusps, and 4 complex nodes. This is best visible in a different affine chart (see Fig.~\ref{fig:hypRotmiddle}). The curve of normals is again a quartic, with 1 crunode and 1 acnode in addition to its double point corresponding to the line at infinity $z=0$, which is a crunode. In Fig.~\ref{fig:hypRotright} one can see the (affine) crunode corresponding to the unique real diameter of the hyperbola; this diameter connects the pair of points on its two branches minimizing the distance function. Klein's formula for the hyperbola reads exactly the same as for the ellipse. The hyperbola has 2 vertices and 1 diameter. \subsection{Cubics} \hfill\\ \noindent {\bf III.} The cuspidal cubic classically referred to as the \emph{cissoid} is given by the equation \[ (x^2+y^2)x=ay^2.\] This rational curve is \emph{circular}, meaning that it passes through the two circular points $(i:1:0)$ and $(-i:1:0)$ on the line at infinity. Besides the circular points, it intersects the line at infinity in the point $(0:1:0)$, which is an inflection point. The numerical characters of the curve are $d=3$, $\kappa =1$, $d^\vee= 2d-2-\kappa =3$, $\iota=1$. The evolute has two inflectional tangents passing through the circular points, so $\iota_E=2$. Hence the degree of the curve of normals is $\deg N_\Gamma=d+d^\vee-\iota_E=4$, and the degree of its evolute is $\deg E_\Gamma=3d+\iota-3\iota_E=4$, see column III in Table~\ref{tb:inv}. The evolute is given by the equation $\textstyle 27y^4+288a^2y^2+512 a^3x=0.$ It is apparent from the equation that the evolute has only one singular point, namely $(1:0:0)$, which has type $E_6$ (with tangent the line at $\infty$), hence a point of multiplicity 3 and $\delta$-invariant $3$. The curve of normals has an acnode $(-\frac{2}3:1:0)$ and two complex ordinary cusps $(i:2i:1)$ and $(-i:-2i:1)$ (corresponding to the inflection points of $E_\Gamma$). For the Klein--Schuh formula applied to the evolute, we note that the degree and the class (the latter equal to the degree of the curve of normals) coincide, that the curve of normals has one real singularity, an acnode, and that the evolute has a singular point of multiplicity 3 with one real local branch. Hence the formula checks: $4-4=(2-0)-(3-1)=0$.
Since the curve of normals has no crunodes and the evolute has no cusps in the affine real plane, the cissoid has no vertices or diameters, see Fig.~\ref{fig:conic}. \begin{figure}[H] \centering \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=0.52\columnwidth]{example3_1.pdf} \caption{A cissoid $(x^2+y^2)x=4y^2$ in blue and its evolute given by $27y^4+4608y^2+32768x=0$ in red.}\label{fig:conicleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example3_2.pdf} \caption{The corresponding curve of normals shown in gold is given by $36u^3v+uv^3+12u^2v^2+64u^2+96uv+128=0$.}\label{fig:conicright} \end{subfigure} \caption{The cissoid, its evolute, and its curve of normals.} \label{fig:conic} \end{figure} \smallskip \noindent {\bf IV.} The nodal cubic given by the equation \[5(x^2-y^2)(x-1)+(x^2+y^2)=0\] has a crunode at the origin and three real branches transversal to the line at $\infty$; hence it is in general position with respect to the line at infinity and the circular points, except that the point $(0:1:0)$ is an inflection point. Its numerical characters are $d=3$, $\delta=1$, $\kappa=0$, $\iota =3$, $d^\vee =3\cdot 2-2=4$. The degree of its curve of normals is $\deg N_\Gamma=3+4=7$, and the degree of its evolute is $\deg E_\Gamma =3\cdot 3+3=12$, see column IV of Table~\ref{tb:inv}. We find that the evolute has $11$ cusps in the affine plane, of which 5 are real. Furthermore, it has two real cusps and a real $E_6$-singularity at infinity. (The reason the third cusp is an $E_6$-singularity rather than an ordinary cusp is that the curve has an inflection point at $(0:1:0)$, with tangent transversal to the line at infinity. This gives an ``extra'' tangent to the evolute passing through the point $(0:1:0)^\perp =(1:0:0)$, so that the formula for the degree of the evolute can be written $2\cdot 3 +(3+1)+ (\iota -1)=12$.) Furthermore, $E_\Ga$ has $39$ complex nodes, of which $3$ are real (crunodes). \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example4_1.pdf} \caption{The nodal cubic in blue and its evolute in red, with marked singular points.}\label{fig:nodalcubicleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example4_2.pdf} \caption{The curve of normals of the nodal cubic in the $(u,v)$ chart, with marked singular points.}\label{fig:nodalcubicmiddle} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example4_all.pdf} \caption{The curve of normals in the $(u,w)$ chart, with marked singular points.}\label{fig:nodalcubicright} \end{subfigure} \caption{The nodal cubic $5(x^2-y^2)(x-1)+(x^2+y^2)=0$, its evolute, and its curve of normals.} \label{fig:nodalcubic} \end{figure} Using Maple, we determine that $N_\Ga$ has 13 singular points, all of which are real. 10 of these singular points can be seen in Fig.~\ref{fig:nodalcubic}. Furthermore, $N_\Ga$ has a triple point at $(0:1:0)$ (which does not contribute to the count of diameters) and two double points at $(- \frac{5}{8}:1:0)$ and $(-\frac{5}{3}:1:0)$. All singular points can be seen in the different affine chart shown in Fig.~\ref{fig:nodalcubicright}, where we see 10 crunodes and two acnodes. Hence this nodal cubic has 5 vertices and 10 diameters.
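These degrees are consistent with Proposition~\ref{prop:1}: for $d=3$, $\delta=1$, $\kappa=0$ one indeed gets
\[
\deg E_\Gamma=3d(d-1)-6\delta-8\kappa=18-6=12,\qquad \deg N_\Gamma=d^2-2\delta-3\kappa=9-2=7.
\]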
\smallskip \noindent {\bf V.} Our next example is a generic cubic in the Weierstrass form given by the equation \[y^2+x(x-2)(x+1)=0.\] The line at infinity is an inflectional tangent, at the point $(0:1:0)$. Hence the curve is not in general position. Its numerical characters are $d=3$, $\iota=9$, $d^\vee=6$. According to \cite[Thm.~8]{JoPe2}, the degree of its curve of normals is $\deg N_\Gamma=d+d^\vee - \iota_E$, with $\iota_E=3-1=2$, hence $\deg N_\Gamma=7$, which agrees with the computation of the equation for $N_\Gamma$. The degree of its evolute is $\deg E_\Gamma=3d+\iota-3\iota_E=12$, which also agrees with the computation of the equation for the evolute. According to Proposition~\ref{pr:7}, \[\kappa(E)=6d-3d^\vee+3\iota-5\iota_E=18-18+27-10=17,\] see column V in Table~\ref{tb:inv}. The evolute has an inflection point at $(0:1:0)^\perp =(1:0:0)$, with tangent the line at infinity that intersects the evolute with multiplicity 4. Indeed, the evolute has 17 cusps, of which 9 are real (we see only 7 of these in Fig.~\ref{fig:WeierNCleft}), and $\binom{11}{2}-1-17=37$ nodes, of which 3 are real crunodes, see Fig.~\ref{fig:WeierNCleft}. \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example5_1.pdf} \caption{The Weierstrass cubic in blue and its evolute in red.}\label{fig:WeierNCleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example5_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:WeierNCmiddle} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example5_3.pdf} \caption{The curve of normals in the $(v,w)$ chart.}\label{fig:WeierNCright} \end{subfigure} \caption{The Weierstrass cubic $y^2+x(x-2)(x+1)=0$, its evolute, and its curve of normals in different charts.} \label{fig:WeierNC} \end{figure} We see from Fig.~\ref{fig:WeierNCmiddle} that $N_\Ga$ has a triple point at $u=0,v=0$. This comes from the fact that the line $y=0$ is perpendicular to $\Ga$ at the three distinct points $(-1: 0:1)$, $(0:0:1)$, $(2:0:1)$, and gives 3 diameters. Additionally, $N_\Ga$ has an $E_6$-singularity (corresponding to the inflection point of $E_\Ga$), with $\delta$-invariant 3. The remaining singular points of $N_\Ga$ are $\binom{\deg N_\Ga -1}{2}-g-\binom{3}{2}-3=15-1-3-3=8$ nodes, of which 2 are real: 1 is an acnode and 1 is a crunode. The Weierstrass cubic has 9 vertices and $4$ diameters. \medskip \noindent {\bf VI.} Our final example among cubics is a nonsingular cubic given by the equation \[(x^2 - y^2) (x - 1) + 1/64=0\] with three real branches transversal to the line at infinity, see Fig.~\ref{fig:GenCubicNC}. This is a curve in general position, but the three intersection points with the line at infinity are inflection points. By Proposition~\ref{prop:1}, its curve of normals has degree $\deg N_\Gamma =3+6=9$ and its evolute has degree $\deg E_\Gamma=3d+\iota=18$. Using Maple, we find that the evolute $E_\Ga$ has 21 ordinary complex cusps and 105 complex nodes, see column VI of Table~\ref{tb:inv}. The three cusps on the line at infinity are $E_6$-singularities, for the same reason as for the nodal cubic IV. We find 18 real singular points in the $(x,y)$-plane, of which 9 are real cusps and 9 are crunodes (not all of them can be seen in Fig.~\ref{fig:GenCubicleft}).
Using Maple, we find that the curve of normals $N_\Ga$ has 2 real triple points, one at $(0:0:1)$ and one at $(0:1:0)$. The first contributes 3 diameters of the curve, whereas the second corresponds to the line $z=0$ and does not contribute to the diameters. Furthermore, we find 21 complex nodes, of which three are on the line $w=0$, namely at the real points $(\alpha:1:0)$, where $\alpha$ is any of the three real roots of $\phi(t)=t^3 + 128t^2 + 256t + 128$. Of the remaining 18 nodes, 10 are real. The real affine situation in the $(u,v)$-chart is visible in Fig.~\ref{fig:GenCubicmiddle}, where we see two acnodes among the 10 visible real affine nodes. In Fig.~\ref{fig:GenCubicright} we can see the triple point $(0:1:0)$ and two of the points $(\alpha:1:0)$ -- the third of these, an acnode, is further out and not shown, to allow for a detailed view around the origin. We thus have that 3 of the 13 real nodes are acnodes. The cubic has 9 vertices and 13 diameters. \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example6_1.pdf} \caption{The cubic in blue and its evolute in red.}\label{fig:GenCubicleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example6_2.pdf} \caption{The curve of normals in the $(u,v)$ chart, with marked singular points.}\label{fig:GenCubicmiddle} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example6_33.pdf} \caption{The curve of normals in the $(u,w)$ chart, with marked singular points.}\label{fig:GenCubicright} \end{subfigure} \caption{A nonsingular cubic with three real branches transversally intersecting the line at infinity, its evolute, and its curve of normals in two charts.} \label{fig:GenCubicNC} \end{figure} \subsection{Rational curves of higher degree}\label{subs:higher} \hfill\\ \noindent {\bf VII.} The ampersand curve is a rational curve given by the equation \[\textstyle (-1 + x) (-3 + 2 x) (-x^2 + \frac{1}{2}y^2) - 4 (-2 x + x^2 + \frac{1}{2}y^2)^2=0.\] It intersects the line at infinity transversally in four (non-circular) points, hence is in general position. Its numerical characters are $d=4, d^\vee =6, \delta=3, \kappa=0, \iota=6, \tau =4$. We see 2 real inflection points on the curve, so by Klein's formula $4+2\tau'+\iota'=6+2\delta'+\kappa'$, we get $4+2\tau'+2= 6+0+0$ (since all nodes are crunodes, $\delta'=0$), hence $\tau'=0$ (i.e., there are no conjugate double tangents). The degree of its curve of normals is $\deg N_\Gamma=4+6=10$ and that of its evolute is $\deg E_\Gamma=12 +6= 18$. We get $ \kappa(N)=0$, $\delta_N=\delta(N)-\binom{4}2=36-6=30$ and $\kappa(E)=24$, $\delta_E=\delta(E)-\kappa(E)=136-24=112$, see column VII of Table~\ref{tb:inv}. The evolute has 24 ordinary cusps, of which 6 are real and lie in the $(x,y)$-plane. It has 112 nodes, of which 4 are crunodes and 4 are acnodes (except for 2 acnodes, these can all be seen in Fig.~\ref{fig:ambersandleft}). The curve of normals has a quadruple point at $(0:1:0)$. Using Maple we find that $N_\Ga$ has 30 additional singular points, all nodes. Of these, 14 are real -- 11 of these points can be seen in Fig.~\ref{fig:ambersandright}, and the nodes at $(-2:1:0)$ and $(-159/209 \pm \sqrt\frac{201}{209}: 1:0)$ can be seen in Fig.~\ref{fig:ambersvw}. Hence we can see all the real singular points by considering the two affine charts shown in Fig.~\ref{fig:ambersand}. The point $(0:0:1)$ is an acnode.
Note that the ``acnode'' $(0:1:0)$ we see in Fig.~\ref{fig:ambersvw} is the ordinary quadruple point corresponding to the line $z=0$ -- this point has four complex branches. Thus we see that $N_\Ga$ has 13 crunodes. The ampersand curve has 6 vertices and 13 diameters. \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example7_1.pdf} \caption{The ampersand curve in blue with its evolute in red.}\label{fig:ambersandleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example7_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:ambersandright} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example7_3.pdf} \caption{The curve of normals in the $(u,w)$ chart, with marked singular points.}\label{fig:ambersvw} \end{subfigure}\hfill \caption{The ampersand curve, its evolute, and its curve of normals.} \label{fig:ambersand} \end{figure} \medskip \smallskip \noindent {\bf VIII.} The cross curve is a rational curve given by the equation \[x^2y^2-4x^2-y^2=0.\] The curve is not in general position: it intersects the line at infinity in two points that are crunodes of the curve, with tangents transversal to the line at infinity, and with each branch of the crunodes having an inflection point at the crunode. The curve has an acnode at the origin $(0:0:1)$. Its numerical characters are $d=4$, $d^\vee=6$, $\delta=3$, $\kappa =0$, $\iota=6$, $\tau=4$. Klein's formula gives $4+2\tau' +\iota' =6+2\delta'+\kappa'$, hence $4+2\tau'=6+2$ (since one node is an acnode, $\delta'=1$), hence $\tau'=2$. We get $\deg N_\Ga=d+d^\vee=10$, $\kappa(N)=0$, $\delta_N=\delta(N)-\binom{4}2=36-6=30$, and $\deg E_\Ga=3d+\iota=18$. The evolute has 16 ordinary cusps, of which 4 are real, and two real singularities on the line at infinity, each with multiplicity 6, $\delta$-invariant 18, and 2 branches. (Each of these singularities is the coming-together of two $E_6$-singularities.) The evolute has $\kappa(E)=24$, and $\delta_E=\delta(E)-16-2\cdot 18=84$ nodes, of which two are real; both are acnodes. The curve of normals has a quadruple point at $(0:1:0)$ (which does not contribute to the number of diameters) with four real branches and $\delta$-invariant 12 -- it can be seen in Fig.~\ref{fig:crossvw}. Furthermore, using Maple, we find that the curve of normals has 24 double points in the affine $(u,v)$-plane, namely $(\pm \sqrt{2}:0:1)$, $(\pm \sqrt{-2}:0:1)$, and $(\pm \sqrt{\alpha}: \pm\sqrt{\gamma(\alpha)}:1)$, where $\alpha$ is any of the five roots of $\phi(t)= t^5 - 6t^4 + 44t^3 - 56t^2 + 24t - 4$ and $\gamma(\alpha)=\frac{1}{3}(84+6\alpha^4-33\alpha^3+252\alpha^2-219\alpha)$. Thus in total we have 6 real affine nodes in the $(u,v)$-chart, 4 of which are acnodes, as can be seen in Fig.~\ref{fig:crossright}. The cross curve has 4 vertices and 2 diameters.
\begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example8_1.pdf} \caption{The cross curve in blue and its evolute in red, with marked singular points.}\label{fig:crossleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example8_2.pdf} \caption{The curve of normals in the $(u,v)$ chart with marked singular points.}\label{fig:crossright} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example8_3.pdf} \caption{The curve of normals in the $(u,w)$ chart with marked singular points.}\label{fig:crossvw} \end{subfigure}\hfill \caption{The cross curve, its evolute, and its curve of normals.}\label{fig:cross} \end{figure} \medskip \smallskip \noindent {\bf IX.} The bean curve is a rational curve given by the equation \[x^4+x^2y^2+y^4-x(x^2+y^2)=0.\] The curve intersects the line at infinity in four distinct (non-circular) points, so it is in general position. It has an ordinary triple point at the origin, with one real branch. The point $(1:0:1)$ is an inflection point with tangent line $x=1$. The intersection number of the curve with its tangent at this point is 4. The corresponding point on its dual curve is an $E_6$-singularity (a cusp of multiplicity 3), hence $\iota' =2$. The numerical characters of the curve are $d=4$, $d^\vee=6$, $\delta=3$, $\kappa =0$, $\iota=6$, $\tau=4$. Klein--Schuh's formula gives $4-6=(3-1) -\iota'-2\tau'=2-2-2\tau'$, hence $\tau'=1$. Its curve of normals has $\deg N_\Ga=10$, $\kappa(N)=0$, and $\delta(N)=36$, and its evolute has $\deg E_\Ga=18$, $\kappa(E)=24$, $\delta(E)=136$, see column IX of Table~\ref{tb:inv}. The evolute has 112 nodes and 24 cusps. There are 5 real cusps in the $(x,y)$-plane and one at $(0:1:0)$. This last one corresponds to the higher order inflection point $(1:0:1)$ of the curve, which gives a critical point of the curvature radius function; hence this cusp corresponds to a vertex of $\Ga$. We see 3 crunodes and 5 acnodes in Fig.~\ref{fig:beanleft}. The curve of normals has an ordinary quadruple point at $(0:1:0)$, with all branches complex. In addition, $N_\Ga$ has a crunode visible in Fig.~\ref{fig:beanright}, and singularities at $(-\frac{3}{2}:1:0)$, $(1 - \sqrt{5}: 1:0)$, $(1 + \sqrt{5}:1:0)$, which can be seen in the $(u,w)$-chart shown in Fig.~\ref{fig:beanvw}; these give two acnodes and an additional crunode. The bean curve has 6 vertices and 2 diameters. \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example9_1.pdf} \caption{The bean curve in blue with its evolute in red.}\label{fig:beanleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example9_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:beanright} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example9_3.pdf} \caption{The curve of normals in the $(v,w)$ chart with marked singular points.}\label{fig:beanvw} \end{subfigure}\hfill \caption{The bean curve, its evolute, and its curve of normals.} \label{fig:bean} \end{figure} \smallskip \noindent {\bf X.} The trifolium is a rational curve given by the equation \[(x^2+y^2)^2-x^3+3xy^2=0.\] The trifolium is a \emph{2-circular} curve: the highest degree homogeneous part of its equation is divisible by $(x^2+y^2)^2$.
The curve touches the line at infinity at the two circular points. It has one singular point, an ordinary triple point at the origin $(0:0:1)$. Its numerical characters are $d=4$, $d^\vee=6$, $\delta=3$, $\kappa =0$, $\iota=6$, $\tau=4$. Klein--Schuh's formula gives $4-6=-\iota'-2\tau'=-2\tau'$, since there are no real inflection points. Hence $\Ga$ has one pair of conjugate tangents. The evolute has 24 nodes and 12 cusps in total. None of the nodes is real, i.e., the evolute has neither acnodes nor crunodes. Of the 12 cusps, $6$ are real and $6$ are non-real. All real cusps can be seen in Fig.~\ref{fig:trifoliumleft}. For the curve of normals we get from Equation~(\ref{degnorm}) $\deg N_\Ga=4+6-2(2-1+1)=6$, and for the evolute we get from Equation~(\ref{degev}) $\deg E_\Ga=2\deg N_\Ga +d^\vee-2d+\kappa-\iota(E)=12+6-8+0-0=10$, see column X of Table~\ref{tb:inv}. The curve of normals has 7 double points in the affine $(u,v)$-plane, all of which are crunodes, as can be seen in Fig.~\ref{fig:trifoliumright}. Furthermore, it has three real nodes on the line at infinity $w=0$: two crunodes at $(\frac{1}{3} + \frac{\sqrt{33}}{3}:1: 0)$ and $(\frac{1}{3} - \frac{\sqrt{33}}{3}:1: 0)$, and an acnode at $(0:1:0)$. Its evolute is a hypocycloid, with 6 real cusps in the affine plane, and is again a 2-circular curve. The trifolium has 6 vertices and 9 diameters. \begin{figure}[H] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example10_1.pdf} \caption{The trifolium in blue with its evolute in red.}\label{fig:trifoliumleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example10_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:trifoliumright} \end{subfigure} \hfill \caption{The trifolium, its evolute, and its curve of normals.} \label{fig:trifolium} \end{figure} \smallskip \noindent {\bf XI.} The quadrifolium is a rational curve given by the equation \[(x^2 + y^2)^3 - 4x^2y^2=0.\] The quadrifolium is a 3-circular curve. It has cusps (with tangent the line at infinity) at the two circular points. Its quadruple point at the origin has $\delta$-invariant 8. Its numerical characters are $d=6$, $d^\vee=8$, $\delta=10$, $\kappa =2$, $\iota=8$, $\tau=21$. The degree of the curve of normals is $\deg N_\Ga=6+8-2(3-2+2)=8$, by Equation~(\ref{degnorm}). It has 17 nodes in the affine $(u,v)$-plane, all real and visible in Fig.~\ref{fig:quadfoliumright}, and four real nodes $(0:1:0)$, $(1: 0:0)$, $(\frac{3}{4}\sqrt{6}: 1: 0)$, and $(-\frac{3}{4}\sqrt{6}: 1: 0)$ on the line $w=0$. All of these nodes are crunodes, except for $(0:1:0)$ which is an acnode. The evolute is a hypocycloid of degree $\deg E_\Ga=2\deg N_\Ga +d^\vee-2d+\kappa-\iota_E=16+8-12+2-0=14$, by (\ref{degev}). It has 8 real cusps in the affine plane and no other real singularities. The quadrifolium has 8 vertices and 20 diameters.
\begin{figure}[H] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example11_1.pdf} \caption{The quadrifolium in blue with its evolute in red.}\label{fig:quadfoliumleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example11_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:quadfoliumright} \end{subfigure} \hfil \caption{The quadrifolium, its evolute, and its curve of normals.} \label{fig:quadfolium} \end{figure} \medskip \smallskip \noindent {\bf XII.} The D\"urer folium is a rational curve given by the equation \[(x^2 + y^2) (2(x^2 + y^2) - 1)^2 - x^2=0.\] It is a 3-circular curve. It has $E_6$-singularities at the circular points, and two crunodes and one $A_3$-singularity in the affine plane. Its numerical characters are $d=6$, $d^\vee=6$, $\delta=10$, $\kappa =4$, $\iota=4$, $\tau=10$, see column XII of Table~\ref{tb:inv}. Its evolute is a 3-circular epicycloid of degree $\deg E_\Ga=10$, with 4 real cusps in the affine plane, and no other real singularities. Its curve of normals has degree $\deg N_\Ga=6+6-2(3-3+3)=6$ and 10 double points, all real. Of these, 7 are in the affine $(u,v)$-plane and are crunodes, as can be seen in Fig.~\ref{fig:duererright}. Additionally, the curve of normals has three crunodes on the line at infinity $w=0$. These are $(1:0:0)$, $(\frac{3}{2}\sqrt{6}: 1: 0)$ and $(-\frac{3}{2}\sqrt{6}: 1: 0)$. The D\"urer folium has 4 vertices and 10 diameters. \begin{figure}[H] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example12_1.pdf} \caption{The D\"urer folium in blue with its evolute in red.}\label{fig:duererleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example12_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:duererright} \end{subfigure} \hfil \caption{The D\"urer folium, its evolute, and its curve of normals.} \label{fig:duerer} \end{figure} \smallskip \noindent {\bf XIII.} The nephroid is a rational curve given by the equation \[4(x^2 + y^2 - 1)^3 - 27y^2=0.\] It is a 3-circular curve, with an $E_6$ singularity at each circular point. It has two real ordinary cusps, two complex nodes, and no inflection points. Its numerical characters are $d=6$, $d^\vee=4$, $\delta=10$, $\kappa =6$, $\iota=0$, $\tau=3$. Klein--Schuh's formula checks: $6-4=2(2-1)-0=2$. The nephroid is an epicycloid, hence its evolute $E_\Ga$ is again a nephroid (rotated by 90 degrees), and hence also 3-circular. It has equation \[(4(x^2+y^2)-1)^3-27x^2=0.\] $\deg E_\Ga=6$, $\kappa(E)=6$, $\iota(E)=0$, and $\delta_E =10-2\cdot 1-2\cdot 3=2$. Since the curve of normals is the dual of the evolute, we have $\deg N_\Ga=d^\vee =4$, $\kappa(N)=\iota(E)=0$, and $\delta_N=\delta(N)=3$, see column XIII of Table~\ref{tb:inv}. The curve of normals has three crunodes (we see two of them in Fig.~\ref{fig:neph}; the third corresponds to the line $x=0$). The nephroid has 2 vertices and 3 diameters.
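As a numerical spot-check of the nephroid computation in {\bf XIII} (ours, assuming the standard parametrization $x=\tfrac12(3\cos t-\cos 3t)$, $y=\tfrac12(3\sin t-\sin 3t)$, which satisfies the nephroid equation since $x^2+y^2-1=3\sin^2 t$ and $y=2\sin^3 t$), one can verify that the centers of curvature lie on $(4(x^2+y^2)-1)^3=27x^2$:
\begin{verbatim}
# Numerical spot-check (ours) of the nephroid evolute in Example XIII.
import math

def center_of_curvature(t):
    x  = (3*math.cos(t) - math.cos(3*t))/2
    y  = (3*math.sin(t) - math.sin(3*t))/2
    xp = (-3*math.sin(t) + 3*math.sin(3*t))/2
    yp = ( 3*math.cos(t) - 3*math.cos(3*t))/2
    xpp = (-3*math.cos(t) + 9*math.cos(3*t))/2
    ypp = (-3*math.sin(t) + 9*math.sin(3*t))/2
    D = xp*ypp - yp*xpp
    X = x - yp*(xp**2 + yp**2)/D
    Y = y + xp*(xp**2 + yp**2)/D
    return X, Y

for t in (0.3, 1.1, 2.0):      # avoid the cusps at t = 0 and t = pi
    X, Y = center_of_curvature(t)
    print((4*(X**2 + Y**2) - 1)**3 - 27*X**2)   # ~ 0 up to rounding
\end{verbatim}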
\begin{figure}[H] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example13_1.pdf} \caption{The nephroid in blue with its evolute in red.}\label{fig:nephleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example13_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:nephright} \end{subfigure} \caption{The nephroid, its evolute, and its curve of normals.} \label{fig:neph} \end{figure} \smallskip \noindent {\bf XIV.} Cayley's sextic is a rational curve given by the equation \[4(x^2 + y^2 - x)^3 - 27(x^2 + y^2)^2=0.\] It is 3-circular, with $E_6$ singularities at the two circular points. It has two other singular points: one crunode and one $E_6$ singularity at the origin. Its numerical characters are $d=6$, $d^\vee=4$, $\delta=10$, $\kappa =6$, $\iota=0$, $\tau=3$, see column XIV of Table~\ref{tb:inv}. Its evolute $E_\Ga$ is a nephroid, hence $\deg E_\Ga =6$ and $\kappa(E)=6$. Its dual, the curve of normals of $\Ga$, has degree 4 and 3 crunodes (we see one in the affine $(u,v)$-plane). Cayley's sextic has 2 vertices and 3 diameters. \begin{figure}[H] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example14_1.pdf} \caption{Cayley's sextic in blue with its evolute in red.}\label{fig:caleyleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.62\columnwidth]{example14_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:caleyright} \end{subfigure} \hfil \caption{Cayley's sextic, its evolute, and its curve of normals.} \label{fig:caley} \end{figure} \medskip \smallskip \noindent {\bf XV.} The ranunculoid is an epicycloid given by the equation \noindent{\small \begin{dmath*} 0=-52521875 + 93312 x^5 + x^{12} - 1286250 y^2 - 933120 x^3 y^2 - 32025 y^4 + 466560 x y^4 - 812 y^6 - 21 y^8 - 42 y^{10} + y^{12} + 6 x^{10} (-7 + y^2) + 3 x^8 (-7 - 70 y^2 + 5 y^4) + 4 x^6 (-203 - 21 y^2 - 105 y^4 + 5 y^6) + 3 x^4 (-10675 - 812 y^2 - 42 y^4 - 140 y^6 + 5 y^8) + 6 x^2 (-214375 - 10675 y^2 - 406 y^4 - 14 y^6 - 35 y^8 + y^{10})\\ =(x^2+y^2)^6+\text{terms of lower degree}. \end{dmath*}} This is a rational curve which is 6-circular, with cuspidal singularities with multiplicity 6 and $\delta$-invariant 15 at the two circular points, and five additional real cusps. Its numerical characters are $d=12$, $d^\vee=7$, $\delta=55$, $\kappa =15$, $\iota=0$, $\tau=15$. Its evolute $E_\Ga$ is again a ranunculoid, hence 6-circular, and has $\deg E_\Gamma=12$. For the curve of normals, we get $\deg N_\Ga=7$, $\kappa(N)=0$, $\delta_N=\delta(N)=15$, see column XV of Table~\ref{tb:inv}. All of the nodes are real and we can see 12 of these in Fig.~\ref{fig:ranright}. Additionally, there are three nodes on the line at infinity, at the points $(\alpha:1:0)$, where $\alpha$ runs through the roots of $\phi(t)=125t^3 - 100t^2 - 20t + 8.$ Changing again to a different affine chart, all of the nodes are visible in Fig.~\ref{fig:ranuw}, and we find that $N_\Ga$ has 15 crunodes. The ranunculoid has 5 vertices and 15 diameters.
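The implicit equation above can likewise be tested against the standard five-cusped epicycloid parametrization $x=6\cos t-\cos 6t$, $y=6\sin t-\sin 6t$ (our assumption, consistent with the cusp at $(5,0)$; the check below is exact at $t=0$ and $t=\pi$, and numerically zero elsewhere):
\begin{verbatim}
# Numerical check (ours) that the epicycloid parametrization satisfies
# the ranunculoid equation of Example XV.
import math

def P(x, y):
    return (-52521875 + 93312*x**5 + x**12 - 1286250*y**2
            - 933120*x**3*y**2 - 32025*y**4 + 466560*x*y**4 - 812*y**6
            - 21*y**8 - 42*y**10 + y**12 + 6*x**10*(-7 + y**2)
            + 3*x**8*(-7 - 70*y**2 + 5*y**4)
            + 4*x**6*(-203 - 21*y**2 - 105*y**4 + 5*y**6)
            + 3*x**4*(-10675 - 812*y**2 - 42*y**4 - 140*y**6 + 5*y**8)
            + 6*x**2*(-214375 - 10675*y**2 - 406*y**4 - 14*y**6
                      - 35*y**8 + y**10))

for t in (0.0, 0.7, 1.9, math.pi):
    x = 6*math.cos(t) - math.cos(6*t)
    y = 6*math.sin(t) - math.sin(6*t)
    print(P(x, y))   # ~ 0 (the terms of size ~1e10 cancel)
\end{verbatim}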
\begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example15_1.pdf} \caption{The ranunculoid in blue with its evolute in red.}\label{fig:ranleft} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example15_2.pdf} \caption{The curve of normals in the $(u,v)$ chart.}\label{fig:ranright} \end{subfigure} \hfil \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{example15_3.pdf} \caption{The curve of normals in the $(u,w)$ chart.}\label{fig:ranuw} \end{subfigure} \caption{The ranunculoid, its evolute, and its curve of normals.} \label{fig:ran} \end{figure} \subsection{Tables of invariants of the complexifications of the above examples}\label{subs:tables} \hfill\\ Let $\Gamma\subset \mathbb R^2$ be a real, plane curve. Let $C_0\subset \mathbb P^2_{\mathbb C}$ denote the projective closure of its complexification and $\nu:C\to C_0$ the normalization map. We are interested in studying the evolute $E_C$ and the curve of normals $N_C$ of $\Gamma$ and of $C$. In Table~\ref{tb:inv} we give the numerical characters (cf. Appendix A) of the curve $C$, its evolute, and its curve of normals for each of the fifteen examples in \S~8.1. We set $d:=\deg C$, $d^\vee :=\deg C^\vee$, $\delta:=$ the $\delta$-invariant of $C_0$, $\kappa:=$ the degree of the ramification locus of $\nu$, $\tau:=$ the $\delta$-invariant of $C^\vee$, $\iota:=$ the degree of the ramification divisor of $C\to C^\vee$, $d(E):=\deg E_C$, $d(N):=\deg N_C$, $\delta(E):=$ the $\delta$-invariant of $E_C$, $\kappa(E):=$ the degree of the ramification divisor of $C\to E_C$, $\tau(E):=$ the $\delta$-invariant of $E_C^\vee=N_C$, $\iota(E):=$ the degree of the ramification divisor of $C\to E_C^\vee =N_C$. Since we have $N_C=E_C^\vee$, by duality, we have $\kappa(E)=\iota(N)$, $\iota(E)=\kappa(N)$, $\tau(E)=\delta(N)$, and $\tau(N)=\delta(E)$. \begin{center} \begin{table}[h] \tablestyle[sansbold] \begin{tabular}{||c c c c c c c c c c c c c c c c||} \hline & I & II & III & IV & V & VI & VII & VIII & IX & X & XI & XII & XIII & XIV & XV\\ [0.5ex] \toprule\hline $d$ & 2 & 2 &3 & 3 & 3 & 3 & 4 & 4 & 4 & 4 & 6 & 6 & 6 & 6 & 12 \\ \hline $d^\vee$ & 2 & 2 & 3 & 4 & 6 & 6 & 6 & 6 & 6 & 6 & 8 & 6 & 4 & 4 & 7 \\ \hline $\delta$ & 0 & 0 & 1 & 1 & 0 & 0 & 3 & 3 & 3 & 3 & 10 & 10 & 10 & 10 & 55 \\ \hline $\kappa$ & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 4 & 6 & 6 & 15 \\ \hline $\tau$ & 0 & 0 & 1 & 3 & 9 & 9 & 10 & 10 & 10 & 10 & 10 & 10 & 3 & 3 & 15 \\ \hline $\iota$ & 0 & 0 & 1 & 3 & 9 & 9 & 6 & 6 & 6 & 6 & 8 & 4 & 0 & 0 & 0 \\ \hline $d(E)$ & 6 & 6 & 4 & 12 & 12 & 18 & 18 & 18 & 18 & 10 & 14 & 10 & 6 & 6 & 12 \\ \hline $d(N)$ & 4 & 4 & 4 & 7 & 7 & 9 & 10 & 10 & 10 & 6 & 8 & 6 & 4 & 4 & 7 \\ \hline $\delta(E)$ & 10 & 10 & 3 & 55 & 54 & 135 & 136 & 136 & 136 & 36 & 78 & 36 & 10 & 10 & 55 \\ \hline $\kappa(E)$ & 6 & 6 & 2 & 15 & 17 & 27 & 24 & 24 & 24 & 12 & 18 & 12 & 6 & 6 & 15 \\ \hline $\tau(E)$ & 3 & 3 & 3 & 15 & 14 & 27 & 36 & 36 & 36 & 10 & 21 & 10 & 3 & 3 & 15 \\ \hline $\iota(E)$ & 0 & 0 & 2 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \caption{Table of invariants case-by-case.} \label{tb:inv} \end{table} \end{center} In Table~\ref{ta:singularities} we list the actual complex singularities of the evolute and the curve of normals, namely the number of singularities of types $A_1$, $A_2$, $D_4$, $E_6$, and other.
The last row indicates whether the curve of normals has a point of multiplicity $d$ corresponding to the line at infinity with respect to $\Ga$. \begin{center} \begin{table}[h] \tablestyle[sansbold] \begin{tabular}{||c c c c c c c c c c c c c c c c||} \hline & I & II & III & IV & V & VI & VII & VIII & IX & X & XI & XII & XIII & XIV & XV\\ [0.5ex] \toprule\hline $A_1(E)$ & 4 & 4 & 0 & 39 & 37 & 105 & 112 & 84 & 112 & 24 & 60 & 18 & 2 & 2 & 20 \\ \hline $A_2(E)$ & 6 & 6 & 0 & 13 & 17 & 21 & 24 & 16 & 24 & 12 & 18 & 12 & 2 & 2 & 5 \\ \hline $E_6(E)$ & 0 & 0 & 1 & 1 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & 2 & 0\\ \hline other & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $2$ & 0 & 0 & 0 & 0 & 0 & 0 & $2$\\ \hline $A_1(N)$ & 2 & 2 & 1 & 12 & 8 & 21 & 30 & 30 & 30 & 10 & 21 & 10 & 3 & 3 & 15 \\ \hline $A_2(N)$ & 0 & 0 & 2 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline $D_4(N)$ & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline $E_6(N)$ & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline $d$-uple$(N)$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \medskip \caption{Table of actual (complex) singularities of the evolute and the curve of normals. The two ``other'' singularities for VIII are singularities with multiplicity 6, $\delta$-invariant 18, and 2 branches: the merging of two $E_6$-singularities; the two ``other'' singularities for XV are singularities with multiplicity 6, $\delta$-invariant 15, and 1 branch.}\label{ta:singularities} \end{table} \end{center} \medskip In Table~\ref{ta:kleinschuh} we list the terms appearing in the Klein--Schuh formula (Thm.~\ref{th:genKlein}) for the evolute and its dual curve, the curve of normals, and the number of vertices and diameters of $\Ga$. \begin{center} \begin{table}[h] \tablestyle[sansbold] \begin{tabular}{||c c c c c c c c c c c c c c c c||} \hline & I & II & III & IV & V & VI & VII & VIII & IX & X & XI & XII & XIII & XIV & XV\\ [0.5ex] \toprule\hline $d(E)$ & 6 & 6 & 4 & 12 & 12 & 18 & 18 & 18 & 18 & 10 & 14 & 10 & 6 & 6 & 12\\ \hline $d(N)$ & 4 & 4 & 4 & 7 & 7 & 9 & 10 & 10 & 10 & 6 & 8 & 6 & 4 & 4 & 7\\ \hline {\small$\sum (m_p(E)-r_p(E))$} & 4 & 4 & 2 & 9 & 9 & 15 & 14 & 16 & 16 & 6 & 8 & 4 & 2 & 2 & 5\\ \hline {\small $\sum (m_q(N)-r_q(N))$} & 2 & 2 & 2 & 4 & 4 & 6 & 6 & 8 & 8 & 2 & 2 & 0 & 0 & 0 & 0\\ \hline $V(\Ga)$ & 4 & 2 & 0 & 5 & 9 & 9 & 6 & 4 & 5 & 6 & 8 & 4 & 2 & 2 & 5 \\ \hline $D(\Ga)$ & 2 & 1 & 0 & 10 & 4 & 13 & 14 & 2 & 2 & 9 & 20 & 10 & 3 & 3 & 15 \\ \hline \end{tabular} \medskip \caption{The first four rows show the terms in the Klein--Schuh formula, applied to $E_\Ga$ and $N_\Ga$. The last two rows show the number of vertices $V(\Ga)$ and the number of diameters $D(\Ga)$.}\label{ta:kleinschuh} \end{table} \end{center} \section{Some further problems} Below we present a very small sample of natural questions related to the topic of the paper. \smallskip \noindent {\bf 1.} The basic technical tool which we use to approach Problems 1--4 formulated in \S~\ref{sec:init} is to consider small deformations of real line arrangements. Line arrangements themselves do not have evolutes in the conventional sense, but such evolutes can probably be defined as certain limiting piecewise linear geometric objects. In general, our approach has a strong resemblance to the methods of tropical geometry. One wonders if it is possible to develop a rigorous tropical geometry in the context of evolutes and their duals.
\smallskip \noindent {\bf 2.} An analog of the problem about the maximal number of ovals. \begin{problem} For a given degree $d$, what is the maximal number of connected components of the complements $\bR^2\setminus N_\Ga$, $\bR^2\setminus E_\Ga$ and $\bR^2\setminus (E_\Ga \cup \Ga)$? (One can ask the same question for the complements in $\bR P^2$.) \end{problem} This question is an analog of the problem about the maximal number of ovals for a plane curve of a given degree, answered by Harnack's theorem; but since our curves are always singular, it is better to ask about the maximal number of components of the complement. \smallskip \noindent {\bf 3.} Notice that if we fix a real polynomial defining $\Ga$ then normals to $\Ga$ split into two complementary classes -- gradient-like and antigradient-like. Namely, each normal is proportional to the gradient with either a positive or a negative constant. If we change the sign of the defining polynomial then we interchange gradient-like and antigradient-like normals. \begin{problem} Given $d$, what is the maximal number of normals of one type which can pass through a given point? \end{problem} By Proposition~\ref{lm:Shustin} the total number of real normals through a point can reach $d^2$, but it is not clear at the moment how many of them can belong to one class. \smallskip \noindent {\bf 4.} Are the leading coefficients in our lower bounds for Problems 1--4 sharp?
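For the simplest instance of the setting of Problem 3, the count of normals is easy to reproduce experimentally. The toy computation below (ours) takes the ellipse $x^2/4+y^2=1$ (so $d=2$) and a point $P$ inside its evolute, and counts the real normals through $P$ as sign changes of $g(t)=\langle P-c(t),c'(t)\rangle$ along $c(t)=(2\cos t,\sin t)$; it finds $d^2=4$.
\begin{verbatim}
# Counting real normals to an ellipse through a point (illustration).
import math

P = (0.1, 0.2)                         # a point inside the evolute

def g(t):
    x, y = 2*math.cos(t), math.sin(t)
    xp, yp = -2*math.sin(t), math.cos(t)
    return (P[0] - x)*xp + (P[1] - y)*yp

N = 100000
vals = [g(2*math.pi*k/N) for k in range(N + 1)]
print(sum(1 for v, w in zip(vals, vals[1:]) if v*w < 0))   # prints 4
\end{verbatim}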
\section{Introduction} The main purpose of this paper is the evaluation of moments of some functions composed with the fractional part of $1/x$. In fact, we will analyze the integrals \begin{equation*} \label{eq:moments} I_kf=\int_{0}^{1}x^k f\left(\bigg\{\frac{1}{x}\bigg\}\right)\, dx, \qquad k=0,1,2,\dots . \end{equation*} We call these integrals the fractional moments of the function $f$. To the best of our knowledge, the particular case $f(x)=x^m$ appears in the literature in different references (see \cite{Furdui-ITSF, Furdui-Libro, Furdui-Analysis, LSQ, SLQ, Valean}). For example, in \cite[Theorem 2.1]{Furdui-Analysis} or \cite[Problem 22.2]{Furdui-Libro} we can see the identity \begin{equation} \label{eq:Furdui-zetas} \int_{0}^{1}x^k\left\{\frac{1}{x}\right\}^m\, dx=\frac{m!}{(k+1)!}\sum_{j=1}^{\infty}\frac{(k+j)!}{(m+j)!}(\zeta(k+j+1)-1), \qquad m,k\in \mathbb{Z}^+, \end{equation} where $\zeta$ denotes the Riemann zeta function. Moreover, for the particular case $m=k+1$, by using the identity \[ \sum_{j=1}^{\infty}\frac{\zeta(j+1)-1}{j+1}=1-\gamma, \] with $\gamma$ being the Euler-Mascheroni constant, it is proved that \[ \int_{0}^{1}x^k\left\{\frac{1}{x}\right\}^{k+1}\, dx=H_{k+1}-\gamma-\sum_{j=2}^{k+1}\frac{\zeta(j)}{j}, \] where $H_n$ is the $n$-th harmonic number. In this paper we will make a systematic analysis of this kind of integral and obtain appropriate closed forms for them. The main tool to evaluate fractional moments will be an identity relating them to an integral involving the function $\log\Gamma(x+1)$ and the derivatives of the function $f$ (see Lemma \ref{Lem:main} in the next section). From this result, we deduce the fractional moments for some functions. In particular, we analyze the fractional moments for some trigonometric functions, the Bernoulli polynomials and the functions $x^m$ and $x^m(1-x)^m$. The integrals containing $\log\Gamma(x+1)$ will be evaluated by using some results in \cite{Espinosa-Moll-1}. In that paper, we see that the integrals \[ \int_{0}^{1}B_n(x)\log\Gamma(x)\, dx, \] where $B_n$ are the Bernoulli polynomials, are simpler to evaluate than the integrals \[ \int_{0}^{1}x^n \log\Gamma(x)\, dx. \] In fact, the latter integrals are evaluated in terms of the former ones. For this reason, to analyze the fractional moments of $x^m(1-x)^m$ we start by giving an expansion for these functions and their derivatives in terms of the Bernoulli polynomials (see Lemma \ref{Lem:expan-Berno} and the remark following it). We believe that this result is of independent interest. The paper is organized as follows. In Section \ref{sec:main} we present the lemma allowing us to obtain the fractional moments and in the rest of the paper we show some examples and applications related to trigonometric functions, Bernoulli polynomials and the functions $x^m$ and $x^m(1-x)^m$. Throughout the paper, any empty sum must be taken as zero. \section{The main lemma} \label{sec:main} The following lemma will be the main tool to evaluate fractional moments. Before stating it, we define the sequence $\alpha_n=\zeta(n+1)$, for $n>0$, and $\alpha_0=\gamma$. \begin{Lem} \label{Lem:main} Let $f$ be a function having $k+2$ derivatives in the interval $[0,1]$. Then \begin{multline*} I_kf=\frac{1}{(k+1)!}\Bigg(\sum_{j=0}^{k}(k-j)!\left(f^{(j)}(0)\alpha_{k-j}-f^{(j)}(1)(\alpha_{k-j}-1)\right)\\ +\int_{0}^{1}f^{(k+2)}(x)\log\Gamma(x+1)\,dx\Bigg), \qquad k\ge 0. \end{multline*} \end{Lem} The polygamma function will play a crucial role in the proof of this lemma.
It is defined as the $(m+1)$-th derivative of the logarithm of the gamma function \[ \psi^{(m)}(x)=\frac{d^{m+1}}{dx^{m+1}}\log \Gamma(x), \qquad m=0,1,2,\dots . \] Two facts about the polygamma function will be fundamental. Its representation as a series \begin{equation} \label{eq:psi-series} \psi^{(m)}(x)=(-1)^{m+1}m!\sum_{k=0}^{\infty}\frac{1}{(x+k)^{m+1}}, \end{equation} and its values at the positive integers \begin{align} \psi^{(m)}(n)&=(-1)^{m+1}m!\left(\zeta(m+1)-\sum_{k=1}^{n-1}\frac{1}{k^{m+1}}\right)\notag \\&=(-1)^{m+1}m!\sum_{k=n}^{\infty}\frac{1}{k^{m+1}}, \qquad m,n=1,2,\dots, \label{eq:psi-integers-1} \end{align} and \begin{equation} \label{eq:psi-integers-2} \psi^{(0)}(n)=-\gamma+\sum_{k=1}^{n-1}\frac{1}{k}, \qquad n=1,2,\dots. \end{equation} \begin{proof}[Proof of Lemma \ref{Lem:main}.] With the change of variable $w=1/x$, we have \begin{align*} I_kf&=\int_{1}^{\infty} f(\{w\})\, \frac{dw}{w^{k+2}}=\sum_{j=1}^{\infty}\int_{j}^{j+1}f(w-j)\, \frac{dw}{w^{k+2}}\\&=\sum_{j=1}^{\infty}\int_{0}^{1}f(s)\,\frac{ds}{(j+s)^{k+2}} =\frac{(-1)^k}{(k+1)!}\int_{0}^{1}f(s)\psi^{(k+1)}(s+1)\,ds, \end{align*} where in the last step we have used \eqref{eq:psi-series}. Now, applying integration by parts $k+2$ times and taking into account that $\log\Gamma(2)=\log\Gamma(1)=0$, we arrive at \begin{multline*} I_kf=\frac{(-1)^k}{(k+1)!}\Bigg(\sum_{j=0}^{k}(-1)^{j}(f^{(j)}(1)\psi^{(k-j)}(2)-f^{(j)}(0)\psi^{(k-j)}(1))\\ +(-1)^{k+2}\int_{0}^{1}f^{(k+2)}(s)\log \Gamma(s+1)\, ds\Bigg). \end{multline*} Finally, we conclude the proof by applying \eqref{eq:psi-integers-1} and \eqref{eq:psi-integers-2}. \end{proof} \section{Fractional moments for trigonometric functions} We start our examples by giving the fractional moments of the sine and cosine functions. The expressions that we obtain for them involve the classical sine integral \[ \Si(x)=\int_{0}^{x}\frac{\sin t}{t}\, dt \] and cosine integral \[ \Ci(x)=-\int_{x}^{\infty}\frac{\cos t}{t}\, dt. \] The following lemma contains the evaluation of two integrals involving $\log\Gamma(x+1)$ and trigonometric functions. \begin{Lem} \label{Lem:trig-log} It is verified that \[ \int_{0}^{1}\sin(2\pi x)\log\Gamma(x+1)\, dx=\frac{\Ci(2\pi)}{2\pi} \] and \[ \int_{0}^{1}\cos(2\pi x)\log\Gamma(x+1)\, dx=\frac{1}{4}-\frac{\Si(2\pi)}{2\pi}. \] \end{Lem} \begin{proof} Taking $n=1$ in \cite[6.443.1 and 6.443.3]{Grad-Ri}, we have \[ \int_{0}^{1}\sin(2\pi x)\log\Gamma(x)\, dx=\frac{\log 2\pi+\gamma}{2\pi} \] and \[ \int_{0}^{1}\cos(2\pi x)\log\Gamma(x)\, dx=\frac{1}{4}. \] Then \[ \int_{0}^{1}\sin(2\pi x)\log\Gamma(x+1)\, dx=\frac{\log 2\pi+\gamma}{2\pi}+\int_{0}^{1}\sin(2\pi x)\log x\, dx \] and \[ \int_{0}^{1}\cos(2\pi x)\log\Gamma(x+1)\, dx=\frac{1}{4}+\int_{0}^{1}\cos(2\pi x)\log x\, dx. \] Now, applying integration by parts and the identity \cite[8.230.2]{Grad-Ri} \[ \Ci(x)=\gamma+\log x+\int_{0}^{x}\frac{\cos t-1}{t}\, dt, \] we deduce \begin{align*} \int_{0}^{1}\sin(2\pi x)\log x\, dx&=\frac{1}{2\pi}\int_{0}^{1}\frac{\cos(2\pi x)-1}{x}\, dx \\&=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\cos t-1}{t}\, dt= -\frac{\log 2\pi+\gamma}{2\pi}+\frac{\Ci(2\pi)}{2\pi} \end{align*} and the result for the integral with the sine follows. The integral with the cosine can be obtained by using integration by parts only.
Indeed, \[ \int_{0}^{1}\cos(2\pi x)\log x\, dx=-\frac{1}{2\pi}\int_{0}^{1}\frac{\sin(2\pi x)}{x}\, dx =-\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\sin t}{t}\,dt=-\frac{\Si(2\pi)}{2\pi}.\qedhere \] \end{proof} With the notation \[ f_s(x)=\sin(2\pi x)\qquad\text{ and }\qquad f_c(x)=\cos(2\pi x), \] we have \[ f_s^{(2j)}(x)=(-1)^j(2\pi)^{2j}\sin(2\pi x), \qquad f_s^{(2j+1)}(x)=(-1)^j(2\pi)^{2j+1}\cos(2\pi x), \] \[ f_c^{(2j)}(x)=(-1)^j(2\pi)^{2j}\cos(2\pi x),\quad \text{and} \quad f_c^{(2j+1)}(x)=(-1)^{j+1}(2\pi)^{2j+1}\sin(2\pi x). \] Moreover, $f_s^{(2j)}(0)=f_s^{(2j)}(1)=0$, $f_s^{(2j+1)}(0)=f_s^{(2j+1)}(1)=(-1)^{j}(2\pi)^{2j+1}$, $f_c^{(2j+1)}(0)=f_c^{(2j+1)}(1)=0$, and $f_c^{(2j)}(0)=f_c^{(2j)}(1)=(-1)^j(2\pi)^{2j}$. Taking \[ \mathcal{S}_k=\int_{0}^{1}x^k f_s\left(\left\{\frac{1}{x}\right\}\right)\, dx\qquad\text{ and }\qquad \mathcal{C}_k=\int_{0}^{1}x^k f_c\left(\left\{\frac{1}{x}\right\}\right)\, dx \] we have the following result. \begin{Theo} For $n\ge 0$, it is verified that \[ \mathcal{S}_{2n}=\frac{(-1)^{n+1}(2\pi)^{2n+1}}{(2n+1)!} \left(\sum_{j=0}^{n-1}\frac{(-1)^j(2j+1)!}{(2\pi)^{2j+2}}+\Ci(2\pi)\right), \] \[ \mathcal{S}_{2n+1}=\frac{(-1)^n(2\pi)^{2n+2}}{(2n+2)!} \left(\sum_{j=0}^{n}\frac{(-1)^j(2j)!}{(2\pi)^{2j+1}}-\frac{\pi}{2}+\Si(2\pi)\right), \] \[ \mathcal{C}_{2n}=\frac{(-1)^n(2\pi)^{2n+1}}{(2n+1)!} \left(\sum_{j=0}^{n}\frac{(-1)^j(2j)!}{(2\pi)^{2j+1}}-\frac{\pi}{2}+\Si(2\pi)\right), \] and \[ \mathcal{C}_{2n+1}=\frac{(-1)^n(2\pi)^{2n+2}}{(2n+2)!} \left(\sum_{j=0}^{n}\frac{(-1)^j(2j+1)!}{(2\pi)^{2j+2}}+\Ci(2\pi)\right). \] \end{Theo} \begin{proof} The identities can be deduced immediately by using Lemma~\ref{Lem:main}, Lemma~\ref{Lem:trig-log}, and the given properties about the derivatives of the functions $f_s$ and $f_c$. \end{proof} \section{Fractional moments for Bernoulli polynomials} Now, we analyze the fractional moments for the Bernoulli polynomials $B_n(x)$. They can be defined through their exponential generating function \[ \frac{te^{xt}}{e^t-1}=\sum_{n=0}^{\infty}B_n(x)\frac{t^n}{n!}, \] converging for $|t|<2\pi$. It is well known that $B_n(0)=(-1)^nB_n(1)=B_n$, where $B_n$ are the Bernoulli numbers. Recall that $B_{2k+1}=0$, for $k\ge 1$, and $B_1=-1/2$. A main tool in our approach will be the identity $B_n'(x)=nB_{n-1}(x)$ (Bernoulli polynomials are, in fact, a particular case of Appell polynomials). More generally, we have \begin{equation} \label{eq:ber-der} B_n^{(k)}(x)=\frac{n!}{(n-k)!}B_{n-k}(x),\qquad n\ge k. \end{equation} A crucial point to obtain a proper expression for the fractional moments of the Bernoulli polynomials is the following identity (see \cite[(6.5) and (6.6)]{Espinosa-Moll-1}) \begin{equation} \label{eq:ber-log} \int_{0}^{1}B_n(x)\log\Gamma(x)\, dx=a_n,\qquad n\ge 0, \end{equation} where the sequence $a_n$ is defined by \begin{equation} \label{eq:seq-an} a_{n}=\begin{cases} -\zeta'(-n),& n=0,2,4,\dots,\\ \frac{B_{n+1}}{n+1}\left(\frac{\zeta'(n+1)}{\zeta(n+1)}-\log(2\pi)-\gamma\right), & n=1,3,5,\dots, \end{cases} \end{equation} with $\zeta'$ being the derivative of the Riemann zeta function. It is convenient to remember that $\zeta'(0)=-\log\sqrt{2\pi}$ and \[ \zeta'(-2n)=(-1)^n\frac{(2n)!\,\zeta(2n+1)}{2(2\pi)^{2n}}, \qquad n=1,2,\dots. \] Moreover, we consider the sequence \[ b_n=a_n-\frac{1}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}\frac{B_{n+1-k}}{k}, \qquad n\ge 0. \] \begin{Lem} \label{Lem:Berno-Log} For $n\ge 0$, it is verified that \[ \int_{0}^{1}B_n(x)\log\Gamma(x+1)\, dx=b_n.
\] \end{Lem} \begin{proof} From \eqref{eq:ber-log}, we have \[ \int_{0}^{1}B_n(x)\log\Gamma(x+1)\, dx=\int_{0}^{1}B_n(x)\log x\, dx+a_n. \] Now, applying integration by parts to the first integral and using the identity \begin{equation} \label{eq:ber.ber} B_m(x)=\sum_{k=0}^{m}\binom{m}{k}B_{m-k}x^k \end{equation} (which can be deduced from \eqref{eq:ber-der} by using that $B_n(0)=B_n$), we obtain that \begin{align*} \int_{0}^{1}B_n(x)\log x\, dx&=-\frac{1}{n+1}\int_{0}^{1}\frac{B_{n+1}(x)-B_{n+1}}{x}\, dx\\&=-\frac{1}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}B_{n+1-k}\int_{0}^{1}x^{k-1}\, dx \\&=-\frac{1}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}\frac{B_{n+1-k}}{k} \end{align*} and the proof is completed. \end{proof} Taking \[ J_k^n=\int_{0}^{1}x^kB_n\left(\left\{\frac{1}{x}\right\}\right)\, dx, \] we have the following result. \begin{Theo} For $n\ge 1$, it is verified that \[ J_k^n=\frac{1}{(k+1)\binom{k}{n}}\Bigg(\sum_{j=0}^{[n/2]}\binom{k-n+2j}{2j}B_{2j} +(k-n+1)\bigg(\frac{1}{2}-\zeta(k-n+2)\bigg)\Bigg), \] for $k\ge n$, \[ J_{n-1}^n=\sum_{j=1}^{[n/2]}\frac{B_{2j}}{2j}+\frac{1}{2}-\gamma, \] and \begin{equation*} J_{k}^n=\frac{1}{k+1}\binom{n}{k}\Bigg(\sum_{j=[(n-k+1)/2]}^{[n/2]} \frac{B_{2j}}{\binom{2j}{k-n+2j}}+(n-k)(n-k-1)b_{n-k-2}\Bigg), \end{equation*} for $0\le k \le n-2$. \end{Theo} \begin{proof} The result follows by applying Lemma \ref{Lem:main}, Lemma \ref{Lem:Berno-Log}, and the properties of Bernoulli polynomials and Bernoulli numbers. \end{proof} \section{Fractional moments for the functions $x^m$} Considering the functions $h_m(x)=x^m$, with $m\ge 1$, in this section we evaluate their fractional moments. More precisely, we calculate the integrals \[ \mathcal{C}_k^m=\int_{0}^{1}x^k h_m\left(\left\{\frac{1}{x}\right\}\right)\, dx. \] To this end, we need the following two lemmas. \begin{Lem} \label{lem:pot-log} For $n\ge 0$, it is verified that \[ \int_{0}^{1}x^n\log\Gamma(x+1)\, dx=-\frac{1}{(n+1)^2}+\frac{1}{n+1}\sum_{k=0}^{n}\binom{n+1}{k}a_k, \] where the sequence $a_k$ was defined in \eqref{eq:seq-an}. \end{Lem} \begin{proof} By using the identity\footnote{This fact follows from \eqref{eq:ber.ber} by applying the identity \[ \sum_{k=0}^{m}\binom{m+1}{k}B_k=\begin{cases}1,& m=0\\0, & m\not=0\end{cases}, \] which can be proved by using the exponential generating function \[ \frac{t}{e^{t}-1}=\sum_{k=0}^{\infty}\frac{B_k}{k!}t^k. \]} \begin{equation} \label{eq:pot-Ber} x^n=\frac{1}{n+1}\sum_{j=0}^{n}\binom{n+1}{j}B_j(x), \end{equation} it is clear that \begin{align*} \int_{0}^{1}x^n\log\Gamma(x+1)\, dx&=\int_{0}^{1}x^n\log x\, dx+\int_{0}^{1}x^n\log\Gamma(x)\, dx\\ &=-\frac{1}{(n+1)^2}+\frac{1}{n+1}\sum_{j=0}^{n}\binom{n+1}{j}\int_{0}^{1}B_j(x)\log\Gamma(x)\, dx \end{align*} and the result is obtained by applying \eqref{eq:ber-log}. \end{proof} \begin{Lem} \label{lem:pot-id} The identities \[ \sum_{j=0}^{m}\frac{(k-j)!}{(m-j)!}=\frac{(k+1)!}{m!(k+1-m)},\qquad k\ge m \] and \[ \sum_{j=0}^{k}\frac{(k-j)!}{(m-j)!}=\frac{(k+1)!}{m!(k+1-m)}\left(1-\binom{m}{k+1}\right), \qquad 0\le k\le m-2, \] hold. \end{Lem} \begin{proof} The first identity is equivalent to \[ \sum_{j=0}^{m}\binom{k-m+j}{j}=\binom{k+1}{m} \] and this is a consequence of the relation \[ \sum_{j=0}^{m}\binom{n+j}{j}=\binom{n+m+1}{m},\qquad n\ge 0, \] which can be proved in an elementary way by using induction on $m$. The second identity is equivalent to \[ \frac{m-k-1}{k+1}\sum_{j=0}^{k}\frac{\binom{m}{j}}{\binom{k}{j}}=\binom{m}{k+1}-1.
\] In this case, it is enough to check that both sides of the identity satisfy the recurrence relation \[ (m-k)a_{m+1,k}-(m+1)a_{m,k}=k+1, \qquad 0\le k \le m-2, \] and they coincide for $m=2$ and $k=0$. \end{proof} In the proof of our result for $\mathcal{C}_k^m$, we will apply that $h_m^{(k)}(x)=m!x^{m-k}/(m-k)!$, for $0\le k \le m$, and $h_m^{(k)}(x)=0$, for $k>m$. Moreover, $h_m^{(k)}(0)=0$, for $k\not= m$, $h_m^{(m)}(0)=m!$, and $h_m^{(k)}(1)=m!/(m-k)!$, for $0\le k \le m$. Now, we have the following result. \begin{Theo} \label{moment-pot} For $m\ge 1$, it is verified that \[ \mathcal{C}_k^m=\frac{1}{k+1-m}-\frac{1}{(k+1)\binom{k}{m}}\sum_{j=k-m+1}^{k}\binom{j}{k-m}\zeta(j+1), \qquad k\ge m, \] \[ \mathcal{C}_{m-1}^m=H_m-\gamma-\sum_{j=1}^{m-1}\frac{\zeta(j+1)}{j+1}, \] and \begin{multline*} \mathcal{C}_k^m=\frac{1}{k+1-m}-\frac{1}{k+1}\binom{m}{k}\left(\sum_{j=1}^{k}\frac{\zeta(j+1)}{\binom{m-k+j}{j}} +\gamma\right)\\+\binom{m}{k+1}\sum_{j=0}^{m-k-2}\binom{m-k-1}{j}a_j, \qquad 0\le k\le m-2. \end{multline*} \end{Theo} \begin{proof} The result can be deduced by applying Lemma \ref{Lem:main}, Lemma \ref{lem:pot-log}, Lemma \ref{lem:pot-id}, and the given elementary properties for the derivatives of $h_m$. \end{proof} \begin{Remark} We have to observe that a closed form for $\mathcal{C}_k^m$, when $k\ge m$, appears in \cite[Problem 1.47]{Valean} and it is equivalent to the one given in our previous theorem. \end{Remark} Note that, taking $k=m$, we obtain that \[ \mathcal{C}_m^m=1-\frac{1}{m+1}\sum_{j=1}^{m}\zeta(j+1), \] which is the result in \cite[Corollary 2.2]{Furdui-Analysis}. Moreover, the identity \eqref{eq:Furdui-zetas} and the previous theorem imply the following corollary. \begin{Cor} For $m\ge 1$, it is verified that \begin{multline*} \frac{m!}{(k+1)!}\sum_{j=1}^{\infty}\frac{(k+j)!}{(m+j)!}(\zeta(k+j+1)-1)=\frac{1}{k+1-m} \\-\frac{1}{(k+1)\binom{k}{m}}\sum_{j=k-m+1}^{k}\binom{j}{k-m}\zeta(j+1),\qquad k\ge m, \end{multline*} \[ \sum_{j=1}^{\infty}\frac{\zeta(m+j)-1}{m+j}=H_m-\gamma-\sum_{j=1}^{m-1}\frac{\zeta(j+1)}{j+1}, \] and \begin{multline*} \frac{m!}{(k+1)!}\sum_{j=1}^{\infty}\frac{(k+j)!}{(m+j)!}(\zeta(k+j+1)-1)=\frac{1}{k+1-m}-\frac{1}{k+1}\binom{m}{k} \\\times \left(\sum_{j=1}^{k}\frac{\zeta(j+1)}{\binom{m-k+j}{j}} +\gamma\right)+\binom{m}{k+1}\sum_{j=0}^{m-k-2}\binom{m-k-1}{j}a_j, \qquad 0\le k\le m-2. \end{multline*} \end{Cor} By using the values \[ a_0=\log\sqrt{2\pi}\qquad \text{ and }\qquad a_1=-\frac{1}{4}+\frac{\zeta'(2)}{2\pi^2}+\frac{1}{3}\log\sqrt{2\pi}-\frac{\gamma}{12}, \] we can see some particular cases. Taking $k=m-2$ and $k=m-3$, we have, respectively, \[ \sum_{j=1}^{\infty}\frac{\zeta(m+j-1)-1}{(m+j)(m+j-1)}=-\frac{1}{m}+\log\sqrt{2\pi}-\frac{\gamma}{2}-\sum_{n=2}^{m-1}\frac{\zeta(n)}{n(n+1)},\qquad m\ge 2, \] and \begin{multline*} \sum_{j=1}^{\infty}\frac{\zeta(m+j-2)-1}{(m+j)(m+j-1)(m+j-2)}\\=-\frac{1}{2m(m-1)}+\frac{\zeta'(2)}{2\pi^2}+\frac{1}{3}\log\sqrt{2\pi} -\frac{\gamma}{4}-\sum_{n=2}^{m-2}\frac{\zeta(n)}{n(n+1)(n+2)}, \qquad m\ge 3. \end{multline*} \begin{Remark} From the well-known Hermite identity \[ \sum_{k=0}^{n-1}\left[x+\frac{k}{n}\right]=[nx], \qquad x\in \mathbb{R}, \] we obtain that \[ \sum_{k=0}^{n-1}\left\{x+\frac{k}{n}\right\}=\{nx\}+\frac{n-1}{2}, \qquad x\in \mathbb{R}.
\] Then, applying Theorem \ref{moment-pot}, we can deduce that \[ \int_{0}^{n}x^k\sum_{j=0}^{n-1}\left\{\frac{1}{x}+\frac{j}{n}\right\}\, dx=n^{k+1}\left(\frac{1}{k}+\frac{n-1-2\zeta(k+1)}{2(k+1)}\right), \qquad n,k\ge 1, \] and \[ \int_{0}^{n}\sum_{j=0}^{n-1}\left\{\frac{1}{x}+\frac{j}{n}\right\}\, dx=n\left(\frac{n+1}{2}-\gamma\right), \qquad n\ge 1. \] \end{Remark} \begin{Remark} In \cite[Theorem 3.1]{Furdui-Analysis}, it is proved that \[ \int_{0}^{1}\int_{0}^{1}\left\{\frac{x}{y}\right\}^m\left\{\frac{y}{x}\right\}^k\, dx\, dy=\frac{\mathcal{C}_k^m+\mathcal{C}_m^k}{2}. \] In that paper, the double integral is written as an infinite sum of values of the Riemann zeta function by using \eqref{eq:Furdui-zetas}. With Theorem \ref{moment-pot} it is possible to obtain an appropriate closed form for the integral. \end{Remark} \section{Fractional moments for the functions $x^m(1-x)^m$} To finish with our examples, we study the fractional moments for the functions $f_m(x)=x^m(1-x)^m$. To do this, we start by obtaining an expression for $f_m(x)$ in terms of Bernoulli polynomials because, as we said in the Introduction, the integrals of $\log\Gamma(x+1)$ with polynomials behave better with the Bernoulli polynomials $B_k(x)$ than with the usual powers $x^k$. In fact, in \cite[Example 6.4]{Espinosa-Moll-1} to evaluate the integrals \[ \int_{0}^{1}x^k\log \Gamma(x)\, dx \] the authors write them in terms of Bernoulli polynomials by using \eqref{eq:pot-Ber}, as we did in Lemma \ref{lem:pot-log}. To obtain the expansion of $f_m(x)$ in terms of Bernoulli polynomials we need some sums of combinatorial numbers that are contained in the following lemma. \begin{Lem} \label{lem:fm-sums} The identities \begin{equation} \label{eq:j0} \sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}=\frac{1}{(2m+1)\binom{2m}{m}}, \end{equation} \begin{equation} \label{eq:j1-m} \sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j}=0, \qquad j=1,\dots,m, \end{equation} and \begin{equation} \label{eq:jm+1-2m} \sum_{k=j-m}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j}=\frac{(-1)^m(1+(-1)^j)}{j}\binom{m}{j-m-1}, \end{equation} for $j=m+1,\dots,2m$, hold. \end{Lem} \begin{proof} To prove \eqref{eq:j0} we apply integration. Indeed, it is easy to check that \begin{align*} \sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}&=\sum_{k=0}^{m}(-1)^k\binom{m}{k}\int_{0}^{1}x^{m+k}\, dx \\&=\int_{0}^{1}x^m(1-x)^m\, dx\\&=\frac{(\Gamma(m+1))^2}{\Gamma(2m+2)}=\frac{1}{(2m+1)\binom{2m}{m}}. \end{align*} In the proof of \eqref{eq:j1-m} we use the falling factorial, given by $(a)_n=a(a-1)\cdots (a-n+1)$, for $n\ge 1$, and $(a)_0=1$. It is clear that $\binom{m+k+1}{j}$ is a polynomial in the variable $k$ of degree $j$ and, for $j\ge 1$, \begin{align*} \binom{m+k+1}{j}\frac{1}{m+k+1}&=\frac{(m+k)(m+k-1)\cdots (m+k+2-j)}{j!}\\&=\sum_{\ell=0}^{j-1}a_{j,m,\ell}(k)_\ell \end{align*} for some coefficients $a_{j,m,\ell}$. From the identity \[ \binom{m}{k}(k)_\ell=\binom{m-\ell}{k-\ell}(m)_\ell, \] we have \begin{multline*} \sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j}=\sum_{\ell=0}^{j-1}a_{j,m,\ell} \sum_{k=\ell}^{m}(-1)^k\binom{m}{k}(k)_\ell\\ \begin{aligned} &=\sum_{\ell=0}^{j-1}a_{j,m,\ell}\sum_{k=\ell}^{m}(-1)^k\binom{m-\ell}{k-\ell}(m)_\ell\\&= \sum_{\ell=0}^{j-1}(-1)^\ell a_{j,m,\ell}(m)_\ell\sum_{k=0}^{m-\ell}(-1)^k\binom{m-\ell}{k}=0, \end{aligned} \end{multline*} where in the last step we have applied the identity $\sum_{k=0}^{p}(-1)^k\binom{p}{k}=0$ with $p=m-j+1,\dots, m$.
The identity \eqref{eq:jm+1-2m} is equivalent to \[ \sum_{k=j-m}^{m}(-1)^k\binom{m}{k}\binom{m+k}{j-1}=(-1)^m(1+(-1)^j)\binom{m}{j-m-1}. \] Obviously, \begin{multline*} \sum_{k=j-m}^{m}(-1)^k\binom{m}{k}\binom{m+k}{j-1}\\=\sum_{k=j-m-1}^{m}(-1)^k\binom{m}{k}\binom{m+k}{j-1}-(-1)^{j-m-1}\binom{m}{j-m-1}. \end{multline*} Now, applying \cite[(5.24)]{Concrete}, we have \[ \sum_{k=j-m-1}^{m}(-1)^k\binom{m}{k}\binom{m+k}{j-1}=(-1)^m\binom{m}{j-m-1} \] and \[ \sum_{k=j-m}^{m}(-1)^k\binom{m}{k}\binom{m+k}{j-1}=(-1)^m(1+(-1)^j)\binom{m}{j-m-1} \] finishing the proof. \end{proof} \begin{Lem} \label{Lem:expan-Berno} For each $m\ge 1$, it is verified \[ f_m(x)=\frac{1}{(2m+1)\binom{2m}{m}}+(-1)^m\sum_{j=m+1}^{2m}\frac{1+(-1)^j}{j}\binom{m}{j-m-1}B_j(x). \] \end{Lem} \begin{proof} To obtain the identity, we use Newton's binomial identity and \eqref{eq:pot-Ber}. Indeed, \begin{align*} f_m(x)&=x^m\sum_{k=0}^{m}(-1)^k\binom{m}{k}x^k=\sum_{k=0}^{m}(-1)^k\binom{m}{k}x^{m+k}\\&=\sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\sum_{j=0}^{m+k}\binom{m+k+1}{j}B_j(x) \\&=\sum_{j=0}^{m}B_j(x)\sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j} \\&\kern25 pt+\sum_{j=m+1}^{2m}B_j(x)\sum_{k=j-m}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j} \\&=\sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}+\sum_{j=1}^{m}B_j(x)\sum_{k=0}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j} \\&\kern25 pt+\sum_{j=m+1}^{2m}B_j(x)\sum_{k=j-m}^{m}\frac{(-1)^k}{m+k+1}\binom{m}{k}\binom{m+k+1}{j}. \end{align*} We finish by applying the identities \eqref{eq:j0}, \eqref{eq:j1-m}, and \eqref{eq:jm+1-2m} in the previous lemma. \end{proof} \begin{Remark} It is interesting to observe that, by using \eqref{eq:ber-der}, we have \begin{align} \label{eq:derivatives} f_m^{(k)}(x)&=(-1)^m\sum_{j=\max\{k,m+1\}}^{2m}\frac{1+(-1)^j}{j}\binom{m}{j-m-1}\frac{j!}{(j-k)!}B_{j-k}(x)\notag \\&=(-1)^m k!\sum_{j=\max\{k,m+1\}}^{2m}\frac{1+(-1)^j}{j}\binom{m}{j-m-1}\binom{j}{k}B_{j-k}(x), \end{align} for $1\le k\le 2m$. \end{Remark} \begin{Remark} Let $\mathcal{P}_m$ be the shifted Legendre polynomial of degree $m$; i.e., $\mathcal{P}_{m}(x)=P_m(2x-1)$, where $P_m$ is the standard Legendre polynomial. By Rodrigues' formula \[ \mathcal{P}_m(x)=\frac{(-1)^m}{m!}f_m^{(m)}(x) \] and applying \eqref{eq:derivatives} we deduce that, for $m\ge 1$, \begin{align*} \mathcal{P}_m(x)&=\frac{1}{m!}\sum_{j=m+1}^{2m}\frac{1+(-1)^j}{j}\binom{m}{j-m-1}\frac{j!}{(j-m)!}B_{j-m}(x)\\ &=\sum_{j=1}^{m}(1+(-1)^{m+j})\frac{(m+j-1)!}{j!\, (j-1)!\, (m-j+1)!}B_j(x). \end{align*} The previous identity for shifted Legendre polynomials matches the ones given in \cite[Theorem 2.2 and Theorem 2.4]{Navas-Ruiz-Varona}. \end{Remark} To evaluate the fractional moments of the functions $f_m$ we need the combinatorial identities contained in the next lemma. \begin{Lem} \label{lem:im-sums} For $m\ge 1$, the identities \begin{equation} \label{eq:im-sum-1} \sum_{j=m}^{2m}\frac{\binom{m}{j-m}}{\binom{k}{j}}=\frac{m!(k-2m)!(k+1)}{(k+1-m)!}, \qquad k\ge 2m, \end{equation} and \begin{equation} \label{eq:im-sum-2} \sum_{j=m}^{2m-1}\frac{\binom{m}{j-m}}{\binom{2m-1}{j}}=2m(H_{2m}-H_m) \end{equation} hold. \end{Lem} \begin{proof} The first identity is equivalent to \[ \sum_{n=0}^{m}\binom{m+n}{m}\binom{k-m-n}{k-2m}=\binom{k+1}{m} \] and it follows from \cite[(5.26)]{Concrete}. To prove the second one we check that both sides satisfy the recurrence relation \[ ma_{m+1}=(m+1)a_m+\frac{m}{2m+1} \] and they coincide for $m=1$.
\end{proof} To complete the evaluation of the fractional moments for the functions $f_m$, we will need some elementary facts about them. More exactly, $f_m^{(j)}(x)=0$, for $j>2m$, $f_m^{(j)}(0)=f_m^{(j)}(1)=0$, for $0\le j <m$, and, by using the Leibniz rule for the derivative of a product, \begin{equation} \label{eq:derivative-0-1} (-1)^jf^{(j)}_m(0)=f_m^{(j)}(1)=(-1)^mj!\binom{m}{j-m}, \qquad m\le j \le 2m. \end{equation} Then, denoting \[ \mathcal{I}_k^m=\int_{0}^{1}x^k f_m\left(\left\{\frac{1}{x}\right\}\right)\, dx, \] \[ \delta(x,y)=\begin{cases}1, & x=y,\\ 0, & x\not = y,\end{cases} \qquad \text{ and } \qquad p_{m,k}=\sum_{j=m}^{k}\frac{\binom{m}{j-m}}{\binom{k}{j}}, \] we have the following result. \begin{Theo} For $m\ge 1$, it is verified that \[ \mathcal{I}_k^m=(-1)^m\Bigg(\frac{1}{(k+1-m)\binom{k-m}{m}}-\frac{2}{k+1}\sum_{j=[m/2]}^{m-1} \frac{\binom{m}{2j+1-m}}{\binom{k}{2j+1}}\zeta(k-2j)\Bigg), \] for $k\ge 2m$, \[ \mathcal{I}_{2m-1}^m=(-1)^m\Bigg(H_{2m}-H_m-\gamma-\frac{1}{m}\sum_{j=[m/2]}^{m-2} \frac{\binom{m}{2j+1-m}}{\binom{2m-1}{2j+1}}\zeta(2m-1-2j)\Bigg), \] \begin{multline*} \mathcal{I}_k^m= \frac{(-1)^m}{k+1}\Bigg(p_{m,k}-2\sum_{j=[m/2]}^{[k/2]-1}\frac{\binom{m}{2j+1-m}}{\binom{k}{2j+1}}\zeta(k-2j)\\ -2\gamma\binom{m}{2m-k}\delta((k+1)/2,[(k+1)/2])\Bigg)\\ +(-1)^m(k+2)\sum_{j=[m/2]+1}^{m}\binom{m}{2j-m-1}\binom{2j}{k+2}\frac{b_{2j-k-2}}{j}, \end{multline*} for $m\le k \le 2m-2$, and \[ \mathcal{I}_k^m=(-1)^m(k+2)\sum_{j=[m/2]+1}^{m}\binom{m}{2j-m-1}\binom{2j}{k+2}\frac{b_{2j-k-2}}{j}, \] for $0\le k \le m-1$. \end{Theo} \begin{proof} The result is a consequence of Lemma \ref{Lem:main}, \eqref{eq:derivatives}, Lemma \ref{Lem:Berno-Log}, and Lemma \ref{lem:im-sums}. In particular, we have to use \eqref{eq:im-sum-1} for the case $k\ge 2m$ and \eqref{eq:im-sum-2} for $k=2m-1$. Moreover, the values of the derivatives of $f_m$ at $x=0$ and $x=1$ given in \eqref{eq:derivative-0-1} are also used. \end{proof} \begin{Remark} Unfortunately, we could not find a closed form for the sum \[ p_{m,k}=\sum_{j=m}^{k}\frac{\binom{m}{j-m}}{\binom{k}{j}} \] appearing in the case $m\le k \le 2m-2$. This remains an open question. \end{Remark}
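The closed forms obtained in this paper are easy to test numerically. The following sketch (ours, using \texttt{mpmath}) checks $\mathcal{C}_1^1=1-\zeta(2)/2$ via the series $I_kf=\sum_{j\ge 1}\int_0^1 f(s)(j+s)^{-k-2}\,ds$ appearing in the proof of Lemma \ref{Lem:main}, and the first integral of Lemma \ref{Lem:trig-log}.
\begin{verbatim}
# Numerical sanity checks (ours) of two identities in the paper.
from mpmath import mp, quad, nsum, zeta, loggamma, ci, sin, pi, inf

mp.dps = 20

# (i) C_1^1 = int_0^1 x {1/x} dx = 1 - zeta(2)/2 ~ 0.17753...
lhs = nsum(lambda j: quad(lambda s: s/(j + s)**3, [0, 1]), [1, inf])
print(lhs, 1 - zeta(2)/2)          # the two values agree

# (ii) int_0^1 sin(2 pi x) log Gamma(x+1) dx = Ci(2 pi)/(2 pi)
lhs2 = quad(lambda x: sin(2*pi*x)*loggamma(x + 1), [0, 1])
print(lhs2, ci(2*pi)/(2*pi))       # the two values agree
\end{verbatim}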
\section{Introduction}\label{sec1} Recent advancements in observational astrophysics have paved the way for the release of the first image of a supermassive black hole \cite{2}. Black holes are one of the most intriguing predictions of the general theory of relativity. It is predicted that supermassive black holes form the core of almost all galaxies in the universe. The recent image released by the Event Horizon Telescope (\textit{EHT}) group depicts the shadow of the supermassive black hole residing at the center of the Messier 87 galaxy \cite{2}. Theoretical analysis of the formation of the image of black holes (the \textit{black hole shadow}) started with the work of Synge \cite{3}, who demonstrated the shadow of a spherically symmetric black hole. Later, Luminet \cite{4} studied the shadow of a spherically symmetric black hole surrounded by accretion disks. Shadows are dark disks formed by light coming from a background source when a black hole lies between the source and the observer, with no source between the observer and the black hole. Due to gravitational lensing, light from the source bends around the black hole and makes its way out to the observer, imaging the black hole boundary. The corresponding studies for Kerr and Kerr-Newman black holes were carried out in \cite{5}, \cite{6} and \cite{7}. With improvements in observational data from time to time, black hole shadows have been studied in GR \cite{8}-\cite{f}, in modified gravity theories \cite{26}-\cite{32}, as well as in higher dimensional gravity theories \cite{33}-\cite{35}. Besides, black hole shadow analyses have been carried out taking into account the expansion of the universe \cite{36},\cite{37}.
\noindent General relativity has impressed us time and again with its impeccable predictions. But it still fails to incorporate two of the major constituents of the universe: dark energy and dark matter. These two comprise nearly 95-96 \% of the total mass-energy content of the universe. Studies were conducted using dark energy models comprising the cosmological constant ($\Lambda$) \cite{38},\cite{39}, quintessence \cite{40},\cite{41}, etc. Here, however, we are interested in studying the effects of dark matter ($DM$). Dark matter is non-baryonic and non-luminous in nature, comprising about 27 \% of the mass-energy content of the universe. Many observations, such as galaxy rotation curves \cite{42} and the dynamical motion of galaxy clusters \cite{43}, predicted its existence. The widely accepted model for dark matter is the cold dark matter ($CDM$) model, whose primary candidates are WIMPs. But the $CDM$ model breaks down at small scales, as has been studied in detail in the review work in \cite{001}. In order to account for the drawbacks of the $CDM$ model, warm and fuzzy dark matter models have been proposed. These models fall into the category of perfect fluid dark matter ($PFDM$). The perfect fluid model was first introduced in \cite{44}, \cite{45}, and further works were carried out in \cite{46}. Recently, the $PFDM$ model has been studied together with the cosmological constant in \cite{47}-\cite{49}. Also, the shadow of a rotating black hole in $PFDM$ has been worked out in \cite{50}. In \cite{1}, circular geodesics were studied in detail in the black hole spacetime with $PFDM$. Black holes surrounded by $PFDM$ in Rastall gravity were studied in \cite{x}. Evidence for the presence of dark matter near black holes has been discussed in \cite{68}. Besides, there is a review work on the analytical study of black hole shadows in \cite{1aa}.
\noindent We plan to study the shadow of a charged rotating black hole in perfect fluid dark matter ($PFDM$) surrounded by plasma. In a general scenario, black holes remain surrounded by material media. The recent observations of the $M87^{*}$ central black hole support the presence of plasma \cite{67}. Hence we investigate the general case where black holes are surrounded by some medium. Studies of black hole shadows in the presence of plasma have been conducted in \cite{51}-\cite{60}. We aim to study the black hole shadow considering a radial power law distribution of plasma. We also consider the cases of both homogeneous \cite{61} and inhomogeneous plasma. Besides, we consider plasma whose frequency ($\omega_p$) depends on both the $r$ and $\theta$ coordinates.
\noindent The paper is organised as follows. In section \ref{sec1}, we give a brief overview of the literature on black hole shadows in general and in the presence of plasma. In section \ref{sec2}, we discuss the system of a rotating charged black hole in $PFDM$. Then in section \ref{sec3}, we continue the discussion of the black hole system in the presence of plasma. Later in section \ref{sec4}, we discuss the circular geodesics. Here we focus on studying the co-rotating and counter rotating photon orbits moving in the equatorial plane. Then we study the black hole shadow both in an arbitrary plane and near the equatorial plane. In section \ref{sec5}, we study the impact of the spacetime parameters \Big($a$, $Q$, $\chi$, $k$, $\frac{\omega_c}{\omega_0}$\Big) on the shadow of the black hole. Later in section \ref{sec6} we look into the effective potential ($V_{eff}$) encountered by the photons in the black hole spacetime. Further, we compute the shadow radius $R_s$ and the angular shadow radius $\theta_d$ in section \ref{observation}, and constrain various parameters by using the observed value of $\theta_d$ obtained from the $M87^*$ data. Finally, in section \ref{sec7}, we conclude by discussing our observations in detail. We work in geometric units, where we set $c = G = 1$. Apart from the discussion and analysis, all our mathematical calculations and results are obtained using $M=1$.
\section{Black hole spacetime in perfect fluid dark matter}\label{sec2} We consider a charged black hole surrounded by perfect fluid dark matter ($PFDM$). The corresponding action in (3+1) dimensions is given as \cite{1} \begin{equation}\label{x1} S=\int d^4 x\sqrt{-g}\Big(\frac{R}{16\pi}- \frac{1}{4}F^{\mu \nu}F_{\mu \nu} +\mathcal{L}_{DM}\Big) \end{equation} with $R$ being the Ricci scalar and $F_{\mu \nu}$ the Maxwell field strength tensor. $\mathcal{L}_{DM}$ is the Lagrangian density of the $PFDM$. Extremizing the action gives the Einstein field equations \cite{48} \begin{equation}\label{x2} R_{\mu \nu}-\frac{1}{2}g_{\mu \nu}R=8\pi \Big( T_{\mu\nu}^{M}-T_{\mu\nu}^{DM}\Big)~. \end{equation} In the above equation, $T_{\mu\nu}^{M}$ and $T_{\mu\nu}^{DM}$ are the energy-momentum tensors of ordinary matter and of the $PFDM$ respectively. The components of the energy-momentum tensors can be expressed as \cite{47}, \cite{x} \begin{eqnarray} (T^{\mu}_{\;\;\nu})^{M} = diag\left(-\frac{Q^2}{8\pi r^4}, -\frac{Q^2}{8\pi r^4}, \frac{Q^2}{8\pi r^4}, \frac{Q^2}{8\pi r^4}\right) \end{eqnarray} \begin{eqnarray} (T^{\mu}_{\;\;\nu})^{DM} = diag(-\rho, p_r, p, p) \end{eqnarray} where $Q$ is the black hole charge, $\rho$ is the energy density of the $PFDM$, $p_r$ is its radial pressure and $p$ its (common) tangential pressure.
In order to solve the Einstein field equations, we assume a static, spherically symmetric metric ansatz of the form \begin{equation} ds^2 =-e^{2\gamma}dt^2 + e^{-2\gamma}dr^2 + r^2 (d\theta ^2 + \sin ^2 \theta d\phi^2) \end{equation} where $\gamma$ is a function of $r$ only. Replacing the metric ansatz in eq.\eqref{x2}, the different components of the Einstein equations take the form \begin{equation}\label{6a} e^{2\gamma}\Big(\frac{1}{r^2}+\frac{2\gamma^{'}}{r}\Big)-\frac{1}{r^2}=8\pi \Big(-\frac{Q^2}{8\pi r^4} +\rho \Big) \end{equation} \begin{equation}\label{7a} e^{2\gamma}\Big(\frac{1}{r^2}+\frac{2\gamma^{'}}{r}\Big)-\frac{1}{r^2}=8\pi \Big(-\frac{Q^2}{8\pi r^4} - p_r\Big) \end{equation} \begin{equation}\label{8a} e^{2\gamma}\Big(\gamma{''}+2\gamma{'}^2+\frac{2\gamma^{'}}{r}\Big)=8\pi \Big(\frac{Q^2}{8\pi r^4} - p\Big) \end{equation} where the prime ($'$) denotes the derivative with respect to the radial coordinate ($r$). Subtracting eq.\eqref{7a} from eq.\eqref{6a} gives \begin{equation} \rho + p_r = 0~~ \Rightarrow ~~ p_r = -\rho~. \end{equation} Taking the equation of state for the $PFDM$ as $\frac{p}{\rho}=(\delta - 1)$ and taking the ratio of eq.(s) \eqref{6a} and \eqref{8a}, we get \begin{equation}\label{1203} e^{2\gamma}\Big(\gamma{''}+2\gamma{'}^2+\frac{2\gamma^{'}}{r}\Big)-\frac{Q^2}{ r^4}=(1-\delta)\Bigg[e^{2\gamma}\Big(\frac{1}{r^2}+\frac{2\gamma^{'}}{r}\Big)-\frac{1}{r^2}+\frac{Q^2}{ r^4}\Bigg]~. \end{equation} To solve the above equation, we set $2\gamma=\ln(1-U)$, which yields \begin{equation}\label{1204} U^{''}+ 2\delta\frac{U^{'}}{r} + 2(\delta -1)\frac{U}{r^2} + 2(2-\delta)\frac{Q^2}{r^4}=0~. \end{equation} Eq.\eqref{1204} can be solved for different values of $\delta$ \cite{46}. In this case, we are particularly interested in the solution for $\delta=\frac{3}{2}$ \cite{44,46}. For $\delta=\frac{3}{2}$, eq.(\ref{1204}) reduces to the following form \begin{equation} r^2 U^{''}+3rU^{'} + U + \frac{Q^2}{r^2}=0~. \end{equation} The solution of the above equation is obtained to be \begin{equation} U(r)=\frac{r_s}{r}-\frac{Q^2}{r^2}-\frac{\chi}{r}\ln\Big(\frac{r}{|\chi|}\Big) \end{equation} where $r_s$ and $\chi$ are integration constants. In order to evaluate $r_s$, we set $Q=0$ and take the limit $\chi \to 0$. In this limit, the weak field approximation gives $r_s = 2M$. Thus, the lapse function takes the form \begin{eqnarray}\label{05} e^{2\gamma} = 1-U = 1-\frac{2M}{r}+\frac{Q^2}{r^2}+\frac{\chi}{r}\ln\Big(\frac{r}{|\chi|}\Big)~. \end{eqnarray} The spacetime metric with $e^{2\gamma}=f(r)$ takes the form \begin{equation} ds^2 =-f(r)dt^2 + \frac{1}{f(r)}dr^2 + r^2 (d\theta ^2 + \sin ^2 \theta d\phi^2) \end{equation} with $f(r)$ given by \begin{equation} f(r)=1-\frac{2M}{r}+\frac{Q^2}{r^2}+\frac{\chi}{r}\ln\Big(\frac{r}{|\chi|}\Big)~. \end{equation} Replacing the above solution of eq.\eqref{05} in eq.\eqref{6a}, we get \begin{equation}\label{08} \rho = \frac{\chi}{8\pi r^3} \end{equation} where $\rho$ is the energy density of the dark matter, which for $c=1$ corresponds to the mass density. Since the energy density must be positive by the weak energy condition, that is, $\rho \geq 0$, we must have $\chi \geq 0$. Hence the dark matter parameter $\chi$ in the black hole solution (eq.\eqref{05}) is positive \cite{49,50}. It is important to note that if we incorporate $T_{\mu \nu}^{DM}$ in eq.\eqref{x2} with a positive sign, then the weak energy condition would imply $\chi \leq 0$. This would then result in a completely different black hole solution \cite{1b}. We shall not investigate that solution here. We point out that the bound on $\chi$ is $0 < \chi < 2M$ in the former case and $-7.18M < \chi < 0$ in the latter case \cite{47}. It is also important to note from eq.\eqref{08} that the dark matter parameter $\chi$ increases with the dark matter density $\rho$.
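Since eq.\eqref{1204} with $\delta=\frac{3}{2}$ is linear, the solution $U(r)$ is straightforward to verify symbolically. Below is a minimal sympy sketch of ours (the symbol names are our own choice; $\chi>0$ is assumed, so $|\chi|=\chi$):
\begin{verbatim}
import sympy as sp

r, r_s, Q, chi = sp.symbols('r r_s Q chi', positive=True)
U = r_s/r - Q**2/r**2 - (chi/r)*sp.log(r/chi)   # chi > 0, so |chi| = chi
# residual of r^2 U'' + 3 r U' + U + Q^2/r^2 = 0
residual = r**2*sp.diff(U, r, 2) + 3*r*sp.diff(U, r) + U + Q**2/r**2
print(sp.simplify(residual))   # prints 0
\end{verbatim}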
The rotating version of the solution was obtained using the Newman-Janis algorithm in \cite{1}. The solution has the form \begin{align}\label{7} ds^2 =-\frac{1}{\zeta^2}\Big(\Delta -a^2 \sin^2 \theta\Big)dt^2 +\frac{\zeta^2}{\Delta}dr^2 + \zeta^2 d\theta^2 -\frac{2a\sin^2 \theta}{\zeta^2}\Big[r^2 + a^2 - \Delta \Big] dtd\phi \nonumber \\ + \sin^2 \theta \Big[r^2 + a^2 + \frac{a^2 \sin^2 \theta}{\zeta^2}\Big(r^2 + a^2 - \Delta\Big)\Big]d\phi^2 \end{align}\\ \noindent with \begin{equation} \Delta=r^2 +a^2 -2Mr +Q^2+\chi r \ln\Big(\frac{r}{|\chi|}\Big) , ~ \zeta^2 = r^2 + a^2 \cos^2 \theta~. \end{equation} \begin{figure}[H] \flushleft \begin{minipage}[b]{0.45\textwidth} {\includegraphics[width=\textwidth]{Horizon_1.eps}} \end{minipage} \hspace{1.5cm} \begin{minipage}[b]{0.45\textwidth} {\includegraphics[width=\textwidth]{Horizon_2.eps}} \end{minipage} \caption{Plot of $\Delta(r)$ with respect to $r$ for $a$=0.4 and $Q$=0.2, for $\chi < \chi_c$ (left panel) and $\chi > \chi_c$ (right panel), with $\chi_c=0.498$.} \label{12} \end{figure}
\noindent The black hole event horizons are obtained from the condition $\Delta = 0$. On analysing the spacetime, we find that the black hole has two horizons. The inner horizon lies close to $r=0$, but not at $r=0$, as can be seen from the plots; the outer horizon lies close to $r=1.5$, where we have set $M=1$. The presence of $PFDM$ does not increase the number of horizons of the black hole, yet it does modify the location of the horizon surfaces, which can be observed by analysing $\Delta$. \begin{table}[h]\label{tablea} \centering \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{}\\ \hline $a$ & $Q$ & $\chi_c$ \\ \hline 0.1 & 0.2 & 0.529\\ \hline 0.3 & 0.2 & 0.513\\ \hline 0.5 & 0.2 & 0.477\\ \hline \end{tabular} \hspace{2.5cm} \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{}\\ \hline $a$ & $Q$ & $\chi_c$ \\ \hline 0.4 & 0.0 & 0.508\\ \hline 0.4 & 0.3 & 0.488\\ \hline 0.4 & 0.6 & 0.414\\ \hline \end{tabular} \caption{\footnotesize Variation of the critical value $\chi_c$ of the $PFDM$ parameter with the black hole spin $a$ and charge $Q$.} \label{Fig11} \end{table}
\noindent We observe from Fig.\ref{12} and Table \ref{Fig11} that, for fixed values of $M$, $a$ and $Q$, as we increase the $PFDM$ parameter $\chi$, the outer event horizon ($r_{h+}$) initially decreases up to a critical value ($\chi_c$) and then starts increasing, whereas the inner event horizon ($r_{h-}$) decreases throughout. The reason for such an observation can be assigned to the fact that $PFDM$ contributes to the mass of the system \cite{50}. We can explain this observation by regarding the system as a composition of two parts: the original black hole of mass $M$ and the contribution of the $PFDM$ with mass $M_{0}$. When $\chi < \chi_c$, the total system mass is dominated by $M$, with an inhibition coming from the $PFDM$ mass $M_{0}$. As $\chi$ increases, the inhibition increases, which gets reflected in the black hole event horizon ($r_{h+}$). Just at the point when $\chi \geq \chi_c$, the $PFDM$ mass $M_{0}$ becomes greater than the original black hole mass $M$, and hence the effective mass and the effective horizon are dictated by the mass of the $PFDM$. The reason for this is that the dark matter density is directly related to the dark matter parameter $\chi$ (eq.\eqref{08}). Thus, for $\chi > \chi_c$, an increase in $\chi$ results in an increase of the event horizon radius ($r_{h+}$). The horizon structure is straightforward to reproduce numerically, as in the sketch below.
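A minimal numerical sketch of ours (the bracketing intervals are chosen by inspecting the plots; the parameter values follow the left panel of Fig.\ref{12}) that locates the two roots of $\Delta(r)=0$ with a standard bracketing solver:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

M = 1.0
def Delta(r, a=0.4, Q=0.2, chi=0.2):
    return r**2 + a**2 - 2*M*r + Q**2 + chi*r*np.log(r/abs(chi))

# Delta(0+) = a^2 + Q^2 > 0, so the inner horizon sits close to,
# but not at, r = 0
r_inner = brentq(Delta, 0.01, 0.5)   # ~ 0.098
r_outer = brentq(Delta, 0.5, 3.0)    # ~ 1.47
print(r_inner, r_outer)
\end{verbatim}
The critical value $\chi_c$ quoted in Table \ref{Fig11} can then be estimated by repeating the computation of the outer root over a grid of $\chi$ values and locating the minimum.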
\section{Rotating charged black hole with perfect fluid dark matter immersed in plasma} \label{sec3} Here we consider the rotating charged black hole surrounded by $PFDM$ to be immersed in plasma. We assume that there is no interaction between the $PFDM$ and the plasma. The consideration of plasma is realistic, since most black holes are surrounded by material media. Plasma is a dispersive medium in which light rays deviate depending on their frequency. The Hamiltonian for the light rays can be derived using Maxwell's equations, considering the source to be made of two charged fluids (electrons and ions). For the study of plasma in a curved background, we can consider magnetised \cite{62} as well as non-magnetised plasma \cite{63}. In our study we consider non-magnetised, pressureless (dustlike) plasma. The Hamiltonian for light rays in the presence of plasma in curved spacetime is provided in \cite{64}. It is applicable even in our case, since dark matter interacts with plasma through gravity alone. The Hamiltonian has the form \cite{51} \begin{equation}\label{222} H(x^{\mu}, p_{\mu}) = \frac{1}{2}\Big[ g^{\mu \nu}p_{\mu}p_{\nu} + \omega_p (r) ^2 \Big] \end{equation} where $\omega_p(r)$ is the plasma frequency, considered to have radial dependence only. This is a simplifying assumption for a rotating black hole, since the plasma frequency should in general be a function of both $r$ and $\theta$. The refractive index ($n$) of the material medium depends both on the plasma frequency ($\omega_p$) and on the frequency of the photons ($\omega$) measured by a static observer. The expression for $n (r,\omega)$ has the form \cite{65} \begin{equation} n^2(r, \omega) = 1 - \Big(\frac{\omega_p(r)}{\omega}\Big)^2~. \end{equation} For an observer having 4-velocity $u^\mu$, the effective energy of the photon as measured by the observer is $\hbar \omega = -p_\mu u^\mu$. In our case, the observer is static, hence $\hbar\omega = -p_{0} u^0 =-p_0 \sqrt{-g^{00}}$ \cite{65}. Replacing the expressions for the refractive index of the plasma medium and the photon energy in eq.\eqref{222}, the Hamiltonian becomes \begin{equation}\label{223} H(x^{\mu}, p_{\mu}) = \frac{1}{2}\Big[ g^{\mu \nu}p_{\mu}p_{\nu} - (n^2 -1)\Big(p_0 \sqrt{-g^{00}}\Big)^2 \Big]~. \end{equation} In order to solve for the geodesics, we need an explicit form for the plasma frequency $\omega_p$. Since $\omega_p$ depends only on $r$, the plasma can have many forms of distribution. Here we assume a distribution studied extensively in the literature, the radial power law distribution, with $\omega_p$ given as \cite{65} \begin{equation}\label{2x} \omega_p ^2 = \frac{4\pi e^2 N(r)}{m_e} \end{equation} where $e$ is the electronic charge, $m_e$ is the mass of the electron and $N(r)$ is the plasma number density. Note that eq.\eqref{2x} is relevant when the spacetime is spherically symmetric. \noindent The plasma number density, following \cite{64}, \cite{65}, has the form \begin{equation} N(r)=\frac{N_0}{r^h} \end{equation} where $N_0$ is a constant and $h$ can take integer values with $h \geq 0$. The final form of the refractive index $n(r)$, along with $\frac{\omega_p}{\omega}$, becomes \begin{equation} \Big(\frac{\omega_p}{\omega}\Big)^2 = \frac{k}{r^h}~~;~~n(r)=\sqrt{1-\frac{k}{r^h}}~. \end{equation}
Here $k$ is a constant which quantifies the amount of plasma around the black hole, and $h$ = 1, 2, 3 \cite{65}. In this study we shall first consider $h=0$, which results in $n(r)=$ constant, corresponding to a homogeneous plasma medium. Later we will consider the simplest dependence on $r$, with $h=1$; this second case will be regarded as an inhomogeneous plasma distribution. \noindent We also consider a case where the plasma frequency $\omega_p (r, \theta)$ depends on both $r$ and $\theta$, such that \begin{equation}\label{200} \omega_p (r, \theta)^2 = \frac{f_r (r) + f_{\theta} (\theta)}{r^2 + a^2 \cos^2 \theta}~. \end{equation} We have carried out the shadow analysis considering this general case in the subsequent discussion. Note that the above form of the plasma distribution (eq.\eqref{200}) is relevant when the spacetime is axially symmetric.
\section{Study of circular geodesics in plasma medium}\label{sec4} \subsection{Co-rotating and counter rotating photon orbits} The photons in the rotating black hole spacetime can move along the direction of the black hole spin (co-rotating photons) as well as opposite to the black hole spin (counter rotating photons). In order to determine the radius ($r_p$) of these photon orbits, we need to determine the radial geodesic equation and impose the condition for circular geodesics. For this, we use the Hamiltonian ($\mathcal{H}$) to determine the trajectories of the particles in the equatorial plane. The Hamiltonian given in eq.\eqref{223} takes the form \begin{equation}\label{20} \mathcal{H}=\frac{1}{2}\Big[ g^{\mu \nu}p_{\mu}p_{\nu} + (n^2 -1)g^{00}p_0 ^2 \Big] \end{equation} with $\mathcal{H}=0$ for photons. Since we are interested in the geodesics in the equatorial plane, we set $\theta = \frac{\pi}{2}$ and thus $\dot{\theta}=0$. \noindent The geodesics can be calculated using Hamilton's equations of motion, given by \begin{equation} \dot{x}^{\mu}=\frac{\partial \mathcal{H}}{\partial p_{\mu}}~~;~~ \dot{p}_{\mu}=-\frac{\partial \mathcal{H}}{\partial x^{\mu}}~. \end{equation} The Hamiltonian depends on the metric ($g_{\mu \nu}$) as well as on the refractive index of the plasma ($n(r)$). The metric has no explicit dependence on $x^0 (=t)$ and $x^3 (=\phi)$. So Hamilton's second equation of motion gives two constants of motion, $p_0 = -E$ and $p_3 = L_{\phi}$, where $E$ and $L_{\phi}$ respectively give the energy and the angular momentum of the photon as observed by a stationary observer at infinity. The geodesics corresponding to $t$ and $\phi$ in the equatorial plane become \begin{equation} r^2 \dot{t} = \frac{r^2 + a^2}{\Delta}\Big[n^2(r^2 + a^2)E - aL_{\phi}\Big] + a\Big[L_{\phi} - an^2 E \Big] \end{equation} \begin{equation} r^2 \dot{\phi} = \frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi} - a E \Big]~. \end{equation} Now $p_r = \frac{\partial \mathcal{L}}{\partial \dot{r}} = \frac{\partial S}{\partial r} = g_{rr} \dot{r}$, with $\mathcal{L}=\frac{1}{2}g_{\mu \nu}\dot{x}^{\mu}\dot{x}^{\nu}$. The dots in the above equations denote derivatives with respect to the affine parameter $\lambda$. Since light rays cannot be parametrized in terms of the proper time ($\tau$), we need to parametrize them with some other parameter; this is done using the affine parameter ($\lambda$).
With the above equations in hand, we get from eq.\eqref{20} \begin{equation}\label{21} \dot{r}^2 = \frac{1}{r^4}\Bigg[\Big(E(r^2 + a^2) - aL_{\phi}\Big)^2 - \Delta(aE-L_{\phi})^2 + (n^2 - 1)\Big(E^2 (r^2 + a^2)^2 -\Delta E^2 a^2 \Big) \Bigg]~. \end{equation} In order to obtain the two kinds of orbits discussed above, we define the impact parameter $D=\frac{L_{\phi}}{E}$, in terms of which the radial eq.\eqref{21} reads \begin{equation}\label{22} \dot{r}^2 = \frac{E^2}{r^4}\Bigg[\Big((r^2 + a^2) - aD\Big)^2 - \Delta(a-D)^2 + (n^2 - 1)\Big( (r^2 + a^2)^2 -\Delta a^2 \Big) \Bigg]~. \end{equation} \noindent Rearranging eq.\eqref{22}, we have \begin{equation}\label{23} \frac{r^2\dot{r}^2}{E^2} =\frac{1}{r^2}\Bigg[\Big((r^2 + a^2) - aD\Big)^2 - \Delta(a-D)^2 + (n^2 - 1)\Big( (r^2 + a^2)^2 -\Delta a^2 \Big) \Bigg]=F(r)~. \end{equation} Imposing the conditions for circular geodesics, that is, $F(r)=0=F'(r)$, we get \begin{equation}\label{47} r^2 + (a^2 - D^2) + \frac{2M}{r}(a-D)^2 -\frac{Q^2}{r^2}(a-D)^2 - \frac{\chi}{r}(a-D)^2 \ln\Big(\frac{r}{|\chi|}\Big) + (n^2 -1)\Big(r^2 +a^2 +a^2\Big(\frac{2M}{r} - \frac{Q^2}{r^2} - \frac{\chi}{r}\ln\Big(\frac{r}{|\chi|}\Big)\Big)\Big)=0~. \end{equation} \begin{eqnarray}\label{24} 2r -\frac{2M}{r^2}(a-D)^2 + \frac{2Q^2}{r^3}(a-D)^2 + \frac{\chi}{r^2}(a-D)^2\ln(\frac{r}{|\chi|})-\frac{\chi}{r^2}(a-D)^2 + \nonumber \\ (n^2 -1)\Big(2r-a^2\Big(\frac{2M}{r^2} - \frac{2Q^2}{r^3} - \frac{\chi}{r^2}\ln(\frac{r}{|\chi|}) + \frac{\chi}{r^2} \Big)\Big) + 2nn' \Big(r^2 +a^2 +a^2\Big(\frac{2M}{r} - \frac{Q^2}{r^2} - \frac{\chi}{r}\ln\Big(\frac{r}{|\chi|}\Big)\Big)\Big)=0 \end{eqnarray} where $n^{'}\equiv \frac{dn}{dr}$. Solving for $(a-D)$ in eq.\eqref{24}, we have \begin{footnotesize} \begin{eqnarray} (a-D)=\pm \sqrt{\frac{2r^5 + (n^2 -1)\Big[2r^5 - a^2\Big(2Mr^2 - 2Q^2 r - \chi r^2 \ln(\frac{r}{|\chi|})+\chi r^2\Big)\Big] + 2nn'\Big[r^4(r^2 + a^2) + a^2\Big(2Mr^3 - Q^2 r^2 - \chi r^3 \ln(\frac{r}{|\chi|})\Big)\Big]}{2Mr^2 - 2Q^2 r - \chi r^2 \ln(\frac{r}{|\chi|})+\chi r^2}}~. \end{eqnarray} \end{footnotesize} Thus the impact parameter becomes \begin{footnotesize} \begin{eqnarray}\label{50} D=a\mp \sqrt{\frac{2r^5 + (n^2 -1)\Big[2r^5 - a^2\Big(2Mr^2 - 2Q^2 r - \chi r^2 \ln(\frac{r}{|\chi|})+\chi r^2\Big)\Big] + 2nn'\Big[r^4(r^2 + a^2) + a^2\Big(2Mr^3 - Q^2 r^2 - \chi r^3 \ln(\frac{r}{|\chi|})\Big)\Big]}{2Mr^2 - 2Q^2 r - \chi r^2 \ln(\frac{r}{|\chi|})+\chi r^2}}~. \end{eqnarray} \end{footnotesize} The $\mp$ sign corresponds to counter rotating and co-rotating geodesics of the photons moving in the black hole spacetime, respectively. In the case of a background devoid of plasma, that is, $n=1$, $n' =0$, we have \begin{equation} D=a\mp \sqrt{\frac{2r^5}{2Mr^2 - 2Q^2 r - \chi r^2 \ln(\frac{r}{|\chi|})+\chi r^2}} \end{equation} which corresponds to the expression derived in \cite{1}. Replacing $D$ from eq.\eqref{50} in eq.\eqref{47}, we get an equation in $r$, devoid of the constants $E$ and $L_{\phi}$, which depends only on the spacetime and plasma parameters ($M$, $Q$, $\chi$, $n$). The solution of this equation gives the photon orbit radius ($r_p$) for both co-rotating and counter rotating orbits; a numerical sketch of this root-finding procedure is given below. The Tables that follow show the photon orbit radius ($r_p$) for both kinds of orbits with variation in the plasma parameter ($k$); all values are obtained with $M=1$.
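A minimal Python sketch of the procedure (our own cross-check; the scan window, the bracketing strategy and the function names are our choices, and the printed values should match the inhomogeneous-plasma entries of the Tables below up to rounding):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

M, a, Q, chi = 1.0, 0.5, 0.3, 0.2

def lnx(r):
    return np.log(r/abs(chi))

def n(r, k):                     # inhomogeneous plasma, n = sqrt(1 - k/r)
    return np.sqrt(1.0 - k/r)

def nprime(r, k):                # dn/dr
    return k/(2.0*r**2*n(r, k))

def F(r, s, k):                  # eq. (47) with D(r) of eq. (50) substituted
    den = 2*M*r**2 - 2*Q**2*r - chi*r**2*lnx(r) + chi*r**2
    num = (2*r**5 + (n(r, k)**2 - 1)*(2*r**5 - a**2*den)
           + 2*n(r, k)*nprime(r, k)*(r**4*(r**2 + a**2)
             + a**2*(2*M*r**3 - Q**2*r**2 - chi*r**3*lnx(r))))
    d = a + s*np.sqrt(num/den)   # s = +1: co-rotating, s = -1: counter rotating
    w = 2*M/r - Q**2/r**2 - (chi/r)*lnx(r)
    return (r**2 + a**2 - d**2 + w*(a - d)**2
            + (n(r, k)**2 - 1)*(r**2 + a**2 + a**2*w))

def photon_radius(s, k, r_lo=1.40, r_hi=4.0, N=4000):
    rs = np.linspace(r_lo, r_hi, N)
    v = F(rs, s, k)
    i = np.where(np.sign(v[:-1]) != np.sign(v[1:]))[0][0]  # first sign change
    return brentq(F, rs[i], rs[i + 1], args=(s, k))

print(photon_radius(+1, 0.2), photon_radius(-1, 0.2))   # ~ 1.51 and ~ 2.76
\end{verbatim}
The scan window may need adjusting for other parameter choices.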
\begin{table}[h] \centering \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 0.2$}}\\ \hline $k$ & $r_{p1}$ \\ \hline 0.0 & 1.645\\ \hline 0.2 & 1.480 \\ \hline 0.27 & 1.381\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 1.0$}}\\ \hline $k$ & $r_{p1}$ \\ \hline 0.0 & 1.742\\ \hline 0.2 & 1.632 \\ \hline 0.38 & 1.418\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 0.2$}}\\ \hline $k$ & $r_{p1}$ \\ \hline 0.0 & 1.645\\ \hline 0.2 & 1.507 \\ \hline 0.31 & 1.385\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 1.0$}}\\ \hline $k$ & $r_{p1}$ \\ \hline 0.0 & 1.742\\ \hline 0.2 & 1.632 \\ \hline 0.4 & 1.492\\ \hline \end{tabular} \caption{\footnotesize Radius ($r_{p1}$) of the co-rotating (prograde) photon orbits with variation in the plasma parameter ($k$), for black hole spin $a$=0.5 and charge $Q$=0.3. The first two tables are for homogeneous plasma \Big($n=\sqrt{1-k}$\Big) and the next two for inhomogeneous plasma \Big($n=\sqrt{1-\frac{k}{r}}$\Big). } \label{Fig1} \end{table}
\noindent Table \ref{Fig1} shows that with an increase in the plasma parameter $k$, the radius of the co-rotating photon orbits $r_{p1}$ decreases for both homogeneous and inhomogeneous plasma; the orbits thus move closer to the event horizon ($r_{h+}$) of the black hole. The effect of the plasma parameter $k$ on the radius of the orbits remains the same for both $\chi < \chi_c$ and $\chi > \chi_c$. The critical value of the $PFDM$ parameter for black hole spin $a=0.5$ and charge $Q=0.3$ is 0.467. We also find that the photon orbits do not exist for all possible values of the plasma parameter $k$. Beyond a certain critical value of the plasma parameter, $k=k_c$, the photon radius drops below the event horizon radius ($r_{h+}$) and thus has no physical existence. Thus there is a bound on the value of $k$, depending on the combination of the black hole parameters ($M$, $Q$, $\chi$). \begin{table}[h] \centering \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 0.2$}}\\ \hline $k$ & $r_{p2}$ \\ \hline 0.0 & 2.782\\ \hline 0.2 & 2.825 \\ \hline 0.4 & 2.886\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 1.0$}}\\ \hline $k$ & $r_{p2}$ \\ \hline 0.0 & 2.608\\ \hline 0.2 & 2.640 \\ \hline 0.4 & 2.685\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 0.2$}}\\ \hline $k$ & $r_{p2}$ \\ \hline 0.0 & 2.782\\ \hline 0.2 & 2.761 \\ \hline 0.4 & 2.737\\ \hline \end{tabular} \hspace{1.5cm} \begin{tabular}{|c|c|} \multicolumn{2}{c}{\textbf{$\chi = 1.0$}}\\ \hline $k$ & $r_{p2}$ \\ \hline 0.0 & 2.608\\ \hline 0.2 & 2.587 \\ \hline 0.4 & 2.564\\ \hline \end{tabular} \caption{\footnotesize Radius ($r_{p2}$) of the counter rotating (retrograde) photon orbits with variation in the plasma parameter ($k$), for black hole spin $a$=0.5 and charge $Q$=0.3. The first two tables are for homogeneous plasma \Big($n=\sqrt{1-k}$\Big) and the next two for inhomogeneous plasma \Big($n=\sqrt{1-\frac{k}{r}}$\Big). } \label{Fig2} \end{table}
\noindent Table \ref{Fig2} shows that with an increase in the plasma parameter $k$, the radius of the counter rotating photon orbits $r_{p2}$ increases in the case of a homogeneous plasma distribution, whereas it decreases in the case of an inhomogeneous distribution. The effect of the plasma parameter $k$ on the radius of the orbits remains the same for both $\chi < \chi_c$ and $\chi > \chi_c$. Here too, the photon orbits exist only for certain combinations of the spacetime parameters and the plasma parameter $k$, which results in a bound on the plasma parameter $k$. The bound $k_c$ can be located numerically, as in the continuation sketch below.
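The following continuation of the previous listing (not self-contained: it reuses \texttt{F}, \texttt{photon\_radius}, \texttt{lnx} and the parameter values defined there; the step size and tolerances are arbitrary choices of ours) scans $k$ until the co-rotating orbit no longer lies outside the outer horizon:
\begin{verbatim}
Delta = lambda r: r**2 + a**2 - 2*M*r + Q**2 + chi*r*lnx(r)
r_hor = brentq(Delta, 1.0, 3.0)          # outer horizon (plasma independent)

k_val, k_c = 0.0, None
while k_c is None and k_val < 1.0:
    try:
        rp = photon_radius(+1, k_val, r_lo=r_hor + 1e-3)
        if rp <= r_hor + 0.01:
            k_c = k_val
    except IndexError:                   # no root left above the horizon
        k_c = k_val
    k_val += 0.005
print("k_c is near", k_c)
\end{verbatim}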
\subsection{Black hole shadow} In this section, we calculate the black hole shadow. Shadows are formed due to the bending of light rays near regions of strong gravity. When light from a distant source comes close to a black hole, the light rays get deflected. The deflected rays, after encircling the black hole, either plunge into the black hole or escape to infinity. The light rays from the unstable circular orbits reach the observer at infinity and create a \textit{circular} or \textit{deformed circular} boundary curve. The dark disk inside the curve is called the black hole shadow. The shadow is formed in the celestial plane, characterised by the celestial coordinates $\alpha$ and $\beta$. They are defined for an observer at infinity as \begin{multicols}{2} \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} {\includegraphics[width=\textwidth]{Shadow.eps}} \end{minipage} \caption{Celestial coordinates ($\alpha$, $\beta$) \cite{c}.} \label{31} \end{figure} \begin{equation} \alpha = \lim_{r_0\to \infty} -r_0 ^2 \sin \theta_0 \Big(\frac{d \phi}{d r}\Big)\Bigg|_{r_0, \theta_0} \end{equation} \begin{equation} \beta = \lim_{r_0 \to \infty} r_0 ^2 \Big(\frac{d \theta}{d r}\Big)\Bigg|_{r_0, \theta_0} \end{equation} \end{multicols} \noindent where the position of the observer is given by the coordinates $r_0$ and $\theta_0$. As can be seen from Fig.\ref{31}, we consider a black hole residing between the source and the observer. We have shown a tangent to the light ray at the point where the light reaches the observer. The light ray meets the celestial plane at the point ($\alpha, \beta$). Considering all light rays from all possible directions and drawing the tangents that intersect the plane, we get a circular, near circular or deformed circular boundary curve. The celestial coordinate $\alpha$ gives the apparent perpendicular distance of the shadow boundary from the axis of black hole rotation, and the coordinate $\beta$ gives the apparent perpendicular distance of the boundary of the shadow from its projection onto the equatorial plane. $r_0$ gives the radial distance of the observer from the black hole, and $\theta_0$ corresponds to the inclination angle of the observer's line of sight with respect to the axis of rotation. \subsubsection{Case I}\label{I} Here we consider the general case, with the plasma frequency a function of both $r$ and $\theta$, that is, $n (r, \theta)$ \cite{1aa},\cite{54},\cite{60}. We start from the Hamiltonian \begin{equation} \mathcal{H}(x^{\mu}, p_{\mu})=\frac{1}{2}\Big[g^{\mu \nu}p_{\mu}p_{\nu} + \omega_p(r,\theta) ^2\Big] \end{equation} with the refractive index defined as \begin{equation} n(r,\theta)^2 = 1 - \Big(\frac{\omega_p(r,\theta)}{\omega}\Big)^2~. \end{equation} Here, $\omega_p (r, \theta)$ gives the plasma frequency and $\omega$ gives the photon frequency as measured by an arbitrary observer in the domain of outer communication, that is, between the outer event horizon ($r_{h+}$) and infinity. The photon frequency measured by a stationary observer at infinity is $\omega_0$, which is related to $\omega$ by the relation \begin{equation}\label{79} \omega = \frac{\omega_0}{\sqrt{-g_{00}}}=\frac{\omega_0 \zeta}{\sqrt{\Delta -a^2 \sin^2 \theta}}~. \end{equation} We now calculate the geodesics using Hamilton's equations of motion.
The Hamiltonian ($\mathcal{H}$) is independent of $t$ and $\phi$. Hence the corresponding constants of motion are $p_0 = -E$ and $p_3 = L_{\phi}$, where $E$ and $L_{\phi}$ correspond to the energy and angular momentum of the photons as measured by a stationary observer at infinity. Also, $E$ and $\omega_0$ are related as $E=\hbar \omega_0$, which gives $E=\omega_0$ for $\hbar =1$. By using Hamilton's equation of motion $\dot{x}^{\mu}=\frac{\partial \mathcal{H}}{\partial p_{\mu}}$, we get the equations for $t$ and $\phi$ as \begin{equation} \zeta^2 \dot{t} = \frac{r^2 + a^2}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + a\Big[L_{\phi} - a E \sin ^2 \theta\Big] \end{equation} \begin{equation}\label{44} \zeta^2 \dot{\phi} = \frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]~. \end{equation} For evaluating the geodesics of $r$ and $\theta$, we use the Hamilton-Jacobi equation, given for photons as \begin{equation}\label{000} \mathcal{H}\Big(x^{\mu}, \frac{\partial S}{\partial x^{\mu}}\Big)=\frac{1}{2}\Big[g^{\mu \nu}\frac{\partial S}{\partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}} + \omega_p(r,\theta) ^2\Big]=0~. \end{equation} In order to solve the above eq.\eqref{000}, we assume an ansatz of the form \cite{66} \begin{equation} S=-Et + L_{\phi}\phi + S_r (r) + S_{\theta} (\theta) \end{equation} and replace it in eq.\eqref{000} to get \begin{equation}\label{010} -\frac{1}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 + (L_{\phi} \csc \theta - aE\sin \theta )^2 +\Delta \Big(\frac{\partial S_r}{\partial r}\Big)^2 + \Big(\frac{\partial S_\theta}{\partial \theta}\Big)^2 + \omega_p ^2 \zeta^2 = 0~. \end{equation} To separate the above equation into two equations in $r$ and $\theta$, we need to assume a certain form of $\omega_p ^2(r, \theta)$, which reads \cite{54} \begin{equation}\label{85} \omega_p ^2 (r, \theta) = \frac{f_r (r) + f_{\theta} (\theta)}{r^2 + a^2 \cos^2 \theta} \end{equation} where $f_r (r)$ and $f_{\theta} (\theta)$ are functions of $r$ and $\theta$ only, respectively. This form of $\omega_p ^2 (r, \theta)$ leads to the following form for the refractive index $n(r,\theta)$ \begin{equation} n(r,\theta)^2 = 1 - \Big(\frac{f_r (r) + f_{\theta} (\theta)}{(r^2 + a^2 \cos^2 \theta)\omega^2}\Big)~. \end{equation} \noindent This gives us \begin{equation} \Big(\frac{\partial S_\theta}{\partial \theta}\Big)^2 + (L_{\phi} \csc \theta - aE\sin \theta )^2 + f_{\theta} (\theta) = \frac{1}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta \Big(\frac{\partial S_r}{\partial r}\Big)^2 - f_r (r)= \text{constant} = \kappa \end{equation} with $\kappa$ being the generalised Carter constant used to separate the $r$ and $\theta$ geodesics. The equations take the form \begin{equation}\label{k5} \zeta^2 \dot{\theta}= \pm \sqrt{\Theta (\theta)} \end{equation} \begin{equation}\label{k6} \zeta^2 \dot{r}= \pm \sqrt{R (r)} \end{equation} with $\Theta(\theta)$ and $R(r)$ taking the form \begin{equation} \Theta (\theta)= \kappa- (L_{\phi} \csc \theta - aE\sin \theta )^2 - f_{\theta} (\theta) \end{equation} \begin{equation} R(r)= \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta\Big(\kappa + f_{r} (r)\Big)~. \end{equation} The shadow is formed by the photons moving in the unstable circular geodesics.
The conditions for these geodesics are $R(r) = \frac{\partial R(r)}{\partial r} = 0.$ Utilizing these conditions, we get the following equations \begin{equation}\label{k1} \Big[(r^2 + a^2) - a\xi\Big]^2 = \Delta\Big[\eta + \tilde{f}_r (r)\Big] \end{equation} \begin{equation}\label{k2} 4r\Big[(r^2 + a^2) - a\xi\Big] - \Delta \tilde{f}'_r (r) = \Delta^{'}\Big[\eta + \tilde{f}_r (r)\Big] \end{equation} where $\xi = \frac{L_{\phi}}{E}$, $\eta = \frac{\kappa}{E^2}$ and $\tilde{f}_r (r) = \frac{f_r (r)}{E^2}$. We also define another variable, $\tilde{f}_{\theta} (\theta)= \frac{f_{\theta} (\theta)}{E^2}$, which will be needed later. Eliminating $\eta$ from eq.(s) \eqref{k1} and \eqref{k2}, we get an equation in $\xi$ as \begin{equation} A \xi^2 + 2 B \xi + C=0 \end{equation} where \begin{align} A = a^2 \Delta^{'}~~;~~ B = 2ar\Delta - a\Delta^{'}(r^2 + a^2)~;~~\\ \nonumber C = \Big[\Delta^{'}(r^2 + a^2)^2 -4r \Delta (r^2 + a^2) + \Delta ^2 \tilde{f}'_r (r) \Big] \end{align} with $\tilde{f}'_r (r)$ giving the derivative of $\tilde{f}_r (r)$ with respect to $r$. Solving for $\xi$, we get \begin{equation}\label{k3} \xi = -\frac{B}{A} \pm \sqrt{\Big(\frac{B}{A}\Big)^2 - \frac{C}{A}}~. \end{equation} The shadow can be obtained by considering the negative sign in the above expression. The expression for $\eta$ takes the form \begin{equation} \eta = \frac{1}{\Delta}\Bigg[ (r^2 + a^2) - a\xi \Bigg]^2 - \tilde{f}_r (r) ~. \end{equation} From the expressions of the celestial coordinates $\alpha$ and $\beta$, we observe that for determining the black hole shadow, we need the geodesics for $\phi$, $\theta$ and $r$, as given in eq.(s) \eqref{44}, \eqref{k5} and \eqref{k6} respectively. The expressions for $\frac{d \phi}{dr}$ and $\frac{d \theta}{dr}$ take the form \begin{equation}\label{k7} \Big(\frac{d \phi}{d r}\Big) = \frac{\frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]}{\sqrt{\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta\Big(\kappa + f_{r} (r)\Big)}} \end{equation} \begin{equation}\label{k8} \Big(\frac{d \theta}{d r}\Big) = \sqrt{\frac{\kappa - (L_{\phi} \csc \theta - aE\sin \theta )^2 - f_{\theta} (\theta)}{\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta\Big(\kappa + f_{r} (r)\Big)}}~. \end{equation} Using the above eq.(s) \eqref{k7}, \eqref{k8} in the expressions for $\alpha$ and $\beta$, we have \begin{equation}\label{k9} \alpha = -\xi\csc \theta_0~~;~~ \beta =\pm \sqrt{\eta - (\xi \csc \theta_0 - a\sin \theta_0 )^2 - \tilde{f}_{\theta} (\theta_0)}~. \end{equation} The shadow can be observed by plotting $\alpha$ along the X axis and $\beta$ along the Y axis. We show the plots for two different plasma distributions: one for $f_r (r) =\omega_c ^2 \sqrt{M^3 r}$ and $f_{\theta} (\theta)=0$ \cite{54}, and the other for $f_r (r) =0$ and $f_{\theta} (\theta)= \omega_c ^2 M^2 (1 + 2 \sin^2 \theta)$ \cite{54}. A numerical sketch tracing the boundary curve from eq.(s) \eqref{k3} and \eqref{k9} is given below, before the two distributions are specialised.
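The sketch below (our own illustration for the first distribution, $f_r(r)=\omega_c^2\sqrt{M^3 r}$ with $M=1$ and $\tilde{f}_\theta=0$; the scan range and plotting details are arbitrary choices) scans the radii of the unstable orbits and keeps the points with real $\beta$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

M, a, Q, chi = 1.0, 0.4, 0.2, 0.2
wc2 = 1.0                       # (omega_c/omega_0)^2
th0 = np.pi/2                   # observer inclination

def Delta(r):  return r**2 + a**2 - 2*M*r + Q**2 + chi*r*np.log(r/abs(chi))
def dDelta(r): return 2*r - 2*M + chi*(np.log(r/abs(chi)) + 1)

def boundary(r):                # eq. (k3) for xi and the expression for eta
    ftr, dftr = wc2*np.sqrt(r), 0.5*wc2/np.sqrt(r)
    A = a**2*dDelta(r)
    B = 2*a*r*Delta(r) - a*dDelta(r)*(r**2 + a**2)
    C = dDelta(r)*(r**2 + a**2)**2 - 4*r*Delta(r)*(r**2 + a**2) + Delta(r)**2*dftr
    xi  = -B/A - np.sqrt((B/A)**2 - C/A)
    eta = ((r**2 + a**2) - a*xi)**2/Delta(r) - ftr
    beta = np.sqrt(eta - (xi/np.sin(th0) - a*np.sin(th0))**2)   # f_theta = 0
    return -xi/np.sin(th0), beta

r = np.linspace(1.5, 8.0, 4000)            # candidate photon-orbit radii
with np.errstate(invalid='ignore', divide='ignore'):
    al, be = boundary(r)
good = np.isfinite(be)                     # keep only real beta
plt.plot(al[good], be[good], 'k'); plt.plot(al[good], -be[good], 'k')
plt.gca().set_aspect('equal'); plt.xlabel(r'$\alpha$'); plt.ylabel(r'$\beta$')
plt.show()
\end{verbatim}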
For $f_r (r) =\omega_c ^2 \sqrt{M^3 r}$ and $f_{\theta} (\theta)=0$ (with $M=1$), the expressions in eq.(s) \eqref{k7}, \eqref{k8} and \eqref{k9} take the form \begin{equation} \Big(\frac{d \phi}{d r}\Big) = \frac{\frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]}{\sqrt{\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta\Big(\kappa + \omega_c ^2 \sqrt{r}\Big)}} \end{equation} \begin{equation} \Big(\frac{d \theta}{d r}\Big) = \sqrt{\frac{\kappa - (L_{\phi} \csc \theta - aE\sin \theta )^2 }{\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta\Big(\kappa + \omega_c ^2 \sqrt{r}\Big)}} \end{equation} which gives \begin{equation} \alpha = -\xi\csc \theta_0~~;~~ \beta =\pm \sqrt{\eta - (\xi \csc \theta_0 - a\sin \theta_0 )^2}~. \end{equation} Also, for $f_r (r) =0$ and $f_{\theta} (\theta)= \omega_c ^2 M^2 (1 + 2 \sin^2 \theta)$ (with $M=1$), the expressions in eq.(s) \eqref{k7}, \eqref{k8} and \eqref{k9} take the form \begin{equation} \Big(\frac{d \phi}{d r}\Big) = \frac{\frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]}{\sqrt{\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta \kappa }} \end{equation} \begin{equation} \Big(\frac{d \theta}{d r}\Big) = \sqrt{\frac{\kappa - (L_{\phi} \csc \theta - aE\sin \theta )^2 -\omega_c ^2 (1 + 2 \sin^2 \theta) }{\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 -\Delta \kappa }} \end{equation} which gives \begin{equation} \alpha = -\xi\csc \theta_0~~;~~ \beta =\pm \sqrt{\eta - (\xi \csc \theta_0 - a\sin \theta_0 )^2 - \Big(\frac{\omega_c}{\omega_0}\Big) ^2 (1 + 2 \sin^2 \theta_0)}~. \end{equation} \noindent The plots are shown in section \ref{sec5} (Figures \ref{10a} and \ref{10b}), where we have set $M=1$. \subsubsection{Case II}\label{II} In this section, we investigate the black hole shadow for the case where the refractive index $n(r)$ depends only on $r$, with no $\theta$ dependence. For this we need to evaluate the geodesics of the light rays. The Hamiltonian ($\mathcal{H}$) in this case takes the form \begin{equation} \mathcal{H}=\frac{1}{2}\Big[ g^{\mu \nu}p_{\mu}p_{\nu} + (n^2 -1)g^{00}p_0 ^2 \Big]~. \end{equation} \noindent Using Hamilton's equations of motion, we can evaluate the expressions for the $t$ and $\phi$ geodesics in an arbitrary plane, which read \begin{equation} \zeta^2 \dot{t} = \frac{r^2 + a^2}{\Delta}\Big[n^2(r^2 + a^2)E - aL_{\phi}\Big] + a\Big[L_{\phi} - an^2 E \sin ^2 \theta\Big] \end{equation} \begin{equation}\label{002} \zeta^2 \dot{\phi} = \frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]~. \end{equation} In order to evaluate the geodesics for $r$ and $\theta$, we use the Hamilton-Jacobi equation, which reads \begin{equation}\label{11} \frac{\partial S}{\partial \lambda}= -\mathcal{H}=-\frac{1}{2}\Big[ g^{\mu \nu}\Big(\frac{\partial S}{\partial x^{\mu}}\Big)\Big(\frac{\partial S}{\partial x^{\nu}}\Big) + (n^2 -1)g^{00}\Big(\frac{\partial S}{\partial x^0}\Big)^2 \Big] ~. \end{equation} In order to solve the above equation, we need to choose an ansatz for $S$. Following \cite{66}, we use \begin{equation} S=-Et + L_{\phi}\phi +S_r(r) + S_{\theta}(\theta)~. \end{equation}
Replacing $S$ in eq.\eqref{11}, we have \begin{align}\label{19} -\frac{(n^2 - 1)}{\Delta}E^2 (r^2 + a^2)^2 -\frac{1}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 + (n^2-1) a^2 E^2 \sin^2 \theta -a^2E^2\cos^2 \theta \\ \nonumber + L^2 _{\phi}\cot ^2 \theta + (aE - L_{\phi})^2 +\Delta \Big(\frac{\partial S_r}{\partial r}\Big)^2 + \Big(\frac{\partial S_\theta}{\partial \theta}\Big)^2 = 0~. \end{align} Now we wish to separate the above equation in the $r$ and $\theta$ variables. By inspecting the term $n(r)^2 a^2 E^2 \sin^2 \theta$, we find that it is, in general, not possible to separate $n(r)$ from $\sin \theta$. We now investigate two cases.\\ \noindent \textbf{Case IIa}\\ First, we consider $n=n(r)=\sqrt{1-\frac{k}{r}}$. It should be noted that for spinning black holes $n=n(r, \theta)$ \cite{1aa,54}; for simplicity, however, we now consider $n=n(r)$ only. We find that with this choice eq.\eqref{19} is not separable. However, we can separate it into $r$ and $\theta$ equations by restricting to the near equatorial plane, as done in \cite{11,c}. The approximation is taken as $\theta \approx \frac{\pi}{2} + \epsilon$, where $\epsilon$ is a very small angle. It must be noted that the unstable photon orbits are not restricted to near equatorial planes; they can travel through any arbitrary plane. But for an observer at infinity, this approximation is valid and gives reliable results for the black hole shadow. Besides, since we are considering the near equatorial plane, the observer should also be placed in the equatorial plane, i.e., $\theta_0 = \frac{\pi}{2}$. This assumption modifies the geodesics, which we take into account while evaluating the celestial coordinates ($\alpha, \beta$). \noindent Choosing a near equatorial plane and setting $\theta = \frac{\pi}{2} + \epsilon$, we have eq.\eqref{19} as \begin{align}\label{3} -\frac{(n^2 - 1)}{\Delta}E^2 (r^2 + a^2)^2 -\frac{1}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 + (n^2 -1)a^2 E^2 \\ \nonumber + (aE - L_{\phi})^2 +\Delta \Big(\frac{\partial S_r}{\partial r}\Big)^2 + \Big(\frac{\partial S_\epsilon}{\partial \epsilon}\Big)^2 = 0~. \end{align} Introducing the Carter constant $\kappa$ \cite{66} as in the earlier case, we can split the equation into two parts as \begin{align} \frac{(n^2 - 1)}{\Delta}E^2 (r^2 + a^2)^2 + \frac{1}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - (n^2-1) a^2 E^2 \\ \nonumber - (aE - L_{\phi})^2 -\Delta \Big(\frac{\partial S_r}{\partial r}\Big)^2 = \Big(\frac{\partial S_\epsilon}{\partial \epsilon}\Big)^2 = \kappa~. \end{align} Finally, using \( \frac{\partial \mathcal{L}}{\partial \dot{x}^\mu} = \frac{\partial S}{\partial x^\mu} \), where \(\mathcal{L} = \frac{1}{2}g_{\mu \nu}\dot{x}^\mu \dot{x}^\nu \), we get the equation(s) for $r$ and $\epsilon$ as \begin{equation} r^2 \dot{r} = \sqrt{R(r)} \end{equation} \begin{equation} r^2 \dot{\epsilon} = \sqrt{\Theta(\epsilon)} \end{equation} where the expressions for $R(r)$ and $\Theta(\epsilon)$ take the form \begin{equation} R(r) = (n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[(n^2 -1) a^2 E^2 + (aE - L_{\phi})^2 +\kappa\Big] \end{equation} \begin{equation} \Theta(\epsilon) = \kappa~. \end{equation} Since shadows are formed by light rays moving in unstable circular orbits, we use the conditions $R(r) = \frac{\partial R(r)}{\partial r}=0$.
The conditions read \begin{equation} (n^2 - 1)(r^2 + a^2)^2 + \Big[(r^2 + a^2) - a\xi\Big]^2 = \Delta\Big[(n^2 -1) a^2 + (a- \xi)^2 +\eta\Big] \end{equation} \begin{equation} 4r(n^2 - 1)(r^2 + a^2) + 4r\Big[(r^2 + a^2) - a\xi\Big] -2nn^{'}\Delta a^2 +2nn^{'}(r^2 + a^2)^2 = \Delta^{'}\Big[(n^2 -1) a^2 + (a- \xi)^2 +\eta\Big]~. \end{equation} Eliminating $\eta$ from the above equations, we get a quadratic equation in $\xi$ as \begin{equation} A\xi^2 + 2B\xi + C = 0 \end{equation} with the expressions for $A$, $B$ and $C$ taking the form \begin{align} A = a^2 \Delta^{'}~~;~~ B = 2ar\Delta - a\Delta^{'}(r^2 + a^2)~;~~\\ \nonumber C = \Big[n^2 \Delta^{'}(r^2 + a^2)^2 -4r \Delta n^2 (r^2 + a^2) + 2nn^{'}\Delta^2 a^2 - 2(r^2 + a^2)^2 nn^{'}\Delta\Big]~. \end{align} Solving for $\xi$, we have \begin{equation} \xi = -\frac{B}{A} \pm \sqrt{\Big(\frac{B}{A}\Big)^2 - \frac{C}{A}} \end{equation} with the negative sign yielding the appropriate results. The expression for $\eta$ becomes \begin{equation} \eta = \frac{1}{\Delta}\Bigg[ (n^2 - 1)(r^2 + a^2)^2 + \Big[(r^2 + a^2) - a\xi\Big]^2 \Bigg] -(n^2 -1) a^2 - (a- \xi)^2~. \end{equation} The constants $\xi$ and $\eta$ are the quantities in terms of which the shadow radius $R_s$ is evaluated. In our case, we consider the observer to be placed in the equatorial plane, that is, $\theta_0 = \frac{\pi}{2}$. \noindent In order to determine $\alpha$ and $\beta$, we need the geodesic equations, from which we calculate $\Big(\frac{d \phi}{d r}\Big)$ and $\Big(\frac{d \epsilon}{d r}\Big)$. The geodesics for $\phi$, $\epsilon$ and $r$ have the form (considering the near equatorial plane) \begin{equation} \Big(\frac{d \phi}{d \lambda}\Big) = \frac{a}{\Delta r^2}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \frac{1}{r^2}\Big[L_{\phi} - a E \Big] \end{equation} \begin{equation} \Big(\frac{d \epsilon}{d \lambda}\Big) = \frac{\sqrt{\kappa}}{r^2} \end{equation} \begin{equation} \Big(\frac{d r}{d \lambda}\Big) = \frac{1}{r^2}\sqrt{(n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[(n^2 -1) a^2 E^2 + (aE - L_{\phi})^2 +\kappa\Big]}~. \end{equation} Utilising the above geodesics, we get the relevant equations as \begin{equation} \Big(\frac{d \phi}{d r}\Big) = \frac{\frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]}{\sqrt{(n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[(n^2 -1) a^2 E^2 + (aE - L_{\phi})^2 +\kappa\Big]}} \end{equation} \begin{equation} \Big(\frac{d \epsilon}{d r}\Big) = \sqrt{\frac{\kappa}{(n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[(n^2 -1) a^2 E^2 + (aE - L_{\phi})^2 +\kappa\Big]}}~. \end{equation} Now replacing the above relations in the expressions for $\alpha$ and $\beta$, and then taking the limit $r_0 \to \infty$, we get the celestial coordinates as \begin{equation} \alpha = -\frac{\xi}{n}~~;~~ \beta =\pm \frac{\sqrt{\eta}}{n}~. \end{equation} Plotting $\alpha$ along the X-axis vs $\beta$ along the Y-axis, we get the silhouette of the black hole shadow.\\ \noindent \textbf{Case IIb}\label{Ia}\\ Here, we consider $n(r)=$ constant $=\sqrt{1-k}$. The $t$ and $\phi$ geodesics are the same as in the previous case. In this case we find that eq.\eqref{19} is separable.
Using the Carter constant $\kappa$ \cite{66}, we have \begin{align} \Big(\frac{\partial S_\theta}{\partial \theta}\Big)^2 + (n^2-1) a^2 E^2 \sin^2 \theta -a^2E^2\cos^2 \theta + L^2 _{\phi}\cot ^2 \theta = -\Delta \Big(\frac{\partial S_r}{\partial r}\Big)^2 + \frac{(n^2 - 1)}{\Delta}E^2 (r^2 + a^2)^2 \\ \nonumber +\frac{1}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - (aE - L_{\phi})^2 = \kappa ~. \end{align} This leads to the $r$ and $\theta$ equation(s), using \( \frac{\partial \mathcal{L}}{\partial \dot{x}^\mu} = \frac{\partial S}{\partial x^\mu} \) with \(\mathcal{L} = \frac{1}{2}g_{\mu \nu}\dot{x}^\mu \dot{x}^\nu \), as \begin{equation}\label{003} \zeta^2 \dot{r} = \sqrt{R(r)} \end{equation} \begin{equation}\label{004} \zeta^2 \dot{\theta} = \sqrt{\Theta(\theta)}~. \end{equation} The expressions for $R(r)$ and $\Theta(\theta)$ take the form \begin{equation} R(r) = (n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[(aE - L_{\phi})^2 +\kappa\Big] \end{equation} \begin{equation} \Theta(\theta) = \kappa - (n^2-1) a^2 E^2 \sin^2 \theta + a^2E^2\cos^2 \theta - L^2 _{\phi}\cot ^2 \theta~. \end{equation} The black hole shadow is formed by the unstable circular orbits of photons moving around the black hole. The conditions satisfied by these rays are $R(r)=\frac{\partial R}{\partial r}=0$. Imposing these conditions, we get \begin{equation} (n^2 - 1)(r^2 + a^2)^2 + \Big[(r^2 + a^2) - a\xi\Big]^2 = \Delta\Big[\eta + (a- \xi)^2 \Big] \end{equation} \begin{equation} 4rn^2 (r^2 + a^2) -4ra\xi = \Delta^{'}\Big[\eta + (a- \xi)^2 \Big] \end{equation} where $\xi = \frac{L_{\phi}}{E}$ and $\eta = \frac{\kappa}{E^2}$ are the Chandrasekhar constants \cite{66}. Eliminating $\eta$ from the above equation(s), we get an equation in $\xi$ of the form \begin{equation} A\xi^2 + 2B\xi + C = 0 \end{equation} where $A$, $B$, $C$ have the form \begin{align} A = a^2 \Delta^{'}~~;~~ B = 2ar\Delta - a\Delta^{'}(r^2 + a^2)~;~~\\ \nonumber C = \Big[n^2 \Delta^{'}(r^2 + a^2)^2 -4r \Delta n^2 (r^2 + a^2)\Big]~. \end{align} Solving the above equation, $\xi$ becomes \begin{equation} \xi = -\frac{B}{A} \pm \sqrt{\Big(\frac{B}{A}\Big)^2 - \frac{C}{A}}~. \end{equation} The shadow can now be obtained by considering the negative sign in the above solution. Also, $\eta$ becomes \begin{equation} \eta = \frac{1}{\Delta}\Bigg[ (n^2 - 1)(r^2 + a^2)^2 + \Big[(r^2 + a^2) - a\xi\Big]^2 \Bigg] - (a- \xi)^2~. \end{equation} In order to calculate the celestial coordinates ($\alpha, \beta$), we need to make use of the geodesic equation(s) for $r$, $\theta$ and $\phi$, as given in eq.(s) \eqref{003}, \eqref{004} and \eqref{002} respectively. Using them, we get the expressions for $\frac{d\phi}{dr}$ and $\frac{d\theta}{dr}$ as \begin{equation} \Big(\frac{d \phi}{d r}\Big) = \frac{\frac{a}{\Delta}\Big[(r^2 + a^2)E - aL_{\phi}\Big] + \Big[L_{\phi}\csc^2 \theta - a E \Big]}{\sqrt{(n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[ (aE - L_{\phi})^2 +\kappa\Big]}} \end{equation} \begin{equation} \Big(\frac{d \theta}{d r}\Big) = \sqrt{\frac{\kappa - (n^2-1) a^2 E^2 \sin^2 \theta + a^2E^2\cos^2 \theta - L^2 _{\phi}\cot ^2 \theta}{(n^2 - 1)E^2 (r^2 + a^2)^2 + \Big[(r^2 + a^2)E - aL_{\phi}\Big]^2 - \Delta\Big[(aE - L_{\phi})^2 +\kappa\Big]}}~. \end{equation} Replacing the above eq.(s) in the expressions for $\alpha$ and $\beta$, we have \begin{equation} \alpha = -\frac{\xi}{n}\csc \theta_0~~;~~ \beta =\pm \frac{\sqrt{\eta - (n^2-1) a^2 \sin^2 \theta_0 + a^2\cos^2 \theta_0 - \xi^2 \cot ^2 \theta_0}}{n}~. \end{equation} The shadow can be obtained by plotting $\alpha$ along the $X$ axis and $\beta$ along the $Y$ axis. The observer is fixed at infinity ($r_0=\infty$), and the position of the observer with respect to the direction of the black hole spin ($a$) can be varied. So we consider three positions, $\theta_0 = \frac{\pi}{4}$, $\theta_0 = \frac{\pi}{3}$ and $\theta_0 = \frac{\pi}{2}$. The corresponding plots with variation in the plasma parameter $k$ are shown in the next section \ref{sec5}; a minimal numerical sketch of the boundary construction is given below.\\
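The sketch (ours, with the same caveats as the earlier listings) traces the Case IIb boundary for homogeneous plasma and can be re-run for $\theta_0=\pi/4$, $\pi/3$ and $\pi/2$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

M, a, Q, chi, k = 1.0, 0.4, 0.2, 0.2, 0.2
n = np.sqrt(1.0 - k)            # homogeneous plasma, n = const
th0 = np.pi/2                   # observer inclination; try pi/4 and pi/3 too

def Delta(r):  return r**2 + a**2 - 2*M*r + Q**2 + chi*r*np.log(r/abs(chi))
def dDelta(r): return 2*r - 2*M + chi*(np.log(r/abs(chi)) + 1)

def boundary(r):
    A = a**2*dDelta(r)
    B = 2*a*r*Delta(r) - a*dDelta(r)*(r**2 + a**2)
    C = n**2*(dDelta(r)*(r**2 + a**2)**2 - 4*r*Delta(r)*(r**2 + a**2))
    xi  = -B/A - np.sqrt((B/A)**2 - C/A)
    eta = ((n**2 - 1)*(r**2 + a**2)**2
           + ((r**2 + a**2) - a*xi)**2)/Delta(r) - (a - xi)**2
    alpha = -xi/(n*np.sin(th0))
    beta  = np.sqrt(eta - (n**2 - 1)*a**2*np.sin(th0)**2
                    + a**2*np.cos(th0)**2 - xi**2/np.tan(th0)**2)/n
    return alpha, beta

r = np.linspace(1.5, 8.0, 4000)
with np.errstate(invalid='ignore', divide='ignore'):
    al, be = boundary(r)
good = np.isfinite(be)
plt.plot(al[good], be[good], 'k'); plt.plot(al[good], -be[good], 'k')
plt.gca().set_aspect('equal'); plt.show()
\end{verbatim}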
\noindent Before ending this section, we want to mention that the general case (Case \ref{I}), in which $n$ is a function of both $r$ and $\theta$, boils down to the case $n=\sqrt{1-k}$. This can be seen as follows. Setting $f_r (r)=\omega_c ^2 r^2$ and $f_{\theta} (\theta)=\omega_c ^2 a^2 \cos^2 \theta$, we get $\omega_p (r, \theta)= \omega_c $ = constant. Now we write the Hamiltonian ($\mathcal{H}$) in the form \begin{equation} \mathcal{H}(x^{\mu}, p_{\mu})=\frac{1}{2}\Big[g^{\mu \nu}p_{\mu}p_{\nu} + \widetilde{\omega}_p(r,\theta) ^2\Big] \end{equation} where $\widetilde{\omega}_p$ is given by \begin{equation} \widetilde{\omega}_p = \frac{\omega_p}{\sqrt{-g_{00}}}=\frac{\omega_c}{\sqrt{-g_{00}}}~. \end{equation} Now the refractive index is defined as \cite{65} \begin{equation} n^2 (r, \theta)=1-\frac{ \widetilde{\omega}_p ^2}{\omega^2}= 1- \Big(\frac{\omega_c}{\omega_0}\Big)^2=1-k= \text{constant} \end{equation} where we have defined $\Big(\frac{\omega_c}{\omega_0}\Big)^2 = k$ and used $\omega=\omega_{0}/\sqrt{-g_{00}}$. Substituting $\omega_c$ in terms of $n$, we get \begin{eqnarray} \mathcal{H}(x^{\mu}, p_{\mu})&=&\frac{1}{2}\Big[g^{\mu \nu}p_{\mu}p_{\nu} -g^{00} \omega_c ^2\Big]\nonumber\\ &=&\frac{1}{2}\Big[g^{\mu \nu}p_{\mu}p_{\nu} + (n^2 -1)g^{00} \omega_0 ^2\Big]~,~n=\sqrt{1-k}~. \end{eqnarray} This is the same Hamiltonian as in Case \ref{II}, with $p_0 = -E=-\omega_0$.
\section{Impact of the spacetime, $PFDM$ and plasma parameters on the black hole shadow}\label{sec5} The motion of any particle in the black hole spacetime is influenced by the parameters describing the spacetime. The same is true for the massless photons ($m=0$) on unstable orbits, which either plunge into the black hole singularity or fly off to infinity. The photons that fly off to infinity reach the observer and form the boundary of the black hole shadow. Thus the shadow formed by the photons is impacted by the spacetime parameters. \noindent The parameters describing the spacetime are the spin ($a$) and charge ($Q$) of the black hole, the $PFDM$ parameter ($\chi$) and the plasma parameters $k$ and $\frac{\omega_c}{\omega_0}$. We now show the plots and discuss how these parameters affect the black hole shadow. \begin{figure}[H] \centering \begin{minipage}[b]{0.35\textwidth} \subfloat[\footnotesize Q = 0.2, k = 0.2, $\chi$ = 0.2 ]{\includegraphics[width=\textwidth]{Shadow_spin_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.35\textwidth} \subfloat[\footnotesize Q = 0.2, k = 0.2, $\chi$ = 1.0 ]{\includegraphics[width=\textwidth]{Shadow_spin_2.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with the spin ($a$) of the black hole. The colored plots are for different spin values: blue dotted ($a=0.1$), black ($a=0.3$), red dashed ($a=0.5$).
The plots are shown for inhomogeneous plasma with $n(r)=\sqrt{1-\frac{k}{r}}.$ } \label{2} \end{figure} \noindent In Fig.\ref{2}, we have shown the impact of the black hole spin ($a$) on the shadow. In order to do so, we have fixed the rest of the black hole parameters ($Q$ = 0.2, $\chi$ = 0.2, 1.0, $k$ = 0.2). The plots are shown for inhomogeneous plasma with $n(r)=\sqrt{1-\frac{k}{r}}.$ Previously, we mentioned that the $PFDM$ parameter $\chi$ has two ranges of values separated by $\chi_c$. The left plot is for $\chi < \chi_c$ and the right one is for $\chi > \chi_c$. The value of $\chi_c$ varies with the black hole spin $a$ at fixed charge $Q$, as can be seen in Table \ref{Fig11}. The shadow is larger for $\chi < \chi_c$, whereas it is comparatively smaller for $\chi > \chi_c$. Besides, we find that with an increase in the spin ($a$) of the black hole, the shadow gets rotated and slightly deformed. This is due to the rotational dragging experienced by the photons moving in the close vicinity of the black hole. \begin{figure}[H] \centering \begin{minipage}[b]{0.3\textwidth} \subfloat[\footnotesize a = 0.4, k = 0.2, $\chi$ = 0.2 ]{\includegraphics[width=\textwidth]{Shadow_charge_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.3\textwidth} \subfloat[\footnotesize a = 0.4, k = 0.2, $\chi$ = 1.0 ]{\includegraphics[width=\textwidth]{Shadow_charge_2.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with the charge $Q$. The colored plots are for different charge values: blue dotted ($Q=0.0$), black ($Q=0.3$), red dashed ($Q=0.6$). The plots are shown for inhomogeneous plasma with $n(r)=\sqrt{1-\frac{k}{r}}.$} \label{1} \end{figure} \noindent Fig.\ref{1} depicts the effect of the charge ($Q$) on the shadow of the black hole. As in the previous case, we have shown the two ranges of $\chi$ ($\chi < \chi_c$ and $\chi > \chi_c$). We set the rest of the black hole parameters to constant values ($a$ = 0.4, $\chi$ = 0.2, 1.0, $k$ = 0.2). The plots are shown for inhomogeneous plasma with $n(r)=\sqrt{1-\frac{k}{r}}.$ In this case too, the value of $\chi_c$ varies with the black hole charge $Q$ at fixed spin $a$, as can be seen in Table \ref{Fig11}. Here too, we find that the shadow size for $\chi < \chi_c$ is greater than that for $\chi > \chi_c$. Also, with an increase in the charge ($Q$) of the black hole, the shadow size reduces in both cases ($\chi < \chi_c$ and $\chi > \chi_c$). The reason for this observation can be assigned to the fact that the event horizon radius ($r_{h+}=M+\sqrt{M^2 - Q^2}$) without dark matter ($\chi$=0) and plasma ($k$=0) decreases with increasing charge $Q$; the same behaviour persists in the presence of $\chi$ and $k$. The black hole shadow, which is a manifestation of the event horizon, thereby decreases. The decrease in shadow size with the charge ($Q$) is non-uniform. \begin{figure}[H] \centering \begin{minipage}[b]{0.35\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, k = 0.2, $\chi < \chi_c$ ]{\includegraphics[width=\textwidth]{Shadow_DM_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.35\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, k = 0.2, $\chi > \chi_c$ ]{\includegraphics[width=\textwidth]{Shadow_DM_2.eps}} \end{minipage} \caption{ \footnotesize Variation of the black hole shadow with the perfect fluid dark matter ($PFDM$) parameter $\chi$.
The colored plots are for different values of $\chi$: blue dotted ($\chi=0.1$), black ($\chi=0.2$), red dashed ($\chi=0.3$) for the left plot and blue dotted ($\chi=1.0$), black ($\chi=1.1$), red dashed ($\chi=1.2$) for the right plot. The plots are shown for inhomogeneous plasma with $n(r)=\sqrt{1-\frac{k}{r}}$.} \label{3} \end{figure} \noindent In Fig.~\ref{3}, we show the impact of the $PFDM$ parameter ($\chi$) on the black hole shadow. The plots are shown for inhomogeneous plasma with $n(r)=\sqrt{1-\frac{k}{r}}$. From the previous analysis, we find that the outer event horizon radius ($r_{h+}$) decreases with increase in $\chi$ for $\chi < \chi_c$ and increases for $\chi > \chi_c$. Analogous results are observed for the black hole shadow. We find that for $\chi < \chi_c$, the shadow decreases non-uniformly and gets distorted with increase in $\chi$. On the other hand, for $\chi > \chi_c$, the shadow increases uniformly with increase in $\chi$, though the effect is less pronounced than for $\chi < \chi_c$. Such behavior results from the fact that $PFDM$ effectively contributes to the mass of the total system, as discussed previously. \begin{figure}[H] \centering \begin{minipage}[b]{0.3\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 0.2 ]{\includegraphics[width=\textwidth]{Omega_p_r_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.3\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 1.0 ]{\includegraphics[width=\textwidth]{Omega_p_r_2.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with $f_r (r)= \omega_c ^2 \sqrt{M^3 r} $ and $f_{\theta}(\theta) =0$ for $\chi=0.2$ (left) and $\chi=1.0$ (right). The plots are shown for $M=1$ and different values of $\Big(\frac{\omega_c}{\omega_0}\Big)^2$ with $\Big(\frac{\omega_c}{\omega_0}\Big)^2=0.0$ (blue), $\Big(\frac{\omega_c}{\omega_0}\Big)^2=1.0$ (black), $\Big(\frac{\omega_c}{\omega_0}\Big)^2=3.0$ (red dotted) and $\Big(\frac{\omega_c}{\omega_0}\Big)^2=6.0$ (orange).} \label{10a} \end{figure} \noindent In Figure \ref{10a}, we show the variation of the black hole shadow for the particular choice $f_r (r) = \omega_c ^2 \sqrt{M^3 r}$ and $f_{\theta}(\theta)=0$, with $M=1$. The plots are shown for black hole spin $a=0.4$ and charge $Q=0.2$. Also, the plots are shown with the observer in the equatorial plane, $\theta_0 = \frac{\pi}{2}$. The left one is for dark matter parameter $\chi=0.2$ and the right one for $\chi=1.0$. We have taken these two values since $\chi=0.2$ is less than $\chi_c$ whereas $\chi=1.0$ is greater than $\chi_c$ for the considered combination of spin $a$ and charge $Q$. The plots show that the shadow size decreases with increase in the plasma parameter $\omega_c$. Besides, we also observe that the shadow size is smaller for $\chi=1.0$ than for $\chi=0.2$. \begin{figure}[H] \centering \begin{minipage}[b]{0.35\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 0.2 ]{\includegraphics[width=\textwidth]{Omega_p_theta_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.35\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 1.0 ]{\includegraphics[width=\textwidth]{Omega_p_theta_2.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with $f_r (r)= 0 $ and $f_{\theta}(\theta) = \omega_c ^2 M^2 \Big(1 + 2 \sin^2 \theta\Big)$ for $\chi=0.2$ (left) and $\chi=1.0$ (right).
The plots are shown for $M=1$ and different values of $\Big(\frac{\omega_c}{\omega_0}\Big)^2$ with $\Big(\frac{\omega_c}{\omega_0}\Big)^2=0.0$ (blue), $\Big(\frac{\omega_c}{\omega_0}\Big)^2=0.4$ (black), $\Big(\frac{\omega_c}{\omega_0}\Big)^2=1.0$ (red dotted) and $\Big(\frac{\omega_c}{\omega_0}\Big)^2=1.5$ (orange).} \label{10b} \end{figure} \noindent In Figure \ref{10b}, we show the variation of the black hole shadow for $f_r (r) = 0$ and $f_{\theta}(\theta)= \omega_c ^2 M^2 \Big(1 + 2 \sin^2 \theta\Big)$, with $M=1$. The plots are shown with the spin and charge of the black hole set at $a=0.4$ and $Q=0.2$ respectively. Also, the plots are shown with the observer situated in the equatorial plane, $\theta_0 = \frac{\pi}{2}$. The left plot is for dark matter parameter $\chi=0.2$ and the right one is for $\chi=1.0$. The plots show that the shadow size once again decreases with increase in the plasma parameter $\omega_c$, as in the earlier case. \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 0.2 ]{\includegraphics[width=\textwidth]{Shadow_inhomogeneous_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 1.0 ]{\includegraphics[width=\textwidth]{Shadow_inhomogeneous_2.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with inhomogeneous plasma \Big($n = n(r)=\sqrt{1-\frac{k}{r}}$\Big). The colored plots are for different values of the plasma parameter: blue dotted ($k=0.0$), black ($k=0.2$), red dashed ($k=0.4$).} \label{5} \end{figure} \noindent We plot the effect of inhomogeneous plasma ($n = n(r)$) on the black hole shadow in Fig.~\ref{5}. We show the plots both for $\chi < \chi_c$ and $\chi > \chi_c$. We observe that the co-rotating photon radius ($r_{p1}$), which corresponds to the extreme left of the $\alpha$ axis, decreases with increase in the plasma parameter $k$. The same happens for the counter-rotating radius ($r_{p2}$), which corresponds to the extreme right of the $\alpha$ axis. The effect is the same as that obtained previously following a numerical approach. The cumulative effect of the two extreme orbits produces the unstable photon orbit which forms the black hole shadow. The effect remains identical for both $\chi < \chi_c$ and $\chi > \chi_c$. The shadow size is larger for $\chi < \chi_c$ than for $\chi > \chi_c$. \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 0.2]{\includegraphics[width=\textwidth]{Shadow_homogeneous_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ =1.0 ]{\includegraphics[width=\textwidth]{Shadow_homogeneous_2.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with homogeneous plasma ($n =\sqrt{1-k}= $constant). The colored plots are for different values of the plasma parameter: blue dotted ($k=0.0$), black ($k=0.2$), red dashed ($k=0.4$).
The plots are for $\theta_0 =\frac{\pi}{4}$.} \label{4a} \end{figure} \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 0.2]{\includegraphics[width=\textwidth]{Shadow_homogeneous_3.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ =1.0 ]{\includegraphics[width=\textwidth]{Shadow_homogeneous_4.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with homogeneous plasma ($n =\sqrt{1-k}= $constant). The colored plots are for different values of the plasma parameter: blue dotted ($k=0.0$), black ($k=0.2$), red dashed ($k=0.4$). The plots are for $\theta_0 =\frac{\pi}{3}$.} \label{4b} \end{figure} \begin{figure}[H] \centering \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ = 0.2]{\includegraphics[width=\textwidth]{Shadow_homogeneous_5.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.4\textwidth} \subfloat[\footnotesize a=0.4, Q = 0.2, $\chi$ =1.0 ]{\includegraphics[width=\textwidth]{Shadow_homogeneous_6.eps}} \end{minipage} \caption{\footnotesize Variation of the black hole shadow with homogeneous plasma ($n =\sqrt{1-k}= $constant). The colored plots are for different values of the plasma parameter: blue dotted ($k=0.0$), black ($k=0.2$), red dashed ($k=0.4$). The plots are for $\theta_0 =\frac{\pi}{2}$.} \label{4c} \end{figure} \noindent The effect of plasma is observed for both homogeneous and inhomogeneous plasma distributions. In Figs.~\ref{4a}, \ref{4b} and \ref{4c}, we show the effect of homogeneous plasma ($n$=constant) on the black hole shadow. The observation is carried out for $\theta_0 = \frac{\pi}{4}$, $\theta_0 = \frac{\pi}{3}$ and $\theta_0 = \frac{\pi}{2}$ respectively. The extreme right point on the $\alpha$ axis corresponds to the radius of the counter-rotating photon orbits \cite{5a}. On the other hand, the extreme left point on the $\alpha$ axis corresponds to the radius of the co-rotating orbits. The radius ($r_{p1}$) of the co-rotating photons is found to decrease with increase in the plasma parameter $k$, whereas that of the counter-rotating photons ($r_{p2}$) is observed to increase with increase in $k$. The same is observed for both $PFDM$ parameter values $\chi = 0.2$ and $1.0$. Also, the shadow size is larger for $\chi = 0.2$ than for $\chi = 1.0$. \section{Effective potential ($V_{eff}$)}\label{sec6} In this section, we study the effective potential ($V_{eff}$) faced by a photon moving in the black hole spacetime. The potential can have maxima or minima, which correspond to the existence of unstable or stable orbits. The conditions for a maximum or a minimum are $\frac{\partial ^2 V_{eff}}{\partial r^2}<0$ and $\frac{\partial ^2 V_{eff}}{\partial r^2}>0$ respectively. The effective potential can be obtained from the modified radial equation, which gives \begin{equation}\label{53} \dot{r}^2 + V_{eff}=E^2 \end{equation} with the effective potential given by \begin{eqnarray} V_{eff}&=&-\frac{(a^2 E^2 -L_{\phi}^2)}{r^2} - \frac{2M}{r^3}(aE - L_{\phi})^2 + \frac{Q^2}{r^4}(aE-L_{\phi})^2 + \frac{\chi}{r^3}(aE - L_{\phi})^2 \nonumber\\ && -(n^2 - 1)\Bigg[E^2 + \frac{a^2 E^2}{r^2} + a^2 E^2 \Big(\frac{2M}{r^3} - \frac{Q^2}{r^4} - \frac{\chi}{r^3}\ln{\frac{r}{|\chi|}}\Big)\Bigg]~. \end{eqnarray} The plots for the effective potential are shown below. Here we focus mainly on the dependence of the effective potential on the plasma parameter $k$.
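\noindent As an aside, the qualitative features discussed below (a maximum of $V_{eff}$ that grows and shifts with $k$) can be checked directly from the expression above. The following minimal Python sketch -- our own illustration, not part of the original analysis -- evaluates $V_{eff}$ on a radial grid for the homogeneous case $n^2=1-k$ and locates its maximum, i.e. the unstable photon orbit radius $r_p$; the parameter values match those used in the figures below.
\begin{verbatim}
import numpy as np

def v_eff(r, a=0.5, Q=0.3, chi=0.2, k=0.2, E=1.0, L=3.0, M=1.0):
    # effective potential as written above; homogeneous plasma n^2 = 1 - k
    n2 = 1.0 - k
    geo = (-(a**2 * E**2 - L**2) / r**2
           - 2.0 * M * (a * E - L)**2 / r**3
           + Q**2 * (a * E - L)**2 / r**4
           + chi * (a * E - L)**2 / r**3)
    plasma = -(n2 - 1.0) * (E**2 + a**2 * E**2 / r**2
                            + a**2 * E**2 * (2.0 * M / r**3 - Q**2 / r**4
                                             - chi * np.log(r / abs(chi)) / r**3))
    return geo + plasma

r = np.linspace(1.5, 15.0, 20000)   # radial grid outside the outer horizon
V = v_eff(r)
i = np.argmax(V)                    # maximum of V_eff: unstable photon orbit
print(f"r_p ~ {r[i]:.3f}, V_eff(r_p) ~ {V[i]:.4f}")
\end{verbatim}
Scanning $k$ with this function reproduces the trend shown in the plots: the peak value grows and its position $r_p$ moves inwards as $k$ increases.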
\begin{figure}[H] \centering \begin{minipage}[b]{0.45\textwidth} \subfloat[\footnotesize $a$=0.5, $Q$ = 0.3, $\chi$ = 0.2, $L_{\phi}$=3.0 ]{\includegraphics[width=\textwidth]{Potential_homogeneous_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.45\textwidth} \subfloat[\footnotesize $a$=0.5, $Q$ = 0.3, $\chi$ = 1.0, $L_{\phi}$=3.0 ]{\includegraphics[width=\textwidth]{Potential_homogeneous_2.eps}} \end{minipage} \caption{\footnotesize Variation of the effective potential ($V_{eff}$) for co-rotating photons with homogeneous plasma ($n = $constant$=\sqrt{1-k}$).} \label{6} \end{figure} \noindent Fig.~\ref{6} above shows the effective potential ($V_{eff}$) encountered by the photons in co-rotating orbits, with variation in the plasma parameter $k$. Here we consider the plasma distribution to be homogeneous, such that $n=\sqrt{1-k}$. The left plot is for $\chi = 0.2$ and the right one for $\chi = 1.0$. The plots are shown for $M=1$, $E=1$, $a=0.5$, $Q=0.3$, $L_{\phi}$=3.0. We find that with increase in the plasma parameter the potential increases uniformly in both cases. The potential shows a maximum which corresponds to unstable photon orbits. The maxima for $\chi=1.0$ are slightly higher than those for $\chi=0.2$. Also, we find that the position of the maximum, which gives the unstable photon radius ($r_p$), shifts slightly towards the left with increase in the plasma parameter $k$. \begin{figure}[H] \centering \begin{minipage}[b]{0.45\textwidth} \subfloat[\footnotesize $a$=0.5, $Q$ = 0.3, $\chi$ = 0.2, $L_{\phi}$=3.0 ]{\includegraphics[width=\textwidth]{Potential_inhomogeneous_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.45\textwidth} \subfloat[\footnotesize $a$=0.5, $Q$ = 0.3, $\chi$ = 1.0, $L_{\phi}$=3.0 ]{\includegraphics[width=\textwidth]{Potential_inhomogeneous_2.eps}} \end{minipage} \caption{\footnotesize Variation of the effective potential ($V_{eff}$) for co-rotating photons with inhomogeneous plasma ($n = n(r)=\sqrt{1-\frac{k}{r}}$).} \label{7} \end{figure} \noindent Fig.~\ref{7} shows the effective potential ($V_{eff}$) faced by the co-rotating photons, with variation in the plasma parameter $k$. The left plot is for $\chi = 0.2$ and the right one for $\chi = 1.0$. We consider the plasma distribution to be inhomogeneous, such that $n(r)=\sqrt{1-\frac{k}{r}}$. The plots are shown for $M=1$, $E=1$, $a=0.5$, $Q=0.3$, $L_{\phi}$=3.0. We find that with increase in the plasma parameter ($k$) the potential increases uniformly in both cases. The potential shows a maximum which corresponds to unstable photon orbits. The maxima for $\chi=1.0$ are slightly higher than those for $\chi=0.2$. Also, we find that the position of the maximum shifts towards the left with increase in the plasma parameter $k$. This implies that the radius ($r_p$) of the unstable photon orbits decreases with increase in the plasma parameter $k$, i.e., the orbits move closer to the black hole. The increase of the effective potential in both of the above cases, for homogeneous and inhomogeneous plasma, can be attributed to the fact that, due to the interaction of the photons with the plasma, the total energy and thereby the potential of the system increases. This can be seen by looking at the Hamiltonian ($\mathcal{H}$), which has an extra term due to the plasma (eq.\eqref{223}) \begin{equation} \begin{split} \mathcal{H}_{I} & =-\frac{1}{2} (n^2 -1)\Big(p_0 \sqrt{-g^{00}}\Big)^2 \\ & =\frac{1}{2}\frac{k}{r^h}\Big(p_0 \sqrt{-g^{00}}\Big)^2 ~~;~~ n=\sqrt{1-\frac{k}{r^h}}~.
\end{split} \end{equation} \begin{figure}[H] \centering \begin{minipage}[b]{0.45\textwidth} \subfloat[\footnotesize $a$=0.5, $Q$ = 0.3, $\chi$ = 0.2, $k$=0.2 ]{\includegraphics[width=\textwidth]{Potential_plasma_compare_1.eps}} \end{minipage} \hspace{1.0cm} \begin{minipage}[b]{0.45\textwidth} \subfloat[\footnotesize $a$=0.5, $Q$ = 0.3, $\chi$ = 1.0, $k$=0.2 ]{\includegraphics[width=\textwidth]{Potential_plasma_compare_2.eps}} \end{minipage} \caption{\footnotesize Variation of the effective potential ($V_{eff}$) in inhomogeneous plasma \Big($n = n(r)=\sqrt{1-\frac{k}{r}}$\Big) for $L_{\phi} >0$ and $L_{\phi}<0$. The solid line corresponds to the co-rotating and the dashed line to the counter-rotating orbit.} \label{8} \end{figure} \noindent Thus, with increase in the plasma parameter $k$, the interaction energy increases. So, by the radial equation \eqref{53}, for fixed $r$ an increase in energy increases the potential. Thus the potential of the system increases with increase in the plasma parameter $k$. The plots of $V_{eff}$ in Fig.~\ref{8} display both the co-rotating and counter-rotating orbits. The co-rotating (prograde) orbits are characterised by $E>0$ and $L_{\phi}>0$ with respect to the black hole spin $a>0$. The reverse happens for the counter-rotating (retrograde) orbits, with $E>0$ and $L_{\phi}<0$ with respect to the black hole spin $a>0$. We observe that the unstable photon orbit radii of the counter-rotating orbits ($r_{p2}$) are greater than those of the co-rotating orbits ($r_{p1}$), as can be seen from the maxima of the potential. This implies that the co-rotating orbits are closer to the black hole than the counter-rotating ones. \section{Shadow radius $R_s$ and constraints from the M$87^*$ observational data}\label{observation} In this section, we compute one of the major observables of the black hole shadow, namely the shadow radius ($R_s$). We do this by following the approach given in \cite{5b}, which considers a reference circle to estimate the shadow radius. The definition of $R_s$ reads (in terms of the celestial coordinates) \begin{eqnarray} R_s = \frac{(\alpha_t -\alpha_r)^2+\beta_t^2}{2|\alpha_r-\alpha_t|} \end{eqnarray} where the silhouette of the shadow coincides with the reference circle at three different coordinates, the top coordinate ($\alpha_t,\beta_t$), the bottom coordinate ($\alpha_b,\beta_b$) and the right coordinate ($\alpha_r,0$). It is to be noted that the shadow radius $R_s$ is related to the angular diameter of the shadow as \cite{5c} \begin{eqnarray} \theta_d = 2 \frac{R_s}{d} \end{eqnarray} where $d$ represents the distance of the M$87^*$ black hole from earth ($d=16.8$ Mpc). From the $EHT$ observations, the parameter $\theta_d$ is estimated to be $42\pm3$ $\mu as$ or, in radians, $(0.20325\pm0.0146) \times 10^{-9}$ rad \cite{2}. We now try to constrain the $PFDM$ parameter ($\frac{\chi}{M}$) and the plasma parameter ($k$) by confronting the theoretically estimated value of $\theta_d$ with the observational data. In the subsequent analysis, we set the charge parameter $\frac{Q}{M}=0$ for the sake of simplicity.\\ \noindent We first consider the Kerr limit ($\frac{\chi}{M} \rightarrow 0, k \rightarrow 0, \frac{Q}{M} \rightarrow 0$) of our black hole solution. We observe that for the variation of the spin parameter $\frac{a}{M} \in [0,1]$, the value of the angular diameter $\theta_d$ is obtained to be $\theta_d \in [0.1925\times10^{-9}, 0.1929\times10^{-9}]$ (in radians). A short numerical illustration of the conversion from the shadow boundary to $\theta_d$ is sketched below.
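\noindent The following Python sketch -- our own illustration with placeholder boundary values, not the paper's computed shadow -- evaluates $R_s$ from the top and right coordinates and converts it to $\theta_d$ for $d=16.8$ Mpc, assuming the commonly quoted mass $M \simeq 6.5\times 10^9 M_\odot$ for M$87^*$ to translate units of $M$ into kilometres.
\begin{verbatim}
import numpy as np

MPC_IN_KM = 3.0857e19            # 1 Mpc in km
GM_OVER_C2_KM = 1.4766 * 6.5e9   # GM/c^2 in km, assuming M = 6.5e9 solar masses

def shadow_radius(alpha_t, beta_t, alpha_r):
    # R_s of the reference circle through (alpha_t, beta_t) and (alpha_r, 0)
    return ((alpha_t - alpha_r)**2 + beta_t**2) / (2.0 * abs(alpha_r - alpha_t))

# placeholder boundary points in units of M (illustrative only)
alpha_t, beta_t, alpha_r = 0.0, 5.2, 5.2
Rs = shadow_radius(alpha_t, beta_t, alpha_r)   # in units of M
theta_d = 2.0 * Rs * GM_OVER_C2_KM / (16.8 * MPC_IN_KM)   # angular diameter, rad
print(f"R_s = {Rs:.3f} M, theta_d = {theta_d*1e9:.4f} x 1e-9 rad "
      f"({theta_d / 4.8481e-12:.1f} muas)")
\end{verbatim}
For the placeholder value $R_s \simeq 5.2\,M$ (close to the Schwarzschild value $3\sqrt{3}\,M$) this yields $\theta_d \approx 0.193\times 10^{-9}$ rad, consistent with the Kerr-limit range quoted above.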
This in turn means that all values of the spin parameter $\frac{a}{M}$ produce angular diameters ($\theta_d$) that are allowed from the observational point of view.\\ Now, incorporating the homogeneous plasma background ($h=0$) with $\frac{Q}{M}=0$, $\frac{\chi}{M}=0$, we try to constrain the value of the plasma parameter $k$ for various values of $\frac{a}{M}$. We obtain the possible range of $k$ from the observational constraint $\theta_d \in (0.20325\pm0.0146) \times 10^{-9}$ radian. The results are displayed in Table~\ref{TabNew1}.\\ \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{}\\ \hline $\frac{a}{M}$ & $k_{lower}$ & $k_{upper}$ \\ \hline 0.1 & 0.0 & 0.976\\ \hline 0.2 & 0.0 & 0.909\\ \hline 0.3 & 0.0 & 0.805\\ \hline 0.4 & 0.0 & 0.677\\ \hline 0.5 & 0.0 & 0.536\\ \hline 0.6 & 0.0 & 0.396\\ \hline 0.7 & 0.0 & 0.266\\ \hline 0.8 & 0.0 & 0.154\\ \hline 0.9 & 0.0 & 0.067\\ \hline 1.0 & 0.0 & 0.020\\ \hline \end{tabular} \caption{\footnotesize The results show the upper bound on the value of the plasma parameter $k$ ($0 \leq k \leq k_{upper}$) at a fixed value of the spin parameter, which results in an allowed value of $\theta_d$. (We set $\frac{\chi}{M} = 0, \frac{Q}{M} = 0$.)} \label{TabNew1} \end{table} \noindent From Table~\ref{TabNew1} we note the range of allowed values of the plasma parameter $k$ at different values of the spin parameter $\frac{a}{M}$. It is to be noted that with increasing value of the spin parameter $\frac{a}{M}$, the allowed range for the plasma parameter $k$ decreases. Further, we observe that when we consider the inhomogeneous plasma background ($h=1$), the resulting value of $\theta_d$ lies outside the estimated range of $\theta_d$. This is true for every value of the spin parameter $\frac{a}{M}$. Next we consider the $PFDM$ black hole solution (with $\frac{Q}{M}\rightarrow 0$) in the homogeneous plasma background. The following Table presents our observations. \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{}\\ \hline $\frac{\chi}{M}$ & $k_{lower}$ & $k_{upper}$ \\ \hline 0.010 & 0.000 & 0.904\\ \hline 0.015 & 0.000 & 0.903\\ \hline 0.020 & 0.770 & 0.901\\ \hline 0.025 & 0.875 & 0.899\\ \hline \end{tabular} \caption{\footnotesize The results show the lower and upper values of the plasma parameter $k$ at a fixed value of the spin parameter, in the presence of the $PFDM$ parameter. (We set $ \frac{Q}{M} = 0$ and $\frac{a}{M}=0.2$.)} \label{TabNew2} \end{table} \noindent From Table~\ref{TabNew2}, we note that the allowed range of values for the $PFDM$ parameter is $\frac{\chi}{M} \in [0, 0.025]$. We also observe the allowed range of values for $k$ corresponding to a fixed value of the $PFDM$ parameter. \noindent Next we consider the general case of the plasma frequency $\omega_p (r, \theta)$. In this case, the plasma frequency is given by $\omega_p ^2 (r, \theta) = \frac{f_r (r) + f_{\theta} (\theta)}{r^2 + a^2 \cos^2 \theta}$. We wish to constrain the $PFDM$ parameter ($\frac{\chi}{M}$) and the plasma parameter $\Big(\frac{\omega_c}{\omega_0}\Big)^2$ from the observed value of the angular diameter of the shadow ($\theta_d$). We consider the two cases studied under Case \ref{I}. The first one is $f_r(r)=\omega_c ^2 \sqrt{r}$ and $f_{\theta} (\theta)=0$, and the other one is $f_r(r)=0$ and $f_{\theta} (\theta)=\omega_c ^2 (1 + 2 \sin^2 \theta)$. The Tables below show the corresponding ranges of $\frac{\chi}{M}$ compatible with the ranges of $\Big(\frac{\omega_c}{\omega_0}\Big)^2$.
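\noindent Bounds of the kind quoted in these Tables can be obtained by a simple scan: one tabulates $\theta_d$ over a grid of the plasma parameter and keeps the values falling inside the observational band. The Python sketch below is a minimal illustration of this procedure; \texttt{theta\_d\_toy} is a stand-in with the correct qualitative (monotonically decreasing) trend and is \textit{not} the full shadow computation used for the actual Tables.
\begin{verbatim}
import numpy as np

THETA_LO = (0.20325 - 0.0146) * 1e-9   # lower edge of the EHT band, rad
THETA_HI = (0.20325 + 0.0146) * 1e-9   # upper edge of the EHT band, rad

def theta_d_toy(k):
    # toy stand-in for theta_d(k); the real function comes from the
    # (alpha, beta) shadow boundary at given spin, charge and chi
    return (0.205 - 0.02 * k) * 1e-9

def allowed_k_range(theta_d, k_grid=np.linspace(0.0, 1.0, 2001)):
    # scan k and keep the values for which theta_d(k) lies inside the band
    inside = [k for k in k_grid if THETA_LO <= theta_d(k) <= THETA_HI]
    return (min(inside), max(inside)) if inside else None

print(allowed_k_range(theta_d_toy))   # -> (k_lower, k_upper) for the toy model
\end{verbatim}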
\begin{table}[H] \centering \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{}\\ \hline $\frac{\chi}{M}$ & $\Big(\frac{\omega_c}{\omega_0}\Big)_{lower} ^2$ & $\Big(\frac{\omega_c}{\omega_0}\Big)_{upper} ^2$ \\ \hline 0.001 & 0.0 & 0.489\\ \hline 0.002 & 0.0 & 0.387\\ \hline 0.003 & 0.0 & 0.293\\ \hline 0.004 & 0.0 & 0.205\\ \hline 0.005 & 0.0 & 0.121\\ \hline 0.006 & 0.0 & 0.039\\ \hline \end{tabular} \caption{\footnotesize Table showing the accessible range of $\Big(\frac{\omega_c}{\omega_0}\Big) ^2$ with $f_r (r)=\omega_c ^2 \sqrt{r}$ and $f_{\theta} (\theta)=0$ for various values of $\frac{\chi}{M}$. The results show the upper bound on the value of the plasma parameter $\Big(\frac{\omega_c}{\omega_0}\Big) ^2$ at a fixed value of the spin parameter ($\frac{a}{M}=0.5$) which results in an allowed value of $\theta_d$. (We set $\frac{Q}{M} = 0$.)} \label{TabNew3} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|c|} \multicolumn{3}{c}{}\\ \hline $\frac{\chi}{M}$ & $\Big(\frac{\omega_c}{\omega_0}\Big)_{lower} ^2$ & $\Big(\frac{\omega_c}{\omega_0}\Big)_{upper} ^2$ \\ \hline 0.001 & 0.0 & 0.282\\ \hline 0.002 & 0.0 & 0.223\\ \hline 0.003 & 0.0 & 0.168\\ \hline 0.004 & 0.0 & 0.117\\ \hline 0.005 & 0.0 & 0.069\\ \hline 0.006 & 0.0 & 0.022\\ \hline \end{tabular} \caption{\footnotesize Table showing the accessible range of $\Big(\frac{\omega_c}{\omega_0}\Big) ^2$ with $f_r (r)=0$ and $f_{\theta} (\theta)=\omega_c ^2 (1+ 2 \sin^2 \theta)$ for various values of $\frac{\chi}{M}$. The results show the upper bound on the value of the plasma parameter $\Big(\frac{\omega_c}{\omega_0}\Big) ^2$ at a fixed value of the spin parameter ($\frac{a}{M}=0.5$) which results in an allowed value of $\theta_d$. (We set $\frac{Q}{M} = 0$.)} \label{TabNew4} \end{table} \noindent The above Tables \ref{TabNew3} and \ref{TabNew4} show that the allowed range of the $PFDM$ parameter is $\frac{\chi}{M} \in [0, 0.006]$, which implies the presence of a very small amount of dark matter in the vicinity of the black hole. The Tables also indicate that dark matter and plasma can coexist, as inferred from the observational range of the black hole shadow $\theta_d$. Besides, we observe that with the increase of the $PFDM$ parameter $\frac{\chi}{M}$ from 0 to 0.006, the allowed range of the plasma parameter $\Big(\frac{\omega_c}{\omega_0}\Big) ^2$ gradually decreases, and it vanishes from $\frac{\chi}{M}=0.007$ onwards. \section{Summary and conclusion}\label{sec7} Having gone through the analysis in detail, we now summarise our results. We consider a charged rotating black hole surrounded by perfect fluid dark matter ($PFDM$). We also immerse the system in plasma and consider no interaction between the plasma and the dark matter. \noindent We observe some unique characteristics of the black hole spacetime due to the presence of $PFDM$. From the analysis of the function $\Delta(r)$, we find that the outer event horizon ($r_{h+}$) decreases with increase in the $PFDM$ parameter ($\chi$). The decrease proceeds as long as $\chi<\chi_c$. At this point and beyond, we observe a reversed behavior where $r_{h+}$ increases with further increase in $\chi$. The same behaviour gets reflected in the observed black hole shadow. The reason for such a fascinating observation can be attributed to the contribution of $PFDM$ to the mass of the black hole system. Here, we have two masses, $M$ for the original black hole and $M_0$ for the black hole corresponding to $PFDM$. As long as $\chi<\chi_c$, the black hole mass is $M$ and gets inhibited by $M_0$.
But just at the point $\chi=\chi_c$, the total mass is given by $M_0$, beyond which the system mass increases with further increase in $\chi$. Since the mass has a major effect in determining the event horizon of the black hole, and thereby the shadow, we obtain such results. \noindent Then we move on to analyse the motion of null particles around the black hole. These particles can rotate along the direction of the black hole spin (co-rotating) or opposite to it (counter-rotating). We observed that the co-rotating orbits lie close to the black hole, whereas the counter-rotating ones remain farther away from the black hole. We also find their dependence on the plasma parameter. The effect of plasma is independent of the influence of dark matter. We find that for a homogeneous plasma distribution \Big($n=\sqrt{1-k}$\Big), the radius of the co-rotating orbits decreases whereas that of the counter-rotating orbits increases with increase in the plasma parameter $k$. For an inhomogeneous plasma distribution \Big($n=\sqrt{1-\frac{k}{r}}$\Big), on the other hand, an increase in $k$ results in a decrease of the photon radius for both co-rotating and counter-rotating orbits. \noindent Then we obtain the null geodesics responsible for the formation of the black hole shadow. With the help of the geodesic equation(s), we obtain the celestial coordinates ($\alpha, \beta$). These two coordinates give the black hole shadow radius ($R_s$) as $R_s ^2 = \alpha^2 + \beta^2$. The shadow gets formed in the celestial plane (the $\alpha - \beta$ plane). We plot the shadow and study the dependence of the black hole shadow on the black hole parameters ($a$, $Q$, $\chi$, $k$). From the plots, we find that the shadow gets rotated and deformed with increase in the black hole spin ($a$). The deformation of the black hole shadow occurs due to the rotational drag of the unstable photons by the black hole. We also observe that an increase in the charge ($Q$) reduces the radius ($R_s$) and thereby the size of the black hole shadow. The reason for this is straightforward. The black hole shadow is the image of the outer event horizon of the black hole. The radius of the outer event horizon is given as $r_{h+} = M + \sqrt{M^2 - Q^2}$ in the absence of $\chi$ and $k$. With increase in $Q$, $r_{h+}$ decreases and thereby $R_s$ decreases. This ultimately reduces the size of the black hole shadow. We analysed this for an inhomogeneous plasma distribution with $n(r)=\sqrt{1-\frac{k}{r}}$. \noindent Next we analyse the effect of the plasma medium on the black hole shadow. The shadow is formed by the light rays encircling the black hole in unstable photon orbits. If these light rays move through vacuum, they suffer no deviation. But if they move through a plasma medium, they get deviated due to the variation in frequency. The black hole shadow is formed in the celestial plane with coordinates $\alpha$ and $\beta$. We considered the general case where the refractive index ($n(r, \theta)$) and the plasma frequency ($\omega_p (r, \theta)$) both depend on $r$ and $\theta$. We have considered a different value for the spin, $a=0.4$, in contrast to the value $a=0.999$ (extremal case) considered in \cite{54}. We found that with only radial variation, $f_r (r) = \omega_c ^2 \sqrt{M^3 r}$ and $f_{\theta}(\theta)=0$, the shadow plots reduce in size with increase in the plasma parameter $\omega_c$.
Similar behavior is observed for the case of $\theta$ variation, with $f_r (r) = 0$ and $f_{\theta}(\theta)= \omega_c ^2 M^2 \Big(1 + 2 \sin^2 \theta\Big)$: the shadow size decreases with increase of the plasma parameter $\omega_c$. \noindent Next we consider cases with a refractive index of the form \Big($n=\sqrt{1-\frac{k}{r}}$\Big) (inhomogeneous) and \Big($n=\sqrt{1-k}$\Big) (homogeneous). Analysing the plots, we find that the extreme right of the $\alpha$ axis corresponds to the radius of the counter-rotating orbits ($r_{p2}$) whereas the extreme left corresponds to the co-rotating orbits ($r_{p1}$). The variation of the radius of these orbits with plasma gets reflected in the black hole shadow. In particular, we observe that $r_{p1}$ decreases with increase in the plasma parameter at a fixed value of the $PFDM$ parameter for both inhomogeneous and homogeneous plasma. However, $r_{p2}$ decreases with increase in the plasma parameter ($k$) at a fixed $PFDM$ parameter ($\chi$) for inhomogeneous plasma, whereas it increases with $k$ for homogeneous plasma. \noindent We have also analysed the effective potential ($V_{eff}$) and have found that it depends significantly on the plasma parameter ($k$). The maxima of the potential ($V_{eff}$) correspond to the radii of the unstable photon orbits. These maxima shift towards the left with increasing $k$ for both homogeneous and inhomogeneous plasma distributions. Besides, we also find that the peak of the effective potential increases with increase in the plasma parameter $k$. The reason for such an increase is that, due to the presence of plasma, the interaction energy of the total system increases and hence the potential ($V_{eff}$) of the system increases.\\ Finally, we compute the shadow radius $R_s$ and the angular diameter $\theta_d$ of the shadow. From our theoretical results, we constrain the plasma parameters $k$ and $\Big(\frac{\omega_c}{\omega_0}\Big) ^2$ together with the $PFDM$ parameter $\frac{\chi}{M}$ by comparing the obtained values of $\theta_d$ with those observed from the M$87^*$ supermassive black hole data. \section*{Acknowledgments} A.D. would like to acknowledge the support of S.N. Bose National Centre for Basic Sciences for Senior Research Fellowship. A.S. acknowledges the financial support by the Council of Scientific and Industrial Research (CSIR, Govt. of India). The authors would also like to thank the referees for useful comments.
\section{Introduction} \label{sec:intro} Heavy quarkonia, the bound states of a heavy quark and anti-quark, are a unique laboratory of the strong interactions. At the same time they constitute a central tool in the investigation of the primordial state of matter created in relativistic heavy-ion collisions, the quark-gluon plasma. In turn, elucidating the properties of these strongly interacting bound states in extreme conditions remains a central focus of experimental and theoretical research (see Refs.~\cite{Rothkopf:2019ipj,Aarts:2016hap,Mocsy:2013syh,Bazavov:2009us} for reviews). Interest in heavy quarkonium in relativistic heavy-ion collisions erupted with the seminal paper by Matsui and Satz \cite{Matsui:1986dk}. Their paper put forward two key ideas: on the one hand, it argues that the formation of the deconfined medium in heavy-ion collisions will interfere with the binding of the heavy quarks through color screening and thus prevent the formation of a bound state. The second idea states that such an absence of bound states in the medium will lead to a suppression of quarkonium yields. In the case that only a few heavy quark pairs are produced in a heavy-ion collision, the suppression Matsui and Satz envisioned has been clearly established by experiment, for both charmonium at RHIC and bottomonium at LHC. At increasing energies, where a wealth of heavy quarks may be produced in the initial state, it has been observed that quarkonium yields, in particular charmonium at LHC, can be replenished. This phenomenon is attributed to recombination. The question of whether or how color screening (and in general the interactions with a (non-)thermal medium) affects the survival of heavy quarkonium states remains an open research question. Using lattice QCD simulations it has so far been established that the free energy of a static quark anti-quark pair is indeed screened at large separations (see e.g. ref.~\cite{Bazavov:2020teh} for a recent review). The most recent lattice analysis of the static $Q \bar Q$ free energy shows that the interactions are screened at distances larger than $0.4/T$ \cite{Bazavov:2018wmo}. Matsui and Satz took the idea of static color screening and applied it to quarkonium with finite mass constituents. In fact, their idea of melting from color screening relies on a non-relativistic potential picture of quarkonium binding. At zero temperature such a potential picture has been highly successful in describing the phenomenology of the ground and excited states below the open heavy flavor threshold (see e.g. Ref. \cite{Brambilla:2004wf}). The lattice calculations of quarkonium Bethe-Salpeter amplitudes at zero temperature are also consistent with the potential model \cite{Kawanai:2011xb,Kawanai:2011jt,Kawanai:2013aca,Nochi:2016wqg,Larsen:2020rjk}. The past two decades have seen significant progress in our understanding of heavy quarkonium systems based on the concept of effective field theory. Such a systematic approximation of QCD allows us to clarify the concept of a potential in the context of heavy (but not static) quarks. At zero temperature there exist three distinct energy scales: $M \gg M v \gg M v^2$, with $M$ being the heavy quark mass and $v$ the relative velocity of the heavy quarks inside the bound state.
By focusing on physical processes involving energies smaller than $M$, we may cast the description of the quark anti-quark pair in terms of non-relativistic Pauli spinors (pair creation at the scale $M$ is not explicitly treated but remains present as a four-fermi interaction). This process of \textit{integrating out} the so-called hard scale $M$ leads to the theory of non-relativistic QCD (NRQCD) \cite{Caswell:1985ui}, which is valid at scales up to $M v$. We may further restrict our focus to e.g.~the binding properties of the heavy quark antiquark pair at the ultrasoft scale $Mv^2$, which leads (as long as the same degrees of freedom as in NRQCD can be identified) to a theory of color singlet $S$ and octet $O$ wavefunctions, called potential NRQCD or pNRQCD for short \cite{Brambilla:1999xf}. The Lagrangian of pNRQCD has the form \begin{widetext} \begin{align*} \nonumber {\cal L}_{\rm pNRQCD}=\int d^3\mathbf{r} {\rm Tr}\Big[& S^\dagger \big[ i\partial_0 - \big( \frac{\mathbf{D}^2}{2 M} + V_S^{(0)} + \frac{V_S^{(1)}}{m_Q} + \ldots \big) \big]S + O^\dagger \big[ iD_0 - \big( \frac{\mathbf{D}^2}{2 M} + V_O^{(0)} + \frac{V_O^{(1)}}{m_Q} +\ldots \big) \big]O\Big]\\ +&V_A(r){\rm Tr}\Big[ O^\dagger \mathbf{r} g \mathbf{E} S + S^\dagger \mathbf{r} g \mathbf{E} O \Big]+V_B(r){\rm Tr}\Big[ O^\dagger \mathbf{r} g \mathbf{E} O + O^\dagger O\mathbf{r} g \mathbf{E} \Big] +{\cal O}\big(r^2,\frac{1}{m_Q^2}\big)+{\cal L}_{light~quarks,gluons}\label{eq:pNRQCDcont} \end{align*} \end{widetext} The singlet and octet fields depend on the label $\mathbf{r}$ that corresponds to the distance between the heavy quark and anti-quark, and the ultrasoft gluon fields depend on the center of mass coordinate $\mathbf{R}$ of the heavy $Q\bar Q$ pair. This theory of pNRQCD puts the ideas of previous non-relativistic potential models on a more solid footing. Its Lagrangian tells us that the propagation of the color singlet and octet degrees of freedom depends on two mechanisms. The first is encoded in the Schr\"odinger-like part of the Lagrangian (top line), in which the physics of the integrated-out gluons appears as Wilson coefficients in the form of time-independent potential terms $V$. Note that the static potentials $V^{(0)}_{S/O}$ are but the first terms in a systematic expansion in powers of the inverse rest mass. In contrast to the naive potential model, the presence of ultrasoft gluons (on the scale $Mv^2$) introduces transitions between the singlet and octet wave functions, due to the Wilson coefficients $V_A$ and $V_B$. These transitions in general cannot be summarized in terms of a simple potential and are referred to as non-potential effects (see also the discussion in ref.~\cite{Rothkopf:2019ipj}). The potential picture thus describes only the lowest order (tree level) of pNRQCD. Even if we are interested in the static potential $V^{(0)}_S$, we therefore have to take care to distinguish between the concept of a potential (a Wilson coefficient of pNRQCD) and the static energy of the quark-antiquark pair (which refers to the energy of the lowest lying excitation in its spectrum). In some instances, depending on the specific separation of scales and the level of coarse graining in time, the terms referred to above as non-potential terms may be absorbed into additional time-independent potential terms, allowing for a simple potential description based on a Schr\"odinger equation. E.g. it has been shown that in vacuum, if $M v^2 \ll \Lambda_{QCD}$, the static energy and the potential agree, i.e.
the static energy accessible from lattice QCD correlation functions can be used as a potential \cite{Brambilla:1999xf}. The situation at finite temperature is much more involved, as additional energy scales come into play. These are related to the thermal medium and exhibit the hierarchy $T \gg m_D \sim gT \gg g^2 T$ at weak coupling. The thermal physics may influence both the potential and the non-potential contributions to pNRQCD. In the context of pNRQCD, one may expect the medium to modify the potential, but again, this only holds true for particular scale hierarchies. E.g. when considering deeply bound quarkonium states with very small spatial extent, the real part of the potential relevant for their physics remains effectively Coulombic. In some scale hierarchies the physics of the singlet and octet transitions may be summarized in an additional contribution to the potential, leading e.g. to the emergence of an imaginary part \cite{Laine:2006ns,Brambilla:2008cx,Beraudo:2007ky} related to dissipative effects in the medium. The static in-medium potential has been studied non-perturbatively on the lattice via spectral function reconstruction and model spectral function fits. Based on the Bayesian BR method \cite{Burnier:2013nla} for spectral reconstruction, the static potential has so far been investigated in quenched QCD \cite{Rothkopf:2011db,Burnier:2016mxc} and in full QCD simulations based on the legacy asqtad action \cite{Burnier:2014ssa,Burnier:2015tda}. Recently, an HTL-motivated decomposition of the APE-smeared Wilson loop into symmetric and anti-symmetric parts has also been used to extract the thermal potential in the quenched approximation \cite{Bala:2019cqu}. These studies concluded that the real part of the potential eventually becomes screened in the deconfined phase and have identified hints for the existence of an imaginary part once one simulates above the crossover temperature. Concurrently, the potential has been extracted by fitting modified HTL spectral functions to Euclidean correlators in \cite{Bazavov:2014kva} and by deploying a skewed or non-skewed Lorentzian fit in \cite{Petreczky:2017aiz}. In both cases values for the real part were obtained that are significantly larger than those extracted via the direct spectral function reconstruction, lying closer to the $T=0$ results. Concurrent to the development of the EFT approach, the past five years have seen rapid progress in understanding the dynamical evolution of heavy quarkonium in the context of open quantum systems (see \cite{Brambilla:2016wgg,Brambilla:2017zei,Brambilla:2019tpt,Brambilla:2020qwo,Rothkopf:2019ipj,Akamatsu:2020ypb} for recent reviews). In particular, the role of the imaginary part of the potential has been elucidated and its relation to wavefunction decoherence \cite{Kajimoto:2017rel,Miura:2019ssi} highlighted. It has been shown how a separation of energy scales is connected to a separation of time scales. Using different scale separation scenarios (and different time coarse graining prescriptions), various so-called master equations for the real-time evolution of the reduced density matrix of heavy quarkonium in a medium have been derived, revealing e.g. the subtle interplay between screening and decoherence in a hot QCD medium. One central goal, both in the EFT and the open-quantum-systems communities, is to go beyond the weak coupling considerations on which many of the arguments related to scale separations are anchored.
In order to make progress e.g.~in the phenomenologically relevant temperature regime just above the QCD crossover transition, it is therefore necessary to explore whether a potential picture can be established non-perturbatively and, if so, what the functional form of such a potential is. As a starting point we therefore set out in this study to investigate the interactions of static quark-antiquark pairs at $T>0$ using realistic state-of-the-art lattice QCD calculations. To this end, in \cref{sec:gencon} we will present general considerations on the real-time dynamics of static color sources and their study from Euclidean lattice simulations. The first part of our study is presented in \cref{sec:latcorr}, where after discussing the lattice setup in \cref{sec:setup}, we investigate the lowest three cumulants of the correlation function in \cref{sec:corrmom}, and compare them in \cref{sec:corrHTLcmp} to predictions from hard thermal loop perturbation theory (HTL). In \cref{sec:spectra} we present the investigation of the underlying spectral structure of the correlators using four different methods: spectral model fits (\cref{sec:potfit}), the HTL-motivated approach (\cref{sec:BalaDatta}), Pade rational approximations (\cref{sec:Pade}) and the Bayesian BR method (\cref{sec:Bayes}). We conclude with a discussion in \cref{sec:conclusion}. \section{General considerations} \label{sec:gencon} In order to connect the EFT description of quarkonium to QCD we have to carry out a matching procedure. I.e. correlation functions with the same physics content in both languages need to be identified. Demanding that their values agree at a certain matching scale allows us to fix the Wilson coefficients of the effective theory. In the static limit $m_Q\to\infty$, it has been shown that the Wilson loop is the appropriate QCD quantity which we can identify with the unequal time correlation function of two color singlet fields in pNRQCD \cite{Brambilla:1999xf}. The matching condition at the leading order in the multipole expansion reads \cite{Brambilla:1999xf}: \begin{align} W_\square(r,t,T) &= \langle {\rm exp}[ig \int_\square dz^\mu A_\mu]\rangle_{\rm QCD}\\ \nonumber &{\equiv} \langle S(r,0)S^\dagger(r,t) \rangle_{\rm pNRQCD}. \end{align} The Wilson loop in QCD itself emerges self-consistently from the static limit of the retarded $Q\bar{Q}$ meson correlator. By matching with different quantities related to the singlet and octet sector, the ultimate goal here lies in identifying individually the potential ($V_S$,$V_O$) and non-potential contributions ($V_A$,$V_B$,$\ldots$) that govern the Wilson loop evolution in Minkowski time. Let us focus on the singlet sector. Instead of studying the evolution of $W_\square(r,t)$ in the real-time domain, it is advantageous to go over to its Fourier transform \begin{align} \rho_r(\omega,T)=\int dt W_\square(r,t,T) e^{-i\omega t}.\label{eq:RTwilsonspecdec} \end{align} This Fourier transform, as shown in \cite{Rothkopf:2009pk}, also coincides with the positive definite spectral function of the Wilson loop. This fact is relevant, as in Euclidean lattice simulations we do not have direct access to the real-time Wilson loop, but can exploit its spectral function as a bridge between the imaginary- and real-time domains.
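As an illustration of this bridge, the following Python sketch -- our own toy example, not part of the analysis below -- takes a model spectral function consisting of a single Gaussian peak of position $\Omega$ and width $\Gamma$, performs the Laplace transform to a Euclidean correlator as in the relation given below, and verifies that the resulting effective mass $-\partial_\tau \ln W$ decreases linearly from the intercept $\Omega$ with slope $-\Gamma^2$; this is exactly the pattern exploited in the cumulant analysis later on.
\begin{verbatim}
import numpy as np

Omega, Gamma = 2.0, 0.3                       # toy peak position and width

def rho(omega):
    # model spectral function: a single Gaussian peak (toy units)
    return np.exp(-(omega - Omega)**2 / (2.0 * Gamma**2))

omega, dw = np.linspace(Omega - 8*Gamma, Omega + 8*Gamma, 4001, retstep=True)
taus = np.linspace(0.05, 1.0, 20)

# W(tau) = int domega exp(-omega*tau) rho(omega), discretized as a Riemann sum
W = np.array([np.sum(np.exp(-omega * t) * rho(omega)) * dw for t in taus])

m1 = -np.gradient(np.log(W), taus)            # effective mass -d ln W / d tau
# analytically m1 = Omega - Gamma^2 * tau for this Gaussian model
print(np.allclose(m1, Omega - Gamma**2 * taus, atol=1e-3))
\end{verbatim}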
The Euclidean Wilson loop, which we can simulate on the lattice, has a spectral decomposition housing the same spectral function as in \cref{eq:RTwilsonspecdec}, which here is related to the lattice observable by a Laplace transform \begin{align} W_\square(r,\tau,T)=\int d\omega e^{-\omega \tau} \rho_r(\omega,T). \label{eq:ETwilsonspecdec} \end{align} We may thus gain insight into the real-time evolution of the Wilson loop by studying the spectral function encoded in its Euclidean counterpart. The inversion of \cref{eq:ETwilsonspecdec} however constitutes an ill-posed inverse problem, which we will attack with four different and complementary numerical strategies in \cref{sec:spectra}. At zero temperature in a finite volume, the spectral function consists of a ground state (lowest lying) delta peak separated by an energy gap from many excited state delta peaks (hybrid potential, static-light mesons etc.). In the infinite volume limit some of these excited state contributions form a continuum. The excited state contributions will be seen as deviations from a single exponential behavior of the correlator at small $\tau$. Performing a spectral decomposition of the non-zero temperature Euclidean time correlator in \cref{eq:ETwilsonspecdec} in a finite volume, by inserting a complete set of energy eigenstates, one can see that in addition to the ground state delta peak additional peaks in its proximity will appear. This is shown in Appendix \ref{app:spec_decomp}. The coefficients of these additional delta functions are proportional to Boltzmann factors and, therefore, their relative weight will increase with increasing temperature. Thus, we will see a broadening of the zero temperature ground state peak. At finite temperature there will also be some additional peaks in the $\omega$ region corresponding to excited states, but these will not change the overall shape of the spectral function significantly, because of the already large density of states. Therefore, any possible modifications in that region should not have a significant effect on the Euclidean correlator. Thus the most interesting part of the finite temperature spectral function is the position, $\Omega(r,T)$, and the effective width, $\Gamma(r,T)$, of this dominant broadened peak. Furthermore, as also shown in Appendix \ref{app:spec_decomp}, the finite temperature spectral function can be non-zero even for $\omega \ll \Omega(r,T)$. We call this part of the spectral function the low energy tail. Thus we expect that the spectral function of a static $Q \bar Q$ pair should consist of a ground state peak, a high energy part, which to a good approximation is temperature independent, and the low energy tail. The goal of this study is modest. Using for the first time finite temperature lattices with realistic pion masses, we set out to elucidate the lowest lying peak in the spectral function. We will refrain from making a quantitative connection of peak structures in the spectral functions to Wilson coefficients ($V_S$,$V_O$,$V_A$,$V_B$,$\ldots$) and solely attempt to constrain the values of $\Omega(r,T)$ and $\Gamma(r,T)$ as reliably as possible, given the currently available lattice data. We also note that the overall form of the spectral function depends on the choice of our static meson operator, i.e. on the choice of the spatial part of the Wilson loop. As mentioned above, on the lattice at finite volume, the spectral function consists of a sum of delta peaks. Choosing between e.g.
the Wilson loop with straight spatial lines, with deformed spatial lines, smearing the links from which to build the Wilson loop, or taking instead Wilson line correlators in a particular gauge, such as Coulomb gauge, will change the amplitudes of the peaks in the spectral function but not their position. At $T=0$, where one encounters well separated peaks and only their position is of interest, the tuning of operators is a common procedure to optimize the signal to noise ratio in the determination of these peak positions (see also the discussion in \cite{Jahn:2004qr,Bazavov:2008rw}). At finite temperature, where multiple peaks may congregate around a dominant central value, the changes introduced in the envelope of amplitudes by modifying the operator are less straightforward to predict. However, the position of the dominant peak and its width should be largely independent of the choice of the static meson operator. Here we may gain some intuition e.g. from HTL perturbation theory. It was shown that at leading order of HTL perturbation theory the central position of the lowest lying spectral peak remains unaffected by the choice of either considering the Wilson loop or the Wilson line correlator in Coulomb gauge \cite{Burnier:2013fca}. At the same time a clear difference was found in the structures surrounding the lowest lying peak. In quenched QCD an example has been given in Ref.~\cite{Rothkopf:2019ipj}: while the overall values of the Wilson line correlator are gauge dependent, its slope at intermediate imaginary time (thus corresponding to the position of its dominant lowest lying spectral peak) is virtually unaffected by the gauge transformation. I.e. there are indications that the properties of the lowest lying spectral structure may be extracted in an operator-independent fashion from Euclidean correlators. On the other hand, the high energy part of the spectral function seems to depend strongly on the choice of operator. The same is true for the low energy tail of the spectral function at non-zero temperature, see Appendix \ref{app:spec_decomp}. \section{Study of the lattice correlation function} \label{sec:latcorr} \begin{figure} \centering \includegraphics[width=8cm]{meff_diffNt_b7825_ra6-eps-converted-to.pdf} \caption{The first cumulant calculated at $r=0.24$ fm for $\beta=7.825$ and $N_{\tau}=64,~16,~12$ and $10$, corresponding to $T\simeq 0,~306,~408$ and $489$ MeV, respectively. The filled symbols correspond to the subtracted correlator, while the open symbols to the unsubtracted correlator, see text.} \label{fig:demo_m1} \end{figure} \begin{figure*} \includegraphics[width=8.4cm]{meff_Nt12_diffb_ra3-eps-converted-to.pdf} \includegraphics[width=8.4cm]{meff_Nt12_diffb_ra6-eps-converted-to.pdf} \caption{The first cumulant as a function of $\tau$ obtained on $N_{\tau}=12$ lattices for $rT=1/4$ (left) and $rT=1/2$ (right) at different temperatures.} \label{fig:demo_m1_ext} \end{figure*} \subsection{Lattice setup} \label{sec:setup} We performed calculations of Wilson loops and correlators of Wilson lines in Coulomb gauge at non-zero temperature in (2+1)-flavor QCD with physical strange quark mass, using gauge configurations generated by the HotQCD and TUMQCD collaborations with the L\"uscher-Weisz gauge action and the highly improved staggered quark (HISQ) action \cite{Bazavov:2011nk,Bazavov:2013uja,Bazavov:2014pvz,Ding:2015fca,Bazavov:2017dsy,Bazavov:2018wmo,Bazavov:2019qoo}.
The Wilson line correlator is defined by \begin{equation} W(r, \tau, T)=\frac{1}{3} \langle Tr(L(0,\tau) L^\dagger(r, \tau))\rangle_{T} \end{equation} where $L(r,\tau)=\exp(i\int_0^{\tau} A_{4}(r,\tau^{\prime}) d\tau^{\prime})$. We used $N_{\sigma}^3 \times N_{\tau}$ lattices with $N_{\tau}=10,~12$ and $16$, to control lattice spacing effects, and $N_{\sigma}/N_{\tau}=4$ ~\footnote{While a few ensembles actually have $N_z=2N_\sigma$, this is irrelevant for the considerations in the following.}. Previous experience shows that the aspect ratio $N_{\sigma}/N_{\tau}=4$ is large enough to control finite volume effects. The light ($u$ and $d$) quark mass was set to $m_s/20$, which in the continuum limit corresponds to a pion mass of $161$ MeV. At high temperatures, $T>300$ MeV, we also performed calculations with the light quark mass equal to $m_s/5$, as quark mass effects are expected to be small in this region. The calculations performed here were part of a larger campaign by the TUMQCD collaboration to study the interaction of static quarks at non-zero temperature and to extract the strong coupling constant \cite{Bazavov:2018wmo,Bazavov:2019qoo}. As in the previous studies, the lattice spacing has been fixed using the $r_1$ scale defined in terms of the static $Q\bar Q$ energy at zero temperature, $V(r)$ \footnote{It is usually referred to as the potential in the lattice literature.} \begin{equation} \left . r^2\frac{d V}{d r}\right|_{r=r_1}=1. \end{equation} The values of $r_1/a$ as well as the zero temperature Wilson loops and Wilson line correlators for (2+1)-flavor HISQ configurations have been determined in Refs. \cite{Bazavov:2011nk,Bazavov:2014pvz,Bazavov:2017dsy}. We use the parametrization given in Ref. \cite{Bazavov:2017dsy} to obtain $a/r_1$ and the value $r_1=0.3106$ fm \cite{Bazavov:2010hj}. Our calculations cover a large temperature range, from temperatures as low as $120$ MeV to about $2$ GeV. This allows us to perform comparisons to the weak coupling calculations. The parameters of the calculations, including the temperature values, the bare gauge coupling $\beta=10/g^2$ and the corresponding statistics, are summarized in Appendix \ref{app:lat}; an account of the zero temperature ensembles is given there as well. For Wilson loops we used 3D-HYP smeared links in the spatial direction to improve the signal. We used zero, one, two, or five steps of HYP smearing. In what follows we will use the notation $W(r,\tau,T)$ for both Wilson line correlators and Wilson loops. The Wilson line correlators require multiplicative renormalization. This renormalization corresponds to an additive renormalization of the static $Q\bar Q$ energy at zero temperature. As in our previous studies with the HISQ action, we choose the renormalization scheme which corresponds to the choice $V(r=r_0)=0.954/r_0$, with $r_0$ being the Sommer scale \cite{Sommer:1993ce}. The renormalization constants corresponding to this choice were first calculated in Refs. \cite{Bazavov:2011nk,Bazavov:2014pvz} for $\beta \le 7.825$ and later extended to larger $\beta$ values and also refined using results on the free energy of a static quark \cite{Bazavov:2016uvm,Bazavov:2018wmo}. Here we use the values of the renormalization constants given in Tab. X of Ref. \cite{Bazavov:2018wmo} for $\beta\ge 7.15$ and in Tab. V of Ref. \cite{Bazavov:2016uvm} for smaller $\beta$ values.
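As an aside, the scale-setting condition can be made concrete with a short sketch. Assuming, for illustration only, a Cornell-type form $V(r)=-A/r+\sigma r + C$ for the zero temperature static energy (this is \textit{not} the parametrization of Ref. \cite{Bazavov:2017dsy}, and the coefficient values below are arbitrary), the condition $r^2\, dV/dr|_{r=r_1}=1$ can be solved in closed form, since the constant $C$ drops out:
\begin{verbatim}
import numpy as np

# illustrative Cornell-type coefficients (assumed, not fitted values):
# V(r) = -A/r + sigma*r + C
A, sigma = 0.4, 0.2

def force_condition(r):
    # r^2 dV/dr for the Cornell form; the constant C does not contribute
    return A + sigma * r**2

# r^2 V'(r) = A + sigma*r^2 = 1  =>  r1 = sqrt((1 - A)/sigma), valid for A < 1
r1 = np.sqrt((1.0 - A) / sigma)
print(f"r1 = {r1:.4f}, check: r1^2 V'(r1) = {force_condition(r1):.4f}")
\end{verbatim}
In the actual analysis $r_1/a$ is of course determined from the lattice data for $V(r)$ rather than from a closed-form expression.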
\subsection{Cumulant analysis of the correlation functions} \label{sec:corrmom} To understand the main features of our lattice results, and to what extent these can constrain the spectral function of a static meson, it is useful to consider the $n$-th cumulants of the correlation functions, defined as \begin{eqnarray} m_1(r,\tau,T)&=&-\partial_{\tau} \ln W(r,\tau,T),\\ m_n(r,\tau,T)&=&\partial_{\tau} m_{n-1}(r,\tau,T),\quad n>1. \label{eq:m_n} \end{eqnarray} The first cumulant $m_1$ is nothing but the effective mass, which at non-zero lattice spacing is defined as \begin{equation} m_1(r,\tau,T)=\frac{1}{a} \ln\frac{W(r,\tau,T)}{W(r,\tau+a,T)}. \end{equation} \begin{figure*} \centering \includegraphics[width=8.5cm]{meff_Nt12hyp_b7825_ra3-eps-converted-to.pdf} \includegraphics[width=8.5cm]{meff_Nt12hyp_b7825_ra6-eps-converted-to.pdf} \caption{Comparison of the subtracted first cumulant from the Wilson line correlators in Coulomb gauge and smeared Wilson loops with different levels of HYP smearing at $T=408$ MeV with $N_{\tau}=12$ and $\beta=7.825$. The left panel shows the results for $rT=1/4$, while the right panel shows the result for $rT=1/2$.} \label{fig:comp2hyp} \end{figure*} The first cumulant needs an additive renormalization, which is the same as the additive renormalization of the static $Q\bar Q$ energy or the free energy. In what follows we will present the renormalized first cumulant using the known renormalization constants, as discussed above. Since some of the calculations in the high temperature region are performed with light quark masses significantly larger than the physical value, we have to make sure that this does not affect our results. In Appendix \ref{app:lat} we compare the calculations performed at $m_l=m_s/20$ and $m_l=m_s/5$ and see no light quark mass dependence within statistical errors. Since we have different $N_{\tau}$ values, we can check the size of the cutoff effects. This is also discussed in Appendix \ref{app:lat}. The size of the cutoff dependence turns out to be smaller than our statistical errors. Because of this we mostly focus our discussion on the $N_{\tau}=12$ data. For this data set we have relatively small statistical errors and a sufficient number of data points in the Euclidean time direction. When appropriate we also show the $N_{\tau}=10$ and $16$ data. In Fig. \ref{fig:demo_m1} we show the first cumulant from Wilson line correlators at $r=0.24$ fm for $\beta=7.825$ and $N_{\tau}=16,~12$ and $10$, corresponding to temperatures $T=306$, $408$ and $489$ MeV, respectively, and compare it to the zero temperature first cumulant. At $T=0$ the first cumulant approaches a plateau for $\tau>0.2$ fm. On the other hand, the non-zero temperature cumulant decreases monotonically. At small $\tau$ the difference between the zero temperature and the finite temperature first cumulant is very small, and it increases monotonically as $\tau$ increases. The slope of the first cumulant increases with increasing temperature. This means that the in-medium modifications of the spectral function are larger at larger temperature, as expected. For the lowest temperature, $T=306$ MeV, the decrease in the first cumulant is approximately linear in $\tau$ around $\tau \sim 1/(2T)$, while for the higher temperatures this linear trend is only seen for smaller $\tau$, corresponding to the reduction in $1/(2T)$. The small $\tau$ behavior of the Wilson line correlators is dominated by the high $\omega$ part of the spectral function.
The high $\omega$ part of the spectral function is largely temperature independent, as discussed in the previous section, and therefore it is not very interesting from the point of view of studying the in-medium effects on the static $Q \bar Q$ pair. On the other hand, it complicates the analysis, so it is desirable to remove it. Let us assume -- following the arguments in Appendix \ref{app:spec_decomp} -- that we can decompose the spectral function as $\rho_r(\omega,T)=\rho_r^{tail}(\omega,T)+\rho_r^{med}(\omega,T)+ \rho_r^{high}(\omega)$, with $\rho_r^{med}(\omega,T)$ containing only the spectral structures of interest, in particular the dominant peak, and $\rho_r^{high}(\omega)$ describing the well separated UV behavior of the spectral function. At zero temperature $\rho_r^{med}(\omega,T)$ is a single delta function describing the ground state of the static $Q \bar Q$ pair for our choice of static meson operator. Therefore, assuming that the higher lying peaks are well separated from the ground state, we may isolate it at $T=0$ and subtract it from $W(r,\tau,T=0)$. This gives us an estimate for the contribution of the high $\omega$ part of the spectral function. Once evaluated at zero temperature, it can be used to subtract off an estimate of the high $\omega$ contribution at $T>0$ at the same value of $\beta$. We calculated the first cumulant from the subtracted correlator, and the results are also shown in Fig. \ref{fig:demo_m1}. At large $\tau$ the subtraction has no effect; at small $\tau$, however, the subtracted first cumulant at $T>0$ shows a weaker $\tau$-dependence. At the same time it shows a visible temperature dependence already for small $\tau$. At these small $\tau$ values we see an approximately linear $\tau$-dependence of $m_1$ at non-zero temperature, with a slope similar to that in the $\tau \sim 1/(2T)$ region. In Fig. \ref{fig:demo_m1_ext} we show the subtracted first cumulants at lower temperatures for two distances, $rT=1/4$ and $rT=1/2$. We consider the distances scaled by the temperature, since with increasing temperature the medium modification of the correlator manifests itself at shorter and shorter distances; the form of the $\tau$ dependence of $m_1$ will scale with $rT$. We see that for fixed $rT$ the decrease of the first cumulant with $\tau$ is stronger at higher temperatures. Furthermore, this decrease is larger for larger $rT$. We again see an approximately linear dependence of the first cumulants on $\tau$, except for the few largest $\tau$ values. This feature of the first cumulants, which is a necessary consequence of the existence of the low energy tail (see Appendix~\ref{app:spec_decomp}), will play an important role when modeling the spectral function of the static meson. We also point out that the behavior of the first cumulant shown in Figs. \ref{fig:demo_m1} and \ref{fig:demo_m1_ext} is similar to the behavior of the bottomonium first cumulants in NRQCD at non-zero temperature when extended meson operators are used \cite{Larsen:2019bwy,Larsen:2019zqv}. It is interesting to compare the results for the Wilson line correlators in Coulomb gauge with the ones obtained from Wilson loops. Both types of static meson correlators have been used to obtain the static energy at zero temperature \cite{Bernard:2000gd,Cheng:2007jq,Bazavov:2011nk,Bazavov:2014pvz}. The first cumulants from Wilson line correlators and from smeared or unsmeared Wilson loops have been compared in Ref. \cite{Bazavov:2019qoo} at zero temperature.
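A minimal sketch of this subtraction, with hypothetical plateau-fit parameters $A_0$ and $E_0$ standing in for the actual $T=0$ ground-state fit and with toy correlators throughout, could look as follows.
\begin{verbatim}
# Sketch of the subtraction described above: remove the ground-state
# exponential from W(tau, T=0) to estimate the high-omega contribution,
# then subtract that estimate from W(tau, T>0). All data are toys.
import numpy as np

a, n_tau = 0.1, 12
tau = a * np.arange(n_tau)

A0, E0 = 0.8, 1.5                                   # hypothetical T=0 fit
W_T0 = A0 * np.exp(-E0 * tau) + 0.2 * np.exp(-6.0 * tau)
W_T  = A0 * np.exp(-E0 * tau + 0.3 * tau**2) + 0.2 * np.exp(-6.0 * tau)

W_high = W_T0 - A0 * np.exp(-E0 * tau)              # high-omega estimate
W_sub  = W_T - W_high                               # subtracted correlator
\end{verbatim}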
It turned out that both approach the same plateau value for sufficiently large $\tau$. At small $\tau$ the first cumulants for Wilson loops are systematically larger than those for the Wilson line correlators. The first cumulants for the Wilson line correlators approach the plateau at smaller Euclidean time separations and have smaller errors \cite{Bazavov:2019qoo}. In this sense the Wilson line correlators in Coulomb gauge are very good at projecting onto the ground state, while there are significant excited state contributions in the Wilson loops. We performed a similar comparison of Wilson line correlators in Coulomb gauge and Wilson loops with different levels of HYP smearing for $T=408$ MeV ($\beta=7.825$) and $N_{\tau}=12$. As in the zero temperature case, there is a significant difference between the first cumulants for Wilson loops and for Wilson line correlators at small Euclidean time, due to the excited state contamination, or equivalently due to $\rho_r^{high}(\omega)$. Therefore, in Fig. \ref{fig:comp2hyp} we show the comparison of the Wilson loops and Wilson line correlators in Coulomb gauge at two distances, $rT=1/4$ and $rT=1/2$, in terms of the subtracted first cumulants. At the smaller distance the first cumulants from the Wilson line correlator and from Wilson loops with different levels of HYP smearing agree within errors. For the larger distance, $rT=1/2$, the two correlators agree at small $\tau$, where we see a nearly linear decrease of the first cumulants, while at large $\tau$ the non-linear behavior in $\tau$ of the first cumulants depends on the number of HYP smearings, and is also different for the Wilson line correlator in Coulomb gauge. Thus the large Euclidean time behavior depends on the choice of the static meson operator. This is to be expected, as explained above. The situation is similar to the case of bottomonium correlators in NRQCD at non-zero temperature when different extended meson operators are used \cite{Larsen:2019bwy,Larsen:2019zqv}. Since the behavior of the first cumulant at $\tau$ close to $1/T$ depends on the choice of the static meson operator, it is non-trivial to obtain physical information from $W(r,\tau,T)$ in this $\tau$ region. In order to better understand our numerical results for the first cumulants, and to see to what extent these can constrain the spectral function of a static meson, it is helpful to calculate higher cumulants of the correlator. In the following we consider the cumulants of the subtracted correlator, as we are interested in exploring the $\tau$-dependence caused by thermal broadening of the dominant peak. To evaluate higher cumulants we performed fits of the first cumulants of the subtracted correlator using fourth order polynomials, and estimated the higher cumulants by taking derivatives of the resulting polynomial. The results for the second cumulants for three distances, $rT=1/4,~1/2$ and $1$, at several temperatures are shown in Fig. \ref{fig:m2} for $N_{\tau}=12$. The errors on the cumulants have been estimated using the jackknife procedure. Since the second cumulant is negative, and the square root of the negative second cumulant may be related to the width, as discussed later, in the figure we show $\sqrt{-m_2}$ in temperature units. We see that the errors on the second cumulants increase with decreasing temperature. At short distances, the second cumulant is approximately constant for small $\tau$ and then starts to increase rapidly with increasing $\tau$.
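A sketch of this cumulant extraction, under the stated assumption of a fourth order polynomial and with placeholder $m_1$ values, is given below; in the analysis the errors are then estimated by a jackknife over such fits.
\begin{verbatim}
# Sketch: fit m1(tau) with a fourth order polynomial and obtain the
# higher cumulants from its derivatives, m_n = d m_{n-1}/d tau.
import numpy as np

tau = 0.1 * np.arange(1, 11)
m1 = 2.0 - 0.8 * tau - 0.5 * tau**2        # placeholder first cumulant

poly = np.polynomial.Polynomial.fit(tau, m1, deg=4)
m2 = poly.deriv(1)(tau)                    # second cumulant
m3 = poly.deriv(2)(tau)                    # third cumulant
\end{verbatim}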
For $rT=1$ the almost constant behavior of $m_2$ is only seen for the two highest temperatures. The results for $T<251$ MeV are not shown, as these have much larger errors. Within these large errors, however, the second cumulant is compatible with a constant at these temperatures.
\begin{figure*}
\centering
\includegraphics[width=5.5cm]{m2_comp_r14-eps-converted-to.pdf}
\includegraphics[width=5.5cm]{m2_comp_r12-eps-converted-to.pdf}
\includegraphics[width=5.5cm]{m2_comp_r10-eps-converted-to.pdf}
\caption{The second cumulants $m_2$, obtained from a fourth order polynomial fit to the first cumulant $m_1$, of the subtracted static meson correlator on $N_{\tau}=12$ lattices for $rT=1/4$ (left), $rT=1/2$ (middle) and $rT=1$ (right) for several temperatures. The different symbols correspond to different temperatures given in MeV.}
\label{fig:m2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=5.5cm]{m3_comp_r14-eps-converted-to.pdf}
\includegraphics[width=5.5cm]{m3_comp_r12-eps-converted-to.pdf}
\includegraphics[width=5.5cm]{m3_comp_r10-eps-converted-to.pdf}
\caption{The third cumulants $m_3$, obtained from a fourth order polynomial fit to the first cumulant $m_1$, of the subtracted static meson correlator on $N_{\tau}=12$ lattices for $rT=1/4$ (left), $rT=1/2$ (middle) and $rT=1$ (right) for several temperatures. The different symbols correspond to different temperatures given in MeV.}
\label{fig:m3}
\end{figure*}
In Fig. \ref{fig:m3} we show the third cumulant of the Wilson line correlator, obtained from a fourth order polynomial fit to the first cumulant $m_1$, as a function of $\tau T$ in temperature units. The results are shown for three representative distances, $rT=1/4$, $rT=1/2$ and $rT=1$. We only show our findings for the third cumulant for $T \ge 334$ MeV, as at lower temperatures the errors are too large to extract meaningful information. Furthermore, for $rT=1/4$ the errors are already very large at $T=334$ MeV. The absolute value of the third cumulant increases rapidly with increasing $rT$ and decreases with increasing temperature. These features can already be deduced from the results for the second cumulant. For $\tau T>0.35$ the third cumulant is negative, while for $\tau T<0.3$ it is positive but small given the errors. The small positive third cumulant at small $\tau$ is equivalent to having a nearly constant second cumulant. From Fig. \ref{fig:m3} it is clear that estimating the fourth and higher order cumulants from the present lattice results is very challenging. This will be important when considering parametrizations of the spectral function of a static meson, as the data can only constrain a limited number of parameters. Hence any such parametrization should not contain more than three or four parameters.
\subsection{Comparison to HTL predictions}
\label{sec:corrHTLcmp}
At high temperatures, it is expected that the Wilson loops and the Wilson line correlators in Coulomb gauge can be described in the weak coupling approach. The Wilson loops and Wilson line correlators have been calculated at leading order in Hard Thermal Loop (HTL) perturbation theory \cite{Laine:2006ns,Burnier:2013fca}. The HTL approximation is valid when $r \sim 1/m_D$ \cite{Brambilla:2008cx}, with $m_D$ being the leading order Debye mass in QCD. At distances $r \ll 1/m_D$ this approximation is not expected to work.
In the HTL approximation the logarithm of the Wilson loop or the Wilson line correlator can be written as
\begin{align}
\label{lnW_htl}
\log W(r,\tau,T)=-{\rm Re} V(r,T)\times \tau+\nonumber\\
\int_{-\infty}^{\infty}\frac{d \omega}{2 \pi} (e^{-\omega \tau}+e^{-\omega (\beta-\tau)}) (1+n_B(\omega)) \sigma_r(\omega,T)+\mathrm{const},
\end{align}
where $n_B(\omega)=(\exp(\omega/T)-1)^{-1}$. The spectral function $\sigma_r(\omega,T)$ is related to the HTL spectral functions of the transverse and longitudinal gluons and is distinct from the spectral function $\rho_r(\omega,T)$. For the Wilson line correlator, it only depends on the spectral function of the longitudinal gluons. The important feature of this correlator is that the static energy exists,
\begin{equation}
\label{htl_energy}
\begin{split}
E^{HTL}_{s}(r, T)=\lim_{t\rightarrow\infty} i \frac{\partial \log W(r,\tau=it,T)}{\partial t}\\
=\mathrm{Re}\,V(r, T)-i\,\mathrm{Im}\,V(r, T).
\end{split}
\end{equation}
At leading order, the real and imaginary parts are given by
\begin{equation}
\label{htl_energy LO}
\begin{split}
\mathrm{Re}\,V(r, T) = - \frac{g^2 C_F}{4 \pi} \left(\frac{e^{- m_{D} r}}{r} + m_{D}\right) \\
\mathrm{Im}\,V(r, T) = \frac{g^2 C_F}{4\pi} T \int\limits_0^\infty dz \, \frac{2 z}{\left(z^2+1\right)^2} \left[ 1 - \frac{\sin z m_D r}{z m_D r} \right].
\end{split}
\end{equation}
The real part of the potential, $\mathrm{Re}\,V(r, T)$, is in this approximation identical at leading order to the singlet free energy in Coulomb gauge \cite{Laine:2006ns,Brambilla:2008cx}. We observe that the $\tau$ dependence of the above correlator consists of a part linear in $\tau$ and a part periodic in $\tau$. This particular $\tau$ dependence of the HTL correlator, along with the fact that $\sigma_{r}(\omega, T)$ has a $1/\omega$ singularity, allows for a well-defined limit in Eq.~(\ref{htl_energy}). In \cref{sec:BalaDatta}, while calculating the static energy non-perturbatively, we will parametrize the correlator as a combination of linear and periodic parts in $\tau$. An obvious consequence of a parametrization as in Eq.~(\ref{lnW_htl}) is that the first cumulant of $W(r,\tau,T)$, after subtraction of ${\rm Re}\,V(r,T)$, is anti-symmetric around the mid-point $\tau=1/(2 T)$. Since we study the Wilson line correlators in a large temperature range, including high temperatures, it makes sense to compare the lattice results with the weak-coupling ones. The comparison with the HTL perturbative result is also important since it gives some insight into the general features of the spectral function and how these features manifest themselves in the cumulants of the Euclidean time correlator. Therefore, in Fig.~\ref{fig:spf_htl_T667} we show the spectral functions corresponding to Wilson line correlators for different $r$ at $T=667$ MeV in the HTL approximation. Note that we use a different renormalization prescription compared to Ref. \cite{Burnier:2013fca}, as well as the two-loop running of the coupling constant with $\Lambda_{\overline{MS}}^{n_f=3}=332$ MeV \cite{Petreczky:2020tky}. We will use this choice for the gauge coupling throughout this paper. We see a peak in the spectral function at $\omega={\rm Re} V(r, T)=F_S(r, T)$, which can be well described by a skewed Lorentzian for frequencies around the location of the peak \cite{Burnier:2013fca}. Far away from the peak position, the spectral function is described by different structures, distinct from the Lorentzian.
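The leading-order expressions in Eq.~(\ref{htl_energy LO}) are straightforward to evaluate numerically; the sketch below does so with placeholder values for the coupling and Debye mass (GeV units), i.e. not with the two-loop running coupling used in our actual comparison.
\begin{verbatim}
# Numerical evaluation of the leading-order HTL expressions above;
# illustration only, with placeholder coupling g2 and Debye mass mD.
import numpy as np
from scipy.integrate import quad

CF, g2, mD, T = 4.0 / 3.0, 2.0, 1.0, 0.667

def re_v(r):
    return -g2 * CF / (4 * np.pi) * (np.exp(-mD * r) / r + mD)

def im_v(r):
    # np.sinc(u/pi) = sin(u)/u, finite at u = 0
    integrand = lambda z: (2 * z / (z**2 + 1)**2
                           * (1 - np.sinc(z * mD * r / np.pi)))
    val, _ = quad(integrand, 0.0, np.inf)
    return g2 * CF / (4 * np.pi) * T * val

print(re_v(0.3), im_v(0.3))
\end{verbatim}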
\begin{figure}
\includegraphics[width=7cm]{spf_HTL-eps-converted-to.pdf}
\caption{The HTL spectral function for $T=667$ MeV for different $r$.}
\label{fig:spf_htl_T667}
\end{figure}
In general, one has to expect that a non-perturbative spectral function, such as the one calculated on the lattice, contains further structures. In particular, the lattice spectral function has a large UV continuum part along with a tail at very low $\omega$. HTL-like features of the lattice spectral function can only come from the medium dominated part of the spectral function, $\rho_{r}^{med}(\omega, T)$. Therefore, we can possibly see HTL-like features in the non-perturbative correlator near the $\tau\sim 1/(2T)$ region, if these are sufficiently separated from any further structures. We consider the comparison between the lattice and HTL results for the Wilson line correlator in terms of its first cumulant, $m_1$. The first cumulant should be sensitive to the peak position of the spectral function. In the absence of a peak width, i.e. when $\rho_{r}^{med}(\omega, T)\sim \delta(\omega-E_{0}(r))$, $m_1$ should approach the energy of the static $Q \bar Q$ pair, $E_0$, at intermediate $\tau$. It is known that the leading order perturbative result does not provide an accurate description of the static $Q\bar Q$ energy at zero temperature. For this reason, we naturally expect that at finite temperature the non-perturbative real and imaginary parts of the complex static energy, defined through the parametrization of Eq.~\eqref{lnW_htl}, will be different from the expressions given in Eq.~(\ref{htl_energy LO}). Therefore, a non-perturbative investigation of this complex static energy is very important; we will discuss it in detail in \cref{sec:BalaDatta}. Furthermore, the static energy needs to be renormalized, and the renormalization condition used on the lattice is different from the one in the $\overline{MS}$ scheme. Connecting these two renormalization schemes is a non-trivial task. We also know that in the HTL approximation the real part of the static energy is given by the so-called singlet $Q \bar Q$ free energy, $F_S(r,T)$ \cite{Brambilla:2008cx}, as discussed above. Therefore, when comparing the lattice results on $m_1$ to the HTL results we will assume that the peak position is similar to $F_S(r, T)$ and subtract the latter from the first cumulant. As mentioned above, the HTL calculation is not expected to describe the spectral function at large $\omega$. Therefore, we should use the subtracted first cumulant when comparing the lattice and HTL results, or simply ignore the data points at small $\tau$ in the comparison. In any case, the HTL features in the correlator can only appear in the data points around $\tau\sim 1/(2T)$, where the effect of this subtraction is small. We performed a comparison of the lattice results on the subtracted first cumulant with leading-order HTL calculations for $T=474$ MeV and $T=667$ MeV. In the HTL calculations we used three values of the renormalization scale, $\mu=\pi T,~2 \pi T$ and $4 \pi T$. The comparison is shown in Fig.~\ref{fig:comp_htl_667} for $T=667$ MeV and four representative distances, $rT=1/4,~1/2,~3/4$ and $1$. The lattice and the HTL results for $m_1$ share some qualitative features, namely they decrease monotonically with increasing $\tau$. This decrease of $m_1$ with $\tau$ around $\tau=\beta/2$ comes from the fact that the spectral function is not a delta function but rather a broad peak (see Fig.~
\ref{fig:spf_htl_T667}), and the slope of $m_1$ is loosely related to the width of the peak. The HTL curve is antisymmetric with respect to $1/(2T)$ over the whole $\tau$ range, while the lattice data do not show the same antisymmetry. This is expected, as the lattice correlator gets contributions both from $\rho_{r}^{high}(\omega, T)$ and $\rho_{r}^{tail}(\omega, T)$. However, as we will see in \cref{sec:BalaDatta}, the non-perturbative data are compatible with a small antisymmetric region around $\tau\sim 1/(2T)$. We mentioned earlier that the leading-order HTL results for the real part of the static energy and the singlet free energy agree exactly; hence, the corresponding $m_1-F_{S}$ vanishes at $\tau =1/(2T)$. However, the lattice result for $m_1-F_S$ is non-zero at $\tau =1/(2T)$. This implies that the real part of the static energy that will be determined in \cref{sec:BalaDatta} must be different from the singlet free energy. The slope of $m_1$ around $\tau \sim 1/(2T)$ is much larger for the lattice correlator than for the leading-order HTL curve. This corresponds to the fact that the non-perturbative imaginary part determined from the lattice data in \cref{sec:BalaDatta} is significantly different from the expression given in Eq.~(\ref{htl_energy LO}). The comparison turned out to be similar for $T=474$ MeV.
\begin{figure*}
\includegraphics[width=8cm]{m1_r14_T667-eps-converted-to.pdf}
\includegraphics[width=8cm]{m1_r12_T667-eps-converted-to.pdf}
\includegraphics[width=8cm]{m1_r34_T667-eps-converted-to.pdf}
\includegraphics[width=8cm]{m1_r1_T667-eps-converted-to.pdf}
\caption{The comparison of $m_1-F_S$ on the lattice (subtracted) with HTL results at $T=667$ MeV for $rT=1/4,~1/2,~3/4$ and $1$. The HTL results for $\mu=2 \pi T$ are shown as solid lines. The dashed lines correspond to a variation of the scale $\mu$ by a factor of two.}
\label{fig:comp_htl_667}
\end{figure*}
We also performed a comparison between the lattice and HTL calculations at the highest temperature available, $T=1938$ MeV. Since we do not have the corresponding zero temperature result, here the comparison is performed in terms of the unsubtracted cumulants. We see that also at the highest temperature the lattice data differ from the perturbative HTL results. The first cumulant calculated on the lattice has a stronger dependence on $\tau T$, corresponding to the fact that the non-perturbative imaginary part is much larger than the perturbative one even at this very high temperature.
\begin{figure*}
\includegraphics[width=8cm]{m1_r14_T1938-eps-converted-to.pdf}
\includegraphics[width=8cm]{m1_r12_T1938-eps-converted-to.pdf}
\includegraphics[width=8cm]{m1_r34_T1938-eps-converted-to.pdf}
\includegraphics[width=8cm]{m1_r1_T1938-eps-converted-to.pdf}
\caption{The comparison of $m_1-F_S$ calculated on the lattice with HTL results at $T=1938$ MeV for $rT=1/4,~1/2,~3/4$ and $1$. The HTL results for $\mu=2 \pi T$ are shown as solid lines. The dashed lines correspond to a variation of the scale $\mu$ by a factor of two.}
\label{fig:comp_htl_1938}
\end{figure*}
In Fig.~\ref{fig:m2htl} we show the comparison of the lattice and HTL results for the second cumulant, $m_2$, for $T=667$ MeV and three representative distances, $rT=1/4,~1/2$ and $1$. We choose the renormalization scale in the HTL calculation to be $\mu=2 \pi T$ and vary it by a factor of two around this value. Unsurprisingly, there is no quantitative agreement between the lattice results and leading-order HTL, as already visible from Fig.~\ref{fig:comp_htl_667}.
The fact that $m_2$ on the lattice is constant at small $\tau$ is due to the subtraction of the UV contribution, while the fact that $m_2$ is not constant at large $\tau$ is due to the low-energy tail. Since neither of these is present in the leading-order HTL result, the mismatch between lattice and leading-order HTL is particularly pronounced in these regions.
\begin{figure*}
\includegraphics[width=5.8cm]{m2_r14_T667-eps-converted-to.pdf}
\includegraphics[width=5.8cm]{m2_r12_T667-eps-converted-to.pdf}
\includegraphics[width=5.8cm]{m2_r10_T667-eps-converted-to.pdf}
\caption{The second cumulant, obtained from a fourth order polynomial fit to the first cumulant $m_1$, of the subtracted Wilson line correlators as a function of $\tau$ at $T=667$ MeV, calculated on $N_{\tau}=12$ lattices and in HTL perturbation theory (lines), for $rT=1/4$ (left), $rT=1/2$ (middle) and $rT=1$ (right). The solid lines correspond to the choice of the renormalization scale $\mu=2 \pi T$, while the dashed lines correspond to the scale choices $\mu=\pi T$ and $4 \pi T$.}
\label{fig:m2htl}
\end{figure*}
\section{Study of the spectral function and its ground state peak}
\label{sec:spectra}
In the subsections of this section we make four different attempts to analyze the lattice results. The four methods are: a fit with a finite width spectral function (\ref{sec:potfit}), a fit with an HTL Ansatz (\ref{sec:BalaDatta}), a Pad\'e fit (\ref{sec:Pade}), and a fit using Bayesian methods (\ref{sec:Bayes}). We outline here the basic idea behind each method and the pros and cons of each choice, and leave a more technical description for each subsection. We state clearly that each of the first three approaches (Bayesian methods being the exception) only aims at identifying and parametrizing the lowest, dominant spectral feature. Thus, the inability of their results to reproduce the input data over the complete $\tau$ range has to be expected. We stress that this is nothing unusual -- the same applies to almost any analysis of zero-temperature lattice correlators, which may have to leave out the first few time steps due to not having enough independent information in this range to fully constrain the complex UV structure affecting those data. We leave it to the reader to judge each method based on the results put forth in this paper and to decide which one they prefer. The first method, used in section \ref{sec:potfit}, is a simple fit using a model spectral function with a well-defined position and width. This choice comes from the observation that when one uses zero temperature results to remove contributions coming from higher energy excitations, the first cumulant takes a form that is well approximated by a low order polynomial, sometimes even that of a straight line. The benefit of this method is that it gives a precise answer to the question ``What is the position and width of the dominant feature in the spectral function?'' with a fit that works very well, excluding the first and last point. The downside is that for it to work, the high energy contributions to the correlator have to be removed using zero temperature results, a procedure which might not be well defined. Also, the actual shape of the dominant spectral feature is not determined, but only its position and effective width. The second method is described in~\cref{sec:BalaDatta} and uses an Ansatz motivated by HTL to fit in a narrow range around $\tau \sim 1/(2T)$ to extract physically relevant information.
Namely, one parametrizes the $\tau$ dependence around $1/(2T)$ exactly like the $\tau$ dependence in Eq.~(\ref{lnW_htl}). In this method the peak position and peak width can be interpreted as the real and imaginary parts of the thermal static energy. As shown earlier in this paper, HTL cannot explain the full spectral function. The HTL-motivated fit only attempts to describe the dominant feature of the spectral function, which is responsible for the thermal static energy. The third method, used in section \ref{sec:Pade}, is the Pad\'e interpolation. This approach operates on the Fourier transformed lattice data, instead of directly on the correlator. The rational interpolation of the Matsubara frequency correlator is subsequently rotated to real time. In our mock data tests, this method has been shown to give reasonable results for the position of the peaks, but fails to reliably estimate the width of the spectral function. By construction the Pad\'e approach does not need to reproduce the input data, and we find that the reconstructed spectral function indeed does not fulfill the original spectral decomposition. The method requires high-quality input data, as it does not contain a regulator of the statistical noise. The fourth and last method we use is the Bayesian Reconstruction (BR) method described in section \ref{sec:Bayes}, which is based on Bayesian inference. The basic idea here is to regularize a $\chi ^2$ fit with an additional functional, which encodes how compatible the fitted spectral function is with prior knowledge one possesses on the spectrum, such as its positivity. This method is constructed such that it will always reproduce the Euclidean input data points within their statistical uncertainty. It has been shown to outperform the Maximum Entropy Method in the reproduction of sharply peaked features, but may suffer from ringing artifacts in the reconstruction of extended spectral features, and requires high-precision data. However, it was realized that correlators on the finer lattices with improved gauge action contain non-negligible contributions with negative weights that render the BR method inapplicable.
\subsection{Determination of the ground state peak from spectral function model fits}
\label{sec:potfit}
\begin{figure*}
\centering
\includegraphics[width=7.5cm]{pot_Nt12-eps-converted-to.pdf}
\includegraphics[width=7.5cm]{ImV_Nt12_T-eps-converted-to.pdf}
\caption{The peak position of the spectral function (left) and the width (right) as a function of the separation $r$, obtained from Gaussian fits of the $N_{\tau}=12$ data.}
\label{fig:pot}
\end{figure*}
In order to constrain the spectral function $\rho_r(\omega,T)$ from limited data on Euclidean time correlation functions, we need to assume some functional form for it. As for the analysis of the cumulants, we assume that the spectral function can be written as $\rho_r(\omega,T)=\rho_r^{tail}(\omega,T)+\rho_r^{med}(\omega,T)+\rho_r^{high}(\omega)$, with $\rho_r^{high}(\omega)$ assumed to be the temperature-independent high frequency part of the spectral function and $\rho_r^{med}(\omega,T)$ containing the dominant peak structure. On general grounds, and based on EFT arguments, it is natural to assume that $\rho_r^{med}(\omega,T)$ has a Lorentzian form. However, for a Lorentzian form the integral in Eq. \eqref{eq:ETwilsonspecdec} will not converge at the lower integration limit. We also do not expect that the Lorentzian form can describe the spectral function well below $\omega=\Omega(r,T)$.
This follows from the general properties of the spectral function discussed in Appendix \ref{app:spec_decomp}. In the case of the HTL spectral function we have seen that, while around the peak the spectral function appears to be Lorentzian, different structures dominate the spectral function far away from the peak, in particular at very low frequency, see Fig. \ref{fig:spf_htl_T667}. Thus, in addition to the parametrization of the peak of the spectral function, we also need to parametrize the behavior of the spectral function at very low frequency, i.e. the low energy tail. This part of the spectral function will affect the correlation function at large values of $\tau$. Unfortunately, we do not have a well-motivated form for this part of the spectral function. Furthermore, for calculations in finite volume the spectral function is not a continuous function but a discrete sum of delta functions with an envelope function of a certain shape. For small volumes, as used in the present calculations, there could be significant distortion of the envelope function, since the number of low lying energy levels and the corresponding number of $\delta$ peaks is quite limited. This is especially the case for the low $\omega$ tail, as it extends over a large $\omega$-range below the dominant peak position, including negative $\omega$ values. The information we have on the different structures in the spectral function is also quite limited. At small $\tau$ values only the first two cumulants can be determined, with the third cumulant being zero within the estimated errors. Therefore, at small $\tau$ the lattice data are only sensitive to the position and the effective width of the dominant peak, and a Gaussian form provides a simple parametrization for this that avoids the convergence problem in Eq. (\ref{eq:ETwilsonspecdec}). At larger $\tau$ the correlation function is sensitive to the low energy tail, i.e. the region $\omega \ll \Omega(r,T)$. In the previous section we have seen that in this region also the third cumulant is non-zero, but cumulants beyond the third one cannot be constrained by our lattice data. While it would be tempting to parametrize the low $\omega$ tail of the spectral function by a series of delta functions, avoiding any bias, in practice it is impossible to constrain all the corresponding parameters. We need to approximate this part of the spectral function by a single delta function. Thus, a simple parametrization of the Wilson line correlator function consistent with the above observations is the following:
\begin{align}
W(r,\tau,T) =& A_P(r,T) \exp(-\Omega(r,T)\tau+\Gamma_G(r,T) ^2 \tau ^2/2)+\nonumber\\&\label{GAnsatz}
A_{cut}(r,T) \exp(-\omega_{cut}(r,T)\tau),
\end{align}
with $A_{cut} \ll A_P$ and $\omega_{cut}\ll \Omega$. The first cumulant corresponding to this form will decrease linearly at small $\tau$, while exhibiting a non-linear behavior for large $\tau$, as observed in our lattice results. We performed correlated fits of our lattice data using Eq.~(\ref{GAnsatz}) and determined the parameters $A_P,~\Omega,~\Gamma_G,~A_{cut}$ and $\omega_{cut}$. The fits describe the lattice data very well, with the possible exception of the data at the smallest $\tau$ value. The details of these fits are discussed in Appendix \ref{app:fit}. The peak position, $\Omega(r,T)$, is shown in Fig. \ref{fig:pot} as a function of the distance $r$ for different temperatures. It shows no temperature dependence and agrees with the zero temperature static energy.
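For orientation, a minimal (uncorrelated) sketch of such a fit to the form of Eq.~(\ref{GAnsatz}) is given below; the data are synthetic and the starting values are placeholders, i.e. this is not a transcription of our correlated-fit analysis code.
\begin{verbatim}
# Illustrative fit of a correlator to the parametrization of
# Eq. (GAnsatz): Gaussian-broadened peak plus a single low-energy
# delta function. Synthetic toy data throughout.
import numpy as np
from scipy.optimize import curve_fit

def w_model(tau, A_P, Omega, Gamma_G, A_cut, w_cut):
    return (A_P * np.exp(-Omega * tau + 0.5 * Gamma_G**2 * tau**2)
            + A_cut * np.exp(-w_cut * tau))

tau = 0.05 * np.arange(1, 12)
rng = np.random.default_rng(0)
W = w_model(tau, 1.0, 2.0, 0.8, 0.05, 0.3)       # toy "lattice" data
W_noisy = W * (1.0 + 1e-3 * rng.standard_normal(W.size))

popt, pcov = curve_fit(w_model, tau, W_noisy,
                       p0=[1.0, 2.0, 0.5, 0.01, 0.1], sigma=1e-3 * W)
A_P, Omega, Gamma_G, A_cut, w_cut = popt
\end{verbatim}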
The fact that $\Omega$ is close to the zero temperature static energy can be easily understood from Fig. \ref{fig:demo_m1}. The subtracted first cumulant at the smallest $\tau$ is already close to the zero temperature plateau and shows an approximately linear behavior at small $\tau$; a linear extrapolation naturally gives the zero temperature static energy. The width of the dominant peak depends on the specific parametrization of the spectral function, and the Gaussian form has no physical motivation. A parametrization-independent definition of the effective width is the width at half maximum. For a Gaussian form this means $\Gamma=\Gamma_G \sqrt{2 \ln 2}$. In Fig. \ref{fig:pot} we also show the effective width $\Gamma$ as a function of the distance $r$ at different temperatures. We see that $\Gamma$ increases with increasing $r$. We also see that, when plotted as a function of $rT$, the effective width in temperature units shows very little temperature dependence. This is expected at very high temperature, but not in the temperature range studied by us. For the other two fit parameters we find that $\omega_{cut} \ll \Omega$ and $A_{cut} \ll A_P$, in accordance with our expectations. The same parametrization of the spectral function has been used in the analysis of NRQCD bottomonium correlators at non-zero temperature \cite{Larsen:2019bwy,Larsen:2019zqv}. It has been observed that different bottomonium states acquire a thermal width, while no significant mass shift has been observed. Furthermore, the thermal width turned out to be larger for higher lying bottomonium states that have larger size \cite{Larsen:2019bwy,Larsen:2019zqv}. Thus the thermal modification of static $Q\bar Q$ states and of bottomonium is quite similar. Furthermore, the bottomonium Bethe-Salpeter amplitudes also do not show large temperature modifications \cite{Larsen:2020rjk}. Using this result, a potential model analysis resulted in a potential whose real part is identical to the zero temperature static energy \cite{Shi:2021qri}.
\subsection{Determination of the ground state peak via the HTL-motivated method}
\label{sec:BalaDatta}
In this section, we will use the method of \cite{Bala:2019cqu} to obtain the position and the width of the dominant peak of the static $Q\bar Q$ spectral function. In this method, the peak position $\Omega(r, T)$ and peak width $\Gamma(r, T)$ of the dominant peak are interpreted as the real and imaginary parts of the thermal static energy $E_{s}(r, T)$. Quantitatively, one assumes that the following limit exists
\begin{equation}
E_{s}(r,T)=\lim_{t\rightarrow\infty} i\frac{\partial \log W(r,t,T)}{\partial t}=\Omega(r,T)-i\Gamma(r,T).
\label{p-def}
\end{equation}
Here, $W(r,t,T)$ is the real-time correlator obtained as the Fourier transform of the spectral function $\rho_{r}(\omega,T)$. Below the crossover temperature $W(r,\tau,T)\sim \exp (-\Omega \tau)$ for $0\ll\tau\ll1/T$ follows from a transfer matrix argument, and therefore the above limit exists trivially. However, above the crossover temperature, the existence of the limit in Eq.~(\ref{p-def}) is a non-trivial statement, whose consequences are important for various applications, such as open quantum systems \cite{Kajimoto:2017rel} or the construction of the vector current spectral function \cite{Burnier:2007qm}. The definition of the static energy involves a $Q\bar Q$ correlator, whose large-time behavior is governed by the lowest, dominant feature of $\rho_{r}^{med}(\omega, T)$.
As a consequence, it is sufficient to determine the structure of the dominant peak of the spectral function to obtain the thermal static energy. In this HTL-motivated method, we model the dominant peak of the spectral function such that the above limit exists. In \cref{sec:corrHTLcmp} we mentioned that for the leading-order HTL correlator the limit in Eq.~(\ref{p-def}) exists. We observed that this is possible because the correlator can be written as a combination of a part linear in $\tau$ and a part periodic in $\tau$, cf.~Eq.~\eqref{lnW_htl}. Now let us see whether the non-perturbative data near $\tau\sim 1/(2T)$ can be parametrized by a combination of periodic and linear parts in $\tau$. Motivated by this, we write the following parametrization of the $Q\bar Q$ correlator near $\tau\sim {1}/{(2T)}$,
\begin{align}
\log\,W(r,\tau,T)=-\Omega(r,T)\,\tau+\nonumber\\
\int_{-\infty}^{\infty} d\omega \Sigma_r(\omega,T) (e^{-\omega \tau}+e^{-\omega (\beta-\tau)})\nonumber\\
+\mathrm{const}.
\label{htl_param}
\end{align}
The condition for the limit in Eq.~(\ref{p-def}) is then
\begin{align}
\lim_{t\rightarrow \infty}\int_{-\infty}^{\infty} d\omega \Sigma_{r}(\omega) \omega (e^{-i\omega t}-e^{-\omega (\beta-it)}) =\mathrm{const}.
\end{align}
Using the fact that $\lim\limits_{t\rightarrow \infty}(e^{-i\omega t}-e^{-\omega (\beta-it)})=-2\pi i \omega \delta(\omega)$, we observe that the above limit will exist only if $\Sigma_r(\omega)\sim \frac{1}{\omega^2}$ as $\omega \rightarrow 0$. Without loss of generality we can introduce a factor $(1+n_B(\omega))$ and write
\begin{align}
\Sigma_r(\omega,T)=(1+n_B(\omega))\,\eta_{r}(\omega,T).
\end{align}
Since the function $(1+n_{B}(\omega))$ already contains a factor of $1/\omega$, the function $\eta_{r}(\omega, T)$ should also contain a $1/\omega$ term at small $\omega$. Then Eq.~(\ref{htl_param}) becomes
\begin{align}
\log\,W(r,\tau,T)=-\Omega(r,T)\,\tau+\nonumber\\
\int_{-\infty}^{\infty} d\omega \eta_r(\omega,T) \frac{\exp(\omega \tau)+\exp(\omega (\beta-\tau))}{\exp(\omega \beta)-1}\nonumber\\
+\mathrm{const}.
\label{htl_param1}
\end{align}
$\eta_{r}(\omega, T)$ can only be an odd function of $\omega$. The most general expansion of the function $\eta_{r}(\omega, T)$ consistent with the existence of the thermal static energy can then be written as
\begin{align}
\eta_r(\omega,T)=\frac{c_0(r,T)}{\omega}+c_1(r,T) \omega + c_2(r,T) \omega^3+ \dots .
\label{htl_spf_exp}
\end{align}
We emphasize that Eq.~(\ref{htl_param1}) is a completely non-perturbative parametrization. The only information we take from HTL perturbation theory is the possible $\tau$ dependence, which can give rise to the limit in Eq.~(\ref{p-def}). Using this form of $\eta_r(\omega)$, the $\tau$ dependence of the integration in Eq.~(\ref{htl_param}) can be computed, and we obtain the following expression for the Wilson line correlator,
\begin{align}
\log W(r,\tau,T)=-\Omega(r,T)\tau+\frac{c_0(r,T)}{\pi} \log[\sin(\pi \tau T)]+ \nonumber\\
\sum_{l=1}^{\infty} \frac{c_{l+1}(r,T)}{\pi} (2l-1)! T^{2l} \left( \zeta\left(2l,\tau T\right)+\zeta\left(2l,1-\tau T\right) \right) \nonumber\\+\mathrm{const}(r,T).
\label{htl_m1_exp}
\end{align}
Using this HTL parametrization it is easy to check that the limit in Eq.~(\ref{p-def}) exists and the static energy is given by
\begin{equation}
E_{s}(r,T)=\lim_{t\rightarrow\infty} i\frac{\partial \log W(r,t,T)}{\partial t}=\Omega +i c_{0} T.
\end{equation}
From this equation we can identify the imaginary part of the static energy, $\Gamma(r,T)=-c_{0} T$.
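Before turning to the large Minkowski time behavior, we note that the parametrization of Eq.~(\ref{htl_m1_exp}) is easy to evaluate numerically; the sketch below truncates after the $l=1$ term and uses purely hypothetical coefficient values.
\begin{verbatim}
# Illustrative evaluation of Eq. (htl_m1_exp), truncated at l = 1.
# Omega, c0 and the l = 1 coefficient are placeholder numbers.
import numpy as np
from math import factorial
from scipy.special import zeta          # zeta(x, q): Hurwitz zeta function

T, Omega, c0, c_l1 = 0.408, 1.8, -0.5, 0.01

def log_w(tau, lmax=1):
    s = -Omega * tau + c0 / np.pi * np.log(np.sin(np.pi * tau * T))
    for l in range(1, lmax + 1):
        # only the l = 1 coefficient is kept in this sketch
        s += (c_l1 / np.pi * factorial(2 * l - 1) * T**(2 * l)
              * (zeta(2 * l, tau * T) + zeta(2 * l, 1.0 - tau * T)))
    return s
\end{verbatim}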
Note also that at large Minkowski time the higher-order terms, $l\ge 1$, do not contribute, i.e. the large Minkowski time behavior of $W(r,t, T)$ is determined by $\Omega$ and $c_0$, which in this case can be identified with the real and imaginary parts of the static energy. It has been found in \cite{Bala:2019cqu}, for quenched QCD with unimproved gauge action and large $N_{\tau}$, that a reasonable number of data points around $\tau \sim 1/(2T)$ could indeed be described by the expression in Eq.~(\ref{htl_m1_exp}), which is motivated by leading-order HTL perturbation theory. However, with $N_{\tau}=12$ the number of data points available for fitting near $\tau \sim 1/(2T)$ becomes small. If we focus on the narrow region around $\tau=1/(2 T)$, and if the higher order terms in Eqs. (\ref{htl_spf_exp}) and (\ref{htl_m1_exp}) can be neglected, we can fit the lattice results for the first cumulant with the form
\begin{align}
m_{1}(r,n_\tau=\tau/a)a=\log\left(\frac{W(r,n_\tau,N_{\tau})}{W(r,n_\tau+1,N_\tau)}\right)\nonumber\\
=\Omega(r,T)\,a-\frac{\Gamma(r,T)a N_\tau }{\pi }\,\log\left[\frac{\sin(\pi n_\tau/N_\tau)}{\sin(\pi (n_\tau+1)/N_\tau)}\right].
\label{BDfit}
\end{align}
We performed fits of our $N_{\tau}=12$ lattice data for $m_1$ for $\tau/a=5,6,7$ using Eq.~(\ref{BDfit}) to determine $\Omega(r,T)$ and $\Gamma(r,T)$. The details of these fits can be found in Appendix \ref{app:fit}. A sample fit for both unsubtracted and subtracted data is shown in Fig.~\ref{fig:BDfit}. The Ansatz also describes some data points outside the fitting range. The smaller $\tau$ and larger $\tau$ behavior are not expected to be described by the above Ansatz, as it only describes the dominant peak of the spectral function.
\begin{figure}
\centering
\includegraphics[width=8cm]{b7825_m1_unsub-eps-converted-to.pdf}
\includegraphics[width=8cm]{b7825_m1_sub-eps-converted-to.pdf}
\caption{Sample fits of the lattice results for the unsubtracted (top) and subtracted (bottom) correlators to Eq.~(\ref{BDfit}); see text.}
\label{fig:BDfit}
\end{figure}
In Fig.~\ref{fig:BDpot} we show $\Omega(r,T)$ and $\Gamma(r,T)$ from these fits as functions of $r$ at different temperatures. The peak position $\Omega(r, T)$ and width $\Gamma(r, T)$ for subtracted and unsubtracted correlators are very close to each other. This is expected, because we only consider $\tau$ values around $1/(2T)$, where the contribution of the high $\omega$ part of the spectral function is small. The peak position $\Omega(r, T)$ shows significant temperature dependence and differs from the zero temperature potential. The width of the peak, $\Gamma(r, T)$, increases with increasing $r$. Furthermore, $\Gamma(r, T)$ does not scale with the temperature in the temperature range explored by us, unlike in the case of the Gaussian fits. We also find that in the temperature region studied by us $\Gamma(r, T)$ is larger than the HTL result. Another widely studied quantity at finite temperature is the singlet free energy $F_S(r, T)$, see e.g. \cite{Bazavov:2018wmo}. As mentioned above, in leading-order HTL perturbation theory the singlet free energy and the real part of the static energy are the same. From Fig. \ref{fig:f_real} we see that even non-perturbatively the difference between $\Omega(r,T)$ and $F_S(r,T)$ is very small, while the difference between the zero temperature static energy and $F_S(r,T)$ is even smaller for $rT < 0.4$~\cite{Bazavov:2018wmo}.
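For concreteness, a minimal sketch of the three-point fit to Eq.~(\ref{BDfit}) is given below; the $m_1$ values are placeholders, not our lattice data.
\begin{verbatim}
# Sketch of the fit of a*m1 to Eq. (BDfit) at tau/a = 5, 6, 7,
# as described in the text. Placeholder input values throughout.
import numpy as np
from scipy.optimize import curve_fit

N_TAU = 12

def m1_model(n, Omega_a, Gamma_a):
    # a*m1(n) = a*Omega - (a*Gamma*N_tau/pi) log[sin(pi n/N)/sin(pi (n+1)/N)]
    return Omega_a - Gamma_a * N_TAU / np.pi * np.log(
        np.sin(np.pi * n / N_TAU) / np.sin(np.pi * (n + 1) / N_TAU))

n = np.array([5.0, 6.0, 7.0])
m1 = np.array([0.42, 0.40, 0.37])      # placeholder values of a*m1
popt, pcov = curve_fit(m1_model, n, m1, p0=[0.4, 0.05])
Omega_a, Gamma_a = popt                 # a*Omega(r,T), a*Gamma(r,T)
\end{verbatim}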
The smallness of this difference is very similar to the findings of the calculations in quenched QCD, where smeared Wilson loops have been used \cite{Bala:2019cqu}.
\begin{figure*}
\centering
\includegraphics[width=8cm]{htl_rev3-eps-converted-to.pdf}
\includegraphics[width=8cm]{htl_imv3-eps-converted-to.pdf}
\caption{The peak position (left) and the width (right) from the HTL-motivated method as a function of $r$ at different temperatures. The open (closed) symbols correspond to the results from the unsubtracted (subtracted) correlator.}
\label{fig:BDpot}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8cm]{diff-f-r-eps-converted-to.pdf}
\caption{The difference between the peak position $\Omega(r,T)$ and the singlet free energy $F_S(r,T)$ at different temperatures.}
\label{fig:f_real}
\end{figure}
It is straightforward to continue the parametrization of the Wilson line correlator given by Eq. (\ref{htl_m1_exp}) to Minkowski time and then calculate the dominant peak of the spectral function $\rho_r^{med}(\omega,T)$,
\begin{equation}
\rho_r^{med}(\omega,T)=\int_{-\infty}^{\infty} W(r,t,T)\exp(i\omega t) \,dt ,
\label{htl_mot_sp}
\end{equation}
which is plotted in Fig. \ref{fig:spf_baladatta}. The dominant peak shows qualitatively similar features to the leading-order HTL spectral function, see Fig. \ref{fig:spf_htl_T667}. We would like to mention again that the spectral feature $\rho_r^{med}(\omega,T)$ plotted in the figure is not the full spectral function $\rho_{r}(\omega, T)$, but only the dominant peak arising from the thermal static energy. $\rho_r^{med}(\omega,T)$ is quite different from the full spectral function $\rho_{r}(\omega, T)$ for $\omega$ far away from its peak at $\Omega(r,T)$. A similar situation also arises when calculating the $Q\bar Q$ potential in the hadronic phase. In that case it is well known that the dominant peak of the spectral function is a Dirac delta function, and this describes only the plateau region of $m_1$. The integration in Eq.~(\ref{htl_mot_sp}) can be performed exactly \cite{Bala:2020tdt}, and near the peak, where $\rho_r^{med}(\omega,T)$ describes the spectral function reliably, it can be approximated by
\begin{align}
\rho_r^{med}(\omega,T) &\approx \sqrt{\frac{2}{\pi}}\,\frac{\Gamma(r,T)}{(\Omega(r,T)-\omega)^2+\Gamma(r,T)^2},\nonumber\\
&\qquad |\Omega(r,T)-\omega|,\;\Gamma(r,T)\ll T.\nonumber
\end{align}
This is expected, as we already assumed that the limit in Eq.~(\ref{p-def}) exists.
\begin{figure}
\centering
\includegraphics[width=8cm]{htl_spf_revised-eps-converted-to.pdf}
\caption{Dominant peak of the spectral function at $T=408$ MeV for various distances, from the HTL-motivated method.}
\label{fig:spf_baladatta}
\end{figure}
\subsection{Determining the ground state peak via the Pad\'e rational approximation}
\label{sec:Pade}
So far we have used two methods to extract the properties of the ground state spectral peak, both of which required some form of modeling input. In the spectral fit approach it amounts to the choice of fitting with a spectral function for which the second moment is much larger than the higher moments. A Gaussian shape for the dominant peak and a number of supporting delta peaks is one possible such choice. In the Bala-Datta method of Ref.~\cite{Bala:2019cqu} the applicability of a non-standard spectral representation (see Eq. (\ref{htl_param})) is assumed. Here we attempt to extract the potential with a model-independent approach, based on the Pad\'e rational approximation.
One reason to deploy the Pad\'e is that for the Symanzik gauge action the spectral function of the Wilson line correlator (or Wilson loop) is not positive at large $\omega$ when the separation $r$ is small, as discussed in Ref. \cite{Bazavov:2019qoo}. Thus Bayesian approaches designed to operate on positive spectral functions, deployed in the past, may not be reliable on this dataset. We see that at high temperatures and small separation distances the Bayesian approaches indeed fail when applied to the raw data. In the Pad\'e approach we first transform the Euclidean correlator data into Matsubara frequency space, after which we carry out a projection of the data onto a set of rational basis functions. It is these rational functions which are then analytically continued. From the ensuing correlation functions at real frequencies, the spectral functions are obtained by taking the negative of the imaginary part, drawing on the analytic properties of the Lehmann representation
\begin{align}
\displaystyle W(r,\tilde \omega_n,T)=\int d\omega \frac{1}{ \omega-i \tilde \omega_n} \rho_r(\omega,T).\label{eq:lehmannrep}
\end{align}
The Wilson line correlator is particularly well suited in this context. Since it does not contain the cusp divergences that plague the Wilson loop, approximations that exploit analyticity, such as the Pad\'e, are expected to work well. The Pad\'e approximation is so far not commonly deployed for spectral function reconstruction (for recent work see e.g. \cite{Tripolt:2018xeo}), since it is known to require extremely precise data to yield robust results, often beyond what a lattice simulation can provide in practice. In addition, it is known that the Pad\'e does not respect the spectral representation of the input data (see e.g. \cite{Cyrol:2018xeq}), i.e. the reconstructed spectral function inserted into \cref{eq:lehmannrep} does not necessarily reproduce the input data. All direct projection methods, such as e.g.\ that of Cuniberti \cite{Burnier:2011jq}, suffer from the fact that, in contrast to the Bayesian approach, the influence of the data uncertainty on the projection is not regularized. On the other hand, the Pad\'e method has an advantage over the Bayesian approach in that it can exploit much more efficiently the smallness of the statistical error bars. In the Bayesian approach, reducing the statistical uncertainty while leaving the number of data points fixed may in practice result in increased ringing artifacts, an issue from which the Pad\'e does not suffer in the same manner. It is the exceptionally high statistics of the ensembles in this study which promises that a meaningful Pad\'e approximation can be carried out, as we will show in the following. In this study we implement the Pad\'e approximation in the form of a continued fraction according to the Schlessinger prescription \cite{Schlessinger:1968}. This particular approach amounts to a Pad\'e approximation in which the polynomial in the denominator carries at least the same order as that in the numerator, or one order higher, leading to an expression that is able to robustly reproduce functions that decay at large frequencies, which is just the case for the Wilson line correlator \cite{Burnier:2013fca}. Note that it actually amounts to an interpolation of the data, which in contrast to a fitted rational approximation does not require us to carry out a costly minimization.
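To give a concrete picture of the kind of input such an interpolation operates on, the sketch below builds mock Matsubara-frequency data from \cref{eq:lehmannrep} for a Lorentzian spectral function; the peak parameters and temperature are hypothetical toys.
\begin{verbatim}
# Sketch: mock Matsubara data via the Lehmann representation above,
# for a Lorentzian spectral function with hypothetical parameters.
import numpy as np
from scipy.integrate import quad

Omega, Gamma = 2.0, 0.3                      # toy peak position and width
rho = lambda w: Gamma / np.pi / ((w - Omega)**2 + Gamma**2)

def w_matsubara(wn):
    # 1/(w - i*wn) = (w + i*wn)/(w**2 + wn**2)
    re, _ = quad(lambda w: w * rho(w) / (w**2 + wn**2), -np.inf, np.inf)
    im, _ = quad(lambda w: wn * rho(w) / (w**2 + wn**2), -np.inf, np.inf)
    return re + 1j * im

data = [w_matsubara(wn) for wn in 2 * np.pi * 0.667 * np.arange(1, 7)]
\end{verbatim}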
We deploy the approximation on the Wilson line correlators in imaginary frequency space,
\begin{align}
\displaystyle W(r,\tilde \omega_n,T)=\sum_{j=0}^{N_\tau-1}e^{ia \tilde \omega_n j } W(r,j a,T),~\tilde \omega_n=2 \pi n/(a N_{\tau}).
\label{eq:specdec}
\end{align}
A representative example is shown in Fig. \ref{Fig:LatCorrFiniteTPade}, where we plot as discrete data points in the top panel the real and in the bottom panel the imaginary part of the correlator at $T = 408$ MeV ($\beta = 7.825$, $N_\tau=12$) at three spatial distances, $r=0.0387$ fm, $r=0.176$ fm and $r=0.296$ fm (dark blue to light blue). Note that the discrete Fourier transform (DFT) in Eq. (\ref{eq:specdec}) does not reproduce the continuum Lehmann kernel, but introduces corrections related to both the finite lattice spacing and the available grid size. Since our subsequent strategy to extract the spectral function relies on the continuum form of the Lehmann representation, we need to compensate for these artifacts, which we do in the spirit of the tree-level corrections of the lattice artifacts in the static $Q\bar Q$ energy \cite{Necco:2001xg}. I.e., instead of using the naive Fourier frequencies $\tilde \omega_n$, we assign the Matsubara correlator data points to the eigenvalues of the discrete frequency operator
\begin{align}
\tilde \omega_n \rightarrow \omega_n=2 \sin\big(\frac{\pi n}{ N_\tau}\big)/a.
\end{align}
The $\omega_n$ absorb the distortion of the frequency Brillouin zone in the UV, and we may interpret the correlator as otherwise being expressed in its continuum form (we have checked with mock data that taking into account the DFT artifacts improves the stability of the Pad\'e extraction). The deployment of the corrected frequencies also means that our correlators are plotted at non-equidistant frequency values in Fig. \ref{Fig:LatCorrFiniteTPade}. With the Wilson line correlators not being symmetric in Euclidean time, their discrete Fourier transform is in general complex valued. The complex data along the corrected imaginary frequencies $\omega_n$ are interpolated by a continued fraction $C_{N_\tau}$ of the form
\begin{align}
\nonumber&C_{N_\tau}(r,i\omega,T)=\frac{W(r,\omega_0,T)}{1+} \frac{a_0(r,T)[\omega-\omega_0]}{1+}\frac{a_1(r,T)[\omega-\omega_1]}{1+}\\
&\ldots\frac{a_{N_\tau-2}(r,T)[\omega-\omega_{N_\tau-2}]}{1+}a_{N_\tau-1}(r,T)[\omega-\omega_{N_\tau-1}].\label{Eq:ContFrac}
\end{align}
For better readability the above continued fraction is expressed in the following way: each subsequent level of the continued fraction, instead of being written in the denominator of the preceding term, is listed as a separate fraction to the left. The expression $1+$ in the denominator therefore indicates that the following fraction should be considered the next level of the continued fraction, concretely $\frac{A}{1+}\frac{B}{1+}C\equiv (A/(1+(B/(1+C))))$. The complex coefficients are determined recursively by demanding that the rational approximation exactly reproduces the input data at each available frequency, leading to the following prescription
\begin{align}
a_l(& r,T)(\omega_{l+1}-\omega_l)=-\Big\{ 1+\\
\nonumber &\frac{a_{l-1}(r,T)[\omega_{l+1}-\omega_{l-1}]}{1+}\frac{a_{l-2}(r,T)[\omega_{l+1}-\omega_{l-2}]}{1+}\cdots \\
\nonumber &\cdots\frac{a_{0}(r,T)[\omega_{l+1}-\omega_0]}{1-[W(r,\omega_0,T)/W(r,\omega_{l+1},T)]}\Big\}.
\end{align}
\begin{figure}[t]
\centering
\includegraphics[scale=0.58]{PaperFig_b7825WilsonLines_ImfreqRe2.pdf}\vspace{-0.28cm}
\includegraphics[scale=0.58]{PaperFig_b7825WilsonLines_Imfreqim.pdf}
\caption{Discrete Fourier transform of the $T>0$ Wilson line correlators at $T=408$ MeV ($\beta=7.825$, $N_\tau=12$) at three spatial separation distances, $r=0.0387$ fm, $r=0.176$ fm and $r=0.296$ fm. The top panel shows the real part, the lower panel the imaginary part, as colored symbols. The solid lines denote the Pad\'e approximation based on eight data points, which is subsequently used in the analytic continuation.}
\label{Fig:LatCorrFiniteTPade}
\end{figure}
Applying this formula directly to complex valued imaginary frequency data amounts to a generalization of the resonances-via-Pad\'e method used e.g.\ in Ref. \cite{Tripolt:2016cya} to correlators that are not symmetric in Euclidean time. The evaluation of a continued fraction is prone to the accumulation of rounding errors, which is why we compute the $a_i$'s with at least 30 digits of accuracy. This need for accuracy is independent of the amount of noise present in the underlying data and is a well-known drawback of direct projection methods \cite{Burnier:2011jq}. The outcome of the interpolation, based on a subset of eight input data points (the seven positive Matsubara frequency data points and the one at the smallest available negative frequency), is shown as solid colored lines in Fig. \ref{Fig:LatCorrFiniteTPade}. Substituting in the continued fraction the Euclidean frequencies by their Minkowski counterparts, $C_{N_\tau}(r,i\omega,T)\to C_{N_\tau}(r,\omega,T)$, we explicitly implement the analytic continuation. There are two equivalent ways to proceed. We may either compute the spectral function from $C_{N_\tau}(r,\omega,T)$ via the real-time relation
\begin{align}
\rho_r(\omega,T)\approx-\frac{1}{\pi}{\rm Im}[ C_{N_\tau}(r,\omega,T) ],
\end{align}
and carry out an analysis similar to that of previous studies based on Bayesian spectral reconstructions. In that approach we locate the lowest lying peak structure in $\rho_r(\omega,T)$ and fit it with a skewed Lorentzian, embedded in a polynomial background of the form derived in Ref. \cite{Burnier:2012az}
\begin{align}
&\rho_r(\omega,T)\propto \\
\notag &\frac{|\Gamma(r,T)|{\rm cos}[{\rm Re}{\sigma_\infty}(r,T)]-(\Omega(r)-\omega){\rm sin}[{\rm Re} {\sigma_\infty}(r,T)]}{ \Gamma(r,T)^2+ (\Omega(r)-\omega)^2}\label{skewedrho}\\
\notag&+{c_0}(r,T)+{c_1}(r,T)(\Omega(r,T)-\omega) \\
\notag &+{c_2}(r,T)(\Omega(r,T)-\omega)^2\ldots.
\end{align}
On the other hand, we may ask whether the information encoded in the dominant spectral peak can be read off from the real-time correlator directly in a simpler fashion. Indeed, the peaks of the spectral function are but a projection of the pole structure of the underlying correlation function. Since we are in possession of the rational function approximation of the correlator, we can compute the pole structure explicitly from the roots of the polynomial in the denominator. The number of poles present obviously depends on the degree of the Pad\'e interpolation, but we find that varying the number of input points does not change the fact that one of the poles lies significantly closer to the real frequency axis than all other poles. This pole in turn leads to the dominant peak structure seen in the spectral function.
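A minimal sketch of such a continued-fraction interpolation is given below; it is a textbook Vidberg-Serene-type variant of the Schlessinger construction, written for illustration only (our production implementation uses extended-precision arithmetic and the corrected frequencies discussed above).
\begin{verbatim}
# Sketch of a Schlessinger-type continued-fraction interpolation.
# x: interpolation nodes (for us, the corrected frequencies omega_n);
# f: complex correlator data at those nodes. Toy data below.
import numpy as np

def cfrac_coeffs(x, f):
    n = len(x)
    g = np.array(f, dtype=complex)
    a = np.zeros(n, dtype=complex)
    a[0] = g[0]
    for p in range(1, n):
        # g_p(x_i) = (g_{p-1}(x_{p-1}) - g_{p-1}(x_i))
        #            / ((x_i - x_{p-1}) * g_{p-1}(x_i))
        g[p:] = (a[p - 1] - g[p:]) / ((x[p:] - x[p - 1]) * g[p:])
        a[p] = g[p]
    return a

def cfrac_eval(x, a, z):
    # evaluate the continued fraction from the innermost level outwards
    val = 1.0 + 0j
    for p in range(len(a) - 1, 0, -1):
        val = 1.0 + a[p] * (z - x[p - 1]) / val
    return a[0] / val

# toy test: data with poles at 2 - 0.3i and 4 - i; the spectral function
# is read off just above the real axis, rho ~ -Im C(w + i*eps)/pi
x = np.array([0.5, 1.0, 1.5, 2.0])
f = 1.0 / (x - (2.0 - 0.3j)) + 0.5 / (x - (4.0 - 1.0j))
a = cfrac_coeffs(x, f)
w = np.linspace(0.0, 4.0, 201)
rho = -np.imag([cfrac_eval(x, a, wi + 1e-6j) for wi in w]) / np.pi
\end{verbatim}
By construction, evaluating the fraction at any input node reproduces the corresponding data value exactly, which is the defining property of the Schlessinger prescription.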
\begin{figure}[t]
\centering
\includegraphics[scale=0.58]{PaperFig_ReV_HTL.pdf}\vspace{-0.28cm}
\includegraphics[scale=0.58]{PaperFig_ImV_HTL.pdf}\vspace{-0.28cm}
\caption{Extraction of the spectral position $\Omega$ and width $\Gamma$ of the dominant peak, based on Hard Thermal Loop mock data with $dD/D = 10^{-2}$ and $dD/D = 10^{-3}$ at $T=667$ MeV. The error bars are obtained from Jackknife resampling.}
\label{Fig:HTL_PotPade}
\end{figure}
We have checked that both approaches give numerically consistent results for the position and width of the dominant spectral peak structure, and therefore in the remainder of the study we will analyze the spectrum directly via the poles. In order to ascertain whether the Pad\'e is a viable method for the exploration of spectral structures in practice, we must assess its reliability in a realistic test scenario. To this end we carry out mock data tests based on HTL correlation functions. As a starting point we deploy the ideal correlators computed for $T=667$ MeV, discretized on $N_\tau=12$ points. This ideal data is distorted by Gaussian noise. Here 1000 samples of the correlator are generated, such that their mean exhibits constant relative uncertainties of either $\Delta D/D=10^{-2}$ or $\Delta D/D=10^{-3}$. Since the data in our lattice study are precise down to the sub-percent level, the choice of a one-percent relative error corresponds to a worst-case scenario for the Pad\'e analysis, while the one-permille error represents the best-possible scenario. We carry out the Pad\'e interpolation and pole analysis based on a selection of eight noisy input data points, starting with the correlator at positive Matsubara frequencies. We have checked that adding or removing two data points does not significantly change the results, and that a reordering of the data points in the construction of the continued fraction does not have any relevant effect. Note that when adding more and more data points in the construction of the Pad\'e, it will eventually become unreliable. The reason is that the redundancy of the Matsubara input data (symmetry of ${\rm Re}[W]$, anti-symmetry of ${\rm Im}[W]$) requires subtle cancellations to take place in the continued fraction. The optimal choice for stability we found lies at using $N_\tau/2+2$ data points. The real and imaginary parts of the dominant pole are plotted as colored data points in the top and bottom panels of Fig. \ref{Fig:HTL_PotPade}, respectively. The analytically known values for the peak position $\Omega$ and its width $\Gamma$ are shown as gray solid lines. The error bars here arise from a combination of the Jackknife uncertainty, the differences when changing the number of data points by one or two, as well as the reordering in the construction of the continued fraction.
\begin{figure}[t]
\centering
\includegraphics[scale=.38]{HTLT6667spectra.pdf}\vspace{-0.28cm}
\caption{Representative selection of spectral functions extracted from Hard Thermal Loop ideal data with $dD/D = 10^{-3}$ at $r =0.09$, $0.11$ and $0.27$ fm, using the Pad\'e (colored lines) vs. the analytic HTL result (gray lines).}
\label{Fig:HTL_SpectraPade}
\end{figure}
The HTL Pad\'e pole analysis is very encouraging, in that even under the adverse circumstances of a relatively large statistical uncertainty of $\Delta D/D=10^{-2}$ it allows us to recover the position of the dominant peak well within uncertainties. For $\Delta D/D=10^{-3}$ the results are spot-on. In Fig.~
In Fig. \ref{Fig:HTL_SpectraPade} we have also computed several spectral functions for $\Delta D/D=10^{-3}$. We can see that the peak position is very well estimated. As expected, the determination of the spectral width $\Gamma$, on the other hand, is much more difficult, and for the small number of data points present here ($N_\tau=12$) the results are not yet robust at $\Delta D/D=10^{-2}$ and tend to significantly underestimate the true value even for $\Delta D/D=10^{-3}$. Thus for the application to actual lattice data we will focus on extracting $\Omega$ in the following. Having checked the limitations of the Pad\'e method in a non-trivial realistic test case, we proceed to apply it to our HISQ lattice data. We have carried out the pole analysis for Pad\'e interpolations based on different numbers of input data points. On $N_\tau=12$ lattices the results are unaffected by changing between seven and eleven input points, and we arbitrarily decide to show the results based on eight. The uncertainty budget represented by the error bars includes the Jackknife errors, as well as the variation due to a change in the ordering when composing the continued fraction. \begin{figure} \centering \includegraphics[scale=0.58]{PaperFig_b7825Spectra.pdf}\vspace{-0.28cm} \caption{Representative spectral functions obtained from the Pad\'e interpolation at $T=407$~MeV ($\beta = 7.825$, $N_\tau = 12$) for different separation distances. A single well-defined peak structure of skewed Lorentzian form emerges from the analysis.} \label{Fig:PadeSpectra} \end{figure} For the $N_\tau=12$ lattices we investigated, the Pad\'e interpolation yields one dominant pole close to the real axis, manifesting itself as a well-defined skewed Lorentzian peak in the spectral function, as shown in Fig. \ref{Fig:PadeSpectra}. Reading off the values of the real part of the pole as an estimate for $\Omega$, we obtain the values plotted in Fig. \ref{Fig:RealVPadeHISQ}. The corresponding values for the imaginary part as an estimate of $\Gamma$ are shown in Fig. \ref{Fig:ImVPadeHISQ}. Since the mock data analysis cautions us about the quantitative reliability of the extraction of $\Gamma$, we present its values here simply for completeness. We have carried out the analysis on both the subtracted and unsubtracted correlators (see \cref{sec:corrmom}) and found that the subtracted correlators are computed to a statistical precision which unfortunately is not high enough for the Pad\'e to extract the value of $\Gamma$ with statistical reliability. \begin{figure}[t] \centering \includegraphics[scale=0.58]{PaperFig_ReV_lattice_unsub.pdf} \includegraphics[scale=0.58]{PaperFig_ReV_lattice_sub.pdf} \caption{$\Omega$ as a function of separation distance for different temperatures obtained from a Pad\'e pole analysis on $N_\tau = 12$. The figure on the top is obtained by using the unsubtracted correlator and the figure on the bottom is obtained using the subtracted correlator.} \label{Fig:RealVPadeHISQ} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.58]{PaperFig_ImV_lattice_unsub.pdf} \caption{ Width $\Gamma$ as a function of separation distance for different temperatures obtained from the Pad\'e pole analysis for $N_\tau = 12$.
The analysis is done using unsubtracted correlators.} \label{Fig:ImVPadeHISQ} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.58]{ReV_comp7825_pade.pdf} \caption{Comparison of the extracted $\Omega$ using subtracted and unsubtracted correlators in the Pad\'e pole analysis with the $T=0$ effective mass and colour singlet free energy at $T=408$~MeV ($\beta = 7.825$, $N_\tau = 12$).} \label{Fig:CompVPadeHISQ7825} \end{figure} The values the Pad\'e analysis yields for $\Omega$ on the HISQ Wilson line correlators are similar to the results obtained from the model spectral function fits deployed in section \ref{sec:potfit}. We find that the values do not show any significant changes over a large temperature range. In Fig. \ref{Fig:CompVPadeHISQ7825} we pick out the results at $T=407$~MeV for a closer inspection. We plot $\Omega$, based on the subtracted and unsubtracted Euclidean correlator Pad\'e analysis at $T>0$ (orange and dark blue data points), along with the $T=0$ static energy (light blue data points) and the colour singlet free energy. The results obtained are in stark contrast to those of the method by Bala and Datta, in which at temperatures inside the QGP phase one does observe a deviation from the linear rise present in the hadronic phase. Our Pad\'e results also stand in stark contrast to previous analyses of the spectral functions of Wilson lines from both quenched \cite{Burnier:2013nla,Burnier:2016mxc} and dynamical QCD \cite{Burnier:2014ssa,Burnier:2015tda} based on Bayesian methods. There a discernible change of $\Omega$ with temperature was found, more similar to the results of the HTL-motivated method in this study. A previous Pad\'e analysis of a subset of the HISQ data was discussed in Ref. \cite{Petreczky:2018xuh}. That analysis showed relatively large uncertainties, arising from the fact that fewer statistics were available and that the improved frequencies were not deployed. Within their sizable uncertainties, those results were consistent with the Bayesian studies but within $2\sigma$ would also encompass the result obtained here. \subsection{Determining the ground state peak via Bayesian reconstruction} \label{sec:Bayes} \begin{figure}[t] \centering \includegraphics[scale=0.58]{PaperFig_ReV_HTL_BR.pdf} \includegraphics[scale=0.58]{PaperFig_ImV_HTL_BR2.pdf}\vspace{-0.28cm} \caption{Extraction of $\Omega$ and width $\Gamma$ for Hard Thermal Loop ideal data for $dD/D = 10^{-2}$ and $dD/D = 10^{-3}$ at $T=667$~MeV using the BR method. The error bars are obtained from Jackknife resampling.} \label{Fig:HTL_PotBR} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.58]{HTLT6667spectraBR.pdf}\vspace{-0.28cm} \caption{A selection of spectral functions extracted from Hard Thermal Loop data for $dD/D = 10^{-3}$ at $r = 0.01,0.09,0.11$ and $0.27$ fm respectively (dark blue to light blue) using the Bayesian Reconstruction vs. the analytic result (solid gray).} \label{Fig:HTL_SpectraBR} \end{figure} The last type of method to be deployed in the study of the Wilson line spectral function is the Bayesian spectral reconstruction. While the most well-known variant, the Maximum Entropy Method \cite{Asakawa:2000tr}, faces challenges, as it does not easily reproduce Lorentzian structures encoded in peaks, the more recently developed Bayesian Reconstruction (BR) method \cite{Burnier:2013nla} has been deployed successfully in the extraction of such structures, both from mock data as well as from lattice QCD data.
All Bayesian methods exploit Bayes' theorem \begin{align} P[\rho|D,I]\propto P[D|\rho,I]P[\rho|I] = {\rm exp}[-L+\alpha S_{\rm BR}], \end{align} to systematically regularize the inversion problem. They amend the simulation data $D$ by additional so-called prior information $I$. The posterior probability $P[\rho|D,I]$ for a test function $\rho$ denotes the probability for $\rho$ to be the correct spectrum, given simulation data and prior information, which in turn is written as the product of the likelihood $P[D|\rho,I]$ and the prior probability $P[\rho|I]$. The former states how compatible $\rho$ is with the simulation data, and is nothing but the usual quadratic distance functional used in $\chi^2$ fitting \begin{align} L=\frac{1}{2}\sum_{i,j=1}^{N_d}(D_i-D^\rho_i)C_{ij}^{-1}(D_j-D^\rho_j). \end{align} Here $C_{ij}$ denotes the standard unbiased covariance matrix. It is amended by the prior probability $P[\rho|I]={\rm exp}[\alpha S_{\rm BR}]$, which acts as a regulator to the many flat directions of the likelihood functional \begin{align} S_{\rm BR}=\int d\omega \big( 1- \frac{\rho(\omega)}{m(\omega)} + {\rm log}\big[ \frac{\rho(\omega)}{m(\omega)} \big]\big). \end{align} The function $m(\omega)$ denotes the default model and by definition corresponds to the correct spectrum in the absence of data. In this work, we choose a default model that implements a $1/\omega$ falloff at large frequencies, $m(\omega)\propto 1/(a\omega-a\omega_{\rm min}+1)$. When estimating the error budget for the spectral features we include the variation between results based on different default models, including the constant one, as well as $m\propto \omega, \omega^2$ and $m\propto 1/(a\omega-a\omega_{\rm min}+1)^2$. In the original formulation of the BR method, the hyperparameter $\alpha$, which weighs the influence of prior information and data, is marginalized from the posterior by assuming no knowledge of its value, $P[\alpha]=1$, so that \begin{align} P[\rho|D,I,m]\propto P[D|\rho,I]\int_0^{\infty} d\alpha P[\rho | m,\alpha] P[\alpha]. \label{Eq:IntOutAlpha} \end{align} In this study we deploy a different handling of $\alpha$, which is akin to the Morozov criterion in classical regularization, i.e.\ we simply tune the hyperparameter such that the likelihood takes on the value $L=N_\tau/2$. The motivation for this choice is to prevent very large or very small values of $\alpha$ from contributing to the end result, as would be the case when integrating over $P[\alpha]=1$. In turn the occurrence of ringing artifacts is expected to be diminished. In the present study no significant differences between the different choices of $\alpha$ handling were found. After specifying the likelihood and prior, we numerically search for the most probable spectrum in the Bayesian sense by locating the unique extremum of the posterior $P[\rho|D,I,m]$ via a quasi-Newton optimization algorithm, the LBFGS method. In practice we resolve the spectrum along $N_\omega=1000$ points in a frequency interval of $\omega a = [0:15]$. We take into account all Euclidean data points except those at $\tau=0$ and $\tau=1/T$. This in particular excludes the point from which the colour singlet free energies are defined.
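The numerical search can be summarized compactly. The sketch below is a schematic BR-type reconstruction under simplifying assumptions of our own (an exponential kernel relating correlator and spectral function, a fixed $\alpha$, and positivity enforced by optimizing $\log\rho$); it is not the production implementation:

```python
import numpy as np
from scipy.optimize import minimize

def br_reconstruct(tau, D, Cinv, omega, m, alpha):
    """Maximize P[rho|D,I,m] ~ exp(-L + alpha*S_BR) for a fixed alpha.
    Kernel assumption (illustrative): D(tau) = int domega e^{-omega tau} rho."""
    dw = omega[1] - omega[0]
    K = np.exp(-np.outer(tau, omega)) * dw           # discretized kernel

    def neg_log_posterior(x):                        # x = log(rho) keeps rho > 0
        rho = np.exp(x)
        r = D - K @ rho
        L = 0.5 * r @ Cinv @ r                       # likelihood functional
        S = np.sum(1.0 - rho / m + np.log(rho / m)) * dw   # BR prior (S <= 0)
        return L - alpha * S

    res = minimize(neg_log_posterior, np.log(m), method="L-BFGS-B")
    return np.exp(res.x)
```

The Morozov-like handling of $\alpha$ then amounts to repeating this minimization inside a bisection until the likelihood part evaluates to $L \approx N_\tau/2$, and the default model dependence is assessed by rerunning with the various choices of $m(\omega)$ listed above.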
In previous studies it has been observed that the BR method is well suited to extract strongly peaked spectral features with high accuracy. On the other hand, if only a small number of data points are available, $O(10)$, the regulator $S_{\rm BR}$ is unable to avoid ringing artifacts in the reconstruction if the encoded spectrum contains broad structures. Improving the regulator functional to retain its resolving capability while preventing ringing is work in progress (see e.g. \cite{Fischer:2017kbq}). We can benchmark the reliability of the spectral reconstruction at high temperatures by using the non-trivial mock data computed in hard thermal loop perturbation theory (HTL), similarly to the Pad\'e reconstruction. In this case we add Gaussian noise with constant relative error $\Delta D/D=\kappa={\rm const.}$ directly onto the Euclidean data and supply it to the reconstruction algorithm. As shown in Fig. \ref{Fig:HTL_PotBR}, we find that for the worst-case test with $\kappa=10^{-2}$ we are able to reproduce the position of the lowest lying peak with higher precision than the Pad\'e allows. For the width $\Gamma$, the BR results at $\kappa=10^{-2}$ are as disappointing as those of the Pad\'e method. However, if we go to the best-case scenario of $\kappa=10^{-3}$, the BR method shows its strength in being able to recover the correct imaginary part much more closely than the Pad\'e. It is important to note that in none of the HTL mock data tests have we encountered any signs of ringing artifacts, which, combined with the good performance regarding the imaginary part of the HTL potential, bodes well for the application of the BR method to the task of extracting the spectral features from data which features positive spectral functions. A representative selection of reconstructed spectral functions obtained with the BR method from HTL mock data is shown in Fig. \ref{Fig:HTL_SpectraBR}. \begin{figure}[t] \centering \includegraphics[scale=0.58]{spectrum_compare_b6740.pdf}\vspace{-0.28cm} \caption{Comparison of reconstructed spectra using the Pad\'e (grey) and BR (blue) methods at $T=151$~MeV ($\beta = 6.740$, $N_\tau =12$) at different separation distances $r = 0.32, 0.64, 0.96$ and $1.28$ fm.} \label{Fig:Pade_BR_spectra} \end{figure} At low temperatures, e.g. at $T=151$~MeV, the Euclidean correlators do not yet show signs of positivity violation (i.e. we obtain effective masses that are monotonic in Euclidean time) and the BR method succeeds in reconstructing their spectral function. By construction, the result reproduces the input Euclidean data points within their statistical errors. A selection of these spectra for $r = 0.32, 0.64, 0.96$ and $1.28$ fm is shown in Fig. \ref{Fig:Pade_BR_spectra} (solid dark blue to lighter blue) compared to the outcome of the Pad\'e reconstruction (gray solid). We find important differences between the two approaches. The BR method reconstructions, as expected from the effective mass analysis, show a single well-defined lowest lying peak. Towards the origin that peak decays rapidly in an exponential fashion, qualitatively similar to the behavior observed in HTL spectral functions. In contrast, the Pad\'e reconstruction assigns significant weight to the low frequency region. This difference is among the reasons why the spectral function of the Pad\'e reconstruction does not fulfill the spectral decomposition of the original Euclidean data, a known drawback of the Pad\'e reconstruction method. At frequencies larger than its maximum, the BR spectral function shows a tail, which eventually behaves as $\propto 1/\omega$ per choice of the default model.
We have checked that changing the default model to different powers $n$ as $m\propto \omega^{n}$ does not change the peak structure significantly. The central peak obtained by the BR method agrees in position with the Pad\'e result at small separation distances, but the Pad\'e eventually seems to smear out significantly, with the center of the bump lying at a higher frequency than the BR spectral peak. Let us compare $\Omega$ obtained from the Pad\'e pole analysis (magenta) and the skewed Breit-Wigner fit of the BR spectral reconstruction (light blue) in Fig. \ref{Fig:ReV_Comp}. In addition we provide the values of the peak position $\Omega$ at $T=0$ in green. Since $T=151$~MeV still lies close to the crossover transition, the effect of the medium on the static potential is expected to be weak. We find this intuition reflected in the agreement between the zero temperature static energy and the BR result for $\Omega$. Interestingly all three results agree up to around $r=0.3$fm. However, as we consider larger distances, we find that the Pad\'e reconstruction shows a systematic tendency to lie above the zero temperature static energy. The effect becomes significant around $r=0.5$fm and remains visible up to $r=0.85$fm, beyond which the signal of the $T=0$ static energy is lost. The BR result lies much closer to the $T=0$ effective masses over the whole range of distances. The success of the BR reconstruction at $T=151$~MeV tells us that the data is compatible with a dominant skewed Breit-Wigner peak structure in the spectral function. \begin{figure}[t] \centering \includegraphics[scale=0.58]{ReV_comp6740_2.pdf}\vspace{-0.28cm} \caption{Comparison of $\Omega$ using the Pad\'e, BR and Gaussian fit methods at $\beta = 6.740$ with $N_\tau =12$ ($T=151$ MeV). The $T=0$ potential for the same $\beta$ is given as grey data points.} \label{Fig:ReV_Comp} \end{figure} At higher temperatures the BR method cannot be reliably applied to the extraction of spectral functions from the raw correlators, due to the presence of non-positivity in the underlying spectral functions. At $T=407$~MeV, for example, the effective masses at small distances explicitly show non-monotonic behavior \cite{Bazavov:2018wmo}. However, the spectral density may not be positive definite even if the effective masses decrease monotonically. We see that the BR method also fails to converge at intermediate distances. While in principle we could proceed by investigating the UV-subtracted finite temperature correlators, we have found that the statistical uncertainties introduced by the $T=0$ subtraction dominate over those inherent in the $T>0$ data, thus preventing a precision analysis of the spectral function at higher temperatures. \section{Conclusions} \label{sec:conclusion} In the first part of the study we have investigated the Wilson line correlation functions obtained from the numerical simulations directly in imaginary time, computing the first three cumulants, as defined in Eq.~(\ref{eq:m_n}), of the imaginary-time correlation functions. The n$^\mathrm{th}$ cumulant of the imaginary-time correlation function at $\tau=0$ is equivalent to the n$^\mathrm{th}$ moment of the corresponding spectral function (see Eq.~(\ref{eq:ETwilsonspecdec})), assuming that the moment of the underlying spectral function is finite. We found that they differ beyond statistical errors from the predictions of resummed HTL perturbation theory at all temperatures investigated, including the highest temperature, $T=1938$ MeV.
Furthermore, even at a qualitative level there is a difference between the lattice results and the HTL result. The first cumulant calculated in leading order HTL perturbation theory is antisymmetric around $\tau=1/(2T)$, but the lattice results do not show such a feature except near $\tau\sim1/(2T)$. In addition we checked that our datasets allow for a meaningful determination of up to the third cumulant of the correlation function in Euclidean time. For higher moments the signal-to-noise ratio does not suffice. In turn we understand that our input data will be able to constrain spectral information only within the limitations placed by these three moments. The second part of the study was concerned with extracting the position $\Omega$ and width $\Gamma$ of the dominant spectral peak structure encoded in the Wilson line correlators. We deployed four different approaches: spectral function model fits where the dominant peak is described by a Gaussian, the HTL-inspired fit of Bala and Datta, the Pad\'e approximation and, where positivity allowed, the Bayesian BR method. In essence each of the four methods introduces certain prior information in order to regularize the ill-posed inversion problem to gain access to the spectral function. It turns out that the Euclidean data scrutinized in the first part of our study is amenable to different possible hypotheses, which in turn lead to different outcomes for $\Omega$ and $\Gamma$. The spectral function fits assume that the high energy part of the spectral function has negligible temperature dependence, and that the observed temperature dependence of the Wilson line correlators is determined by the dominant peak structure. Since the correlator is found to have a second cumulant much larger than its higher cumulants, a Gaussian for the dominant peak and a single delta function for the low energy tail are the simplest permissible choices for parametrizing the data. The Gaussian spectral function model shows a value of $\Omega$ that is virtually independent of temperature and a width that scales trivially with the temperature. In order to extract the values of $\Omega$ via the HTL-inspired fit, one assumes that the correlation functions are amenable to a certain non-standard spectral decomposition, similar to the one encountered in leading-order HTL perturbation theory. This spectral decomposition leads to a first cumulant that is anti-symmetric around $\tau=1/(2T)$. Because of the small $N_\tau$, the fits can be performed only in a small region around $\tau=1/(2T)$. This fit yields an $\Omega$ that shows a clear temperature dependence and signs of asymptotic flattening in the QGP phase. The width that the method computes shows a non-trivial scaling with the temperature, which is weaker than linear in the temperature. The third method we deployed is the Pad\'e rational approximation. The only assumption it makes is that the correlation function represents an analytic function. However it suffers from the drawback that its outcome is known to violate the spectral decomposition of the input data, i.e.\ the Pad\'e spectrum, when reinserted into the Lehmann representation, does not reproduce the original correlator. We have, however, tested the Pad\'e method under non-trivial settings in HTL perturbation theory and found that for the temperature and spatial separation distances probed, the position of the lowest lying peak structure was well reproduced.
Applied to genuine lattice data we obtained results that were robust under changes in the number of input points and a reordering when constructing the Pad\'e approximation of the Matsubara-domain input data. The extraction of $\Omega$ based on the Pad\'e method yields values which, similarly to the Gaussian model fit, show virtually no temperature dependence. While the mock data tests tell us to take the outcome for the width with a significant grain of salt, we find small statistical error bars and a behavior that qualitatively agrees with that of the Bala-Datta method, i.e.\ $\Gamma$ scales weaker than linearly with the temperature. Last but not least we also deployed the Bayesian BR method, where positivity allowed. The BR method has been extensively tested on HTL mock data and has been shown to outperform other Bayesian methods, such as the MEM, in the accurate reconstruction of the lowest lying peak from Wilson line correlators, a finding reproduced in this study. As the BR method is designed to reproduce the Euclidean input data within their uncertainty, its reconstructed spectra denote a valid hypothesis for the actual underlying spectrum. The BR method possesses an explicit default model dependence, which, however, can be and is assessed by repeating the reconstructions for different functional forms of the default model. And while the BR method is known to be susceptible to ringing artifacts, as its regulator is the weakest among the reconstruction methods on the market, no signs of ringing have been observed in this study, neither in the HTL mock tests nor in the reconstruction of genuine lattice data. As a crucial limitation in the context of the current study, the BR method is only applicable to positive definite spectral functions. If the effective masses show non-monotonicity, it indicates that the BR method cannot be deployed. However, even if the effective masses are monotonic, positivity violation may persist, which explains why the BR method fails to converge successfully for higher temperatures on the raw Euclidean correlators. The extraction of $\Omega$ based on the BR method at low temperatures such as $T=151$~MeV yields a real part which agrees well with the static energy from (multi-state) exponential fits, also applicable on those lattices. We find that the spectral functions show well-defined Breit-Wigner-like peaks, which get exponentially cut off close to the origin, similar to what is seen in HTL perturbation theory at much higher temperatures. Comparing the BR result to the Pad\'e, we find that the Pad\'e incorrectly assigns too much weight to the low frequency regime and at the same time produces a less and less well-defined peak, which is consistently located at a higher position than the BR peak. The agreement between the BR result and the effective masses, and the tension with the Pad\'e method starting around $r=0.5$fm, seem to indicate that the Pad\'e tends to overestimate the values of $\Omega$ when applied to our lattice data. The comparison of the different methods of spectral reconstruction in terms of the position $\Omega$ of the dominant peak and its width $\Gamma$ is summarized in Fig. \ref{fig:ReV_conc} and Fig. \ref{fig:ImV_conc}, respectively, for three temperatures: $T=151$ MeV (just below the chiral crossover), $T=199$ MeV (the typical temperature most relevant for RHIC), and $T=408$ MeV, deep in the QGP. The present study sheds new light on the extraction of $\Omega$ and $\Gamma$.
While different methods often lead to quantitatively different results, some general features are the same. The width $\Gamma$ is significant compared to the temperature scale and increases with distance $r$ for all temperatures. In fact, for the lowest temperature all methods give consistent results for $\Gamma$. For temperatures $150~{\rm MeV} < T < 200~{\rm MeV}$, the Gaussian fits and HTL fits lead to similar widths for large $r$, while at small $r$ the HTL fit gives a smaller width. The Pad\'e method always gives a smaller $\Gamma$ than the Gaussian and HTL fits at large $r$, but agrees with the HTL result at small $r$, cf. Fig. \ref{fig:ImV_conc}. The $r$ dependence of the peak position turns out to be similar for the Gaussian fits and the Pad\'e method, indicating an apparent absence of screening effects. Furthermore, in the temperature range $150~{\rm MeV} < T < 200~{\rm MeV}$ and at intermediate distances, all the explored methods give a peak position that is slightly larger than the singlet free energy, see Fig. \ref{fig:ReV_conc}. For these temperatures, which are the most relevant ones for RHIC, the spread of the results is not large enough to have an impact on phenomenological studies. At higher temperatures, which are of interest for quarkonium phenomenology in heavy ion collisions at the LHC, our results are inconclusive at present, and lattice calculations with larger $N_{\tau}$ and smaller statistical errors are needed. Increasing the temporal extent of the lattice will be possible in the coming years. At the same time the accumulation of statistics at $T=0$ will also enable a high precision subtraction, which in turn will enable us to use the BR method above the crossover temperature. All data from our calculations, presented in the figures of this paper, can be found in \cite{data}. \begin{figure} \centering \includegraphics[scale=0.58]{PaperFig_ReV_6740.pdf}\vspace{-0.28cm} \includegraphics[scale=0.58]{PaperFig_ReV_7030.pdf}\vspace{-0.28cm} \includegraphics[scale=0.58]{PaperFig_ReV_7825.pdf}\vspace{-0.28cm} \caption{Comparison of $\Omega$ as a function of separation distance for three different temperatures, 151, 199 and 408 MeV, obtained from the different methods discussed in the text. We also show the $T=0$ potential (dark grey) for all temperatures and the free energy (light grey) for the highest temperature (408 MeV).} \label{fig:ReV_conc} \end{figure} \begin{figure} \centering \includegraphics[scale=0.58]{PaperFig_ImV_6740.pdf}\vspace{-0.28cm} \includegraphics[scale=0.58]{PaperFig_ImV_7030.pdf}\vspace{-0.28cm} \includegraphics[scale=0.58]{PaperFig_ImV_7825.pdf}\vspace{-0.28cm} \caption{Comparison of $\Gamma/T$ as a function of separation distance for three different temperatures, 151, 199 and 408 MeV, obtained from the different methods discussed in the text. } \label{fig:ImV_conc} \end{figure} \section*{Acknowledgement} This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through the (i) Contract No. DE-SC0012704, and (ii) Scientific Discovery through Advanced Computing (SciDAC) award Computing the Properties of Matter with Leadership Computing Resources. (iii) R.L., G.P. and A.R. acknowledge funding by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. (iv) J.H.W.'s research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Projektnummer 417533893/GRK2575 ``Rethinking Quantum Field Theory''. (v) D.B. and O.K.
acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 ``Strong-interaction matter under extreme conditions'' -- project number 315477589 -- TRR 211. This research used awards of computer time provided by: (i) The INCITE and ALCC programs at Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility operated under Contract No. DE-AC05-00OR22725. (ii) The National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. (iii) The PRACE award on JUWELS at GCS@FZJ, Germany. (iv) The facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy. (v) UNINETT Sigma2 -- the National Infrastructure for High Performance Computing and Data Storage in Norway -- under project NN9578K-QCDrtX ``Real-time dynamics of nuclear matter under extreme conditions''. The computations in this work were performed using the SIMULATeQCD suite \cite{Mazur:2021zgi,Altenkort:2021fqk}.
\section{Introduction} Attosecond science \cite{attoscience} is rapidly developing nowadays thanks to laser-based techniques such as chirped-pulse amplification and high-harmonic generation. Different schemes have also been proposed for the generation of attosecond X-ray pulses in free electron lasers \cite{attofel-oc,oc-2004-2,atto-b,atto-e,atto-f,prstab-2006-2,ding,huang-24as}. Many of these schemes make use of a few-cycle intense laser pulse to modulate the electron energy in a short undulator, and then to make only a short slice (a fraction of the wavelength) lase efficiently in a SASE undulator. In particular, in the chirp-taper scheme \cite{prstab-2006-2}, the slice with the strongest energy chirp is selected for lasing by application of a strong reverse undulator taper that compensates the FEL gain degradation within that slice. The lasing in the rest of the bunch is strongly suppressed due to the uncompensated reverse taper. Creation of a short lasing slice can also be done without using a laser. In particular, nonlinear compression of multi-GeV electron beams \cite{shuang} and self-modulation in a wiggler of a bunch with a special temporal shape \cite{duris} made it possible to generate few-hundred-attosecond-long pulses at the Linac Coherent Light Source (LCLS). However, the creation of sub-femtosecond features in the electron bunch at lower electron energies ($\simeq 1$ GeV) is problematic. Typically, the pulse duration in SASE-based short-pulse schemes is limited by the FEL coherence time \cite{book}. For hard X-ray FELs, the coherence time is usually in the few-hundred-attosecond range. For such a case an adequate choice of laser could be a Ti:sapphire system providing a few mJ within 5 fs (FWHM) with the central wavelength at 800 nm. However, for the XUV and soft X-ray regimes the coherence time is in the femtosecond range, and a longer wavelength laser is needed \cite{fawley} to match the lasing slice duration and the coherence time. In this paper a simple method is developed \footnote{A similar concept was proposed by the author as an option for the FLASH upgrade \cite{FL-CDR} but was not studied.} to overcome this barrier and to produce XUV and soft X-ray pulses that are much shorter than the FEL coherence time, and can be as short as a few hundred attoseconds. \section{Principles of operation} \begin{figure}[tb] \includegraphics[width=1.0\textwidth]{Atto-scheme.eps} \caption{ Conceptual scheme for generation of attosecond pulses. The dashed rectangle illustrates a particular realization of the suppression (separation) of the radiation background from the SASE undulator. } \label{fig:atto-scheme} \end{figure} A conceptual representation of the attosecond scheme is shown in Fig.~\ref{fig:atto-scheme}. A few-cycle laser pulse is used to modulate the central part of an electron bunch in energy in a short (typically, two-period) modulator undulator. The wavelength $\lambda_L$ is chosen such that the lasing slice is much shorter than the FEL coherence length. In particular, for the generation of attosecond pulses in the XUV and soft X-ray regimes one can consider a Ti:sapphire laser. A typical shape of the energy modulation after the modulator undulator is shown in Fig.~\ref{fig:mod4MeV}. Then the bunch enters a long SASE undulator tuned to a wavelength $\lambda$. The undulator is operated in the same way as in the classical chirp-taper scheme \cite{prstab-2006-2}: it is reverse-tapered to compensate for the energy chirp within the central slice (positioned at $t=0$ in Fig.~\ref{fig:mod4MeV}).
In this way the FEL gain degradation within this slice is avoided, and the amplification proceeds up to the onset of saturation. The rest of the bunch suffers from the uncompensated reverse taper, and the lasing is strongly suppressed (except maybe for the two satellites positioned around $t = \pm 2.7$ fs in Fig.~\ref{fig:mod4MeV} with a negative time derivative). The difference with the standard scheme is that now the central lasing slice is much shorter than the FEL coherence time. The distribution of bunching (density modulation amplitude) is rather narrow and is localized at the end of that slice, but the radiation slips forward, and a relatively long pulse (on the order of the coherence time) is produced. \begin{figure}[tb] \includegraphics[width=0.6\textwidth]{Mod4MeV.eps} \caption{ Energy modulation induced by the laser. The bunch head is on the left side. } \label{fig:mod4MeV} \end{figure} The next task is to get rid of this relatively long radiation pulse (as well as of the background radiation from the rest of the bunch) while preserving the bunching. This can be done in different ways. In Fig.~\ref{fig:atto-scheme} a possible realization is illustrated: an offset chicane with a reflector or absorber inside. Alternative options are discussed below in this Section: excessive reverse taper, an achromatic bend, a kick with a quadrupole, a dogleg, and a harmonic afterburner. Finally, the microbunched beam radiates in a short radiator undulator. The bunching is strong in the central slice, it is weaker in the two satellites around $t = \pm 2.7$ fs, and it is much weaker in the rest of the bunch. Note that reverse tapering is very efficient in suppressing the radiation, but the bunching can reach high values, depending on conditions \cite{rev-tap}. We use a sufficiently strong chirp and taper to make sure that the bunching stays at a low level in the whole bunch except for the mentioned slices. In addition to that, another feature of the process is used to strongly suppress the radiation from unwanted parts of the bunch, including the satellites. Namely, the central slice is stretched in the main undulator due to the strong energy chirp, so that the frequency of the bunching is red-shifted with respect to the resonance frequency at the entrance of the SASE undulator. The satellites have a weaker red shift. The rest of the bunch also has a red shift due to the undulator taper \cite{stupakov-taper} but it is even weaker. The radiator undulator is set to resonance with the central slice, and the number of periods is approximately equal to the number of cycles of the density modulation within that slice. The other parts of the bunch are non-resonant and radiate very weakly. Below in this Section the operation of the radiator is discussed in more detail. As a result, few-hundred-attosecond-long pulses with low background can be produced in the XUV and soft X-ray ranges. The prerequisite for operation of this scheme is a sufficiently long SASE undulator. Note that at XUV and X-ray FEL facilities there is always a range of photon energies for which saturation occurs well before the undulator end, so that there is a reserve for operation with different advanced schemes. \subsection{Chirp-taper compensation effect} If there is a linear energy chirp at the undulator entrance, it can have a significant effect on SASE FEL properties, in particular on the gain.
The strength of this effect can be characterized by the energy chirp parameter \cite{prstab-2006-2}: \begin{equation} \hat{\alpha} = -\frac{d \gamma}{dt} \frac{1}{\gamma_0 \omega_0 \rho^2} \ , \label{eq:chirp-parameter} \end{equation} \noindent where $\rho$ is the well-known FEL parameter \cite{bon-rho, book}, and $\gamma$ is the relativistic factor. The factor $\gamma_0$ for a reference particle and the reference frequency $\omega_0$ are connected by the FEL resonance condition: $\omega_0 = 2ck_w \gamma_0^2/(1+K^2/2)$. Here $K$ is the undulator parameter and $k_w = 2\pi/\lambda_w$ with $\lambda_w$ being the undulator period. It was shown in \cite{prstab-2006-2} that the degrading effect of a linear energy chirp on the SASE FEL gain can be compensated for by applying a linear undulator taper, provided the following condition is satisfied: \begin{equation} \frac{d K}{dz} = - \frac{(1+K_0^2/2)^2}{K_0} \frac{1}{\gamma_0^3} \frac{d \gamma}{c dt} \label{compensation} \end{equation} \noindent Here $K_0$ is the value of the undulator parameter at the undulator entrance. Note that the condition (\ref{compensation}) is applicable when $4 \pi \rho \hat{\alpha} \ll 1$, and a perfect compensation is only possible in the limit $\rho \to 0$. However, for practical applications a perfect compensation is usually not required. \subsection{Chirp-taper compensation for a short lasing slice} Operation of SASE FELs with short bunches was studied in \cite{bon-short,we-short}. A relevant parameter to characterize the effect of the bunch length on FEL operation is $\rho \omega \sigma_z/c$ with $\sigma_z$ being the rms bunch length. When this parameter is smaller than one (i.e. when the bunch is shorter than the FEL coherence length $c(\rho \omega)^{-1}$), one observes an increase of the saturation length and a reduction of the FEL efficiency. The condition (\ref{compensation}) is also valid for short bunches or for short lasing slices (as mentioned above, an ideal compensation is only possible when $\rho \to 0$). In this paper we deal with long bunches but short lasing slices having the strongest laser-induced energy chirp. For such a case, instead of $\sigma_z$ one can consider the reduced laser wavelength, $\lambar_L = \lambda_L/(2\pi)$. Thus, a relevant parameter is now $\rho \lambda_L/\lambda$, where $\lambda = 2 \pi c/\omega$ is the FEL wavelength. It follows from numerical simulations with a laser-modulated beam that the increase of the saturation length for a short lasing slice with respect to normal SASE operation with long bunches can be approximated as follows: \begin{equation} \frac{L_{\mathrm{sat}}}{L_{\mathrm{sat}}^{\mathrm{(long \ bunch)}}} \simeq \left(\rho \frac{\lambda_L}{\lambda} \right)^{-1/2} \ \ \ \ \ \ \ \mathrm{for} \ \ \ \ \ \ \ \rho \frac{\lambda_L}{\lambda} < 1 \label{short-sat} \end{equation} The dependence is similar to that for short bunches \cite{bon-short,we-short}. For the purpose of the proposed scheme, one should stop at the onset of saturation (typically 80\% to 90\% of the saturation length) to avoid an increase of the width of the bunching distribution within the lasing slice. Thus, the total increase of the required undulator length can be acceptable in many practical cases even for a small parameter $\rho \lambda_L/\lambda$.
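As a back-of-the-envelope illustration of Eqs.~(\ref{eq:chirp-parameter}) and (\ref{compensation}), the snippet below evaluates them with parameter values quoted in the FLASH example of the next Section; estimating $d\gamma/dt$ as the steepest slope of a sinusoidal 4 MeV modulation at 800 nm is our own simplification:

```python
import numpy as np

c, mc2 = 2.998e8, 0.511e6            # speed of light [m/s], electron rest energy [eV]

gamma0 = 1.35e9 / mc2                # 1.35 GeV beam
dgamma_dt = (4e6 / mc2) * 2 * np.pi * c / 800e-9   # steepest slope of the modulation

K0, rho, lam = 1.25, 1.9e-3, 4e-9    # SASE undulator tuned to 4 nm
omega0 = 2 * np.pi * c / lam

alpha_hat = dgamma_dt / (gamma0 * omega0 * rho**2)             # Eq. (1)
dK_dz = (1 + K0**2 / 2)**2 / K0 * dgamma_dt / (c * gamma0**3)  # |Eq. (2)|

print(f"alpha_hat      ~ {alpha_hat:.1f}")    # ~4, as quoted for the central slice
print(f"dK per segment ~ {2.5 * dK_dz:.3f}")  # ~0.021 over a 2.5 m segment; the
                                              # 0.025 used below is ~20% stronger
```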
\subsection{Suppression of background from the main undulator} One of the advantages of the proposed scheme is that one can get a clean attosecond pulse from the afterburner. However, we need to get rid of the background produced in the main undulator. Let us consider possible ways of doing this. \subsubsection{Excessive reverse taper} Reverse taper is efficient in suppressing the radiation, although under some conditions the bunching can survive \cite{rev-tap}. In the case of the considered scheme one can apply a reverse taper that is stronger than the one needed for compensation of the energy chirp in the main lasing slice. With some delay of the saturation, one can get strong bunching there but almost no radiation. The excess of reverse taper would then be even stronger in the adjacent slices with the same sign of energy chirp but a weaker amplitude. There the radiation is suppressed even more strongly, and the saturation of the bunching factor is delayed even more than in the main slice. In general, the intensity of the radiation from the main undulator can be made sufficiently small. Then, in the radiator, strong power is produced only within the main slice, also due to the frequency offset mentioned above \footnote{Note that in some cases a regular undulator segment (if it is sufficiently short) with an optimized K value can play the role of the radiator undulator.}. A disadvantage of this method is that it requires a longer main undulator, which is not always possible. Also, the bunching within the main slice can be weaker than in the case without over-compensation, depending on parameters. \subsubsection{Achromatic bend or a kick with a quadrupole} Another way to produce clean attosecond pulses in the afterburner is to create an angle between the radiation from the main SASE undulator and that from the radiator by using an achromatic bend \cite{bend-kulipanov} or a kick with a quadrupole \cite{macarthur}. The latter technique (in combination with reverse taper) was successfully used for the generation of circularly polarized radiation with high purity at the LCLS \cite{lutman-circ}. \subsubsection{Chicane or dogleg} One can also create an offset between the electron beam and the radiation from the SASE undulator with the help of a chicane, as shown in Fig.~\ref{fig:atto-scheme}. Then the radiation is either absorbed directly or reflected to an absorber. A possible difficulty is that longitudinal dispersion, characterized by the transfer matrix coefficient $R_{56}$, is generated in the chicane. This can be a useful effect: additional bunching can be created, so that one can stop earlier in the SASE undulator; moreover, the lasing slice is stretched even more strongly, which helps in the suppression of background in the radiator. However, these two functions of the chicane (a technically reasonable offset and an optimal $R_{56}$) should be matched, which is not always easy to do. A more flexible system could be a chicane with quadrupoles in the dispersion regions \cite{thompson}, so that one can efficiently control $R_{56}$ while the required offset is kept. Another possible solution is a dogleg that creates a sufficient offset while the $R_{56}$ is typically too small to influence the longitudinal dynamics. \subsubsection{Harmonic afterburner} Radiation at the even harmonics of the SASE undulator is weak. Thus, tuning the radiator to the second harmonic, for example, would help to provide low-background attosecond pulses. Radiation at the fundamental of the undulator can be filtered out if it disturbs an experiment. \subsection{Suppression of satellites in the afterburner} One of the problems of laser-based methods for production of attosecond pulses is an insufficient contrast of the laser modulation, which leads to the generation of satellite pulses shifted in time by a cycle of the laser light \cite{atto-f}.
They are weaker than the main pulse but can still be a problem for user experiments. In the proposed scheme we rely not only on a less efficient generation of bunching for the satellites, but also (and mainly) on the fact that the frequency of the bunching in the main slice is different (more red-shifted) from that in the satellites. The radiator is tuned to the frequency of the main slice, and the radiation from the adjacent slices is strongly suppressed because of the offset from resonance. The spectral properties of the radiator are characterized by the well-known sinc function: \begin{equation} f_1 (\omega) = \left( \frac{\sin (N_w \pi \frac{\omega - \omega_r}{\omega_r})}{N_w \pi \frac{\omega - \omega_r}{\omega_r}} \right)^2 \label{sinc} \end{equation} \noindent Here $\omega_r$ is the resonance frequency of the radiator and $N_w$ is the number of periods. The latter parameter should be chosen such that it is approximately the same as the number of cycles in the bunching distribution within the main lasing slice. At the same time, as can be seen from (\ref{sinc}), for an efficient suppression one needs to satisfy the condition $N_w \ge \omega_m/(\omega_s - \omega_m)$ with $\omega_m$ being the frequency of the bunching in the main slice and $\omega_s$ that in the satellites. One can even adjust the parameters such that the satellites are positioned in the frequency domain at the zeros of the sinc function, i.e. when $N_w \simeq n \omega_m/(\omega_s - \omega_m)$, where $n$ is a natural number. In this case the suppression will be especially effective. The density modulations in the bulk of the beam (not modulated by the laser) are much weaker than those on the slopes. In addition, they have a much larger frequency offset from the resonance in the radiator, so that the radiation is strongly suppressed. As a result, one can obtain a clean attosecond pulse from the radiator.
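To give a feel for the numbers in Eq.~(\ref{sinc}), the snippet below evaluates the suppression factor for a satellite whose bunching frequency is offset from the radiator resonance; the offsets chosen are illustrative values of our own, not the actual ones from the simulations below:

```python
import numpy as np

def f1(omega, omega_r, N_w):
    """Eq. (4): radiator response; note np.sinc(y) = sin(pi*y)/(pi*y)."""
    y = N_w * (omega - omega_r) / omega_r
    return np.sinc(y) ** 2

omega_m, N_w = 1.0, 40            # radiator tuned to the main-slice frequency
for shift in (0.02, 0.03, 0.05):  # relative frequency offset of a satellite
    suppression = f1(omega_m * (1 + shift), omega_m, N_w)
    print(f"offset {shift:4.0%}: f1 = {suppression:.1e}")
# offset = 0.05 with N_w = 40 satisfies N_w*(omega_s - omega_m)/omega_m = 2,
# i.e. the satellite sits exactly on a zero of the sinc function
```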
\section{Numerical simulations for FLASH} At FLASH \cite{flash-nat-phot,njp}, the first XUV and soft X-ray FEL user facility, electron bunches with a maximum energy of 1.25 GeV are distributed between the two undulator lines. The facility operates in the wavelength range 4--60 nm with long pulse trains (several hundred pulses) following at a 10 Hz repetition rate. After the planned upgrade, the electron energy will reach 1.35 GeV; this energy is used in the numerical simulations. Electron bunches with a charge of 100 pC and the following parameters \cite{zemella} are considered in this paper: peak current 1.5 kA, normalized emittance 0.5 mm mrad, uncorrelated energy spread 200 keV. The parameters of the second undulator line, FLASH2, are used in the simulations. The segmented variable-gap undulator with a period of 3.14 cm and a maximum $K$ of about 2.7 consists of twelve 2.5 m long segments with quadrupoles in the intersections. The average beta-function of the FODO structure is 7 m. The modulator undulator has two periods with a period length of 15 cm and a $K$ value of 12. The Ti:sapphire laser system generates 5 fs long pulses (FWHM intensity) with a pulse energy of 0.25 mJ. The Rayleigh length is chosen to be 1 m. The energy modulation of the electron beam for this parameter set of the laser-modulator system is presented in Fig.~\ref{fig:mod4MeV}; the maximum energy deviation is 4 MeV. FEL simulations were performed with the code SIMPLEX \cite{simplex}. The results of simulations at three different wavelengths with respectively optimized afterburners are presented below for illustration. \subsection{Case I: the fundamental at 4.7 nm} \begin{figure}[tb] \includegraphics[width=0.5\textwidth]{bunching-4nm.eps} \includegraphics[width=0.5\textwidth]{power-4nm-one-shot.eps} \caption{ Bunching factor at the entrance to the radiator (upper plot) and power at its exit (lower plot) for a single shot. The bunch head is on the left side. } \label{fig:4nm} \end{figure} Let us first consider the case when the energy-modulated beam (see Fig.~\ref{fig:mod4MeV}) radiates in the FLASH2 undulator tuned to 4 nm at the entrance ($K_0 = 1.25$). Since the parameter $\rho$ for the considered beam and undulator parameters is $1.9\times 10^{-3}$, one can use (\ref{eq:chirp-parameter}) to find that $\hat{\alpha} \simeq 4$ for the central slice. The reverse step-taper is applied such that $K$ increases by 0.025 in each undulator segment; we use ten segments in the simulations. Note that the chosen reverse taper is about $20 \%$ stronger than the one needed for perfect compensation. This helps reduce the background from the main undulator without significantly affecting the generation of strong bunching within the central slice. However, a much stronger excessive reverse taper would lead to a significant increase of the undulator length and cannot be considered as the main method of background reduction for the given parameters. Let us discuss the increase of the undulator length with respect to that needed for saturation of a long bunch with the same slice parameters (which is 18 m). The parameter $\rho \lambda_L/\lambda$ is 0.38 in the considered case, so that the increase of the saturation length is about 60 \% according to Eq.~(\ref{short-sat}). We do not aim at reaching saturation since there is a broadening of the bunching distribution at that point. Lasing is stopped a bit earlier, at about 90 \% of the saturation length, so that the required increase of the undulator length is about 40 \% (from 18 m to 25 m). The distribution of the bunching factor in the modulated part of the bunch at the exit of the tenth undulator segment is shown for a single shot in Fig.~\ref{fig:4nm} (upper plot). One can see not only strong bunching within the central lasing slice but also a significant bunching in the satellites. In this simulation we assume that the $R_{56}$ between the main undulator and the radiator is negligible (to avoid an effect on the bunching distribution it should be below $\simeq 1~\mu$m). Thus, the electron beam is sent to the radiator without modifications while the radiation from the main undulator is suppressed with the help of one of the methods discussed in the previous Section. The radiator is a short undulator with 40 periods, a period length of 2.5 cm, and an undulator parameter of 1.804. In Fig.~\ref{fig:4nm} (lower plot) one can see the temporal profile of the radiation pulse at 4.7 nm emitted by the beam with the bunching shown in Fig.~\ref{fig:4nm} (upper plot). The wavelength increase is due to the stretching of the central slice in the main undulator. One can also see that the satellites are strongly suppressed despite a significant bunching factor; the mechanism is explained above. The total background (which includes the satellites and the radiation produced in the bulk of the beam) does not exceed the few per cent level. \begin{figure}[tb] \includegraphics[width=0.8\textwidth]{power-4nm-four-shots.eps} \caption{ Radiation power for four representative shots at 4.7 nm. The bunch head is on the left side. } \label{fig:4nm-four-shots} \end{figure}
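As a quick cross-check of the undulator-length budget quoted above, a bit of arithmetic of our own around Eq.~(\ref{short-sat}):

```python
# Case I numbers: rho = 1.9e-3, lambda_L = 800 nm, lambda = 4 nm
x = 1.9e-3 * 800 / 4            # rho * lambda_L / lambda = 0.38 < 1
stretch = x ** -0.5             # Eq. (3): ~1.62, i.e. ~60% longer to saturate
L_sat = 18.0 * stretch          # ~29 m for full saturation of the short slice
print(f"{x:.2f}  {stretch:.2f}  {0.9 * L_sat:.1f} m")
# stopping at ~90% of the saturation length gives ~26 m, consistent with the
# ~25 m quoted in the text
```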
Forty simulation runs were performed to study the properties of the attosecond pulses. Four representative shots are shown for illustration in Fig.~\ref{fig:4nm-four-shots}. The pulse duration is in the range 300--400 as (FWHM), which is an order of magnitude smaller than the FEL coherence time in the main undulator. The average pulse energy is 70 nJ with rms shot-to-shot variations of about 40 \%. It is interesting to note that despite a significant variation of the pulse energy, the timing is very stable (contrary to the case of a standard single-mode lasing of short electron bunches \cite{duesterer}). This opens up the possibility of pump-probe experiments with near-infrared (NIR) and soft X-ray pulses keeping sub-cycle synchronization. Indeed, after the emission of an attosecond X-ray pulse, the laser-modulated beam can be sent through a magnetic chicane for conversion of the energy modulation into a density modulation on the scale of the laser wavelength. Then it can radiate a NIR pulse in a short undulator (similar to the modulator undulator). Both pulses (soft X-ray and NIR) can be transported to a user instrument through the same mirror system, thus preserving the timing. At the experiment one can either use the NIR pulse directly or (if it is too weak) perform a cross-correlation with a powerful laser pulse, thus preserving timing information for every shot. Finally, let us discuss the accuracy of the predictions of FEL simulation codes in the considered case of a strong energy chirp. Note that the parameter $4 \pi \rho \hat{\alpha}$ is about 0.1, which is sufficiently small for the FEL theory with energy-chirped beams to be applicable. At the same time, one cannot expect a per cent level accuracy of the predictions; they are rather in the ten per cent range. In particular, one can expect a significant stretching of the central lasing slice in that range. The evolution of the process in the time-frequency domain is correctly simulated by FEL codes (all the necessary information is contained in the phases); however, the change of the average electron density is neglected in these codes. Thus, the radiated power is somewhat overestimated in the simulations presented here. \subsection{Case II: the second harmonic at 2.4 nm} The macroparticle distributions from the simulation runs at 4 nm in the main undulator were used for simulations of the radiation at the second harmonic in the dedicated afterburner. The undulator parameters are as follows: the period length is 2 cm, the number of periods is 20, and $K$ equals 1.134. Three representative shots are shown in Fig.~\ref{fig:2nm}. Again, due to the stretching, the wavelength of the second harmonic is not 2 nm but 2.4 nm. Pulse durations are in the range 250--300 as (FWHM), and the average pulse energy is 6 nJ. \begin{figure}[tb] \includegraphics[width=0.8\textwidth]{power-2nm-three-shots.eps} \caption{ Radiation power for three representative shots at 2.4 nm. The bunch head is on the left side. } \label{fig:2nm} \end{figure} \subsection{Case III: the fundamental at 9.3 nm} It is also instructive to illustrate the operation of the scheme at a longer wavelength. The electron beam is the same as in the previous simulations. The main undulator is now shorter; it consists of nine segments. The period of the afterburner is the same as that of the main undulator, but the number of periods is only 25, so that it is shorter than a segment of the main undulator. Four representative shots are shown in Fig.~\ref{fig:9nm}. The average pulse energy is 75 nJ, and the pulse duration is about 400 as (FWHM). Note that three different periods of the afterburners were used in the three considered cases.
They are all relatively short (about 1 m) and can be placed one behind the other. In practice one can optimize the parameters and the number of devices depending on the operating range of the attosecond facility and the range of electron energies (in the simulations only one energy was used). \begin{figure}[tb] \includegraphics[width=0.8\textwidth]{power-9nm-four-shots.eps} \caption{ Radiation power for four representative shots at 9.3 nm. The bunch head is on the left side. } \label{fig:9nm} \end{figure} \section{Prospects for hard X-rays} The standard chirp-taper attosecond scheme was originally intended for application in hard X-ray FELs. Indeed, the coherence time (a few hundred attoseconds) naturally matches the lasing slice duration when a few-cycle short-wavelength laser (like Ti:sapphire) is used. The scheme described in this paper can potentially be applied to push the pulse duration into the few-tens-of-attoseconds regime if the following approach is used. One can apply the so-called eSASE scheme \cite{atto-f} and use the same laser to create an energy modulation in the electron bunch with a subsequent conversion of this modulation into a density modulation (current spikes). The duration of the central spike with the highest peak current is then in the sub-hundred-attosecond range. A strong energy chirp is accumulated along this spike due to the longitudinal space charge field in front of and inside the SASE undulator \cite{sc-und}. Then the chirp-taper compensation is used in the same way as in the case of the laser-induced chirp. And to avoid pulse lengthening due to the slippage, one can use the method developed in this paper. Note that in some cases the application of excessive reverse taper can be adequate, and a regular segment of an X-ray FEL undulator can be used as a radiator. One can also consider second-harmonic generation in one segment. In other words, the application of this method might not require any modification of existing SASE undulators. \section{Conclusion} A modification of the chirp-taper scheme makes it possible to produce FEL pulses that are much shorter than the FEL coherence time. Thus, the generation of attosecond pulses in the XUV and soft X-ray regimes is enabled. Application of such a scheme to a user facility like FLASH would make it possible to create a unique source for attosecond science. \section{Acknowledgment} The author would like to thank Wim Leemans for his interest in this work, and Martin Dohlus for careful reading of the manuscript and useful suggestions. \clearpage
\section{Introduction} \label{sec:introduction} The dynamic range that the human visual system can experience in the real world is vast. Unfortunately, most off-the-shelf digital cameras capture only a limited range of the illumination in a scene. This discrepancy has led to a great deal of research in reconstructing still HDR images from conventional LDR off-the-shelf camera images. Most of these works have used a bracketed exposure imaging method~\cite{mann1994beingundigital,debevec2008recovering,mann2002painting,kalantari2017deep}, which involves taking multiple images at different exposures and merging them to generate a single HDR image. Generating HDR images by taking multiple images with different exposures may involve object/camera movement. Therefore, these methods end up producing ghosting artifacts in dynamic scenes. These artifacts can be reduced through various methods like replacing/rejecting the pixels that move across the images~\cite{khan2006ghost,zhang2010denoising,zheng2013hybrid}, merging all different exposure images with a reference image \cite{ward2003fast,tomaszewska2007image,gallo2015locally}, or aligning and reconstructing in a unified optimization system \cite{sen2012robust,zheng2013hybrid}. Capturing HDR video directly involves expensive specialized cameras that use complex optical systems \cite{tocciversatile} and sensors \cite{zhao2015unbounded}. On the other hand, reconstructing HDR video from an LDR sequence obtained from a standard off-the-shelf camera is a much more challenging task. Existing methods that are focused on reconstructing HDR images have been observed to generate temporally unstable results when applied to video sequences. There exist a few works addressing this problem~\cite{kang2003high,mangiat2011spatially,kalantari2013patch,li2016maximum}, which are typically slow and have limitations in several scenarios. Recently, the first deep learning-based approach was proposed by Kalantari~\emph{et al.}~\cite{kalantari2019deep} for HDR video reconstruction, which utilized dense frame-to-frame motion information (optical flow)~\cite{original_flow}. Their method first aligns the neighboring alternating exposure LDR frames to a reference frame by computing the optical flow between them, and then uses a convolutional neural network (CNN) based model to merge and reconstruct the final HDR frame. Although their method reduces the HDR frame reconstruction time by a certain factor, as pointed out by its authors, their approach still suffers from discoloration and flickering artifacts in the reconstructed HDR video frames. In this paper, we take inspiration from \cite{kalantari2019deep} and design a Generative Adversarial Network (GAN) based framework for reconstructing HDR video frames from an LDR sequence with alternating exposures. Kalantari~\emph{et al.}~\cite{kalantari2019deep} performed end-to-end training by minimizing the error between the reconstructed and ground truth HDR video frames on a set of training scenes. We show that merely reducing the pixel-to-pixel error between reconstructed and ground truth HDR frames from noisy LDR is prone to content loss and undesirable artifacts in the generated frames. We address this by proposing a framework comprising an LDR denoising network, a light-weight optical flow estimation network, and a GAN based model for final HDR reconstruction.
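Concretely, the pre-merge alignment shared by \cite{kalantari2019deep} and our pipeline can be sketched as below. This is a schematic helper of our own, not code from either work: the LDR frames are first mapped to the linear HDR domain (the $\gamma=2.2$ response is a common assumption, not a value prescribed by either paper), and a neighboring frame is then backward-warped to the reference with the estimated flow:

```python
import torch
import torch.nn.functional as F

def ldr_to_linear(ldr, exposure, gamma=2.2):
    """Map an LDR frame with exposure time `exposure` to the linear HDR domain."""
    return ldr.clamp(0, 1) ** gamma / exposure

def warp_with_flow(frame, flow):
    """Backward-warp `frame` (B,C,H,W) with a dense flow field (B,2,H,W),
    given in pixels from the reference frame to the neighbor."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)    # (2,H,W)
    coords = base.unsqueeze(0) + flow                               # sample positions
    coords = torch.stack((2 * coords[:, 0] / (w - 1) - 1,           # normalize x
                          2 * coords[:, 1] / (h - 1) - 1), dim=1)   # normalize y
    grid = coords.permute(0, 2, 3, 1)                               # (B,H,W,2)
    return F.grid_sample(frame, grid, align_corners=True)
```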
We modify the training procedure of our GAN based model by incorporating a temporal-stability based regularization term~\cite{eilertsen_mantiuk_unger_2019} along with content and style-based losses in the cost function while training the network. We use an altered version of an existing optical flow estimation model, LiteFlowNet~\cite{hui18liteflownet}, which is fine-tuned to estimate the dense optical flow between LDR frames with varying exposures. The estimated optical flow is then used to align the LDR frames with alternating exposures to the current frame. The final HDR frame is generated using a Generative Adversarial Network (GAN). In addition to the regularization term, the standard adversarial loss, and the HDR reconstruction losses, we also incorporate a perceptual loss~\cite{Johnson2016Perceptual} and a style-aware content loss~\cite{sanakoyeu2018styleaware} while training the network for better performance. The proposed framework generates temporally stable HDR video with high visual quality. GAN based models require a lot of data for image synthesis tasks. For HDR reconstruction, we therefore generate our training dataset synthetically by extracting the input LDR frames from a set of open-source HDR video repositories~\cite{froehlich2014creating, kronander2014unified}. However, unlike these synthetically generated LDR videos, frames captured by standard digital cameras contain varied noise. Therefore, for the framework to generalize well, we perturb the LDR frames with Gaussian noise of varied signal-to-noise ratios (SNRs). The main contributions of our work are given below: \begin{itemize} \itemsep0em \item We propose the first GAN-based method for HDR video reconstruction using LDR frames with alternating exposures. Our proposed framework consists of a denoising network for extracting clean LDR frames, a light-weight optical flow estimation network, and a GAN based model for final HDR reconstruction. \item We incorporate perceptual as well as style-aware content losses to improve the visual quality of the HDR frames. Along with utilizing the optical flow, we also incorporate a temporal-stability based regularization while training to further reduce the temporal incoherence in the reconstructed HDR frames. \item Our experimental results on different HDR video datasets demonstrate that the proposed framework outperforms the existing approaches and produces high-quality HDR video. \end{itemize} \noindent \textit{Outline of the paper:} The paper is organized as follows. Section \ref{related_work} reviews the related work in the area of HDR imaging. Section \ref{dataset_section} describes the dataset used for the experimentation. Section \ref{archi} presents the model architecture in detail. Section \ref{training_det} describes the training details and hyperparameters. Section \ref{results} reports the qualitative and quantitative evaluations against the baselines. Finally, Section \ref{sec:ablation_study} presents the ablation study. 
\section{Related Work} \label{related_work} \begin{table}[!b] \begin{tabularx}{\linewidth} {lX} \toprule \toprule Notation & Description\\ \midrule \raggedright $L_{i}$ -- & original $i^{th}$ LDR frame (alternating expos.)\\ $\hat{L}_{i}$ -- & $i^{th}$ generated clean LDR (alternating expos.)\\ $\widetilde{L}_{i}$ -- & aligned $i^{th}$ clean LDR (alternating expos.)\\ $H_{i}$ -- & original $i^{th}$ HDR frame \\ $\widetilde{H}_{i}$ -- & generated $i^{th}$ HDR frame \\ $T_{i}$ -- & $i^{th}$ tonemapped frame of original HDR\\ $\widetilde{T}_{i}$ -- & $i^{th}$ tonemapped frame of generated HDR\\ \bottomrule \noalign{\vskip 0.1cm} \end{tabularx} \caption{Description of notations frequently occurring in the paper} \label{tab:notation} \vspace{-0.5cm} \end{table} In the last few years, with the advent of learning-based algorithms, the problem of HDR imaging has been extensively explored. However, most of this work is centered around the generation of still HDR images. One set of approaches uses a sequence of different exposure images to generate HDR images~\cite{debevec1997recovering, sen2012robust, hu2013hdr, oh2014robust, ma2017robust, kalantari2017deep}, while another uses burst images to generate the HDR image~\cite{liu2014fast, hasinoff2016burst}. In recent years, there have also been more focused works on generating HDR images from a single image~\cite{eilertsen2017hdr, marnerides2018expandnet}. Almost none of these approaches are suitable for generating HDR video because still HDR imaging does not enforce temporal consistency. For brevity, we only discuss the works related to the generation of HDR video. The systems that have produced the highest-quality results to date are specialized cameras that directly capture HDR video. These cameras include special sensors that can capture an extensive dynamic range~\cite{brajovic1996sorting, seger1999hdrc, nayar2000high} or beam-splitters that deflect the light to multiple sensors such that each sensor concurrently measures a different amount of radiance. However, these approaches are limited because they need specialized custom hardware with enormous costs and are, therefore, less widespread~\cite{tocci2011versatile, kronander2013unified}. One way to generate HDR video is from an input sequence of frames with alternating exposures. Kang~\emph{et al.}~\cite{kang2003high} first proposed a method for HDR video reconstruction using alternating exposure LDR frames. They used optical flow to align the neighboring frames to the reference frame and then combined them with the reference frame using a weighted sum to avoid ghosting artifacts. However, their approach leads to ghosting artifacts when the scene has a significant amount of motion. Mangiat and Gibson~\cite{mangiat2011spatially} improved the approach of Kang~\emph{et al.}~\cite{kang2003high} using a block-based motion estimation method coupled with a refinement stage. In their subsequent work, they filtered regions with significant motion to minimize blocking artifacts. However, their approach still produced blocking artifacts when the scene had substantial movement. In addition, their approach is limited to sequences with only two exposures. Kalantari~\emph{et al.}~\cite{kalantari2013patch} proposed a patch-based method to reconstruct the missing exposures at each frame. After reconstruction, all the produced images were combined to obtain the final HDR frame. 
Temporal coherency is improved by estimating the motion between the neighboring frames and the reference frame. However, the patch search was constrained to only a small window around the predicted motion, with the window size obtained by a greedy approach. This method produces results that are significantly better than those of the previous approaches. However, solving the complex patch-based optimization was a time-consuming process for producing even a single HDR frame. A major drawback of this approach was that it often was unable to constrain the patch search properly and underestimated the search window size; ghosting artifacts were observed in such cases. A further work~\cite{gryaditskaya2015motion} improves the method of Kalantari~\emph{et al.}~\cite{kalantari2013patch} by adaptively adjusting the exposures. A recent work by Li~\emph{et al.}~\cite{li2016maximum} formulates the HDR video reconstruction problem as a maximum a posteriori estimation. Their method focuses on finding the foreground and background of each HDR frame separately. They extract the background and foreground using rank minimization and multiscale adaptive regression techniques, respectively. The major drawback of this method is again the computational cost involved: it takes around two hours to generate a single frame at $1280\times720$ resolution. Additionally, many of the frames were accompanied by noise and discoloration. Recently, Kalantari and Ramamoorthi \cite{kalantari2019deep} proposed an approach that uses two connected networks, called the Flow network and the Merge network. The Flow network aligns the neighboring frame with the current frame, while the Merge network merges the aligned frames with the reference frame. Their approach is the current state of the art for HDR video generation. However, ghosting artifacts still appear in challenging cases where the reference image is overexposed or where there is notable parallax or occlusion. \section{Dataset} \label{dataset_section} \begin{figure}[!b] \centering \includegraphics[width=1\columnwidth]{Figures/intro2.png} \caption{Visual comparison of the over and under exposed LDR frames generated using Equation~\ref{eq1} with the corresponding HDR frame. Note the loss of details in the dark and bright regions of the under and over exposed LDR frames.} \label{expose} \end{figure} We require a large dataset consisting of HDR video frames with corresponding LDR frames having alternating exposures. We use two publicly available HDR video datasets curated by Froehlich~\emph{et al.}~\cite{froehlich2014creating} (13 videos) and Kronander~\emph{et al.}~\cite{kronander2014unified} (8 videos). These datasets were prepared using cameras with a specific optical design containing external~\cite{froehlich2014creating} and internal~\cite{kronander2014unified} beam-splitters. The dataset contains 21 HDR videos, out of which 18 were used for training, and the remaining three were used as the hold-out test set, consistent with the current state-of-the-art method proposed by Kalantari~\emph{et al.}~\cite{kalantari2019deep}. We generate synthetic LDR frames from ground truth HDR frames at different exposures using Eq.~\ref{eq1}. 
\begin{equation} \label{eq1} \begin{split} L_{i}=g_i({H}_{i})=clip[({H}_{i}t_i)^{1/\gamma}] \end{split} \end{equation} \noindent where $\gamma = 2.2$, ${H_i}$ is the HDR image in the linear domain, $t_i$ is the exposure time, and the $clip$ function clips the output to the range $[0,1]$. Figure~\ref{expose} shows a comparison of underexposed and overexposed LDR frames along with the corresponding HDR frame. Figures~\ref{expose}(a) and (b) clearly depict the content loss in the overexposed and underexposed LDR frames compared to the corresponding HDR frame. \section{Proposed Architecture} \label{archi} In this section, we present a detailed discussion of our proposed framework. Our proposed framework consists of three parts, i.e., an LDR denoising network for extracting clean LDR frames from noisy LDR video, a lightweight optical flow estimation network, and a GAN based model for the final reconstruction of high-quality HDR frames. \textbf{Notations.} Table \ref{tab:notation} summarizes all the variables used in this paper and their corresponding definitions. Here $L$ denotes an LDR frame, $H$ denotes an HDR frame, and $T$ denotes a tonemapped HDR frame. An important point to note here is that $i$ represents the $i^{th}$ frame of the video, so $(i-1)^{th}$ and $(i+1)^{th}$ denote the previous and the next frame with respect to the $i^{th}$ frame. \subsection{Self-Supervised Denoising Network} \label{denoising_section} \begin{figure}[!b] \centering \includegraphics[width=1\columnwidth]{Figures/noise_ldr.png} \caption{Visual comparison of a noisy LDR frame $L^{\prime}_{t-1}$, generated by adding Gaussian noise to the synthetically generated LDR frame $L_{t-1}$ using Equation~\ref{noisy_ldr}} \label{fig_noise_ldr} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=2\columnwidth]{Figures/eldr.jpg} \caption{Architecture of our denoising network} \label{denoise_ldr} \end{figure*} Real-world off-the-shelf cameras are prone to capturing noise while recording LDR frames. This noise produces unwanted artifacts in two ways: first, while aligning the neighboring frames by computing the optical flow between them, and second, during the final reconstruction from these aligned frames. In order to reconstruct high-quality HDR frames, we require the corresponding LDR frames to be less noisy. In our method, we incorporate a denoising network that removes such imperfections from the noisy LDR frames. We call our self-supervised denoising blocks ELDR blocks. The self-supervised paradigm has shown promising results in learning feature representations~\cite{fl,karen}, temporal coherency~\cite{temp}, image denoising~\cite{xu2019noisyasclean}, and many other tasks~\cite{color,ssflow,6869}. Inspired by this, we design a self-supervision based LDR frame denoising network that learns to create clean LDR frames from noisy LDR video frames. For all our experimentation, we use Gaussian noise as the perturbation function to generate noisy LDR frames from the synthetic LDR frames, as described in Equation~\ref{noisy_ldr}, consistent with previous baselines~\cite{kalantari2019deep}. Figure~\ref{fig_noise_ldr} shows an example of an over-exposed LDR frame and the corresponding frame after the Gaussian noise addition. 
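For illustration, a minimal NumPy sketch of the synthetic data generation is given below: the LDR simulation equation above (Eq.~\ref{eq1}) maps an HDR frame to an LDR frame, and the Gaussian perturbation of Equation~\ref{noisy_ldr} corrupts it. The frame size, exposure times, and noise level are hypothetical stand-ins, not the values used in our experiments.

\begin{verbatim}
import numpy as np

def simulate_ldr(hdr, t, gamma=2.2):
    # Exposure scaling, gamma encoding, and clipping to [0, 1]
    # (the LDR simulation equation above).
    return np.clip((hdr * t) ** (1.0 / gamma), 0.0, 1.0)

def perturb(ldr, sigma=0.02):
    # Additive Gaussian noise; sigma controls the SNR level.
    return ldr + np.random.normal(0.0, sigma, size=ldr.shape)

hdr = np.random.rand(720, 1280, 3) * 8.0         # stand-in linear-domain HDR frame
noisy_low = perturb(simulate_ldr(hdr, t=0.125))  # hypothetical short exposure
noisy_high = perturb(simulate_ldr(hdr, t=2.0))   # hypothetical long exposure
\end{verbatim}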
\begin{equation} \label{noisy_ldr} \begin{split} {L^{\prime}}_{i}=L_{i}+N(\mu, \sigma) \end{split} \end{equation} Our LDR frame denoising network consists of a series of convolution and deconvolution operations along with skip connections between them, following a U-Net~\cite{ronneberger2015unet} like structure. Each 2-D convolution operation is followed by a BatchNorm operation, ReLU activation, and a $2 \times 2$ max-pooling layer to reach the bottleneck representation. The bottleneck feature map is then upsampled using deconvolution layers. Figure~\ref{denoise_ldr} shows the architecture of our denoising network. In each iteration of the training procedure, we take the LDR frames ($L_i$), add perturbations to them according to Equation~\ref{noisy_ldr}, and use these perturbed images as the input to the network. The network is trained to recover the original clean LDR frames from the added noise. Our proposed method consists of two such denoising networks, one for each exposure. The architectures are identical; the difference is that one network is trained on LDR frames with low exposure and the other on LDR frames with high exposure. Formally, we train our denoising blocks using the $L_1$ loss given below. \begin{equation} \label{self_supervised_l1} \begin{split} L_{denoise}=||\hat{L}_i - L_i||_1 \end{split} \end{equation} \subsection{Flow Network} In video-to-video synthesis tasks, object movements across frames are known to create temporal artifacts during the reconstruction. Visually incoherent frames with poor temporal coherency are observed if existing image synthesis methods are directly applied to videos without incorporating the temporal dynamics into the model. We address this by aligning all the input LDR frames with alternating exposures to a reference frame before using them for reconstructing the HDR frames. In order to achieve such an alignment, we first estimate the optical flow~\cite{original_flow} between the consecutive LDR frames having alternating exposures, which is then used to warp the previous frame to the current frame. Convolutional neural network-based optical flow estimation was originally proposed by Dosovitskiy~\emph{et al.}~\cite{first_CNNflownet}, which directly generates a flow field from a pair of images. Since then, many works have been proposed on neural network-based optical flow estimation~\cite{pyramid_flow, flownet2, maurer2018proflow}. However, most of these techniques are computationally expensive, and direct application of these methods in our case would not be scalable for real-time estimation of HDR frames. Therefore, we use a fine-tuned version of LiteFlowNet~\cite{hui18liteflownet} in our proposed pipeline for optical flow estimation, which outperforms most other neural network-based flow estimation methods in terms of both speed and accuracy. After obtaining the clean version of the LDR frames with alternating exposures, we compute the optical flow between these neighboring frames. Originally, LiteFlowNet~\cite{hui18liteflownet} was trained to generate flow maps between video frames having similar exposures. Directly using a pre-trained version of LiteFlowNet~\cite{hui18liteflownet} would result in inconsistencies due to the difference in exposures. In order to utilize it across different exposures, we initialize LiteFlowNet~\cite{hui18liteflownet} with the pre-trained weights and leave it trainable during our end-to-end training procedure. 
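Concretely, the alignment step is standard backward warping via a sampling grid. A minimal PyTorch sketch is given below, assuming a dense flow field in pixel units produced by the fine-tuned LiteFlowNet; this is an illustrative sketch, not our exact implementation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    # frame: (N, C, H, W) previous frame; flow: (N, 2, H, W) in pixels.
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys)).float().to(frame.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                      # displaced pixel coords
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                # normalize x to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0                # normalize y to [-1, 1]
    grid = torch.stack((gx, gy), dim=-1)                   # (N, H, W, 2)
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
\end{verbatim}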
\subsection{GAN Based HDR Frame Generation} \begin{figure*}[!ht] \centering \includegraphics[width=2\columnwidth]{Figures/arch2.png} \caption{Our proposed method for HDR video generation, consisting of two denoising networks, a LiteFlowNet~\cite{hui18liteflownet} model, and a final GAN based reconstruction model. Layers in both the generator and the discriminator can be identified by their color, as described in the table on the right side. Each layer label follows the convention k-\textit{kernel size}-n-\textit{number of kernels}-s-\textit{stride size}.} \label{main_dia} \end{figure*} \textbf{Network Architecture.} We adopt the GAN based architecture proposed by~\cite{thasarathan2019automatic} for our HDR frame reconstruction. The generator network is based on an encoder-decoder architecture, where the encoder downsamples the image twice ($H \times W \to H/4 \times W/4$). The feature map is then passed through 8 Res-Blocks followed by two upsampling layers. Similar to~\cite{thasarathan2019automatic}, we use instance normalization layers. We wrap a convolution layer, an instance-norm layer, and a ReLU activation layer into one basic unit. The Res-Block consists of 2 basic units stacked over each other. The discriminator consists of 5 convolution layers followed by two dense layers. To stabilize the training process, we use spectral normalization. By restricting the discriminator to be 1-Lipschitz, we prevent the gradient uninformativeness problem~\cite{lipschitz}. Figure~\ref{main_dia} shows our proposed architecture. Let \textit{G} and \textit{D} denote the generator and discriminator networks. Our generator network takes a clean overexposed LDR ($\widetilde{L}_i$) and an underexposed LDR ($\widetilde{L}_{i-1}$), obtained from the denoising networks after flow correction, and generates the current HDR frame ($\widetilde{H}_{i}$). \begin{equation} \label{generator} \begin{split} \widetilde{H}_{i}=\textit{G}(\widetilde{L}_{i},\widetilde{L}_{i-1}) \end{split} \end{equation} \subsection{Objective Function} \textbf{Tone Mapping.} Kalantari~\emph{et al.}~\cite{kalantari2019deep} argued that defining the loss function in the linear HDR domain underestimates the error in darker regions. The solution they suggested is to convert HDR from the linear domain to the log domain~\cite{kalantari2017deep,zhang-iccv-17}. Consistent with the previous baselines~\cite{kalantari2019deep}, we also use the differentiable $\mu$-law transformation, denoted by $T$, and compute our losses on tonemapped HDR frames. \begin{equation} \label{tonemap} \begin{split} T_{i}=\frac{log(1+\mu H_i)}{log(1+\mu)} \end{split} \end{equation} \textbf{\boldmath {$L_1$} Loss.} Kalantari~\emph{et al.}~\cite{kalantari2019deep} trained their model with an $L_1$ loss computed between the generated and ground truth HDR frames. The authors also argued that the $L_1$ loss promotes sharpness in images compared to the $L_2$ loss. Again, consistent with the previous baselines, we use the $L_1$ loss in our model. \begin{equation} \label{l1} \begin{split} L_{l_1}= ||T_i - \widetilde{T}_{i} ||_1 \end{split} \end{equation} \textbf{Adversarial Objective.} Rather than discriminating using only the $i^{th}$ ground truth HDR frame, Thasarathan and Nazeri~\cite{thasarathan2019automatic} proposed also using the previous $(i-1)^{th}$ frame. They argued that pairing the current frame with the previous frame for discrimination generates temporally more coherent frames than the conventional adversarial loss. This adversarial loss reduces flickering artifacts in the frames. 
\begin{equation} \label{adv} \begin{split} L_{adv} & =\mathbb{E}_{(T_{i},T_{i-1})}\log[\textit{D}(T_{i},T_{i-1})] \\ & + \mathbb{E}_{\widetilde{T}_{i}}\log[1-\textit{D}(\widetilde{T}_{i},T_{i-1})] \end{split} \end{equation} \textbf{Content and Style Losses.} Frame reconstruction is also accompanied by other visual artifacts, such as blurriness and color mismatch. We incorporate content and style losses~\cite{sanakoyeu2018styleaware} to minimize visual artifacts. Let $\phi_j$ represent the activated feature map of the $j^{th}$ layer of a pre-trained VGG-19. For our experiments, we use the feature maps of the $1^{st}$ to $5^{th}$ layers. We also use a style loss to maintain spatial consistency in the generated HDR frame. $\Delta^{\phi}_j$ represents the Gram matrix of the $j^{th}$ feature map $\phi_j$. Equation \ref{content} represents the content loss and Equation \ref{style} represents the style loss. \begin{equation} \label{content} \begin{split} L_{content}=\mathbb{E}_{j} \left[ \frac{1}{N_j}||\phi_j(T_i) - \phi_j(\widetilde{T}_i) ||_1 \right] \end{split} \end{equation} \begin{equation} \label{style} \begin{split} L_{style}=\mathbb{E}_{j} \left[ ||\Delta^{\phi}_j(T_i) - \Delta^{\phi}_j(\widetilde{T}_i) ||_1 \right] \end{split} \end{equation} For our experiments, we use $\lambda_{adv}=5$, $\lambda_{content}=1$, $\lambda_{style}=1000$, and $\lambda_{l_{1}}=30$; Equation \ref{final} represents the overall reconstruction loss. \begin{equation} \begin{split} L_{rec}= \lambda_{adv}L_{adv} + \lambda_{content}L_{content} \\+ \lambda_{style}L_{style} +\lambda_{l_{1}}L_{l_{1}} \end{split} \label{final} \end{equation} \textbf{Temporal Regularization.} Finally, we incorporate an explicit regularization for additional temporal stability~\cite{bong,eilertsen_mantiuk_unger_2019} between two consecutive frames, which further helps in reducing blurriness in high-motion frames. $W$ represents the warping function based on our flow network, and $\alpha$ is the regularization weight; we set $\alpha=0.3$. \begin{equation} \begin{split} L_{reg}=||\widetilde{T}_{i} - W(\widetilde{T}_{i-1}) ||_2 \end{split} \end{equation} \begin{equation} \begin{split} L_{total} = \alpha L_{rec} + (1-\alpha)L_{reg} \end{split} \end{equation} \section{Training Details} \label{training_det} In this section, we discuss our training methodology and present the values of the hyperparameters. For all of our experiments, we used $\mu=5000$ for tonemapping from the linear to the logarithmic scale. We train our self-supervised denoising networks with the $L_1$ loss of Equation~\ref{self_supervised_l1} for 100 epochs. We train our GAN model with the $L_{rec}$ loss using a batch size of 20 for 70 epochs, and then we fine-tune our network with $L_{total}$ using a batch size of 35 for 15 epochs. The training was performed entirely on a machine with an Intel Core i7, 64GB of memory, and a GeForce RTX 2080-Ti GPU. It takes roughly six days to complete the training procedure (both denoising and GAN combined). Note that we freeze the weights of our denoising network before training our GAN model. We optimize our objective functions using the Adam optimizer with a learning rate of $10^{-4}$ for both the self-supervised networks and the GAN. We use Leaky ReLU activations in the discriminator network~\cite{thasarathan2019automatic} and ReLU activations in the rest of the network. We use spectral normalization in the discriminator to stabilize the training procedure. 
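For clarity, the tonemapping and loss weighting above can be summarized in a short PyTorch sketch; the individual loss terms are assumed to be computed elsewhere in the training loop.

\begin{verbatim}
import torch

MU = 5000.0  # mu-law parameter used in our experiments

def mu_law(h):
    # Differentiable mu-law tonemap of a linear-domain HDR frame.
    return torch.log(1.0 + MU * h) / torch.log(torch.tensor(1.0 + MU))

def reconstruction_loss(l_adv, l_content, l_style, l_l1):
    # Weighted sum with the lambda values reported above.
    return 5.0 * l_adv + 1.0 * l_content + 1000.0 * l_style + 30.0 * l_l1

def total_loss(l_rec, l_reg, alpha=0.3):
    # Final objective combining reconstruction and temporal regularization.
    return alpha * l_rec + (1.0 - alpha) * l_reg
\end{verbatim}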
\section{Results} \label{results} We compare our approach against the method of Kalantari~\emph{et al.}~\cite{kalantari2013patch}, which uses a patch-based mechanism for high dynamic range video generation, and against the current state of the art, Kalantari~\emph{et al.}~\cite{kalantari2019deep}, which is based on convolutional neural networks (CNNs). We used the publicly available source code for the patch-based method by Kalantari~\emph{et al.}~\cite{kalantari2013patch} and the method of Li~\emph{et al.}~\cite{li2016maximum}. For the CNN based approach by Kalantari~\emph{et al.}~\cite{kalantari2019deep}, the authors provided their results on only three scenes from the test set. Both the patch-based mechanism by Kalantari~\emph{et al.}~\cite{kalantari2013patch} and the method of Li~\emph{et al.}~\cite{li2016maximum} take roughly 1--2 hours to generate each frame at a resolution of $1280\times 720$. Recent work on the CNN based method by Kalantari~\emph{et al.}~\cite{kalantari2019deep} showed that both the patch-based method~\cite{kalantari2013patch} and the method of Li~\emph{et al.}~\cite{li2016maximum} produce poor results on different scenes. Thus, a visual comparison against the patch-based method~\cite{kalantari2013patch} and the method of Li~\emph{et al.}~\cite{li2016maximum} is unnecessary. Moreover, our proposed method has a training mechanism that is consistent with that of the recent CNN based model by Kalantari~\emph{et al.}~\cite{kalantari2019deep}, as explained in Section~\ref{training_det}. Therefore, we only compare the visual results against the CNN based model of Kalantari~\emph{et al.}~\cite{kalantari2019deep}. \subsection{Evaluation Metrics} \begin{table}[!b] \centering \begin{tabular}{cccc} \toprule & Kalantari~\cite{kalantari2013patch} & Kalantari~\cite{kalantari2019deep} & Ours \\ [0.5ex] \hline \noalign{\vskip 0.1cm} PSNR &38.77&40.67&\textbf{43.35}\\ SSIM &-&0.78&\textbf{0.83}\\ HDR-VDP-2 &62.12&74.15&\textbf{77.19}\\ \bottomrule \end{tabular} \caption{Quantitative comparison of our method against the patch based method of Kalantari~\emph{et al.}~\cite{kalantari2013patch} and the CNN based method by Kalantari~\emph{et al.}~\cite{kalantari2019deep}.} \label{tab:eval} \end{table} To evaluate the performance of our end-to-end generative model, we use PSNR~\cite{psnr}, SSIM~\cite{ssim}, and HDR-VDP-2~\cite{vdp}. Given the ground truth image $(gt)$ and the predicted image $(pred)$, PSNR$(gt, pred)$ is defined as in Equation~\ref{psnr}: \begin{equation} \label{psnr} PSNR (gt,pred)=10\log_{10}(255^{2}/MSE(gt,pred)) \end{equation} where $MSE(gt,pred)$ is the mean squared error between the ground truth and predicted images of size $M \times N$, as in Equation~\ref{mse}: \begin{equation} \label{mse} MSE(gt,pred)={1\over M \cdot N}\sum_{i=1}^{M}\sum_{j=1}^{N}(gt_{ij}-pred_{ij})^{2} \end{equation} The higher the PSNR value, the better the quality of the reconstructed image. SSIM is another well-known metric for measuring the visual quality of the reconstructed image; it takes luminance, contrast, and structural similarity into account and is hence highly correlated with human perception. HDR-VDP-2~\cite{vdp} is a visual metric that reports a visibility score, i.e., how visible the differences between $gt$ and $pred$ are to an average observer, and a quality score, i.e., the degradation of $pred$ with respect to $gt$ expressed as a mean opinion score. 
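As a concrete reference for the metric definitions above, a minimal NumPy sketch of Equations~\ref{psnr} and~\ref{mse} follows; the peak value of $255$ assumes 8-bit images, while tonemapped frames in $[0,1]$ would use a peak of $1$.

\begin{verbatim}
import numpy as np

def psnr(gt, pred, peak=255.0):
    # PSNR between ground truth and predicted images.
    mse = np.mean((np.asarray(gt, np.float64)
                   - np.asarray(pred, np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
\end{verbatim}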
\subsection{Quantitative Comparison} We quantitatively compare our results against the method of Kang~\emph{et al.}~\cite{kang2003high}, the patch-based method of Kalantari~\emph{et al.}~\cite{kalantari2013patch}, and the current state of the art by Kalantari~\emph{et al.}~\cite{kalantari2019deep}. We select frames from the scenes of \uppercase{FISHING LONGSHOT, CAROUSEL FIREWORKS,} and \uppercase{POKER FULLSHOT,} which comprise the test set. We extract LDR frames with alternating exposures, as described in the previous sections. Each frame has a resolution of $1920\times1080$ but has a black border of 10 pixels around it, which we crop out before the quantitative comparison. We evaluate the results using PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index) in the tonemapped domain, as described in Equation~\ref{tonemap}. To further evaluate the quality of the generated HDR frames, we use HDR-VDP-2~\cite{hdr_vdp2}, which is designed specifically to evaluate HDR images and videos. Table~\ref{tab:eval} reports these metrics computed and averaged across all the frames of the test data. It can be seen from Table~\ref{tab:eval} that the proposed method outperforms the other existing approaches with respect to all the considered metrics. \begin{figure}[!b] \centering \includegraphics[width=1\columnwidth]{Figures/res21_lite.png} \caption{Visual comparison of our generated HDR frames with high motion from the \uppercase{CAROUSEL FIREWORKS} scene in the test data against Kalantari~\emph{et al.}~\cite{kalantari2019deep}.} \label{res2_motion1} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=2\columnwidth]{Figures/res1_lite.png} \caption{Visual comparison of our generated HDR frames from a scene of \uppercase{FISHING LONGSHOT} in the test data against Kalantari~\emph{et al.}~\cite{kalantari2019deep}. Identical regions of comparison are grouped in the same color.} \label{result_big} \end{figure*} \subsection{Visual Comparisons} \begin{figure}[!b] \centering \includegraphics[width=1\columnwidth]{Figures/res22_lite.png} \caption{Visual comparison of our generated HDR frames with high motion from the \uppercase{FISHING LONGSHOT} scene in the test data against Kalantari~\emph{et al.}~\cite{kalantari2019deep}.} \label{res2_motion2} \end{figure} We compare the reconstructed HDR frames of our method and the CNN based method of Kalantari~\emph{et al.}~\cite{kalantari2019deep} on scenes in the test set. Figure~\ref{result_big} shows a detailed comparison of an HDR frame from a test video scene. The scene shows the \uppercase{FISHING LONGSHOT}, which includes a bright region exposed by the sun (marked in the blue box) and a dark region having very low exposure (marked in the red box). It can be clearly observed from the regions bounded in blue and green that the proposed method produces frames with a much higher dynamic range than that of Kalantari~\emph{et al.}~\cite{kalantari2019deep}. On close observation near the region of the sky (bounded in blue) in Figure~\ref{result_big}, it can be seen that our method reconstructs the details of the clouds, while the method of Kalantari~\emph{et al.}~\cite{kalantari2019deep} loses the content and reconstructs an over-exposed frame. Overall, from Figure~\ref{result_big} it can be seen that our method generates HDR frames with much more detail in both the high and low exposure regions. 
Figures~\ref{res2_motion1} and~\ref{res2_motion2} compare our approach against the CNN based method by Kalantari~\emph{et al.}~\cite{kalantari2019deep} on the scenes of \uppercase{CAROUSEL FIREWORKS} and \uppercase{FISHING LONGSHOT}, respectively, which were not part of the training set. We selected frames from these video scenes with significant motion between adjacent frames, i.e., high-motion frames. As evident from the frames of \uppercase{CAROUSEL FIREWORKS} in Figure~\ref{res2_motion1}, the CNN based method by Kalantari~\emph{et al.}~\cite{kalantari2019deep} generates tearing artifacts on the moving parts of the person in the frame. It also produces blurred frames with ghosting artifacts in the scene of \uppercase{FISHING LONGSHOT} in Figure~\ref{res2_motion2}, marked by red arrows. \subsection{Denoising Network} \begin{figure}[!b] \centering \includegraphics[width=1\columnwidth]{Figures/noise_res.png} \caption{Visual comparison of a clean LDR frame reconstructed by our denoising network from the noisy LDR frame of the scene previously shown in Figure~\ref{fig_noise_ldr}} \label{fig_noise_res} \end{figure} We begin by showing the results of the denoising network (ELDR blocks), which is trained in a self-supervised manner on the same training set scenes. We extract LDR frames with alternating exposures from the HDR videos, as described in Section~\ref{dataset_section}. For each LDR frame with alternating exposures, we add Gaussian noise with varied signal-to-noise ratios (SNRs), as described in Section~\ref{denoising_section}. Figure~\ref{fig_noise_res} shows a clean LDR frame generated by our denoising network from a noisy LDR frame of the scene shown earlier in Figure~\ref{fig_noise_ldr}. In general, we observe that the generated LDR frames have a coherent texture with sharp features compared to the noisy LDR frames. The use of the $L_1$ loss function in the ELDR blocks can account for this observation. \section{Ablation Study} \label{sec:ablation_study} \textbf{Importance of Separate LDR Denoising Blocks.} To study the significance of the ELDR blocks, we remove them from our overall pipeline and re-train the model. It is evident from Table~\ref{tab:ablation} and Figure~\ref{ablation} that, both visually and metric-wise, removing the denoising network has a significant effect on the performance of our method. We observe a notable drop in the PSNR, SSIM, and HDR-VDP-2~\cite{hdr_vdp2} values. We find HDR frames generated directly from noise-embedded LDRs to be blurry, with less detailed reconstructions compared to our complete approach, which generates crisp details, as shown in Figure~\ref{ablation}. Thus, a two-stage network where noise removal and reconstruction of HDR videos are performed separately is shown to perform better than a single network performing both tasks. Moreover, the generated intermediate clean LDR frames add more interpretability in terms of noise removal to our model compared to previous baselines~\cite{kalantari2019deep,kalantari2017deep}. \begin{table}[!h] \centering \begin{tabular}{ccc} \toprule & Without denoising net & Ours (Complete) \\ \hline \noalign{\vskip 0.1cm} PSNR &41.39&\textbf{43.35}\\ SSIM &0.76&\textbf{0.83}\\ HDR-VDP-2 &73.87&\textbf{77.19}\\ \bottomrule \end{tabular} \caption{Quantitative comparison of our complete approach to our method without the denoising network. 
Removing the denoising network clearly drops the performance on all the mentioned metrics.} \label{tab:ablation} \end{table}% \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{Figures/ablation3.png} \caption{Visual comparison of our complete approach to our method without the denoising network. Removing the denoising network leads to the reconstruction of HDR frames of poor quality with fewer details.} \label{ablation} \end{figure} \section{Conclusion} \label{conclusion} In this paper, we proposed a temporally stable GAN-based HDR video reconstruction network that reconstructs HDR videos from LDR sequences with alternating exposures. Our method incorporates a separate LDR denoising network for extracting clean LDR frames, and we showed that separate denoising and reconstruction networks outperform a single network that performs both tasks. We first align the neighboring alternating exposure frames using LiteFlowNet~\cite{hui18liteflownet} to generate temporally coherent frames. Training our model with a joint objective consisting of the $L_1$ loss, style-aware content losses~\cite{sanakoyeu2018styleaware}, and an augmented GAN loss~\cite{thasarathan2019automatic} helped in minimizing the visual artifacts. We then fine-tune our model with a temporal-stability based regularization term to further reduce the tearing and ghosting artifacts caused by temporal incoherence. We performed all our experimentation consistent with the previous baselines, and we demonstrated that our method outperforms them both visually and metric-wise. We believe that there is great scope for further improvement in terms of better colors, higher dynamic range, and overall visual quality. In the future, we would like to test our proposed method on larger datasets of HDR videos as and when they become available. \begin{acks} We would like to thank the Science and Engineering Research Board (SERB) Core Research Grant for supporting our work. \end{acks}
\section*{Appendix 1} \label{sec:appendix-2} This section presents the set of hyperparameters used in the XGBoost algorithm to run the experiments. The configuration attributes are: \begin{itemize} \item booster = ``gbtree''; \item objective = ``reg:linear''; \item eta = $0.05$; \item max\_depth = $2$; \item min\_child\_weight = $100$; \end{itemize} A usage sketch of this configuration is given after Fig.~\ref{fig:auc-all-methods-dag-126} below. \section*{Appendix 2} \label{sec:appendix-1} This section provides experimental results using rolling windows of size $L = 126$ and $L = 504$ trading days to construct the financial networks. Considering $L=126$, Figs.~\ref{fig:auc-all-methods-dag-126},~\ref{fig:auc-all-methods-dtn-126} and~\ref{fig:auc-all-methods-dmst-126} show the AUC measure of the proposed machine learning method compared against baseline algorithms for the DAG, DTN and DMST network filtering methods. Considering $L=504$, Figs.~\ref{fig:auc-all-methods-dag-504},~\ref{fig:auc-all-methods-dtn-504} and~\ref{fig:auc-all-methods-dmst-504} present results for the DAG, DTN and DMST network filtering methods, respectively. For each time step ahead $h$, we calculated the average AUC of each method and its respective standard error over the test period, ranging from $5$ May $2007$ to $5$ September $2020$. Figs.~\ref{fig:auc-all-comparison-126} and~\ref{fig:auc-all-comparison-504} present the AUC performance and the AUC$^\ast$ improvement of the proposed method using $L = 126$ and $L = 504$ for $h$ trading weeks ahead ($1 \leq h \leq 20$). Results are provided for the DAG, DTN and DMST network filtering methods. The AUC$^\ast$ improvement is calculated over the time invariant (TI) benchmark method. \begin{figure*}[h!] \centering \subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/DAX30-sw-126-AUC.pdf}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/EUROSTOXX50-sw-126-AUC.pdf}} \subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/FTSE100-sw-126-AUC.pdf}} \subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/HANGSENG50-sw-126-AUC.pdf}} \subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-ALL/126/NASDAQ100-sw-126-AUC.pdf}} \subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/126/NIFTY50-sw-126-AUC.pdf}} \caption{\textbf{DAG - Predictive performance comparison of all methods.} The figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the average AUC of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods for all market indices.} \label{fig:auc-all-methods-dag-126} \end{figure*}
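The Appendix 1 configuration above maps directly onto the \texttt{xgboost} training API. The following sketch is illustrative only: the feature matrix, labels, and number of boosting rounds are hypothetical stand-ins for the node- and link-level features described in the paper.

\begin{verbatim}
import numpy as np
import xgboost as xgb

# Hyperparameter configuration listed in Appendix 1.
params = {
    "booster": "gbtree",
    "objective": "reg:linear",
    "eta": 0.05,
    "max_depth": 2,
    "min_child_weight": 100,
}

# Stand-in data: rows are candidate links, columns are features,
# labels indicate whether the edge is present in the future network.
X_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, size=1000)

dtrain = xgb.DMatrix(X_train, label=y_train)
model = xgb.train(params, dtrain, num_boost_round=500)  # round count is hypothetical
scores = model.predict(xgb.DMatrix(X_train))  # scores are ranked to compute the AUC
\end{verbatim}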
\begin{figure*}[h!] \centering \subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/DAX30-sw-126-AUC.pdf}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/EUROSTOXX50-sw-126-AUC.pdf}} \subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/FTSE100-sw-126-AUC.pdf}} \subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/HANGSENG50-sw-126-AUC.pdf}} \subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-T/126/NASDAQ100-sw-126-AUC.pdf}} \subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/126/NIFTY50-sw-126-AUC.pdf}} \caption{\textbf{DTN - Predictive performance comparison of all methods.} The figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the average AUC of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods for all market indices.} \label{fig:auc-all-methods-dtn-126} \end{figure*} \begin{figure*}[h!] \centering \subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/DAX30-sw-126-AUC.pdf}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/EUROSTOXX50-sw-126-AUC.pdf}} \subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/FTSE100-sw-126-AUC.pdf}} \subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/HANGSENG50-sw-126-AUC.pdf}} \subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-MST/126/NASDAQ100-sw-126-AUC.pdf}} \subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/126/NIFTY50-sw-126-AUC.pdf}} \caption{\textbf{DMST - Predictive performance comparison of all methods.} The figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the average AUC of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods for all market indices.} \label{fig:auc-all-methods-dmst-126} \end{figure*} \begin{figure*}[h!] 
\centering \subfigure[DAG (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-ALL/126/only_network-sw-126-AUC-COMPARISON.pdf}} \subfigure[DAG (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-ALL/126/only_network-sw-126-AUC-IMPROVMENT-COMPARISON.pdf}} \subfigure[DTN (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-T/126/only_network-sw-126-AUC-COMPARISON.pdf}} \subfigure[DTN (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-T/126/only_network-sw-126-AUC-IMPROVMENT-COMPARISON.pdf}} \subfigure[DMST (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-MST/126/only_network-sw-126-AUC-COMPARISON.pdf}} \subfigure[DMST (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-MST/126/only_network-sw-126-AUC-IMPROVMENT-COMPARISON.pdf}} \caption{\textbf{Machine Learning AUC and AUC$^\ast$ for the DAG, DTN and DMST network filtering methods.} Panels (a), (c) and (e) present the machine learning AUC measure and its standard error for $h$ trading weeks ahead ($1 \leq h \leq 20$). Panels (b), (d) and (f) present the AUC improvement over the benchmark time-invariant method and its standard error. Results for $L = 126$.} \label{fig:auc-all-comparison-126} \end{figure*} \begin{figure*}[h!] \centering \subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/DAX30-sw-504-AUC.pdf}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/EUROSTOXX50-sw-504-AUC.pdf}} \subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/FTSE100-sw-504-AUC.pdf}} \subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/HANGSENG50-sw-504-AUC.pdf}} \subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-ALL/504/NASDAQ100-sw-504-AUC.pdf}} \subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/504/NIFTY50-sw-504-AUC.pdf}} \caption{\textbf{DAG - Predictive performance comparison of all methods.} The figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the average AUC of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods for all market indices.} \label{fig:auc-all-methods-dag-504} \end{figure*} \begin{figure*}[h!] 
\centering \subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/DAX30-sw-504-AUC.pdf}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/EUROSTOXX50-sw-504-AUC.pdf}} \subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/FTSE100-sw-504-AUC.pdf}} \subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/HANGSENG50-sw-504-AUC.pdf}} \subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-T/504/NASDAQ100-sw-504-AUC.pdf}} \subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/504/NIFTY50-sw-504-AUC.pdf}} \caption{\textbf{DTN - Predictive performance comparison of all methods.} The figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the average AUC of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods for all market indices.} \label{fig:auc-all-methods-dtn-504} \end{figure*} \begin{figure*}[h!] \centering \subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/DAX30-sw-504-AUC.pdf}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/EUROSTOXX50-sw-504-AUC.pdf}} \subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/FTSE100-sw-504-AUC.pdf}} \subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/HANGSENG50-sw-504-AUC.pdf}} \subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-MST/504/NASDAQ100-sw-504-AUC.pdf}} \subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/504/NIFTY50-sw-504-AUC.pdf}} \caption{\textbf{DMST - Predictive performance comparison of all methods.} The figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the average AUC of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods for all market indices.} \label{fig:auc-all-methods-dmst-504} \end{figure*} \begin{figure*}[h!] 
\centering \subfigure[DAG (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-ALL/504/only_network-sw-504-AUC-COMPARISON.pdf}} \subfigure[DAG (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-ALL/504/only_network-sw-504-AUC-IMPROVMENT-COMPARISON.pdf}} \subfigure[DTN (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-T/504/only_network-sw-504-AUC-COMPARISON.pdf}} \subfigure[DTN (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-T/504/only_network-sw-504-AUC-IMPROVMENT-COMPARISON.pdf}} \subfigure[DMST (AUC)]{\includegraphics[trim=0.0cm 0.0cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-MST/504/only_network-sw-504-AUC-COMPARISON.pdf}} \subfigure[DMST (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.0cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-MST/504/only_network-sw-504-AUC-IMPROVMENT-COMPARISON.pdf}} \caption{\textbf{Machine Learning AUC and AUC$^\ast$ for the DAG, DTN and DMST network filtering methods.} Panels (a), (c) and (e) present the machine learning AUC measure and its standard error for $h$ trading weeks ahead ($1 \leq h \leq 20$). Panels (b), (d) and (f) present the AUC improvement over the benchmark time-invariant method and its standard error. Results for $L = 504$.} \label{fig:auc-all-comparison-504} \end{figure*} \section*{Appendix 3} This section presents the machine learning based market structure forecasting procedure in pseudocode. \begin{algorithm} \caption{Machine Learning Based Approach}\label{alg:cap} \begin{algorithmic} \Require temporal networks $G_{t-k}(V,E)$ for $k = 1, \ldots, 30$; horizon $h$ \For{$k \gets 1$ \textbf{to} $30$} \State extractNodeLevelFeatures ( $G_{t-k}(V,E)$ ) \State extractLinkLevelFeatures ( $G_{t-k}(V,E)$ ) \EndFor \State trainModel ( features ) \Comment{XGBoost, hyperparameters in Appendix 1} \State predictLinks ( $G_{t+h}(V,E)$ ) \Comment{forecast edges $h$ steps ahead} \end{algorithmic} \end{algorithm} \section{Conclusion} \label{sec:conclusion} In this article, we investigated stock market structure forecasting for multiple financial markets using financial networks built from the stock returns of major market indices' constituents. The stock market structure was modeled as networks, where nodes represent assets and edges represent the relationships among them. Three correlation-based filtering methods were used to create stock networks: Dynamic Asset Graphs (DAG), Dynamic Threshold Networks (DTN) and Dynamic Minimal Spanning Trees (DMST). We formulated market structure forecasting as a network link prediction problem, where we aim to accurately predict the edges that will be present in future networks. We proposed and experimentally assessed a machine learning model based on node- and link-level financial network features to forecast the future market structure. We used data from company constituents of six different stock market indices from the U.S., U.K., Indian, European, German and Hong Kong markets, ranging from $1$ March $2005$ to $18$ December $2019$. To assess the predictive performance of the model, we compared it to seven link prediction benchmark algorithms. Experimental results showed that the proposed model was able to forecast the market structure with performance superior to all benchmark methods for all market indices, regardless of the network filtering method. We also measured the improvement against the Time Invariant (TI) algorithm, which assumes that the network does not change over time. 
Experimental results showed a greater improvement over TI for networks created using the DTN filtering method, reaching an improvement of almost $40\%$ for NASDAQ100. Our experimental results also suggested that topological network information is useful in forecasting the stock market structure compared to pair-wise correlation measures, particularly for long-horizon predictions. As limitations of this work, we should emphasize that we only used assets that stayed in the market index throughout the whole period, which prevents the insertion and removal of nodes in the networks. In addition, for networks with a large number of nodes, the execution time increased significantly, both for generating derived features and for training the ML models. Our results can be useful in the study of stock market dynamics and in improving portfolio selection, risk management, and market structure estimation on a forward-looking basis. As future work, we plan to use the predicted stock market structure as input to portfolio and risk management tools to evaluate its usefulness in risk management scenarios. Future work also includes market structure forecasting using order book data for high-frequency trading analysis and the study of different asset classes beyond equities. \subsection{Market Data} In this study, we used data from six different stock market indices spread across the American, European and Asian markets. The stock indices were chosen to measure the performance of the proposed approach in different scenarios, given the diversity of the stock markets. Moreover, it is important to mention that they represent the stock markets of the regions or countries where they are listed. We considered the following indices and associated countries/regions: \begin{itemize} \item \textbf{DAX30} (Germany): This is a stock market index that consists of the $30$ largest and most liquid German companies trading on the Frankfurt Stock Exchange. \item \textbf{EUROSTOXX50} (Eurozone): This is a list of the $50$ companies that are leaders in their respective sectors from eleven Eurozone countries, including Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, the Netherlands, Portugal and Spain. \item \textbf{FTSE100} (United Kingdom): This is an index listed on the London Stock Exchange. The Financial Times Stock Exchange (FTSE) index is Britain's main asset indicator, managed by an independent organization and calculated based on the $100$ largest companies in the United Kingdom. \item \textbf{HANGSENG50} (Hong Kong): This is an index listed on the Stock Exchange of Hong Kong. This stock market index comprises the $50$ constituent companies with the highest market capitalization. It is the main indicator of market performance in Hong Kong. \item \textbf{NASDAQ100} (United States): This is an index composed of the $100$ largest non-financial companies listed on NASDAQ. \item \textbf{NIFTY50} (India): This is a stock market index listed on the National Stock Exchange of India based on the $50$ largest Indian companies. \end{itemize} Each financial index has a daily price time series for each one of its constituent stocks. Price time series are constructed using daily closing prices collected from \textit{Thomson Reuters}. The list of company constituents of each stock market index is not static and may change over time. 
In this article, we only consider companies that were part of the underlying indices across the entire period analyzed, as is common in other studies where node prediction is out of scope~\cite{SOUZA2019122343, 10.1007/978-3-030-22744-9_27}. We consider prices ranging from $1$ March $2005$ to $18$ December $2019$. \section{Results and Discussion} \label{sec:results-and-discussion} In this section, we present the experimental results for financial market structure forecasting. Initially, we present a set of descriptive analyses on the evolution of the financial networks and a brief discussion of the impact of the different network filtering methods on the financial market structure. Afterwards, we present a set of predictive analyses related to the machine learning approach and the benchmark methods. Finally, we present a discussion about the interpretability of the machine learning models. \subsection{Descriptive Analysis} We present a set of descriptive analyses of the temporal financial networks created across the different market indices. Our first descriptive analysis describes the financial network persistence, considering $L = 252$ trading days to create each graph (results regarding $L \in \lbrace 126, 504 \rbrace$ trading days can be found in the Supplementary Material, Section S.$3$). This analysis allows us to measure how the financial networks change their structure over time. We estimate the network persistence by calculating the pair-wise network similarity between $G(t)$ and $G(t')$ using the Jaccard similarity coefficient, defined as follows: \begin{equation} sim (G(t), G(t')) = \frac{ \left| G(t) \cap G(t')\right|}{\left| G(t) \cup G(t')\right|}, \end{equation} \noindent where $t$ and $t'$ range from $12$ May $2006$ to $18$ December $2019$. \begin{figure*}[t!] \centering \subfigure[DAX30]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/DAX30_252.jpg}} \subfigure[EUROSTOXX50]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/EUROSTOXX50_252.jpg}} \subfigure[FTSE100]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/FTSE100_252.jpg}} \subfigure[HANGSENG50]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/HANGSENG50_252.jpg}} \subfigure[NASDAQ100]{\includegraphics[trim=0cm 0.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/NASDAQ100_252.jpg}} \subfigure[NIFTY50]{\includegraphics[trim=0cm 3.8cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/NIFTY50_252.jpg}} \caption{\textbf{DAG - Cross-similarity matrix for each market index.} We calculate the pair-wise Jaccard similarity across all financial networks $G(t)$ and $G(t')$ ranging from $12$ May $2006$ to $18$ December $2019$, related to a given market index. For each market index figure, the first network on $12$ May $2006$ is represented in the top-left and the last network on $18$ December $2019$ in the bottom-right corner of each individual figure.} \label{fig:cross-similarity-dag} \end{figure*} \begin{figure*}[h!] 
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/DAX30_252.jpg}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/EUROSTOXX50_252.jpg}}
\subfigure[FTSE100]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/FTSE100_252.jpg}}
\subfigure[HANGSENG50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/HANGSENG50_252.jpg}}
\subfigure[NASDAQ100]{\includegraphics[trim=0cm 0.2cm 0cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/NASDAQ100_252.jpg}}
\subfigure[NIFTY50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/T/NIFTY50_252.jpg}}
\caption{\textbf{DTN - Cross-similarity matrix for each market index.} We calculate the pair-wise Jaccard similarity across all financial networks $G(t)$ and $G(t')$ ranging from $12$ May $2006$ to $18$ December $2019$, related to a given market index. For each market index, the first network on $12$ May $2006$ is represented in the top-left corner and the last network on $18$ December $2019$ in the bottom-right corner of each individual figure.}
\label{fig:cross-similarity-dtn}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/DAX30_252.jpg}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/EUROSTOXX50_252.jpg}}
\subfigure[FTSE100]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/FTSE100_252.jpg}}
\subfigure[HANGSENG50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/HANGSENG50_252.jpg}}
\subfigure[NASDAQ100]{\includegraphics[trim=0cm 0.2cm 0cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/NASDAQ100_252.jpg}}
\subfigure[NIFTY50]{\includegraphics[trim=0cm 4.2cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/cross-similarity/MST/NIFTY50_252.jpg}}
\caption{\textbf{DMST - Cross-similarity matrix for each market index.} We calculate the pair-wise Jaccard similarity across all financial networks $G(t)$ and $G(t')$ ranging from $12$ May $2006$ to $18$ December $2019$, related to a given market index. For each market index, the first network on $12$ May $2006$ is represented in the top-left corner and the last network on $18$ December $2019$ in the bottom-right corner of each individual figure.}
\label{fig:cross-similarity-dmst}
\end{figure*}
Figures~\ref{fig:cross-similarity-dag},~\ref{fig:cross-similarity-dtn} and~\ref{fig:cross-similarity-dmst} present the cross-similarity analysis for DAG, DTN and DMST of each stock market index, respectively. In each individual figure, the first network ($12$ May $2006$) is represented in the top-left corner and the last network ($18$ December $2019$) in the bottom-right corner. In general, we can observe that the structure consistently changes over time, which emphasizes the importance of tools to forecast market structure. DAG results in Figure~\ref{fig:cross-similarity-dag} show that the network structure changes considerably over time in all stock market indices. Figure~\ref{fig:cross-similarity-dtn} presents results from the DTN network filtering method.
We can observe that the similarity among networks tends to be noisier than with the DAG method. In some periods, the similarity among the networks is maximum, while at other times it reaches zero, as can be seen in NASDAQ100 and NIFTY50. The DTN network filtering method can produce disconnected or even empty graphs, which may cause these similarity oscillations. DMST results are shown in Figure~\ref{fig:cross-similarity-dmst}. This figure shows that there is low similarity for long-range comparisons among trees created by the DMST filtering method for all market indices, suggesting low stability, as reported by other authors~\cite{carlsson2010characterization,marti2015proposal}. Given the cross-similarity matrices of each market, we calculate the similarity among all matrices to measure the market similarity in terms of network evolution. This analysis allows us to identify which markets have similar behavior considering the persistence of networks. To do this, we use the cosine similarity, calculated using the following formula:
\begin{equation}
cosine\_sim (a,b) = \frac{\sum_{i}{a_i b_i}}{\sqrt{\sum_{i}{a_i^2}} \, \sqrt{\sum_{i}{b_i^2}}},
\end{equation}
\noindent where $a$ and $b$ are two non-zero numeric vectors holding the upper triangles of two distinct cross-similarity matrices. For non-negative entries, this metric ranges from $0$ to $1$ and corresponds to the cosine of the angle between the two vectors. Table~\ref{tab:cosine-distance} presents the pairwise cosine similarity for DAG, DTN and DMST. DAX30 and EUROSTOXX50 have the highest cosine similarity for DAG and DTN. For DMST, the highest value is between FTSE100 and EUROSTOXX50. This analysis demonstrates that the network persistence among European markets is higher than among markets from other regions of the world, for all three network filtering methods.
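A minimal sketch of this computation is shown below, assuming the cross-similarity matrices are available as \texttt{numpy} arrays; the helper name \texttt{cosine\_sim} mirrors the equation above and is illustrative.
\begin{verbatim}
import numpy as np

def cosine_sim(m_a, m_b):
    """Cosine similarity between the vectorised upper triangles
    (diagonal excluded) of two cross-similarity matrices."""
    iu = np.triu_indices_from(m_a, k=1)
    a, b = m_a[iu], m_b[iu]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3x3 cross-similarity matrices for two hypothetical markets.
m1 = np.array([[1.0, 0.8, 0.4],
               [0.8, 1.0, 0.6],
               [0.4, 0.6, 1.0]])
m2 = np.array([[1.0, 0.7, 0.5],
               [0.7, 1.0, 0.5],
               [0.5, 0.5, 1.0]])
print(round(cosine_sim(m1, m2), 4))
\end{verbatim}
Excluding the diagonal avoids the trivial self-similarity entries, which are identical across markets and would inflate the measure.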
\begin{table}[hb!]
\centering
\small
\caption{\textbf{Cosine similarity from cross-similarity results.} We calculate the cosine similarity between cross-similarity matrices, using the upper triangle of each matrix as the input vector. European markets have the highest similarity.}
\begin{tabular}{|c|l|ccccc|}
\cmidrule{3-7} \multicolumn{1}{r}{} & & \multicolumn{1}{l}{\textit{\textbf{EUROSTOXX50}}} & \multicolumn{1}{l}{\textit{\textbf{FTSE100}}} & \multicolumn{1}{l}{\textit{\textbf{HANGSENG50}}} & \multicolumn{1}{l}{\textit{\textbf{NASDAQ100}}} & \multicolumn{1}{l|}{\textit{\textbf{NIFTY50}}} \\
\midrule
\multirow{5}[2]{*}{\textbf{DAG}} & \textit{\textbf{DAX30}} & \textbf{0.9532} & 0.9435 & 0.9472 & 0.9341 & 0.9257 \\
& \textit{\textbf{EUROSTOXX50}} & & 0.9228 & 0.9403 & 0.9420 & 0.9070 \\
& \textit{\textbf{FTSE100}} & & & 0.9150 & 0.9358 & 0.8978 \\
& \textit{\textbf{HANGSENG50}} & & & & 0.9297 & 0.9302 \\
& \textit{\textbf{NASDAQ100}} & & & & & 0.9137 \\
\midrule
\multirow{5}[2]{*}{\textbf{DTN}} & \textit{\textbf{DAX30}} & \textbf{0.9338} & 0.8367 & 0.7573 & 0.6209 & 0.5795 \\
& \textit{\textbf{EUROSTOXX50}} & & 0.8755 & 0.7873 & 0.6143 & 0.6000 \\
& \textit{\textbf{FTSE100}} & & & 0.8331 & 0.5479 & 0.5503 \\
& \textit{\textbf{HANGSENG50}} & & & & 0.5892 & 0.5531 \\
& \textit{\textbf{NASDAQ100}} & & & & & 0.4269 \\
\midrule
\multirow{5}[2]{*}{\textbf{DMST}} & \textit{\textbf{DAX30}} & 0.9486 & 0.9354 & 0.8967 & 0.9011 & 0.9200 \\
& \textit{\textbf{EUROSTOXX50}} & & \textbf{0.9500} & 0.9058 & 0.9294 & 0.9312 \\
& \textit{\textbf{FTSE100}} & & & 0.9253 & 0.9400 & 0.9338 \\
& \textit{\textbf{HANGSENG50}} & & & & 0.9169 & 0.9080 \\
& \textit{\textbf{NASDAQ100}} & & & & & 0.9160 \\
\bottomrule
\end{tabular}%
\label{tab:cosine-distance}%
\end{table}%
The second descriptive analysis is the similarity between the current financial network $G(t)$ and the future network $G(t + h)$, where $h$ is the time lag, $\forall$ $h \in \lbrace 1, 5, 10, 15, 20 \rbrace$ trading weeks. This analysis provides an accurate point of view concerning how the current network changes in the near future; if networks do not change, there is no need to forecast them. We quantify the changes in the network structure using the Jaccard similarity between $G(t)$ and $G(t+h)$, considering $L = 252$ trading days to create each graph. Figure~\ref{fig:similarity-lag} presents the distribution of network similarity related to the three network filtering methods DAG, DTN and DMST of each stock market index. Experimental results suggest a high similarity distribution among networks considering $h = 1$ step ahead for all network filtering methods. However, the similarity distribution decreases with $h$, mainly in the DMST method. Considering $h = 20$, DMST presents a mean similarity lower than $25\%$ in all markets. In general, financial networks tend to have a certain margin of similarity for low $h$, but as $h$ increases, they become more and more dissimilar, hence justifying the importance of forecasting future market structures, particularly in long-horizon forecasting scenarios. Analyzing the DTN method, NIFTY50 and HANGSENG50 present a different behavior for larger $h$, with the similarity distribution oscillating between the maximum value and almost zero, as shown for $h=5$, $h=10$ and $h=15$. This amplitude can be explained by the analysis presented in Figure~\ref{fig:cross-similarity-dtn}, which shows that for some periods the similarity among networks is high, but it is also very low for other periods. The smallest similarity values are observed for the DMST method considering $h = 20$.
\begin{figure*}[ht!]
\centering
\subfigure[Dynamic Asset Graph]{\includegraphics[trim=0cm 1cm 0cm 0cm, clip=true, width=0.65\textwidth]{fig/similarity-and-degree/only_network-sw-252-NETWORK-SIMILARITY-BY-INDEX.pdf} \label{subfig:similarity-lag-dag}}
\subfigure[Dynamic Threshold Networks]{\includegraphics[trim=0cm 1cm 0cm 0cm, clip=true, width=0.65\textwidth]{fig/similarity-and-degree/T/only_network-sw-252-NETWORK-SIMILARITY-BY-INDEX.pdf} \label{subfig:similarity-lag-dtn}}
\subfigure[Dynamic Minimal Spanning Tree]{\includegraphics[trim=0cm 1cm 0cm 0cm, clip=true, width=0.65\textwidth]{fig/similarity-and-degree/MST/only_network-sw-252-NETWORK-SIMILARITY-BY-INDEX.pdf} \label{subfig:similarity-lag-dmst}}
\caption{\textbf{Networks Similarity vs. Time Lag.} The figure shows the distribution of network persistence considering $h \in \lbrace 1, 5, 10, 15, 20 \rbrace$ trading weeks ahead, related to the three network filtering methods: DAG, DTN and DMST. Network similarity is quantified using the Jaccard similarity between graphs $G(t)$ and $G(t+h)$.}
\label{fig:similarity-lag}
\end{figure*}
\begin{figure*}[ht!]
\centering
\subfigure[DAG ($L = 126$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/126.eps} \label{subfig:degree-cdf-126-dag}}
\subfigure[DAG ($L = 252$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/252.eps} \label{subfig:degree-cdf-252-dag}}
\subfigure[DAG ($L = 504$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/504.eps} \label{subfig:degree-cdf-504-dag}}
\subfigure[DTN ($L = 126$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/T/126.eps} \label{subfig:degree-cdf-126-dtn}}
\subfigure[DTN ($L = 252$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/T/252.eps} \label{subfig:degree-cdf-252-dtn}}
\subfigure[DTN ($L = 504$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/T/504.eps} \label{subfig:degree-cdf-504-dtn}}
\subfigure[DMST ($L = 126$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/MST/126.eps} \label{subfig:degree-cdf-126-dmst}}
\subfigure[DMST ($L = 252$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/MST/252.eps} \label{subfig:degree-cdf-252-dmst}}
\subfigure[DMST ($L = 504$)]{\includegraphics[trim=0.4cm 0.0cm 2.0cm 1.6cm, clip=true, width=0.28\textwidth]{fig/similarity-and-degree/MST/504.eps} \label{subfig:degree-cdf-504-dmst}}
\caption{\textbf{CDF of node degree across networks using DAG, DTN and DMST network filtering methods.} We calculate the cumulative distribution function of node degree across all stock networks using rolling window sizes $L = 126$, $252$ and $504$ trading days. The period of the experiments ranges from $3$ March $2007$ to $18$ December $2019$. Market indices with the smallest number of constituents present a similar behaviour in the DAG network filtering method. The DTN method presents the highest probability of nodes without edges, mainly on NIFTY50, NASDAQ100 and HANGSENG50. EUROSTOXX50 presents a distinct shape compared with the other market indices in DTN, with the smallest number of nodes without connections.
Results also suggest that the degree distributions of the market indices are similar for $L = 126$, $252$ and $504$ trading days in all network filtering methods.}
\label{fig:degree-cdf}
\end{figure*}
The third descriptive analysis is presented in Figure~\ref{fig:degree-cdf}. We present the Cumulative Distribution Function (CDF) of the node degree across networks of each index using the DAG, DTN and DMST network filtering methods. This analysis provides information concerning the node degree according to three main aspects: \textit{(i)} the impact of the time series size $L$; \textit{(ii)} the network filtering method; and \textit{(iii)} the size of the market index, in terms of number of constituents. We calculated the node degree distribution across all financial networks ranging from $3$ March $2007$ to $18$ December $2019$. Results using $L \in \lbrace 126, 252, 504 \rbrace$ trading days as rolling window size are presented. We observe in Figure~\ref{fig:degree-cdf} that market indices with the smallest number of constituents present a similar behaviour in terms of node degree when we use the DAG network filtering method. Moreover, DAG networks are prone to a higher occurrence of nodes with no connections. The DTN method also presents a high probability of nodes without edges, mainly on NIFTY50, NASDAQ100 and HANGSENG50. EUROSTOXX50 presents a distinct shape compared with the other market indices in DTN, with the smallest number of nodes without a connection: more than $75\%$ of its nodes have a degree greater than $1$. On the other hand, for all market indices, at least $50\%$ of the nodes have $4$ or more connections in DAG. Considering the number of stocks in each market index, we can also conclude that no node connects to all other vertices under any network filtering method, since the largest observed degree in each market index is smaller than the number of constituents minus one. Results also suggest that the degree distributions of the market indices are similar for $L = 126$, $252$ and $504$ trading days in all network filtering methods, indicating that the choice of $L$ does not affect the degree distribution of the stock networks of each market index.
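A minimal sketch of how such an empirical CDF can be pooled across snapshots is shown below; it assumes the snapshots are \texttt{networkx} graphs, and \texttt{degree\_cdf} is an illustrative helper, not part of any library.
\begin{verbatim}
import numpy as np
import networkx as nx

def degree_cdf(graphs):
    """Empirical CDF of node degree pooled across a list of network
    snapshots: returns sorted degrees and P(degree <= value)."""
    degrees = np.sort([d for g in graphs for _, d in g.degree()])
    cdf = np.arange(1, len(degrees) + 1) / len(degrees)
    return degrees, cdf

# Toy usage: pool degrees over two random 50-node snapshots.
snapshots = [nx.gnp_random_graph(50, 0.1, seed=s) for s in (1, 2)]
deg, cdf = degree_cdf(snapshots)
print(deg[:5], cdf[:5])
\end{verbatim}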
\subsection{Predictive Analysis}
In this section, we present a set of experimental results related to market structure forecasting using machine learning. First, we investigate the predictive performance of the proposed method in different scenarios, comparing it against the benchmark methods. Then, we present a qualitative analysis concerning the model interpretability and its implications.
\subsubsection{Performance Results}
We used a machine learning approach to forecast the financial network $G(t + h)$, where $h$ is the number of trading weeks ahead, $h = 1, 2, \dots, 20$. We discuss and report results using a rolling window size of $L = 252$ trading days to construct the financial networks. Results regarding $L \in \lbrace 126, 504 \rbrace$ trading days can be found in the Supplementary Material, Section S.$4$. Figures~\ref{fig:auc-all-methods-dag},~\ref{fig:auc-all-methods-dtn} and~\ref{fig:auc-all-methods-dmst} show the AUC measure of the proposed machine learning method compared to baseline algorithms for the DAG, DTN and DMST network filtering methods. For each time step ahead $h$, we calculated the average AUC of each method and its respective standard error over the test period, ranging from $5$ May $2007$ to $18$ December $2019$.
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/DAX30-sw-252-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/EUROSTOXX50-sw-252-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/FTSE100-sw-252-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/HANGSENG50-sw-252-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-ALL/252/NASDAQ100-sw-252-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-ALL/252/NIFTY50-sw-252-AUC.pdf}}
\caption{\textbf{DAG - Predictive performance comparison of all methods.} This figure shows the AUC measure of the machine learning method compared to the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods in all market indices.}
\label{fig:auc-all-methods-dag}
\end{figure*}
Denoted as ``ML'', the machine learning method outperforms the baseline methods in all market indices and all network filtering methods. In general, predictive performance decreases as the time lag $h$ increases. Despite its simplicity, TI is quite effective and presents good performance across market indices and network filtering methods, similar to the RW algorithm. Figure~\ref{fig:auc-all-methods-dag} presents results for the DAG network filtering method, suggesting that market indices with a small number of constituents have a higher AUC than markets with a large number of constituents. Results also suggest that the RW algorithm produces an edge ranking quite similar to TI's. The JC method presents the worst predictive performance in all market indices, except for FTSE100, where PA presents lower AUC values under the DAG network filtering method.
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/DAX30-sw-252-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/EUROSTOXX50-sw-252-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/FTSE100-sw-252-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/HANGSENG50-sw-252-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-T/252/NASDAQ100-sw-252-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-T/252/NIFTY50-sw-252-AUC.pdf}}
\caption{\textbf{DTN - Predictive performance comparison of all methods.} This figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods in all market indices.}
\label{fig:auc-all-methods-dtn}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfigure[DAX30]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/DAX30-sw-252-AUC.pdf}}
\subfigure[EUROSTOXX50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/EUROSTOXX50-sw-252-AUC.pdf}}
\subfigure[FTSE100]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/FTSE100-sw-252-AUC.pdf}}
\subfigure[HANGSENG50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/HANGSENG50-sw-252-AUC.pdf}}
\subfigure[NASDAQ100]{\includegraphics[trim=0.2cm 1.8cm 0.1cm 0.2cm, clip=true, width=0.2804\textwidth]{fig/AUC-MST/252/NASDAQ100-sw-252-AUC.pdf}}
\subfigure[NIFTY50]{\includegraphics[trim=0.2cm 6.2cm 0.1cm 0.2cm, clip=true, width=0.28\textwidth]{fig/AUC-MST/252/NIFTY50-sw-252-AUC.pdf}}
\caption{\textbf{DMST - Predictive performance comparison of all methods.} This figure shows the AUC measure of the machine learning method compared against the baseline methods. For each time step, we calculate the AUC average of each method and its respective standard error over the entire test period. The machine learning method outperforms the baseline methods in all market indices.}
\label{fig:auc-all-methods-dmst}
\end{figure*}
Figure~\ref{fig:auc-all-methods-dtn} presents results for the DTN network filtering method. ML results are superior in all markets and suggest that the proposed method can accurately identify highly correlated links, which is the main purpose of the DTN method. We can observe that the baseline algorithms have their worst results for the HANGSENG50, NASDAQ100 and NIFTY50 indices. As presented in Figure~\ref{fig:degree-cdf}, these market indices have a substantial number of nodes without connections. The TI algorithm outperforms the other baseline algorithms in DAX30, EUROSTOXX50 and NASDAQ100. Figure~\ref{fig:auc-all-methods-dmst} presents results related to the DMST network filtering method. Under DMST, the baseline methods have their worst results among the three filtering methods, except for the TI and RW algorithms. ML outperforms the benchmark methods in all markets.
\begin{figure*}[h!]
\centering
\subfigure[DAG (AUC)]{\includegraphics[trim=0.0cm 0.5cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-ALL/252/only_network-sw-252-AUC-COMPARISON.pdf}}
\subfigure[DAG (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.5cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-ALL/252/only_network-sw-252-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DTN (AUC)]{\includegraphics[trim=0.0cm 0.5cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-T/252/only_network-sw-252-AUC-COMPARISON.pdf}}
\subfigure[DTN (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.5cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-T/252/only_network-sw-252-AUC-IMPROVMENT-COMPARISON.pdf}}
\subfigure[DMST (AUC)]{\includegraphics[trim=0.0cm 0.5cm 9.2cm 1.3cm, clip=true, width=0.29\textwidth]{fig/AUC-MST/252/only_network-sw-252-AUC-COMPARISON.pdf}}
\subfigure[DMST (AUC$^\ast$)]{\includegraphics[trim=0.0cm 0.5cm 0.0cm 1.3cm, clip=true, width=0.416875\textwidth]{fig/AUC-MST/252/only_network-sw-252-AUC-IMPROVMENT-COMPARISON.pdf}}
\caption{\textbf{Machine learning AUC and AUC$^\ast$ for DAG, DTN and DMST network filtering methods.} Panels (a), (c) and (e) present the machine learning AUC measure and its standard error for $h$ trading weeks ahead ($1 \leq h \leq 20$). Panels (b), (d) and (f) present the AUC improvement over the benchmark time-invariant method and its standard error.
Results for $L = 252$ trading days.}
\label{fig:auc-all-comparison}
\end{figure*}
Figure~\ref{fig:auc-all-comparison} presents the proposed method's AUC performance for $h$ trading weeks ahead ($1 \leq h \leq 20$) using the DAG, DTN and DMST network filtering methods. The AUC measure decreases as the time lag $h$ increases. We also compared our results against the benchmark time-invariant method TI, where the network $G(t)$ is used as the forecast of $G(t+h)$. We chose TI as the reference due to its superior performance over the other benchmark methods in the previous analysis. Moreover, we selected the TI method because it is derived from pair-wise correlation information, as described in Table~\ref{tab:linkfeatures}. The AUC$^\ast$ improvement is calculated as follows:
\begin{equation}
AUC^\ast = (AUC_m - 0.5) / (AUC_b - 0.5) - 1,
\end{equation}
\noindent where $AUC_m$ is the machine learning AUC and $AUC_b$ is the benchmark's AUC. This normalization measures the gain in ranking skill above the chance level of $0.5$; for instance, $AUC_m = 0.80$ against $AUC_b = 0.75$ yields $AUC^\ast = 0.30/0.25 - 1 = 0.20$, i.e., a $20\%$ improvement. Figures~\ref{fig:auc-all-comparison}(b),~\ref{fig:auc-all-comparison}(d) and~\ref{fig:auc-all-comparison}(f) present the AUC$^\ast$ improvement results and their standard errors for the DAG, DTN and DMST network filtering methods. The proposed method presents similar AUC results for all network filtering methods. Results using DAG, shown in Figure~\ref{fig:auc-all-comparison}(a), suggest that networks with fewer constituents have better AUC results. Figure~\ref{fig:auc-all-comparison}(b) shows that the highest AUC$^\ast$ improvement is for NASDAQ100, reaching almost $30\%$ for $h = 20$ weeks ahead. On the other hand, for the DTN method shown in Figure~\ref{fig:auc-all-comparison}(c), the best results are for FTSE100 and NIFTY50, while EUROSTOXX50 is the most distinct result. The biggest AUC$^\ast$ improvement related to DTN, shown in Figure~\ref{fig:auc-all-comparison}(d), is for NASDAQ100 and NIFTY50, reaching almost $40\%$. Results shown in Figure~\ref{fig:auc-all-comparison}(e) are related to the DMST network filtering method and show a similar AUC decay for all markets, with DAX30 as the best result. Interestingly, the AUC$^\ast$ improvement shown in Figure~\ref{fig:auc-all-comparison}(f) presents similar curves for the NIFTY50 and HANGSENG50 markets. Results show that the AUC$^\ast$ improvement for NIFTY50 and HANGSENG50 increases until approximately $h = 9$, achieving almost $12\%$ for NIFTY50. After this maximum, the AUC$^\ast$ improvement decreases as $h$ increases. NASDAQ100 presents the best AUC$^\ast$ improvement, reaching almost $19\%$ for $h = 15$ trading weeks ahead.
\subsubsection{Model Interpretability}
In finance, particularly in portfolio management, the investment risk is calculated using the correlation among portfolio assets. This is the main information used to estimate risk and, given its importance in financial analyses, we also explore it as an input feature for market structure forecasting. However, we want to measure how the topology of the network helps forecast the future network itself. In other words, we are interested in evaluating the importance of non pair-wise correlation features for forecasting the market structure. As described in Section~\ref{sec:network-based-features}, we separated the feature set into two subsets: pair-wise correlation features and non pair-wise correlation features. After constructing the boosted trees in the XGBoost model, we can estimate the importance of each individual attribute.
The importance of an attribute is related to the number of times it is used to create relevant split decisions, i.e., split points that improve the performance metric~\cite{hastie2009elements}. For each market index, we calculate the average and standard error of the aggregate importance of pair-wise correlation and non pair-wise correlation features. Figure~\ref{fig:importance} presents results related to the importance of non pair-wise correlation features, considering the network filtering methods DAG, DTN and DMST and $L \in \lbrace 126, 252, 504 \rbrace$ trading days as the rolling window size. It is important to note that the importances of the two feature subsets add up to $1$.
\begin{figure*}[t!]
\centering
\subfigure[DAG ($L = 126$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/only_network-sw-126-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DAG ($L = 252$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/only_network-sw-252-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DAG ($L = 504$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/only_network-sw-504-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DTN ($L = 126$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/Threshold/only_network-sw-126-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DTN ($L = 252$)]{\includegraphics[trim=0.1cm 6.1cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/Threshold/only_network-sw-252-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DTN ($L = 504$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/Threshold/only_network-sw-504-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DMST ($L = 126$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/MST/only_network-sw-126-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DMST ($L = 252$)]{\includegraphics[trim=0.1cm 0.9cm 0.1cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/MST/only_network-sw-252-IMPORTANCE-COMPARISON.pdf}}
\subfigure[DMST ($L = 504$)]{\includegraphics[trim=0.1cm 6.1cm 0.3cm 0cm, clip=true, width=0.30\textwidth]{fig/feature-importance/MST/only_network-sw-504-IMPORTANCE-COMPARISON.pdf}}
\caption{\textbf{Importance of non pair-wise correlation features for DAG, DTN and DMST.} The figure shows the aggregate importance of non pair-wise correlation features using rolling window sizes $L \in \lbrace 126, 252, 504 \rbrace$ trading days and the DAG, DTN and DMST network filtering methods. Results show that the importance of these features increases with the time step $h$. The importance of non pair-wise correlation features for $L = 126$ trading days is higher than for $L = 252$ and $L = 504$ for all network filtering methods. The growth of the importance of this subset is consistent across all markets. An interesting result is that the importance of non pair-wise correlation features changes according to the network filtering method.}
\label{fig:importance}
\end{figure*}
Results presented in Figure~\ref{fig:importance} show that non pair-wise correlation features help forecast the future market structure under different network filtering methods. We observe that the importance of non pair-wise correlation features increases with $h$.
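As an illustration of this aggregation, the sketch below splits the normalised feature importances of a fitted gradient-boosted tree model into the two subsets. It assumes a model exposing scikit-learn-style \texttt{feature\_importances\_} (as \texttt{XGBClassifier} does); the feature names in the usage comment are hypothetical.
\begin{verbatim}
import numpy as np

def subset_importance(model, feature_names, pairwise_names):
    """Split a fitted tree ensemble's feature importances into the
    pair-wise correlation subset and its complement; by construction
    the two aggregates add up to 1."""
    imp = np.asarray(model.feature_importances_, dtype=float)
    imp = imp / imp.sum()  # guard against non-normalised importances
    pairwise = sum(w for name, w in zip(feature_names, imp)
                   if name in pairwise_names)
    return pairwise, 1.0 - pairwise

# Usage sketch with hypothetical feature names, given a fitted model:
# pw, non_pw = subset_importance(
#     model,
#     feature_names=["corr", "link_now", "degree_i", "betweenness_i"],
#     pairwise_names={"corr", "link_now"})
\end{verbatim}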
The importance of this subset of features also changes according to the network filtering method. Its contribution can be observed mainly for smaller $L$, such as $L = 126$, shown in Figures~\ref{fig:importance}(a),~\ref{fig:importance}(d) and~\ref{fig:importance}(g), where the importance for $h = 20$ reaches almost $80\%$ for NIFTY50 using the DAG method, $60\%$ for EUROSTOXX50 using DTN and almost $90\%$ for all markets using DMST. For the DMST method, shown in Figures~\ref{fig:importance}(g),~\ref{fig:importance}(h) and~\ref{fig:importance}(i), the importance of non pair-wise correlation features has a similar shape for the $L = 126$, $252$ and $504$ rolling window sizes. DAG results are shown in Figures~\ref{fig:importance}(a),~\ref{fig:importance}(b) and~\ref{fig:importance}(c). For short $h$ values, non pair-wise correlation attributes do not add much information when compared to pair-wise correlation features. However, the importance of these features rapidly increases with the time step $h$, suggesting that these attributes can be more useful than pair-wise correlation attributes for long-horizon forecasting exercises, particularly for short rolling window sizes. For $L = 252$ and $L = 504$, non pair-wise correlation features have less importance in forecasting networks modeled using the DAG and DTN network filtering methods. Considering the DMST results, the importance of non pair-wise features rapidly increases even for short $h$ values. This behavior is different from DAG and DTN. A possible explanation is the low persistence of trees, as shown in Figure~\ref{fig:cross-similarity-dmst}. Thus, network features are able to add more information to the ML model when compared to pair-wise correlation features.
\subsection{Dynamic Financial Networks}
\label{sec:dynamic-financial-networks}
There are many methods in the literature to model financial market structure. Some of the most commonly used are correlation-based networks and network filtering methods~\cite{marti2021review}. Network filtering methods allow prompt and temporal analysis of the market structure by exploring market data snapshots to model financial networks that represent the topology and the structure of the market. Using a rolling window approach, we can take snapshots in each time window of arbitrary length, allowing temporal analysis of the market evolution~\cite{musmeci2014clustering}; the resulting sequences are also called dynamic or temporal networks. Examples of the most common methods include the Minimal Spanning Tree approach~\cite{Mantegna1999}, the Planar Maximally Filtered Graph~\cite{Tumminello26072005}, the Directed Bubble Hierarchical Tree~\cite{song2012hierarchical}, asset graphs~\cite{onnela2003} and other approaches based on threshold networks~\cite{onnela2004clustering}. In this study, we investigate three different network filtering methods to estimate financial market structure: \textit{(i)} Dynamic Asset Graph; \textit{(ii)} Dynamic Threshold Networks and \textit{(iii)} Dynamic Minimal Spanning Tree. We explore these three methods due to their importance for financial analysis, considering the vast literature~\cite{onnela2003dynamic,onnela2003dynamics,meng2014systemic,mantegna1999introduction,onnela2003,Yang2008,onnela2004clustering} that uses them to study different characteristics of the structure of financial networks. These methods estimate an asset distance matrix through co-movement metrics of daily price returns.
Let $P(t)$ be the closing price of an asset at day $t$. We consider assets' daily log-returns $R(t) = \log{P(t)} - \log{P(t-1)}$. First, we calculate a distance matrix that measures the co-movement of daily log-returns~\cite{Mantegna1999}, defined as
\begin{equation}
\label{eq:distance}
D_{i,j}(t) = \sqrt{2(1 - \rho_t(i,j))},
\end{equation}
\noindent where $\rho_t(i,j)$ is the Pearson's correlation coefficient between the time series of log-returns of assets $i$ and $j$ at time $t$, $\forall i,j \in V$, where $V$ is the set of assets. The distance matrix is constructed by dividing the returns time series $R(t)$ into rolling windows of size $L$ trading days with $\delta T$ trading days between two consecutive windows (time-step). The choice of window width $L$ and window time-step $\delta T$ is arbitrary, and it is a trade-off between having an analysis that is either too dynamic or too smooth~\cite{tumminello2007correlation}: the smaller the window width and the larger the window step, the more dynamic the data are. We report results for $L \in \lbrace 126, 252, 504\rbrace$ and $\delta T = 5$ trading days. A dynamic financial network is defined as a temporal network
\begin{equation}
W = \langle V, E_1, \ldots, E_T : E_t \subseteq V \times V, \, \forall t \in \{1, \ldots, T\} \rangle,
\end{equation}
\noindent where vertices $i \in {V}$ correspond to the assets of interest. For every pair $\langle i, j \rangle$ at time window $t$, $\forall i,j \in {V}$ $\vert$ $i \neq j$, there is a corresponding edge $(i,j)_t \in {E_t}$, and every edge has a weight $w_{i, j}(t) = D_{i, j}(t)$. Considering the distance matrix $D_{i,j}(t)$ previously defined, we can apply a network filtering method in order to create dynamic networks. The three methods evaluated in this work are described in the following subsections.
\subsubsection{Dynamic Asset Graph (DAG) }
A Dynamic Asset Graph~\cite{onnela2003} is a type of filtered financial network modeled by first ranking edges in ascending order of weights $w_1(t), w_2(t), ... , w_{N(N-1)/2}(t)$. The resulting graph is obtained by selecting the edges with the strongest connections, i.e., the smallest distances. The number of edges kept is arbitrary. Here, we select edges with weights in the top quartile, i.e., $w_1(t), w_2(t), ... , w_{\floor{N(N-1)/8}}(t)$, as proposed in Souza et al.~\cite{SOUZA2019122343}. The main idea of this method is to identify the smallest distances in the stock market.
\subsubsection{Dynamic Threshold Networks (DTN) }
Considering the correlation coefficients $\rho_t(i,j)$ underlying the distance matrix $D(t)$ defined in Equation (\ref{eq:distance}), we create a filtered adjacency matrix $A$ to construct the financial network using the following rule~\cite{Yang2008,onnela2004clustering}:
\begin{equation}
A_{i,j}(t) = \left\{ \begin{array}{lr} 1, & \left| \rho_t(i,j) \right| \ge r_c\\ 0, & \left| \rho_t(i,j) \right| < r_c \end{array} \right.
\end{equation}
\noindent where assets $i,j \in V$ and $\forall (i,j)_t \in E_t$. The critical value $r_c$ converts the correlation matrix into an undirected network, whereby $A_{ij}(t) = 1$ and $A_{ij}(t) = 0$ represent the existence and absence of an edge between $i$ and $j$ at time window $t$, respectively. We fixed $r_c$ at $0.65$ because below this value the network characteristics are submerged in large fluctuations~\cite{Yang2008}. It is important to observe that the DTN method can produce disconnected graphs and that the number of edges is dynamic.
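To make the construction of the snapshots concrete, the sketch below derives the distance matrix of Equation~(\ref{eq:distance}) from a window of daily closing prices and applies the DAG and DTN filters, together with the DMST filter described in the next subsection. It assumes the prices are in a \texttt{pandas} DataFrame with one column per asset; the names \texttt{snapshot\_networks} and \texttt{top\_frac} are illustrative.
\begin{verbatim}
import numpy as np
import pandas as pd
import networkx as nx

def snapshot_networks(prices, t_end, L=252, r_c=0.65, top_frac=0.25):
    """Build DAG, DTN and DMST snapshots from the window of L trading
    days ending at row t_end of a prices DataFrame."""
    returns = np.log(prices).diff().iloc[t_end - L:t_end]  # R(t)
    rho = returns.corr()                        # Pearson correlations
    dist = np.sqrt(2.0 * (1.0 - rho))           # D = sqrt(2(1 - rho))

    full = nx.from_pandas_adjacency(dist)       # weighted complete graph
    full.remove_edges_from(nx.selfloop_edges(full))  # safety only
    ranked = sorted(full.edges(data="weight"), key=lambda e: e[2])

    # DAG: keep the shortest-distance edges (top quartile).
    dag = nx.Graph()
    dag.add_nodes_from(rho.columns)
    dag.add_edges_from((u, v) for u, v, _
                       in ranked[:int(len(ranked) * top_frac)])

    # DTN: unweighted edge wherever |rho_ij| clears the threshold r_c.
    dtn = nx.Graph()
    dtn.add_nodes_from(rho.columns)
    dtn.add_edges_from((u, v) for u, v, _ in ranked
                       if abs(rho.loc[u, v]) >= r_c)

    # DMST: minimal spanning tree of the complete distance graph
    # (Kruskal), as described in the next subsection.
    dmst = nx.minimum_spanning_tree(full, algorithm="kruskal")
    return dag, dtn, dmst
\end{verbatim}
Ranking the edges once and reusing the ranking keeps the DAG and DTN filters consistent with the same underlying correlation window.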
In general, the main goal of the DTN method is to identify pairs of assets that are highly correlated, i.e., with correlation above the threshold $r_c$. This differs from DAG, where pairs with a correlation value lower than $r_c$ can still be added to the network.
\subsubsection{Dynamic Minimal Spanning Tree (DMST) }
We create a Dynamic Minimal Spanning Tree~\cite{Mantegna1999} based on the smallest asset distances in the previously defined matrix $D(t)$. We use Kruskal's algorithm to identify the Minimal Spanning Tree (MST) in the fully connected graph given by $D$ at time $t$. The number of edges is fixed at $N - 1$, where $N$ is the number of assets. This method provides the smallest total distance interconnecting the market, producing the minimal market structure that connects all assets.
\section{Introduction}
Multi-asset financial analyses, particularly optimal portfolio selection and portfolio risk management, traditionally rely on the usage of a covariance matrix representative of market structure, which is commonly assumed to be time invariant. Under this assumption, however, non-stationarity~\cite{1742-5468-2012-07-P07025,Morales20136470} and long-range memory~\cite{Cont2005} can lead to misleading conclusions and spoil the ability to explain future market structure dynamics. Empirical analyses of networks in finance have been used successfully to study market structure dynamics, particularly to explain market interconnectedness from high-dimensional data~\cite{Mantegna1999,Tumminello10421,IORI2018637,marti2021review}. Under this approach, market structure is modeled as a network whose nodes represent different financial assets and edges represent one or many types of relevant relationships among those assets. There is a vast literature applying financial networks to descriptive analysis of market and portfolio dynamics, including market stability~\cite{morales2012dynamical}, information extraction~\cite{song2008analysis}, asset allocation~\cite{pozzi2013spread,Mineo2018} and dependency structure~\cite{Mantegna1999,Tumminello201040,musmeci2014clustering,song2012hierarchical,musmeci2017multiplex}. However, there is little research on the application of financial networks to market structure forecasting. Recent research on market structure inference makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance matrix~\cite{PhysRevE.94.062306}, achieving computationally efficient results. In a later study~\cite{SOUZA2019122343}, the authors forecast market structure based on a model that uses a principle of link formation by triadic closure in stock market networks. Spelta~\cite{spelta2017financial} proposed a method to predict abrupt market changes, inferring the future dynamics of stock prices by predicting future distances between them, using a tensor decomposition technique. Musmeci et al.~\cite{Musmeci2016} proposed a new tool to predict future market volatility using correlation-based stock networks, meta-correlation and logistic regression. Park et al.~\cite{park2020link} analyzed the evolution of the Granger causality network of global currencies and proposed a link prediction method incorporating the squared eta of the causality directions of two nodes as the weight of future edges. To build the causality network, they used the effective exchange rate of $61$ countries and showed that the predictive capacity of their model outperforms other static methods for predicting links.
Other related work~\cite{castilho2019weighted} proposed a model for predicting links in weighted financial networks, which was used to define input variables for the portfolio management problem, increasing the financial return of the investment. In this article, financial market structure forecasting is formulated as a link prediction problem where we estimate the probability of adding or removing links in future networks. To tackle this problem, we developed a machine learning-based model that uses node- and link-specific financial network features to forecast stock-to-stock links based on past market structure. Applying machine learning algorithms to the decision-making process in stock markets is not a recent endeavor~\cite{trippi1992neural}. An increasing number of applications use machine learning-based models to predict the behavior of price time series~\cite{long2019deep}, forecast volatility~\cite{liu2019novel}, perform sentiment analysis for investment~\cite{pagolu2016sentiment} and generate automatic trading rules~\cite{potvin2004generating}. This paper provides a set of empirical experiments designed to address the following research questions:
\begin{enumerate}
\item To what extent can dynamic financial networks help forecast stock market correlation structure?
\item How do financial network topology features perform relative to traditionally used pair-wise correlation data to forecast stock market structure?
\item How does the predictability of market structure vary across multiple financial markets for the proposed models?
\end{enumerate}
Findings can be particularly useful to improve portfolio selection and risk management, which commonly rely on a backward-looking correlation matrix to estimate portfolio risk. To the best of our knowledge, this is the first study that combines financial network features and machine learning to forecast stock market structure. The remainder of this paper is organized as follows: Section~\ref{sec:material-and-methods} describes the materials and methods used in the experiments; Section~\ref{sec:results-and-discussion} presents a descriptive analysis of the temporal stock networks and a predictive analysis of market structure forecasting; and Section~\ref{sec:conclusion} draws the conclusions.
\section{Stock Market Structure and Network Prediction}
In~\cite{Mantegna1999}, the authors introduce one of the most commonly used methods to perform structural and topological analysis of financial markets, where assets are represented as nodes and edges represent the relationship between assets based on correlation measures between the time series of their log-returns. It is used in several studies, including~\cite{gopikrishnan2000scaling,onnela2003dynamics,lee2012overall,bonanno2003topology,bonanno2004networks,eom2009topological}. In~\cite{onnela2004clustering}, the authors discuss how to select the relevant correlations from the correlation matrix among stocks and compare the results with random graphs. In addition to Pearson's correlation~\cite{feller1968introduction}, the most popular approach to measure the correlation between two assets, other approaches have been investigated to extract the market structure. In~\cite{yang2014cointegration}, the authors investigate the construction of financial networks using the co-integration coefficient between the main indices of world financial markets~\cite{granger1981some,johansen1990maximum}.
In~\cite{tabak2010topological}, topological properties of financial networks in the Brazilian stock market are assessed using a Minimum Spanning Tree algorithm, employing the correlation matrix of asset price variations across several sectors. These studies often describe the stock market structure using graphs~\cite{bonanno2004networks,lee2012overall}. In~\cite{spelta2017financial}, the author proposed a method to predict abrupt market changes by inferring the forthcoming dynamics of stock prices through the prediction of future distances between them, using a tensor decomposition technique. Other areas beyond the stock market transform time series into complex networks for different analyses, such as time series clustering~\cite{ferreira2016time} and time series link prediction~\cite{Yang2008}. Another subject related to this work is stock market structure prediction. We addressed this problem as a link prediction task. For this task, previously known network information is used to find connections that may appear in the future or to uncover hidden connections. This predictive task is investigated in many real problems, mainly involving social networks~\cite{martinez2017survey,grover2016node2vec}. Some authors separate the types of link prediction according to the method, such as similarity-based methods, probabilistic and statistical methods, and algorithmic methods, which include methods based on Machine Learning (ML)~\cite{marti2017review}. Other authors differentiate the type of prediction according to the specific nature of the problem, such as dynamic link prediction or hidden link prediction, as well as the application of link prediction, such as recommendation in social networks or network completion~\cite{wang2015link}. ML-based methods treat the link prediction problem as a binary classification task, whose classes are linked and not linked. In~\cite{al2006link}, the authors investigate the use of ML to identify possible links in future networks. In~\cite{da2012time}, the authors investigate the prediction of links using time series forecasting over similarity metrics. In a real stock market scenario, relationships between assets can not only appear over time, but can also disappear. Thus, assets can have dynamic interactions. A sequence of dynamic interactions over time introduces another dimension to the challenge of mining and predicting link structure, named temporal link prediction~\cite{dunlavy2011temporal}. Temporal networks are a specific type of dynamic networks in which time can be organized as a third-order tensor, or multi-dimensional array~\cite{wang2015link}. A common deficiency of these methods is keeping the links between nodes even when their relationship no longer exists. To address this deficiency, we propose a model to predict future market correlation structure using link- and node-based financial network features, differing from previous studies mainly in its ability to identify the removal of links over time.
\section*{Acknowledgements}
D.C. and A.C.P.L.F.C would like to thank CAPES, Intel and CNPq (grant 202006/2018-2) for their support.
\section*{Author contributions statement}
D.C. and T.T.P.S. developed the proposed model. D.C. and T.T.P.S. conceived and designed the experiments. D.C. and T.T.P.S. prepared figures and tables, implemented and carried out the experiments. All authors analyzed the results and wrote the manuscript. All authors reviewed the article.
\section*{Additional information}
\noindent \textbf{Competing Interests:} The authors declare no competing financial interests. \\
\noindent \textbf{Supplementary Information:} provided with this document as Supplementary Material.
\section{Materials and Methods}
\label{sec:material-and-methods}
\input{financial_networks}
\input{prediction}
\subsection{Machine Learning Based Approach}
\label{sec:machine-learning-based-approach}
In this section, we describe the proposed machine learning based approach to forecast stock market structure for a given market index. In this study, we address market structure forecasting as a network link prediction problem. Given snapshots of financial networks up to time $t$, we want to accurately predict the edges that will be present in the network at a given future time $t'$. We choose three times $t_0 < t < t'$ and provide an algorithm that accesses $W[t_0, t] = \langle V, E_{t_0}, \ldots, E_t \rangle$ to estimate the likelihood of edges being present in $W[t']$, where $t' = t + h$ and $h \in \lbrace 1, 2, \dots, 20 \rbrace$ trading weeks. Similarity-based methods and classifier-based methods are two of the most common approaches for link prediction~\cite{martinez2016survey}. In similarity-based methods~\cite{liben2007link}, the algorithm assigns a connection weight $score(x, y)$ to pairs of nodes $\langle x, y \rangle$, based on the input graph $G$, and then produces a list ranked in decreasing order of $score(x, y)$. These algorithms can be viewed as computing a measure of proximity or ``similarity'' between nodes $x$ and $y$. Common Neighbors, Jaccard Coefficient, Preferential Attachment, Adamic-Adar, and Resource Allocation are among the most popular local indices (node-based). Katz, Leicht-Holme-Newman, Average Commute Time, Random Walk, and Local Path represent global indices (path-based). While the local indices are computationally simple, the global indices may provide more accurate predictions. In classifier-based methods, link prediction is defined as a binary classification problem. Here, a feature vector is extracted for each pair of nodes and a $1/0$ label is assigned based on the existence/non-existence of that link in the network. Any similarity-based method could form the required feature vector for a supervised learning method~\cite{al2006link}. Afterwards, any conventional supervised learning algorithm can be applied to train a supervised link predictor. In this article, we applied a classifier-based method to forecast the financial market structure. Our approach uses financial network features as input to a machine learning model in order to create a link prediction method, as presented in Figure~\ref{fig:machine-learning}.
\begin{figure*}[ht!]
\centering
\includegraphics[trim=4.5cm 29.5cm 11.9cm 0.9cm, clip=true, width=0.7\textwidth]{fig/methodology/machine-learning-3-3.pdf}
\caption{\textbf{Building the machine learning dataset.} We calculate features for each node ranging from $1$ to $N$, where $N$ is the number of assets. We apply a pairwise concatenation of node and link features as input variables for the link prediction, while edges of the network at time $t+h$ are used as the target variable, where $h$ is the number of trading weeks.}
\label{fig:machine-learning}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[trim=0.5cm 21.5cm 9.7cm 13cm, clip=true, width=0.9\textwidth]{fig/methodology/machine-learning-3-3.pdf}
\caption{\textbf{Train and test sets used to induce the machine learning model.} Machine learning models were trained and tested using a rolling window approach. Considering $L$ as the size of the log-return time series and $t$ as the current time, we create the train set using data from $t-k$ to $t-1$ and the test set using data from $t$. The target of the supervised learning is the network $G(t+h)$, where $h$ is the number of trading weeks. After training and testing the machine learning model, the time-step $\delta T$ is used to move the rolling window forward, in order to restart the process and re-train the machine learning model. The train set includes data from $1$ March $2005$ to $30$ May $2007$ and the test set has data from $30$ May $2007$ to $18$ December $2019$.}
\label{fig:machine-learning-train-test}
\end{figure*}
Figure~\ref{fig:machine-learning} presents the process used to create the machine learning database. Assuming $i$ and $j$ as two arbitrary nodes ranging from $1$ to $N$ and $t$ as the current time, an instance of the dataset used in the machine learning algorithm has the following predictive attributes: (a) node-level features of $i$; (b) node-level features of $j$; (c) link-level features of $(i,j)$. As previously described, the target of the supervised machine learning model is to forecast the existence of links in a network $G(t + h)$, where $h = 1, 2, \dots, 20$ trading weeks. Figure~\ref{fig:machine-learning} illustrates how we build instances for the machine learning model, exemplified with the snapshot at time $t$. We split the dataset between train and test sets taking into account the temporal sequence of the data. The train set includes data produced in the period from $1$ March $2005$ to $30$ May $2007$ and the test set has data from $30$ May $2007$ to $18$ December $2019$. Figure~\ref{fig:machine-learning-train-test} presents an illustration explaining how we created the train and test sets. Machine learning models were trained and tested using a rolling window approach. Considering $L$ as the size of the log-return time series, $t$ as the current time and $t - k < t < t + h$, we create the train set using network features from $G(t - k)$, where $k = 1, 2, \ldots, 30$. The test set contains data from the current network $G(t)$, in which $G(t + h)$ is the target, where $h = 1, 2, \dots, 20$ trading weeks. After training and testing the machine learning model, we move the rolling window forward taking into account the time-step $\delta T = 5$ trading days ($1$ trading week) between two consecutive executions (see Supplementary Material, Section S.$1$ for further details). To assess the information rate that a machine learning model can extract from the feature set, we applied the XGBoost~\cite{chen2016xgboost} algorithm. In this experiment, the algorithm induces a predictive model for stock market structure forecasting. XGBoost is a fast, highly effective, interpretable and widely used machine learning model. Further information regarding the experimental setup is described in the Supplementary Material, Section S.$2$.
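The following is a minimal end-to-end sketch of this classifier-based pipeline, assuming \texttt{networkx}, \texttt{scikit-learn} and \texttt{xgboost} are available. The helper \texttt{pair\_features} and its reduced feature subset are illustrative, not the full feature sets given in the next subsection, and the random snapshots merely stand in for real market networks.
\begin{verbatim}
import numpy as np
import networkx as nx
from itertools import combinations
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

def pair_features(g):
    """One row per node pair: node-level features of i and j plus
    link-level features of (i, j); a reduced illustrative subset."""
    deg = dict(g.degree())
    btw = nx.betweenness_centrality(g)
    pairs = list(combinations(sorted(g.nodes()), 2))
    jac = {(u, v): p for u, v, p in nx.jaccard_coefficient(g, pairs)}
    rows = [[deg[i], btw[i], deg[j], btw[j],
             len(list(nx.common_neighbors(g, i, j))),
             jac[(i, j)], int(g.has_edge(i, j))]
            for i, j in pairs]
    return np.array(rows), pairs

# Toy usage: three random snapshots stand in for G(t-1), G(t), G(t+h).
g0, g1, g2 = (nx.gnp_random_graph(30, 0.2, seed=s) for s in (0, 1, 2))
X_tr, pairs = pair_features(g0)                               # features at t-1
y_tr = np.array([int(g1.has_edge(i, j)) for i, j in pairs])   # labels at t
X_te, _ = pair_features(g1)                                   # features at t
y_te = np.array([int(g2.has_edge(i, j)) for i, j in pairs])   # labels at t+h
model = XGBClassifier(n_estimators=100, max_depth=3).fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
\end{verbatim}
Including the current link indicator among the inputs lets the classifier learn both link formation and link removal, rather than only persistence.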
\subsubsection{Network Features}
\label{sec:network-based-features}
As previously mentioned, we proposed an approach for market structure forecasting based on supervised machine learning. In order to provide information to train this supervised method, we extracted a set of network features at node and link level. These features are used as input to the machine learning model. We summarize the network features as follows:
\begin{itemize}
\item \textbf{Node-Level Features} assess the position of a node within the overall structure of a given graph $G(V,E)$~\cite{oliveira2012overview}. Table~\ref{tab:nodefeatures} presents the set of node-level features related to node/stock $i \in V$ used as input to the machine learning model.
\item \textbf{Link-Level Features} examine both the contents and patterns of relationships in a given graph $G(V,E)$ and measure the implications of these relationships~\cite{oliveira2012overview}. Table~\ref{tab:linkfeatures} presents the set of link-level features related to a pair of nodes $(i,j)$ used as input to the machine learning model.
\end{itemize}
Researchers in finance, particularly in portfolio management, commonly use asset correlation in important use cases, such as risk management. Given the importance of this information in financial analyses, we also explore it as an input feature for market structure forecasting. However, we are interested in analyzing how topological information helps to forecast the market structure itself. For this reason, we separated the feature set into two distinct subsets, labeled according to their source of information: \textit{(i)} pair-wise correlation features, which are attributes based on asset correlation and not derived from any other network information, and \textit{(ii)} non pair-wise correlation features, which are attributes derived from the network topology. While pair-wise correlation features are traditionally used in financial analysis, the importance of non pair-wise correlation features to forecast market structure is a research question investigated in this work. Thus, we can compare their information gain in market structure forecasting. In Table~\ref{tab:nodefeatures}, all features are non pair-wise correlation attributes. In Table~\ref{tab:linkfeatures}, the pair-wise correlation features are marked with ($^\ast$).
\begin{table}[ht!]
\centering
\caption{\textbf{Node-Level Features:} Features are calculated for node $i$, $\forall \text{ } i \in V$, for a given graph $G(V,E)$. Consider $N_i$ as the set of adjacent vertices (neighborhood) of node $i$.
This set contains only non pair-wise correlation features; we write $\vert i \vert = \vert N_i \vert$ for the degree of node $i$.}
\begin{tabular}{C{4.0cm} C{12.0cm}}
\hline
\multicolumn{1}{c}{\textbf{Name}} & \textbf{Definition} \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Degree}} & $$deg(i) = \vert i \vert = \vert N_i \vert$$ \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Weighted Node Degree}} & $$deg_w(i) = \sum_{j \in N_i }{w_{ < i, j > }},$$ where $w_{ < i, j > }$ is the weight of the edge $e(i,j)$ \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Average Neighbor Degree}} & $$avg (i) = \frac{ \sum_{j \in N_i }{ \vert j \vert } }{ \vert i \vert}$$ \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Propensity of $i$ to Increase its Degree}} & $$\gamma (i) = \frac{\vert i \vert}{deg_w(i)} $$ \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Betweenness}} & $$b(v) = \sum_{i,j \in V \setminus v}{ \frac{ \sigma_{ij}(v)}{\sigma_{ij}} },$$ where $\sigma_{ij}(v)$ is the number of shortest paths between $i$ and $j$ passing through node $v$ and $\sigma_{ij}$ is the total number of shortest paths from $i$ to $j$ \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Closeness}} & $$nc(i) = \frac{n - 1}{ \sum_{j \in V \setminus i}{d(i,j)}}, $$ where $d(i,j)$ represents the distance between $i$ and $j$ and $n$ is the number of nodes in the graph \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Eigenvector}} & $$ne(i) = x_i = \frac{1}{ \lambda } \sum_{j=1}^{n}{d_{ij}x_j}, $$ where $d_{ij}$ represents an entry of the adjacency matrix $D$ ($0$ or $1$), $\lambda$ denotes the largest eigenvalue, and $x_i$ and $x_j$ denote the centrality of nodes $i$ and $j$, respectively \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Node Clustering Coefficient}} & $$cc(i) = \frac{2 \left| e_{jk} \right| }{\vert i \vert ( \vert i \vert - 1 )} : j, k \in N_i, e_{jk} \in E$$ \\ \hline
\end{tabular}%
\label{tab:nodefeatures}%
\end{table}%
\begin{table}[ht!]
\centering
\caption{\textbf{Link-Level Features:} Features are calculated for each pair of nodes $i$ and $j$, $\forall \, i, j \in V$, $i \neq j$, for a given graph $G(V,E)$. Pair-wise correlation features are marked with $(^\ast)$, while the remaining features are based on non pair-wise correlation information. Consider $N_i$ and $N_j$ as the sets of adjacent vertices of nodes $i$ and $j$, respectively.}
\begin{tabular}{C{4.0cm} C{12.0cm}}
\hline
\multicolumn{1}{c}{\textbf{Name}} & \textbf{Definition} \\ \hline
\multicolumn{1}{C{4.0cm}}{\textit{Link Existence in $G(t)$} ($^\ast$)} & \begin{equation*} E(i,j) = \begin{cases} 1, & \quad \text{link exists}, \\[0ex] 0, & \quad \text{link does not exist}.
\end{cases} \end{equation*} \\ \hline \multicolumn{1}{C{4.0cm}}{\textit{Correlation Value} ($^\ast$)} & $$C (i,j) = \rho_{ij},$$ where $\rho_{ij}$ is Pearson’s correlation coefficient between the time series of log-returns of assets $i$ and $j$ \\ \hline \multicolumn{1}{C{4.0cm}}{\textit{Common Neighbors}} & $$CN (i,j) = \vert N_i \cap N_j \vert$$ \\ \hline \multicolumn{1}{C{4.0cm}}{\textit{Jaccard Coefficient}} & $$JC (i,j) = \frac{\vert N_i \cap N_j \vert}{\vert N_i \cup N_j \vert}$$ \\ \hline \multicolumn{1}{C{4.0cm}}{\textit{Adamic-Adar Coefficient}} & $$AA (i,j) = \sum_{k \in N_i \cap N_j }{ \frac{1}{ \log{\vert N_k \vert} } }, $$ where $N_k$ is the set of adjacent vertices of node $k$\\ \hline \multicolumn{1}{C{4.0cm}}{\textit{S{\o}rensen-Dice Coefficient}} & $$SDC (i,j) = \frac{2 * \vert N_i \cap N_j \vert}{\vert i \vert + \vert j \vert}$$ \\[0ex] \hline \multicolumn{1}{C{4.0cm}}{\textit{Edge Betweenness}} & $$B (i,j) = \sum_{s,t \in V}{ \frac{ \sigma_{st}(e)}{\sigma_{st}} },$$ where $\sigma_{st}(e)$ is the number of shortest paths between $s$ and $t$ crossing the edge $e = (i,j)$ and $\sigma_{st}$ is the total number of shortest paths from $s$ to $t$ \\ \hline \multicolumn{1}{C{4.0cm}}{\textit{Same Community}~\cite{blondel2008fast}} & \begin{equation*} SC(i,j) = \begin{cases} 1, & \quad \text{if $i$ and $j$ $\in$ same community}, \\ 0, & \quad \text{if $i$ and $j$ $\notin$ same community}. \end{cases} \end{equation*} \\ \hline \multicolumn{1}{C{4.0cm}}{\textit{Preferential Attachment}} & $$PA(i,j) = \vert i \vert * \vert j \vert,$$ where $\vert i \vert$ and $\vert j \vert$ represent the node degrees of vertices $i$ and $j$\\ \hline \end{tabular}% \label{tab:linkfeatures}% \end{table}% \subsubsection{Model Evaluation} We calculate the \textit{Area Under the ROC curve} (AUC) to evaluate the predictive performance of the link prediction methods. This metric is widely applied to binary classification and unbalanced problems and ranges from $0.5$ to $1$, where $0.5$ represents a random naive algorithm and $1$ represents a perfect prediction. The AUC measure gives a summary metric for the algorithm’s overall performance with different prediction set sizes, while a detailed look into the shape of the ROC curve reveals the predictive performance of the algorithm at each prediction set size~\cite{huang2009time}. To verify the performance of the proposed method, we compared it against the following similarity-based methods commonly used in the literature for link prediction, separated into three categories as follows~\cite{mutlu2019review}: \begin{enumerate} \item \textbf{Local Similarity Methods} \begin{itemize} \item Common Neighbors~\cite{liben2007link} (CN): This is a simple and effective link prediction method based on the common neighbors shared by two nodes. Pairs of nodes with a high number of common neighbors tend to establish a link; \item Preferential Attachment~\cite{barabasi1999emergence} (PA): This method assumes that new links preferentially form between nodes with higher degrees rather than between nodes with lower degrees; \item Jaccard Coefficient~\cite{mutlu2019review} (JC): This method is based on the Jaccard similarity coefficient, taking into account the number of common neighbors shared by two nodes, normalized by the total number of neighbors of both nodes; \item Adamic-Adar~\cite{adamic2003friends} (AA): This method is also based on the common neighbors shared by two nodes. 
Instead of using the raw number of common neighbors as in CN, it is defined as the sum of the inverse of the logarithmic degree of each shared neighbor. \end{itemize} \item \textbf{Quasi-Local Similarity Method} \begin{itemize} \item Local Path Index~\cite{zhou2009predicting} (LP): Similar to CN, but this method also uses information from paths of length $2$ and $3$ instead of only the direct neighbors shared by two nodes. \end{itemize} \item \textbf{Global Similarity Method} \begin{itemize} \item Random Walk with Restart~\cite{brin1998anatomy} (RW): This method is based on a random walk, a Markov chain that starts from a given node and iteratively moves to a randomly selected neighbor. The restart variant considers the probability that a random walker starting from node $x$ visits node $y$ and returns to the initial node $x$~\cite{mutlu2019review}. \end{itemize} \end{enumerate} In addition to these methods, we included a naive Time Invariant (TI) baseline benchmark in our experiments. This algorithm uses the link occurrence in graph $G(t)$ as the prediction of link occurrence in graph $G(t+h)$, assuming that the market structure is time invariant. This assumption is traditionally used in risk management algorithms, which commonly rely on a backward-looking covariance matrix to estimate portfolio risk~\cite{markowitz1952,SOUZA2019122343}. A minimal sketch of how some of these baselines and their AUC scores can be computed is given below.
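The following Python sketch illustrates how several of the similarity-based baselines and the AUC evaluation can be computed with off-the-shelf tools. It assumes hypothetical graph variables (\texttt{G\_t} for the network at time $t$ and \texttt{G\_future} for the network at $t+h$) and is not the exact pipeline used in our experiments.

\begin{verbatim}
import networkx as nx
from sklearn.metrics import roc_auc_score

def baseline_scores(G_t, pairs):
    # pairs: list of candidate node pairs. These similarity scores on
    # G(t) also serve as link-level input features for the supervised
    # model (cf. the link-level features table).
    jc = {(u, v): p for u, v, p in nx.jaccard_coefficient(G_t, pairs)}
    aa = {(u, v): p for u, v, p in nx.adamic_adar_index(G_t, pairs)}
    pa = {(u, v): p for u, v, p in nx.preferential_attachment(G_t, pairs)}
    cn = {(u, v): len(list(nx.common_neighbors(G_t, u, v)))
          for u, v in pairs}
    ti = {(u, v): int(G_t.has_edge(u, v)) for u, v in pairs}  # TI baseline
    return {"JC": jc, "AA": aa, "PA": pa, "CN": cn, "TI": ti}

def auc(scores, G_future):
    # AUC of one score dictionary against the links observed in G(t+h)
    pairs = list(scores)
    y_true = [int(G_future.has_edge(u, v)) for u, v in pairs]
    return roc_auc_score(y_true, [scores[p] for p in pairs])
\end{verbatim}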
\section{Introduction} \label{introduction} Transmission spectroscopy is a powerful method for studying exoplanetary atmospheres. During the past decade, space missions and ground-based surveys have provided numerous data on exoplanetary atmospheres \citep{Tinetti2007,Tsiaras2018,Seidel2019,Mikal-Evans2020}. In addition, new space telescopes will be launched in the current decade (the James Webb Space Telescope, JWST, and Ariel) whose goals will be not only the detection of molecular features, but also the estimation, with greater accuracy, of the atmospheric molecular abundances. However, a higher precision in the observations must be matched by an improvement of the theoretical and statistical tools. Several global climate models (GCMs) are available \citep{Showman2002,Showman2008,Menou2009,Wordsworth2011,LFC13b,Charnay_2015,Kataria2016,Drummond2016,TK19} that can describe very complex 3D atmospheric structures and might in principle give us deep insight into the physics of a planetary atmosphere. GCM simulations are a useful tool for characterizing, from a theoretical point of view, the chemical composition of an atmosphere and its physical properties \citep{Showman2008,Leconte2013,Guerlet2014,Venot2014,parmentier2018}. These simulations can also help validate the parameters inferred by retrieval procedures \citep{Irwin2008,Al-Refaie2019,molliere2019}. Finally, it is also possible to use the 3D structure of these simulations to compute transmission spectra, as discussed in \citet{caldas2019effects}. However, the computational cost of a 3D GCM simulation is very high. The usage of such simulations as forward models in Markov chain Monte Carlo (MCMC) retrieval procedures is therefore not currently considered. If we were to simplify the parameterization of the 3D structure of the planet (to avoid the cost of the full GCM simulation), the choice of the parameters and their number would still affect the computational cost of the retrieval. Moreover, retrieving too many parameters might be an issue: it would create many degeneracies in which the information would be lost, which would make the results difficult to interpret. Bayesian retrieval codes involve a critical trade-off between the precision of a model and its computational cost \citep{Al-Refaie2019,Waldmann2015a,Waldmann2015b,Line2013,Irwin2008}. To increase the reliability of the solution given by a Bayesian approach, it is crucial to be able to quickly generate a forward model without sacrificing meaningful physical phenomena that could lead to a significant spectral contribution. So far, models that were used to infer atmospheric parameters in transmission or emission spectroscopy within Bayesian frameworks have mainly relied on a one-dimensional structure. The use of such models was partly justified by the low precision of the available observational data \citep{Stevenson2014,Line2016,Tsiaras2018,Edwards-ares2020,pluriel-ares2020}. In some cases, 1D models are a good approximation because the features in the transmission spectra come from a thin annulus around the limb, so that the region probed is almost homogeneous \citep{Barstow2017,Tsiaras2018,Guilluy2021,Swain2021}. However, 1D atmospheric models will hardly explain the spectral shapes detected with the new generation of instruments because these shapes reveal physical and chemical effects due to the 3D geometry of the atmosphere \citep{caldas2019effects,Changeat_2019,MacDonald2020,pluriel2020strong}. 
In particular, \citet{pluriel2020strong} showed that the atmospheric parameters of ultra hot Jupiters retrieved using 1D models can be biased: the solution that fits the observation best could be very different from reality. In the case of ultra hot Jupiters, 1D retrieval codes cannot fit transmission spectra well because of the significant day-to-night thermal and chemical dichotomy. In these atmospheres, the 1D vertical assumption is no longer valid because the region that is probed extends significantly across the limb on both the day- and the nightside of the planet. The transmission spectra carry the signature of the absence of water on the dayside, due to thermal dissociation, and of its presence on the colder nightside, together with strong CO features (CO does not dissociate because of its stronger triple bond; \citealt{Lodders2002}). A 1D retrieval will try to fit the water on the nightside (where it is not dissociated) and will retrieve a colder temperature. It compensates for this low temperature by overestimating CO to try to fit the CO features. From this observation, the effect of the large difference between a hot temperature on the dayside and a cold temperature on the nightside may be expressed with a simpler model that is composed of only two dimensions: (1) a vertical dimension, and (2) an angular dimension following the star-observer axis. Nevertheless, there are two caveats to keep in mind for the transition from 1D to 2D retrievals. First, the computational time needed to converge using MCMC or nested sampling would be increased. Second, and more importantly, we need to be aware that increasing the number of parameters in these retrieval methods could bring more degeneracy than information because the parameter space that is to be explored becomes larger. Therefore we need a 2D geometry with the simplest possible parameter space, so that we can reproduce the 2D effects of the atmosphere without adding too many parameters that are to be retrieved. In this first paper of a series, we present our general method for computing transmission spectra using atmospheric simulations with a different number of spatial dimensions and validate the numerical tools that we have developed for this purpose. We first introduce a new open-source, documented version of Pytmosph3R\xspace \citep{caldas2019effects} that computes transmission spectra for atmospheres with up to three spatial dimensions. The code is very flexible and can use 3D time-varying atmospheric structures from a GCM as well as simpler, parameterized 1D or 2D structures. \sect{sec:application} shows some application examples, highlighting the benefits of realistically describing the complex structure of the atmosphere. To allow our approach to be used in a retrieval framework, we have implemented our 2D algorithm in TauREx\xspace \citep{Waldmann2015a,Al-Refaie2019,Changeat_2019}. We then demonstrate that our implementation is sufficiently fast to allow converging on a realistic retrieval solution in a reasonable amount of time. \section{Model description} \label{model_description} We discuss here the computation of transmission spectra in the case of 3D, 2D, or 1D simulations. A simple representation of each model is shown in \fig{models_dimensions}. An example of a 3D GCM simulation, representing WASP-121b\xspace, is included. It shows that the higher temperature on the dayside (on the left) affects the scale height, enlarging the atmosphere. We validate and study the performance of each configuration in Secs. 
\ref{model-validation} and \ref{model-time}. \begin{figure*}\centering \subfloat[WASP-121b\xspace (GCM)]{% \includegraphics[width=.24\textwidth,height=3cm,keepaspectratio]{wasp/wasp_121b_rays} \label{WASP} }\qquad \subfloat[3D]{% \includegraphics[width=.24\textwidth,height=3cm,keepaspectratio]{3D_planet_3D} \label{3D} }\qquad \subfloat[2D]{% \includegraphics[width=.24\textwidth,height=3cm,keepaspectratio]{3D_planet_2D} \label{2D} }\qquad \subfloat[1D]{% \includegraphics[width=.24\textwidth,height=3cm,keepaspectratio]{3D_planet_1D} \label{1D} }\qquad \caption{Visual representation of the dimensions of the models we considered, using an equatorial view seen from the east. The colors of \protect\subref{WASP} indicate the temperature (redder is hotter) at isobar levels. The higher temperature on the dayside (on the left) affects the scale height, enlarging the atmosphere. The 3D model \protect\subref{3D} follows a spherical (radius, latitude, longitude) coordinate system, while the 2D model \protect\subref{2D} uses a polar grid whose radial axis is the altitude and whose angular axis follows the solar altitude angle (which incidentally is also the angle between the zenith of the considered point and the zenith of the terminator). The 1D model \protect\subref{1D} simply relies on the altitude. } \label{models_dimensions} \end{figure*} \subsection{Three-dimensional case} \label{3D_case} The computation of the optical depth, and eventually of the transmission spectrum in the case of 3D simulations, has already been discussed in \citet{caldas2019effects}. We reintroduce the method and notations needed hereafter in the context of a new version of the implementation, \href{https://forge.oasu.u-bordeaux.fr/jleconte/pytmosph3r-public}{Pytmosph3R\xspace~2.0}, which is more robust and user friendly and whose documentation is available \href{http://perso.astrophy.u-bordeaux.fr/~jleconte/pytmosph3r-doc/index.html}{here}\footnote{\url{http://perso.astrophy.u-bordeaux.fr/~jleconte/pytmosph3r-doc/index.html} \label{pytmodoc} }. Global climate model simulations such as the LMDZ GCM \citep{Wordsworth2011} provide a description of physical properties such as pressure, temperature, and volume-mixing ratios of absorbing or scattering molecules and aerosols in a three-dimensional grid. The vertical dimension of this grid may rely on pressure levels, for example, while the horizontal grid relies on latitude and longitude coordinates. The number of pressure layers is noted $N_{p,\mathrm{lay}}$, the number of latitudes $N_{\mathrm{lat}}$, and the number of longitudes $N_{\mathrm{lon}}$. Pytmosph3R\xspace~2.0 offers several options to compute the volume-mixing ratio of all gases in each cell of the atmospheric grid: (1)~It can extract this information directly from the input simulation when it is present, (2)~it can interpolate from a separate table providing the mixing ratios on a pressure and temperature grid (e.g., when thermodynamical equilibrium is assumed), (3)~it can use analytical formulae (e.g., those of \cite{parmentier2018}, where thermal dissociation of key molecules is accounted for), (4)~it can call a chemistry module such as FastChem \citep{stock2018fastchem} to compute the abundances on the fly, and (5)~the users can specify their own constant mixing ratios. Other types of chemistry may be easily implemented by the users to adjust to the context of the simulation. 
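As a schematic illustration of option~(2), the following Python sketch interpolates pre-tabulated equilibrium mixing ratios onto the cells of the atmospheric grid. All names are hypothetical, and this is not the actual Pytmosph3R\xspace interface.

\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_vmr(logP_axis, T_axis, vmr_table):
    # vmr_table[k, l] is the tabulated volume-mixing ratio at
    # pressure 10**logP_axis[k] and temperature T_axis[l]
    return RegularGridInterpolator((logP_axis, T_axis), vmr_table,
                                   bounds_error=False, fill_value=None)

# Applied to every cell of (hypothetical) grid arrays P and T:
# vmr_h2o = make_vmr(logP_axis, T_axis, h2o_table)
# x_h2o = vmr_h2o(np.column_stack([np.log10(P.ravel()), T.ravel()]))
\end{verbatim}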
As discussed in \citet{caldas2019effects}, because i) isobaric surfaces are generally not iso-altitude surfaces and ii) altitude is the key variable in transmission geometry, the input simulation needs to be interpolated into a (radius, latitude, longitude) spherical coordinate system. The vertical axis of this new grid is discretized in $N_{lay}$ layers delimited by $N_{lay}+1 \equiv N_{\mathrm{lev}}$ points called levels. The number of latitudes and longitudes of the new grid is identical to that of the input grid. The coordinates of the points in this new system are noted $(\rho,\Lat_0,\Lon_0)$. The user can specify the position of the observer in this latitude-longitude system $(\varphi_\mathrm{0,obs},\lambda_\mathrm{0,obs})$. This can be used to study the spectral variations caused by the rotation of the planet during the transit, as discussed in Sec.~\ref{sec:application}. \label{rays_definition} \label{position_observer} The optical depth is computed for a set of rays that are parallel to the planet-observer axis. As described extensively in \citet{caldas2019effects}, each ray is uniquely defined by its intersection with the plane normal to the planet-observer axis and passing through the planet center (hereafter, the plane of the sky). In this plane, the rays are arranged on a polar grid with $N_{r}$ points along the impact parameter axis that are uniformly distributed between the surface of the planet and the top of the atmosphere, and $N_{\theta}$ points along the azimuthal coordinate that goes around the limb of the planet. Even though the number of radial points $N_{r}$ in this grid of rays may by definition be different from the number of layers $N_{lay}$, the two are ultimately connected. Adding too many rays per layer does not increase the precision of the model because the information contained in the simulation does not increase, while too few rays would mean losing some of this information. We use $N_{r} = N_{lay}$ in this paper and discuss the position of the rays in the layers in Sec.~\ref{tests1D}. This new version relies on \texttt{Exo\_k}\xspace \citep{leconte2020spectral} for the interpolation of the opacities (including regular molecular and atomic transitions, collision-induced absorptions, Rayleigh scattering by the gas, and Mie scattering by aerosols). As a result, the user can use either high-resolution cross sections or correlated-$k$ coefficient tables. \subsection{Two-dimensional case} \label{2D_case} To reduce the cost of the computation of a transmission spectrum, we may simplify the simulation by assuming symmetry around the star-observer axis. We may thus reduce the simulation to a 2D grid that includes this axis. This allows us to describe the day-to-night temperature differences of certain types of planets such as ultra hot Jupiters at a lower cost than a full 3D simulation, and with a better precision than 1D models \citep{pluriel2020strong}. Because this model is entirely new (although it is based on the same method as the 3D model), we describe the details of its computation here. \subsubsection{Input data} \label{input_simulation} The two-dimensional grid that we consider in this paper is centered on the planet center and contains the star-observer axis. Because atmospheric flows tend to follow isobars, the vertical dimension follows pressure levels, as is usually done in GCM simulations. The angular dimension follows the solar altitude angle, and the star is placed in the equatorial plane. 
This 2D model could be applied to an arbitrarily complex 2D temperature field, but we rely on the following thermal structure in this paper. The temperature is defined for every level and angle $(i,\alpha^*)$ using \begin{equation} \label{temperature2D} T(P, \alpha^*) = \begin{cases} T_{deep}, & P > P_{iso},\\ T_{day}, & P < P_{iso} \text{ and } 2\alpha^* \geq \beta,\\ T_{night}, & P < P_{iso} \text{ and } 2\alpha^* \leq -\beta,\\ T_{night} + (T_{day} - T_{night}) \dfrac{\alpha^*+\beta/2}{\beta}, & P < P_{iso} \text{ and } -\beta < 2\alpha^* < \beta, \end{cases} \end{equation} \noindent where $T_{day}$ and $T_{night}$ are scalar parameters that define the temperatures of the day- and nightside. The $\beta$ angle defines the area around the terminator, where the temperature decreases linearly from $T_{day}$ to $T_{night}$. $T_{deep}$ defines the isothermal structure of the atmosphere for pressures higher than $P_{iso}$. This simple parameterization of the temperature structure is inspired by GCM simulations of hot and ultra hot Jupiters \citep{Showman2015,parmentier2018,TK19}, where the models can be approximated according to \eq{temperature2D}. In particular, \citet{pluriel2020strong} have shown that this 2D representation could approximate ultra hot Jupiters with a satisfying precision. Like in the 3D model (see \sect{3D_case}), the chemical composition of the gas can be parameterized by the user or computed using more complex chemical models. \subsubsection{Interpolation into an altitude-based polar grid} \label{altitude_based_grid} To compute the intersections of the rays with the atmospheric grid (discussed in \sect{rays_intersections}), a regular geometric grid is to be preferred. However, because every grid point in our 2D input grid can have different physical and chemical properties (pressures, temperatures, abundances, etc.) and hence different scale heights, the altitude of the $n$-th layer will change from one column (parameterized by the angle $\alpha^*$) to the next, as illustrated by \fig{altitude_interpolation}. \begin{figure}[ht]\centering \subfloat[Input grid. The pressure levels separating the $N_{p,\mathrm{lay}}$ layers are identical in each column. Because of the temperature and composition differences, the altitude of a layer changes from one column to the next. ]{% \includegraphics[width=.45\textwidth,keepaspectratio]{planet_z_P}% \label{input_grid} }\qquad \subfloat[Altitude-based 2D grid (in red) with $N_{lay}$ layers superimposed on the input grid. In this example, we have $N_{lay}=N_{p,\mathrm{lay}}$, but this is not a requirement.]{% \includegraphics[width=.45\textwidth,keepaspectratio]{planet_P_z}% \label{altitude_grid} }\qquad \caption{Illustration of the two grids used in the model. On the left, the pressure-based input grid \protect\subref{input_grid}. On the right, the regular altitude grid \protect\subref{altitude_grid}. The location of each of the $N_{\alpha}$ columns is parameterized by its solar elevation angle, $\alpha^*$, equal to 90$^\circ$ at the substellar point, 0$^\circ$ at the terminator, and -90$^\circ$ at the subobserver point. $\beta$ is the angle over which the atmosphere transitions from dayside to nightside temperatures. $R_p$ is the planetary radius at the bottom of the grid. $z_i$ is the altitude of level $i$, and $dz_i$ is the thickness of layer $i$ between levels $i$ and $i+1$. 
} \label{altitude_interpolation} \end{figure} The altitude can be calculated from the hydrostatic equilibrium through the hypsometric equation, \begin{gather} dp = -\rho \cdot g \cdot dz,\\ z_{i+1} = z_{i} + \frac{R\, T_{i}}{M\, g(z_{i})} \cdot \ln\left(\frac{P_{i}}{P_{i+1}}\right), \label{hydrostatic} \end{gather} where $\rho$ is the mass density, $M$ is the molar mass of the gas, $R$ is the universal gas constant, and $g(z_{i})$ is the gravity at altitude $z_{i}$. From this equation, we can infer that a difference of temperature in the angular dimension will indeed affect the scale height. With the altitude computed in the input grid, we can construct a new grid based on the altitude, as is represented in \fig{altitude_grid}, in a linear discretization of $N_{\mathrm{lev}}$ points that define $N_{lay} = N_{\mathrm{lev}}-1$ layers, up to a maximum altitude of our choice. The data in this new polar grid can be interpolated using a logarithmic interpolation for the pressure and a linear interpolation for the temperature and chemistry. This new grid facilitates computing the intersection points between the rays and the grid, as we discuss in Sec.~\ref{rays_intersections}. The number of slices or angles in the 2D model is $N_{\alpha}$, of which $N_{\alpha}-2$ discretize the angle between $-\beta/2$ and $\beta/2$. The first and last angular slices account for the day- and nightside (not subdivided as they are uniform). The points in this 2D altitude-based coordinate system are noted hereafter $(\rho, \alpha)$. \subsubsection{Intersections of the rays with the altitude grid} \label{rays_intersections} In this model, we consider $N_{r} = N_{lay}$ rays. A ray may be described using its impact parameter $r$, defined as the normal distance between the ray and the center of the planet (see \fig{paths}). We chose the grid of impact parameters so that the rays cross the plane of the sky (or equivalently here, the terminator of the planet) exactly at the center of the atmospheric layers. To compute the optical depth (Eq. \ref{eq:tau}) of a ray $r$, we need the length of each segment of the ray crossing individual cells of the altitude grid, as well as the physical properties of these cells. The coordinates of the intersection points of the ray with the grid may be computed using \begin{equation} r = \rho \cos(\alpha^*). \label{eq:2D_intersection} \end{equation} This returns the angle of intersection when applied to the $N_{\mathrm{lev}}$ levels of radius $\rho$ (duplicated for positive and negative values), and the radius of intersection when applied to the $N_{\alpha}-1$ solar elevation angles $\alpha^*$ that separate the slices of the grid. \begin{figure}[ht]\centering \includegraphics[width=.475\textwidth,keepaspectratio]{planet_path} \caption{Intersection of a ray with the 2D grid. An example of intersections for one level and one angle is highlighted by points $j$ and $k$ at coordinates $(\rho_j, \alpha^*_j)$ and $(\rho_k, \alpha^*_k)$, respectively. The length $\Delta\ell_{r,i}$ of the corresponding segment is deduced from the computation of the distances $x_j$ and $x_k$ of both points to the terminator. The coordinates of the segment must also be computed to obtain the physical properties of the cell. } \label{paths} \end{figure} The two types of intersections (with levels and with angles) are visually represented in blue and red (or indices $j$ and $k$), respectively, in \fig{paths}. The intersection points of a given ray are then sorted using their angular coordinate. 
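For illustration, a minimal Python sketch of this intersection computation for a single ray follows. All names are hypothetical; angles are in radians, with positive $\alpha^*$ on the dayside, as defined above.

\begin{verbatim}
import numpy as np

def ray_intersections(r, rho_lev, alpha_bnd):
    # Intersections with the N_lev levels: r = rho cos(alpha*), i.e.
    # alpha* = +/- arccos(r / rho), duplicated on both sides of the
    # terminator (only levels with rho > r are crossed by the ray)
    alphas = [s * np.arccos(r / rho)
              for rho in rho_lev if rho > r for s in (-1.0, 1.0)]
    # Intersections with the N_alpha - 1 slice boundaries:
    # rho = r / cos(alpha*), kept only if below the top of the atmosphere
    alphas += [a for a in alpha_bnd if r / np.cos(a) < rho_lev[-1]]
    return np.sort(np.asarray(alphas))   # sorted angular coordinates
\end{verbatim}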
The length of each segment (as highlighted in \fig{paths}) is equal to the difference between the distances of its two extremities ($j$ and $k$ in our example) to the terminator. The distance of a point $k$ to the terminator is given by \begin{equation} x_k^2 = \rho_k^2 - r^2. \label{eq:1D_intersection} \end{equation} This equation returns two solutions, corresponding to a point before and a point after the terminator (depending on the sign of the solution). By convention, a negative distance corresponds to a point on the dayside. We note $\Delta\ell_{r,i}$ the distance of a segment $i$ between two consecutive intersection points of a ray $r$. \subsubsection{Optical depth} \label{optical_depth} Based on the previous definitions, the optical depth of each ray $r$ at a wavelength $\lambda$ can be computed using Eq. \ref{eq:tau}, \begin{equation} \tau_{\rc}^\lambda = \sum_{i} \frac{P_{r,i}}{k_B T_{r,i}} \left(\sum_{m=1}^{N_{gas}} \chi_{m,r,i} \sigma_{m, \lambda} + \sum_{j=1}^{N_{con}} k_{mie, j} \right)\Delta\ell_{r,i}, \label{eq:tau} \end{equation} where $P_{r,i}$ and $T_{r,i}$ are the pressure and temperature of the cell corresponding to the segment $i$ of the ray $r$, $k_B$ is the Boltzmann constant, $\chi_{m,r,i}$ is the volume-mixing ratio of the $m$-th molecule, $\sigma_{m, \lambda}$ is the total cross section of Rayleigh scattering and molecular and continuum absorptions, and finally, $k_{mie, j}$ is the extinction coefficient associated with the Mie scattering for the $j$-th aerosol. $N_{gas}$ is the number of molecules, and $N_{con}$ is the number of aerosols. The cross sections and Mie coefficients can be computed by \texttt{Exo\_k}\xspace \citep{leconte2020spectral}. \subsubsection{Transmittance map and spectrum} \label{spectrum} A transmittance map may be computed from the optical depth through \begin{equation} \mathcal{T}_{\rc}^\lambda = e^{-\tau_{\rc}^\lambda} . \label{eq:transmittance} \end{equation} In the case of a homogeneous stellar disk, the relative dimming of the stellar flux is given by \begin{equation} \Delta_\lambda = \frac{{\pi R_p}^2 + \sum\limits_{r} \left(1 - e^{-\tau_{\rc}^\lambda}\right) S_r }{{\pi R_s}^2}, \label{eq:integral} \end{equation} with $S_r = 2 \pi (r + \frac{\diffr}{2})\diffr$. $R_p$ is the radius of the planet, $R_s$ is the radius of the star, and $\diffr$ is the distance between two consecutive $r$. The computation of the transmission spectrum is then complete. \subsection{One-dimensional case} \label{1D_case} To simplify the simulation even further, we can approximate the model using only one dimension: the vertical axis. This approximation has been extensively used in the past and will serve as a reference. In this case, the temperature and composition are horizontally uniform, so that isobaric and iso-altitude surfaces are equivalent and a single grid can be used. The steps required to compute the transmission spectrum are then exactly identical to those in the 2D model, except that we do not need to use Eq.~\ref{eq:2D_intersection} because there are no slices in this model. However, although there is consensus about the theoretical equations for computing the transmission spectrum of a horizontally uniform atmosphere, some differences in the numerical algorithm can lead to significant quantitative differences in the results. In particular, there are two main options when the impact parameters of the rays are chosen, as shown in \fig{rays_position}. 
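To summarize the 2D machinery described above, the following minimal Python sketch assembles Eqs.~\ref{eq:1D_intersection}, \ref{eq:tau}, and \ref{eq:integral} for one ray, using the sorted angular coordinates returned by the hypothetical \texttt{ray\_intersections} helper from the previous sketch. The cell lookup and cross sections are assumed to be given, aerosols are ignored, and this is not the actual Pytmosph3R\xspace or TauREx\xspace implementation.

\begin{verbatim}
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J K^-1]

def ray_optical_depth(r, alphas, cell_of, sigma_of):
    # Signed distance to the terminator: x = -r tan(alpha*), negative
    # on the dayside, consistent with x^2 = rho^2 - r^2
    x = -r * np.tan(alphas[::-1])        # increasing along the ray
    tau = 0.0
    for xa, xb in zip(x[:-1], x[1:]):    # one segment per crossed cell
        P, T, chi = cell_of(r, 0.5 * (xa + xb))
        tau += P / (KB * T) * chi * sigma_of(P, T) * (xb - xa)
    return tau

# Transit depth: with tau[k] for each ray of impact parameter r[k] and
# annulus area S[k] = 2*pi*(r[k] + dr/2)*dr,
# delta = (np.pi*Rp**2 + ((1 - np.exp(-tau)) * S).sum()) / (np.pi*Rs**2)
\end{verbatim}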
\begin{figure}[h]\centering \subfloat[Rays at levels]{% \includegraphics[width=.45\columnwidth,height=.6\textheight,keepaspectratio]{rays_bottom}% \label{rays_bottom} }\qquad \subfloat[Rays at mid-layers]{% \includegraphics[width=.45\columnwidth,height=.6\textheight,keepaspectratio]{rays_center}% \label{rays_center} }\qquad \caption{Schematic of the two possible methods for positioning the rays (dashed lines). In panel \protect\subref{rays_bottom}, the rays are tangent to the pressure levels (solid arcs), as done in TauREx\xspace~3.0. In panel \protect\subref{rays_center}, the rays pass through the middle of the layers, as done in Pytmosph3R\xspace and throughout this paper. A comparison of the accuracy of these two methods is shown in \fig{iso_results}.} \label{rays_position} \end{figure} The first option (used in TauREx\xspace~3.0) is to place the rays so that their impact parameter $r$ is equal to the radius of the levels $\rho$. The second option is to place the rays so that their impact parameter $r$ is equal to the radius of the midpoints of the layers $\rho+\frac{\diff \rho}{2}$. As we show in Sec.~\ref{tests1D}, the second option (rays at mid-layers) appears to result in a faster convergence of the numerical scheme. We therefore implemented this method in our 2D and 3D codes. \section{Model accuracy} \label{model-validation} In this section, we use a series of test cases of increasing complexity to cross-validate our different implementations. As a byproduct, this will allow us to test the accuracy of our algorithms for various grid resolutions. The efficiency in terms of computing time is discussed in \sect{model-time}. We emphasize that our test cases are based on the atmosphere of an ultra hot Jupiter, WASP-121b\xspace, where hydrogen partially dissociates so that the scale height is extremely large. In addition, the planet is extremely inflated. All these factors add up to increase the atmospheric signal up to several thousand ppm. As a result, this is a particularly stringent test for models, which explains the relatively high resolution needed to achieve a given precision. A lower resolution could be used for cooler objects. \subsection{Experimental setup} \label{experimental_setup} The 1D model is based on the TauREx\xspace 3 implementation \citep{Al-Refaie2019}. We developed the 2D model in the same framework, extending each 1D data profile to a 2D alternative, and using Eq. \ref{temperature2D} for the temperature map. The 3D model, developed in a first version by \citet{caldas2019effects}, was reimplemented here in a second version, \href{http://perso.astrophy.u-bordeaux.fr/~jleconte/pytmosph3r-doc/index.html}{Pytmosph3R\xspace~2.0}$^{\ref{pytmodoc}}$. All models are coded in Python 3, with computationally intensive operations relying on numba \citep{lam2015numba}. The numerical experiments were performed on machines with Intel® Xeon® Gold 6138 CPUs (40 cores, i.e., 2$\times$20, @ 2.00~GHz) and 128~GB of RAM. \subsection{Validation for the 1D test cases} \label{tests1D} A test case that can be reproduced by each model is the case of isothermal atmospheres. We therefore study here an isothermal example of a hot Jupiter planet with the physical properties listed in Table~\ref{tab:isothermal}. 
\begin{table}[h]\centering \begin{tabular}{|c|c|} \hline Planet radius & 1.807 $R_J$ \\ Surface gravity & 9.39 m\,s$^{-2}$ \\ Temperature & 2500 K \\ {[H$_2$O]} & $5.01 \cdot 10^{-4}$ \\ {[CO]} & $4.4 \cdot 10^{-4}$ \\ {[H$_2$]} & $0.740$ \\ {[He]} & $0.259$ \\\hline \end{tabular} \caption{Characteristics of the isothermal case, including the abundances (volume-mixing ratios) of each molecule. } \label{tab:isothermal} \end{table} The spectrum is generated for 39124 wavelengths from 0.3 to 15~$\mu$m. We have ensured that all codes indeed converge toward a solution when the number of layers is increased, as is shown by Fig. 4 of \citet{caldas2019effects} for the previous version of Pytmosph3R\xspace. \fig{iso_results} shows the convergence of TauREx\xspace and Pytmosph3R\xspace~2.0 with respect to a converged Pytmosph3R\xspace run (1000 layers). \begin{figure}\centering \includegraphics[width=.5\textwidth]{convergence_1D} \caption{Average of the absolute difference (in ppm) between each model and the converged solution of Pytmosph3R\xspace (with 1000 layers) as a function of the number of layers. P3 stands for Pytmosph3R\xspace, and T1 and T2 for TauREx\xspace 1D and 2D, respectively. The original implementation of TauREx\xspace (denoted "T1 (original)", with rays that are tangent to levels) is included. } \label{iso_results} \end{figure} To accelerate the convergence of TauREx\xspace~1D, we implemented a new version that places the rays at the center of the layers (``T1 (layers)'') and not at the levels. The original implementation of TauREx\xspace is given by the curve ``T1 (original)''. See \fig{rays_position} for a visual representation of the two methods. This simple algorithmic change improves the model accuracy by a factor of 3 on average at no cost. For more than $\sim$1000 layers, the difference between T1 (layers) and our reference seems to reach a plateau. When the interpolation of the opacities in TauREx\xspace\ is replaced by that of \texttt{Exo\_k}\xspace (the one used in Pytmosph3R\xspace) in addition to the position of the rays at the center of the layers, we obtain the method ``T1 (+exo\_k)'', for which the difference with the reference model continues to decrease as the number of layers increases. This shows that in the case of relatively hot giant planets, the opacity interpolation scheme can lead to errors of $\sim$0.3~ppm. Overall, all (new) codes converge to a difference smaller than 1~ppm with a few hundred layers. From this point forward, all methods rely on \texttt{Exo\_k}\xspace for the computation of the opacities and place the rays at the center of the layers. For horizontally uniform atmospheres, the choice of the orientation of the longitude or latitude grid used in Pytmosph3R\xspace is arbitrary and should not affect the results. We verified that the output spectrum is independent of this choice down to machine precision, which further validates our implementation. \subsection{2D test case} \label{tests2D} We now cross-validate TauREx\xspace~2D and Pytmosph3R\xspace using a 2D temperature structure that is symmetrical around the planet-observer axis (see \eq{temperature2D}). The chosen temperature structure, shown in \fig{temp2D}, has a large difference between the dayside and nightside temperatures. \begin{figure}\centering \includegraphics[width=0.45\textwidth]{img/T3_2D/output_temp} \caption{Temperature map for our reference 2D test case. The substellar point is on the left. 
$T_{day} = 3300$~K while $T_{night} = 500$~K and $T_{deep} = 2500$~K, with $\beta = 10$\textdegree. The black lines correspond to isobar levels. The altitude is in Mm, i.e., thousands of km. } \label{temp2D} \end{figure} Our temperature structure does not \textit{directly} depend on the longitude and latitude: it only depends on the solar elevation angle, which is given by \balign{ \sin \alpha^* = &\sin \Lat_0 \sin \varphi_\star\nonumber\\ &+ \cos \Lat_0 \cos \varphi_\star \cos(\Lon_0 - \lambda_\star), \label{solar_elevation_angle} } where $(\Lat_0,\Lon_0)$ are the latitude and longitude of the current cell, and $(\varphi_\star,\lambda_\star)$ are the latitude and longitude of the substellar point. As a result, the choice of the grid orientation is arbitrary. In other words, the choice of the direction of the star in our arbitrary reference frame is a free parameter that should not affect the results (as long as we align star, planet, and observer). However, as illustrated in \fig{2D_equator_pole}, for a grid with a finite size, the orientation of the grid can slightly affect the way the temperature is represented in the model, and consequently, the resulting spectrum. \begin{figure}\centering \subfloat[$\varphi_\star$~=~0\textdegree]{% \includegraphics[width=.22\textwidth,height=.6\textheight,keepaspectratio]{2d_equator}% \label{2d_equator} }\qquad \subfloat[$\varphi_\star$~=~90\textdegree]{% \includegraphics[width=.22\textwidth,height=.6\textheight,keepaspectratio]{2d_pole}% \label{2d_pole} }\qquad \caption{Discrete horizontal temperature maps at high altitude computed using \eq{temperature2D} with 80/80 latitude and longitude grid points with two different choices of grid orientation (redder is hotter). These maps are visualized in 3D through ParaView \citep{ahrens2005paraview}. a) The star is located in the equatorial plane (the planet is seen from the pole, with the star on the left). b) The star is located at the pole (the star and the north pole are located at the top). In this idealized setup, choice b allows us to take advantage of the symmetries of the system and to use only one longitude point for our grid, which considerably speeds up the computation. } \label{2D_equator_pole} \end{figure} We find that this effect is about 1~ppm. For the rest of this section, we choose to place the star at the pole ($\varphi_\star$~=~90\textdegree) because it allows us to use only one longitude ($N_{\mathrm{lon}} = 1$) and one azimuthal angle ($N_{\theta} = 1$, see \sect{3D_case}), which considerably speeds up computations. We can now compare the convergence of TauREx\xspace 2D with that of Pytmosph3R\xspace. The results are shown in \fig{taurex_convergence_2d}. \begin{figure}\centering \includegraphics[width=.5\textwidth]{convergence_2D} \caption{Convergence of TauREx\xspace 2D and Pytmosph3R\xspace when the atmospheric grid resolution is increased. The Y-axis indicates the average of the absolute difference over all wavelengths considered between each model and Pytmosph3R\xspace ($N_{lay}, N_{\mathrm{lat}}$) = (1000, 540). The X-axis indicates the number of layers $N_{lay}$ in the model. The legend indicates the angular resolution of the model, $\diff\alpha$, equal to $\beta/N_{\alpha}$ for TauREx\xspace (T2) and $180/N_{\mathrm{lat}}$ for Pytmosph3R\xspace (P3). For example, $\diff\alpha = 1\degree$ leads to $N_{\alpha} = 10$ and $N_{\mathrm{lat}} = 180$. 
Pytmosph3R\xspace was run with $\varphi_\star$ = 90\textdegree (see \fig{2D_equator_pole}), so that only one longitude and one azimuthal angle are necessary, i.e., $N_{\mathrm{lon}} = N_{\theta} = 1$. } \label{taurex_convergence_2d} \end{figure} This figure shows the convergence of TauREx\xspace as the number of slices increases and of Pytmosph3R\xspace as the number of latitudes increases, both as a function of the number of layers. With an equivalent angular resolution (e.g., $N_{\alpha} = 10$ and $N_{\mathrm{lat}} = 180$, which leads to an angular width of $1$\textdegree~for each angular point), TauREx\xspace and Pytmosph3R\xspace follow a very similar trend. The models converge to a difference smaller than 1~ppm with a sufficiently high resolution. \subsection{3D simulations} \label{tests3D} We study here a 3D GCM simulation based on WASP-121b\xspace \citep{parmentier2018,pluriel2020strong}. One of the characteristics of this simulation is the strong dichotomy between the temperature on the dayside and that of the nightside. The simulation is shown in \fig{WASP} from the east in the equatorial plane. Temperature maps in the equatorial plane and at high altitude are given in \fig{fig:wasp}. \begin{figure}\centering \includegraphics[width=.49\textwidth,keepaspectratio]{img/t_map_latitude_16_pytmosph3r} \label{fig:wasp121_equator} \includegraphics[width=.5\textwidth]{img/t_map_altitude_49_pytmosph3r} \label{fig:wasp121_altitude} \caption{Temperature maps of WASP-121b\xspace. The top map shows an equatorial slice. The planet radius is divided by 10 for visual reasons. The black lines correspond to different isobar levels. The bottom map shows a slice at high altitude (29.2~Mm, i.e., 29200~km). The hottest point is slightly shifted to the east limb, i.e., the trailing limb. } \label{fig:wasp} \end{figure} This simulation also has a slight east-west asymmetry; the hottest point is shifted toward the east. This feature is most visible in \fig{fig:wasp121_altitude}. The chemistry is given by tables from \citet{parmentier2018} and includes He, H$_2$, H, H$_2$O, CO, TiO, and VO. When this (heterogeneous) 3D GCM simulation is used, the number of latitudes $N_{\mathrm{lat}}$ and longitudes $N_{\mathrm{lon}}$ is fixed. The number of layers $N_{lay}$ of the altitude grid may be chosen as different from that of the input simulation. To reduce the number of free parameters, we simply set this number to the number of radial points $N_{r}$ in the polar grid of rays ($N_{r} \times N_{\theta}$). To study the accuracy of the model, we can therefore change the number of rays, as we show in \fig{pytmo_convergence}. \begin{figure}\centering \includegraphics[width=.5\textwidth]{img/convergence_3D} \caption{Convergence of Pytmosph3R\xspace when the number of rays [$N_{r} \times N_{\theta}$] crossing the atmosphere is increased, using as a reference point the converged model ([$N_{r} \times N_{\theta}$] = [$500 \times 64$]). The simulation is a GCM with an equilibrium temperature $T_{eq} = 2100$~K based on the characteristics of WASP-121b\xspace, and including TiO and VO. The number of cells in this simulation is $(N_{lay}, N_{\mathrm{lat}}, N_{\mathrm{lon}}) = (100, 32, 64)$. } \label{pytmo_convergence} \end{figure} The model tends to converge if we have enough radial and angular points to match the GCM resolution, that is, at least 32~angular points, although the number of radial points apparently needs to be doubled to obtain a better accuracy. 
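As an aside, the connection between such latitude-longitude grids and the parameterized 2D structure is the solar elevation angle of \eq{solar_elevation_angle}. The following minimal, vectorized Python sketch (hypothetical helper names; angles in radians) evaluates \eq{solar_elevation_angle} and the thermal structure of \eq{temperature2D} on grid arrays:

\begin{verbatim}
import numpy as np

def solar_elevation(lat, lon, lat_star, lon_star):
    # sin(a*) = sin(phi)sin(phi_*) + cos(phi)cos(phi_*)cos(lam - lam_*)
    s = (np.sin(lat) * np.sin(lat_star)
         + np.cos(lat) * np.cos(lat_star) * np.cos(lon - lon_star))
    return np.arcsin(s)

def temperature_2d(P, alpha, T_day, T_night, T_deep, P_iso, beta):
    # Piecewise 2D thermal structure, linear across the terminator
    T = np.where(2 * alpha >= beta, T_day,
        np.where(2 * alpha <= -beta, T_night,
                 T_night + (T_day - T_night) * (alpha + beta / 2) / beta))
    return np.where(P > P_iso, T_deep, T)
\end{verbatim}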
\section{Performance and optimizations} \label{model-time} We now study the performance of our models and ways to reduce the time and memory required for the computation of a transmission spectrum. \subsection{TauREx\xspace} TauREx\xspace is mainly used for its retrieval feature. We must therefore ensure that the forward model can be computed fast enough for the retrieval to be achieved in a reasonable amount of time for the user. In its 1D variant, an important part of the arithmetic complexity of TauREx\xspace is the computation of the optical depth, which is related to the number of cells that the rays pass through, and for which we must evaluate the formula of Eq.~\ref{eq:tau}. The number of calculations in 1D leads to a complexity for the optical depth of \begin{equation} \OC{\frac{N_{lay}^2}{2} \cdot N_{\lambda}}, \label{1D_absorption_complexity} \end{equation} where $N_{\lambda}$ is the number of wavelengths. As a result of the assumption that day and night are identical, a ray at the altitude of level $i$ only intercepts the $N_{lay}-i$ levels above $i$. With $N_{lay}$ rays, the number of layers for which the optical depth has to be computed is thus $\sum_{i = 1}^{N_{lay}} i = \frac{N_{lay}(N_{lay}+1)}{2}$. We focus here on molecular absorption and leave Rayleigh scattering and the continuum absorption out of this study. In the 2D version of the model, however, we must account for the day- and nightside (therefore doubling the number of computations of \eq{1D_absorption_complexity}), as well as all the intersections of the layers with the slices in the linear transition around the terminator. This leads to a complexity for the 2D optical depth of \begin{equation} \OC{\left(N_{lay}^2 + N_{lay} \cdot N_{\alpha}\right)\cdot N_{\lambda}}. \label{2D_absorption_complexity} \end{equation} We show in the results that the main calculations in 2D are due to the number of opacities $\chi_{m,\lambda}[\alpha, r]$ that are to be interpolated. The arithmetic complexity of computing the opacities is \begin{equation} \OC{C \cdot N_{lay} \cdot N_{\alpha} \cdot N_{\lambda} \cdot N_\mathrm{mol}}, \label{2D_opacities_complexity} \end{equation} where $C$ is the cost of computing one interpolation, and $N_\mathrm{mol}$ is the number of molecules. Incidentally, adding molecules will increase the cost of the interpolation of the opacities (Eq.~\ref{2D_opacities_complexity}), but not of the optical depth itself (Eq.~\ref{2D_absorption_complexity}). Eq.~\ref{2D_opacities_complexity} is also valid in 1D, but in this case, $N_{\alpha} = 1$. To verify that the model behaves as expected, we performed a series of timing measurements as a function of the method used and of the dimensions of the atmospheric grid considered. These measurements are gathered in Table~\ref{time_taurex}. 
\begin{table}[h]\centering \pgfplotstabletypeset[ every head row/.style={before row={% \hline & & \multicolumn{4}{c|}{$N_{\alpha}$ (2D)} \\ }, after row=\hline, }, columns={n_layers,t1,2,10,20,30}, columns/n_iter/.style={column name=$n_{iter}$,column type=|c}, columns/n_layers/.style={column name=$N_{lay}$,column type=|c}, columns/n_slices/.style={column name=$N_{\alpha}$}, columns/time_1D/.style={column name=1D}, create on use/t1/.style={create col/expr={\thisrow{time_1D}}}, create on use/2/.style={create col/expr={\thisrow{t2_2}}}, create on use/10/.style={create col/expr={\thisrow{t2_10}}}, create on use/20/.style={create col/expr={\thisrow{t2_20}}}, create on use/30/.style={create col/expr={\thisrow{t2_30}}}, columns/30/.style={column type=c|}, columns/t1/.style={column name=1D,column type=c|,zerofill,precision=2,}, multistyles={1,...,5}{,zerofill,precision=2,} ]{img/taurex_timing.dat} \caption{Average time (s) to run one TauREx\xspace model, considering only the molecular absorption (no Rayleigh scattering or CIA) of four molecules over 39124 wavelengths from 0.3 to 15~$\mu$m. } \label{time_taurex} \end{table} Because the whole calculation scales linearly with the number of wavelength points ($N_{\lambda}$) at which the model is run, it is set to a constant value (39124 wavelength points) for this study. We can observe multiple interesting trends in this table. First, there is a factor of 2 between the 1D model and the 2D model with two slices because there are a day- and a nightside. Second, the computational time seems to increase less than quadratically with respect to the number of layers in 1D, showing that there are enough molecules to make the interpolation of the opacities a significant part of the computations. Third, for a large number of slices, the time also increases linearly with the number of slices, which means that the cost of one interpolation, $C$, in Eq.~\ref{2D_opacities_complexity} and the number of molecules $N_\mathrm{mol}$ are large enough to make the interpolation of the opacities the dominant part of the computations. Increasing the number of layers will change this behavior, as the quadratic complexity of Eq.~\ref{2D_absorption_complexity} will start to be dominant again. A good compromise between accuracy and computational time seems to be running 2D models with 200~layers and 30~slices. \subsection{Pytmosph3R\xspace} In 3D, the great majority of the calculations lies in the interpolation of the opacities. This interpolation scales with the number of cells for which we need to compute the opacity. To decrease the number of calculations, we can therefore first identify how the number of cells for which an interpolation is needed can be reduced. The first step is to realize that we do not need (and cannot afford in terms of time and memory) to compute the opacity of all $N_{lay} \times N_{\mathrm{lat}} \times N_{\mathrm{lon}}$ cells in the model. A large part of the cells is not crossed by any light ray, so their opacity is never needed. A naive algorithm would simply iterate over all the rays and compute the opacity for each segment that is crossed by a ray. However, depending on the resolution of the model, many cells may be crossed by multiple rays, so that we can reuse the information from one ray to another. In addition, when we have the opacities of all segments of one ray, the optical depth of that ray can be computed, and the opacities may be discarded. 
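These two observations suggest a cache-and-release pattern: opacities can be computed once per cell for a group of rays and discarded once the group is done. The schematic Python sketch below illustrates this pattern with hypothetical names; it is not the actual Pytmosph3R\xspace code.

\begin{verbatim}
def optical_depths(rays_by_angle, segments_of, opacity_of):
    # Rays are grouped (here, by azimuthal angle theta): opacities
    # computed for one group are cached and reused by all rays of that
    # group, then discarded before moving to the next group.
    taus = {}
    for theta, rays in rays_by_angle.items():
        cache = {}                       # opacities local to this group
        for ray in rays:
            tau = 0.0
            for cell, dl in segments_of(ray):
                if cell not in cache:    # reuse across rays of the group
                    cache[cell] = opacity_of(cell)
                tau += cache[cell] * dl
            taus[ray] = tau
        # cache is released here: memory stays roughly constant per group
    return taus
\end{verbatim}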
In light of these two facts, we developed an algorithm that groups light rays together at each azimuthal angle $\theta$. This means we can benefit from the reusability of opacities within that angle (along the radial axis) while being able to release the memory of all opacities when the optical depths of all rays within this angle have been computed. The memory footprint therefore remains almost constant throughout the execution of the program, if we consider each angle to be equivalent (this depends on the resolution of the grid). This method, which we refer to as "Per-angle" in the experiments, works quite well for every kind of problem (from completely heterogeneous to isothermal). However, the computational cost can be decreased further by making a few assumptions. Some atmospheric simulations may have a number of cells that have identical physical properties, for example, in the case of the 2D representation of Eq. \ref{temperature2D}. If this number is sufficiently large, we can also aggregate these cells together to further reduce the number of opacities that are to be computed. This method is referred to as "Identical" in the experiments. We compare these two methods in Figs. \ref{per_angle_times} and \ref{per_angle_memory} for a 2D problem defined with Eq. \ref{temperature2D}. \begin{figure}\centering \includegraphics[width=0.48\textwidth]{img/pytmo_per_angle_times} \caption{Time (s) required by Pytmosph3R\xspace to compute the transmission spectrum of a 2D atmospheric grid defined with Eq. \ref{temperature2D}. Two methods are turned on or off: the Per-angle method (off is "Whole"), and the Identical method (off is "Non-identical"). The number of rays $N_{r} \times N_{\theta}$ = n\_radial$ \times $n\_angular also varies. Missing points have run out of memory.} \label{per_angle_times} \end{figure} These two figures show the computational time required for a model to run and its peak memory consumption, respectively. \begin{figure}\centering \includegraphics[width=0.48\textwidth]{img/pytmo_per_angle_memory} \caption{Real memory peak (MB) of the corresponding points in \fig{per_angle_times}. The machine has 128GB of RAM, and the recorded peak is below that point for those that have run out of memory (missing points in \fig{per_angle_times}).} \label{per_angle_memory} \end{figure} The first observation we can make here is the effect of the Identical method, which reduces the time and memory complexity of the program by approximately one-third in this case. However, this is possible only due to the characteristics of the simulation, in which many cells are actually identical. We must emphasize that a completely heterogeneous simulation would not benefit at all from this method. A second observation we can make is the drastic memory saving due to the Per-angle method. As we mentioned earlier, this method allows us to discard the opacities after each angle and keep the memory due to the opacities below a constant bound. However, other variables such as the transmittance (if needed) will still increase in size with respect to the number of angles. In conclusion, the Per-angle method provides a drastic memory reduction, while the Identical method should be used only for data that contain redundancies. A method to ensure that the computation will not run out of memory is also under study, as well as a parallel version. \section{Examples of applications} \label{sec:application} Pytmosph3R\xspace allows us to do more than just compute a more realistic spectrum from a static atmospheric structure. 
The transmission signal from a planet is expected to vary over time for two main reasons: (1) the atmospheric structure itself is variable, whether it is the temperature, the composition, or the distribution of clouds and hazes; and (2) the planet rotates during a transit (even when in synchronous rotation), showing us a slightly different cross section of its atmosphere. In this section, we illustrate these two effects. \subsection{Atmospheric variability} In a recent study, \citet{charnay2020formation} investigated the dynamics of clouds on temperate mini-Neptunes, taking the example of K2-18b. They revealed that the abundance of clouds at the terminator was highly variable. To assess the effect of these clouds on the transmission spectrum, we ran Pytmosph3R\xspace on the climate model results at various time steps. \fig{transmittance} shows transmittance maps calculated using \eq{eq:transmittance}. These maps provide more information than the transmission spectrum alone, which can be obtained by their spatial integration (see \eq{eq:integral}). Here we can use these maps to infer the effective fraction of the limb covered by clouds, as well as the difference in the effective absorption altitude for a clear versus a cloudy atmosphere. \begin{figure}[ht]\centering \includegraphics[width=.49\textwidth,height=.7\textheight,keepaspectratio]{t_58_transmittance}% \includegraphics[width=.49\textwidth,height=.7\textheight,keepaspectratio]{t_40_transmittance}% \caption{Example of transmittance maps at two different times of a simulation of K2-18b \citep{charnay2020formation} at a wavelength of 0.6 $\mu m$ from the observer's point of view. The atmosphere is transparent when the transmittance is equal to 1, and it is opaque when it is equal to 0. The absence of clouds is visible on the right side, which corresponds to the west limb. The atmosphere scale height has been multiplied by 10 for visual reasons. The altitude is in Mm, i.e., thousands of km. } \label{transmittance} \end{figure} As the two maps in \fig{transmittance} correspond to two time steps in the simulation, we can also follow the movement of aerosols and the variation of the cloud fraction over time. As the climate model is expected to predict a realistic evolution of the clouds in time, this can allow us to quantify how the variability of the cloud structure during a single transit could affect the observed signal. \subsection{Rotation of the planet during transit} \label{ssec:rotation} Another interesting aspect of our 3D model is that the geometry of the observation can be changed. When a planet passes in front of its star, it rotates slightly, showing the observer a phase that varies with time. This occurs even when the planet is in a tidally synchronized rotation. In this case, the star (and the terminator) remains fixed in the reference frame corotating with the planet, but the observer (and the limb it probes) is moving. In the case of an asymmetric and heterogeneous 3D atmosphere, a change in the planet’s phase angle implies that the light rays will not probe the same areas of the atmosphere. This therefore completely changes the associated transmittance map and the resulting transmission spectrum. Interestingly, this should create asymmetries in the transit light-curve \citep{EJ21}. While we are in the process of implementing a full-fledged light-curve generator, we wish to quantify in a simple way here how much the spectral transit depth might vary due to these effects. 
To do this, we took the example of WASP-121b\xspace (\fig{fig:wasp}; details of the simulation can be found in \sect{tests3D}) and ran Pytmosph3R\xspace at five different phases during the transit, as illustrated in \fig{fig:wasp121_transit}. The planet was considered to be tidally locked. \begin{figure*}\centering \includegraphics[width=\textwidth]{img/wasp_transit} \caption{Transmittance maps of WASP-121b\xspace at 0.6 $\mu m$ for the five orbital phase angles whose spectra are shown in \fig{fig:wasp121_transit_effective_radii}. The orbital phase angle is $\phi = -15$\textdegree\xspace for ingress and $\phi = 15$\textdegree\xspace for egress. For visual reasons, the planet atmosphere has been enlarged with respect to its radius, and the early and late transmittance maps are slightly shifted. Only half the planet covers the star at ingress and egress. } \label{fig:wasp121_transit} \end{figure*} This figure shows the transmittance maps at different stages of the transit at a wavelength of $0.6~\mu$m. It gives an example of how transmittance maps may evolve during a transit, and which information we can retrieve from them. We only took the effect of the (synchronous) rotation of the planet between ingress and egress into account because we assumed a stationary atmosphere during the transit. The selected phases are listed below. \begin{enumerate} \item Ingress: Half of the planet (west limb) is in front of the star. For the system parameters we used, this corresponds to an orbital phase angle of $\phi = -15$\textdegree. The center of the planet is located at the edge of the star. \item Early: The planet has completed entering the transit (second contact; $\phi = -13$\textdegree). \item Mid: The planet is at mid-transit, $\phi = 0$\textdegree. \item Late: The planet is about to exit the transit, $\phi = 13$\textdegree. \item Egress: The east limb is in front of the star. The planet is exiting the transit, $\phi = 15$\textdegree. \end{enumerate} \fig{fig:wasp121_transit_effective_radii} shows the relative transit depth (\eq{eq:integral}) for each phase. To facilitate comparison, we removed the effect of stellar limb darkening, even though it will be accounted for when computing realistic light-curves. For the ingress and egress spectra, only one limb is in front of the star, so that we multiplied the covered area by two to facilitate comparison. \begin{figure}\centering \includegraphics[width=.5\textwidth]{img/wasp121_transit_diff} \caption{Spectral variations of the transit depth of WASP-121b\xspace during a transit for the phases listed in \fig{fig:wasp121_transit}. The bottom plot shows the difference between each spectrum and the mid-transit spectrum, taken as a reference. } \label{fig:wasp121_transit_effective_radii} \end{figure} The figure shows that these transit depth variations are not negligible: as an example, the early spectrum is 110~ppm below mid-transit on average (with a minimum at $-200$~ppm in certain wavelength regions, especially in the TiO/VO bands, i.e., from around 0.4 to $1~\mu m$). The early and the late spectra follow similar patterns, including in the TiO/VO bands and at longer wavelengths, indicating a general symmetry in the TiO/VO spatial distribution. In our simulation, TiO and VO are mainly located around the terminator. The decrease in absorption in the early and late spectra (with respect to mid-transit) is due to the rotation of the terminator disk, which shows the highest cross section at mid-transit. 
As shown in \fig{fig:wasp121_altitude}, the temperature structure is not completely symmetrical because there is an eastward shift of the hottest region of about $20^{\circ}$, even though the day-night transition remains very sharp and symmetric. This eastward shift of the hot spot results in an east limb that is larger than the west one, which is visible in all transmittance maps. The elongation is less visible during ingress, when the hot spot is hidden behind the planet from the observer's point of view, and is accentuated during egress, when the spot is closer to the east limb. During the early part of the transit, the limb is different from the terminator (rotated by $-\phi$), and we probe deeper into the dayside west of the planet and deeper into the nightside east of the planet. The situation is exactly reversed at the late position. \fig{fig:wasp121_transit} shows that the transmittance map at the early position is mostly symmetrical because the asymmetry of the temperature map is compensated for by the rotation of the planet during its orbit. Then, moving to the late step, the reverse situation occurs: The east limb is hotter, implying a greater scale height, while the west limb is colder, inducing a smaller scale height. The eastward temperature shift seems to lead to a difference smaller than 100~ppm (the largest spectral difference). The location of the molecules (around the terminator) decreases this difference further, and the two spectra are very similar over most of the wavelength range (with an average difference of less than 40~ppm). Although the changes from the mid-transit to the early and the late transmittance maps are (longitudinally) reversed, these differences disappear when the transmittance maps are spatially integrated to generate the spectra (\eq{eq:integral}). For ingress and egress, only the planetary and atmospheric half that is in front of the star (the west and east limbs, respectively) is considered for the integration of the transmittance into a spectrum. The rotation of the planet at egress means that the east limb of the planet is hotter (accentuated by the eastward shift of the temperature map), while the west limb is colder. Because the east side alone is considered, the scale height of the atmosphere is larger and we observe a larger effective radius (see \fig{fig:wasp121_transit_effective_radii}). This leads to stronger spectral differences, with an average of 370~ppm and peaks of up to 560~ppm. For the ingress, the eastward shift of the hottest region (see \fig{fig:wasp121_altitude}) and the orbital phase of the planet imply that the light crosses colder regions of the atmosphere on the west limb. This results in a smaller effective radius because of a smaller scale height. The key points of this study are therefore threefold: \begin{enumerate}[topsep=0pt] \item The rotation of the planet during transit results in variations of the transit depth of up to 300\,ppm for a hot Jupiter such as WASP-121b\xspace.\footnote{The egress and ingress spectra were multiplied by 2 to compare them to the other phases; the differences shown in \fig{fig:wasp121_transit_effective_radii} for these phases are therefore twice as large as for the real signal.} It should therefore be detectable. The noise in observations from the HST currently reaches around 50~ppm and can be as low as 20~ppm at best. This noise could be lowered to 10~ppm or less with the upcoming JWST \citep{Greene_2016}.
\item The most important differences are between ingress and egress (when only half the planet covers the star) and are mainly due to the asymmetry caused by the eastward shift of the hot spot. \item Measuring these light-curve asymmetries would allow us to place constraints on the rotation of the planet and/or the direction of the hot spot shift without the need for a complete and expensive phase-curve. \end{enumerate} \subsection{Toward a time-domain analysis} \label{sec:discussion} As Pytmosph3R\xspace can simulate any position for the observer, it can provide spectra and transmittance maps at any point during a transit. The transmittance maps can be used to extract the part of the atmosphere (and planet) that covers the star, for example, during ingress and egress (see Sec. \ref{ssec:rotation}). This information can also be used in the future for a time-domain analysis of the transit with Pytmosph3R\xspace. We are in the process of extending Pytmosph3R\xspace to generate transit light-curves, which would be very useful for theoretical studies of transit observations. As Pytmosph3R\xspace is fully 3D, the generated light-curves would be as close as possible to a real observation and would avoid biases due to 1D model assumptions. It will be interesting to compare the information extracted from light-curves by other codes \citep{Kreidberg_2015,Tsiaras2018,EJ21,Feliz2021} to the input model provided to Pytmosph3R\xspace. \section{Conclusion} We have discussed the computation of transmission spectra for exoplanetary atmospheric simulations with a varying number of dimensions. This method, implemented in Pytmosph3R\xspace, handles atmospheric simulations with up to three dimensions, including GCMs. The 2D formulation has also been integrated into the (initially one-dimensional) TauREx\xspace framework \citep{Al-Refaie2019}. We then discussed the computational requirements and efficiency of each model, which is especially critical in the context of retrievals, as well as possible applications. We have introduced a new, more robust and flexible version of \href{http://perso.astrophy.u-bordeaux.fr/~jleconte/pytmosph3r-doc/index.html}{Pytmosph3R\xspace}, which is open source under a BSD license$^{\ref{pytmodoc}}$. Taking into account the 3D structure of the atmosphere during a transit is essential for generating consistent observations because assuming atmospheres to be homogeneous leads to strong differences in the transmittance maps and in the final integrated spectrum \citep{caldas2019effects, pluriel2020strong}. We will further study and quantify the biases due to the 1D assumption of retrievals with Pytmosph3R\xspace~2.0 in the second part of this series of articles. The two-dimensional model was shown to be a good compromise between accuracy and computational requirements, making it a valid forward model for a retrieval. Thanks to this method, we can remove the biases that were observed when a 1D forward model is used to retrieve very hot exoplanets. We will discuss the relevance, precision, and reliability of this 2D retrieval in the third part of this series. However, it should be noted that there might be other ways to parameterize 2D retrievals (discussed in Sec. \ref{introduction}). For instance, we know that east-west effects might also bias transmission spectra in warm atmospheres \citep{MacDonald2020}, where the jet stream cools down the west limb and heats up the east limb.
In these configurations, our 2D model, which is symmetric with respect to the star-observer line, would not be able to give a better solution than a 1D model because its limb is homogeneous by definition. 2D retrievals therefore require configurations adapted to the type of exoatmosphere observed. We could also develop hybrid 2D models that would take several geometric effects into account, keeping in mind that too many parameters in a retrieval code may create degeneracies. Overall, TauREx\xspace~2D can infer the atmospheric parameters of specific exoplanetary types, that is, ultra-hot Jupiters, with a good compromise between computational time and model precision. This 2D version of TauREx\xspace will be made publicly available in the near future. \begin{acknowledgements} We are grateful to the entire TauREx\xspace development team. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n$^\circ$679030/WHIPLASH). We thank the Programme National de Planétologie (CNRS/INSU/PNP) and the CNES for their financial support. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec_intro} Generative adversarial networks (GANs) \cite{goodfellow2014generative} have achieved impressive performance in various tasks such as image generation \cite{mirza2014conditional,arjovsky2017wasserstein,gulrajani2017improved,miyato2018spectral,brock2018large,karras2018progressive,karras2019style,karras2020analyzing}, image super-resolution \cite{ledig2017photo, wang2018esrgan}, and image translation \cite{isola2017image,zhu2017unpaired,liu2017unsupervised,park2019semantic,huang2018multimodal,liu2019few,DRIT_plus,choi2020starganv2}. In recent years, GANs have also been widely used for face stylization such as portrait drawing \cite{yi2019apdrawinggan}, caricature \cite{cao2018carigans,shi2019warpgan}, manga \cite{su2020unpaired}, and anime \cite{kim2019u}. As GAN research has deepened, the community has paid increasing attention to the quality of generated images and the disentanglement ability of the latent space. To achieve high-quality generation and latent disentanglement, StyleGAN \cite{karras2019style} introduces the adaptive instance normalization layer (AdaIN) \cite{huang2017arbitrary} and proposes a new generator architecture, which achieves strong performance in generating face images. StyleGAN2 \cite{karras2020analyzing} explores the causes of droplet-like artifacts and further improves the quality of generated images by redesigning the generator normalization. StyleGAN2-ada \cite{Karras2020ada} proposes an adaptive discriminator augmentation mechanism that reduces the required number of training images to a few thousand. In the StyleGAN2-ada experiments, the model is trained on the MetFaces dataset \cite{Karras2020ada} to generate artistic faces; however, the image style is uncontrollable, and the corresponding original face cannot be obtained. Recently, a layer-swapping mechanism \cite{pinkney2020resolution} has been proposed to generate high-quality natural and stylized face image pairs, and equipped with an additional encoder for unsupervised image translation \cite{kwong2021unsupervised}. In particular, these methods first finetune StyleGAN from a pretrained model with target-style faces and then swap some convolutional layers with the original model. Given randomly sampled latent codes, the original model generates natural face images, while the layer-swapped model outputs stylized face images correspondingly. Though largely effective, these methods have to train models case by case; thus, a single model can only generate images with a specific style. Besides, these methods still require hundreds of target-style images for finetuning, and one has to carefully select the number of training iterations, as well as the swapped layers. In this paper, we present BlendGAN for arbitrary stylized face generation.\footnote{Our framework can also cooperate with GAN inversion \cite{xia2021gan,abdal2019image2stylegan,shen2020interpreting,pidhorskyi2020adversarial,richardson2020encoding} or StyleGAN distillation \cite{viazovetskyi2020stylegan2} methods to enable end-to-end style transfer or image translation. We will show the results in the appendix.} BlendGAN employs a flexible blending strategy and a generic artistic dataset to fit arbitrary styles without relying on style-consistent training images for each style. In particular, we model a stylized face image as being composed of two latent parts: a face code (controlling the face attributes) and a style code (controlling the artistic appearance).
Firstly, a self-supervised style encoder is trained via an instance discrimination objective to extract the style representation from artistic face images. Secondly, a \emph{weighted blending module} (WBM) is proposed to blend the face and style latent codes into a final code which is then fed into a generator. By controlling the indicator in the WBM, we are able to decide which parts of the face and style latent codes are blended, thus controlling the stylization effect. By combining the style encoder and the generator with the WBM, our framework can generate natural and stylized face image pairs with either a randomly sampled style or a reference style (see Figure \ref{fig_teaser}). \begin{figure} \centering \includegraphics[width=0.9\textwidth]{imgs/teaser_bs.pdf} \caption{Illustration of some reference-guided synthesis results. Our framework can generate stylized face images with high quality.} \label{fig_teaser} \end{figure} As for the generic artistic data, we present a novel large-scale dataset of high-quality artistic face images, Artstation-Artistic-face-HQ (AAHQ), which covers a wide variety of painting styles, color tones, and face attributes. Experiments show that, compared to state-of-the-art methods, our framework can generate stylized face images with higher visual quality and style diversity for both latent-guided and reference-guided synthesis. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{imgs/architecture.pdf} \caption{Overview of the proposed framework. The style encoder $E_{style}$ extracts the style latent code $\mathbf{z}_{s}$ of a reference style image. The face latent code $\mathbf{z}_{f}$ is randomly sampled from the standard Gaussian distribution. Two MLPs transform the face and style latent codes into their \emph{W} spaces separately; they are then combined by the \emph{weighted blending module (WBM)} and fed into the generator $G$ to synthesize natural and stylized face images. Three discriminators are used in our method. The face discriminator $D_{face}$ distinguishes between real and fake natural-face images, the style discriminator $D_{style}$ distinguishes between real and fake stylized-face images, and the style latent discriminator $D_{style\_latent}$ predicts whether the stylized-face image is consistent with the style latent code $\mathbf{z}_{s}$.} \label{fig_overview} \end{figure} \section{Related Work} \label{sec_relatedwork} \paragraph{Generative Adversarial Networks} Recent years have witnessed rapid advances in generative adversarial networks (GANs) \cite{goodfellow2014generative} for image generation. The key to the success of GANs is the adversarial training between the generator and discriminator. Various improvements to GANs have been proposed to stabilize GAN training and improve image quality. WGAN \cite{arjovsky2017wasserstein} uses the Wasserstein distance as a new cost function to train the network. WGAN-GP \cite{gulrajani2017improved} and SNGAN \cite{miyato2018spectral} improve the stability of GANs by introducing a gradient penalty and spectral normalization, respectively, to make the training satisfy the 1-Lipschitz constraint. SAGAN \cite{zhang2019self} enlarges the receptive field using the self-attention mechanism. BigGAN \cite{brock2018large} shows significant improvements in image quality when training GANs at large scale. StyleGAN \cite{karras2019style,karras2020analyzing} redesigns the generator architecture with AdaIN layers, enabling it to better disentangle the latent factors of variation.
In this work, we employ GANs to generate high-quality stylized face images. \paragraph{Image-to-Image Translation} Building on the strong performance of GANs, many GAN-based image-to-image translation techniques have been explored in recent years \cite{isola2017image,zhu2017unpaired,liu2017unsupervised,huang2018multimodal,liu2019few,DRIT,DRIT_plus,choi2018stargan,choi2020starganv2,park2020swapping}. For image translation between two domains, Pix2pix \cite{isola2017image} proposes the first unified image-to-image translation framework. CycleGAN \cite{zhu2017unpaired} proposes a cycle-consistency loss, enabling the network to train with unpaired data. UNIT \cite{liu2017unsupervised} maps images in the source and target domains to a shared latent space to perform unsupervised image translation. UGATIT \cite{kim2019u} proposes an adaptive layer-instance normalization (AdaLIN) layer to control shapes during translation. For image translation across multiple domains, MUNIT \cite{huang2018multimodal} extends UNIT to multi-modal contexts by decomposing images into content and style spaces. FUNIT \cite{liu2019few} leverages a few-shot strategy to translate images using a few reference images from a target domain. DRIT++ \cite{DRIT_plus} also proposes a multi-modal and multi-domain model using a discrete domain encoding. Training with both latent codes and reference images, StarGANv2 \cite{choi2020starganv2} can provide both latent-guided and reference-guided synthesis. Park et al. \cite{park2020swapping} utilize an autoencoder structure to swap the style and content of two given images, which can also be regarded as a reference-guided synthesis method. Although these methods can translate images to multiple target domains, the diversity of target domains is still limited, and these methods require hundreds of training images for each domain. Besides, the above methods can only operate at a resolution of at most $256 \times 256$ when applied to human faces. \paragraph{Neural Style Transfer} The arbitrary stylized face generation task can also be regarded as a kind of neural style transfer \cite{gatys2016image,Johnson2016Perceptual,zhang2018multi,huang2017arbitrary}. Gatys \emph{et al.} \cite{gatys2016image} first propose an optimization-based neural style transfer method. Johnson \emph{et al.} \cite{Johnson2016Perceptual} train a feed-forward network to avoid the time consumption of optimization. Luan \emph{et al.} \cite{luan2017deep} propose a photorealism regularization term to preserve the structure of source images. The above methods can only transfer source images to a single style per model; many works have extended this flexibility and achieved multi-style or arbitrary style transfer, such as MSG-Net \cite{zhang2018multi}, AdaIN \cite{huang2017arbitrary}, WCT \cite{li2017universal} and SANet \cite{park2019arbitrary}. Although arbitrary style transfer methods can transfer images to arbitrary styles, they only transfer the reference style to source images globally without considering local semantic styles, which leads to global texture artifacts in the outputs. \section{Method} \label{sec_method} Our goal is to train a generator \begin{equation} \hat{x}_f, \hat{x}_s=G(\mathbf{z}_{f}, \mathbf{z}_{s}, i), \label{equ_G} \end{equation} that can generate an image pair (a natural face image $\hat{x}_f$ and its artistic counterpart $\hat{x}_s$) from a given pair of latent codes $(\mathbf{z}_{f}, \mathbf{z}_{s})$ and a blending indicator $i$.
The face latent code $\mathbf{z}_{f}$ controls the face identity, and the style latent code $\mathbf{z}_{s}$ controls the style of $\hat{x}_s$. The blending indicator $i$ decides which parts of $\mathbf{z}_{f}$ and $\mathbf{z}_{s}$ are blended to generate plausible images. Figure \ref{fig_overview} illustrates an overview of our framework. \subsection{Self-supervised Style Encoder} \label{sec_encoder} The style encoder $E_{style}$ is independently trained to extract the style representation from an artistic face image while discarding content information such as face identity. Many style transfer methods \cite{gatys2016image,Johnson2016Perceptual,zhang2018multi} obtain the style representation by calculating the Gram matrices of feature maps extracted by a pretrained VGG \cite{Simonyan15} network. However, simply concatenating the Gram matrices of all the selected feature maps would give the resulting style latent code (the ‘style embedding’) a very high dimension. For example, if we choose layers $relu1\_2$, $relu2\_2$, $relu3\_4$, $relu4\_4$ and $relu5\_4$, the style embedding would be a 610,304-dimensional vector ($64^2 + 128^2 + 256^2 + 2 \times 512^2$). A higher latent code dimension leads to a sparser distribution, which makes the generator harder to train and worse at style disentanglement. Hence, following the self-supervised method SimCLR \cite{chen2020simple, chen2020big}, we train the style encoder to reduce the dimension of the style embeddings. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{imgs/encoder.pdf} \caption{Architecture of the proposed style encoder, which consists of a pretrained \emph{VGG-19} \cite{Simonyan15} network, a \emph{Gram matrix and concat} module and an MLP predictor. More detailed notations are described in the contexts of Sec. \ref{sec_encoder}.} \label{fig_encoder} \end{figure} Figure \ref{fig_encoder} illustrates the architecture of our style encoder. The style image is augmented and then sent to a pretrained VGG \cite{Simonyan15} network. We calculate the Gram matrices of the selected feature maps, then flatten and concatenate them into a long vector (610,304-D in our case). The vector is then passed into a 4-layer MLP module (predictor) to reduce the dimension and generate the style embedding $\mathbf{z}_{s}$ of the input style image. Following $\mathbf{z}_{s}$, another 2-layer MLP serves as a projection head for contrastive learning. The augmentations in the original SimCLR include affine and color transformations, following the assumption that affine and color transformations do not change the classes of objects. However, for style encoding, the image style is strongly related to color, so we only use affine transformations in the augmentation step. When training the style encoder, only the parameters of the two MLP modules are optimized, while \emph{VGG-19} is fixed and serves as a feature extractor. The network is trained using the NT-Xent loss \cite{chen2020simple}: \begin{equation} \ell_{i, j}^{\mathrm{NT}-\mathrm{Xent}}=-\log \frac{\exp(\operatorname{sim}(\boldsymbol{z}_{i}, \boldsymbol{z}_{j}) / \tau)}{\sum_{k=1}^{2N} \mathbbm{1}_{[k \neq i]} \exp (\operatorname{sim}(\boldsymbol{z}_{i}, \boldsymbol{z}_{k}) / \tau)}, \label{equ_ntxent} \end{equation} where $(i, j)$ is a pair of positive examples (augmented from the same image), $\boldsymbol{z}$ is the projection result, $\operatorname{sim}(\cdot, \cdot)$ is the cosine similarity between two vectors, and $\tau$ is a temperature scalar.
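For concreteness, a minimal PyTorch sketch of the NT-Xent loss above is given below. The batch layout (rows $2k$ and $2k+1$ holding the two augmented views of image $k$) and the default temperature are our assumptions for illustration, not a description of the released implementation. \begin{verbatim}
import torch
import torch.nn.functional as F

def nt_xent(z, tau=0.5):
    # z: (2N, d) projections; rows 2k and 2k+1 are the two
    # augmented views of image k (assumed batch layout)
    z = F.normalize(z, dim=1)            # sim() becomes a dot product
    logits = z @ z.t() / tau             # (2N, 2N) similarity matrix
    n = z.shape[0]
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(eye, float('-inf'))  # enforce k != i
    pos = torch.arange(n, device=z.device) ^ 1  # positive of 2k is 2k+1
    return F.cross_entropy(logits, pos)  # mean of the loss over all i
\end{verbatim}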
\subsection{Generator} \label{sec_generator} The generator $G$ is similar to StyleGAN2 \cite{karras2020analyzing}, but its input is a combination of two latent codes: the face latent code $\mathbf{z}_{f}$ and the style latent code $\mathbf{z}_{s}$. In particular, $\mathbf{z}_{f}$ controls the face identity of the generated images and is randomly sampled from the standard Gaussian distribution $\mathcal{N}(0, 1)$. $\mathbf{z}_{s}$ controls the style of the generated stylized image; it can be either randomly sampled from $\mathcal{N}(0, 1)$ or encoded by the style encoder $E_{style}$ from a reference artistic image. Two MLPs transform $\mathbf{z}_{f}$ and $\mathbf{z}_{s}$ into their $W$ spaces as $\mathbf{w}_{f}$ and $\mathbf{w}_{s}$ separately. For $1024 \times 1024$ output, the dimensions of $\mathbf{w}_{f}$ and $\mathbf{w}_{s}$ are both $18 \times 512$. Controlled by the blending indicator $i$, the \emph{weighted blending module (WBM)} combines $\mathbf{w}_{s}$ and $\mathbf{w}_{f}$ into $\mathbf{w}$, which is then fed into the generator as the final code. \paragraph{WBM} Different resolution layers in the StyleGAN model are responsible for different features in the generated image \cite{karras2019style} (e.g., low-resolution layers control face shape and hair style, while high-resolution layers control color and lighting). Therefore, the blending weights of $\mathbf{w}_{s}$ and $\mathbf{w}_{f}$ should not be uniform across layers. We propose the \emph{WBM} to blend the two codes, which can be described as: \begin{equation} \mathbf{w} = \mathbf{w}_{s} \odot \hat\alpha + \mathbf{w}_{f} \odot ( \mathbbm{1} - \hat\alpha ), \label{equ_wbm1} \end{equation} \begin{equation} \hat\alpha = \mathbf{\alpha} \odot \mathbf{m}(i;\theta), \label{equ_wbm2} \end{equation} \begin{equation} \mathbf{m}(i;\theta)=\left[m_{0}, m_{1}, m_{2}, \ldots, m_{n}\right], \ m_{j}=\left\{\begin{array}{ll} 0 & j < i \\ \theta & j = i \\ 1 & j > i \end{array}\right., \ \theta \in (0, 1), \label{equ_wbm3} \end{equation} where $\odot$ denotes the broadcasting element-wise product, $\mathbf{w}_{s},\mathbf{w}_{f} \in \mathbbm{R}^{18 \times 512}$ are the input codes, and $\alpha \in \mathbbm{R}^{18}$ is a learnable vector that balances the blending weights of $\mathbf{w}_{s}$ and $\mathbf{w}_{f}$ in different layers. $\mathbf{m}(i;\theta)$ has the same dimension as $\alpha$; it controls which layers are blended. If $i=0$, $\mathbf{w} = \mathbf{w}_{s} \odot \alpha + \mathbf{w}_{f} \odot ( \mathbbm{1} - \alpha )$, and the generator outputs the stylized face image $\hat{x}_s$; if $i=18$, $\mathbf{w}=\mathbf{w}_{f}$, and the generator outputs the natural face image $\hat{x}_f$. When $0<i<18$, some low-resolution layers are not influenced by the style codes, which ensures that $\hat{x}_s$ keeps the same face identity as $\hat{x}_f$ (see Figure \ref{fig_overview}). $\theta$ is used for finer adjustment. \subsection{Discriminator} \label{sec_discriminator} There are three discriminators in our framework (see the right column in Figure \ref{fig_overview}). The face discriminator $D_{face}$ and the style discriminator $D_{style}$ have the same architecture as in StyleGAN2 \cite{karras2020analyzing}. $D_{face}$ distinguishes between real and fake natural-face images; it only receives images sampled from the natural-face dataset (FFHQ) or generated by $G$ when $i=18$. In contrast, $D_{style}$ distinguishes between real and fake stylized-face images, and it only receives images sampled from the artistic face dataset (AAHQ) or generated by $G$ when $i=0$.
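Both discriminators thus act on images produced under specific settings of the blending indicator $i$. For reference, a minimal sketch of the WBM of Eqs. (\ref{equ_wbm1})--(\ref{equ_wbm3}) is given below; the per-layer indexing is our literal reading of the equations (in this reading, the $i=0$ case quoted above corresponds to $\theta = 1$), so it should be taken as an illustration rather than the released implementation. \begin{verbatim}
import torch

def blending_mask(i, theta, n=18):
    # m(i; theta): 0 for layers j < i, theta at j = i, 1 for j > i
    m = torch.ones(n)
    m[:i] = 0.0
    if i < n:
        m[i] = theta
    return m

def wbm(w_s, w_f, alpha, i, theta=0.5):
    # w_s, w_f: (18, 512) codes; alpha: learnable (18,) weight vector
    a_hat = alpha * blending_mask(i, theta, n=alpha.shape[0])
    # broadcast the per-layer weights over the 512 channels
    return w_s * a_hat[:, None] + w_f * (1.0 - a_hat[:, None])
\end{verbatim}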
The style latent discriminator $D_{style\_latent}$ has the same architecture as the \emph{projection discriminator} \cite{miyato2018cgans}; it has two inputs, a generated stylized-face image $\hat{x}_s$ and a style latent code $\mathbf{z}_{s}$, and predicts whether $\hat{x}_s$ is consistent with $\mathbf{z}_{s}$. When training the network, we use an embedding queue to store the style latent codes of previously sampled style images embedded by $E_{style}$, and we randomly sample one as the fake $\mathbf{z}_{s}$. \subsection{Training objectives} \label{sec_trainingobjs} Given a natural-face image $x_{f} \in \mathcal{X}_{natural}$, a stylized-face image $x_{s} \in \mathcal{X}_{style}$ and a randomly sampled face latent code $\mathbf{z}_{f}\sim\mathcal{N}(0, 1)$, we train our framework using the following adversarial objectives. For the face discriminator, the blending indicator $i=18$. $G$ only takes $\mathbf{z}_{f}$ as the input and learns to generate a natural-face image $\hat{x}_f=G_{i=18}(\mathbf{z}_{f})$ via an adversarial loss \begin{equation} \mathcal{L}_{face} = \mathbb{E}_{x_f}[\log{D_{face}(x_f)}]+ \mathbb{E}_{\mathbf{z}_f}[\log{(1-D_{face}(G_{i=18}(\mathbf{z}_{f})))}], \label{equ_lface} \end{equation} where $D_{face}(\cdot)$ denotes the output of the face discriminator. For the style discriminator, the blending indicator $i=0$. $G$ takes $\mathbf{z}_{f}$ and $x_{s}$ as inputs and generates a stylized-face image $\hat{x}_s$ that has the same style as $x_{s}$. The loss is described as \begin{equation} \mathcal{L}_{style} = \mathbb{E}_{x_s}[\log{D_{style}(x_s)}] + \mathbb{E}_{\mathbf{z}_f, x_s}[\log{(1-D_{style}(G_{i=0}(\mathbf{z}_f, E_{style}(x_{s}))))}], \label{equ_lstyle} \end{equation} where the style encoder $E_{style}$ extracts the style latent code of $x_{s}$, and $D_{style}(\cdot)$ denotes the output of the style discriminator. For the style latent discriminator, we denote $\mathbf{z}_s=E_{style}(x_{s})$ as the style latent code of $x_s$ and randomly sample another style latent code $\mathbf{z}_s^-$ from the embedding queue as a negative sample; the loss can then be described as \begin{equation} \mathcal{L}_{style\_latent} = \mathbb{E}_{\mathbf{z}_f, x_s}[\log{D_{style\_latent}(x_s, \mathbf{z}_s)}] + \mathbb{E}_{\mathbf{z}_f, x_s}[\log{(1-D_{style\_latent}(\hat{x}_s, \mathbf{z}_s^-))}]. \label{equ_lstylelatent} \end{equation} Consequently, we combine all the above loss functions into our full objective as follows: \begin{equation} \mathcal{L}_{G} = \mathcal{L}_{face} + \mathcal{L}_{style} + \mathcal{L}_{style\_latent}, \label{equ_full_G} \end{equation} \begin{equation} \mathcal{L}_{D} = -\mathcal{L}_{G}. \label{equ_full_D} \end{equation} \section{Experiments} \label{sec_experiments} Our proposed method is able to generate arbitrary stylized face images with high quality and diversity. In this section, we describe the evaluation setups and test our method both qualitatively and quantitatively on a large number of images spanning a wide range of face and style varieties. \paragraph{Datasets} As described in Sec. \ref{sec_method}, for arbitrary stylized face generation, we need a natural-face dataset and an artistic face dataset to train the networks. We use FFHQ \cite{karras2019style} as the natural-face dataset, which includes 70,000 high-quality face images\footnote{The FFHQ dataset is under the Creative Commons BY-NC-SA 4.0 license.}.
In addition, we build a new dataset of artistic-face images, Artstation-Artistic-face-HQ (AAHQ), consisting of 33,245 high-quality artistic faces at $1024^2$ resolution (Figure \ref{fig_AAHQ}). The dataset covers a wide variety in terms of painting styles, color tones, and face attributes. The artistic images are collected from Artstation\footnote{\href{https://www.artstation.com}{https://www.artstation.com}} (thus inheriting all the biases of that website) and automatically aligned and cropped in the same way as FFHQ. Finally, we manually remove images without faces or with low quality. More details of this dataset can be found in the appendix. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{imgs/AAHQ_s.pdf} \caption{The AAHQ dataset offers a lot of variety in terms of painting styles, color tones and face attributes.} \label{fig_AAHQ} \end{figure} \paragraph{Baselines} We compare our model with several leading baselines on diverse image synthesis, including AdaIN \cite{huang2017arbitrary}, MUNIT \cite{huang2018multimodal}, FUNIT \cite{liu2019few}, DRIT++ \cite{DRIT_plus}, and StarGANv2 \cite{choi2020starganv2}. All of these methods can synthesize images in different modalities through the control of latent codes or reference images. All the baselines are trained on FFHQ and AAHQ using the open-source implementations provided by the authors, and our BlendGAN code is based on an unofficial PyTorch implementation of StyleGAN2\footnote{\href{https://github.com/rosinality/stylegan2-pytorch}{https://github.com/rosinality/stylegan2-pytorch}}. \paragraph{Evaluation metrics} To evaluate the quality of our results, we use the Fréchet inception distance (FID) metric \cite{heusel2017gans} to measure the discrepancy between the generated images and the AAHQ dataset. A lower FID score indicates that the distribution of generated images is more similar to that of the AAHQ dataset and that the generated images are more plausible as real stylized-face images. In addition, we adopt the learned perceptual image patch similarity (LPIPS) metric \cite{zhang2018unreasonable} to measure the style diversity of the generated images. Following \cite{choi2020starganv2}, for each face latent code (corresponding to a natural-face image), we randomly sample 10 style latent codes or reference style images to generate 10 outputs and evaluate the LPIPS score between every pair of outputs. We randomly sample 1000 face latent codes, repeat the above process, and average all of the scores as the final LPIPS score. A higher LPIPS score indicates that the model can generate images with larger style diversity. \subsection{Blending indicator} \label{sec_indicator} As described in Sec. \ref{sec_generator}, the indicator $i$ in the WBM controls in which layers the weights of $\mathbf{w}_{s}$ and $\mathbf{w}_{f}$ are blended. In the StyleGAN generator, low-resolution layers are responsible for face shape, while high-resolution layers control color and lighting. Hence, as $i$ gets larger, fewer low-resolution layers are influenced by the style codes, and the face shapes of the stylized outputs are more similar to the natural-face images. As shown in Figure \ref{fig_indicatorcompare}, when $i=0$, the generated images have the strongest style as well as the most different face identities from the corresponding natural-face images. When $i=18$, the generated images completely lose the style information and become natural-face images.
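The LPIPS diversity scores reported below follow the protocol described under the evaluation metrics above; a minimal sketch is given here, where we assume the reference \texttt{lpips} package and a hypothetical \texttt{generate(z\_f, z\_s)} call that returns an image tensor in $[-1, 1]$ (the real sampling and batching details may differ). \begin{verbatim}
import itertools
import torch
import lpips  # reference implementation: pip install lpips

@torch.no_grad()
def lpips_diversity(generate, n_faces=1000, n_styles=10, device='cuda'):
    # Mean pairwise LPIPS between outputs sharing a face latent code.
    metric = lpips.LPIPS(net='alex').to(device)
    scores = []
    for _ in range(n_faces):
        z_f = torch.randn(1, 512, device=device)
        outs = [generate(z_f, torch.randn(1, 512, device=device))
                for _ in range(n_styles)]
        for a, b in itertools.combinations(outs, 2):
            scores.append(metric(a, b).item())
    return sum(scores) / len(scores)
\end{verbatim}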
Table \ref{tab_indicatorcompare} also shows that the style quality and diversity of the generated images decrease as $i$ increases. We notice that when $i=6$, the generated images strike a good balance between stylization strength and face-shape consistency with the natural-face images. Therefore, we set $i=6$ in the qualitative experiments, while we use both $i=0$ and $i=6$ in the quantitative experiments. \begin{figure} \centering \includegraphics[width=\textwidth]{imgs/qualitative_comparison_indicator_1_s.pdf} \caption{Reference-guided stylized-face images with different blending indicators.} \label{fig_indicatorcompare} \end{figure} \begin{table} \caption{FID and LPIPS comparison with different blending indicators.} \label{tab_indicatorcompare} \renewcommand\tabcolsep{0.01\textwidth} \centering \begin{tabular}{cccccccccccc} \toprule \multicolumn{2}{c}{Indicator $i$} & 0 & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 \\ \midrule \multirow{2}{*}{latent} & FID $\downarrow$ & \textbf{8.97} & 12.45 & 12.75 & 23.17 & 42.45 & 57.34 & 68.31 & 76.91 & 78.25 & 76.73 \\ \cmidrule{2-12} & LPIPS $\uparrow$ & \textbf{0.581} & 0.571 & 0.568 & 0.515 & 0.459 & 0.367 & 0.304 & 0.207 & 0.145 & 0.159 \\ \midrule \multirow{2}{*}{reference} & FID $\downarrow$ & \textbf{3.79} & 6.39 & 6.82 & 15.08 & 34.33 & 51.49 & 63.73 & 76.00 & 77.11 & 76.97 \\ \cmidrule{2-12} & LPIPS $\uparrow$ & \textbf{0.661} & 0.651 & 0.650 & 0.599 & 0.540 & 0.450 & 0.377 & 0.237 & 0.191 & 0.160 \\ \bottomrule \end{tabular} \end{table} \subsection{Comparison on arbitrary stylized face generation} \label{sec_comparison} Our framework can generate stylized-face images with two mechanisms: latent-guided generation and reference-guided generation. \begin{figure} \begin{minipage}[t]{0.48\textwidth} \setlength{\belowcaptionskip}{11pt} \makeatletter\def\@captype{table} \caption{Quantitative comparison on latent-guided stylized-face generation.} \label{tab_latentguided} \centering \begin{tabular}{lll} \toprule Method & FID $\downarrow$ & LPIPS $\uparrow$ \\ \midrule MUNIT & 29.69 & 0.394 \\ FUNIT & 176.96 & 0.500 \\ DRIT++ & 22.53 & 0.448 \\ StarGANv2 & 50.20 & 0.312 \\ \textbf{BlendGAN ($i=6$)} & 23.17 & 0.515 \\ \textbf{BlendGAN ($i=0$)} & \textbf{8.97} & \textbf{0.581} \\ \bottomrule \end{tabular} \end{minipage} \quad \begin{minipage}[t]{0.48\textwidth} \makeatletter\def\@captype{table} \caption{Quantitative comparison on reference-guided stylized-face generation.} \label{tab_refguided} \centering \begin{tabular}{lll} \toprule Method & FID $\downarrow$ & LPIPS $\uparrow$ \\ \midrule AdaIN & 37.23 & 0.345 \\ MUNIT & 103.61 & 0.192 \\ FUNIT & 87.71 & 0.327 \\ DRIT++ & 31.71 & 0.241 \\ StarGANv2 & 50.03 & 0.307 \\ \textbf{BlendGAN ($i=6$)} & 15.08 & 0.599 \\ \textbf{BlendGAN ($i=0$)} & \textbf{3.79} & \textbf{0.661} \\ \bottomrule \end{tabular} \end{minipage} \end{figure} \paragraph{Latent-guided generation.} For latent-guided generation, the style latent code $\mathbf{z}_{s}$ is randomly sampled from the standard Gaussian distribution. Figure \ref{fig_latentguided} shows a qualitative comparison of the competing methods. The results of AdaIN \cite{huang2017arbitrary} are not shown because it is not able to take latent codes as style inputs. Since FUNIT is designed for few-shot image-to-image translation, it cannot generate face images when given randomly sampled style latent codes. The results of MUNIT and DRIT++ have severe visible artifacts, and they cannot preserve face identities in some images.
StarGANv2 has the best performance among the baselines, but the generated images still have some subtle artifacts and are not as natural as our results. We observe that the stylized-face images generated by our method have the highest visual quality and preserve the face identities of the source images well. The quantitative comparison is shown in Table \ref{tab_latentguided}. Although the FID score of our method at $i=6$ is slightly higher than that of DRIT++, the qualitative comparison in Figure \ref{fig_latentguided} shows that our stylized images have fewer artifacts and higher visual quality. When $i=0$, our method has the lowest FID score and the highest LPIPS, indicating that our model can generate stylized images with the best visual quality and style diversity. \begin{figure} \setlength{\belowcaptionskip}{-3pt} \begin{minipage}[t]{0.41\textwidth} \makeatletter\def\@captype{figure} \centering \includegraphics[width=\textwidth]{imgs/qualitative_comparison_2_lat_bs.pdf} \caption{Comparison of latent-guided generation. The source images are generated by our model with indicator $i=18$, and the style latent codes are randomly sampled from $\mathcal{N}(0, 1)$. From left to right: the source image, the results of MUNIT \cite{huang2018multimodal}, FUNIT \cite{liu2019few}, DRIT++ \cite{DRIT_plus}, StarGANv2 \cite{choi2020starganv2} and our BlendGAN. Digital zoom-in recommended.} \label{fig_latentguided} \end{minipage} \quad \begin{minipage}[t]{0.557\textwidth} \makeatletter\def\@captype{figure} \centering \includegraphics[width=\textwidth]{imgs/qualitative_comparison_adain_1_bs.pdf} \caption{Comparison of reference-guided generation. The source images are generated by our model with indicator $i=18$, and the reference images are sampled from the AAHQ dataset. From left to right: the reference artistic-face image, the source natural-face image, the results of AdaIN \cite{huang2017arbitrary}, MUNIT \cite{huang2018multimodal}, FUNIT \cite{liu2019few}, DRIT++ \cite{DRIT_plus}, StarGANv2 \cite{choi2020starganv2} and our BlendGAN. Digital zoom-in recommended.} \label{fig_refguided} \end{minipage} \end{figure} \paragraph{Reference-guided generation.} For reference-guided generation, the style latent code $\mathbf{z}_{s}$ is embedded by the style encoder $E_{style}$ from a reference artistic face image. Figure \ref{fig_refguided} illustrates the qualitative comparison results. The results show that the style transfer method AdaIN cannot account for semantic style information and introduces severe visible artifacts in the generated stylized faces. FUNIT and DRIT++ also produce visible artifacts in some images, as shown in the first and fourth rows. Although MUNIT allows a reference image to be input at the testing stage, it only transfers the average color of the reference images, and the results do not have textures similar to the references. StarGANv2 has the best performance among the baselines; however, the style of some generated images is not consistent with their references, as shown in the fourth and fifth rows. In comparison, our BlendGAN generates stylized-face images with the highest quality, and the style of the generated images is the most consistent with the references, thanks to our well-designed training strategy. The quantitative comparison is shown in Table \ref{tab_refguided}. For $i=0$ and $i=6$, our method achieves FID scores of 3.79 and 15.08, respectively, outperforming all the previous leading methods by a large margin. This implies that the images generated by our model are the most similar to real stylized-face images.
The LPIPS score of our method is also the highest, indicating that our model can produce faces with the most diverse styles given the reference images. Besides, it is worth noting that our method with $i=0$ performs better than with $i=6$ because, for $i=6$, the low-resolution layers are not affected by the style codes. \section{Conclusions} \label{sec_conclusions} In this paper, we propose a novel framework for arbitrary stylized face generation. Trained with FFHQ and a new large-scale artistic face dataset (AAHQ), our framework can generate an unlimited number of arbitrarily stylized faces as well as their unstylized counterparts, and the generation can be either latent-guided or reference-guided. Specifically, we propose a self-supervised style encoder to extract the style representation from reference images, as well as a weighted blending module to implicitly combine the face and style latent codes for controllable generation. The experimental results show that our proposed model can generate images with high quality and style diversity, and the generated images have good style consistency with the reference images, remarkably outperforming the previous leading methods. Our experiments are currently only performed on face datasets; future work includes extending this framework to other domains, such as stylized landscape generation. \section*{Broader Impact} \label{sec_broader} This work would be of great interest to the general community. It shows the great potential of applying GAN models to stylized image generation, which could be further used for other tasks, such as style transfer, style manipulation, and new style creation. Although the goal of this work is to generate faces for stylistic purposes, it still poses some ethical challenges, like other face generation methods. It is well documented that common face datasets lack diversity \cite{merler2019diversity}. As our model is trained using the FFHQ \cite{karras2019style} dataset, it inherits the biases of this dataset like other StyleGAN-based methods \cite{pinkney2020resolution,kwong2021unsupervised,menon2020pulse}. Specifically, similar to the issues discussed in previous works \cite{salminen2020analyzing,menon2020pulse}, for natural-face images, the model generates light-skinned faces more frequently than darker-skinned faces; this concern can be mitigated by enlarging the racial diversity of the natural-face dataset. For stylized-face images, our AAHQ dataset already covers a wide diversity of faces (see Figure \ref{fig_AAHQ} and Section B of the supplementary materials). In addition, since our framework only extracts the style representation of images in the AAHQ dataset and does not extract information related to face identity, the ethical biases (if any) of this dataset will not be transferred to the generated images. Therefore, the stylized-face images generated by our model have equally good performance for both light-skinned and darker-skinned faces (as shown in Figures \ref{fig_teaser}, \ref{fig_latentguided} and \ref{fig_refguided}), which further indicates the superiority of our framework. Nonetheless, our model still has the same challenges as PULSE \cite{menon2020pulse} (i.e., input images with darker skin may be stylized as lighter-skinned faces in the output), and these challenges are worthy of further research. Besides, though not the purpose of this work, the natural-face images generated by our model could also be misused in deepfake-related applications, such as fake portraits on social media \cite{hill2020designed}. Wang et al.
\cite{wang2020cnn} showed that a classifier trained to distinguish between real images and synthetic images generated by ProGAN \cite{karras2018progressive} was able to detect fakes produced by other generators, among them StyleGAN \cite{karras2019style} and StyleGAN2 \cite{karras2020analyzing}. This finding can partially mitigate the above concern. Last but not least, faces in the FFHQ and our AAHQ datasets are biometric data and thus need to be sourced and distributed carefully, with attention to potential privacy, consent and copyright issues. \section*{Acknowledgments} We sincerely thank all the reviewers for their comments. We also thank Zhenyu Guo for help in preparing the comparison to StarGANv2 \cite{choi2020starganv2}. \small{ \bibliographystyle{unsrt}
\section{Introduction} User experience is an increasingly important consideration in systems, services and products, with many applications emphasizing various aesthetic, affective and hedonic properties \cite{hassenzahl2006user,hassenzahl2008user}. Such qualities are particularly important in applications with no instrumental purpose, such as video games, digital toys and interactive artwork \cite{swink2008game, bernhaupt2010user, sanchez2012playability}. Unfortunately, these applications can be challenging to optimize as they can contain numerous parameters that impact user experience, but for which there are no objective quantities to be maximized; such parameters therefore need to be set on the basis of subjective preferences. In video games, for example, character movement speed has a significant impact on user experience: too slow and navigating the game world can be tedious, but too fast and it can become difficult to control \cite{swink2008game}. The sweet spot that balances these two extremes will be determined by subjective preferences that are influenced by contextual factors (e.g.~larger characters tend to move slower) and other human factors (e.g.~expert players might prefer a faster, more challenging game) \cite{swink2008game}. How should we go about setting such parameters? A single user (e.g.~the developer) could simply set a given parameter to whatever feels intuitively correct. Alternatively, multiple users could be recruited to provide feedback via questionnaires or to state their preference for one parameterization over another (i.e.~pairwise comparisons). Lastly, we could make assumptions about how user behavior relates to experience, e.g.~we could assume that engagement (time spent) correlates with positive user experiences. Unfortunately, all these approaches have downsides for inferring subjective preferences: a single user is unlikely to be representative of the group, and questionnaire response scales need to be validated \cite{lindgaard2013introduction} and require large sample sizes per parameterization. Pairwise comparisons do not scale, with the number of comparisons growing quadratically with the number of items \cite{perez2017practical}. Finally, behavioral data can be misinterpreted, correlating with negative experiences as well as positive ones \cite{tufekci2018youtube}. In this article, we present a critiquing-based approach for modeling relationships between system parameters and subjective preferences called {\em collective criticism}. We consider a crowdsourcing scenario where a group of individuals are presented with a system or product and asked to critique their experiences in terms of a given attribute. These critiques take the form of retrospective summaries, such as ``the food was too spicy/bland'' after eating a meal, or ``the weather was too hot/cold'' when describing a vacation \cite{chen2012critiquing, medlar2017towards}. Such feedback is qualitative, but we can extract quantitative information by considering the context within which the feedback was given. Using the video game example from before, if character movement is ``too slow'' and the speed was set to $x$, then this implies that the optimal speed lies in the interval from $x$ to $+\infty$ (strictly speaking, the right-censored interval $(x,+\infty]$). Importantly, users do not need to know anything about how the game was implemented to give their critiques: they are based solely on their experiences.
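To make this mapping from critiques to intervals concrete, a minimal sketch is given below; the anchor strings and the lower bound of zero are illustrative assumptions for a speed-like parameter, not a library API. \begin{verbatim}
import math

def critique_to_interval(x, critique):
    # "too slow": the optimal speed lies above x -> right-censored [x, inf)
    # "too fast": the optimal speed lies below x -> [0, x] for a parameter
    #             that cannot be negative (otherwise left-censored)
    if critique == 'too slow':
        return (x, math.inf)
    elif critique == 'too fast':
        return (0.0, x)
    raise ValueError(critique)
\end{verbatim}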
This method of transforming critiques into intervals allows us to model relationships between user experience and system parameters using interval regression. We demonstrate the generality of this approach using two studies related to images generated using neural style transfer and users' experience of challenge after playing the video game Tetris. At present, our approach cannot guard against abuse from bots and disinterested participants on crowdsourcing platforms like Amazon Mechanical Turk. Instead, we focused on groups of users that may not have any particular expertise, but are assumed to provide honest feedback. Examples of such groups include beta testers for a video game that is currently under development or a design community that meets to critique one another's work. To replicate these scenarios, we present studies performed in uncontrolled environments where participants were informally recruited by interrupting their daily lives, e.g. during a conference coffee break or while studying in the university library. The main contributions of this paper are as follows: \begin{itemize}[leftmargin=0.2in,itemsep=4pt,topsep=6pt,parsep=0pt,partopsep=0pt] \item A novel critiquing-based approach for modeling relationships between system parameters and subjective preferences called \emph{collective criticism} that combines randomized tasks with summary retrospective feedback. \item A modeling approach that transforms critiques into censored intervals to be analyzed using statistical packages for interval regression. \item We present two case studies using collective criticism to model (i) aesthetic preferences for images generated using neural style transfer, and (ii) user experiences of challenge after playing the video game Tetris. \end{itemize} \section{Related Work} In this section, we review related work on crowdsourcing preferences and how critiquing has been used as an interaction mechanism in different types of information systems. \subsection{Crowdsourced Preferences} Crowdsourcing has eased the collection of subjective preferences by reducing cost and turnaround time, while showing a high degree of agreement with data collected in controlled laboratory environments \cite{behrend2011viability,crump2013evaluating,tse2016crowdsourcing}. In computing, crowdsourcing has been used to collect subjective preferences for algorithm development and evaluation. In particular, it has been used to collect relevance judgments \cite{alonso2008crowdsourcing}, data for sentiment analysis \cite{bakshi2016opinion}, assessments of toxicity in online discussions \cite{aroyo2019crowdsourcing} and even examples of irony and sarcasm \cite{filatova2012irony}. In these areas, items are scored independently of one another using either binary (e.g.~relevant/not relevant) or ordinal (e.g.~negative to positive) labels. While these labels are considered subjective, there is assumed to be a consensus within a given culture or community. As this data tends not to be aggregated, but used as training data, each label needs to be correct and, therefore, there is an extensive literature on study design and statistical methods for quality control (for a recent survey, see \cite{jin2020technical}). In other domains, crowdsourcing is used to understand the opinions and aesthetic preferences of the general public. It has been used, for example, to study the aesthetics of platforming games \cite{shaker2012crowdsourcing}, 3D models \cite{dev2017polygons} and portrait photography \cite{expressions33mirror}.
Crowdsourcing aesthetic preferences has seen extensive use in reconstructive and cosmetic surgery. In reconstructive surgery, it has been used to compare the aesthetics of cleft lip outcomes \cite{tse2016crowdsourcing} and the results of different surgical techniques \cite{suchyta2020applied}. In cosmetic surgery, it has been used to assess buttock augmentation outcomes \cite{vartanian2018ideal} and to characterize anatomical aesthetic preferences for male \cite{massie2021defining} and female \cite{frojo2021defining} genitalia. The use of crowdsourcing is considered important as surgical aesthetic outcomes are usually only assessed by an individual, either the patient or the surgeon, which can lead to biased assessments \cite{azadgoli2019public}. In this article, we include a study comparing the aesthetic preferences of an individual (in our case, a developer) with our approach and find similar disparities. Beyond aesthetics, crowdsourcing has recently been used to understand the public perception of topical subjects, such as AI fairness \cite{van2019crowdsourcing} and moral decision-making in the context of autonomous driving \cite{awad2018moral}. All of the above examples used either questionnaires or pairwise comparisons to infer subjective preferences, both of which have weaknesses. Questionnaire response scales need to be validated to ensure they are measuring what they purport to measure \cite{lindgaard2013introduction}. Pairwise comparisons do not suffer from this issue, but can require exceptionally large sample sizes (i.e.~multiples of ${n \choose 2}$ pairwise comparisons), practically limiting assessment to a relatively small number of items \cite{perez2017practical}. Our approach does not require validation like a questionnaire because we only ask a single question that often directly references a given parameter (i.e.~we do not use multi-item scales, which would require us to assess construct validity). Furthermore, as users are critiquing individual parameters and not making pairwise comparisons, sample sizes can be much lower. While not usually considered crowdsourcing, A/B testing uses randomized experiments to compare the effectiveness of two (or more) versions of the same system on the basis of conversion rates, e.g.~click-throughs or purchases \cite{kohavi2017online}. This makes the assumption that a given behavior is correlated with positive user experience \cite{kohavi2017surprising}. However, it can also result in unintended consequences if the measured response corresponds to multiple outcomes, e.g.~video watch time is correlated with outrage as well as enjoyment \cite{tufekci2018youtube}. Our approach is based on stated preferences, avoiding the ambiguity-related issues associated with inferring revealed preferences from behavioral data. \subsection{Critiquing} Critiquing has previously been used as an interaction mechanism in both interactive search and conversational recommender systems \cite{chen2012critiquing}. In critiquing-based systems, users provide critiques in relation to item features, e.g.~``too expensive'', to iteratively navigate a complex information space \cite{chen2012critiquing}. The FindMe system was the first critiquing recommender, combining browsing with the critiquing of previously retrieved examples \cite{burke1997findme}.
Later systems utilized the approach to develop interactive retrieval systems for e-commerce \cite{burke2002interactive,faltings2004designing}, decision support \cite{pu2004decision} and preference-based search \cite{viappiani2006preference}. However, the most common application of critiquing is in conversational recommender systems \cite{jannach2020survey}, where it has been applied to various domains, including movie \cite{vig2011navigating} and music recommendation \cite{jin2019musicbot}. Recently, critiquing-based systems have used language-based attributes, rather than fixed item features, to automatically identify attributes of an item that can be critiqued \cite{wu2019deep}. In interactive systems, critiques are usually conceptualized as constraints that are applied across items \cite{burke2002interactive,faltings2004designing,pu2004decision,viappiani2006preference}. For example, if a plane ticket is critiqued as too expensive, it does not make sense to recommend tickets with higher prices. In this work, we are concerned with modeling user preferences, so critiques are modeled probabilistically even though they may be constraints from the perspective of individuals. \section{Collective Criticism} Collective criticism is a critiquing-based approach for modeling the relationships between system parameters and subjective preferences. Each study based on collective criticism requires three elements: \begin{enumerate*}[label=(\roman*)] \item critique elicitation, \item randomized tasks, and \item statistical modeling \end{enumerate*} to analyze user preferences. \subsection{Critique Elicitation} During a study, each participant performs a task where they interact with a system or product for a short period of time. After completing the task, participants are asked to critique the property under study using summary retrospective feedback. In concrete terms, summary retrospective feedback takes the form of judgments, such as ``too much'' or ``too little'' of a given property. For example, suppose we wanted to optimize the volume of an audible, but discreet, ringtone for an office environment. We would ask participants to listen to a ringtone and state whether they thought it was ``too quiet'' to be audible or ``too loud'' to be discreet. This kind of critique is qualitative: it does not contain information related to how much something should change, merely the direction of that change. This allows participants to respond with their gut instinct and can be used when an appropriate scale for quantitative feedback does not exist. \subsection{Randomized Tasks} Here, we detail the assumptions of our approach and describe the steps necessary to design and conduct a study. \subsubsection{Assumptions} We assume that investigators have a hypothesis that a given parameter, $p$, affects a specific property of the system under study. Furthermore, we assume that optimizing $p$ necessitates making a trade-off, i.e.~setting $p$ either too high or too low is detrimental to user experience, but that there is a ``sweet spot'' where the average user is maximally satisfied. We do not assume that there exists a single optimal parameterization for all users, as this depends on the experimental design of a given study. Indeed, we provide a worked example in Section~\ref{sec:workedexample} that assumes a single optimal parameterization and two further case studies in Sections~\ref{sec:neuraltransfer} and \ref{sec:tetrisstudy} where $p$ is modeled in terms of other explanatory variables.
\subsubsection{Effective Parameter Ranges} Investigators need to determine the effective range for the parameter, $p$. This could be the entire range of the parameter, e.g.~the decision threshold in a probabilistic classifier is 0 to 1 inclusive, or be limited to a given interval. In this article, we determined the effective range of parameters by trial and error; however, it could also be limited for physiological reasons, e.g.~human hearing is limited to 20 Hz to 20 kHz, or technological reasons, e.g.~telephony limits audio frequencies to 300 Hz to 3.4 kHz. Therefore, optimizing the frequency of a tone would have different effective ranges under different circumstances. \subsubsection{Summary Retrospective Anchors} Study participants give critiques using summary retrospective feedback guided by the investigator. The study design, therefore, needs to include verbal anchors to ensure that users critique the correct property. In general, verbal anchors are words used to indicate the informal meaning of response scales, such as ``strongly agree'' and ``strongly disagree''. In our case, anchors are judgments, such as ``too hot'' and ``too cold''. Selecting appropriate verbal anchors is important for two reasons: \begin{enumerate*}[label=(\roman*)] \item from the participants' perspective, anchors need to capture their collective understanding of the extremes of the property being assessed, and, \item from the investigator's perspective, anchors need to correspond to increasing and decreasing the parameter being optimized. \end{enumerate*} \subsubsection{Procedure} We extract quantitative information from summary retrospective feedback by changing the underlying conditions from which the assessment is made. We achieve this by randomizing the parameter of interest within the parameter's effective range. The procedure is as follows: \begin{enumerate}[leftmargin=0.2in,itemsep=4pt,topsep=6pt,parsep=0pt,partopsep=0pt] \item We set the parameter, $p$, to a value selected uniformly at random from the parameter's effective range. \item Participants are instructed to perform a study-specific task. Participants could be asked to use a system for a given period of time or to simply look at an image. \item After completing the task, participants are asked to critique their experience with respect to a given property using the study's summary retrospective anchors (e.g.~too high/too low). \item For each observation, we record the random value of $p$, the participant's critique and any additional metadata or user behavior data that is to be used for modeling (see Section~\ref{sec:tetrisstudy} for an example). \end{enumerate} \noindent If we want to understand the impact that other parameters in the system have on $p$ (i.e.~interactions), then these additional parameters also need to be randomized in Step 1 and recorded along with $p$ in Step 4 (see Section~\ref{sec:neuraltransfer} for an example). \subsection{Statistical Modeling} After all the tasks have been performed, we model the data set using interval regression. This requires us to transform participants' critiques into censored and/or non-censored intervals. We use left-censored intervals to represent when participants stated a parameter was set too high.
That is, if the parameter being optimized was assigned the random value $p$, then the resulting censored interval is $(-\infty, p]$, i.e.~while we do not know the optimal value that would maximize the participant's experience, we assume that it lies in the interval up to and including $p$. If the effective range of this parameter, however, is such that $p$ cannot be negative, then the (non-censored) interval would be $[0,p]$. We use right-censored intervals, $[p, +\infty)$, when the parameter was set too low, following the same logic. Figure~\ref{fig:intreg} shows the intervals from the worked example in Section~\ref{sec:workedexample}, where left-censored intervals are depicted as red arrows and right-censored intervals as blue arrows. In interval regression, we let $y = \mathbf{X}\beta + \epsilon$, where $y$ is a continuous response variable and the errors are assumed to be Gaussian, i.e.~$\epsilon \sim \mathcal{N}(0, \sigma^2)$, where $\sigma$ is the standard deviation. In the general case, the log likelihood is composed of four terms: \begin{equation*} \begin{split} \ln L = & - \frac{1}{2} \sum_{j \in \mathcal{C}}{\left\{ \left(\frac{y_j - \mathbf{X}_j\beta}{\sigma} \right)^2 + \log(2\pi\sigma^2) \right\}} \\ & + \sum_{j \in \mathcal{L}} \log \Phi \left( \frac{y_{\mathcal{L}_j} - \mathbf{X}_j\beta}{\sigma} \right) \\ & + \sum_{j \in \mathcal{R}} \log \left\{ 1 - \Phi \left( \frac{y_{\mathcal{R}_j} - \mathbf{X}_j\beta}{\sigma} \right) \right\} \\ & + \sum_{j \in \mathcal{I}} \log \left\{ \Phi \left( \frac{y_{2_j} - \mathbf{X}_j\beta}{\sigma} \right) - \Phi \left( \frac{y_{1_j} - \mathbf{X}_j\beta}{\sigma} \right) \right\}, \end{split} \end{equation*} where $\Phi(\cdot)$ is the standard normal cumulative distribution function. Each term considers one of the four types of observation: point observations ($\mathcal{C}$), left-censored intervals ($\mathcal{L}$), right-censored intervals ($\mathcal{R}$) and intervals with two end-points ($\mathcal{I}$), respectively \cite{amemiya1973regression}. We note that, in our case, $\mathcal{C}$ is empty as we have no point observations, though a task could allow participants to respond that $p$ is optimal. In our experience, however, participants rarely insist that a parameterization is optimal, owing to their limited experience with the system under investigation. Throughout this article, we used the R Survival package to fit interval regression models \cite{therneau2014package}. \begin{figure} \centering \includegraphics[width=\columnwidth]{INTERVAL_REGRESSION.pdf} \caption{Interval data and final result from the worked example in Section~\ref{sec:workedexample}. Left-censored intervals are colored red and right-censored intervals blue. The $y$ coordinate of each interval is the order in which the tasks were performed. } \label{fig:intreg} \end{figure}
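To make the likelihood concrete, the following Python sketch fits the two censored terms that arise in our studies by direct maximization, using simulated critiques in the spirit of the worked example in Section~\ref{sec:workedexample}. It is an illustration only: the numeric values and the simulated-participant model are assumptions, and the analyses in this article were performed with the R Survival package.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Each participant sees a uniformly random threshold x and answers
# "greater than x" (right-censored interval [x, +inf)) or
# "less than x" (left-censored interval (-inf, x]).
rng = np.random.default_rng(1)
x = rng.uniform(0, 1000, size=60)           # randomized parameter values
latent = rng.normal(455.0, 120.0, size=60)  # participants' internal estimates
said_greater = latent > x                   # simulated critiques

def neg_log_lik(theta):
    mu, log_sigma = theta
    z = (x - mu) / np.exp(log_sigma)        # log-sigma keeps sigma positive
    # Right-censored term: log(1 - Phi(z)); left-censored term: log Phi(z).
    ll = np.where(said_greater, norm.logsf(z), norm.logcdf(z))
    return -ll.sum()

fit = minimize(neg_log_lik, x0=np.array([500.0, np.log(100.0)]))
print(fit.x[0])  # maximum-likelihood estimate of the latent mean
\end{verbatim}

With an intercept-only model such as this, the fitted mean plays the role of the optimal parameter value; covariates enter by replacing the constant $\mu$ with $\mathbf{X}_j\beta$.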
\subsection{Worked Example} \label{sec:workedexample} We present a toy cognitive estimation task to demonstrate the simplest concrete example of collective criticism. \subsubsection{Objective:} We placed 568 jelly beans in a jar and used collective criticism to estimate the quantity of jelly beans. We determined empirically that the jar could hold $\mathord{\sim}$1000 jelly beans, making the effective range 0--1000, inclusive. We used ``greater than'' and ``less than'' as summary retrospective anchors. \subsubsection{Task:} For each participant, a number $x$ was sampled uniformly at random from 0--1000, inclusive. Participants were handed the jar of jelly beans and asked the following question: \begin{displayquote} \em How many jelly beans are in the jar: {\bf \em greater than x} or {\bf \em less than x}? \end{displayquote} As follow-up questions, we asked participants to freely estimate the number of jelly beans in the jar, and whether they felt critiquing or freely estimating the same quantity was more cognitively demanding. \subsubsection{Participants:} We recruited 60 participants during the coffee breaks of a conference organized in our department (27 female, 33 male). The participants ranged from PhD students to full professors with a background in theoretical computer science, optimization, or a related field. \subsubsection{Results:} The 60 participants gave 60 critiques and 60 estimates of the number of jelly beans. No data points were excluded from the analysis. The difference between participants' mean estimate of the number of jelly beans (M = 465.52, 95\% CI [409.98, 521.05]) and the answer derived from collective criticism (M = 454.96, 95\% CI [382.97, 526.95]) was not statistically significant (t(62.56) = -0.260, p = 0.796), showing that our approach is as accurate as asking users to estimate directly. Furthermore, a majority of participants (47/60, $p = 1.22 \times 10^{-5}$, binomial test) reported that they perceived critiquing to be less cognitively demanding than free estimation. \begin{figure*} \centering \includegraphics[width=\textwidth]{NST_NUMBERLINE.pdf} \caption{Neural style transfer combines a content image (far left) with a style image (far right). The degree of stylization is controlled by the style weight parameter (middle), which can vary from barely noticeable ($10^8$) to unrecognizable ($10^{11}$).} \label{fig:nst} \end{figure*} \section{Study 1: Aesthetic Preferences in Neural Style Transfer} \label{sec:neuraltransfer} In our first study, we used collective criticism to model users' aesthetic preferences for images generated using neural style transfer \cite{gatys2015neural}. Neural style transfer combines the content from one image (the content image) with the style from another image, usually an artwork (the style image); see Figure~\ref{fig:nst} for an example. We demonstrate how collective criticism can be used to elicit preferences, model different hypotheses and make practical parameterization decisions. \subsection{Objective} Neural style transfer has two hyperparameters: a content weight and a style weight; however, if one parameter is kept constant, there is only one free parameter. We wanted to identify the highest style weight that could be applied to a photo without the subject becoming unrecognizable. Furthermore, we hypothesized that different style weights would be optimal for different kinds of photo, such as head, waist-up and full body shots, i.e.~we hypothesized that there is an interaction between style weight and photo type. \subsection{Neural Style Transfer} \subsubsection{Implementation:} We used the fast neural style transfer implementation included in the PyTorch library\footnote{\url{https://github.com/pytorch/examples/tree/master/fast_neural_style}} that is based on perceptual loss \cite{johnson2016perceptual} and instance normalization \cite{ulyanov2016instance}.
\subsubsection{Model Training:} We determined by trial and error that, if the content weight is kept constant at $10^5$, the effective range for the style weight is $10^{8}$--$10^{11}$, where higher values result in an output image more heavily influenced by the style image (see Figure~\ref{fig:nst}). We trained 101 neural style transfer models using the COCO 2014 data set \cite{lin2014microsoft} and used an image of a mosaic included with PyTorch as the style image (see Figure~\ref{fig:nst}, far right). Each model had a different style weight parameter, where the exponent was incremented by 0.03, i.e.~8.0, 8.03, \dots, 10.97, 11.0. This increment was chosen empirically so that the difference between images generated by consecutive models was imperceptible. Each model was trained for 2 epochs. \subsubsection{Content Images:} We selected photos from a collection of permissively licensed stock photographs\footnote{\url{https://www.pexels.com/license/}} to use as content images. We selected three categories of portrait: head shots, waist-up shots and full body shots. We identified 39 images of similar size, with 13 photos in each of the three categories. All categories included men and women of approximately working age and from different ethnicities. \subsection{Task} Participants were told we were creating a new website for our research group and wanted to make our photos look more interesting using neural style transfer. We briefly explained the concept of neural style transfer using example images to illustrate different levels of stylization and stated that we wanted output images to be as strongly influenced by the style image as possible without the identity of the person in the content image becoming unrecognizable. Each participant was shown 10 randomly sampled images, stylized with randomly sampled style weights. After being shown each image, participants were asked the following question: \begin{displayquote} \em Do you think the image should look {\bf \em more realistic} or {\bf \em more artistic}? \end{displayquote} During the study, we logged the photos shown, including the photo category, style weights and user critiques. Each experiment lasted a total of $\mathord{\sim}$3 minutes. \subsection{Participants} We recruited 31 participants from the Faculty of Science by walking up to people in the corridors of the Department of Computer Science and the Department of Mathematics and Statistics (10 female, 21 male). All participants were PhD students or postdoctoral researchers. \subsection{Results} The 31 participants examined a total of 310 photos: 95 head shots, 117 waist-up shots and 98 full body shots. \subsubsection{Baseline:} There are no validated questionnaires for rating image stylization, and pairwise comparisons between all images using all models would require an impractical sample size. Instead, we used a pre-trained model included in the PyTorch distribution as a baseline to understand whether the developer's original parameterization was sufficient to solve this problem. The pre-trained model used a content weight of $10^{5}$ (the same as our models) and a style weight of $10^{10}$. \subsubsection{Preference Models:} We fitted two preference-based models using collective criticism.
In the first model, each study participant was modeled as a random effect (due to repeated measures) and photo type (head, waist-up or full body) was modeled as a fixed effect: $$y_{ij} = \beta_0 + \beta_1p_{ij} + u_j,$$ where $y$ are intervals derived from critiques, $p$ is the photo type and $u$ are random intercepts per user, $j$. The second model was identical to the first, but with the photo type term excluded: $$y_{ij} = \beta_0 + u_j$$ The style weights from the first model ($\log_{10}$) for head (M = 9.71, 95\% CI [9.46, 9.95]), waist-up (M = 9.53, 95\% CI [9.31, 9.75]) and body shots (M = 9.39, 95\% CI [9.14, 9.63]) were very similar to one another, with overlapping 95\% confidence intervals. Indeed, the difference in model fit between the two models was not statistically significant ($\chi^2$ (1.55, N = 310) = 3.14, $p$ = 0.14). This suggests that the style weight from the second model (M = 9.55, 95\% CI [9.36, 9.73]) is suitable for all three types of photo, assuming all other conditions, such as photo size and the age range of the subject, remain constant. Finally, the style weight parameter used in the pre-trained baseline was $10^{10}$, which falls outside all four confidence intervals; the difference is therefore statistically significant. \subsection{Summary} Given these findings, we argue that there is insufficient evidence for using different style weights for each photo type and that the style weight used to train the neural network should be set to $10^{9.55}$ in order to balance style and recognizability. However, had we collected more data, the confidence intervals would likely have been narrower, and we might have been able to recommend category-specific style weights. This experiment demonstrates how the preferences of an individual (in this case, the original developer) may not match the aesthetic preferences of the group for a specific problem. This case study only looked at group preferences, but could be extended with additional explanatory variables from a user profile to predict personalized style weights. \section{Study 2: Prediction of Challenge Perception in Tetris} \label{sec:tetrisstudy} In our second study, we show how collective criticism can be used to model users' perception of the challenge experienced while playing the video game Tetris. This example demonstrates how to create a model that could be used as the basis for personalization in an adaptive interactive system. \subsection{Objective} Tetris is a tile-matching video game that gets progressively more challenging as the speed of the game increases (see Section~\ref{sec:tetris} for a description of gameplay). A game of Tetris starts out slow, not presenting the player with any challenge. During the endgame, however, Tetris can become frustratingly fast, making the player anxious (see Figure~\ref{fig:tetris}). We wanted to create a model for an adaptive version of Tetris where the level of challenge is personalized for each player. Namely, we wanted to identify how game speed and other contextual factors could be used to keep players feeling challenged, but not overwhelmed. This could be viewed as modeling a kind of flow state \cite{csikszentmihalyi1990flow}, one of many considerations that go into game design \cite{baron2012cognitive} and one that is actively studied in the development of slot machines \cite{schull2005digital}.
\subsection{Tetris} \label{sec:tetris} \subsubsection{Gameplay:} In Tetris, players control falling shapes called tetrominoes to achieve a high score. Players can move tetrominoes left and right, increase their speed of descent (called a ``soft drop'') or force them to immediately drop to the bottom of the play field (a ``hard drop''). When the player completes a line (i.e.~a row of blocks without any gaps), it disappears and the player's score increases. After clearing a fixed number of lines, the difficulty level is increased and, along with it, the speed of the game. The game ends when a tetromino overlaps the top of the play field. \subsubsection{Implementation: } Figure~\ref{fig:tetris} shows our web-based implementation of Tetris. The play field is $12 {\times} 20$ tiles and the surrounding interface shows the score, the number of lines cleared and a timer indicating how much time remains in the task. In most versions of Tetris, the scoring function is proportional to the level and, therefore, the current game speed, i.e.~clearing a line is worth more points at level 2 than at level 1. In our implementation, however, we used the same scoring function irrespective of the current game speed: 5 points for each placed tetromino and 20 points for each line cleared. The game speed is determined by the delay in milliseconds for a tetromino to move down the play field by one block. We determined by trial and error that the effective range of this delay was 100--600 ms, where 100 ms gives the fastest speed and 600 ms results in the slowest speed. \begin{figure} \centering \includegraphics[width=\columnwidth]{TETRIS_INTERFACE.pdf} \caption{Two screenshots of Tetris: the left image shows the slow (boring) early game, whereas the right image is typical of the faster (stressful) endgame.} \label{fig:tetris} \end{figure} \subsection{Task} Prior to the task, participants were asked to fill out a background questionnaire to capture \begin{enumerate*}[label=(\roman*)] \item demographic information, \item how often they played video games, \item their familiarity with Tetris and \item their opinion of Tetris. \end{enumerate*} After completing the questionnaire, participants were allowed to play as many warm-up games of Tetris as they wanted. During the study, each participant played 3 games of Tetris. In the first game, the fall delay was sampled uniformly at random from the full effective range, 100--600 inclusive. In the second and third games, we altered the upper or lower bound of the delay range to reflect user feedback; a sketch of this narrowing loop is given at the end of this subsection. For example, if in the first round a delay of 300 ms made the game too slow, then in the second round the delay would be sampled from the range 100--300 (lower delays mean higher speeds). Each game of Tetris lasted a maximum of 2 minutes. After 2 minutes had expired or the game was lost, participants were asked the following question: \begin{displayquote} \em We are collecting data to create a game of Tetris that helps players improve their skill level. It should be just fast enough to feel like a challenge. For a player of your skill level, do you think the game you played should have been {\bf \em faster} or {\bf \em slower}? \end{displayquote} During the study, we logged user critiques, game data (fall delay, time spent playing, score, number of lines cleared) and interaction data (keyboard events and their associated timestamps). Each task took a total of $\mathord{\sim}$10 minutes.
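The range-narrowing procedure is simple enough to state in a few lines. The following Python sketch illustrates the sampling logic only; \texttt{play\_and\_critique} is a hypothetical stand-in for one two-minute game plus the post-game question, and the simulated player's ideal delay of 300 ms is an assumption, not study software or data.

\begin{verbatim}
import random

def play_and_critique(delay_ms):
    # Hypothetical stand-in for one 2-minute game plus the question;
    # simulates a player whose ideal fall delay is ~300 ms.
    return "faster" if delay_ms > 300 else "slower"

lo, hi = 100.0, 600.0            # effective range of the fall delay (ms)
observations = []
for game in range(3):
    delay = random.uniform(lo, hi)
    critique = play_and_critique(delay)
    observations.append((delay, critique))
    if critique == "faster":     # game felt too slow: delay was too high
        hi = delay               # next game sampled from [lo, delay]
    else:                        # game felt too fast: delay was too low
        lo = delay               # next game sampled from [delay, hi]
\end{verbatim}

Each recorded pair maps directly to an interval for the regression models below: ``faster'' yields $(-\infty, \textit{delay}]$ and ``slower'' yields $[\textit{delay}, +\infty)$.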
\subsection{Participants} We recruited 50 participants who were studying at the university library (24 female, 26 male). Participants were aged between 20 and 49 with a median age of 28. According to the background questionnaire, over two thirds of participants played video games at least occasionally (never (14), occasionally (23), every week (9), every day (4)), and a majority of participants, 45/50, had at least some prior experience of playing Tetris (never (5), a few times (31), many times (12), experienced (2)). Overall, participants were neutral in their opinion of Tetris (mean = 2.98 on a 5-point scale where 1 = hate and 5 = love). One participant stated that they hated Tetris, despite having never played the game. \begin{figure} \centering \includegraphics[width=\columnwidth]{new_TETRIS_SPEED_QUADRATIC.pdf} \caption{Player preferences over the tetromino fall delay lead to conclusions similar to those of a quadratic model based on scores. The gray shaded region is the 95\% confidence interval.} \label{fig:tetris_quad} \end{figure} \subsection{Results} The 50 participants played a total of 150 games of Tetris. We compared a baseline model that was fit using behavioral data with two models created using collective criticism. \subsubsection{Baseline:} We assumed that the fall delay that maximizes the average score would also maximize players' perception that the game was just challenging enough (we are really maximizing the rate of scoring, as each experiment has a fixed duration). We fitted a linear mixed model with fall delay as linear and quadratic fixed effects and participant as a random effect: $$y_{ij} = \beta_0 + \beta_1d_{ij}^2 + \beta_2d_{ij} + u_j,$$ where $y$ is the score, $d$ is the delay and $u$ are random intercepts per user, $j$. The baseline found that the average score was maximized when the delay was 308.68 ms, obtained from the vertex of the fitted quadratic, $d^{*} = -\beta_2/(2\beta_1)$ (see Figure~\ref{fig:tetris_quad}, dashed blue curve). The baseline does not provide any uncertainty estimates for the optimal delay, only standard errors for the coefficients of the delay terms in the model. Furthermore, this analysis is based on assumptions and we do not know whether this delay is truly in line with user preferences. \subsubsection{Static Preference Model:} We used collective criticism to fit a model for users' speed preferences with no explanatory variables other than participant as a random effect: $$y_{ij} = \beta_0 + u_j,$$ where $y$ are intervals derived from critiques and $u$ are random intercepts per user, $j$. The preference model found that the optimal delay is 314.70 ms (95\% CI [283.34, 346.05]). This is very similar to the point estimate from the baseline model, despite using different response variables and very different assumptions. Unlike the baseline, this model allows us to estimate the uncertainty in the mean delay (see Figure~\ref{fig:tetris_quad}, dashed red line). As the baseline result (308.68 ms) falls into the 95\% confidence interval for the preference model, there is no statistically significant difference between the two results. However, the preference model has the advantage of having evidence that the optimal delay is in line with user preferences.
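As a concrete illustration of the baseline analysis, the following Python sketch fits a quadratic mixed model to synthetic score data and recovers the score-maximizing delay from the vertex of the fitted parabola. The column names, the data-generating process and the use of \texttt{statsmodels} are assumptions for illustration; this is not the analysis code used in the study.

\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "pid": np.repeat(np.arange(50), 3),     # 50 players, 3 games each
    "delay": rng.uniform(100, 600, 150),    # randomized fall delay (ms)
})
# Toy scores peaking near a 310 ms delay, plus noise.
df["score"] = -0.001 * (df["delay"] - 310) ** 2 + 60 + rng.normal(0, 5, 150)
df["delay2"] = df["delay"] ** 2

# Linear + quadratic fixed effects, random intercept per player.
fit = smf.mixedlm("score ~ delay + delay2", df, groups="pid").fit()
b_lin, b_quad = fit.params["delay"], fit.params["delay2"]
print(-b_lin / (2 * b_quad))                # vertex: score-maximizing delay
\end{verbatim}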
\begin{figure} \centering \includegraphics[width=\columnwidth]{new_TETRIS_REPAIR_MODEL3.pdf} \caption{Predicted delays from the adaptive preference model. Higher numbers of lines cleared, greater Tetris familiarity and more positive opinions of Tetris predict that lower delays are needed to provide a challenge.} \label{fig:tetris_repair} \end{figure} \subsubsection{Adaptive Preference Model:} Predicting the optimal delay for the average participant masks the variability between players, and we assume novice and expert players will have different ideas of what constitutes a challenge. Unfortunately, this is not possible to model with the baseline because the score (a key measure of player ability) is already being used as the response variable. However, the preference model can incorporate the score as an additional explanatory variable. We tried numerous different models and assessed model fit using AIC. Figure~\ref{fig:tetris_repair} shows in-sample predictions from the best model we found based on AIC score (AIC = 163.65). This model used Tetris familiarity and Tetris opinion from the background questionnaire, as well as the number of lines cleared during a game, as explanatory variables (using the game score had a slightly higher AIC, but the two variables were strongly correlated): $$y_{ij} = \beta_0 + \beta_1s_{ij} + \beta_2f_j + \beta_3o_j + u_j,$$ where $y$ are intervals derived from critiques, $s$ is the number of lines cleared, $f$ is a factor for familiarity, $o$ is a factor for opinion and $u$ are random intercepts per user, $j$. Figure~\ref{fig:tetris_repair} shows that as player performance (number of lines cleared) increases, the predicted fall delay decreases to offer a greater challenge. Similarly, higher Tetris opinion and familiarity tended to correlate with a lower predicted fall delay. We investigated the use of many additional variables in the model; e.g.~the average number of hard drops per tetromino and the average time between key presses were highly predictive. However, including them in a model together with the number of lines cleared resulted in a higher AIC. \subsection{Summary} The difference between the baseline and the two models created using collective criticism was the incorporation of preference information, making us confident that we are modeling user experience rather than potentially misinterpreting behavioral data. While the baseline model would only satisfy the average player, the additional flexibility of freeing up the score variable allows us to create a model for personalization based on player performance (as determined by the prior two minutes of playtime). Furthermore, the preference models directly predict the optimal delay parameter, allowing it to be easily integrated into the game itself, whereas the baseline required us to maximize a quadratic function to calculate the optimal result. \section{Discussion and Limitations} In this paper, we introduced a critiquing-based approach for modeling the relationships between system parameters and subjective preferences called collective criticism. Collective criticism combines randomized tasks with critiques that are transformed into intervals and modeled using interval regression. We demonstrated collective criticism using two studies: \begin{enumerate*}[label=(\roman*)] \item aesthetic preferences in neural style transfer, and \item modeling users' perceptions of challenge in Tetris. \end{enumerate*} In neural style transfer, we showed that the optimal parameterization was different from a baseline pre-trained model, but found insufficient evidence in the data collected to support there being an interaction between style weight and photo type.
In Tetris, we demonstrated that modeling subjective preferences using collective criticism allowed us to create more complex models than with behavioral data alone. Furthermore, we could do so without making assumptions about how behavior maps to preferences. There are several limitations to our approach. First, collective criticism requires an understanding of regression modeling and of how to express experimental design through model specification in order to produce meaningful results. Collective criticism is, therefore, very general, but difficult to use compared to other crowdsourcing approaches where the goal is to generate annotations or a ranking. Second, in its present form, participants can only give feedback with respect to a single variable. While this is still very general, there may be situations where two or more variables interact to produce a given user experience, e.g.~how a video game feels to play might depend on a combination of character speed and projectile speed. We are currently investigating multiple output regression methods to remedy this limitation. Third, as we stated earlier, we could not test our approach on crowdsourcing platforms, such as Amazon Mechanical Turk, where studies tend to include trick questions to guard against abuse from bots and disinterested participants. Given the subjective nature of the experiences we are aiming to model and the limited responses used in task design, it is unclear how investigators would ensure data integrity other than by simply increasing the sample size and assuming a majority of participants are well-intentioned. This, too, will be a topic for future work. Lastly, as we can see in Figure~\ref{fig:intreg}, asking users to critique parameter values that are either very high or very low is wasteful after a certain amount of evidence has accumulated. We plan to investigate combining collective criticism with Thompson sampling \cite{chapelle2011empirical}, where the next data point is sampled from the posterior distribution, so that the effective parameter range narrows as the experiment proceeds and participants are only asked to critique the areas of greatest uncertainty. Thompson sampling has the disadvantage that the model must be specified a priori, which could lead to sample inefficiency if the model is unnecessarily complex. For well-defined problems, however, this should not be an issue. \begin{acks} This work has been supported by Helsinki Institute for Information Technology HIIT. \end{acks} \balance \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \IEEEPARstart{W}{hile} an enormous number of novel and efficient wireless technologies have been proposed in order to fulfill the demands of \ac{5G} and beyond in terms of reliability, throughput, and latency, security has become a sensitive issue \cite{hu2015mobile}. This is due to the fact that the open and broadcast nature of wireless transmission makes the physically transmitted signal, bearing the communication data and sensitive information, vulnerable to eavesdropping \cite{6739367}. In order to overcome these security threats, upper-layer encryption-based algorithms are conventionally exploited. Such techniques, however, may not be feasible for future wireless networks because of the difficulty of key management and sharing in heterogeneous wireless networks. \Ac{PLS} has emerged as an interesting and powerful solution that can complement conventional security techniques and improve the overall security of wireless communication networks \cite{bloch2011physical}. Particularly, \ac{PLS} observes and exploits the dynamic characteristics of the signal, radio, and channel to ensure the security of features and contents at the physical layer \cite{rivest1990handbook}. \subsection{PLS Literature Review} \Ac{PLS} techniques are proposed to achieve two different goals, namely, securing the data communication and securing the channel estimation. The first goal of \ac{PLS} algorithms is to degrade the data decoding capability of non-legitimate nodes relative to the legitimate node by exploiting different properties of the wireless channel. The second goal is to enforce poor channel estimation at the eavesdropper/attacker, which degrades the signal recovery capability at the attacker \cite{9336039, 4543070, 8509094}. \subsubsection{PLS for Data} Among the top areas in \ac{PLS}, securing \ac{OFDM} has drawn enormous attention recently since \ac{OFDM} is the most commonly employed waveform in current and next-generation systems \cite{8093591}. In line with this vision, different security techniques have been proposed in the literature. These techniques include key generation-based approaches \cite{7120014,keybased2,keybased3}, adaptive communication-based approaches \cite{7562191}, and \ac{AN}-based techniques \cite{6516879,AN1}. \par Key generation-based techniques exploit the channel reciprocity property between legitimate nodes as a common source of randomness. For example, the amplitude and phase related to the \ac{RSS}, \ac{CIR}, \ac{CFR}, and other feedback can be used for key generation \cite{7120014}, \cite{7393435}. These techniques are interesting in the sense that they can solve the key management problems faced by encryption algorithms. However, they are very sensitive to channel estimation errors, especially at low \ac{SNR} \cite{8509094}. In adaptive transmission-based techniques, the parameters are adjusted/adapted based on the location, channel conditions, and \ac{qos} requirements of the legitimate receiver only. For example, precoding \cite{7562191} and channel shortening filter-based \cite{8292335} techniques provide security at the cost of high \ac{PAPR} \cite{1223551}. Similarly, subcarrier selection-based techniques \cite{8093591} provide security at the cost of spectral efficiency degradation. \Ac{AN}-based techniques are also very effective for ensuring secure communication.
In these techniques, an interference signal is added by the trusted node to degrade the performance of the illegitimate node without affecting the performance of the legitimate receiver. However, the interference signal may increase the \ac{PAPR} and cause a slight power degradation due to sacrificing part of the power resources for noise generation \cite{6516879,6881300}. Due to the wireless channel decorrelation property, the attacker cannot decode the data even if it knows the algorithm and tries to apply it based on its own channel. However, the work in \cite{zhang2018csisnoop} claimed that, under certain assumptions, the attacker can acquire the \ac{CSI} between the legitimate nodes, which puts channel-based \ac{PLS} techniques at high risk of failure. For instance, the attacker can reverse engineer the beamforming matrix to acquire the \ac{CSI} between legitimate nodes \cite{zhang2018csisnoop}. The beamforming matrix can be estimated under the assumption that the attacker possesses the \ac{CSI} between the legitimate transmitter and itself, and that it is equipped with the same number of antennas as the legitimate transmitter. After estimating the \ac{CSI} corresponding to the legitimate nodes, it can compromise channel-dependent \ac{PLS} techniques such as \ac{AN}-based as well as key generation-based techniques \cite{6739367}. Security can still be ensured in such cases by enforcing poor channel estimation quality at the attacker for the \ac{CSI} between the legitimate transmitter and the attacker, which motivates the need for pilot security. Securing the pilots protects the propagation environment properties from being extracted at the attacker side. Additionally, pilot security plays a critical role in physical layer authentication techniques. \subsubsection{PLS for Pilots} From the pilot security perspective, only a few techniques have been proposed in the literature. Along this direction, in \cite{7343356}, the phases of the pilot subcarriers are rotated based on the previously estimated \ac{CSI}. However, the channel is assumed to be known at both communicating nodes before the start of the algorithm. Moreover, the phase is manipulated by a single value, which makes such a technique easy to attack. In \cite{7605496, chang2009training}, the security of downlink pilots is provided based on \ac{CSI} at the transmitter obtained via uplink training. Particularly, \ac{AN} is added in the null space of the channel to degrade the channel estimation capability at the attacker, though it may increase the \ac{PAPR} of the signal. On top of that, the power allocation between the pilot and the noise signal needs to be done intelligently in order to enhance the security performance. Similarly, in \cite{9070177}, anti-eavesdropping pilots are designed such that the channel can be estimated at the legitimate nodes only. However, the proposed algorithm requires full-duplex communication. In \cite{du2018improving}, the legitimate parties employ secret pilots for channel estimation. In the first step, the first node transmits the secret pilot to the other node. Afterwards, the receiving node sends the received signal back to the transmitting node using an amplify-and-forward strategy. The second node follows the same procedure as the first node. This approach requires extra overhead to share the feedback.
Although the authors in \cite{9095399} exploit the uniqueness of the channel responses between different nodes in order to degrade the channel estimation at the attacker node and show the effect in terms of \ac{BER}, they do not consider the sign ambiguity that arises when taking the square root of the channel, which in the practical case leads to a huge \ac{BER} due to erroneous channel estimation. \subsection{Our Contributions} In order to address the above-mentioned challenges, we propose novel schemes that can provide security for the data, the pilots, or both jointly. The design of the proposed schemes is based on the decomposition of the channel into all-pass and minimum-phase channels and on exploiting the properties of the decomposed channels to provide security. The main contributions of the proposed work are as follows: \begin{itemize} \item We propose two novel minimum-phase all-pass channel decomposition-based \ac{PLS} schemes for OFDM in rich scattering channels, where the first method secures the data and the second secures the pilots. The proposed data security algorithm ensures strong security compared to conventional schemes. Furthermore, it preserves the requirements of the legitimate user without trading off security against overall performance. On the other hand, the proposed pilot security algorithm destroys the eavesdropper's ability to estimate its channel, thus protecting \ac{CSI}, sensing, and radio environment mapping information. \item To the best of our knowledge, the proposed schemes are the first that enable flexible, adaptive security based on security needs. Specifically, the security of the data, the pilots, or both can be selected based on the security requirements of the application. Additionally, the proposed novel security scheme is robust against spatially correlated eavesdroppers located near legitimate nodes. \item Our novel security methods for both data and pilots exploit only the all-pass component of the channel, which has a unit-power property. Therefore, we provide security without any power-related cost such as increased \ac{PAPR}. In contrast, conventional \ac{PLS} techniques such as artificial noise, zero-forcing, and channel shortening provide security at the cost of changing the power distribution of the transmitted signal. These changes create high power peaks in time (i.e., \ac{PAPR}) and in frequency (exceeding spectrum mask limits). \item The secrecy of the proposed data security algorithm is evaluated under correlated and uncorrelated eavesdropping channels via closed-form \ac{BER} expressions, whereas for pilot security we analyze the \ac{NMSE} performance of the estimated channel. The simulations agree very well with the analytical formulas, emphasizing the effectiveness of the proposed algorithms. \end{itemize} \subsection{Organization and Notation} The rest of this paper is organized as follows. Section \ref{sec:system-model} depicts the system model and discusses OFDM preliminaries. Section \ref{Sec:Proposed algorithm} first explains the channel decomposition and then presents the proposed data and pilot security algorithms. The numerical analysis of the proposed schemes is given in Section \ref{Sec:Numerical Analysis}, followed by performance and simulation analysis in Section \ref{sec:simulation}. Finally, Section \ref{sec:conclusion} concludes the paper. Bold uppercase $\mathbf{A}$, bold lowercase $\mathbf{a}$, and unbold letters $A,a$ are used to denote matrices, column vectors, and scalar values, respectively.
$(\cdot)^H$, $(\cdot)^T$, and $(\cdot)^{-1}$ denote the Hermitian, transpose, and inverse operations, respectively. $|\cdot|$ denotes the Euclidean norm, E$[\cdot]$ denotes the expectation operator, and var$(\cdot)$ denotes the variance operator. $\mathbb{C}^{{M\times N}}$ denotes the space of $M\times N$ complex-valued matrices. The symbol $j$ represents the imaginary unit of complex numbers, with $j^2=-1$. \section{System model and preliminaries}\label{sec:system-model} As demonstrated in Fig. \ref{fig:System-model}, a \ac{SISO} \ac{OFDM} system is considered that consists of a legitimate transmitter (Alice, \{a\}), a legitimate receiver (Bob, \{b\}), and a passive eavesdropper (Eve, \{e\}) that tries to intercept the transmission between Alice and Bob, where each node is equipped with a single antenna. The channels observed at Alice $\mathbf{h}_{ba} \in \mathbb{C}^{L \times 1}\sim \mathcal{CN}(0,\sigma_b)$, Bob $\mathbf{h}_{ab} \in \mathbb{C}^{L \times 1}\sim \mathcal{CN}(0,\sigma_a)$, and Eve $\mathbf{h}_{ae} \in \mathbb{C}^{L \times 1}\sim \mathcal{CN}(0,\sigma_e)$ are modeled as slowly varying multipath channels with $L$ exponentially decaying taps following a Rayleigh fading distribution. Moreover, due to the channel reciprocity assumption, the channel between Alice and Bob, $\mathbf{h}_{ab}$, can be estimated from the channel between Bob and Alice, $\mathbf{h}_{ba}$, where $\mathbf{h}_{ab}=\mathbf{h}_{ba}^T$ \cite{Recip}. In addition, as the wireless channel varies with the richness of the environment and the locations of the nodes, the channels experienced by Bob and Eve are assumed to be independent. Furthermore, it is also assumed that Alice has no information about Eve's channel because of Eve's passive operation. An \ac{OFDM} system is adopted for the communication, where $N$ complex data symbols in the frequency domain $\mathbf{{X}}= \begin{bmatrix} X(0) & X(1) &...& X(N-1)\end{bmatrix}^{\textrm T} $ are converted to the time domain $\mathbf{x}=\begin{bmatrix} x(0) & x(1) &...& x(N-1)\end{bmatrix}^{\textrm T} $ using the \ac{IFFT} to form one \ac{OFDM} symbol as \begin{equation} {x}(n) = \frac{1}{N}\sum_{k=0}^{N-1}{X}(k)e^{j2\pi nk/N}. \end{equation} To combat \ac{ISI}, a \ac{CP} is appended to $\mathbf{x}$ before transmission. Finally, the signal is transmitted through the wireless channel and reaches the legitimate receiver (Bob) and the illegitimate receiver (Eve). The wireless channel is represented by its \ac{CIR}, which is given as \begin{equation} h_{\Lambda}(t,\tau) = \sum_{l=0}^{L-1}h_l(t)\delta(\tau-\tau_l), \end{equation} where $\Lambda \in \{ab,ba,ae\}$, and $h_l(t)$ and $\tau_l$ are the complex channel gain and the delay of the $l$-th path at time $t$, respectively. $h_l(t)$ is assumed to have a zero-mean Gaussian distribution. $L$ is the total number of effective channel taps and $\delta(\cdot)$ is the Kronecker delta function. The \ac{CFR} is then expressed as \begin{equation} H_{\Lambda}(t,f) = \int^{+\infty}_{-\infty}h_{\Lambda}(t,\tau)e^{-j2\pi f\tau}d\tau. \end{equation} Assuming that the channel is time-invariant during one \ac{OFDM} symbol period $T_s$, and that the frequency spacing is $\Delta f_c$, the \ac{CIR} and the \ac{CFR} can be respectively represented as \begin{equation*} h_{\Lambda}(n) = h_{\Lambda}(nT_s,\tau),~ H_{\Lambda}(k) = H_{\Lambda}(nT_s,k\Delta f_c) . \end{equation*} At the receiver, the \ac{CP} is discarded first, and then the \ac{FFT} process is applied.
The received $k$-th symbol is found as \begin{equation} Y_{\Lambda}(k) = H_{\Lambda}(k)X_{\Lambda}(k)+W(k), \end{equation} where $W(k)$ is the zero-mean \ac{AWGN} with variance $\sigma^2$ at the $k$-th subcarrier. It is assumed that the sampling rate satisfies the Nyquist criterion. Given that the \ac{CFR} is estimated using the known pilots after receiving the signal, let $k_p$ be the $p$-th index where a pilot is inserted in the data signal $\mathbf{X}$; then the estimated \ac{CFR} is \begin{equation} \tilde{H}_{\Lambda}(k_p) = \frac{Y(k_p)}{X(k_p)} = H_{\Lambda}(k_p)+\tilde{W}(k_p), \label{Hp} \end{equation} where $\tilde{W}(k_p)$ denotes the noise term. To get an estimate over all $N$ subcarriers, $\tilde{H}$ is interpolated and the final estimated \ac{CFR} is found.
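For illustration, the least-squares estimation in \eqref{Hp} together with the interpolation step can be sketched in a few lines of Python. All parameter choices below (FFT size, pilot spacing, noise level) are illustrative assumptions, not our simulation settings.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, L = 64, 6
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)                  # true CFR over N subcarriers
kp = np.arange(0, N, 8)               # comb-type pilot indices k_p
X = np.ones(N, dtype=complex)         # unit-power symbols for simplicity
W = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = H * X + W                         # received frequency-domain symbols
H_p = Y[kp] / X[kp]                   # LS estimate at the pilot subcarriers
k = np.arange(N)                      # linear interpolation to all N bins
H_hat = np.interp(k, kp, H_p.real) + 1j * np.interp(k, kp, H_p.imag)
# Note: np.interp holds the last pilot value beyond kp[-1]; practical
# systems place edge pilots or use more elaborate interpolation.
\end{verbatim}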
\begin{figure}[t] \centering \includegraphics[width=0.90\columnwidth]{model2.pdf} \footnotesize\caption{System model where Alice and Bob are communicating over a rich scattering channel in the presence of Eve.} \label{fig:System-model} \end{figure} \begin{figure*}[!t] \begin{center} \subfloat[\footnotesize Overall channel.]{\label{convPerf:1}\includegraphics[width=58mm]{zp_channel.eps}} \subfloat[\footnotesize All-pass channel.]{\label{convPerf:2}\includegraphics[width=58mm]{zp_Allpass.eps}} \subfloat[\footnotesize Minimum-phase channel.]{\label{convPerf:3}\includegraphics[width=58mm]{zp_Minphase.eps}} \\ \end{center} \centering \footnotesize\caption{The zero-pole diagram of the minimum-phase all-pass decomposition of a wireless channel.} \label{fig:Channel-decomposition} \end{figure*} \section{Proposed Algorithm}\label{Sec:Proposed algorithm} The increase in the number of wireless communication-based applications with varying requirements motivates the need for adaptive and flexible security designs \cite{9336039}. Inspired by this motivation, in this section, we first present the channel decomposition concept and then propose novel algorithms that are capable of providing adaptive and flexible security. Particularly, when a very high level of security is required, both the pilots and the data are secured using the proposed algorithms; otherwise, either the data or the pilots are secured, depending on the security requirements. \subsection{Minimum-phase All-pass Channel Decomposition}\label{Subsec:Channel decomp} \begin{figure*}[t] \centering \includegraphics[scale=0.65]{Data_security.pdf} \footnotesize\caption{The block diagram showing the main steps of the proposed data security algorithm.} \label{fig:Data security} \end{figure*} Wireless channels are causal systems because they operate in real time, where samples belong only to the present or the past. Additionally, the \ac{CIR} of a wireless channel can be represented by a \ac{FIR} filter, and thus it is stable \cite{oppenheim}. Consequently, a stable and causal system with system function $H_{\Lambda}(z)$ has all of its poles inside the unit circle; the zeros, however, are free to lie outside. Let $H_{\Lambda}^1(z)$ be the system function with all zeros and poles inside the unit circle, and let the zeros outside be at $1/p_k$. This implies that we can decompose such a system into two components as \begin{equation} H_{\Lambda}(z)=\underbrace{\left(H_{\Lambda}^1(z) \prod_{k=1}^{q}\left(1-p_{k}^{*} z^{-1}\right)\right)}_{H_{\Lambda}^{\min }(z)} \overbrace{\prod_{k=1}^{q}\left(\frac{z^{-1}-p_{k}}{1-p_{k}^{*} z^{-1}}\right)}^{H_{\Lambda}^{\mathrm{ap}}(z)}, \label{equ:factorization} \end{equation} where $H_{\Lambda}^{\mathrm{min}}(z)$ and $H_{\Lambda}^{\mathrm{ap}}(z)$ are defined as the minimum-phase and all-pass components of $H_{\Lambda}(z)$, respectively. For instance, Fig. \ref{fig:Channel-decomposition} illustrates the zero-pole diagram of the minimum-phase all-pass decomposition of a random channel. As seen in Fig. \ref{fig:Channel-decomposition}(a), the overall channel contains zeros inside and outside the unit circle, while the poles are located at the origin. As shown in Fig. \ref{fig:Channel-decomposition}(b), for the all-pass channel only the zeros outside the unit circle are retained, along with virtual poles added at the conjugate reciprocals of these zeros to cancel out the attenuation effect, thus passing all frequencies, as the name suggests. To compensate for the effect of these virtual poles, zeros are added on top of them, yielding a system with all zeros inside the unit circle (i.e., a minimum-phase system), as illustrated in Fig. \ref{fig:Channel-decomposition}(c). The resulting components have various properties; for instance, in terms of the magnitude response $|H(e^{j\omega})|$, the factorization in \eqref{equ:factorization} implies that $|H^{\mathrm{min}}(e^{j\omega})| = |H(e^{j\omega})|$ and $|H^{\mathrm{ap}}(e^{j\omega})| = 1$. These properties of the decomposed channel will be exploited to provide security for both data and pilots. \subsection{Proposed Data Security Method}\label{Subsec:Data security} This subsection presents the details of the proposed algorithm for providing data security. The designed algorithm is based on a novel precoder that exploits the components of the channel separately, instead of using the full channel as in conventional security algorithms \cite{7467419}. As explained in Subsection \ref{Subsec:Channel decomp}, the proposed method uses only the conjugate ${H^{\mathrm{ap}}}^*(e^{j\omega})$ of the all-pass component of the channel for precoding. Therefore, it does not increase the \ac{PAPR} \cite{4543070}. Additionally, it provides an effective solution to ensure secure communication against eavesdropping. \begin{figure*}[t] \centering \includegraphics[scale=0.65]{Pilot_security.pdf} \footnotesize\caption{The block diagram showing the main steps of the proposed pilot security algorithm.} \label{fig:Pilot security} \end{figure*} Fig. \ref{fig:Data security} illustrates the block diagram of the proposed data security algorithm, whose basic steps are described as follows: \begin{enumerate} \item Bob sends a pilot signal ${P}$ to Alice to estimate $H_{ba}$, where due to channel reciprocity $H_{ba}=H_{ab}$. Thus, we assume that \ac{CSI} is available at Alice. \item Alice decomposes the \ac{CFR} $H_{ba}$ into all-pass $H_{ab}^{\mathrm{ap}}$ and minimum-phase $H_{ab}^{\mathrm{min}}$ components, as explained in Subsection \ref{Subsec:Channel decomp}. \item Alice multiplies the data subcarriers $X(k_d)$ at indices $k_d$ with the conjugate of the all-pass component of the channel (${H_{ab}^{\mathrm{ap}}}^*$), while the pilots $X(k_p)$ at indices $k_p$ are left intact.
Then, the transmitted signal by Alice can be expressed as \begin{equation} X(k) = \begin{cases} {H_{ab}^{\mathrm{ap}}}^*(k)D(k) &; k \in k_d\\ P(k) &; k \in k_p.\\ \end{cases} \end{equation} \item The received signal at Bob can be given as \begin{equation} \begin{aligned} Y_{ab}(k)= \begin{cases} H_{ab}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ab}(k) &; k \in k_d\\ H_{ab}(k)P(k)+W_{ab}(k) &; k \in k_p.\\ \end{cases} \end{aligned} \end{equation} Using the pilots $P(k)$ at $k_p$, the \ac{CFR} $\tilde{H}_{ab}(k)$ is estimated as described in \eqref{Hp}. \item Applying channel decomposition to the estimated channel as in Subsection \ref{Subsec:Channel decomp}, we obtain $\tilde{H}_{ab}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)\tilde{H}_{ab}^{\mathrm{ap}}(k)$. The data subcarriers of the received signal at Bob are given as \begin{equation} \begin{aligned} Y_{ab}(k) &= H_{ab}^{\mathrm{min}}(k)H_{ab}^{\mathrm{ap}}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ab}(k)\\ &=H_{ab}^{\mathrm{min}}(k)D(k)+W_{ab}(k);~ k\in k_d. \end{aligned} \label{equ:rec_ab_2} \end{equation} \item Using the results of step 5, Bob equalizes $H_{ab}^{\mathrm{min}}$ to decode the data as \begin{equation} \begin{aligned} \hat{X}_{Bob}(k) &= \frac{Y_{ab}(k)}{\tilde{H}_{ab}^{\mathrm{min}}(k)}\\ &=\frac{H_{ab}^{\mathrm{min}}(k)D(k)+W_{ab}(k)}{\tilde{H}_{ab}^{\mathrm{min}}(k)}\\ &= D(k)+\tilde{W}_{ab}(k);~ k\in k_d, \end{aligned} \label{dateqe} \end{equation} where $\tilde{W}_{ab}(k)= W_{ab}(k)/\tilde{H}_{ab}^{\mathrm{min}}(k)$ and $H_{ab}^{\mathrm{min}}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)$ in the case of perfect channel estimation. \end{enumerate} The received signal at Eve can be given by \begin{equation} \begin{aligned} Y_{ae}(k)= \begin{cases} H_{ae}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ae}(k) &; k \in k_d\\ H_{ae}(k)P(k)+W_{ae}(k) &; k \in k_p.\\ \end{cases} \end{aligned} \end{equation} Applying a similar procedure at Eve, the final signal at Eve can be given as \begin{equation} \begin{aligned} \hat{X}_{eve}(k) &= \frac{Y_{ae}(k)}{\tilde{H}_{ae}(k)}\\ &=\frac{H_{ae}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ae}(k)} {\tilde{H}_{ae}(k)}\\ &= {H_{ab}^{\mathrm{ap}}}^*(k)D(k)+\tilde{W}_{ae}(k);~ k\in k_d, \end{aligned} \label{equ:eve-data} \end{equation} where $\tilde{W}_{ae}(k)= W_{ae}(k)/\tilde{H}_{ae}(k)$ and $H_{ae}(k)=\tilde{H}_{ae}(k)$ in the case of perfect channel estimation. As seen from \eqref{equ:eve-data}, Eve will not be able to decode the data even if it perfectly estimates its own channel. This is due to the unknown randomness caused by the term ${H_{ab}^{\mathrm{ap}}}^*$, which is uncorrelated with Eve's channel.\footnote{Note that, due to the channel decorrelation between $\mathbf{H}_{ab}$ and $\mathbf{H}_{ae}$ in a rich scattering environment, Eve will not be able to estimate and remove the effect of ${H_{ab}^{\mathrm{ap}}}^*(k)$.} \subsection{Proposed Pilot Security Method}\label{Subsec:Pilot Security} This subsection presents the details of the proposed algorithm for providing pilot security. Similar to the data security method, the proposed pilot security algorithm exploits the components of the channel separately. It ensures that only the legitimate receiver is able to estimate the channel, while Eve can learn neither the channel nor the environment; moreover, unlike \cite{7605496}, this is achieved without affecting the \ac{PAPR}. Additionally, the proposed algorithm is also suitable for securing feedback, hardware impairments, and hardware-based authentication.
Furthermore, the eavesdropper will not be able to extract information about the precoder corresponding to the channel of the legitimate nodes from the received signal, and thus will not be able to launch attacks to learn the \ac{CSI} corresponding to the legitimate nodes \cite{zhang2018csisnoop}. Fig. \ref{fig:Pilot security} illustrates the block diagram of the proposed pilot security algorithm, whose basic steps are described as follows: \begin{enumerate} \item Bob sends a pilot signal ${P}$ to Alice to estimate $H_{ba}$, where due to channel reciprocity $H_{ba}=H_{ab}$. Thus, we assume that \ac{CSI} is available at Alice. \item Alice decomposes the \ac{CFR} $H_{ba}$ into all-pass $H_{ab}^{\mathrm{ap}}$ and minimum-phase $H_{ab}^{\mathrm{min}}$ components, as explained in Subsection \ref{Subsec:Channel decomp}. \item Alice multiplies the pilot subcarriers $X(k_p)$ at indices $k_p$ with the all-pass component of the channel ${H_{ab}^{\mathrm{ap}}}$, while the data subcarriers $X(k_d)$ at indices $k_d$ are left intact. Then, the transmitted signal by Alice can be expressed as \begin{equation} X(k) = \begin{cases} D(k) &; k \in k_d\\ {H_{ab}^{\mathrm{ap}}}(k)P(k) &; k \in k_p.\\ \end{cases} \end{equation} \item The received signal at Bob can be given as \begin{equation} \begin{aligned} Y_{ab}(k)= \begin{cases} H_{ab}(k)D(k)+W_{ab}(k) &; k \in k_d\\ H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ab}(k) &; k \in k_p.\\ \end{cases} \end{aligned} \end{equation} Using the pilots $P(k)$ at $k_p$, the precoded \ac{CFR} $\tilde{H}_{abp}(k)=H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k)$ is estimated as described in \eqref{Hp}. \item In order to find $\tilde{H}_{ab}(k)$ from the estimated $\tilde{H}_{abp}(k)$, channel decomposition is applied to the estimated precoded channel, as explained in Subsection \ref{Subsec:Channel decomp}, as follows: $\tilde{H}_{abp}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k) (\tilde{H}_{ab}^{\mathrm{ap}}(k))^2$, where $\tilde{H}_{ab}^{\mathrm{min}}(k)$ is the minimum-phase component while $(\tilde{H}_{ab}^{\mathrm{ap}}(k))^2$ is the all-pass component of $\tilde{H}_{abp}(k)$. \item The estimated channel at Bob can be calculated as $\tilde{H}_{ab}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)\sqrt{(\tilde{H}_{ab}^{\mathrm{ap}}(k))^2}$.\footnote{Note that $\sqrt{(\tilde{H}_{ab}^{\mathrm{ap}})^2}=\pm \tilde{H}_{ab}^{\mathrm{ap}}$. Therefore, in order to estimate the sign of the estimated channel, we exploit the correlation between the channel subcarriers, which ensures a smooth transition from the value of one subcarrier to the next. Thus, we resolve the sign ambiguity, in contrast to the work in \cite{9095399}, which did not consider this issue.} \end{enumerate} At the Eve side, the received signal is given by \begin{equation} \begin{aligned} Y_{ae}(k)= \begin{cases} H_{ae}(k)D(k)+W_{ae}(k) &; k \in k_d\\ H_{ae}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ae}(k) &; k \in k_p.\\ \end{cases} \end{aligned} \label{equ:eve-pilot} \end{equation} Using the pilots $P(k)$ at $k_p$, Eve estimates the precoded \ac{CFR} $\tilde{H}_{abe}(k)=H_{ae}(k){H_{ab}^{\mathrm{ap}}}(k)$ as described in \eqref{Hp}. As seen from \eqref{equ:eve-pilot}, the eavesdropper will not be able to correctly estimate its channel because of the unknown randomness caused by the all-pass component ${H_{ab}^{\mathrm{ap}}}(k)$ of the legitimate channel. Hence, it can neither estimate the channel nor learn the environment.
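Both of the preceding methods, as well as the joint scheme below, hinge on the factorization in \eqref{equ:factorization}. As a minimal numerical sketch (our illustration with arbitrary taps, not part of the proposed transceiver design), the decomposition of an \ac{FIR} channel can be computed by reflecting the zeros that lie outside the unit circle:

\begin{verbatim}
import numpy as np

def minphase_allpass(h, nfft=64, tol=1e-9):
    # Factor FIR taps h into minimum-phase taps h_min and the all-pass
    # response H_ap on an nfft-point grid, so that H = H_min * H_ap.
    z = np.roots(h)
    out = z[np.abs(z) > 1 + tol]       # zeros outside the unit circle
    keep = z[np.abs(z) <= 1 + tol]     # zeros already inside (or on) it
    # Reflect outside zeros to their conjugate reciprocals; the scale
    # factor preserves the magnitude response of the original channel.
    h_min = h[0] * np.prod(np.conj(out)) * np.poly(
        np.concatenate([keep, 1.0 / np.conj(out)]))
    H_ap = np.fft.fft(h, nfft) / np.fft.fft(h_min, nfft)
    return h_min, H_ap

rng = np.random.default_rng(0)
h = (rng.standard_normal(6) + 1j * rng.standard_normal(6)) / np.sqrt(2)
h_min, H_ap = minphase_allpass(h)
# |H_ap| = 1 and |H_min| = |H| on the unit circle, as stated above.
assert np.allclose(np.abs(H_ap), 1.0, atol=1e-6)
assert np.allclose(np.abs(np.fft.fft(h_min, 64)),
                   np.abs(np.fft.fft(h, 64)), atol=1e-6)
\end{verbatim}

Alice would apply such a decomposition to the estimated $H_{ba}$ and use ${H_{ab}^{\mathrm{ap}}}^*$ (data security) or ${H_{ab}^{\mathrm{ap}}}$ (pilot security) as the precoder.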
\subsection{Joint Pilot \& Data Security}\label{Subsec:Joint Security} This subsection presents the details of the proposed algorithm for providing joint data and pilot security. Particularly, in the case of a very high security risk, both the pilots and the data need to be secured in order to provide a very high level of security; the attacker is then able to learn neither the channel and environment nor the data. Here, the idea is to combine the algorithms presented in Subsections \ref{Subsec:Data security} and \ref{Subsec:Pilot Security} to ensure both pilot and data security. The transmitted signal by Alice after applying both data and pilot security can be expressed as \begin{equation} X(k) = \begin{cases} {H_{ab}^{\mathrm{ap}}}^*(k)D(k) &; k \in k_d\\ {H_{ab}^{\mathrm{ap}}}(k)P(k) &; k \in k_p.\\ \end{cases} \end{equation} The received signal at Bob can be given as \begin{equation} \begin{aligned} Y_{ab}(k)= \begin{cases} H_{ab}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ab}(k) &; k \in k_d\\ H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ab}(k) &; k \in k_p.\\ \end{cases} \end{aligned} \end{equation} Using the pilots $P(k)$ at $k_p$, the precoded \ac{CFR} $\tilde{H}_{abp}(k)=H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k)$ is estimated as described in \eqref{Hp}. Afterwards, $\tilde{H}_{ab}(k)$ is estimated as $\tilde{H}_{ab}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)\sqrt{(\tilde{H}_{ab}^{\mathrm{ap}})^2}$. Finally, Bob decodes the data as in \eqref{dateqe}. On the other hand, the received signal at Eve can be given as \begin{equation} \begin{aligned} Y_{ae}(k)= \begin{cases} H_{ae}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ae}(k) &; k \in k_d\\ H_{ae}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ae}(k) &; k \in k_p.\\ \end{cases} \end{aligned} \label{jointeve} \end{equation} It should be noted from \eqref{jointeve} that Eve is able to estimate neither its channel nor the data, due to the randomness caused by ${H_{ab}^{\mathrm{ap}}}^*(k)$ in $D(k)$ and by ${H_{ab}^{\mathrm{ap}}}(k)$ in $P(k)$. This provides two-level security suitable for critical applications. \section{Numerical Analysis}\label{Sec:Numerical Analysis} In this section, we analyze the \ac{BER} performance for Bob and Eve under correlated and uncorrelated eavesdropping channels to investigate the data security algorithm. Afterwards, we derive the \ac{MMSE} of the estimated channel when the pilot security algorithm is applied. \subsection{Data Security: BER-based Secrecy Gap}\label{Subsec:Data security analysis} To emphasize the performance of the data security method, we compare the \ac{BER} performance gap between Bob and Eve. In this subsection, we analyze the \ac{BEP} under correlated and uncorrelated eavesdropping channels. \subsubsection{Uncorrelated Bob-Eve Channel} A well-known method to suppress the effect of $W(k)$ when estimating the channel is \ac{MMSE} estimation \cite{MMSE_OFDM}. After estimating the channel $\tilde{H}$, the data at index $k_d$ is given in \eqref{equ:rec_ab_2} and expressed by \begin{equation} Y_{ab}(k)=H_{ab}^{\mathrm{min}}(k)D(k)+W(k);~ k\in k_d.
\end{equation} Therefore, for a normalized power data symbols (i.e., $\operatorname{E}[|D(k)|^2]=1$) the \ac{SNR} of the received signal is given by \begin{equation} \begin{aligned} \gamma_{ab}&\triangleq\frac{\operatorname{E}[|H_{ab}^{\mathrm{min}}(k)D(k)|^2]}{M\operatorname{E}[|W(k)|^2]}=\frac{\operatorname{E}[|H_{ab}^{\mathrm{min}}(k)|^2]\operatorname{E}[|D(k)|^2]}{M\operatorname{E}[|W(k)|^2]}\\ &=\frac{\operatorname{E}[|H_{ab}^{\mathrm{min}}(k)|^2]}{M\sigma^2}, \end{aligned} \end{equation} where $M$ denotes the number of bits represented by each symbol. As demonstrated in Subsection \ref{Subsec:Channel decomp}, the minimum-phase component shares the same power with the overall channel response itself $|H_{ab}^{\mathrm{min}}(k)|^2 = |H_{ab}(k)|^2$. Thus, the \ac{SNR} can be written as \begin{equation} \gamma_{ab}=\frac{\operatorname{E}[|H_{ab}(k)|^2]}{M\sigma^2}. \end{equation} The average \ac{BEP} $P_{\Lambda}(e)$ then is given as function of \ac{SNR} $\gamma_\Lambda$ and the correlation coefficients $\rho_\Lambda^1$ and $\rho_\Lambda^2$ between the actual and the estimated channel responses \cite{OFDM_performance} by: \begin{equation} \begin{aligned} P_{\Lambda}(e)=\frac{1}{2}&\left[1-\frac{1}{2} \frac{\frac{\left(\rho_\Lambda^1+\rho_\Lambda^2\right)}{\sqrt{2}}}{\sqrt{1+\frac{1}{2 {\bar{\gamma}}_{\Lambda}}-\frac{\left(\rho_\Lambda^1-\rho_\Lambda^2\right)^{2}}{2}}}\right. \\ &\left.-\frac{1}{2} \frac{\frac{\left(\rho_\Lambda^1-\rho_\Lambda^2\right)}{\sqrt{2}}}{\sqrt{1+\frac{1}{2 \bar{\gamma}_{\Lambda}}-\frac{\left(\rho_\Lambda^1+\rho_\Lambda^2\right)^{2}}{2}}}\right], \end{aligned} \label{equ:BER_imperfect} \end{equation} where $\Lambda=\{ab,ae\}$. In case of perfect channel estimation, we have $\operatorname{E}[|H_{ab}-\tilde{H}_{ab}|^2]=0$, $\rho_{ab}^1 = 1$ and $\rho_{ab}^2 =0$, and the \ac{BEP} performance will become the lower bound performance of \eqref{equ:BER_imperfect}, and it is given as \begin{equation} P_{ab}(E)=\frac{1}{2}\left[1- \frac{1}{\sqrt{1+\frac{1}{ \bar{\gamma}_{ab}}}}\right]. \label{equ:BER_perfect} \end{equation} \begin{figure} [t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\linewidth]{ real_09.eps} \caption{ \footnotesize Real part.} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=1\linewidth]{ imag_09.eps} \caption{\footnotesize Imaginary part.} \end{subfigure} \footnotesize\caption{ The distribution of uncorrelated part of Eve's channel for $\rho_{ae}=0.9$.}\label{fig:PDF} \end{figure} \subsubsection{Correlated Bob-Eve Channel} To further evaluate the performance of the proposed scheme and emphasize its reliability, we consider the effects of the eavesdropper location with respect to the legitimate user. For that, we consider a correlated eavesdropping channel scenario, where it is assumed that Eve is located near Bob. We model the correlation between channel coefficients \cite{ferdinand2013physical}/ as \begin{equation} H_{ae} = \rho_{ae}H_{ab}+\sqrt{1-\rho_{ae}^2}H_{e}, \label{equ:corr_chan} \end{equation} where $H_{e}$ is i.i.d. $\sim \mathcal{CN}(0,\sigma_a)$ and $\rho_{ae}$ is the correlation function of the legitimate channel gain with the eavesdropping channel given as \begin{equation} \rho_{ae} =\frac{\operatorname{Cov}[{H}_{ab},H_{ae}^*]}{\sqrt{\operatorname{var}({H}_{ab})\operatorname{var}(H_{ae}^*)}}=\frac{\operatorname{E}[{H}_{ab}H_{ae}^*]}{\sigma_a\sigma_e}. 
\label{equ:rho_ae} \end{equation} Please refer to Appendix \ref{App:rho}.\hfill$\blacksquare$ In the proposed algorithm, the overall channel is not used, and instead, only one component (i.e., all-pass channel) is exploited. Therefore, even if Eve estimates the channel with high correlation, still it will suffer from the error raised due to the decomposition of its channel. This leads to lower eavesdropping channel correlation and enhanced security performance. Fig. \ref{fig:PDF} shows the distribution of the real and imaginary parts of the uncorrelated part of the channel $H_e$. It is clearly seen that the variance of the uncorrelated term, given blue color, increases after performing the channel decomposition as shown by the orange distribution in Fig. \ref{fig:PDF}. \begin{figure}[t] \centering \includegraphics[scale=0.38]{ corre.eps} \footnotesize\caption{The channel correlation relationship between the conventional and proposed schemes.} \label{fig:Correl} \end{figure} Assuming the same channel model as in \eqref{equ:corr_chan} we have \begin{equation} H_{ae} = \rho_{ae}H_{ab}^{\mathrm{min}}H_{ab}^{\mathrm{ap}}+\sqrt{1-\rho_{ae}^2}H_{e}. \end{equation} For the channel equalization, only the minimum-phase component of the channel is needed. So, after performing the channel decomposition and normalization, Eve finds \begin{equation} \begin{aligned} H^{\mathrm{min}}_{ae} &=\rho_{ae}^{\mathrm{min}}H^{\mathrm{min}}_{ab}+\sqrt{1-\left(\rho_{ae}^{\mathrm{min}}\right)^2}H^{\mathrm{min}}_{e}\\ &=\frac{\rho_{ae}}{\Gamma}H^{\mathrm{min}}_{ab}+\sqrt{1-\left(\frac{\rho_{ae}}{\Gamma}\right)^2}H^{\mathrm{min}}_{e}, \end{aligned} \label{equ:corr_min} \end{equation} where $\Gamma$ is the correlation attenuation factor. Note that $\Gamma$ satisfies the following constraints \begin{equation} \begin{aligned} & \Gamma \sim \sqrt{1-\rho_{ae}^2} ~~~~~~~~~(C1)\\ &\Gamma \geq 1~\forall \rho_{ae}~~~~~~~~~~~~(C2)\\ & \Gamma = 1,~ \mathrm{for} ~\rho_{ae}=1~~~(C3) \\ &0\leq\frac{\rho_{ae}}{\Gamma}\leq1,~\forall \rho_{ae}~~~(C4). \end{aligned} \label{equ:const} \end{equation} Taking all the constraints given in \eqref{equ:const}, we find that \begin{equation} \Gamma = 1+\sqrt{1-\rho_{ae}^2}. \label{equ:gamma} \end{equation} Please refer to Appendix \ref{App:gamma}.\hfill$\blacksquare$ Fig. \ref{fig:Correl} shows the relationship between the correlation of the overall channel of Eve and Alice and the correlation of the minimum-phase component of their channels. As observed from Fig. \ref{fig:Correl}, the correlation factor decreases after performing the channel decomposition causing degradation in the data decoding capability at Eve. Therefore, the correlation function becomes $\rho_{ae}^{\mathrm{min}} = \rho_{ae}/(1+\sqrt{1-\rho_{ae}^2})<\rho_{ae}$. This result shows another advantage of using the proposed scheme in case of correlated eavesdropping channels. Please refer to Appendix \ref{App:corr_min}.\hfill$\blacksquare$ To evaluate the results above, the \ac{BER} performance is analyzed. we adopt the same \ac{BER} expression given by \eqref{equ:BER_perfect} by including the spatial correlation factor $\rho$ \cite{9095399}. Then, the \ac{BER} at both Bob and Eve can be found using \eqref{equ:BER_imperfect} as \begin{equation} P_{\Lambda}(E)=\frac{1}{2}\left[1- \frac{\rho_{\Lambda}}{\sqrt{1+\frac{1}{ \bar{\gamma}_{ab}}}}\right], \end{equation} where $\Lambda=\{ab,ae\}$. 
\subsection{Pilot Security: Channel NMSE}\label{Subsec:Pilot Security analysis} Using the pilot scheme as adopted in Subsection \ref{Subsec:Pilot Security}, the \ac{MMSE} estimate of the channel can be obtained as \cite{MMSE_OFDM} \begin{equation} \tilde{H}_{ab}^{\mathrm{MMSE}} = F\tilde{\mathbf{h}}_{ab} = FR_{h_{ab}Y_{ab}}R_{Y_{ab}Y_{ab}}Y^{-1}, \label{equ:H_MMSE1} \end{equation} where \begin{equation} \begin{aligned} &R_{h_{ab}Y_{ab}} = \operatorname{E}[h_{ab}Y_{ab}^H]= R_{h_{ab}h_{ab}}F^HP^H\\ &R_{Y_{ab}Y_{ab}} = \operatorname{E}[Y_{ab}Y_{ab}^H]= PFR_{h_{ab}h_{ab}}F^HP^H+\sigma^2I_N, \end{aligned} \label{equ:H_MMSE2} \end{equation} where $R_{h_{ab}h_{ab}}$ is the channel autocorrelation matrix, $F$ is the \ac{DFT} matrix, and $I_N$ is the identity matrix. Substituting \eqref{equ:H_MMSE2} in \eqref{equ:H_MMSE1} we find \begin{equation} \tilde{H}_{ab}^{\mathrm{MMSE}} = FR_{h_{ab}h_{ab}}F^HP^H(PFR_{h_{ab}h_{ab}}F^HP^H+\sigma^2I_N)^{-1}Y_{ab}. \label{equ:H_MMSE} \end{equation} Decomposing the estimated channel $\tilde{H}_{ab}^{\mathrm{MMSE}}$ would result in \begin{equation} \tilde{H}_{ab}^{\mathrm{MMSE}}(k) = \tilde{H}_{ab}^{\mathrm{min}}(k)\left(\tilde{H}_{ab}^{\mathrm{ap}}(k)\right)^2. \label{equ:H_MMSE_dec} \end{equation} \section{Simulation Results}\label{sec:simulation} \begin{table} [t] \begin{center} \caption{Simulation parameters} \label{tab:sim-parm} \begin{tabular}{l|l} \hline Parameters & Specifications \\ \hline \hline FFT Size $N$ & 256 \\ \hline Pilot Rate & $1 / 4$ \\ \hline Guard Interval (CP) & 64 \\ \hline Signal Constellation & $ \mathrm{QPSK}$ \\ \hline Channel Model & IEEE 802.11 channel model PDP \cite{mimo_ofdm}\\ \hline Max channel taps $L$ & $11$\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[t] \centering \includegraphics[scale=0.50]{ perfect_BER_proposed} \footnotesize\caption{ BER performance of the proposed data security algorithm vs channel shortening \cite{8292335}, AN \cite{6516879} and conventional CP-OFDM \cite{mimo_ofdm}.} \label{fig:BER_Perfect_CSI} \end{figure} In this section, we demonstrate the performance of channel decomposition-based \ac{PLS} algorithms proposed for \ac{OFDM} systems. To do so, the \ac{BER}-based secrecy gap and \ac{NMSE}-based secrecy gap metrics are used to evaluate the security of data and pilots, respectively. The \ac{BER}-based secrecy gap will quantify the amount of information leakage to the eavesdropper, evaluate the secrecy, and also shows the effect of the proposed algorithm on the reliability with respect to legitimate nodes \cite{8093591}. On the other hand, \ac{NMSE}-based secrecy gap shows the difference between the quality of estimated channel at the legitimate node and illegitimate node. Moreover, the effect of the proposed algorithm on the \ac{PAPR} is also presented along with the comparison to the conventional algorithms. The simulated \ac{OFDM} system parameters are described in Table \ref{tab:sim-parm}. Fig. \ref{fig:BER_Perfect_CSI} shows the \ac{BER} performance versus \ac{SNR} of the proposed data security algorithm along with the comparison to the conventional algorithms such as channel shortening \cite{8292335}, AN \cite{6516879} and conventional CP-OFDM \cite{mimo_ofdm}. First, it is observed that the derived analytical results match well with the simulations. Also, note that under perfect channel estimation the proposed scheme performs exactly as the CP-OFDM while Eve suffers from high error rates. 
\section{Introduction}
\IEEEPARstart{W}{hile} a large number of novel and efficient wireless technologies have been proposed to fulfill the demands of \ac{5G} and beyond in terms of reliability, throughput, and latency, security has become a sensitive issue \cite{hu2015mobile}. This is due to the fact that the open and broadcast nature of wireless transmission makes the physically transmitted signal, bearing the communication data and sensitive information, vulnerable to eavesdropping \cite{6739367}. Conventionally, upper-layer encryption-based algorithms are employed to counter these security threats. Such techniques, however, may not be feasible for future wireless networks because of the difficulties of key management and sharing in heterogeneous wireless networks. \Ac{PLS} has emerged as an interesting and powerful solution that can complement conventional security techniques and improve the overall security of wireless communication networks \cite{bloch2011physical}. Particularly, \ac{PLS} observes and exploits the dynamic characteristics of the signal, radio, and channel to secure both features and contents at the physical layer \cite{rivest1990handbook}.
\subsection{PLS Literature Review}
\Ac{PLS} techniques are proposed to achieve two different goals, namely, securing the data communication and securing the channel estimation. The first goal of \ac{PLS} algorithms is to degrade the data decoding capability of non-legitimate nodes relative to the legitimate node by exploiting different properties of the wireless channel, while the second is to enforce poor channel estimation at the eavesdropper/attacker, which degrades its signal recovery capability \cite{9336039, 4543070, 8509094}.
\subsubsection{PLS for Data}
Among the top areas in \ac{PLS}, securing \ac{OFDM} has recently drawn enormous attention, since \ac{OFDM} is the most commonly employed waveform in current and next-generation systems \cite{8093591}. In line with this vision, different security techniques have been proposed in the literature.
These techniques include key generation-based approaches \cite{7120014,keybased2,keybased3}, adaptive communication-based approaches \cite{7562191}, and \ac{AN}-based techniques \cite{6516879,AN1}.
\par Key generation-based techniques exploit the channel reciprocity between legitimate nodes as a common source of randomness. For example, the amplitude and phase of the \ac{RSS}, \ac{CIR}, and \ac{CFR}, as well as other feedback, can be used for key generation \cite{7120014,7393435}. These techniques are interesting in the sense that they can solve the key management problems faced by encryption algorithms. However, they are very sensitive to channel estimation errors, especially at low \ac{SNR} \cite{8509094}. In adaptive transmission-based techniques, the transmission parameters are adjusted based on the location, channel conditions, and \ac{qos} requirements of the legitimate receiver only. For example, precoding \cite{7562191} and channel shortening filter-based \cite{8292335} techniques provide security at the cost of a high \ac{PAPR} \cite{1223551}. Similarly, subcarrier selection-based techniques \cite{8093591} provide security at the cost of spectral efficiency degradation. \Ac{AN}-based techniques are also very effective for ensuring secure communication. In these techniques, an interference signal is added by the trusted node to degrade the performance of the non-legitimate node without affecting the performance of the legitimate receiver. However, the interference signal may increase the \ac{PAPR} and cause a small power loss, since part of the power budget is sacrificed for noise generation \cite{6516879,6881300}.
Due to the decorrelation property of the wireless channel, the attacker cannot decode the data even if it knows the algorithm and tries to apply it based on its own channel. However, the work in \cite{zhang2018csisnoop} claimed that under certain assumptions the attacker can acquire the \ac{CSI} between the legitimate nodes, which puts channel-based \ac{PLS} techniques at high risk of failure. For instance, the attacker can reverse engineer the beamforming matrix to acquire the \ac{CSI} between legitimate nodes \cite{zhang2018csisnoop}. The beamforming matrix can be estimated under the assumption that the attacker possesses the \ac{CSI} between the legitimate transmitter and itself, and that it is equipped with the same number of antennas as the legitimate transmitter. After estimating the \ac{CSI} corresponding to the legitimate nodes, it can compromise channel-dependent \ac{PLS} techniques such as \ac{AN} as well as key generation-based techniques \cite{6739367}. Security can still be ensured in such cases by enforcing a poor estimation quality at the attacker for the \ac{CSI} between the legitimate transmitter and the attacker, which motivates the need for pilot security. Securing the pilots protects the propagation environment properties from being extracted at the attacker side. Additionally, pilot security plays a critical role in physical layer authentication techniques.
\subsubsection{PLS for Pilots}
From the pilot security perspective, only a few techniques have been proposed in the literature. Along this direction, in \cite{7343356}, the phases of the pilot subcarriers are rotated based on the previously estimated \ac{CSI}. However, the channel is assumed to be known at both communicating nodes before the start of the algorithm. Moreover, the phase is manipulated by a single value, which makes such a technique easy to attack.
In \cite{7605496, chang2009training}, the security of downlink pilots is provided based on the CSI acquired at the transmitter via uplink training. Particularly, \ac{AN} is added in the null space of the channel to degrade the channel estimation capability at the attacker, though it may increase the \ac{PAPR} of the signal. On top of that, the power allocation between the pilot and the noise signal needs to be done intelligently in order to enhance the security performance. Similarly, in \cite{9070177}, anti-eavesdropping pilots are designed such that the channel can be estimated at the legitimate nodes only. However, the proposed algorithm requires full-duplex communication. In \cite{du2018improving}, the legitimate parties employ secret pilots for channel estimation. In the first step, the first node transmits the secret pilot to the other node. Afterwards, the receiving node returns the received signal to the transmitting node using an amplify-and-forward strategy, and the second node follows the same procedure as the first node. This approach requires extra overhead to share feedback. Although the authors in \cite{9095399} exploit the uniqueness of the channel responses between different nodes in order to degrade the channel estimation at the attacker node and show the effect in terms of \ac{BER}, they did not consider the sign ambiguity that arises when taking the square root of the channel, which in practice leads to a huge \ac{BER} due to erroneous channel estimation.
\subsection{Our Contributions}
In order to address the above-mentioned challenges, we propose novel schemes that can provide security for data, pilots, or jointly for data and pilots. The design of the proposed schemes is based on decomposing the channel into its all-pass and minimum-phase components and exploiting the properties of the decomposed channel to provide security. The main contributions of this work are as follows:
\begin{itemize}
\item We propose two novel minimum-phase all-pass channel decomposition-based \ac{PLS} schemes for OFDM in rich scattering channels, where the first method secures the data and the second secures the pilots. The proposed data security algorithm ensures strong security compared to conventional schemes. Furthermore, it preserves the requirements of the legitimate user without trading off security against overall performance. On the other hand, the proposed pilot security algorithm destroys the ability of the eavesdropper to estimate its channel, thus protecting the CSI, sensing, and radio environment mapping information.
\item To the best of our knowledge, the proposed schemes are the first that enable flexible, adaptive security based on security needs. Specifically, the security of data, pilots, or both can be selected based on the security requirements of the application. Additionally, the proposed scheme is robust against spatially correlated eavesdroppers located near the legitimate nodes.
\item Our security methods for both data and pilots exploit only the all-pass component of the channel, which has the unit-power property. Therefore, we provide security without introducing any power constraint such as a high \ac{PAPR}. In contrast, conventional PLS techniques such as artificial noise, zero forcing, and channel shortening provide security at the cost of changing the power distribution of the transmitted signal. These changes create high power peaks in time (i.e., PAPR) and in frequency (exceeding spectrum mask limits).
\item The secrecy of the proposed data security algorithm is evaluated under correlated and uncorrelated eavesdropping channels via a closed-form \ac{BER} expression, whereas for pilot security we analyze the \ac{NMSE} of the estimated channel. The simulations agree very well with the analytical formulas, emphasizing the effectiveness of the proposed algorithms.
\end{itemize}
\subsection{Organization and Notation}
The rest of this paper is organized as follows. Section \ref{sec:system-model} depicts the system model and discusses OFDM preliminaries. Section \ref{Sec:Proposed algorithm} first explains the channel decomposition and then presents the proposed data and pilot security algorithms. The numerical analysis of the proposed schemes is given in Section \ref{Sec:Numerical Analysis}, followed by the performance and simulation analysis in Section \ref{sec:simulation}. Finally, Section \ref{sec:conclusion} concludes the paper.
Bold uppercase $\mathbf{A}$, bold lowercase $\mathbf{a}$, and unbold letters $A,a$ are used to denote matrices, column vectors, and scalar values, respectively. $(\cdot)^H$, $(\cdot)^T$, and $(\cdot)^{-1}$ denote the Hermitian, transpose, and inverse operators. $|\cdot|$ denotes the Euclidean norm, $\operatorname{E}[\cdot]$ denotes the expectation operator, and $\operatorname{var}(\cdot)$ denotes the variance operator. $\mathbb{C}^{{M\times N}}$ denotes the space of $M\times N$ complex-valued matrices. The symbol $j$ represents the imaginary unit, with $j^2=-1$.
\section{System model and preliminaries}\label{sec:system-model}
As demonstrated in Fig. \ref{fig:System-model}, a \ac{SISO} \ac{OFDM} system is considered that consists of a legitimate transmitter (Alice, \{a\}), a legitimate receiver (Bob, \{b\}), and a passive eavesdropper (Eve, \{e\}) trying to intercept the transmission between Alice and Bob, where each node is equipped with a single antenna. The channels observed at Alice $\mathbf{h}_{ba} \in \mathbb{C}^{L \times 1}\sim \mathcal{CN}(0,\sigma_b)$, Bob $\mathbf{h}_{ab} \in \mathbb{C}^{L \times 1}\sim \mathcal{CN}(0,\sigma_a)$, and Eve $\mathbf{h}_{ae} \in \mathbb{C}^{L \times 1}\sim \mathcal{CN}(0,\sigma_e)$ are modeled as slowly varying multipath channels with $L$ exponentially decaying taps having a Rayleigh fading distribution. Moreover, due to the channel reciprocity assumption, the channel between Alice and Bob $\mathbf{h}_{ab}$ can be estimated from the channel between Bob and Alice $\mathbf{h}_{ba}$, where $\mathbf{h}_{ab}=\mathbf{h}_{ba}^T$ \cite{Recip}. In addition, as the wireless channel varies with the richness of the environment and the locations of the nodes, the channels experienced by Bob and Eve are assumed to be independent. Furthermore, it is also assumed that Alice has no information about the channel of Eve because of its passive operation. An \ac{OFDM} system is adopted for the communication, where $N$ complex data symbols in the frequency domain $\mathbf{{X}}= \begin{bmatrix} X(0) & X(1) &...& X(N-1)\end{bmatrix}^{\textrm T}$ are converted to the time domain $\mathbf{x}=\begin{bmatrix} x(0) & x(1) &...& x(N-1)\end{bmatrix}^{\textrm T}$ using an \ac{IFFT} to form one \ac{OFDM} symbol as
\begin{equation}
{x}(n) = \frac{1}{N}\sum_{k=0}^{N-1}{X}(k)e^{j2\pi nk/N}.
\end{equation}
To combat the \ac{ISI}, a \ac{CP} is appended to $\mathbf{x}$ before transmission. Finally, the signal is transmitted through the wireless channel and reaches the legitimate receiver (Bob) and the illegitimate receiver (Eve).
The wireless channel is represented by its \ac{CIR}, which is given as
\begin{equation}
h_{\Lambda}(t,\tau) = \sum_{l=0}^{L-1}h_l(t)\delta(\tau-\tau_l),
\end{equation}
where $\Lambda \in \{ab,ba,ae\}$, $h_l(t)$ and $\tau_l$ are the complex channel gain and the delay of the $l$-th path at time $t$, respectively, $h_l(t)$ is assumed to have a zero-mean Gaussian distribution, $L$ is the total number of effective channel taps, and $\delta(\cdot)$ is the Dirac delta function. The \ac{CFR} is then expressed as
\begin{equation}
H_{\Lambda}(t,f) = \int^{+\infty}_{-\infty}h_{\Lambda}(t,\tau)e^{-j2\pi f\tau}d\tau.
\end{equation}
Assuming that the channel is time-invariant during one \ac{OFDM} symbol period $T_s$, and that the frequency spacing is $\Delta f_c$, the \ac{CIR} and the \ac{CFR} can be respectively represented as
\begin{equation*}
h_{\Lambda}(n) = h_{\Lambda}(nT_s,\tau),~ H_{\Lambda}(k) = H_{\Lambda}(nT_s,k\Delta f_c) .
\end{equation*}
At the receiver, the \ac{CP} is discarded first, and then the \ac{FFT} is applied. The received $k$-th symbol is found as
\begin{equation}
Y_{\Lambda}(k) = H_{\Lambda}(k)X(k)+W(k),
\end{equation}
where $W(k)$ is the zero-mean \ac{AWGN} with variance $\sigma^2$ at the $k$-th subcarrier. It is assumed that the sampling rate satisfies the Nyquist criterion. Given that the \ac{CFR} is estimated using the known pilots after receiving the signal, let $k_p$ be the $p$-th index where a pilot is inserted in the signal $\mathbf{X}$; then the estimated \ac{CFR} is given as
\begin{equation}
\tilde{H}_{\Lambda}(k_p) = \frac{Y(k_p)}{X(k_p)} = H_{\Lambda}(k_p)+\tilde{W}(k_p),
\label{Hp}
\end{equation}
where $\tilde{W}(k_p)$ denotes the noise term. To get an estimate over all $N$ subcarriers, $\tilde{H}$ is interpolated and the final estimated \ac{CFR} is obtained.
\begin{figure}[t]
\centering
\includegraphics[width=0.90\columnwidth]{model2.pdf}
\footnotesize\caption{System model where Alice and Bob are communicating over a rich scattering channel in the presence of Eve.}
\label{fig:System-model}
\end{figure}
\begin{figure*}[!t]
\begin{center}
\subfloat[\footnotesize Overall channel.]{\label{convPerf:1}\includegraphics[width=58mm]{zp_channel.eps}}
\subfloat[\footnotesize All-pass channel.]{\label{convPerf:2}\includegraphics[width=58mm]{zp_Allpass.eps}}
\subfloat[\footnotesize Minimum-phase channel.]{\label{convPerf:3}\includegraphics[width=58mm]{zp_Minphase.eps}} \\
\end{center}
\centering
\footnotesize\caption{The zero-pole diagram of the minimum-phase all-pass decomposition of a wireless channel.}
\label{fig:Channel-decomposition}
\end{figure*}
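For concreteness, the following minimal NumPy sketch illustrates the baseline per-pilot estimation of \eqref{Hp} for an exponentially decaying Rayleigh channel; the decay constant, pilot spacing, and noise level are illustrative assumptions rather than system requirements.
\begin{verbatim}
import numpy as np

N, L, spacing = 256, 11, 4        # FFT size, taps, pilot spacing (assumed)
rng = np.random.default_rng(0)

# Exponentially decaying Rayleigh taps, normalized to unit total power
pdp = np.exp(-np.arange(L) / 3.0); pdp /= pdp.sum()
h = np.sqrt(pdp / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
H = np.fft.fft(h, N)              # CFR on the N-subcarrier grid

k_p = np.arange(0, N, spacing)    # comb-type pilot indices
P = np.exp(2j * np.pi * rng.random(k_p.size))      # unit-amplitude pilots
noise = 0.05 * (rng.standard_normal(k_p.size)
                + 1j * rng.standard_normal(k_p.size)) / np.sqrt(2)
Y_p = H[k_p] * P + noise
H_ls = Y_p / P                    # per-pilot estimate, as in (Hp)
# simple linear interpolation to all N subcarriers
H_hat = (np.interp(np.arange(N), k_p, H_ls.real)
         + 1j * np.interp(np.arange(N), k_p, H_ls.imag))
\end{verbatim}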
\section{Proposed Algorithm}\label{Sec:Proposed algorithm}
The increase in the number of wireless communication-based applications with varying requirements motivates the need for adaptive and flexible security designs \cite{9336039}. Inspired by this motivation, in this section we first present the channel decomposition concept, and then we propose novel algorithms that are capable of providing adaptive and flexible security. Particularly, when a very high level of security is required, both the pilots and the data are secured using the proposed algorithms. Otherwise, either data or pilot security is provided, depending on the security requirements.
\subsection{Minimum-phase All-pass Channel Decomposition}\label{Subsec:Channel decomp}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65]{Data_security.pdf}
\footnotesize\caption{The block diagram showing the main steps of the proposed data security algorithm.}
\label{fig:Data security}
\end{figure*}
Wireless channels are causal systems, since they operate in real time and their samples belong only to the present or the past. Additionally, the \ac{CIR} of a wireless channel can be represented by an \ac{FIR} filter, and thus it is stable \cite{oppenheim}. Consequently, a stable and causal system with system function $H_{\Lambda}(z)$ has all of its poles inside the unit circle, while its zeros are free to lie outside. Let $H_{\Lambda}^1(z)$ be the system function with all zeros and poles inside the unit circle, and let the zeros outside be located at $1/p_k$. This implies that we can decompose such a system into two components as
\begin{equation}
H_{\Lambda}(z)=\underbrace{\left(H_{\Lambda}^1(z) \prod_{k=1}^{q}\left(1-p_{k}^{*} z^{-1}\right)\right)}_{H_{\Lambda}^{\min }(z)} \overbrace{\prod_{k=1}^{q}\left(\frac{z^{-1}-p_{k}}{1-p_{k}^{*} z^{-1}}\right)}^{H_{\Lambda}^{\mathrm{ap}}(z)},
\label{equ:factorization}
\end{equation}
where $H_{\Lambda}^{\mathrm{min}}(z)$ and $H_{\Lambda}^{\mathrm{ap}}(z)$ are defined as the minimum-phase and all-pass components of $H_{\Lambda}(z)$, respectively. For instance, Fig. \ref{fig:Channel-decomposition} illustrates the zero-pole diagram of the minimum-phase all-pass decomposition of a random channel. As seen in Fig. \ref{fig:Channel-decomposition}(a), the overall channel contains zeros inside and outside the unit circle, whereas the poles are centered at the origin. As shown in Fig. \ref{fig:Channel-decomposition}(b), for the all-pass channel only the zeros outside the unit circle are kept, along with virtual poles added at the inverses of the zeros' locations to cancel out the attenuation effect, thus passing all frequencies, as the name suggests. To compensate for the effect of these virtual poles, zeros are added on top of them, yielding a system with all zeros inside the unit circle (i.e., a minimum-phase system), as illustrated in Fig. \ref{fig:Channel-decomposition}(c). The resulting components have useful properties; for instance, in terms of the magnitude response $|H(e^{j\omega})|$, the factorization in \eqref{equ:factorization} implies that $|H^{\mathrm{min}}(e^{j\omega})| = |H(e^{j\omega})|$ and $|H^{\mathrm{ap}}(e^{j\omega})| = 1$. These properties of the decomposed channel will be exploited to provide security for both data and pilots.
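To make \eqref{equ:factorization} concrete, the decomposition can be computed from the channel taps by reflecting every zero lying outside the unit circle to its conjugate-reciprocal position and rescaling so that $|H^{\mathrm{min}}|$ matches $|H|$ on the unit circle. The following minimal NumPy sketch (with arbitrary toy taps) illustrates this and verifies the unit-amplitude property of the all-pass part.
\begin{verbatim}
import numpy as np

def minphase_allpass(h, N=256):
    """Split FIR taps h into H = Hmin * Hap, evaluated on an N-point grid."""
    z = np.roots(h)                        # zeros of H(z)
    outside = z[np.abs(z) > 1]
    inside = z[np.abs(z) <= 1]
    # Reflect outside zeros to their conjugate reciprocals; the |zero|
    # product rescales Hmin so that |Hmin| = |H| on the unit circle.
    zmin = np.concatenate([inside, 1.0 / np.conj(outside)])
    hmin = h[0] * np.prod(np.abs(outside)) * np.poly(zmin)
    Hmin = np.fft.fft(hmin, N)
    Hap = np.fft.fft(h, N) / Hmin          # all-pass component on the grid
    return Hmin, Hap

h = np.array([1.0, -0.4 + 0.3j, 0.9, -0.2j])   # toy channel taps (assumed)
Hmin, Hap = minphase_allpass(h)
assert np.allclose(np.abs(Hap), 1.0)           # |Hap(e^{jw})| = 1
\end{verbatim}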
\subsection{Proposed Data Security Method}\label{Subsec:Data security}
This subsection presents the details of the proposed algorithm for providing data security. The designed algorithm is based on a novel precoder that exploits the components of the channel separately, instead of using the full channel as in conventional security algorithms \cite{7467419}. As explained in Subsection \ref{Subsec:Channel decomp}, the proposed method uses only the conjugate of the all-pass component of the channel, ${H^{\mathrm{ap}}}^*(e^{j\omega})$, for precoding. Therefore, it does not increase the \ac{PAPR} \cite{4543070}. Additionally, it provides an effective solution to ensure secure communication against eavesdropping.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65]{Pilot_security.pdf}
\footnotesize\caption{The block diagram showing the main steps of the proposed pilot security algorithm.}
\label{fig:Pilot security}
\end{figure*}
Fig. \ref{fig:Data security} illustrates the block diagram of the proposed data security algorithm, whose basic steps are described as follows:
\begin{enumerate}
\item Bob sends the pilot signal ${P}$ to Alice to estimate $H_{ba}$, where due to channel reciprocity $H_{ba}=H_{ab}$. Thus, we assume that the \ac{CSI} is available at Alice.
\item Alice decomposes the \ac{CFR} $H_{ba}$ into its all-pass $H_{ab}^{\mathrm{ap}}$ and minimum-phase $H_{ab}^{\mathrm{min}}$ components, as explained in Subsection \ref{Subsec:Channel decomp}.
\item Alice multiplies the data subcarriers $X(k_d)$ at indices $k_d$ by the conjugate of the all-pass component of the channel (${H_{ab}^{\mathrm{ap}}}^*$), while the pilot subcarriers $X(k_p)$ at indices $k_p$ are left intact. Then, the signal transmitted by Alice can be expressed as
\begin{equation}
X(k) = \begin{cases} {H_{ab}^{\mathrm{ap}}}^*(k)D(k) &; k \in k_d\\ P(k) &; k \in k_p.\\ \end{cases}
\end{equation}
\item The received signal at Bob can be given as
\begin{equation}
\begin{aligned}
Y_{ab}(k)= \begin{cases} H_{ab}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ab}(k) &; k \in k_d\\ H_{ab}(k)P(k)+W_{ab}(k) &; k \in k_p.\\ \end{cases}
\end{aligned}
\end{equation}
Using the pilots $P(k)$ at $k_p$, the \ac{CFR} $\tilde{H}_{ab}(k)$ is estimated as described in \eqref{Hp}.
\item Applying the channel decomposition of Subsection \ref{Subsec:Channel decomp} to the estimated channel, we obtain $\tilde{H}_{ab}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)\tilde{H}_{ab}^{\mathrm{ap}}(k)$. The data subcarriers of the signal received by Bob are given as
\begin{equation}
\begin{aligned}
Y_{ab}(k) &= H_{ab}^{\mathrm{min}}(k)H_{ab}^{\mathrm{ap}}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ab}(k)\\ &=H_{ab}^{\mathrm{min}}(k)D(k)+W_{ab}(k);~ k\in k_d.
\end{aligned}
\label{equ:rec_ab_2}
\end{equation}
\item Using the result of step 5, Bob equalizes $H_{ab}^{\mathrm{min}}$ to decode the data as
\begin{equation}
\begin{aligned}
\hat{X}_{Bob}(k) &= \frac{Y_{ab}(k)}{\tilde{H}_{ab}^{\mathrm{min}}(k)}\\ &=\frac{H_{ab}^{\mathrm{min}}(k)D(k)+W_{ab}(k)}{\tilde{H}_{ab}^{\mathrm{min}}(k)}\\ &= D(k)+\tilde{W}_{ab}(k);~ k\in k_d,
\end{aligned}
\label{dateqe}
\end{equation}
where $\tilde{W}_{ab}(k)= W_{ab}(k)/\tilde{H}_{ab}^{\mathrm{min}}(k)$ and $H_{ab}^{\mathrm{min}}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)$ in the case of perfect channel estimation.
\end{enumerate}
The received signal at Eve can be given by
\begin{equation}
\begin{aligned}
Y_{ae}(k)= \begin{cases} H_{ae}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ae}(k) &; k \in k_d\\ H_{ae}(k)P(k)+W_{ae}(k) &; k \in k_p.\\ \end{cases}
\end{aligned}
\end{equation}
Applying a similar procedure at Eve, the final signal at Eve can be given as
\begin{equation}
\begin{aligned}
\hat{X}_{eve}(k) &= \frac{Y_{ae}(k)}{\tilde{H}_{ae}(k)}\\ &=\frac{H_{ae}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ae}(k)} {\tilde{H}_{ae}(k)}\\ &= {H_{ab}^{\mathrm{ap}}}^*(k)D(k)+\tilde{W}_{ae}(k);~ k\in k_d,
\end{aligned}
\label{equ:eve-data}
\end{equation}
where $\tilde{W}_{ae}(k)= W_{ae}(k)/\tilde{H}_{ae}(k)$ and $H_{ae}(k)=\tilde{H}_{ae}(k)$ in the case of perfect channel estimation. As seen from \eqref{equ:eve-data}, Eve is not able to decode the data even if it perfectly estimates its own channel. This is due to the unknown randomness caused by the term ${H_{ab}^{\mathrm{ap}}}^*(k)$, which is uncorrelated with Eve's channel.\footnote{Note that, due to the channel decorrelation between $\mathbf{H}_{ab}$ and $\mathbf{H}_{ae}$ in a rich scattering environment, Eve is not able to estimate and remove the effect of ${H_{ab}^{\mathrm{ap}}}^*(k)$.}
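Continuing the decomposition sketch above (a noise-free illustration, reusing \texttt{minphase\_allpass()}), the precoding and equalization steps reduce to a few lines; Eve is left with the residual per-subcarrier rotation ${H_{ab}^{\mathrm{ap}}}^*(k)$ of \eqref{equ:eve-data} even with perfect knowledge of her own channel.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
N, L = 256, 11

def taps(rng, L=11):                   # exponentially decaying Rayleigh taps
    pdp = np.exp(-np.arange(L) / 3.0); pdp /= pdp.sum()
    return np.sqrt(pdp / 2) * (rng.standard_normal(L)
                               + 1j * rng.standard_normal(L))

h_ab, h_ae = taps(rng), taps(rng)      # independent Bob / Eve channels
H_ab, H_ae = np.fft.fft(h_ab, N), np.fft.fft(h_ae, N)
Hmin_ab, Hap_ab = minphase_allpass(h_ab, N)   # from the previous sketch

D = ((1 - 2 * rng.integers(0, 2, N))
     + 1j * (1 - 2 * rng.integers(0, 2, N))) / np.sqrt(2)   # QPSK symbols
X = np.conj(Hap_ab) * D                # Alice: precode data with (Hap)^*
Y_bob = H_ab * X                       # noise-free: equals Hmin_ab * D
assert np.allclose(Y_bob / Hmin_ab, D) # Bob: equalize minimum phase only
D_eve = (H_ae * X) / H_ae              # Eve, perfect self-CSI: still sees
                                       # conj(Hap_ab) * D, a random rotation
\end{verbatim}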
\subsection{Proposed Pilot Security Method}\label{Subsec:Pilot Security}
This subsection presents the details of the proposed algorithm for providing pilot security. Similar to the data security case, the proposed pilot security algorithm exploits the components of the channel separately. It ensures that only the legitimate receiver is able to estimate the channel, while Eve can learn neither the channel nor the environment; moreover, unlike \cite{7605496}, it does not affect the \ac{PAPR}. Additionally, the proposed algorithm is also suitable for securing feedback, hardware impairments, and hardware-based authentication. Furthermore, the eavesdropper is not able to extract information about the precoder corresponding to the channel of the legitimate nodes from the received signal, and thus cannot launch attacks to learn the \ac{CSI} corresponding to the legitimate nodes \cite{zhang2018csisnoop}. Fig. \ref{fig:Pilot security} illustrates the block diagram of the proposed pilot security algorithm, whose basic steps are described as follows:
\begin{enumerate}
\item Bob sends the pilot signal ${P}$ to Alice to estimate $H_{ba}$, where due to channel reciprocity $H_{ba}=H_{ab}$. Thus, we assume that the \ac{CSI} is available at Alice.
\item Alice decomposes the \ac{CFR} $H_{ba}$ into its all-pass $H_{ab}^{\mathrm{ap}}$ and minimum-phase $H_{ab}^{\mathrm{min}}$ components, as explained in Subsection \ref{Subsec:Channel decomp}.
\item Alice multiplies the pilot subcarriers $X(k_p)$ at indices $k_p$ by the all-pass component of the channel ${H_{ab}^{\mathrm{ap}}}$, while the data subcarriers $X(k_d)$ at indices $k_d$ are left intact. Then, the signal transmitted by Alice can be expressed as
\begin{equation}
X(k) = \begin{cases} D(k) &; k \in k_d\\ {H_{ab}^{\mathrm{ap}}}(k)P(k) &; k \in k_p.\\ \end{cases}
\end{equation}
\item The received signal at Bob can be given as
\begin{equation}
\begin{aligned}
Y_{ab}(k)= \begin{cases} H_{ab}(k)D(k)+W_{ab}(k) &; k \in k_d\\ H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ab}(k) &; k \in k_p.\\ \end{cases}
\end{aligned}
\end{equation}
Using the pilots $P(k)$ at $k_p$, the precoded \ac{CFR} $\tilde{H}_{abp}(k)=H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k)$ is estimated as described in \eqref{Hp}.
\item In order to find $\tilde{H}_{ab}(k)$ from the estimated $\tilde{H}_{abp}(k)$, the channel decomposition of Subsection \ref{Subsec:Channel decomp} is applied to the estimated precoded channel as follows: $\tilde{H}_{abp}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k) (\tilde{H}_{ab}^{\mathrm{ap}}(k))^2$, where $\tilde{H}_{ab}^{\mathrm{min}}(k)$ is the minimum-phase component and $(\tilde{H}_{ab}^{\mathrm{ap}}(k))^2$ is the all-pass component of $\tilde{H}_{abp}(k)$.
\item The estimated channel at Bob can then be calculated as $\tilde{H}_{ab}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)\sqrt{(\tilde{H}_{ab}^{\mathrm{ap}}(k))^2}$.\footnote{Note that $\sqrt{(\tilde{H}_{ab}^{\mathrm{ap}})^2}=\pm \tilde{H}_{ab}^{\mathrm{ap}}$. Therefore, in order to estimate the sign of the estimated channel, we exploit the correlation between the channel subcarriers, which ensures a smooth transition from the value of one subcarrier to the next. Thus, we solve the sign ambiguity that was not considered in \cite{9095399}.}
\end{enumerate}
At the Eve side, the received signal is given by
\begin{equation}
\begin{aligned}
Y_{ae}(k)= \begin{cases} H_{ae}(k)D(k)+W_{ae}(k) &; k \in k_d\\ H_{ae}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ae}(k) &; k \in k_p.\\ \end{cases}
\end{aligned}
\label{equ:eve-pilot}
\end{equation}
Using the pilots $P(k)$ at $k_p$, the precoded \ac{CFR} $\tilde{H}_{abe}(k)=H_{ae}(k){H_{ab}^{\mathrm{ap}}}(k)$ is estimated as described in \eqref{Hp}. As seen from \eqref{equ:eve-pilot}, the eavesdropper is not able to correctly estimate its channel because of the unknown randomness caused by the all-pass component ${H_{ab}^{\mathrm{ap}}}(k)$ of the legitimate channel in \eqref{equ:eve-pilot}. Hence, it can neither estimate the channel nor learn the environment.
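One standard way to carry out step 5 directly on the subcarrier grid is real-cepstrum minimum-phase reconstruction, taking the square root of the all-pass part through an unwrapped phase so that subcarrier continuity resolves the sign ambiguity discussed in the footnote. The sketch below (reusing the variables of the previous sketches, with $N$ even and accuracy limited by cepstral aliasing) illustrates the idea; it is not necessarily the exact implementation used here.
\begin{verbatim}
import numpy as np

def minphase_from_mag(H):
    """Minimum-phase response with |Hmin| = |H|, via the real cepstrum."""
    N = len(H)                                    # N must be even here
    c = np.fft.ifft(np.log(np.abs(H) + 1e-12)).real
    w = np.zeros(N); w[0] = 1.0; w[N // 2] = 1.0; w[1:N // 2] = 2.0
    return np.exp(np.fft.fft(w * c))

H_abp = H_ab * Hap_ab                  # what Bob estimates from the pilots
Hmin_hat = minphase_from_mag(H_abp)    # |H_abp| = |H_ab|: approximates Hmin_ab
Hap2_hat = H_abp / Hmin_hat            # estimate of (Hap_ab)^2
theta = np.unwrap(np.angle(Hap2_hat))  # continuous phase across subcarriers
Hap_hat = np.exp(1j * theta / 2)       # square root, sign fixed by continuity
H_ab_hat = Hmin_hat * Hap_hat          # recovered CFR (up to a global sign)
\end{verbatim}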
\subsection{Joint Pilot \& Data Security}\label{Subsec:Joint Security}
This subsection presents the details of the proposed algorithm for providing joint data and pilot security. Particularly, in the case of a very high security risk, both the pilots and the data need to be secured to provide a very high level of security, so that the attacker is not able to learn the channel, the environment, or the data. Here, the idea is to exploit the proposed algorithms presented in Subsections \ref{Subsec:Data security} and \ref{Subsec:Pilot Security} to ensure both pilot and data security. The signal transmitted by Alice after applying both data and pilot security can be expressed as
\begin{equation}
X(k) = \begin{cases} {H_{ab}^{\mathrm{ap}}}^*(k)D(k) &; k \in k_d\\ {H_{ab}^{\mathrm{ap}}}(k)P(k) &; k \in k_p.\\ \end{cases}
\end{equation}
The received signal at Bob can be given as
\begin{equation}
\begin{aligned}
Y_{ab}(k)= \begin{cases} H_{ab}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ab}(k) &; k \in k_d\\ H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ab}(k) &; k \in k_p.\\ \end{cases}
\end{aligned}
\end{equation}
Using the pilots $P(k)$ at $k_p$, the precoded \ac{CFR} $\tilde{H}_{abp}(k)=H_{ab}(k){H_{ab}^{\mathrm{ap}}}(k)$ is estimated as described in \eqref{Hp}. Afterwards, $\tilde{H}_{ab}(k)$ is estimated as $\tilde{H}_{ab}(k)=\tilde{H}_{ab}^{\mathrm{min}}(k)\sqrt{(\tilde{H}_{ab}^{\mathrm{ap}}(k))^2}$. Finally, Bob decodes the data similarly to \eqref{dateqe}. On the other hand, the received signal at Eve can be given as
\begin{equation}
\begin{aligned}
Y_{ae}(k)= \begin{cases} H_{ae}(k){H_{ab}^{\mathrm{ap}}}^*(k)D(k)+W_{ae}(k) &; k \in k_d\\ H_{ae}(k){H_{ab}^{\mathrm{ap}}}(k) P(k)+W_{ae}(k) &; k \in k_p.\\ \end{cases}
\end{aligned}
\label{jointeve}
\end{equation}
It should be noted from \eqref{jointeve} that Eve is able to estimate neither its channel nor the data, due to the randomness caused by ${H_{ab}^{\mathrm{ap}}}^*(k)$ and ${H_{ab}^{\mathrm{ap}}}(k)$ in $D(k)$ and $P(k)$, respectively. This provides two-level security that is suitable for critical applications.
\section{Numerical Analysis}\label{Sec:Numerical Analysis}
In this section, we analyze the BER performance for Bob and Eve under correlated and uncorrelated eavesdropping channels to investigate the data security algorithm. Afterwards, we derive the \ac{MMSE} of the estimated channel when the pilot security algorithm is applied.
\subsection{Data Security: BER-based Secrecy Gap}\label{Subsec:Data security analysis}
To emphasize the performance of the data security method, we compare the \ac{BER} performance gap between Bob and Eve.
In this subsection, we analyze the \ac{BEP} under correlated and uncorrelated eavesdropping channels.
\subsubsection{Uncorrelated Bob-Eve Channel}
One elaborate method to suppress the effect of $W(k)$ during channel estimation is \ac{MMSE} estimation \cite{MMSE_OFDM}. After estimating the channel $\tilde{H}$, the received data at the indices $k_d$, given in \eqref{equ:rec_ab_2}, can be expressed as
\begin{equation}
Y_{ab}(k)=H_{ab}^{\mathrm{min}}(k)D(k)+W(k);~ k\in k_d.
\end{equation}
Therefore, for normalized-power data symbols (i.e., $\operatorname{E}[|D(k)|^2]=1$), the \ac{SNR} of the received signal is given by
\begin{equation}
\begin{aligned}
\gamma_{ab}&\triangleq\frac{\operatorname{E}[|H_{ab}^{\mathrm{min}}(k)D(k)|^2]}{M\operatorname{E}[|W(k)|^2]}=\frac{\operatorname{E}[|H_{ab}^{\mathrm{min}}(k)|^2]\operatorname{E}[|D(k)|^2]}{M\operatorname{E}[|W(k)|^2]}\\ &=\frac{\operatorname{E}[|H_{ab}^{\mathrm{min}}(k)|^2]}{M\sigma^2},
\end{aligned}
\end{equation}
where $M$ denotes the number of bits represented by each symbol. As demonstrated in Subsection \ref{Subsec:Channel decomp}, the minimum-phase component has the same power as the overall channel response, i.e., $|H_{ab}^{\mathrm{min}}(k)|^2 = |H_{ab}(k)|^2$. Thus, the \ac{SNR} can be written as
\begin{equation}
\gamma_{ab}=\frac{\operatorname{E}[|H_{ab}(k)|^2]}{M\sigma^2}.
\end{equation}
The average \ac{BEP} $P_{\Lambda}(e)$ is then given as a function of the \ac{SNR} $\bar{\gamma}_\Lambda$ and the correlation coefficients $\rho_\Lambda^1$ and $\rho_\Lambda^2$ between the actual and the estimated channel responses \cite{OFDM_performance} by
\begin{equation}
\begin{aligned}
P_{\Lambda}(e)=\frac{1}{2}&\left[1-\frac{1}{2} \frac{\frac{\left(\rho_\Lambda^1+\rho_\Lambda^2\right)}{\sqrt{2}}}{\sqrt{1+\frac{1}{2 {\bar{\gamma}}_{\Lambda}}-\frac{\left(\rho_\Lambda^1-\rho_\Lambda^2\right)^{2}}{2}}}\right. \\ &\left.-\frac{1}{2} \frac{\frac{\left(\rho_\Lambda^1-\rho_\Lambda^2\right)}{\sqrt{2}}}{\sqrt{1+\frac{1}{2 \bar{\gamma}_{\Lambda}}-\frac{\left(\rho_\Lambda^1+\rho_\Lambda^2\right)^{2}}{2}}}\right],
\end{aligned}
\label{equ:BER_imperfect}
\end{equation}
where $\Lambda=\{ab,ae\}$. In the case of perfect channel estimation, we have $\operatorname{E}[|H_{ab}-\tilde{H}_{ab}|^2]=0$, $\rho_{ab}^1 = 1$, and $\rho_{ab}^2 =0$, so the \ac{BEP} reduces to the lower bound of \eqref{equ:BER_imperfect}, given as
\begin{equation}
P_{ab}(E)=\frac{1}{2}\left[1- \frac{1}{\sqrt{1+\frac{1}{ \bar{\gamma}_{ab}}}}\right].
\label{equ:BER_perfect}
\end{equation}
\begin{figure} [t]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{real_09.eps}
\caption{\footnotesize Real part.}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=1\linewidth]{imag_09.eps}
\caption{\footnotesize Imaginary part.}
\end{subfigure}
\footnotesize\caption{The distribution of the uncorrelated part of Eve's channel for $\rho_{ae}=0.9$.}\label{fig:PDF}
\end{figure}
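As a quick numerical check of \eqref{equ:BER_perfect} (a minimal sketch; the SNR grid is arbitrary), $\bar{\gamma}_{ab}=10$~dB gives $P_{ab}(E)\approx 2.3\times10^{-2}$:
\begin{verbatim}
import numpy as np

def bep_perfect(gamma_db):
    """Lower-bound BEP of (BER_perfect) under perfect channel estimation."""
    g = 10.0 ** (np.asarray(gamma_db, dtype=float) / 10.0)
    return 0.5 * (1.0 - 1.0 / np.sqrt(1.0 + 1.0 / g))

print(bep_perfect([0, 10, 20, 30]))  # ~[1.5e-1, 2.3e-2, 2.5e-3, 2.5e-4]
\end{verbatim}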
\subsubsection{Correlated Bob-Eve Channel}
To further evaluate the performance of the proposed scheme and emphasize its reliability, we consider the effect of the eavesdropper's location with respect to the legitimate user. For that, we consider a correlated eavesdropping channel scenario, where it is assumed that Eve is located near Bob. We model the correlation between the channel coefficients \cite{ferdinand2013physical} as
\begin{equation}
H_{ae} = \rho_{ae}H_{ab}+\sqrt{1-\rho_{ae}^2}H_{e},
\label{equ:corr_chan}
\end{equation}
where $H_{e}$ is i.i.d. $\sim \mathcal{CN}(0,\sigma_a)$ and $\rho_{ae}$ is the correlation coefficient between the legitimate and the eavesdropping channel gains, given as
\begin{equation}
\rho_{ae} =\frac{\operatorname{Cov}[{H}_{ab},H_{ae}^*]}{\sqrt{\operatorname{var}({H}_{ab})\operatorname{var}(H_{ae}^*)}}=\frac{\operatorname{E}[{H}_{ab}H_{ae}^*]}{\sigma_a\sigma_e}.
\label{equ:rho_ae}
\end{equation}
The proof is provided in Appendix \ref{App:rho}. In the proposed algorithm, the overall channel is not used; instead, only one component (i.e., the all-pass channel) is exploited. Therefore, even if Eve estimates a channel that is highly correlated with Bob's, it still suffers from the error introduced by the decomposition of its own channel. This leads to a lower eavesdropping channel correlation and an enhanced security performance. Fig. \ref{fig:PDF} shows the distribution of the real and imaginary parts of the uncorrelated part of the channel, $H_e$. It is clearly seen that the variance of the uncorrelated term, shown in blue, increases after performing the channel decomposition, as shown by the orange distribution in Fig. \ref{fig:PDF}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{corre.eps}
\footnotesize\caption{The channel correlation relationship between the conventional and proposed schemes.}
\label{fig:Correl}
\end{figure}
Assuming the same channel model as in \eqref{equ:corr_chan}, we have
\begin{equation}
H_{ae} = \rho_{ae}H_{ab}^{\mathrm{min}}H_{ab}^{\mathrm{ap}}+\sqrt{1-\rho_{ae}^2}H_{e}.
\end{equation}
For the channel equalization, only the minimum-phase component of the channel is needed. Thus, after performing the channel decomposition and normalization, Eve finds
\begin{equation}
\begin{aligned}
H^{\mathrm{min}}_{ae} &=\rho_{ae}^{\mathrm{min}}H^{\mathrm{min}}_{ab}+\sqrt{1-\left(\rho_{ae}^{\mathrm{min}}\right)^2}H^{\mathrm{min}}_{e}\\ &=\frac{\rho_{ae}}{\Gamma}H^{\mathrm{min}}_{ab}+\sqrt{1-\left(\frac{\rho_{ae}}{\Gamma}\right)^2}H^{\mathrm{min}}_{e},
\end{aligned}
\label{equ:corr_min}
\end{equation}
where $\Gamma$ is the correlation attenuation factor. Note that $\Gamma$ satisfies the following constraints:
\begin{equation}
\begin{aligned}
& \Gamma \sim \sqrt{1-\rho_{ae}^2} ~~~~~~~~~(C1)\\ &\Gamma \geq 1~\forall \rho_{ae}~~~~~~~~~~~~(C2)\\ & \Gamma = 1,~ \mathrm{for} ~\rho_{ae}=1~~~(C3) \\ &0\leq\frac{\rho_{ae}}{\Gamma}\leq1,~\forall \rho_{ae}~~~(C4).
\end{aligned}
\label{equ:const}
\end{equation}
Taking into account all the constraints given in \eqref{equ:const}, we find that
\begin{equation}
\Gamma = 1+\sqrt{1-\rho_{ae}^2}.
\label{equ:gamma}
\end{equation}
The derivation is given in Appendix \ref{App:gamma}. Fig. \ref{fig:Correl} shows the relationship between the correlation of the overall Bob and Eve channels and the correlation of their minimum-phase components. As observed from Fig. \ref{fig:Correl}, the correlation factor decreases after performing the channel decomposition, causing a degradation in the data decoding capability at Eve. Specifically, the correlation coefficient becomes $\rho_{ae}^{\mathrm{min}} = \rho_{ae}/(1+\sqrt{1-\rho_{ae}^2})<\rho_{ae}$; the proof is provided in Appendix \ref{App:corr_min}. This result shows another advantage of the proposed scheme in the case of correlated eavesdropping channels.
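The impact of \eqref{equ:gamma} can be checked numerically: letting $\bar{\gamma}\to\infty$ in the \ac{BER} expression below yields an error floor of $(1-\rho)/2$, so Eve's floor under the proposed scheme is set by $\rho_{ae}^{\mathrm{min}}$ rather than $\rho_{ae}$ (a minimal sketch; the listed correlation values are illustrative):
\begin{verbatim}
import numpy as np

rho = np.array([0.8, 0.99, 0.999])             # Bob-Eve correlation
rho_min = rho / (1.0 + np.sqrt(1.0 - rho**2))  # after decomposition
print(rho_min)                # ~[0.500, 0.868, 0.956]
print((1.0 - rho) / 2.0)      # full-channel (LS) floor: ~[1e-1, 5e-3, 5e-4]
print((1.0 - rho_min) / 2.0)  # proposed floor: ~[2.5e-1, 6.6e-2, 2.2e-2]
\end{verbatim}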
The resulting \ac{BER} at both Bob and Eve then follows from \eqref{equ:BER_perfect} as \begin{equation} P_{\Lambda}(E)=\frac{1}{2}\left[1- \frac{\rho_{\Lambda}}{\sqrt{1+\frac{1}{ \bar{\gamma}_{\Lambda}}}}\right], \end{equation} where $\Lambda=\{ab,ae\}$. \subsection{Pilot Security: Channel NMSE}\label{Subsec:Pilot Security analysis} Using the pilot scheme adopted in Subsection \ref{Subsec:Pilot Security}, the \ac{MMSE} estimate of the channel can be obtained as \cite{MMSE_OFDM} \begin{equation} \tilde{H}_{ab}^{\mathrm{MMSE}} = F\tilde{\mathbf{h}}_{ab} = FR_{h_{ab}Y_{ab}}R_{Y_{ab}Y_{ab}}^{-1}Y_{ab}, \label{equ:H_MMSE1} \end{equation} where \begin{equation} \begin{aligned} &R_{h_{ab}Y_{ab}} = \operatorname{E}[h_{ab}Y_{ab}^H]= R_{h_{ab}h_{ab}}F^HP^H\\ &R_{Y_{ab}Y_{ab}} = \operatorname{E}[Y_{ab}Y_{ab}^H]= PFR_{h_{ab}h_{ab}}F^HP^H+\sigma^2I_N, \end{aligned} \label{equ:H_MMSE2} \end{equation} where $R_{h_{ab}h_{ab}}$ is the channel autocorrelation matrix, $F$ is the \ac{DFT} matrix, and $I_N$ is the identity matrix. Substituting \eqref{equ:H_MMSE2} into \eqref{equ:H_MMSE1}, we find \begin{equation} \tilde{H}_{ab}^{\mathrm{MMSE}} = FR_{h_{ab}h_{ab}}F^HP^H(PFR_{h_{ab}h_{ab}}F^HP^H+\sigma^2I_N)^{-1}Y_{ab}. \label{equ:H_MMSE} \end{equation} Decomposing the estimated channel $\tilde{H}_{ab}^{\mathrm{MMSE}}$ then yields \begin{equation} \tilde{H}_{ab}^{\mathrm{MMSE}}(k) = \tilde{H}_{ab}^{\mathrm{min}}(k)\left(\tilde{H}_{ab}^{\mathrm{ap}}(k)\right)^2. \label{equ:H_MMSE_dec} \end{equation} \section{Simulation Results}\label{sec:simulation} \begin{table} [t] \begin{center} \caption{Simulation parameters} \label{tab:sim-parm} \begin{tabular}{l|l} \hline Parameters & Specifications \\ \hline \hline FFT Size $N$ & 256 \\ \hline Pilot Rate & $1 / 4$ \\ \hline Guard Interval (CP) & 64 \\ \hline Signal Constellation & $ \mathrm{QPSK}$ \\ \hline Channel Model & IEEE 802.11 channel model PDP \cite{mimo_ofdm}\\ \hline Max channel taps $L$ & $11$\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[t] \centering \includegraphics[scale=0.50]{ perfect_BER_proposed} \footnotesize\caption{ BER performance of the proposed data security algorithm vs channel shortening \cite{8292335}, AN \cite{6516879} and conventional CP-OFDM \cite{mimo_ofdm}.} \label{fig:BER_Perfect_CSI} \end{figure} In this section, we demonstrate the performance of the proposed channel-decomposition-based \ac{PLS} algorithms for \ac{OFDM} systems. To do so, the \ac{BER}-based and \ac{NMSE}-based secrecy gap metrics are used to evaluate the security of the data and the pilots, respectively. The \ac{BER}-based secrecy gap quantifies the amount of information leakage to the eavesdropper, evaluates the secrecy, and also shows the effect of the proposed algorithm on the reliability with respect to the legitimate nodes \cite{8093591}. On the other hand, the \ac{NMSE}-based secrecy gap shows the difference between the quality of the estimated channel at the legitimate and illegitimate nodes. Moreover, the effect of the proposed algorithm on the \ac{PAPR} is also presented along with a comparison to the conventional algorithms. The simulated \ac{OFDM} system parameters are described in Table \ref{tab:sim-parm}. Fig. \ref{fig:BER_Perfect_CSI} shows the \ac{BER} performance versus \ac{SNR} of the proposed data security algorithm along with a comparison to conventional algorithms such as channel shortening \cite{8292335}, AN \cite{6516879} and conventional CP-OFDM \cite{mimo_ofdm}.
First, it is observed that the derived analytical results match well with the simulations. Also, note that under perfect channel estimation the proposed scheme performs exactly like CP-OFDM while Eve suffers from high error rates. This implies that the proposed data scheme does not degrade the performance of the legitimate user. Moreover, our novel design exhibits a significantly larger \ac{BER} gap than AN and channel shortening by ensuring the lowest \ac{BER} values at Bob and the highest error rate at Eve. This result emphasizes the effectiveness of the data security algorithm in maintaining high secrecy levels while preserving the performance of the legitimate user. Fig. \ref{fig:BER_Perfect_CSI} also shows the BER performance of Bob under imperfect channel estimation. It is seen that at low SNR values, Bob has almost the same performance as AN and channel shortening, while at high SNR values it performs slightly better. \begin{figure}[t] \centering \includegraphics[scale=0.50]{ Corr_999_99_8.eps} \footnotesize\caption{BER performance of the proposed algorithm (blue color) vs LS \cite{9095399} (red color) under correlated eavesdropping channels. The solid and dashed lines stand for the analytical and simulation results, respectively.} \label{fig:Corr_999_99_8} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.49]{ papr.eps} \footnotesize\caption{The CCDF of the PAPR for the proposed algorithm compared to channel shortening \cite{8292335}, AN \cite{6516879}, LS \cite{9095399}, and conventional CP-OFDM \cite{mimo_ofdm}.} \label{fig:papr} \end{figure} Fig. \ref{fig:Corr_999_99_8} illustrates the analytical and simulated \ac{BER} performance of the proposed algorithm and the \ac{LS}-based algorithm in \cite{9095399} for the case when Eve's channel is correlated with Bob's channel. The analytical results agree well with the simulations, confirming the model developed in Subsection \ref{Subsec:Data security analysis}. The \ac{BER} performance is evaluated for the correlation values of $\rho = \{0.999, 0.99, 0.8\}$. It is observed that the performance of Eve improves as the correlation values increase. However, the \ac{BER} performance difference of Eve between the proposed algorithm and LS is large for the same correlation value. For instance, when $\rho =0.999$ the error floor of the proposed scheme settles at $0.03$, whereas with LS the error falls below $0.001$. This is due to the fact that the proposed algorithm uses one component of the channel instead of the overall one, which provides resilience against eavesdroppers near the legitimate nodes as explained in Subsection \ref{Subsec:Data security analysis}. Thanks to the unit-power property of the all-pass channel explained in Subsection \ref{Subsec:Channel decomp}, the proposed precoding would not cause any power issues since it only changes the phase of each OFDM subcarrier. Fig. \ref{fig:papr} depicts the \ac{PAPR} performance of the proposed algorithm compared to CP-\ac{OFDM}, AN, LS, and channel shortening methods. It is observed that the \ac{PAPR} performance of the proposed algorithm is similar to that of conventional \ac{OFDM} while the application of other security schemes causes an increase in the \ac{PAPR}. This result makes the usage of the proposed precoding technique independent of any power constraint.
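To make the preceding observation concrete, the following Monte-Carlo sketch is our own illustration (the FFT size mirrors Table \ref{tab:sim-parm}; the random unit-modulus precoder is a stand-in for the all-pass component, not the exact precoder of the proposed scheme): it compares the empirical PAPR of plain QPSK-OFDM symbols with that of symbols precoded by a phase-only sequence, which leaves the PAPR statistics unchanged.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, trials = 256, 10_000  # FFT size as in Table (sim-parm)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

papr_plain, papr_prec = [], []
for _ in range(trials):
    # QPSK data on N subcarriers
    d = (rng.choice([-1.0, 1.0], N)
         + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
    # unit-modulus precoder: rotates the phase of each subcarrier only
    g = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))
    papr_plain.append(papr_db(np.fft.ifft(d)))
    papr_prec.append(papr_db(np.fft.ifft(g * d)))

# nearly identical means: phase-only precoding preserves the PAPR
print(np.mean(papr_plain), np.mean(papr_prec))
\end{verbatim}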
\begin{figure}[h] \centering \includegraphics[scale=0.49]{ NMSE_performance.eps} \footnotesize \caption{The channel estimation's NMSE performance vs SNR at Bob and Eve.} \label{fig:NMSE} \end{figure} Fig. \ref{fig:NMSE} shows the \ac{NMSE} versus \ac{SNR} performance of the estimated channel at the legitimate node and at Eve for the proposed algorithm and CP-OFDM. It is observed that there is a significant estimation error gap between the legitimate node and the attacker, at the cost of some degradation in the estimation quality at very low SNR values. This ensures that Eve will neither be able to estimate its channel nor to acquire the sensing information from the surrounding environment. Moreover, it is also observed that the estimation error slightly increases when estimating the minimum-phase or all-pass channels compared to the effective channel, which explains the BER performance degradation in Fig. \ref{fig:BER_Perfect_CSI}. \section{Conclusion and Future Work}\label{sec:conclusion} In this work, we proposed novel security algorithms for providing data and pilot security. Unlike conventional security schemes, which use the full channel, the proposed algorithms decompose the channel into its minimum-phase and all-pass components and exploit only the all-pass part. The latter provides enough randomness to secure the communication and, thanks to its unit-amplitude property, does not cause any power issues such as a high \ac{PAPR} at the transmitter. Particularly, the all-pass component and its conjugate are used to secure the pilots and the data, respectively. For data security, we have considered two scenarios of correlated and uncorrelated eavesdropping channels and evaluated the \ac{BER} gap. Our results reveal that using one component of the channel provides better security than using the total effective channel in both scenarios. Moreover, the results also show that the proposed algorithm provides effective security with minimal degradation in the legitimate user’s performance compared to conventional algorithms. For pilot security, we considered the NMSE gap of the estimated channels. The results show that the proposed algorithm significantly degrades the estimated channel quality at the eavesdropper. This ensures the security not only of the CSI but also of the radio environment mapping information. Additionally, the proposed algorithm provides the flexibility to secure the data, the pilots, or both, depending on the application requirements. As future work, the proposed algorithm will be investigated with multiple-input multiple-output systems. \section{Acknowledgment} The work of H. Arslan was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant 120C142 and the work of Haji M. Furqan was supported by the HISAR Lab at TUBITAK BILGEM, Gebze, Turkey. \begin{appendices} \section{Proof of \eqref{equ:rho_ae}}\label{App:rho} Let the correlation coefficient between the legitimate and eavesdropping channel gains be given as \begin{equation} \label{euq:correlation} \begin{aligned} \rho_{ae} &=\frac{\operatorname{Cov}({H}_{ab}^*,H_{ae})}{\sqrt{\operatorname{var}({H}_{ab}^*)\operatorname{var}(H_{ae})}}\\ & = \frac{\operatorname{E}[{H}_{ab}H_{ae}^*]-\operatorname{E}[{H}_{ab}]\operatorname{E}[{H}_{ae}^*]}{\sigma_a\sigma_e}. \end{aligned} \end{equation} We know that $\operatorname{E}[{H}_{ab}]=\operatorname{E}[{H}_{ae}]=0$.
Additionally, since $H_{ae}$ is given by \eqref{equ:corr_chan}, we obtain \begin{equation} \begin{aligned} \rho_{ae}& = \frac{\operatorname{E}[{H}_{ab}^*\cdot(\rho_{ae}H_{ab}+\sqrt{1-\rho_{ae}^2}H_{e})]}{\sigma_a\sigma_e}\\ & = \frac{\rho_{ae}\operatorname{E}[|{H}_{ab}|^2]+\sqrt{1-\rho_{ae}^2}\left(\operatorname{E}[{H}_{ab}^*]\operatorname{E}[H_{e}]\right)}{\sigma_a\sigma_e}\\ & = \frac{\rho_{ae}\operatorname{E}[|{H}_{ab}|^2]}{\sigma_a\sigma_e}. \end{aligned} \end{equation} Note that $\sigma_a^2= \operatorname{var}(H_{ab})= \operatorname{E}[|{H}_{ab}|^2]-|\operatorname{E}[{H}_{ab}]|^2 = \operatorname{E}[|{H}_{ab}|^2]$, and $\sigma_e^2=\operatorname{var}(\rho_{ae}H_{ab}+\sqrt{1-\rho_{ae}^2}H_{e})=\rho_{ae}^2\sigma_a^2+(1-\rho_{ae}^2)\sigma_a^2=\sigma_a^2$. Therefore, we find \begin{equation} \begin{aligned} \rho_{ae} = \frac{\rho_{ae}\sigma_a^2}{\sigma_a\sigma_a}=\rho_{ae}. \end{aligned} \end{equation} Thus, we prove that in the channel model given by \eqref{equ:corr_chan}, $\rho_{ae}$ corresponds to the channel correlation between Bob and Eve. \section{Proof of \eqref{equ:gamma}}\label{App:gamma} To identify the expression of $\Gamma$, instead of resorting to an involved analysis of probability distributions, we exploit the four constraints given in \eqref{equ:const}. From $(C1)$, we know that $\Gamma$ is proportional to $\sqrt{1-\rho_{ae}^2}$. Assuming a linear relationship, we write \begin{equation} \Gamma = \alpha+\beta \sqrt{1-\rho_{ae}^2}, \end{equation} where $\alpha$ and $\beta$ are the parameters of the linear model to be determined. Solving the equation in $(C3)$, i.e., $\Gamma=1$ for $\rho_{ae}=1$, yields the intercept $\alpha=1$. Also, by using $(C2)$ and $(C4)$ we find the following \begin{equation} 0\leq\sqrt{1-\rho_{ae}^2}\leq\beta. \end{equation} Since $0\leq\sqrt{1-\rho_{ae}^2}\leq1$, we conclude that $\beta=1$; therefore \begin{equation} \Gamma = 1+\sqrt{1-\rho_{ae}^2}. \end{equation} This model for $\Gamma$ agrees very well with the simulation results, as shown in Fig. \ref{fig:Correl}. \section{Proof of \eqref{equ:corr_min}}\label{App:corr_min} The correlation coefficient between the legitimate and eavesdropping minimum-phase channel components is given as \begin{equation} \begin{aligned} &\rho^{\mathrm{min}}_{ae} =\frac{\operatorname{Cov}({H}^{\mathrm{min}*}_{ab},H^{\mathrm{min}}_{ae})}{\sqrt{\operatorname{var}({H}^{\mathrm{min}*}_{ab})\operatorname{var}(H^{\mathrm{min}}_{ae}})}\\ & = \frac{\operatorname{E}[{H}^{\mathrm{min}*}_{ab}\cdot H_{ae}^{\mathrm{min}}]-\operatorname{E}[{H}^{\mathrm{min}*}_{ab}]\operatorname{E}[{H}^{\mathrm{min}}_{ae}]}{\sqrt{\left(\operatorname{E}[|{H}^{\mathrm{min}}_{ab}|^2]-\left(\operatorname{E}[{H}^{\mathrm{min}}_{ab}]\right)^2\right)\left(\operatorname{E}[|{H}^{\mathrm{min}}_{ae}|^2]-\left(\operatorname{E}[{H}^{\mathrm{min}}_{ae}]\right)^2\right)}}. \end{aligned} \end{equation} Note that $\operatorname{E}[{H}^{\mathrm{min}}_{ab}]=\operatorname{E}[{H}^{\mathrm{min}}_{ae}]=0$, $\operatorname{E}[|{H}^{\mathrm{min}}_{ae}|^2]=\operatorname{E}[|{H}_{ae}|^2]=\sigma_e^2$, and $\operatorname{E}[|{H}^{\mathrm{min}}_{ab}|^2]=\operatorname{E}[|{H}_{ab}|^2]=\sigma_a^2$.
Please refer to Appendix \ref{App:mean}.\hfill$\blacksquare$ Thus, we find \begin{equation} \begin{aligned} &\rho^{\mathrm{min}}_{ae} = \frac{\operatorname{E}[{H}^{\mathrm{min}*}_{ab}H_{ae}^{\mathrm{min}}]}{\sigma_a\sigma_e}\\ & = \frac{\operatorname{E}\bigg[{H}_{ab}^{\mathrm{min}*}\cdot\left(\frac{\rho_{ae}}{1+\sqrt{1-\rho_{ae}^2}}H^{\mathrm{min}}_{ab}+\sqrt{1-\left(\frac{\rho_{ae}}{\Gamma}\right)^2}H^{\mathrm{min}}_{e}\right)\bigg]}{\sigma_a\sigma_e}\\ & = \frac{\frac{\rho_{ae}}{1+\sqrt{1-\rho_{ae}^2}}\operatorname{E}[|{H}^{\mathrm{min}}_{ab}|^2]+\sqrt{1-\left(\frac{\rho_{ae}}{\Gamma}\right)^2}\left(\operatorname{E}[{H}^{\mathrm{min}*}_{ab}]\operatorname{E}[H^{\mathrm{min}}_{e}]\right)}{\sigma_a\sigma_e}\\ & = \frac{\frac{\rho_{ae}}{1+\sqrt{1-\rho_{ae}^2}}\sigma_a^2}{\sigma_a\sigma_e}. \end{aligned} \end{equation} Finally, the correlation is found as \begin{equation} \rho^{\mathrm{min}}_{ae} =\frac{\rho_{ae}}{1+\sqrt{1-\rho_{ae}^2}}<\rho_{ae}. \end{equation} \section{Proof of $\operatorname{E}[{H}^{\mathrm{min}}_{ab}]=\operatorname{E}[{H}^{\mathrm{min}}_{ae}]=0$.} \label{App:mean} As explained in Subsection \ref{Subsec:Channel decomp}, any causal FIR channel $H_{ab}$ can be decomposed as follows: $H_{ab} = H_{ab}^{\mathrm{min}}\cdot H^{\mathrm{ap}}_{ab}$, where $H^{\mathrm{ap}}_{ab}=e^{jU}$ and $U\sim \mathcal{U}(0,2\pi)$. Thus, $H^{\mathrm{ap}}_{ab}$ has the following \ac{PDF}: $f_X(x)=\frac{1}{j2\pi x}$. With reference to the main problem, the expectation of $H_{ab}$ is given as \begin{equation} \operatorname{E}[{H}_{ab}] = \operatorname{E}[{H}^{\mathrm{min}}_{ab}\cdot{H}^{\mathrm{ap}}_{ab}] = 0. \end{equation} Since the minimum-phase and all-pass components are independent, we find \begin{equation} \operatorname{E}[{H}_{ab}] = \operatorname{E}[{H}^{\mathrm{min}}_{ab}]\operatorname{E}[{H}^{\mathrm{ap}}_{ab}] = 0. \label{equ:proof_exp} \end{equation} As shown in \eqref{equ:proof_exp}, either $\operatorname{E}[{H}^{\mathrm{min}}_{ab}]=0$ or $\operatorname{E}[{H}^{\mathrm{ap}}_{ab}]=0$. However, $\operatorname{E}[{H}^{\mathrm{ap}}_{ab}]=\int_{-\infty}^{+\infty}\frac{x}{j2\pi x}dx \neq 0 $. Therefore \begin{equation} \operatorname{E}[{H}^{\mathrm{min}}_{ab}]= 0. \end{equation} \end{appendices}
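As a closing illustration of the decomposition used throughout this work, the following sketch relies on the standard real-cepstrum construction of the minimum-phase spectrum. It is our own illustration (it assumes real-valued channel taps and a response without zeros on the unit circle; it is not the implementation used in this paper) and numerically confirms the two key properties $|H^{\mathrm{ap}}(k)|=1$ and $|H^{\mathrm{min}}(k)|=|H(k)|$.
\begin{verbatim}
import numpy as np

def minphase_allpass(h, nfft=4096):
    """Cepstrum-based split of an FIR response into
    minimum-phase and all-pass parts (H = H_min * H_ap)."""
    H = np.fft.fft(h, nfft)
    c = np.fft.ifft(np.log(np.abs(H))).real  # real cepstrum of |H|
    w = np.zeros(nfft)                       # causal folding window
    w[0], w[1:nfft // 2], w[nfft // 2] = 1.0, 2.0, 1.0
    H_min = np.exp(np.fft.fft(w * c))        # minimum-phase spectrum
    return H_min, H / H_min

rng = np.random.default_rng(0)
h = rng.standard_normal(11) * 0.7 ** np.arange(11)  # decaying 11-tap channel
H_min, H_ap = minphase_allpass(h)
print(np.max(np.abs(np.abs(H_ap) - 1.0)))   # ~0: all-pass has unit modulus
print(np.max(np.abs(np.abs(H_min) - np.abs(np.fft.fft(h, 4096)))))  # ~0
\end{verbatim}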
\section{Conclusions} \label{sec:conclusions} In this paper, we proposed a new methodology to safely identify the zeros of the solutions of the SLOPE problem. In particular, we introduced a family of screening rules indexed by some parameters $\{\idximp_\idxps\}_{\idxps=1}^{n}$ where ${n}$ is the dimension of the primal variable. Each test of this family takes the form of a series of \(\pvdim\) inequalities which, when verified, imply the nullity of some coefficient of the minimizers. Interestingly, the proposed tests encompass the standard ``sphere'' screening rule for LASSO as a particular case for some $\{\idximp_\idxps\}_{\idxps=1}^{n}$, although this choice does not correspond to the most effective test in the general case. We then introduced an efficient numerical procedure to jointly evaluate all the tests in the proposed family. Our algorithm has a complexity $\mathcal{O}({n} \log {n} + T\nscreen)$ where $T\leq {n}$ is some problem-dependent constant and $\nscreen$ is the number of elements passing at least one test of the family. We finally assessed the performance of our screening strategy through numerical simulations and showed that the proposed methodology leads to significant improvements of the solving accuracy for a prescribed computational budget. \\ \section{Introduction} During the last decades, sparse linear regression has attracted much attention in the fields of statistics, machine learning and inverse problems. It consists in finding an approximation of some input vector \(\mathbf{\obsletter}\in\kR^{m}\) as a linear combination of a few columns of a matrix \(\bfA\in\kR^{{m}\times{n}}\) (often called dictionary). Unfortunately, the general form of this problem is NP-hard and convex relaxations have been proposed in the literature to circumvent this issue. The most popular instance of convex relaxation for sparse linear regression is undoubtedly the so-called ``LASSO'' problem where the coefficients of the regression are penalized by an \(\ell_1\) norm, see~\cite{Chen_siam99}. Generalized versions of LASSO have also been introduced to account for some possible structure in the pattern of the nonzero coefficients of the regression, see~\cite{Bach:2012re}. In this paper, we focus on the following generalization of LASSO: \begin{equation} \label{eq:primal problem} \min_{\scriptstyle{\bfx}\in\kR^{n}} \, \primfun({\bfx}) \triangleq \tfrac{1}{2} \kvvbar{\mathbf{\obsletter} - \bfA{\bfx}}_2^2 + \lambda\, \regslope({\bfx}), \quad \lambda>0 \end{equation} where \begin{equation} \label{eq:def SLOPE reg} \regslope({\bfx}) \triangleq \sum_{k=1}^{\pvdim} \slopeweight_k |\pv|_{\sortedentry{k}} \end{equation} with \begin{equation} \label{eq:hyp slopeweigths} \begin{array}{ccc} \slopeweight_1 > 0, & & \slopeweight_1 \geq \dots \geq \slopeweight_{n} \geq 0, \end{array} \end{equation} and \(|{\bfx}|_{\sortedentry{k}}\) is the \(k\)th largest element of \({\bfx}\) in absolute value, that is \begin{equation} \forall {\bfx}\in\kR^{n}:\ |{\bfx}|_{\sortedentry{1}}\geq |{\bfx}|_{\sortedentry{2}} \geq \ldots \geq |{\bfx}|_{\sortedentry{{n}}} .
\end{equation} Problem~\eqref{eq:primal problem} is commonly referred to as \textit{``Sorted L-One Penalized Estimation'' (SLOPE)} or \textit{``Ordered Weighted L-One Linear Regression''} in the literature and has been introduced in two parallel works \cite{Bogdan2015,Figueiredo2016Ordered}.\footnote{We will stick to the former denomination in the following.} The first instance of a problem of the form~\eqref{eq:primal problem} (for some nontrivial choice of the parameters \(\slopeweight_k\)'s) is due to Bondell and Reich in \cite{Bondell2007}. The authors considered a problem similar to \eqref{eq:primal problem}, named ``Octagonal Shrinkage and Clustering Algorithm for Regression'' (OSCAR), where the regularization function is a linear combination of an \(\ell_1\) norm and a sum of pairwise \(\ell_\infty\) norms of the elements of \({\bfx}\), that is \begin{equation} \label{eq:def regularzation oscar} \regoscar({\bfx}) = \beta_1 \|{\bfx}\|_1 + \beta_2 \sum_{j' > j} \max(|\pv_{\entry{j'}}|, |\pv_{\entry{j}}|), \end{equation} for some \(\beta_1\in\kR_+^*\), \(\beta_2\in\kR_+\). It is not difficult to see that \reffunctext{\regoscar} can be expressed as a particular case of \reffunctext{\regslope} with the following choice \(\slopeweight^{\texttt{OSCAR}}_k = \beta_1 + \beta_2 ({n}-k)\). We note that some authors have recently considered ``group'' versions of the SLOPE problem where the ordered \(\ell_2\) norm of subsets of \({\bfx}\) is penalized by a decreasing sequence of parameters \(\slopeweight_k\), see \textit{e.g.},\xspace \cite{Grossmann:2015Iden,Gossmann2018Sparse,Brzyski2019dq}. SLOPE enjoys several desirable properties which have attracted many researchers during the last decade. First, it was shown in several works that, for some proper choices of parameters \(\slopeweight_k\)'s, SLOPE promotes \textit{sparse} solutions with some form of \textit{``clustering''}\footnote{More specifically, groups of nonzero coefficients tend to take on the same value.} of the nonzero coefficients, see \textit{e.g.},\xspace \cite{Bondell2007,Figueiredo2016Ordered,Kremer2019fi,schneider2020geometry}. This feature has been exploited in many application domains: portfolio optimization \cite{Xing2014lu,Kremer2020hi}, genetics \cite{Grossmann:2015Iden}, magnetic-resonance imaging \cite{elgueddari:hal-02292372}, subspace clustering \cite{Oswal2018Scalable}, deep neural networks \cite{Zhang2018learning}, etc. Moreover, it has been pointed out in a series of works that SLOPE has very good statistical properties: it leads to an improvement of the false discovery rate (as compared to LASSO) for moderately-correlated dictionaries \cite{Bogdan2013statistical,Gossmann2018Sparse} and is minimax optimal in some asymptotic regimes, see \cite{Su2016jh,Lecue2017reg}. Another desirable feature of SLOPE is its convexity. In particular, it was shown in \cite[Proposition 1.1]{Bogdan2013statistical} and \cite[Lemma 2]{Zeng:2014AtomicNorm} that \reffunctext{\regslope} is a norm as soon as \eqref{eq:hyp slopeweigths} holds. As a consequence, several numerical procedures have been proposed in the literature to find the global minimizer(s) of problem \eqref{eq:primal problem}. In \cite{Bogdan2013statistical} and \cite{Zhong2011icml}, the authors considered accelerated proximal gradient implementations for SLOPE and OSCAR, respectively. In \cite{Kremer2020hi}, the authors tackled problem \eqref{eq:primal problem} via an alternating-direction method of multipliers \cite{Boyd2017}.
An approach based on an augmented Lagrangian method was considered in \cite{Luo2019kr}. In \cite{Zeng:2014AtomicNorm}, the authors expressed \reffunctext{\regslope} as an atomic norm and particularized a Frank-Wolfe minimization procedure \cite{Frank_1956} to problem \eqref{eq:primal problem}. An efficient algorithm to compute the Euclidean projection onto the unit ball of the SLOPE norm was provided in \cite{Davis2015onlogn}. Finally, in \cite{Bu2019Algo} a heuristic ``message-passing'' method was proposed. In this paper, we introduce a new ``safe screening'' procedure to accelerate the resolution of SLOPE. The concept of ``safe screening'' is well known in the LASSO literature: it consists in performing simple tests to identify the zero elements of the minimizers; this knowledge can then be exploited to reduce the problem dimensionality by discarding the columns of the dictionary weighted by the zero coefficients. Safe screening for LASSO was first introduced by El Ghaoui \textit{et al.}\xspace in the seminal paper \cite{Ghaoui2010} and extended to \textit{group-separable} sparsity-inducing norms in \cite{Ndiaye2017}. Safe screening has rapidly been recognized as a simple yet effective procedure to accelerate the resolution of LASSO, see \textit{e.g.},\xspace \cite{fercoq2015, Dai2012Ellipsoid, Xiang:2017ty, icml2014c2_liuc14,wang2015lasso, Herzet16Screening,Herzet:2019fj,guyard2021screen,Le2022HolderDomeTechreport}. The term ``safe'' refers to the fact that all the elements identified by a safe screening procedure are theoretically guaranteed to correspond to zeros of the minimizers. In contrast, \textit{unsafe} versions of screening for LASSO (often called ``strong screening rules'') also exist, see \cite{RSSB:RSSB1004}. More recently, screening methodologies have been extended to detect saturated components in different convex optimization problems, see \cite{Elvira:2020rv,Elvira2020:Squeezing}. In this paper, we derive \textit{safe} screening rules for SLOPE and emphasize that their implementation enables significant improvements of the solving precision when addressing SLOPE with a prescribed computational budget. We note that the SLOPE norm is not group-separable, so the methodology proposed in \cite{Ndiaye2017} does not trivially apply here. Prior to this work, we identified two contributions addressing screening for SLOPE. In~\cite{Larsson2020strong}, the authors proposed an extension of the \textit{strong} screening rules derived in~\cite{RSSB:RSSB1004} to the SLOPE problem. In~\cite{Bao:2020dq}, the authors suggested a simple test to identify some zeros of the SLOPE solutions. Although the derivations made by these authors have been shown to contain several technical flaws~\cite{Elvira2021techreport}, their test can be cast as a particular case of our result in \Cref{th: safe screening for SLOPE} (and is therefore, perhaps unexpectedly, safe). The paper is organized as follows. We introduce the notational conventions used throughout the paper in \Cref{sec:notations} and recall the main concepts of safe screening for LASSO in \Cref{sec:Screening: main concepts}. \Cref{sec:screening-rule} contains our proposed safe screening rules for SLOPE. \Cref{sec:simus} illustrates the effectiveness of the proposed approach through numerical simulations.
All technical details and mathematical derivations are postponed to \Cref{app:Miscellaneous results,sec:app:main-proofs}.\\ \section{Miscellaneous results}\label{app:Miscellaneous results} \Cref{subsec:app:sub-diff} recalls some useful results from convex analysis applied to the SLOPE problem~\eqref{eq:primal problem}. \Cref{subsec:app:proof cns solutions slope is zero} provides a proof of~\eqref{eq:cns solution slope is zero}. In all the statements below, \(\partial\regslope(\pv)\) denotes the subdifferential of \reffunctext{\regslope} evaluated at \(\pv\).\\ \subsection{Some results of convex analysis} \label{subsec:app:sub-diff} We recall below several results of convex analysis that will be used in our subsequent derivations. The first lemma provides a necessary and sufficient condition for \(\pvopt\in\kR^\pvdim\) to be a minimizer of the SLOPE problem~\eqref{eq:primal problem}: \noindent \begin{lemma} \label{lemma:fermat s rule} \(\text{\(\pvopt\) is a minimizer of~\eqref{eq:primal problem}} \ \Longleftrightarrow\ \kinv{\lambda}\ktranspose{\bfA}(\mathbf{\obsletter} - \bfA\pvopt) \in \partial\regslope(\pvopt)\).\\ \end{lemma} \noindent \Cref{lemma:fermat s rule} follows from a direct application of Fermat's rule~\cite[Proposition~16.4]{Bauschke2017} to problem~\eqref{eq:primal problem}. We note that under condition~\eqref{eq:hyp slopeweigths}, \reffunctext{\regslope} defines a norm on \(\kR^{n}\), see \textit{e.g.},\xspace \cite[Proposition~1.1]{Bogdan2013statistical} or~\cite[Lemma~2]{Zeng:2014AtomicNorm}. The subdifferential \(\partial \regslope({\bfx})\) is therefore well defined for all \({\bfx}\in\kR^{n}\) and reads as \begin{equation} \label{eq:app:proof:subdifferential norm} \partial\regslope({\bfx}) = \kset{ \subdiffvec\in\kR^{n} }{ \ktranspose{\subdiffvec}{\bfx} = \regslope({\bfx}) \text{ and } \dualregslope(\subdiffvec) \leq 1 }, \end{equation} where \begin{equation} \dualregslope(\subdiffvec) \triangleq \sup_{{\bfx}\in\kR^\pvdim} \ktranspose{\subdiffvec}\pv \ \text{ s.t.}\ \regslope({\bfx})\leq 1 \end{equation} is the dual norm of \reffunctext{\regslope}, see \textit{e.g.},\xspace \cite[Eq.~(1.4)]{bach2011book}. The next lemma states a technical result which will be useful in the proof of \Cref{th:safe-screening} in \Cref{sec:app:main-proofs}:\\ \begin{lemma} \label{lemma:sudiff property} If \(\subdiffvec\in\partial\regslope(\pv)\), then \(\ktranspose{\pv}(\subdiffvec-\subdiffvec') \geq 0 \, \forall\subdiffvec'\in\kR^\pvdim \text{ s.t. } \dualregslope(\subdiffvec')\leq 1.\) \end{lemma} \noindent \begin{proof} Let \(\subdiffvec\in\partial\regslope(\pv)\). One has \begin{align} \subdiffvec\in\partial\regslope(\pv) \;\Longleftrightarrow\; & \pv\in\partial \regslope^*(\subdiffvec) \nonumber \\ \;\Longleftrightarrow\; & \forall \subdiffvec'\in\kR^\pvdim, \; \regslope^*(\subdiffvec') \geq \regslope^*(\subdiffvec) + \kangle{\pv, \subdiffvec' - \subdiffvec} \end{align} where \reffunctext{\regslope^*} refers to the Fenchel conjugate of \reffunctext{\regslope}. The first equivalence is a consequence of \cite[Theorem~16.29]{Bauschke2017} and the second of the definition of the subdifferential set. \Cref{lemma:sudiff property} follows by noticing that \(\regslope^*(\subdiffvec')=0\) \(\forall\subdiffvec'\in\kR^\pvdim\) such that \(\dualregslope(\subdiffvec')\leq1\) by property of \reffunctext{\regslope^*} \cite[Item~(v) of Example 13.3]{Bauschke2017}.
\end{proof} \vspace{2em} In the last lemma of this section, we provide a closed-form expression of the subdifferential and the dual norm of \reffunctext{\regslope}:\footnote{We note that an expression of the subdifferential of \reffunctext{\regslope} has already been derived in~\cite[Fact A.2 in supplementary material]{Bu2019Algo}. However, the expression of the subdifferential proposed in \Cref{lemma:sub-diff} has a more compact form and is better suited to our subsequent derivations.} \begin{lemma} \label{lemma:sub-diff} The dual norm and the subdifferential of \(\regslope(\pv)\) respectively read: \begin{equation*} \begin{split} \nonumber \dualregslope(\subdiffvec) &= \max_{\idxps\in\intervint{1}{\pvdim}} \ \frac{1}{\sum_{k=1}^{\idxps} \slopeweight_k} \sum_{k=1}^{\idxps} |\subdiffvec|_{\sortedentry{k}} ,\\ \nonumber \partial \regslope({\bfx}) &= \kset{ \subdiffvec\in\kR^{n} }{ \ktranspose{\subdiffvec}{\bfx} = \regslope({\bfx}) \text{ and } \forall\idxps\in\intervint{1}{{n}}:\ \sum_{k=1}^{\idxps} |\subdiffvec|_{\sortedentry{k}} \leq \sum_{k=1}^{\idxps} \slopeweight_k }. \end{split} \end{equation*} \end{lemma} \begin{proof} The expression of the dual norm is a direct consequence of~\cite[Lemma~4]{Zeng:2014AtomicNorm}. More precisely, the authors showed that \begin{equation} \label{eq:app:proof:dual norm atomic norm} \dualregslope(\subdiffvec) = \max_{\bfv\in\bigcup_{\idxps=1}^{n}\mathcal{V}_\idxps} \; \ktranspose{\subdiffvec}\bfv \end{equation} where \(\mathcal{V}_\idxps \triangleq \kset{ \tfrac{1}{ \sum_{k=1}^{\idxps} \slopeweight_{k} } \bfs }{ \bfs\in\{0,-1,+1\}^{n}, \card[\{j : \bfs_{\entry{j}} \neq 0\}] = \idxps }\) for all \(\idxps\in\intervint{1}{\pvdim}\). The expression of \reffunctext{\dualregslope} given in \Cref{lemma:sub-diff} is a compact rewriting of \eqref{eq:app:proof:dual norm atomic norm} that can be obtained as follows. First, observe that for all \(\idxps\in\intervint{1}{\pvdim}\), \begin{equation} \label{eq:upper bound proof sub diff} \max_{\bfv\in\mathcal{V}_\idxps} \; \ktranspose{\subdiffvec}\bfv \leq \frac{1}{ \sum_{k=1}^{\idxps} \slopeweight_{k} } \sum_{k=1}^\idxps \kvbar{\subdiffvec}_{\sortedentry{k}} . \end{equation} Second, for \(\idxps\in\intervint{1}{\pvdim}\), let \(\mathcal{J}_\idxps\subset\intervint{1}{\pvdim}\) be a set of \(\idxps\) distinct indices such that \(|\subdiffvec_{\entry{j}}| \geq \kvbar{\subdiffvec}_{\sortedentry{\idxps}}\) for all \(j\in\mathcal{J}_\idxps\). Then, the upper bound in~\eqref{eq:upper bound proof sub diff} is attained by evaluating the left-hand side at \(\bfv\in\mathcal{V}_\idxps\) defined as \begin{equation} \forall j\in\intervint{1}{\pvdim}:\quad \bfv_{\entry{j}}= \begin{cases} \tfrac{1}{ \sum_{k=1}^{\idxps} \slopeweight_{k} }\,\sign[\subdiffvec_{\entry{j}}] & \text{ if } j\in\mathcal{J}_\idxps \\ 0 & \text{otherwise.} \end{cases} \end{equation} The expression of the subdifferential follows from \eqref{eq:app:proof:subdifferential norm} by plugging the expression of the dual norm into the inequality ``\(\dualregslope(\subdiffvec)\leq1\)''.
\end{proof} \vspace{0.2cm} \subsection{Proof of~\texorpdfstring{\eqref{eq:cns solution slope is zero}}{(\ref*{eq:cns solution slope is zero})}} \label{subsec:app:proof cns solutions slope is zero} We first observe that \begin{equation} \label{eq:applying-fermat-rule} \text{\({\mathbf 0}_{n}\) is not a minimizer of~\eqref{eq:primal problem}} \Longleftrightarrow \kinv{\lambda} \ktranspose{\bfA} \mathbf{\obsletter} \notin \partial \regslope({\mathbf 0}_{n}) , \end{equation} as a direct consequence of \Cref{lemma:fermat s rule}. Particularizing the expression of \(\partial\regslope({\bfx})\) in \Cref{lemma:sub-diff} to \({\bfx}={\bf0}_{n}\), the right-hand side of~\eqref{eq:applying-fermat-rule} can equivalently be rewritten as \begin{equation} \label{eq:proof-0issol-buf} \exists\idxps\in\intervint{1}{{n}}:\; \kinv{\lambda} \sum_{k=1}^{\idxps} \big\vert{\ktranspose{\bfA}\mathbf{\obsletter}}\big\vert_{\sortedentry{k}} > \sum_{k=1}^{\idxps}\slopeweight_k . \end{equation} Since \(\slopeweight_1>0\) and the sequence \(\{\slopeweight_k\}_{k=1}^{{n}}\) is nonnegative by hypothesis~\eqref{eq:hyp slopeweigths},~\eqref{eq:proof-0issol-buf} can also be rewritten as \begin{equation} \label{eq:proof-0issol-buf 2} \exists\idxps\in\intervint{1}{{n}}:\; \lambda < \frac{ \sum_{k=1}^{\idxps} \big\vert{\ktranspose{\bfA}\mathbf{\obsletter}}\big\vert_{\sortedentry{k}} }{ \sum_{k=1}^{\idxps}\slopeweight_k } . \end{equation} The statement in~\eqref{eq:cns solution slope is zero} then follows by noticing that the right-hand side of \eqref{eq:lambda-such-that-0-is-sol} is a compact reformulation of~\eqref{eq:proof-0issol-buf 2}.\\ \section{Proofs related to screening tests} \label{sec:app:main-proofs} \subsection{Proof of \texorpdfstring{\Cref{th:safe-screening}}{Theorem~\ref*{th:safe-screening}}} \label{subsec:app:proof ideal screening Slope} In this section, we provide the technical details leading to \eqref{eq: ideal safe screening test}. Our derivation leverages Fermat's rule and the expression of the subdifferential derived in \Cref{lemma:sub-diff}. We prove~\eqref{eq: ideal safe screening test} by contraposition. More precisely, we show that if \(\pvopt_{\entry{\ell}}\neq0\) for some \(\ell\in\intervint{1}{{n}}\), then \begin{equation} \label{proof:eq:target statement} \exists \idxps_0 \in\intervint{1}{\pvdim},\; \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=1}^{\idxps_0-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} = \lambda \sum_{k=1}^{\idxps_0} \slopeweight_k . \end{equation} Using \Cref{lemma:fermat s rule} and the following connection between primal-dual solutions (see~\cite[Section~2.5]{Bogdan2013statistical}) \begin{equation} \label{eq:proof:optimality condition dual and primal sol} \dvopt = \mathbf{\obsletter} - \bfA \pvopt , \end{equation} we have that \(\pvopt\) is a minimizer of~\eqref{eq:primal problem} if and only if \begin{equation} \label{eq:fermatsrule} \subdiffvec^\star \triangleq \kinv{\lambda}\ktranspose{\bfA}\dvopt \in \partial\regslope(\pvopt) . \end{equation} In the rest of the proof, we will use \Cref{lemma:sudiff property} with \({\bfx}=\pvopt\), \(\subdiffvec=\subdiffvec^\star\) and different instances of vector \(\subdiffvec'\) to prove our statement. First, let us define \(\subdiffvec'\in\kR^{n}\) as \begin{equation*} \begin{split} \subdiffvec'_{\entry{j}} \;=\;& \subdiffvec^\star_{\entry{j}} \quad \forall j\in\intervint{1}{\pvdim}\setminus\{\ell\}, \\ % \subdiffvec'_{\entry{\ell}} \;=\;& 0.
\end{split} \end{equation*} It is easy to verify that \(\dualregslope(\subdiffvec')\leq 1\). Applying \Cref{lemma:sudiff property} then leads to \begin{equation} \subdiffvec^\star_{\entry{\ell}}\pvopt_{\entry{\ell}}\geq 0. \end{equation} Since \(\pvopt_{\entry{\ell}}\) is assumed to be nonzero, we then have \begin{equation} \label{eq:complementary slackness} \mathrm{sign}\big(\subdiffvec^\star_{\entry{\ell}}\big)\,\mathrm{sign}\big(\pvopt_{\entry{\ell}}\big)\geq 0, \end{equation} where the equality holds if and only if \(\subdiffvec^\star_{\entry{\ell}}=0\). Second, let us consider the following choice for \(\subdiffvec'\in\kR^{n}\): \begin{equation}\label{eq:def g' 2} \begin{split} \subdiffvec'_{\entry{j}} \;=\;& \subdiffvec^\star_{\entry{j}} \quad \forall j\in\intervint{1}{\pvdim}\setminus\{\ell\}, \\ % \subdiffvec'_{\entry{\ell}} \;=\;& \subdiffvec^{\star}_{\entry{\ell}}+s\delta, \end{split} \end{equation} where \begin{equation} \label{eq:def s} s \triangleq \begin{cases} \mathrm{sign}\big(\subdiffvec^\star_{\entry{\ell}}\big) & \text{ if } \subdiffvec^\star_{\entry{\ell}} \neq 0 \\ % \mathrm{sign}\big(\pv^\star_{\entry{\ell}}\big) & \text{ otherwise,} \end{cases} \end{equation} and \(\delta\) is any nonnegative scalar such that \begin{equation} \label{eq:g' constraint 1} \dualregslope(\subdiffvec') \leq 1 . % \end{equation} On the one hand, we note that \eqref{eq:g' constraint 1} is verified for $\delta=0$. On the other hand, it can be seen that \eqref{eq:g' constraint 1} is violated as soon as $\delta>0$ by using the following arguments. First, applying \Cref{lemma:sudiff property} with $\subdiffvec'$ defined as in \eqref{eq:def g' 2} leads to \begin{equation} \label{eq:proof:second application lemma 2} - s \pvopt_{\entry{\ell}} \delta \geq 0. \end{equation} Second, using \eqref{eq:complementary slackness} and the definition of \(s\) in \eqref{eq:def s}, we must have \(s\pvopt_{\entry{\ell}}>0\). Hence, satisfying inequality \eqref{eq:g' constraint 1} necessarily implies that \(\delta= 0\). The contraposition of this result implies: \begin{equation}\label{eq: strict inequality exists} \forall \delta> 0, \exists\idxps_0\in\intervint{1}{\pvdim}:\; \sum_{k=1}^{\idxps_0}|\subdiffvec^\star|_{\sortedentry{k}} + \delta > \sum_{k=1}^{\idxps_0} \slopeweight_k \end{equation} or equivalently \begin{equation} \exists\idxps_0\in\intervint{1}{\pvdim}:\; \sum_{k=1}^{\idxps_0}|\subdiffvec^\star|_{\sortedentry{k}} = \sum_{k=1}^{\idxps_0} \slopeweight_k . \end{equation} Let us next emphasize that the range of values for $\idxps_0$ can be restricted by choosing some suitable value for $\delta$. In particular, define \(\idxps_0'\in\intervint{1}{\pvdim}\) as \begin{align} \label{eq:def q0'} \idxps_0' \triangleq \min \kset{ \idxps\in\intervint{1}{\pvdim} }{ |\subdiffvec^\star_{\entry{\ell}}| = |\subdiffvec^\star|_{\sortedentry{\idxps}} } \end{align} and let \begin{equation} \label{eq:condition on delta 2} 0 < \delta < |\subdiffvec^\star|_{\sortedentry{\idxps_0'-1}}-|\subdiffvec^\star|_{\sortedentry{\idxps_0'}} \end{equation} with the convention \(|\subdiffvec^\star|_{\sortedentry{0}}=+\infty\). Considering $\subdiffvec'$ as defined in \eqref{eq:def g' 2} with $\delta$ satisfying~\eqref{eq:condition on delta 2}, we have that the first $\idxps_0'-1$ largest absolute elements of $\subdiffvec'$ and $\subdiffvec^\star$ are the same.
Since $\dualregslope(\subdiffvec^\star)\leq1$, the inequality on the right-hand side of \eqref{eq: strict inequality exists} can therefore not be verified for $\idxps_0\in\intervint{1}{\idxps_0'-1}$. Hence, considering $\delta$ as in \eqref{eq:condition on delta 2}, we have \begin{equation} \exists\idxps_0\in\intervint{\idxps_0'}{\pvdim}:\; \sum_{k=1}^{\idxps_0}|\subdiffvec^\star|_{\sortedentry{k}} = \sum_{k=1}^{\idxps_0} \slopeweight_k . \end{equation} We finally obtain our original assertion \eqref{proof:eq:target statement} by using the definition of \(\subdiffvec^\star\) in \eqref{eq:fermatsrule} and the fact that \begin{equation} \sum_{k=1}^{\idxps_0}\big\vert\ktranspose{\bfA}\dvopt\big\vert_{\sortedentry{k}} = \big\vert\ktranspose{\bfa}_{\column{\ell}} \dvopt\big\vert + \sum_{k=1}^{\idxps_0-1}\big\vert\ktranspose{\bfA}_{\backslash\ell}\dvopt\big\vert_{\sortedentry{k}} \end{equation} since $|\ktranspose{\bfa}_{\column{\ell}} \dvopt|=|\ktranspose{\bfA} \dvopt|_{\sortedentry{\idxps_0'}}$ by definition of $\idxps_0'$ in \eqref{eq:def q0'} and $|\ktranspose{\bfA} \dvopt|_{\sortedentry{\idxps_0'}}\geq |\ktranspose{\bfA}\dvopt|_{\sortedentry{\idxps_0}}$ since $\idxps_0\geq\idxps_0'$.\\ \subsection[Proof of Lemma~\ref{lemma:upper bound}]{Proof of \Cref{lemma:upper bound}}\label{sec:Proof of lemma:upper bound} We first state and prove the following technical lemma: \begin{lemma} \label{lemma:ordered_inequality} Let \(\bfg\in\kR^{n}\) and \(\bfh\in\kR^{n}\) be such that \(\bfg_{\entry{j}}\leq \bfh_{\entry{j}}\)\(\,\forall j\in\intervint{1}{\pvdim}\). Then \begin{equation} \mbox{\(\bfg_{\sortedentry{k}}\leq \bfh_{\sortedentry{k}}\) \(\,\forall k\in\intervint{1}{\pvdim} \).} \end{equation} \end{lemma} \begin{proof} Let \(k\in\intervint{1}{\pvdim}\). We have by definition \begin{equation*} \begin{split} \bfh_{\sortedentry{k}} &= \max_{ \substack{ \calJ \subseteq \intervint{1}{{n}} \\ \mathrm{card}(\calJ)=k} } \min_{j\in\calJ} \bfh_{\entry{j}}, \nonumber \\ % &\geq \max_{ \substack{ \calJ \subseteq \intervint{1}{{n}} \\ \mathrm{card}(\calJ)=k} } \min_{j\in\calJ} \bfg_{\entry{j}},\nonumber \\ % &= \bfg_{\sortedentry{k}} , \end{split} \end{equation*} where the inequality follows from our assumption \(\bfg_{\entry{j}}\leq \bfh_{\entry{j}}\) \(\forall j\in\intervint{1}{\pvdim}\). \end{proof} \vspace{1em} We are now ready to prove \Cref{lemma:upper bound}. For any \(\idximp\in\intervint{1}{\idxps}\), we can write: \begin{equation} \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=1}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} = \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=1}^{\idximp-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} + \sum_{k=\idximp}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} . \end{equation} First, since \(\dvopt\) is dual feasible, we have: \begin{equation} \label{eq:proof:upper bound 1} \sum_{k=1}^{\idximp-1} \big\vert\ktranspose{\bfA}_{\setminus\ell} \dvopt\big\vert_{\sortedentry{k}} \leq \lambda \sum_{k=1}^{\idximp-1} \slopeweight_{k} .
\end{equation} We next show that if \(\dvopt\in\calS(\mathbf{c},R)\), then \begin{equation} \label{eq:proof:upper bound 2} \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=\idximp}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} \leq \big\vert{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }\big\vert + \sum_{k=\idximp}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} }\big\vert_{\sortedentry{k}} + (\idxps-\idximp+1) R . \end{equation} We then obtain the result stated in the lemma by combining \eqref{eq:proof:upper bound 1}-\eqref{eq:proof:upper bound 2}. Inequality \eqref{eq:proof:upper bound 2} can be shown as follows. First, \begin{equation} \forall j\in\intervint{1}{{n}}: \max_{\dv\in\calS(\mathbf{c},R)}|\ktranspose{\bfa}_{\column{j}}\dv| = |\ktranspose{\bfa}_{\column{j}}\mathbf{c}|+ R. \end{equation} Hence, \begin{equation} \kparen{\max_{\dv\in\calS(\mathbf{c},R)}\big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert}_{\sortedentry{k}} = \big\vert\ktranspose{\bfA}_{\backslash \ell}\mathbf{c}\big\vert_{\sortedentry{k}}+ R \end{equation} where the maximum is taken component-wise in the left-hand side of the equation. Applying \Cref{lemma:ordered_inequality} with \(\bfg = |\ktranspose{\bfA}_{\backslash \ell}\dv|\) and \(\bfh = \max_{\tilde{\dv}\in\calS(\mathbf{c},R)}|\ktranspose{\bfA}_{\backslash \ell}\tilde{\dv}|\), we have \begin{equation} \label{eq:inequality max applying lemma} \forall \dv\in\calS(\mathbf{c},R):\ \big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert_{\sortedentry{k}} \leq \kparen{\max_{\tilde{\dv}\in\calS(\mathbf{c},R)}\big\vert\ktranspose{\bfA}_{\backslash \ell}\tilde{\dv}\big\vert}_{\sortedentry{k}} \end{equation} and therefore \begin{equation} \max_{\dv\in\calS(\mathbf{c},R)}\kparen{\big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert_{\sortedentry{k}}} \leq \kparen{\max_{\dv\in\calS(\mathbf{c},R)}\big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert}_{\sortedentry{k}}. \end{equation} Combining these results leads to \begin{equation*} \begin{split} \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=\idximp}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} &\leq \max_{\dv\in\calS(\mathbf{c},R)} \kparen{ \big\vert\ktranspose{\bfa}_{\column{\ell}}\dv\big\vert + \sum_{k=\idximp}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert_{\sortedentry{k}} }\\ &\leq \max_{\dv\in\calS(\mathbf{c},R)} \big\vert\ktranspose{\bfa}_{\column{\ell}}\dv\big\vert + \sum_{k=\idximp}^{\idxps-1} \max_{\dv\in\calS(\mathbf{c},R)} \kparen{\big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert_{\sortedentry{k}} }\\ &\leq \max_{\dv\in\calS(\mathbf{c},R)} \big\vert\ktranspose{\bfa}_{\column{\ell}}\dv\big\vert + \sum_{k=\idximp}^{\idxps-1} \kparen{\max_{\dv\in\calS(\mathbf{c},R)}\big\vert\ktranspose{\bfA}_{\backslash \ell}\dv\big\vert}_{\sortedentry{k} }\\ &\leq \big\vert{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }\big\vert + \sum_{k=\idximp}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} }\big\vert_{\sortedentry{k}} + (\idxps-\idximp+1) R. 
\end{split} \end{equation*} \subsection[Proof of Lemma~\ref{lemma:optimality p=0 for LASSO}]{Proof of \Cref{lemma:optimality p=0 for LASSO}} \label{sec:Proof of lemma:optimality p=0 for LASSO} We want to show that if test \eqref{eq: general safe screening for SLOPE} is passed for some \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\), then test \eqref{eq: safe screening for SLOPE p=0} is also passed when \(\slopeweight_k=1\) \(\forall k\in\intervint{1}{{n}}\). Assume \eqref{eq: general safe screening for SLOPE} holds for some \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\), that is \(\forall\idxps\in\intervint{1}{{n}}\), \(\exists \idximp_\idxps\in\intervint{1}{\idxps}\) such that \begin{equation} \label{eq:buf:inquality proof2 dirty} \big\vert{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }\big\vert + \sum_{k=\idximp_\idxps}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} }\big\vert_{\sortedentry{k}} < \kappa_{\idxps,\idximp_\idxps} , \end{equation} where \(\kappa_{\idxps,\idximp} \triangleq \lambda \kparen{\sum_{k=\idximp}^{\idxps} \slopeweight_k} - (\idxps-\idximp+1)R \). Considering the case ``\(\idxps=1\)'', we have $\idximp_1=1$, $\kappa_{1,1}=\lambda \slopeweight_1-R$ and \eqref{eq:buf:inquality proof2 dirty} thus particularizes to \begin{equation} \big\vert{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }\big\vert <\lambda \slopeweight_1 - R . \end{equation} Since \(\slopeweight_k=1\) \(\forall k\in\intervint{1}{{n}}\) by hypothesis, the latter inequality coincides with \eqref{eq: safe screening for SLOPE p=0} and the result is proved.\\ \subsection[Proof of Lemma~\ref{lemma:nesting test}]{Proof of \Cref{lemma:nesting test}} \label{subsec:proof lemma nesting test} We prove the result by showing that \(\forall\idxps\in\intervint{1}{{n}}\) the sequence \(\{B_{\idxps,\ell}\}_{\ell\in\intervint{1}{{n}}}\) is non-increasing. To this end, we first rewrite \(B_{\idxps,\ell}\) in a slightly different form that is easier to analyze. Let \begin{equation} \begin{array}{rll} C_{\idxps,\idximp} \triangleq & (\idxps-\idximp+1) R + \lambda\kparen{\sum_{k=1}^{\idximp-1} \slopeweight_{k}} & \forall\idxps\in\intervint{1}{{n}}, \forall\idximp\in\intervint{1}{\idxps} \\ \sigma_\idxps \triangleq & \sum_{k=1}^\idxps |\ktranspose{\bfa}_{\column{k}} \mathbf{c}| & \forall\idxps\in\intervint{0}{{n}} \end{array} \end{equation} with the convention \(\sigma_0\triangleq0\). Using these notations and hypothesis~\eqref{eq:WH}, \(B_{\idxps,\ell}\) can be rewritten as \begin{align} B_{\idxps,\ell} - C_{\idxps,\idximp} \;=\; & \big\vert\ktranspose{\bfa}_{\column{\ell}} \mathbf{c}\big\vert + \sum_{k=1}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\setminus \ell}\mathbf{c} }\big\vert_{\entry{k}} - \sum_{k=1}^{\idximp-1} \big\vert{ \ktranspose{\bfA}_{\setminus \ell}\mathbf{c} }\big\vert_{\entry{k}} \\ \;=\; & \begin{cases} |\ktranspose{\bfa}_{\column{\ell}} \mathbf{c}| + \sigma_{\idxps-1} - \sigma_{\idximp-1} & \text{ if } \idxps < \ell \\ \sigma_{\idxps} - \sigma_{\idximp-1} & \text{ if } \idximp - 1 < \ell \leq \idxps \\ |\ktranspose{\bfa}_{\column{\ell}} \mathbf{c}| + \sigma_{\idxps} - \sigma_{\idximp} & \text{ if } \ell \leq \idximp - 1. \end{cases} \label{eq:sqlp} \end{align} We next show that \(\forall \idxps\in\intervint{1}{{n}}\) the sequence \(\{B_{\idxps,\ell}\}_{\ell\in\intervint{1}{{n}}}\) is non-increasing.
We first notice that \(C_{\idxps,\idximp}\) does not depend on \(\ell\) and we can therefore focus on \eqref{eq:sqlp} to prove our claim. Using the fact that \(|\ktranspose{\bfa}_{\column{\ell}} \mathbf{c}| \geq |\ktranspose{\bfa}_{\column{\ell+1}} \mathbf{c}|\) by hypothesis, we immediately obtain that \(B_{\idxps,\ell} \geq B_{\idxps,\ell+1}\) whenever \(\ell\notin\{\idximp-1,\idxps\}\). We conclude the proof by treating the cases ``\(\ell=\idximp-1\)'' and ``\(\ell=\idxps\)'' separately. If \(\ell=\idxps\), we have from~\eqref{eq:sqlp}: \begin{equation} B_{\idxps,\ell+1} - B_{\idxps,\ell} = |\ktranspose{\bfa}_{\column{\idxps+1}} \mathbf{c}| + \sigma_{\idxps-1} - \sigma_{\idxps} = |\ktranspose{\bfa}_{\column{\idxps+1}} \mathbf{c}| - |\ktranspose{\bfa}_{\column{\idxps}} \mathbf{c}| \leq 0 , \end{equation} where the last inequality holds true by virtue of~\eqref{eq:WH}. If \(\ell=\idximp-1\) (and provided that \(\idximp\geq2\)), the same rationale leads to \begin{equation} B_{\idxps,\ell+1} - B_{\idxps,\ell} = |\ktranspose{\bfa}_{\column{\idximp}} \mathbf{c}| - |\ktranspose{\bfa}_{\column{\idximp-1}} \mathbf{c}| \leq 0 . \end{equation} \vspace{0.05cm} \subsection[Proof of Lemma~\ref{lemma: subset inequality}]{Proof of \Cref{lemma: subset inequality}}\label{proof:lemma: subset inequality} The necessity of \eqref{eq: threshold-based screening test} can be shown as follows. Assume \(|\ktranspose{\bfa}_{\column{{n}}}\mathbf{c}|\geq\tau\) for some \(\tau\in\mathcal{T}\) and let \(\idxps\in\intervint{1}{{n}}\) be such that \(\tau=\tau_{\idxps,\idximp^\star(\idxps)}\). From \eqref{eq: meaning r*} we then have \begin{equation} \forall\idximp\in\intervint{1}{\idxps}:\ |{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }|\geq\tau_{\idxps,\idximp} \end{equation} and test~\eqref{eq: screening test compressed form} therefore fails. To prove the sufficiency of \eqref{eq: threshold-based screening test}, let us first notice that the definition of \(\tau_{\idxps,\idximp}\) given in~\eqref{eq:def tau 2} can be naturally extended to an arbitrary couple of indices \(\idxps,\idximp\in\intervint{1}{\pvdim}\), \textit{i.e.},\xspace \begin{equation} \label{eq:def tau 2 extended} \forall \idxps,\idximp\in\intervint{1}{\pvdim}:\quad \tau_{\idxps,\idximp} = g(\idximp) - (g(\idxps) - \lambda \slopeweight_\idxps) -R . \end{equation} On the other hand, the index \(\idxps^{(1)}\) has been defined as \begin{equation} \label{eq:recall def x^(1)} \idxps^{(1)} \triangleq \idxps^\star(\pvdim) = \kargmax_{\idxps\in\intervint{1}{\pvdim}} g(\idxps) - \lambda \slopeweight_\idxps , \end{equation} see~\eqref{eq:def q^star(k)} and~\eqref{eq:def q set}. Combining~\eqref{eq:def tau 2 extended} and~\eqref{eq:recall def x^(1)}, one obtains \(\forall \idximp\in \intervint{1}{{n}}\): \begin{equation} \tau_{\idxps^{(1)},\idximp} = \min_{\idxps\in \intervint{1}{\pvdim}} \tau_{\idxps,\idximp} . \end{equation} In particular, letting \(\idximp=\idximp^{(1)}\), we have \begin{equation} \label{eq:inequality idximp^(1)} \forall \idxps\in\intervint{\idximp^{(1)}}{{n}}:\ \tau_{\idxps^{(1)},\idximp^{(1)}}\leq\tau_{\idxps,\idximp^{(1)}} . \end{equation} Hence, \begin{equation}\label{eq: partial screening test} |{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }| < \tau_{\idxps^{(1)},\idximp^{(1)}} \implies \forall \idxps\in\intervint{\idximp^{(1)}}{{n}}:\ |{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }| < \tau_{\idxps,\idximp^{(1)}} .
\end{equation} In other words, satisfying the left-hand side of \eqref{eq: partial screening test} implies that test \eqref{eq: screening test compressed form} is verified for each \(\idxps\in\intervint{\idximp^{(1)}}{{n}}\). We can apply the same reasoning iteratively to show that \(\forall t\in\intervint{1}{\card[\mathcal{T}]}\): \begin{equation} |{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }| < \tau_{\idxps^{(t)},\idximp^{(t)}} \implies \forall \idxps\in\intervint{\idximp^{(t)}}{\idximp^{(t-1)}-1}:\ |{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }| < \tau_{\idxps,\idximp^{(t)}} . \end{equation} Since \(\idximp^{(\card[\mathcal{T}])}=1\), we obtain that \eqref{eq: threshold-based screening test} implies that \eqref{eq: screening test compressed form} is verified \(\forall\idxps\in\intervint{1}{{n}}\).\\ \section*{Acknowledgments} The authors would like to thank the anonymous reviewers for their thoughtful comments and for pointing out one technical flaw in the first version of the manuscript.\\ \section{Notations} \label{sec:notations} Unless otherwise specified, we will use the following conventions throughout the paper. Vectors are denoted by lowercase bold letters (\textit{e.g.},\xspace \({\bfx}\)) and matrices by uppercase bold letters (\textit{e.g.},\xspace \(\bfA\)). The ``all-zero'' vector of dimension \({n}\) is written \({\bf0}_{n}\). We use symbol \(\ktranspose{}\) to denote the transpose of a vector or a matrix. \({\bfx}_{\entry{j}}\) refers to the \(j\)th component of \({\bfx}\). When referring to the sorted entries of a vector, we use bracket subscripts; more precisely, the notation \(\pv_{\sortedentry{k}}\) refers to the \(k\)th largest value of \(\pv\). For matrices, we use \(\bfa_{\column{j}}\) to denote the \(j\)th column of \(\bfA\). We use the notation \(|{\bfx}|\) to denote the vector made up of the absolute values of the components of \({\bfx}\). The sign function is defined for all nonzero scalars \(x\) as \(\sign[x]=x / \kvbar{x}\), with the convention \(\sign[0]=0\). Calligraphic letters are used to denote sets (\textit{e.g.},\xspace \(\calJ\)) and \(\card\) refers to their cardinality. If \(a\leq b\) are two integers, \(\intervint{a}{b}\) is used as a shorthand notation for the set \(\{a, a+1,\dotsc,b\}\). Given a vector \({\bfx}\in\kR^{{n}}\) and a set of indices \(\calJ\subseteq\intervint{1}{{n}}\), we let \({\bfx}_\calJ\) be the vector of components of \({\bfx}\) with indices in \(\calJ\). Similarly, \(\bfA_\calJ\) denotes the submatrix of \(\bfA\) whose columns have indices in \(\calJ\). \({\bfA}_{\backslash \ell}\) corresponds to matrix \(\bfA\) deprived of its \(\ell\)th column.\\ \section{Screening: main concepts}\label{sec:Screening: main concepts} ``Safe screening'' has been introduced by El Ghaoui \textit{et al.}\xspace in \cite{Ghaoui2010} for \(\ell_1\)-penalized problems: \begin{equation}\label{eq:Ghaoui_GenericProblem} \min_{\scriptstyle{\bfx}\in\kR^{n}} \, \primfun({\bfx}) \triangleq f(\bfA{\bfx}) + \lambda\, \|{\bfx}\|_1, \quad \lambda>0 \end{equation} where \(\kfuncdef{f}{\kR^{m}}{\kR}\) is a closed convex function. It is grounded on the following ideas. First, it is well-known that \(\ell_1\)-regularization favors sparsity of the minimizers of \eqref{eq:Ghaoui_GenericProblem}. For instance, if \(f=\tfrac{1}{2}\|\mathbf{\obsletter}-\cdot\|_2^2\) and the solution of \eqref{eq:Ghaoui_GenericProblem} is unique, it can be shown that the minimizer contains at most \({m}\) nonzero coefficients, see \textit{e.g.},\xspace \cite[Theorem 3.1]{Foucart2013Mathematical}.
Second, if some zeros of the minimizers are identified, \eqref{eq:Ghaoui_GenericProblem} can be shown to be equivalent to a problem of \textit{reduced} dimension. More precisely, let \(\calL\subseteq\intervint{1}{{n}}\) be a set of indices such that we have for any minimizer \(\pvopt\) of \eqref{eq:Ghaoui_GenericProblem}: \begin{equation} \label{eq:hyp zero set} \forall \ell\in\calL:\ \pvopt_{\entry{\ell}} = 0 \end{equation} and let \(\bar{\calL}=\intervint{1}{{n}}\backslash \calL\). Then the following problem \begin{equation}\label{eq:Ghaoui_GenericProblem_reduced} \min_{\pvred\in\kR^{\card[\bar{\calL}]}} \, f(\bfA_{\bar{\calL}}\pvred) + \lambda\, \|\pvred\|_1, \quad \lambda>0 \end{equation} admits the same optimal value as \eqref{eq:Ghaoui_GenericProblem} and there exists a simple bijection between the minimizers of \eqref{eq:Ghaoui_GenericProblem} and \eqref{eq:Ghaoui_GenericProblem_reduced}. We note that \(\pv\) belongs to an \({n}\)-dimensional space whereas \(\pvred\) is a \(\mathrm{card}(\bar{\calL})\)-dimensional vector. Hence, solving \eqref{eq:Ghaoui_GenericProblem_reduced} rather than \eqref{eq:Ghaoui_GenericProblem} may lead to dramatic memory and computational savings if \(\mathrm{card}(\bar{\calL})\ll\pvdim\). The crux of screening consists therefore in identifying (some) zeros of the minimizers of \eqref{eq:Ghaoui_GenericProblem} with marginal cost. El Ghaoui \textit{et al.}\xspace emphasized that this is possible by relaxing some primal-dual optimality condition of problem \eqref{eq:Ghaoui_GenericProblem}. More precisely, let \begin{equation}\label{eq:dual lasso f*} \dvopt \in \kargmax_{\dv \in\kR^{m}}\; \dualfun(\dv) \triangleq -f^*(-\dv) \quad \mbox{\(\mathrm{s.t.} \quad \|\ktranspose{\bfA}\dv\|_\infty\leq \lambda\)} \end{equation} be the dual problem of \eqref{eq:Ghaoui_GenericProblem}, where \reffunctext{f^*} denotes the Fenchel conjugate. Then, by complementary slackness, we must have for any minimizer \(\pvopt\) of \eqref{eq:Ghaoui_GenericProblem}: \begin{equation}\label{eq:ideal screening Lasso from KKT} \forall \ell\in\intervint{1}{{n}}:\ (|\ktranspose{\bfa}_{\column{\ell}}\dvopt|-\lambda)\,\pvopt_{\entry{\ell}}=0. \end{equation} Since dual feasibility imposes that \(|\ktranspose{\bfa}_{\column{\ell}}\dvopt|\leq \lambda\), we obtain the following implication: \begin{equation}\label{eq:ideal screening Lasso} |\ktranspose{\bfa}_{\column{\ell}}\dvopt|<\lambda \implies \pvopt_{\entry{\ell}}=0 . \end{equation} Hence, if \(\dvopt\) is available, the left-hand side of \eqref{eq:ideal screening Lasso} can be used to detect if the \(\ell\)th component of \(\pvopt\) is equal to zero. Unfortunately, finding a maximizer of dual problem \eqref{eq:dual lasso f*} is generally as difficult as solving primal problem \eqref{eq:Ghaoui_GenericProblem}. This issue can nevertheless be circumvented by identifying some region \(\calR\) of the dual space (commonly referred to as \textit{``safe region''}) such that \(\dvopt\in\calR\). Indeed, since \begin{equation}\label{eq:relaxed screening} \max_{\dv\in\calR} \ |\ktranspose{\bfa}_{\column{\ell}}\dv| < \lambda \implies |\ktranspose{\bfa}_{\column{\ell}}\dvopt| < \lambda , \end{equation} the left-hand side of \eqref{eq:relaxed screening} constitutes an alternative (weaker) test to detect the zeros of \(\pvopt\). For proper choices of \(\calR\), the maximization over \(\dv\) admits a simple analytical solution. 
For example, if \(\calR\) is a ball, that is \begin{equation}\label{eq:definition sphere} \calR = \calS(\mathbf{c},R) \triangleq \kset{\dv\in\kR^{m}}{\|\dv-\mathbf{c}\|_2\leq R}, \end{equation} then \(\max_{\dv\in\calR} |\ktranspose{\bfa}_{\column{\ell}}\dv| = |\ktranspose{\bfa}_{\column{\ell}}\mathbf{c}| +R\kvvbar{\bfa_{\column{\ell}}}_2\) and the relaxed test \eqref{eq:relaxed screening} leads to \begin{equation}\label{eq:sphere screening Lasso} |\ktranspose{\bfa}_{\column{\ell}}\mathbf{c}| < \lambda - R \kvvbar{\bfa_{\column{\ell}}}_2 \implies \pvopt_{\entry{\ell}}=0 . \end{equation} In this case, the screening test is straightforward to implement since it only requires the evaluation of one inner product between \(\bfa_{\column{\ell}}\) and \(\mathbf{c}\).\footnote{We note that the \(\ell_2\)-norm appearing in the expression of the test is usually considered as ``known'' since it can be evaluated offline. } Many procedures have been proposed in the literature to construct safe spheres \cite{NIPS2011_0578,fercoq2015,Ndiaye2017} or safe regions with refined geometries \cite{Xiang2012Fast,Xiang:2017ty,Dai2012Ellipsoid,Le2022HolderDomeTechreport}. If \reffunctext{f^*} is a \(\zeta\)-strongly convex function, a popular approach to construct a safe region is the so-called ``GAP sphere'' \cite{Ndiaye2017} whose center and radius are defined as follows: \begin{equation} \label{eq:gap sphere} \begin{array}{ll} \mathbf{c} & = \dv \\ R & = \sqrt{\tfrac{2}{\zeta}(\primfun(\pv)-\dualfun(\dv))} \end{array} \end{equation} where \((\pv,\dv)\) is any primal-dual feasible couple. This approach has gained popularity because of its good behavior when \((\pv,\dv)\) is close to optimality. In particular, if \reffunctext{f} is proper lower semi-continuous, \(\pv=\pvopt\) and \(\dv=\dvopt\), then \(\primfun(\pv)-\dualfun(\dv)=0\) by strong duality~\cite[Proposition~15.22]{Bauschke2017}. In this case, screening test~\eqref{eq:sphere screening Lasso} reduces to~\eqref{eq:ideal screening Lasso} and, except in some degenerate cases, all the zero components of \(\pvopt\) can be identified by the screening test. Interestingly, this behavior also provably occurs for sufficiently small values of the dual gap~\cite[Propositions~8 and~9]{Ndiaye2021Converging} and has been observed in many numerical experiments, see \textit{e.g.},\xspace \cite{fercoq2015,Ndiaye2017,Herzet:2019fj,Elvira2020:Squeezing}. As a final remark, let us mention that the framework presented in this section extends to optimization problems where the (sparsity-promoting) penalty function is a group-separable norm, see \textit{e.g.},~\cite{Ndiaye2017,Dantas2021}. In particular, the complementary slackness condition~\eqref{eq:ideal screening Lasso from KKT} still holds (up to a minor modification), thus allowing the design of safe screening tests based on the same rationale. We note that, since the SLOPE penalization does not feature such a separability property, the methodology presented in this section unfortunately does not apply. \\ \section{Safe screening rules for SLOPE} \label{sec:screening-rule} In this section, we propose a new procedure to extend the concept of safe screening to SLOPE. Our exposition is organized as follows. In \Cref{sec:working hyps} we describe our working assumptions and in \Cref{sec:SLOPE screening main} we present a family of screening tests for SLOPE (see \Cref{th: safe screening for SLOPE}).
Each test is defined by a set of parameters \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\) and takes the form of a series of inequalities. We show that a simple test of the form \eqref{eq:sphere screening Lasso} can be recovered for some particular values of the parameters \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\), although this choice does not correspond to the most effective test in the general case. In \Cref{sec:implementation SLOPE screening}, we finally propose an efficient numerical procedure to verify simultaneously \textit{all} the proposed screening tests.\\ \subsection{Working hypotheses}\label{sec:working hyps} In this section, we present two working hypotheses that are assumed to hold in the rest of the paper, even when not explicitly mentioned. We first suppose that the regularization parameter \(\lambda\) satisfies \begin{equation} \label{eq:lambda-such-that-0-is-sol} 0<\lambda < \lambda_{\max} \triangleq \max_{\idxps\in\intervint{1}{{n}}}\kparen{\sum_{k=1}^\idxps \big\vert\ktranspose{\bfA}\mathbf{\obsletter}\big\vert_{\sortedentry{k}}/ \sum_{k=1}^\idxps \slopeweight_k} . \end{equation} In particular, the hypothesis \(\lambda_{\max}>0\) is tantamount to assuming that \(\mathbf{\obsletter}\notin\ker(\ktranspose{\bfA})\). On the other hand, \(\lambda < \lambda_{\max}\) prevents the vector \({\mathbf 0}_{n}\) from being a minimizer of the SLOPE problem~\eqref{eq:primal problem}. More precisely, it can be shown that under condition~\eqref{eq:hyp slopeweigths}, \begin{equation} \label{eq:cns solution slope is zero} \text{\(\lambda\) and \(\kfamily{\slopeweight_k}{k=1}^{n}\) verify~\eqref{eq:lambda-such-that-0-is-sol}} \Longleftrightarrow \text{\({\bf0}_{n}\) is not a minimizer of~\eqref{eq:primal problem}.} \end{equation} A proof of this result is provided in \Cref{subsec:app:proof cns solutions slope is zero}. Second, we assume that the columns of the dictionary \(\bfA\) are unit-norm, \textit{i.e.},\xspace \begin{equation} \label{eq:assumption:atom unit norm} \forall j\in\intervint{1}{{n}}: \quad \kvvbar{\bfa_{\column{j}}}_2 = 1. \end{equation} Assumption~\eqref{eq:assumption:atom unit norm} simplifies the statement of our results in the next subsection. However, all our subsequent derivations can be easily extended to the general case where \eqref{eq:assumption:atom unit norm} does not hold.\\ \subsection{Safe screening rules} \label{sec:SLOPE screening main} In this section, we derive a family of safe screening rules for SLOPE. Let us first note that \eqref{eq:primal problem} admits at least one minimizer and our screening problem is therefore well-posed. Indeed, the primal cost function in \eqref{eq:primal problem} is continuous and coercive since \reffunctext{\regslope} is a norm (see \textit{e.g.},\xspace \cite[Proposition 1.1]{Bogdan2013statistical} or \cite[Lemma 2]{Zeng:2014AtomicNorm}); the existence of a minimizer then follows from the Weierstrass theorem~\cite[Theorem~1.29]{Bauschke2017}. In the following, we will assume that the minimizer is unique in order to simplify our statements. Nevertheless, all our results extend to the general case where there exists more than one minimizer by replacing ``\(\pvopt_{\entry{\ell}}=0\)'' by ``\(\pvopt_{\entry{\ell}}=0\ \mbox{for any minimizer of \eqref{eq:primal problem}}\)'' in all our subsequent statements.
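Before stating our screening rules, let us note for concreteness that the working hypotheses of \Cref{sec:working hyps} are cheap to check or enforce numerically. The following sketch (ours, 0-indexed \texttt{numpy}; the function names are our own choices) evaluates \(\lambda_{\max}\) from \eqref{eq:lambda-such-that-0-is-sol} and normalizes the atoms as in \eqref{eq:assumption:atom unit norm}:
\begin{verbatim}
# Sketch (ours): checking/enforcing the two working hypotheses.
import numpy as np

def normalize_columns(A):
    return A / np.linalg.norm(A, axis=0)        # unit l2-norm atoms

def lambda_max(A, y, w):
    """w: nonincreasing positive weights, len(w) == A.shape[1]."""
    z = np.sort(np.abs(A.T @ y))[::-1]          # |A^T y|, largest first
    return np.max(np.cumsum(z) / np.cumsum(w))  # max over q of partial sums
\end{verbatim}
Any value \(0<\lambda<\)\texttt{lambda\_max(A, y, w)} then complies with~\eqref{eq:lambda-such-that-0-is-sol}.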
Our starting point to derive our safe screening rules is the following primal-dual optimality condition: \begin{theorem}\label{th:safe-screening} Let \begin{equation} \label{eq:dual problem} \dvopt = \kargmax_{\dv\in\dfset}\; \dualfun(\dv) \triangleq \tfrac{1}{2}\|\mathbf{\obsletter}\|_2^2 - \tfrac{1}{2}\|\mathbf{\obsletter} - \dv\|_2^2 , \end{equation} where \begin{equation} \label{eq:dual set} \dfset = \kset{\dv}{\sum_{k=1}^\idxps \big\vert\ktranspose{\bfA}\dv\big\vert_{\sortedentry{k}}\leq \lambda \sum_{k=1}^\idxps \slopeweight_k,\,\idxps\in\intervint{1}{{n}}}. \end{equation} Then, for all integers \(\ell\in\intervint{1}{{n}}\): \begin{equation} \label{eq: ideal safe screening test} \forall \idxps\in\intervint{1}{{n}}: \ \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=1}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} < \lambda {\sum_{k=1}^{\idxps} \slopeweight_k} \implies \pv^{\star}_{\entry{\ell}} = 0 . \end{equation} \end{theorem} A proof of this result is provided in \Cref{subsec:app:proof ideal screening Slope}. We mention that, although it differs quite significantly in its formulation, \Cref{th:safe-screening} is closely related to \cite[Proposition~1]{Larsson2020strong}.\footnote{We refer the reader to Section~SM1 of the electronic supplementary material of this paper for a detailed description and a proof of the connection between these two results.} We also note that \eqref{eq:dual problem} corresponds to the dual problem of~\eqref{eq:primal problem}, see \textit{e.g.},\xspace~\cite[Section~2.5]{Bogdan2013statistical}. Moreover, ${\dvopt}$ exists and is unique because \reffunctext{\dualfun} is a continuous strongly-concave function and \(\dfset\) a closed convex set. The equality in~\eqref{eq:dual problem} is therefore well-defined. \Cref{th:safe-screening} provides a condition similar to \eqref{eq:ideal screening Lasso} relating the dual optimal solution \(\dvopt\) to the zero components of the primal minimizer \(\pvopt\). Unfortunately, evaluating the dual solution \(\dvopt\) requires a computational load comparable to the one needed to solve the SLOPE problem~\eqref{eq:primal problem}. Similarly to \(\ell_1\)-penalized problems, tractable screening rules can nevertheless be devised if ``easily-computable'' upper bounds on the left-hand side of \eqref{eq: ideal safe screening test} can be found. In particular, for any set \(\{B_{\idxps,\ell}\in\kR\}_{\idxps\in\intervint{1}{{n}}}\) verifying \begin{equation}\label{eq:def upper bound} \forall \idxps\in\intervint{1}{{n}}: \ \big\vert\ktranspose{\bfa}_{\column{\ell}}\dvopt\big\vert + \sum_{k=1}^{\idxps-1} \big\vert\ktranspose{\bfA}_{\backslash \ell}\dvopt\big\vert_{\sortedentry{k}} \leq B_{\idxps,\ell} , \end{equation} we readily have that \begin{equation} \label{eq:relaxed SLOPE screening test} \forall \idxps\in\intervint{1}{{n}}: B_{\idxps,\ell} < \lambda {\sum_{k=1}^{\idxps} \slopeweight_k} \implies \pv^{\star}_{\entry{\ell}} = 0 . \end{equation} The next lemma provides several instances of such upper bounds:\vspace{0.2cm} \begin{lemma}\label{lemma:upper bound} Let \(\dvopt\in\calS(\mathbf{c},R)\).
Then \(\forall \ell\in\intervint{1}{{n}}\) and \(\forall \idxps\in\intervint{1}{{n}}\), we have that \begin{equation} \nonumber B_{\idxps,\ell} \triangleq \big\vert \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} \big\vert + \sum_{k=\idximp}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} }\big\vert_{\sortedentry{k}} + (\idxps-\idximp+1) R + \lambda\sum_{k=1}^{\idximp-1} \slopeweight_{k} \end{equation} verifies \eqref{eq:def upper bound} for any \(\idximp\in\intervint{1}{\idxps}\).\vspace{0.2cm} \end{lemma} A proof of this result is available in \Cref{sec:Proof of lemma:upper bound}. We note that \Cref{lemma:upper bound} defines \textit{one} particular family of upper bounds on the left-hand side of \eqref{eq:def upper bound}. The derivation of these upper bounds is based on the knowledge of a safe spherical region and partially exploits the definition of the dual feasible set, see \Cref{sec:Proof of lemma:upper bound}. We nevertheless emphasize that other choices of safe regions or majorization techniques can be envisioned and possibly lead to more favorable upper bounds. Defining \begin{equation} \kappa_{\idxps,\idximp} \triangleq \lambda \Bigg( \sum_{k=\idximp}^{\idxps} \slopeweight_k \Bigg) - (\idxps-\idximp+1)R, \end{equation} a straightforward particularization of \eqref{eq:relaxed SLOPE screening test} then leads to the following safe screening rules for SLOPE:\vspace{0.1cm} \begin{theorem}\label{th: safe screening for SLOPE} Let \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{\pvdim}}\) be a sequence such that \(\idximp_\idxps\in\intervint{1}{\idxps}\) for all \(\idxps\in\intervint{1}{\pvdim}\). Then, the following statement holds: \begin{equation}\label{eq: general safe screening for SLOPE} \forall \idxps\in\intervint{1}{{n}}: \big\vert{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }\big\vert + \sum_{k=\idximp_\idxps}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} }\big\vert_{\sortedentry{k}} < \kappa_{\idxps,\idximp_\idxps} \implies \pv^{\star}_{\entry{\ell}} = 0 . \end{equation} \end{theorem} We mention that the notation ``\(\idximp_\idxps\)'' is here introduced to stress the fact that a different value of \(\idximp\) can be used for each \(\idxps\) in~\eqref{eq: general safe screening for SLOPE}. Since \(\idxps\in\intervint{1}{{n}}\) and each parameter \(\idximp_\idxps\) can take on \(\idxps\) different values in \Cref{th: safe screening for SLOPE},~\eqref{eq: general safe screening for SLOPE} thus defines \({n}!\) different screening tests for SLOPE in which \(\tfrac{\pvdim(\pvdim+1)}{2}\) distinct inequalities are involved. We discuss two particular choices of the parameters \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\) below and propose an efficient procedure to jointly evaluate all the tests defined by feasible sequences \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{\pvdim}}\) in the next section. Let us first consider the case where \begin{equation}\label{eq:rq=1} \forall\idxps\in\intervint{1}{{n}}:\ \idximp_\idxps=1. \end{equation} Screening test~\eqref{eq: general safe screening for SLOPE} then particularizes as \begin{equation}\label{eq: safe screening for SLOPE p=q-1} \forall \idxps\in\intervint{1}{{n}}:\ \big\vert{\ktranspose{\bfa}_{\column{\ell}}\mathbf{c}}\big\vert + \sum_{k=1}^{\idxps-1} \big\vert{ \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} }\big\vert_{\sortedentry{k}} < \lambda \Bigg( \sum_{k=1}^{\idxps} \slopeweight_k \Bigg) - \idxps R \implies \pv^{\star}_{\entry{\ell}} = 0 .
\end{equation} Interestingly, \eqref{eq: safe screening for SLOPE p=q-1} shares the same mathematical structure as optimality condition \eqref{eq: ideal safe screening test}. In particular, \eqref{eq: safe screening for SLOPE p=q-1} reduces to \eqref{eq: ideal safe screening test} when \(\mathbf{c}=\dvopt\) and \(R=0\). In this case, it is easy to see that \eqref{eq: safe screening for SLOPE p=q-1} is the best\footnote{In the following sense: if test \eqref{eq: general safe screening for SLOPE} passes for some choice of the parameters \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\), then test \eqref{eq: safe screening for SLOPE p=q-1} also necessarily succeeds. \label{footnote:definition of best screening test}} screening test within the family of tests defined in \Cref{th: safe screening for SLOPE} since an equality occurs in \eqref{eq:def upper bound}. \begin{figure} \centering \includegraphics[width=.7\columnwidth]{xp_illustration_screening0.pdf} \caption{ \label{fig:comparing choice of q_r} Percentage of zero entries in \(\pvopt\) detected by the safe screening tests as a function of \(R\), the radius of the safe sphere. Each curve corresponds to a different implementation of the safe screening test~\eqref{eq: general safe screening for SLOPE}: \(\idximp_\idxps=1\) \(\forall \idxps\), see~\eqref{eq: safe screening for SLOPE p=q-1} (green curve), \(\idximp_\idxps=\idxps\) \(\forall \idxps\), see \eqref{eq: safe screening for SLOPE p=0} (blue curve), and all possible choices for \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\) (orange curve). The results are generated by using the \oscar{1} sequence for \(\{\slopeweight_k\}_{k=1}^{n}\), the Toeplitz dictionary and the ratio \(\lambda / \lambda_{\max}=0.5\), see \Cref{subsec:simu:setup}. } \end{figure} In practice, we may expect this conclusion to remain valid when \(R\) is ``sufficiently'' close to zero. This behavior is illustrated in \Cref{fig:comparing choice of q_r}. The figure represents the proportion of zero entries of \(\pvopt\) detected by screening test \eqref{eq: general safe screening for SLOPE} for different ``qualities'' of the safe region and different choices of the parameters \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\). We refer the reader to \Cref{subsec:simu:setup} for a detailed description of the simulation setup. The center of the safe sphere used to apply \eqref{eq: general safe screening for SLOPE} is assumed to be equal (up to machine precision) to \(\dvopt\) and the \(x\)-axis of the figure represents the radius \(R\) of the safe sphere. The green curve corresponds to test~\eqref{eq: safe screening for SLOPE p=q-1}; the orange curve represents the screening performance achieved when test \eqref{eq: general safe screening for SLOPE} is implemented for all possible choices of \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\). We note that, as expected, the green curve attains the best screening performance as soon as \(R\) becomes close to zero. At the other extreme of the spectrum, another case of interest reads as: \begin{equation} \forall\idxps\in\intervint{1}{{n}}:\ \idximp_\idxps=\idxps.
\end{equation} Using our initial hypothesis~\eqref{eq:hyp slopeweigths}, the screening test~\eqref{eq: general safe screening for SLOPE} can be rewritten as\footnote{ \label{footnote:proof case pq=q} More precisely, \eqref{eq: general safe screening for SLOPE} reduces to ``\(\forall \idxps\in\intervint{1}{\pvdim}:\ |{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }| < \lambda \slopeweight_\idxps - R \implies \pv^{\star}_{\entry{\ell}} = 0\)'' which, in view of \eqref{eq:hyp slopeweigths}, is equivalent to \eqref{eq: safe screening for SLOPE p=0}. } \begin{equation} \label{eq: safe screening for SLOPE p=0} |{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} }| < \lambda \slopeweight_{n} - R \implies \pv^{\star}_{\entry{\ell}} = 0 . \end{equation} Interestingly, this test has the same mathematical structure as \eqref{eq:sphere screening Lasso}, with the exception that \(\lambda\) is multiplied by the value of the smallest weighting coefficient \(\slopeweight_{n}\). In particular, if \(\slopeweight_k=1\) \(\forall k\in\intervint{1}{\pvdim}\), SLOPE reduces to LASSO and test \eqref{eq: safe screening for SLOPE p=0} is equivalent to \eqref{eq:sphere screening Lasso}; \Cref{th: safe screening for SLOPE} thus encompasses the standard screening rule \eqref{eq:sphere screening Lasso} for LASSO as a particular case. The following result emphasizes that \eqref{eq: safe screening for SLOPE p=0} is in fact the best screening rule within the family of tests defined by \Cref{th: safe screening for SLOPE} when \(\slopeweight_k=1\) \(\forall k\in\intervint{1}{\pvdim}\): \begin{lemma}\label{lemma:optimality p=0 for LASSO} If \(\slopeweight_k=1\) \(\forall k\in\intervint{1}{\pvdim}\) and test \eqref{eq: general safe screening for SLOPE} passes for some choice of parameters \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\), then test \eqref{eq: safe screening for SLOPE p=0} also succeeds. \end{lemma} A proof of this result is available in \Cref{sec:Proof of lemma:optimality p=0 for LASSO}. As a final remark, let us mention that, although we just emphasized that some choices of the parameters $\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}$ can be optimal (in terms of screening performance) in some situations, no conclusion can be drawn in the general case. In particular, we found in our numerical experiments that the best choice for $\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}$ depends on many factors: the weights $\{\slopeweight_k\}_{k=1}^{n}$, the radius of the safe sphere \(R\), the nature of the dictionary, the atom to screen, etc. This is illustrated in Fig.~\ref{fig:comparing choice of q_r}: we see that the blue and green curves deviate from the orange curve for certain values of \(R\); that is, the best screening performance is not necessarily achieved for \(\idximp_\idxps=1\) or \(\idximp_\idxps=\idxps \ \forall \idxps\in\intervint{1}{\pvdim}\).\\ \subsection{Efficient implementation} \label{sec:implementation SLOPE screening} Since the best values of \(\{\idximp_\idxps\}_{\idxps\in\intervint{1}{{n}}}\) cannot be foreseen, it is desirable to evaluate the screening rule~\eqref{eq: general safe screening for SLOPE} for \textit{any} choice of these parameters.
Formally, this ideal test reads: \begin{equation} \label{eq: screening SLOPE all p} \forall \idxps\in\intervint{1}{{n}},\exists \idximp_{{\idxps}}\in\intervint{1}{\idxps}: \big\vert \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} \big\vert + \sum_{k=\idximp_{{\idxps}}}^{\idxps-1} \big\vert \ktranspose{\bfA}_{\backslash \ell}\mathbf{c} \big\vert_{\sortedentry{k}} < \kappa_{\idxps,\idximp_{{\idxps}}} \implies \pv^{\star}_{\entry{\ell}} = 0 . \end{equation} Since verifying this test for a \textit{given} index $\ell$ involves the evaluation of $\mathcal{O}({n}^2)$ inequalities, a brute-force evaluation of \eqref{eq: screening SLOPE all p} for all atoms of the dictionary requires $\mathcal{O}({n}^3)$ operations. In this section, we present a procedure to perform this task with a complexity scaling as $\mathcal{O}({n} \log {n} + T \nscreen)$ where $T\leq {n}$ is some problem-dependent constant (to be defined later on) and $\nscreen$ is the number of atoms of the dictionary passing test \eqref{eq: screening SLOPE all p}. Our procedure is summarized in \Cref{alg:Implementation generalized SLOPE screening test 2,alg:Implementation generalized SLOPE screening test}, and is grounded in the following nesting properties.\\ \begin{algorithm}[t!] \caption{ \label{alg:Implementation generalized SLOPE screening test 2} Fast implementation of SLOPE screening test \eqref{eq: screening SLOPE all p} } \begin{algorithmic}[1] \REQUIRE{radius \(R\geq0\), sorted elements $\{|\ktranspose{\bfA}\mathbf{c}|_{\sortedentry{k}}\}_{k=1}^{n}$} \STATE $\calL = \emptyset$ \COMMENT{Set of screened atoms: init} \STATE $\ell = {n}$ \COMMENT{Index of atom under testing: init} \STATE Evaluate $\{g({\idximp})\}_{{\idximp}=1}^{{n}}$, $\{\idximp^\star({\idxps})\}_{{\idxps}=1}^{{n}}$, $\{\idxps^\star(k)\}_{k=1}^{{n}}$ \STATE $\mathrm{run} = 1$ \vspace*{.2em} \WHILE{$\mathrm{run} == 1$ and $\ell>0$} \STATE $\mathrm{test}$ = Algorithm \ref{alg:Implementation generalized SLOPE screening test}($R$, $\ell$, $\{g({\idximp})\}_{{\idximp}=1}^{{n}}$, $\{\idximp^\star({\idxps})\}_{{\idxps}=1}^{{n}}$, $\{\idxps^\star(k)\}_{k=1}^{{n}}$) \IF{$\mathrm{test}==1$} \STATE $\calL = \calL \cup\{\ell\}$ \STATE $\ell = \ell-1$ \ELSE \STATE $\mathrm{run} = 0$ \COMMENT{Stop testing as soon as one atom does not pass the test} \ENDIF \ENDWHILE \RETURN{$\calL$ (Set of indices passing test \eqref{eq: screening SLOPE all p})} \end{algorithmic} \end{algorithm} \noindent \paragraph{Nesting of the tests for different atoms} We first emphasize that the failures of test \eqref{eq: screening SLOPE all p} at different indices are nested. In particular, the following result holds:\vspace{0.1cm} \begin{lemma}\label{lemma:nesting test} Let $B_{\idxps,\ell}$ be defined as in \Cref{lemma:upper bound} and assume that \begin{equation} \label{eq:WH} \kvbar{\ktranspose{\bfa}_{\column{1}}\mathbf{c}}\geq \ldots \geq \kvbar{\ktranspose{\bfa}_{\column{{n}}}\mathbf{c}}. \end{equation} Then $\forall\idxps\in\intervint{1}{{n}}$: \begin{equation} \ell<\ell' \implies B_{\idxps,\ell}\geq B_{\idxps,\ell'} . \end{equation} \end{lemma} A proof of this result is provided in \Cref{subsec:proof lemma nesting test}. \Cref{lemma:nesting test} has the following consequence: if \eqref{eq:WH} holds, the failure of test \eqref{eq: screening SLOPE all p} for some $\ell'\in\intervint{2}{{n}}$ implies the failure of the test for any index $\ell\in \intervint{1}{\ell'-1}$.
This immediately suggests a backward strategy for the evaluation of \eqref{eq: screening SLOPE all p}, starting from $\ell={n}$ and going backward to smaller indices. This is the rationale behind the main recursion in \Cref{alg:Implementation generalized SLOPE screening test 2}. We note that hypothesis~\eqref{eq:WH} can always be enforced by a proper reordering of the elements of $|\ktranspose{\bfA}\mathbf{c}|$. This can be achieved by state-of-the-art sorting procedures with a complexity of $\mathcal{O}({n} \log {n})$. Therefore, in the sequel we will assume that~\eqref{eq:WH} holds even if not explicitly mentioned.\\ \paragraph{Nesting of some inequalities} We next show that the number of inequalities to be verified may be substantially smaller than $\mathcal{O}({n}^2)$. We first focus on the case ``$\ell={n}$'' and then extend our result to the general case ``$\ell<{n}$''. Let us first note that under hypothesis~\eqref{eq:WH}: \begin{equation} \forall k\in \intervint{1}{{n}-1}:\ |\ktranspose{\bfA}_{\backslash{n}}\mathbf{c} |_{\sortedentry{k}}=| \ktranspose{\bfA}_{\backslash{n}}\mathbf{c} |_{\entry{k}}, \end{equation} that is, the $k$th largest element of $|\ktranspose{\bfA}_{\backslash{n}}\mathbf{c} |$ is simply equal to its $k$th component. The particularization of \eqref{eq: screening SLOPE all p} to $\ell={n}$ can then be rewritten as: \begin{equation}\label{eq: screening test compressed form} \forall \idxps\in\intervint{1}{{n}}, \exists \idximp_\idxps\in \intervint{1}{\idxps}:\ \kvbar{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} } < \tau_{\idxps,\idximp_\idxps} \end{equation} where \(\tau_{\idxps,\idximp}\) is defined \(\forall\idxps\in\intervint{1}{\pvdim}\) and \(\idximp\in\intervint{1}{\idxps}\) as \begin{equation} \tau_{\idxps,\idximp} \triangleq \kappa_{\idxps,\idximp} - \sum_{k=\idximp}^{\idxps-1} \kvbar{ \ktranspose{\bfA}\mathbf{c} }_{\entry{k}} = \sum_{k=\idximp}^{\idxps-1} (\lambda\slopeweight_{k} - \kvbar{ \ktranspose{\bfA}\mathbf{c} }_{\entry{k}} - R) + (\lambda \slopeweight_\idxps -R) . \end{equation} We show hereafter that \eqref{eq: screening test compressed form} can be verified by only considering a ``well-chosen'' subset of thresholds $\mathcal{T}\subseteq\kset{\tau_{\idxps,\idximp}}{\idxps\in\intervint{1}{{n}}, \idximp\in\intervint{1}{\idxps}}$, see \Cref{lemma: subset inequality} below. If \begin{equation} \idximp^\star(\idxps) \triangleq \kargmax_{\idximp\in\intervint{1}{\idxps}} \tau_{\idxps,\idximp} , \end{equation} we obviously have \begin{equation}\label{eq: meaning r*} \kvbar{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }<\tau_{\idxps,\idximp^\star(\idxps)} \iff \exists\idximp_{{\idxps}}\in\intervint{1}{\idxps}:\ \kvbar{ \ktranspose{\bfa}_{\column{{n}}}\mathbf{c} }<\tau_{\idxps,\idximp_{{\idxps}}} . \end{equation} In other words, for each $\idxps\in\intervint{1}{{n}}$, satisfying the inequality ``$\kvbar{\ktranspose{\bfa}_{\column{{n}}}\mathbf{c}}<\tau_{\idxps,\idximp}$'' for $\idximp=\idximp^\star(\idxps)$ is necessary and sufficient to ensure that it is verified for some $\idximp_{{\idxps}}\in\intervint{1}{\idxps}$. Motivated by this observation, we show the following items below: \textit{i)} $\idximp^\star(\idxps)$ can be evaluated $\forall \idxps\in\intervint{1}{{n}}$ with a complexity $\mathcal{O}({n})$; \textit{ii)} similarly to $\idximp$, only a subset of values of $\idxps\in\intervint{1}{{n}}$ are of interest to implement \eqref{eq: screening test compressed form}.
Let us define the function: \begin{equation}\label{eq:def f} \kfuncdef{g}{\intervint{1}{{n}}}{\kR}[\idximp] [ \sum_{k=\idximp}^{{n}} (\lambda\slopeweight_{k} - \kvbar{ \ktranspose{\bfA}\mathbf{c}}_{\entry{k}} - R) ] . \end{equation} We then have \(\forall \idxps\in\intervint{1}{{n}}\) and \(\idximp\in\intervint{1}{\idxps}\): \begin{equation}\label{eq:def tau 2} \tau_{\idxps,\idximp} = g(\idximp) - (g(\idxps) - \lambda \slopeweight_\idxps) -R . \end{equation} In view of \eqref{eq:def tau 2}, the optimal value $\idximp^\star(\idxps)$ can be computed as \begin{equation}\label{eq:def idximpstar} \idximp^\star(\idxps) = \kargmax_{\idximp\in\intervint{1}{\idxps}} g(\idximp). \end{equation} Considering \eqref{eq:def f}, we see that the evaluation of $g(\idximp)$ $\forall\idximp\in\intervint{1}{{n}}$ (and therefore of $\idximp^\star(\idxps)$ $\forall \idxps\in\intervint{1}{{n}}$) can be done with a complexity scaling as $\mathcal{O}({n})$. This proves item \textit{i)}. Let us now show that only some specific indices $\idxps\in\intervint{1}{{n}}$ are of interest to implement \eqref{eq: screening test compressed form}. Let \begin{equation} \label{eq:def q^star(k)} \idxps^\star(k) \triangleq \kargmax_{\idxps\in\intervint{1}{k}} g(\idxps) - \lambda \slopeweight_\idxps , \end{equation} and define the sequence \(\{\idxps^{(t)}\}_t\) as \begin{equation}\label{eq:def q set} \begin{cases} \idxps^{(1)} & = \idxps^\star({n}) \\ \idxps^{(t)} & = \idxps^\star(\idximp^\star(\idxps^{(t-1)})-1) \end{cases} \end{equation} where the recursion is applied as long as $\idximp^\star(\idxps^{(t-1)})>1$.\footnote{We note that the sequence $\{\idxps^{(t)}\}_t$ is strictly decreasing and thus contains at most ${n}$ elements.} We then have the following result whose proof is available in \Cref{proof:lemma: subset inequality}: \begin{lemma}\label{lemma: subset inequality} Let $\mathcal{T} \triangleq \kset{\tau_{\idxps,\idximp^{\star}(\idxps)}}{\idxps\in\{\idxps^{(t)}\}_{t}}$ where $\{\idxps^{(t)}\}_{t}$ is defined in \eqref{eq:def q set}. Test \eqref{eq: screening test compressed form} is passed if and only if \begin{equation}\label{eq: threshold-based screening test} \forall \tau\in\mathcal{T}:\ |\ktranspose{\bfa}_{\column{{n}}}\mathbf{c}|<\tau . \end{equation} \end{lemma} \Cref{lemma: subset inequality} suggests the procedure described in Algorithm~\ref{alg:Implementation generalized SLOPE screening test} (with $\ell={n}$) to verify whether \eqref{eq: screening test compressed form} is passed. In a nutshell, the lemma states that only $\card[\mathcal{T}]$ inequalities need to be taken into account to implement \eqref{eq: screening test compressed form}. We note that $\card[\mathcal{T}]\leq {n}$ since only one value of $\idximp$ (that is, $\idximp^\star(\idxps)$) has to be considered for any $\idxps\in\intervint{1}{{n}}$. This is in contrast with a brute-force evaluation of \eqref{eq: screening test compressed form} which requires the verification of $\mathcal{O}({n}^2)$ inequalities.
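To fix ideas, the following sketch (ours, 0-indexed \texttt{numpy}, so that \(\idximp=1\) in the text corresponds to index \texttt{0} below; it is a schematic implementation of \Cref{alg:Implementation generalized SLOPE screening test 2,alg:Implementation generalized SLOPE screening test}, not the code used in \Cref{sec:simus}) precomputes \(g\), \(\idximp^\star\) and \(\idxps^\star\) in \(\mathcal{O}({n})\) operations and then runs the threshold recursion \eqref{eq:def q set}:
\begin{verbatim}
# Sketch (ours) of the fast screening procedure. v = |A^T c| sorted in
# nonincreasing order (hypothesis (eq:WH)); w = nonincreasing weights.
import numpy as np

def precompute(v, w, lam, R):
    n = v.size
    d = lam * w - v - R
    g = np.cumsum(d[::-1])[::-1]        # g[r] = sum_{k=r}^{n-1} d[k]
    h = g - lam * w                     # g(q) - lam*w_q, cf. tau formula
    r_star = np.zeros(n, dtype=int)     # prefix argmax of g
    q_star = np.zeros(n, dtype=int)     # prefix argmax of g - lam*w
    for k in range(1, n):
        r_star[k] = k if g[k] > g[r_star[k - 1]] else r_star[k - 1]
        q_star[k] = k if h[k] > h[q_star[k - 1]] else q_star[k - 1]
    return g, r_star, q_star

def atom_passes(ell, v, w, lam, R, g, r_star, q_star):
    """Assumes all atoms with index > ell have already passed the test."""
    q = q_star[ell]                     # q^(1) of the recursion
    while True:
        tau = g[r_star[q]] - g[q] + lam * w[q] - R   # current threshold
        if v[ell] >= tau:
            return False
        if r_star[q] == 0:              # r*(q) = 1 in 1-indexed notation
            return True
        q = q_star[r_star[q] - 1]       # next q^(t) of the recursion

def screen_all(v, w, lam, R):
    g, r_star, q_star = precompute(v, w, lam, R)
    screened = []
    for ell in range(v.size - 1, -1, -1):   # backward sweep over atoms
        if not atom_passes(ell, v, w, lam, R, g, r_star, q_star):
            break                           # all smaller indices fail too
        screened.append(ell)
    return screened
\end{verbatim}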
\begin{algorithm}[t] \caption{ \label{alg:Implementation generalized SLOPE screening test} Check if test \eqref{eq: screening SLOPE all p} is passed for $\ell$ if it is passed for $\ell'>\ell$ } \begin{algorithmic}[1] \REQUIRE{radius \(R\geq0\), index $\ell\in\intervint{1}{{n}}$, $\{g({\idximp})\}_{{\idximp}=1}^{{n}}$, $\{\idximp^\star({\idxps})\}_{{\idxps}=1}^{{n}}$, $\{\idxps^\star(k)\}_{k=1}^{{n}}$} \STATE $\idxps = \idxps^\star(\ell)$ \STATE $\mathrm{test} = 1$ \STATE $\mathrm{run} = 1$ \vspace*{.2em} \WHILE{$\mathrm{run} == 1$} \STATE $\tau = g(\idximp^\star(\idxps))-g(\idxps)+(\lambda\slopeweight_\idxps-R)$ \COMMENT{Evaluation of current threshold, see \eqref{eq:def tau 2}} \IF{$|\ktranspose{\bfa}_{\column{\ell}} \mathbf{c}|\geq \tau$} \STATE $\mathrm{test} = 0$ \COMMENT{Test failed} \STATE $\mathrm{run} = 0$ \COMMENT{Stops the recursion} \ENDIF \IF{$\idximp^\star(\idxps)>1$} \STATE $\idxps = \idxps^\star(\idximp^\star(\idxps)-1)$ \COMMENT{Next value of $\idxps$ to test, see \eqref{eq:def q set}} \ELSE \STATE $\mathrm{run} = 0$ \COMMENT{Stops the recursion} \ENDIF \ENDWHILE \RETURN{$\mathrm{test}$ ($=1$ if test passed and $0$ otherwise)} \end{algorithmic} \end{algorithm} We finally emphasize that the procedure described in Algorithm~\ref{alg:Implementation generalized SLOPE screening test} also applies to $\ell<{n}$ as long as the screening test is passed for all $\ell'>\ell$. More specifically, if test \eqref{eq: screening SLOPE all p} is passed for all $\ell'\in\intervint{\ell+1}{{n}}$, then its particularization to atom $\bfa_{\column{\ell}}$ reads \begin{equation}\label{eq:concatenated test 3} \forall \tau\in \mathcal{T}':\ \kvbar{ \ktranspose{\bfa}_{\column{\ell}}\mathbf{c} } < \tau \end{equation} for some $\mathcal{T}'\subseteq \mathcal{T}$. Indeed, if screening test \eqref{eq: screening SLOPE all p} is passed for all $\ell'\in\intervint{\ell+1}{{n}}$, the corresponding elements can be discarded from the dictionary and we obtain a reduced problem only involving the atoms $\{\bfa_{\column{\ell'}}\}_{\ell'\in\intervint{1}{\ell}}$. Since \eqref{eq:WH} is assumed to hold, $\bfa_{\column{\ell}}$ attains the smallest absolute inner product with $\mathbf{c}$ and we end up with the same setup as in the case ``$\ell={n}$''. In particular, if screening test \eqref{eq: screening SLOPE all p} is passed for all $\ell'\in\intervint{\ell+1}{{n}}$, \Cref{lemma: subset inequality} still holds for $\bfa_{\column{\ell}}$ by letting $\idxps^{(1)}=\idxps^\star(\ell)$ in the definition of the sequence $\{\idxps^{(t)}\}_t$ in \eqref{eq:def q set}. To conclude this section, let us summarize the complexity needed to implement \Cref{alg:Implementation generalized SLOPE screening test 2,alg:Implementation generalized SLOPE screening test}. First, \Cref{alg:Implementation generalized SLOPE screening test 2} requires the entries of $|\ktranspose{\bfA}\mathbf{c}|$ to be sorted so as to satisfy hypothesis~\eqref{eq:WH} of \Cref{lemma:nesting test}. This involves a complexity $\mathcal{O}({n}\log{n})$. Moreover, the sequences $\{g({\idximp})\}_{{\idximp}=1}^{{n}}$, $\{\idximp^\star({\idxps})\}_{{\idxps}=1}^{{n}}$, $\{\idxps^\star(k)\}_{k=1}^{{n}}$ can be evaluated with a complexity $\mathcal{O}({n})$. Finally, the main recursion in \Cref{alg:Implementation generalized SLOPE screening test 2} entails running \Cref{alg:Implementation generalized SLOPE screening test} $\nscreen$ times, where $\nscreen$ is the number of atoms passing test \eqref{eq: screening SLOPE all p}.
Since \Cref{alg:Implementation generalized SLOPE screening test} requires the verification of at most $T=\card[\mathcal{T}]$ inequalities, the overall complexity of the main recursion scales as $\mathcal{O}(\nscreen T)$. Overall, the complexity of \Cref{alg:Implementation generalized SLOPE screening test 2} is therefore $\mathcal{O}({n}\log{n}+\nscreen T)$. \\ \section{Numerical simulations} \label{sec:simus} We present hereafter several simulation results demonstrating the effectiveness of the proposed screening procedure to accelerate the resolution of SLOPE. This section is organized as follows. In \Cref{subsec:simu:setup}, we present the experimental setups considered in our simulations. In \Cref{subsec:simu:effectiveness} we compare the effectiveness of different screening strategies. In \Cref{subsec:simu:bench}, we show that our methodology makes it possible to reach better convergence properties for a given computational budget.\\ \subsection{Experimental setup} \label{subsec:simu:setup} We detail below the experimental setups used in all our numerical experiments. \vspace*{0.2cm} \noindent \textit{Dictionaries and observation vectors}: New realizations of \(\bfA\) and \(\mathbf{\obsletter}\) are drawn for each trial as follows. The observation vector is generated according to a uniform distribution on the \({m}\)-dimensional sphere. The elements of \(\bfA\) obey one of the following models:\vspace*{0.1cm} \begin{enumerate} \item the entries are i.i.d. realizations of a centered Gaussian distribution, \item the entries are i.i.d. realizations of a uniform distribution on \([0,1]\), \item the columns are shifted versions of a Gaussian curve.\vspace*{0.1cm} \end{enumerate} For all three models, the columns of \(\bfA\) are normalized to have unit $\ell_2$-norm. In the following, these three options will be respectively referred to as ``Gaussian'', ``Uniform'' and ``Toeplitz''.\vspace*{0.2cm} \noindent \textit{Regularization parameters}: We consider three different choices for the sequence \(\{\slopeweight_k\}_{k=1}^{n}\), each of them corresponding to a different instance of the well-known OSCAR problem~\cite[Eq.~(3)]{Bondell2007}. More specifically, we let \begin{equation} \label{eq:def-seq-oscar} \forall k\in\intervint{1}{{n}}:\ \slopeweight^{\texttt{OSCAR}}_k \triangleq \beta_1 + \beta_2 ({n}-k) \end{equation} where \(\beta_1\), \(\beta_2\) are nonnegative parameters chosen so that \(\slopeweight^{\texttt{OSCAR}}_1=1\) and \(\slopeweight^{\texttt{OSCAR}}_{n}\in\{0.9, 0.1, 10^{-3}\}\). In the sequel, these parametrizations will respectively be referred to as ``\oscar{1}'', ``\oscar{2}'' and ``\oscar{3}''.\\ \subsection{Performance of screening strategies} \label{subsec:simu:effectiveness} We first compare the effectiveness of the different screening strategies described in \Cref{sec:screening-rule}. More specifically, we evaluate the proportion of zero entries in \(\pvopt\) -- the solution of the SLOPE problem~\eqref{eq:primal problem} -- that can be identified by tests \eqref{eq: safe screening for SLOPE p=q-1}, \eqref{eq: safe screening for SLOPE p=0} and \eqref{eq: screening SLOPE all p} as a function of the ``quality'' of the safe sphere. These tests will respectively be referred to as ``{\texttt{test-\idximp=1}}{}'', ``{\texttt{test-\idximp=\idxps}}{}'' and ``{\texttt{test-all}}{}'' in the following.
Figures \ref{fig:comparing choice of q_r} (see \Cref{sec:SLOPE screening main}) and \ref{fig:effectiveness} represent this performance criterion as a function of some parameter \(R_0\) (described below) and different values of the ratio \(\lambda / \lambda_{\max}\). The results are averaged over \(50\) realizations. For each simulation trial, we draw a new realization of \(\mathbf{\obsletter}\in\kR^{100}\) and \(\bfA\in\kR^{100\times300}\) according to the distributions described in \Cref{subsec:simu:setup}. We consider Toeplitz dictionaries in \Cref{fig:comparing choice of q_r} and Gaussian dictionaries in \Cref{fig:effectiveness}. The safe sphere used in the screening tests is constructed as follows. A primal-dual solution \((\pv_a,\dv_a)\) of problems~\eqref{eq:primal problem} and~\eqref{eq:dual problem} is evaluated with high accuracy, \textit{i.e.}, with a duality gap of \(10^{-14}\) as the stopping criterion. More precisely, \(\pv_a\) is first evaluated by solving the SLOPE problem~\eqref{eq:primal problem} with the algorithm proposed in~\cite{Bogdan2015}. To evaluate \(\dv_a\), we extend the so-called ``dual scaling'' operator \cite[Section~3.3]{Ghaoui2010} to the SLOPE problem: we let \(\dv_a = (\mathbf{\obsletter}- \bfA\pv_a)/\beta(\mathbf{\obsletter}- \bfA\pv_a)\) where \begin{equation} \label{eq:def dual scaling} \forall \bfz\in\kR^{m}:\ \beta(\bfz) \triangleq \max\kparen{ 1, \max_{\idxps\in\intervint{1}{{n}}} \frac{ \sum_{k=1}^\idxps \kvbar{\ktranspose{\bfA}\bfz}_{\sortedentry{k}} }{ \lambda\sum_{k=1}^\idxps \slopeweight_k } } . \end{equation} The couple \((\pv_a,\dv_a)\) is then used to construct a sphere $\calS(\mathbf{c},R)$ in \(\kR^{m}\) whose parameters are given by \begin{subequations} \begin{align} \label{eq:xp-effectiveness:spherec} \mathbf{c} \;=\; & \dv_a \\ % \label{eq:xp-effectiveness:spherer} R \;=\; & R_0 + \sqrt{ 2\kparen{ \primfun(\pv_a) - \dualfun(\dv_a) } } \end{align} \end{subequations} where \(R_0\) is a nonnegative scalar. We note that for \(R_0=0\), the latter sphere corresponds to the GAP safe sphere described in~\eqref{eq:gap sphere}.\footnote{ We note that the GAP safe sphere derived in \cite{Ndiaye2017} for problem~\eqref{eq:Ghaoui_GenericProblem} extends to SLOPE since 1) the dual problem has the same mathematical form and 2) its derivation does not leverage the definition of the dual feasible set. } Hence,~\eqref{eq:xp-effectiveness:spherec} and~\eqref{eq:xp-effectiveness:spherer} define a safe sphere for any scalar \(R_0 \geq 0\). \begin{figure} \includegraphics[width=\columnwidth]{xp0_gaussian0.pdf} \caption{ \label{fig:effectiveness} Percentage of zero entries in the solution of the SLOPE problem identified by {\texttt{test-\idximp=1}}{} (orange lines), {\texttt{test-\idximp=\idxps}}{} (green lines) and {\texttt{test-all}}{} (blue lines) as a function of \(R_0\) for the Gaussian dictionary, three values of \(\lambda/\lambda_{\max}\) and three parameter sequences \(\{\slopeweight_k\}_{k=1}^{n}\). } \end{figure} \Cref{fig:comparing choice of q_r} concentrates on the sequence \oscar{1}, whereas in \Cref{fig:effectiveness} each subfigure corresponds to a different choice of \(\{\slopeweight_k\}_{k=1}^{n}\). For the three considered screening strategies, we observe that the detection performance decreases as \(R_0\) increases. Interestingly, different behaviors can be noticed. For all simulation setups, {\texttt{test-\idximp=1}}{} reaches a detection rate of \(100\%\) whenever \(R_0\) is sufficiently small.
The performance of {\texttt{test-\idximp=\idxps}}{} varies from one sequence to another: it outperforms {\texttt{test-\idximp=1}}{} for \oscar{1}, is able to detect at most \(20\%\) of the zeros for \oscar{2}, and fails for all values of \(R_0\) for \oscar{3}. Finally, {\texttt{test-all}}{} quite logically outperforms the two other strategies. The gap in performance depends on both the considered setup and the radius \(R_0\) but can be quite significant in some cases. For example, when \(\lambda/\lambda_{\max}=0.5\) and \(R_0=10^{-2}\), there are \(80\%\) more entries passing {\texttt{test-all}}{} than {\texttt{test-\idximp=1}}{} for all parameter sequences. These results may be explained as follows. First, we already mentioned in \Cref{sec:screening-rule} that when the radius of the safe sphere is sufficiently small (that is, when \(R_0\) is close to zero), {\texttt{test-\idximp=1}}{} is expected to be the best\footnote{in the sense defined in \cref{footnote:definition of best screening test} page~\pageref{footnote:definition of best screening test}.} screening test within the family of tests defined in \Cref{th: safe screening for SLOPE}. Similarly, if the SLOPE weights satisfy \(\slopeweight_1=\slopeweight_{n}\), we showed in \Cref{lemma:optimality p=0 for LASSO} that no test in \Cref{th: safe screening for SLOPE} can outperform {\texttt{test-\idximp=\idxps}}{}. Hence, one may reasonably expect that this conclusion remains valid whenever \(\slopeweight_1\simeq\slopeweight_{n}\), as observed for the sequence \oscar{1} in our simulations. On the other hand, passing {\texttt{test-\idximp=\idxps}}{} becomes more difficult when the parameter \(\slopeweight_{n}\) is small. As a matter of fact, the test will \emph{never} pass when \(\slopeweight_{n}=0\). In our experiments, the sequences \(\{\slopeweight_k\}_{k=1}^{n}\) are such that \(\slopeweight^{\texttt{OSCAR}}_{n}\) is close to zero for \oscar{2} and \oscar{3}. Finally, since {\texttt{test-all}}{} encompasses the two other tests, it is expected to always perform at least as well as either of them.\\ \subsection{Benchmarks} \label{subsec:simu:bench} As far as our simulation setup is concerned, the results presented in the previous section show a significant advantage in implementing {\texttt{test-all}}{} in terms of detection performance. However, this conclusion does not take into account the numerical complexity of the tests. We note that, although the proposed screening rules can lead to a significant reduction of the problem dimensions, our tests also induce some additional computational burden. In particular, we emphasized in \Cref{sec:implementation SLOPE screening} that {\texttt{test-all}}{} can be verified for all atoms of the dictionary with a complexity \(\calO({n}\log{n} + T\nscreen)\) where \(T\leq {n}\) is a problem-dependent parameter and \(\nscreen\) is the number of atoms passing the test. Moreover, we also note that, when a GAP safe sphere is used in the implementation of the tests, its construction requires the identification of a dual feasible point $\dv$, and this operation typically induces a computational overhead of \(\mathcal{O}({n}\log {n})\) (see below for more details). In this section, we therefore investigate the benefits (from a ``complexity-accuracy trade-off'' point of view) of interleaving the proposed safe screening methodology with the iterations of an accelerated proximal gradient algorithm~\cite{Bogdan2015}. In all our tests, we consider the GAP safe sphere defined in \eqref{eq:gap sphere}.
The primal point used in the construction of the GAP sphere corresponds to the current iterate of the solving procedure, say $\pv^{(t)}$. A dual feasible point $\dv^{(t)}$ is constructed as \begin{align} \dv^{(t)} = \frac{\mathbf{\obsletter}- \bfA\pv^{(t)}}{\beta(\mathbf{\obsletter}- \bfA\pv^{(t)})} \end{align} where $\kfuncdef{\beta}{\kR^{m}}{\kR}$ is either defined as in \eqref{eq:def dual scaling} or as follows: \begin{equation} \label{eq:def dual scaling 2} \forall \bfz\in\kR^{m}:\ \beta(\bfz) \triangleq \max \kparen{ 1, \displaystyle \max_{k\in\intervint{1}{{n}}} \frac{ \big\vert\ktranspose{\bfA}\bfz\big\vert_{\sortedentry{k}} }{ \lambda \slopeweight_k } }. \end{equation} Definition~\eqref{eq:def dual scaling} matches the standard definition of the ``dual scaling'' operator proposed in \cite[Section~3.3]{Ghaoui2010}, whereas \eqref{eq:def dual scaling 2} corresponds to the option considered in \cite{Bao:2020dq}.\footnote{ See companion code of \cite{Bao:2020dq} available at\\ \url{https://github.com/brx18/Fast-OSCAR-and-OWL-Regression-via-Safe-Screening-Rules/tree/1e08d14c56bf4b6293899ae2092a5e0238d27bf6}.} We notice that the two options require sorting the elements of $\kvbar{\ktranspose{\bfA}\bfz}$ and thus lead to a complexity overhead scaling as $\mathcal{O}({n}\log {n})$. In our simulations, we consider the following four solving strategies:\vspace{0.2cm} \begin{enumerate} \item Run the proximal gradient procedure~\cite{Bogdan2015} with \textit{no} screening. \item Interleave some iterations of the proximal gradient algorithm with {\texttt{test-\idximp=\idxps}}{} and construct the dual feasible point with \eqref{eq:def dual scaling}. \item Interleave some iterations of the proximal gradient algorithm with {\texttt{test-\idximp=\idxps}}{} and construct the dual feasible point with \eqref{eq:def dual scaling 2}. \item Interleave some iterations of the proximal gradient algorithm with {\texttt{test-all}}{} and construct the dual feasible point with \eqref{eq:def dual scaling}. \vspace{0.2cm} \end{enumerate} These strategies will respectively be denoted ``\texttt{PG-no}{}'', ``\texttt{PG-\idximp=\idxps}{}'', ``\texttt{PG-Bao}{}'' and ``\texttt{PG-all}{}'' in the sequel. We note that \texttt{PG-Bao}{} closely matches the solving procedure considered in \cite{Bao:2020dq}. We compare the performance of these solving strategies by resorting to Dolan-Mor\'e profiles~\cite{Dolan2002}. More precisely, we run each procedure for a given budget of time (that is, the algorithm is stopped after a predefined amount of time) on \(I=50\) different instances of the SLOPE problem. In \texttt{PG-\idximp=\idxps}{}, \texttt{PG-Bao}{} and \texttt{PG-all}{}, the screening procedure is applied once every 20 iterations. Each problem instance is generated by drawing a new dictionary \(\bfA\in\kR^{100\times300}\) and observation vector \(\mathbf{\obsletter}\in\kR^{100}\) according to the distributions described in \Cref{subsec:simu:setup}. We then compute the following performance profile for each solver \(\texttt{solv}\in\kbrace{\text{\texttt{PG-no}{}, \texttt{PG-\idximp=\idxps}{}, \texttt{PG-Bao}{}, \texttt{PG-all}{}}}\): \begin{equation} \label{eq:exp:def_performance_profile} \rho_{\texttt{solv}} (\delta) \triangleq 100\,\frac{\card[\kset{i\in\intervint{1}{I}}{d_{i,\texttt{solv}}\leq \delta}]}{I} \quad \forall \delta\in\kR_+ \end{equation} where \(d_{i, \texttt{solv}}\) denotes the dual gap attained by solver \(\texttt{solv}\) for problem instance \(i\).
\(\rho_{\texttt{solv}}(\delta)\) thus represents the (empirical) probability that solver \(\texttt{solv}\) reaches a dual gap no greater than \(\delta\) within the considered time budget. \Cref{fig:bench} presents the performance profiles obtained for three types of dictionaries (Gaussian, Uniform and Toeplitz) and three different weighting sequences \(\{\slopeweight_k\}_{k=1}^{n}\) (\oscar{1}, \oscar{2} and \oscar{3}). The results are displayed for \(\lambda/\lambda_{\max} = 0.5\) but similar performance profiles have been obtained for other values of the ratio \(\lambda/\lambda_{\max}\). All algorithms are implemented in Python with Cython bindings and the experiments are run on a Dell laptop with a 1.80-GHz Intel Core i7 processor. For each setup, we adjusted the time budget so that \(\rho_{\text{\texttt{PG-all}{}}} (10^{-8})\simeq50\%\) for the sake of comparison. As far as our simulation setup is concerned, these results show that the proposed screening methodologies improve the solving accuracy compared to a standard proximal gradient algorithm. \texttt{PG-all}{} improves the average accuracy over \texttt{PG-no}{} in all the considered settings. The gap in performance depends on the setup but is generally quite significant. \texttt{PG-\idximp=\idxps}{} also enhances the average accuracy in most cases and performs at least comparably to \texttt{PG-Bao}{} in all setups. As expected, the behavior of \texttt{PG-\idximp=\idxps}{} and \texttt{PG-Bao}{} is more sensitive to the choice of the weighting sequence \(\{\slopeweight_k\}_{k=1}^{n}\). In particular, the screening performance of these strategies decreases when \(\slopeweight_{n}\simeq 0\), as emphasized in \Cref{subsec:simu:effectiveness}. This results in no accuracy gain over \texttt{PG-no}{} for the sequence \oscar{3}, as illustrated in \Cref{fig:bench}. Nevertheless, we note that, even in the absence of any gain, \texttt{PG-\idximp=\idxps}{} and \texttt{PG-Bao}{} do not seem to significantly degrade the performance as compared to \texttt{PG-no}{}. \\ \begin{figure} \includegraphics[width=\columnwidth]{setup1a_precision8_exactFalse_time.pdf} \caption{ \label{fig:bench} Performance profiles of \texttt{PG-no}{}, \texttt{PG-\idximp=\idxps}{}, \texttt{PG-Bao}{} and \texttt{PG-all}{} obtained for the ``Gaussian'' (column 1), ``Uniform'' (column 2) and ``Toeplitz'' (column 3) dictionaries and \(\lambda / \lambda_{\max}=0.5\) with a budget of time. First row: \oscar{1}, second row: \oscar{2} and third row: \oscar{3}. } \end{figure}
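For completeness, the dual-scaling variants \eqref{eq:def dual scaling} and \eqref{eq:def dual scaling 2} and the GAP-sphere radius are straightforward to reproduce. The sketch below is our own illustration (not the benchmarked implementation); it assumes the quadratic data-fidelity term of \eqref{eq:primal problem} and strictly positive weights:
\begin{verbatim}
# Sketch (ours): dual feasible point by dual scaling, and GAP radius.
import numpy as np

def dual_point(A, y, x, w, lam, variant="slope"):
    z = y - A @ x
    s = np.sort(np.abs(A.T @ z))[::-1]     # sorting: O(n log n) overhead
    if variant == "slope":                 # SLOPE dual scaling
        beta = max(1.0, np.max(np.cumsum(s) / (lam * np.cumsum(w))))
    else:                                  # entrywise variant
        beta = max(1.0, np.max(s / (lam * w)))
    return z / beta

def gap_radius(A, y, x, u, w, lam):
    """R = sqrt(2 * duality gap), i.e. the GAP sphere with zeta = 1."""
    pen = lam * np.sum(w * np.sort(np.abs(x))[::-1])   # SLOPE norm of x
    P = 0.5 * np.sum((y - A @ x) ** 2) + pen           # primal value
    D = 0.5 * np.sum(y ** 2) - 0.5 * np.sum((y - u) ** 2)  # dual value
    return np.sqrt(2.0 * max(P - D, 0.0))  # guard against round-off
\end{verbatim}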
\section{Introduction} Quantum graphs, formed by connected networks of bonds and vertices, are ideally suited to study questions from quantum chaos and random matrix theory (RMT). In closed quantum graphs the main interest has focused on the statistical properties of the spectra. Most studies in this respect were motivated by the famous conjecture by Bohigas, Giannoni, and Schmit (BGS) that the universal features of the spectra of chaotic systems should be described by RMT \cite{boh84b}. Using supersymmetry techniques, Gnutzmann and Altland \cite{gnu04b} proved the BGS conjecture for the two-point correlation function for graphs with incommensurate bond lengths. Their result was generalized to all correlation functions by Pluha{\v r} and Weidenm\"{u}ller \cite{plu14}, who furthermore proved the applicability of RMT to the scattering properties of graphs \cite{plu13a}. Just as for billiard systems \cite{stoe90}, there is a one-to-one correspondence between a quantum graph and the corresponding microwave network, called a microwave graph in the following. This correspondence has been used in particular by Sirko and co-workers in numerous experiments to study spectral and scattering properties of microwave graphs (see Ref.~\cite{hul04} as an example). A specific feature of open graphs is the existence of topological resonances, corresponding to states that exist exclusively within the system and are invisible from the outside \cite{gnu13}. Specifically designed graphs were used to mimic spin-$\frac{1}{2}$ systems for the first experimental realization of the Gaussian symplectic ensemble \cite{reh16,reh18}, following an idea by Joyner {\it et~al.}~\cite{joy14}. The most recent applications of graphs have concerned non-Weyl graphs \cite{law19b} and the study of coherent perfect absorption and complex zeros of the scattering matrix of graphs \cite{che20}. All experiments as well as numerical studies have been performed on graphs with a small number of vertices $V$, typically below 10--20, whereas the above-mentioned proofs of the applicability of RMT hold only for strongly connected graphs in the limit $V\to\infty$. This point will become important later. In the microwave studies the graphs are realized in terms of networks formed by cables connected by \Tjunctions at the vertices. A vector network analyzer measures the reflection at one port attached to the graph, or the transmission from one port to another, if there are more of them. The total $S$ matrix is available experimentally, including the phases, a unique property of the technique. The scattering matrix contains all the information needed for the determination of the graph spectrum. It will be called the Neumann spectrum in the following, since the \Tjunctions at all vertices obey Neumann boundary conditions. This specification is needed since another spectrum will be of importance in the following, the Dirichlet spectrum, describing a graph where all vertices obey Dirichlet boundary conditions. This situation corresponds to a totally disintegrated graph, whose spectrum is the union of the spectra of all individual bonds with Dirichlet boundary conditions at both ends. The two spectra are tightly interlaced with each other, a phenomenon generically found in systems subject to rank-1 perturbations \cite{sim95b}. Consequences of the change of boundary conditions from Neumann to Dirichlet (and any other situation in between) were discussed in the monograph by Berkolaiko and Kuchment \cite{ber13}.
The implications of interlacing for the spectral statistics in the context of RMT, however, have not been considered so far. \section{Theory} \subsection{The graph secular equation system} For a better understanding of the interplay between the Neumann and the Dirichlet spectrum, we have to look in somewhat more detail at the mathematical description of graphs. In the presentation we follow the work by Kottos and Smilansky \cite{kot99a}. In the accessible frequency range the cables used in the experiments support only one propagating mode. The wave fields within the graph have to obey two constraints. The first one is energy conservation, meaning that at each vertex $n$ there exists a unique potential $\varphi_n$ for all bonds meeting at this vertex. This condition is automatically met by means of the ansatz \begin{equation} \label{eq:potential} \psi_{nm}(x)=\frac{1}{\sin kl_{nm}}\left[\varphi_n\sin k(l_{nm}-x)+\varphi_m\sin kx\right] \end{equation} for the wave within the bond connecting vertices $n$ and $m$, where $x$ is the distance to vertex $n$ and $l_{nm}$ is the length of the bond. Equation~(\ref{eq:potential}) holds for time-reversal invariant graphs, the only ones considered here. The second constraint is current conservation at each vertex $n$, \begin{equation} \label{eq:current} \sum\limits_m \left.\frac{\mathrm{d}\psi_{nm}(x)}{\mathrm{d}x}\right|_{x=0}=0\,, \end{equation} where the sum runs over all vertices $m$ connected to vertex $n$ by a bond. Equation~(\ref{eq:current}) holds for Neumann boundary conditions at the vertices. Plugging the expression (\ref{eq:potential}) into Eq.~(\ref{eq:current}), we obtain an equation system for the potentials, \begin{equation} \label{eq:hom} \sum\limits_{m}h_{nm}\varphi_m=0\,, \end{equation} where \begin{equation} \label{eq:hsec} h_{nm}=-\delta_{nm}\sum\limits_{m'}f_{nm'} +g_{nm}\,, \end{equation} with \begin{equation} \label{eq:fg} f_{nm}=\cot kl_{nm}\,,\quad g_{nm}=1/\sin k l_{nm}\,, \end{equation} if there is a bond connecting vertices $n$ and $m$, and $f_{nm}=g_{nm}=0$ otherwise. For a dangling bond with Dirichlet boundary condition at the open end the ansatz (\ref{eq:potential}) reduces to \begin{equation} \label{eq:dang} \psi_n(x)=\frac{1}{\sin kl_n}\varphi_n\sin k(l_n-x) \end{equation} whence it follows that \begin{equation} \label{eq:gdang} g_n=0 \end{equation} for dangling bonds. The end points of dangling bonds will not be counted in the number of vertices, since the boundary condition $\psi_n(l_n)=0$ has already been taken into account by the ansatz (\ref{eq:dang}). For the homogeneous equation system (\ref{eq:hom}) to have non-trivial solutions, the determinant of the matrix $h(k)$ with elements $h_{nm}(k)$ has to vanish, \begin{equation} \label{eq:det} \left| h(k)\right|=0\,. \end{equation} The roots $k_n$ of this equation generate the Neumann spectrum of the graph. On the other hand, $h_{nm}$ becomes singular whenever $kl_{nm}$ is an integer multiple of $\pi$. This is exactly the resonance condition for the Dirichlet spectrum belonging to the bond connecting vertices $n$ and $m$. The Dirichlet spectrum hence appears via the poles of $|h(k)|$. In the following all lengths will be assumed to be incommensurate to avoid degeneracies of the Dirichlet spectrum, one of the necessary ingredients to obtain a spectrum statistically described by RMT. Since the distance between successive $k$ eigenvalues on a bond of length $l_i$ is $\Delta k=\pi/l_i$, each bond contributes $\rho^D_i=l_i/\pi$ to the density of states of the Dirichlet spectrum.
Its total density of states hence is $\rho^D=\sum_i \rho^D_i= l_\mathrm{tot}/\pi$, where $l_\mathrm{tot}=\sum_il_i$ is the total length of the graph. According to Weyl's law, this is identical to the mean density of states of the Neumann spectrum \cite{kot99a}. Hence the Neumann and the Dirichlet spectrum have the same mean density of states. \subsection{The graph Green function} For an experimental study of the spectral properties, the graph has to be opened by attaching one or more open bonds. Let us hence assume that there are open bonds $n=1,\dots, N$ attached at vertices $1,\dots, N$. The field within the bonds may be written as the superposition of two waves propagating in opposite directions, \begin{equation} \label{eq:psi} \psi_n(x)=a_ne^{-ikx}-b_ne^{ikx}\,, \quad n=1,\dots, N \end{equation} where $x$ is the distance to the vertex, and $a_n$ and $b_n$ are the amplitudes of the waves propagating towards and away from the vertex, respectively. The definition~(\ref{eq:psi}) corresponds to the convention applied in microwave technology. It is also in accordance with definitions applied in the context of quantum dots \cite{bee96} and nuclear physics \cite{guh98}. In quantum graphs \cite{kot99a} another definition is in use, where, in contrast to Eq.~(\ref{eq:psi}), both wave components enter with a positive sign. \begin{figure} \includegraphics[width=\columnwidth]{fig1_toygraph}\\ \caption{\label{fig:fig1} Plot of $\mathrm{Re}(G)$ as a function of $k=2\pi\nu/c$ for the graph shown in the inset as obtained from a microwave reflection measurement ($l_{1,2}/\mathrm{m}= 0.448, 0.736$). The Neumann spectrum is formed by the poles $k_n$ and the Dirichlet spectrum by the zeros $k^D_n$ of $\mathrm{Re}(G)$. Due to absorption, all poles are converted into dispersion-like resonances. In all figures Neumann vertices are depicted by filled circles and Dirichlet ones by open circles.} \end{figure} The two constraints, energy and current conservation, yield, for vertices $1,\dots, N$, \begin{equation} \label{eq:potcurr} \begin{array}{rcl} \varphi_k &=& a_k-b_k \\ \sum\limits_m h_{km}\varphi_m&=&i(a_k+b_k) \end{array}\,, \end{equation} for $k=1,\dots, N$. The equation system (\ref{eq:hom}) has now become inhomogeneous, \begin{equation} \label{eq:inhom} h\varphi=i(a+b) \end{equation} where $a=(a_1,\dots, a_N,0,\dots,0)^T$ and $b=(b_1,\dots, b_N,0,\dots,0)^T$. It follows that \begin{equation} \label{eq:inhom1} \varphi=i h^{-1}(a+b)\,. \end{equation} At the coupling vertices $1,\dots, N$ the $\varphi_k$ are fixed by the constraints (\ref{eq:potcurr}), whence it follows that \begin{equation} \label{eq:G} a-b=iG(a+b)\,, \end{equation} where $G$ is the matrix with elements \begin{equation} \label{eq:G1} G_{kl}=(h^{-1})_{kl}\,, \quad k,l=1, \dots, N\,. \end{equation} Incoming and outgoing amplitudes are connected via the scattering matrix $S$, \begin{equation} \label{eq:S} b=Sa\,. \end{equation} Equation (\ref{eq:G}) yields for the scattering matrix \begin{equation} \label{eq:S1} S=\frac{1-iG}{1+iG}\,. \end{equation} A slightly different expression for the scattering matrix is well known from quantum dots \cite{bee96} and nuclear physics \cite{guh98}, \begin{equation} \label{eq:S2} S=\frac{1-iW^\dag G W}{1+iW^\dag G W}\,, \end{equation} where $G$ is the Green's function, and the components $W_k$ of the vector $W$ specify the coupling strengths to the open channels. For an ideal coupling, as provided by the \Tjunctions, $W_k=1$ holds for all channels. The $W_k$ hence do not appear in the present case.
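As an aside, the above construction is easy to evaluate numerically. The following sketch (ours; the graph encoding, the function names, and the inclusion of absorption via $k\to k+i\lambda$, discussed in the next section, are our own choices) assembles $h(k)$ from Eqs.~(\ref{eq:hsec})--(\ref{eq:fg}) and (\ref{eq:gdang}) and returns the one-port quantities of Eqs.~(\ref{eq:G1}) and (\ref{eq:S1}):
\begin{verbatim}
# Sketch (ours): secular matrix h(k) and one-port scattering quantities.
import numpy as np

def h_matrix(k, L, dangling=None):
    """L[n, m] = length of the bond joining vertices n and m (0 if none);
    dangling = {n: [lengths]} of Dirichlet-terminated dangling bonds."""
    V = L.shape[0]
    f = np.zeros((V, V), dtype=complex)    # f_nm = cot(k l_nm)
    g = np.zeros((V, V), dtype=complex)    # g_nm = 1/sin(k l_nm)
    mask = L > 0
    f[mask] = 1.0 / np.tan(k * L[mask])
    g[mask] = 1.0 / np.sin(k * L[mask])
    h = g - np.diag(f.sum(axis=1))         # secular matrix
    for n, lengths in (dangling or {}).items():
        for l in lengths:                  # dangling bonds: g = 0
            h[n, n] -= 1.0 / np.tan(k * l)
    return h

def S_one_port(k, L, dangling=None, lam=0.0):
    """Port attached at vertex 0; absorption included via k -> k + i*lam."""
    G = np.linalg.inv(h_matrix(k + 1j * lam, L, dangling))[0, 0]
    return (1 - 1j * G) / (1 + 1j * G)
\end{verbatim}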
The above derivation of the connection between the scattering matrix and the $h$ matrix of the graph is more or less equivalent to the one presented in Refs.~\cite{kot99a,kot04}, which, however, yield the negative of expression (\ref{eq:S1}) for $S$, a consequence of the differing sign convention in Eq.~(\ref{eq:psi}). \section{Experiment and numerics} \subsection{Interlacing features of the experimental Green function} Our microwave equipment allows us to take spectra from 50\,MHz to 20\,GHz. The networks are constructed from standard microwave coaxial cables connected by \Tjunctions. In the experimentally accessible frequency range, each cable supports only one propagating mode. A vector network analyzer (VNA) measures the reflection from one open cable attached to the graph or the transmission from one cable to another one. Microwave technology follows the 50\,$\Omega$ convention, meaning that the cables connected to the VNA are ideally matched. Each attached cable hence constitutes an open channel with no reflection from its end. In the present work only one attached cable has been used; hence the scattering matrix reduces to just a phase factor if absorption is ignored, $S=e^{i\alpha}$, and $G=G_{11}= -\tan(\alpha/2)$ is just a number. Since $G$, and not $S$, is the main quantity of interest, we present our experimental results in the following in terms of \begin{equation} \label{eq:GofS} G=-i\frac{1-S}{1+S}\,, \end{equation} holding for one open channel. Mathematically, the conversion from the expression (\ref{eq:S1}) to the expression (\ref{eq:GofS}) is trivial; experimentally it is not. Usually, even after a careful calibration there remains a global phase drift of the $S$ matrix of typically about 0.2$\pi$/GHz, which has to be removed before the conversion (\ref{eq:GofS}) can be performed. As an illustrative toy example we present experimental results for the graph shown in Fig.~\ref{fig:fig1}. It consists of just two dangling bonds of lengths $l_1$ and $l_2$ terminated by short ends, corresponding to Dirichlet boundary conditions. For this case we obtain \begin{equation} \label{eq:Gtoy} G=-\frac{\sin kl_1 \sin kl_2}{\sin k(l_1+l_2)}\,. \end{equation} In the presence of absorption, always the case in the experiment, $k$ has to be replaced by $k+i\lambda$. For the toy graph there is only one vertex, the secular matrix $h$ becomes just a number, and Eq.~(\ref{eq:G1}) simplifies to $G=h^{-1}$. The zeros of $h$ are thus converted into poles of $G$ and vice versa. Both Eq.~(\ref{eq:Gtoy}) and Fig.~\ref{fig:fig1} nicely illustrate these features: the Neumann spectrum, formed by the zeros of $h$ at $k_n=n\pi/(l_1+l_2)$, shows up in the poles of $G$, and the Dirichlet spectrum, formed by the poles of $h$ at $k^D_{1n}=n\pi/l_1$ and $k^D_{2n}=n\pi/l_2$, in the zeros of $G$. Here one has to keep in mind that a vertex with Neumann boundary conditions connected to only two bonds may simply be removed without changing the spectrum, an immediate consequence of current conservation. For larger graphs the situation is somewhat more complicated. Now $G=G_{11}$ is obtained from the matrix inverse of $h$, \begin{equation} \label{eq:G2} G=(h^{-1})_{11}=\left|h_{11}\right|/\,|h|\,, \end{equation} where $h_{11}$ is the matrix obtained from $h$ by removing the first row and the first column.
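The identity $G=(h^{-1})_{11}=|h_{11}|/\,|h|$ is readily checked numerically for a larger graph. The sketch below (Python/NumPy) assembles the secular matrix of a fully connected four-vertex graph with arbitrarily chosen incommensurable bond lengths and a small imaginary part of $k$ mimicking absorption:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L = np.zeros((4, 4))                     # fully connected four-vertex graph
iu = np.triu_indices(4, 1)
L[iu] = rng.uniform(0.3, 1.8, 6)
L += L.T

def h(k):                                # secular matrix h_nm
    f = np.where(L > 0, 1.0/np.tan(k*L + (L == 0)), 0)
    g = np.where(L > 0, 1.0/np.sin(k*L + (L == 0)), 0)
    return g - np.diag(f.sum(axis=1))

hk = h(3.3 + 0.01j)                      # k -> k + i*lambda (absorption)
print(np.linalg.inv(hk)[0, 0])           # (h^{-1})_{11}
print(np.linalg.det(hk[1:, 1:]) / np.linalg.det(hk))   # |h_11| / |h|
\end{verbatim}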
\begin{figure} \includegraphics[width=\columnwidth]{fig2_g} \caption{\label{fig:fig2} Experimental $|\mathrm{Im}(G)|$ (blue) and $-|\mathrm{Im}(G^{-1})|$ (orange) for the upper tetrahedral graph shown on the right ($l_{1,\dots, 7}/\mathrm{m}= 0.376, 0.440, 0.786, 0.870, 0.952, 1.593, 1.754$). The combs denote the calculated spectra for the two tetrahedrons depicted on the right.} \end{figure} Since the determinant of $h$ appears in the denominator, the Neumann spectrum is still made up of the poles of $G$; but what, then, is the meaning of the zeros? This question is answered by the following reasoning: $G=0$ means, according to Eq.~(\ref{eq:S1}), $S=1$ at the coupling vertex. The waves are hence totally reflected at the entrance. But this can only happen if there is a resonance within the graph with zero amplitude at the coupling point. {\em Hence the zeros of $G$ represent the spectrum of the graph obtained from the original one by changing the boundary conditions at the coupling point from Neumann to Dirichlet, or, equivalently, of the graph where the coupling vertex has been removed and the two emerging dangling bonds have been short-end terminated.} A more rigorous foundation of this qualitative argument can be found in the Appendix. As an illustration, Fig.~\ref{fig:fig2} shows the spectrum of a tetrahedral graph. Now $|\mathrm{Im}(G)|$ is plotted, which, up to the broadening of the resonances by absorption and up to a factor, is just the density of states of the Neumann resonances. Here $G$ has been obtained from the measured reflection as described above. Furthermore, $|\mathrm{Im}(G^{-1})|$ is shown, mirrored at the abscissa, converting the zeros of $G$ into broadened $\delta$ peaks. The combs in the upper and the lower part of the figure mark the positions of the calculated eigenfrequencies of the closed tetrahedron with Neumann and with Dirichlet boundary conditions at the coupling vertex, respectively. The bond lengths have been determined from transmission measurements for each individual bond, with errors, however, of several millimeters, resulting from uncertainties due to the connecting junctions. Therefore, the lengths entering the calculation have been optimized (within these uncertainties) in a least-squares fit procedure to adjust the experimental spectra to the theoretical ones. If this is done, perfect agreement is found (see Fig.~\ref{fig:fig2}). The eigenvalues of the two spectra are strictly alternating, as is the case for the toy graph as well (see Fig.~\ref{fig:fig1}). This is a manifestation of the Neumann-Dirichlet interlacing theorem (see Chap.~3.11 of Ref.~\cite{ber13}; for a tutorial introduction see Ref.~\cite{ber17}): {\em If the boundary conditions at one vertex of a graph are changed from Neumann to Dirichlet, the eigenvalues of the original and the new graph appear strictly alternating.} In its general form the interlacing theorem allows for arbitrary changes of mixed boundary conditions between Neumann and Dirichlet, which, however, are of no relevance in the present context. \subsection{The Neumann-Dirichlet interlacing} There is a spectral interlacing also for $|h|$, just as for $G$ in the one-channel case. The situation is now somewhat different, however. To move from the Neumann spectrum to the Dirichlet one, the boundary conditions have to be changed from Neumann to Dirichlet one after the other at {\em all} vertices, not just at one of them.
Now there is no longer a strict alternation in the sequence of the respective eigenvalues, but a strong correlation remains: the maximum number of Neumann eigenvalues confined between two successive Dirichlet ones is given by the number of vertices. We checked this numerically for the tetrahedron shown in Fig.~\ref{fig:fig2} and found, for a total of 942 Neumann eigenvalues, that in 38.4\%, 32.1\%, 18.5\%, 8.5\%, and 0.4\% of all cases zero, one, two, three, and four Neumann eigenvalues, respectively, were confined between two neighboring Dirichlet eigenvalues. There was no case with more than four Neumann eigenvalues confined between Dirichlet eigenvalues, in accordance with the interlacing theorem. \begin{figure} \includegraphics[width=\columnwidth]{fig3_h} \caption{\label{fig:fig3} Numerical $|\mathrm{Im}(|h|^{-1})|$ (blue) and $-|\mathrm{Im}(|h|)|$ (orange) for the upper tetrahedral graph shown on the right. The lengths are the same as in Fig.~\ref{fig:fig2}. The combs denote the calculated spectra for the two tetrahedrons depicted on the right (see the text for details).} \end{figure} To determine the total $h$ matrix experimentally, one would have to attach open channels to each of the $V$ vertices and measure the total $V\times V$ scattering matrix. We spared ourselves this considerable effort and resorted to numerics. Figure~\ref{fig:fig3} shows the results for a tetrahedron with the same lengths as in the experiment, but with the coupling vertex removed. The upper part shows the spectrum of the Neumann resonances, obtained by adding a small imaginary part $i\varepsilon$ to $k$ and taking the imaginary part of $|h|^{-1}$. For the purpose of a better visualization of the resonances, we did not perform the limit $\varepsilon\to 0$ but kept a non-zero value of $\varepsilon$. The upper comb shows the same spectrum again. Comparison with Fig.~\ref{fig:fig2} shows that the spectra of the two graphs depicted in the upper right of Figs.~\ref{fig:fig2} and \ref{fig:fig3} are identical, illustrating the above-mentioned fact that the spectrum of a graph is not changed if a vertex with Neumann boundary condition is added along a bond. The lower part of Fig.~\ref{fig:fig3} shows $|\mathrm{Im}(|h|)|$, mirrored at the abscissa, corresponding to the Dirichlet spectrum. The combs in the lower part of the figure mark the positions of the spectra of the individual bonds constituting the Dirichlet spectrum, the three lowermost combs for the three longest bonds; in the topmost comb all Dirichlet eigenvalues associated with the shorter bonds are combined. \subsection{Number variances of Neumann and Dirichlet spectra} Each bond of a graph contributes a series of equally spaced resonances to the Dirichlet spectrum, which eventually, for large $k$ values, add up to a sequence of more or less randomly distributed eigenvalues. However, since on average Dirichlet and Neumann eigenvalues have to alternate, it is unavoidable that the spectral statistics of the Dirichlet eigenvalues leaves its mark, to some extent, on the statistics of the Neumann resonances. A suitable quantity to explore the consequences of this spectral interlacing is the number variance $\Sigma^2(L)$, i.e., the variance of the number of resonances in a spectral range of length $L$. It is plotted in Fig.~\ref{fig:fig4} for both the Dirichlet and the Neumann spectrum of a tetrahedron, this time obtained from a computer simulation of a closed graph.
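A direct way to estimate $\Sigma^2(L)$ from a list of eigenvalues is to count resonances in randomly placed windows. A minimal sketch (Python/NumPy; the spectrum is assumed sorted and unfolded to mean spacing one) reads:
\begin{verbatim}
import numpy as np

def sigma2(levels, L, n_win=200000, seed=0):
    """Number variance of a sorted, unfolded spectrum (mean spacing 1)."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(levels[0], levels[-1] - L, n_win)
    counts = np.searchsorted(levels, a + L) - np.searchsorted(levels, a)
    return counts.var()
\end{verbatim}
For an equidistant test spectrum, \texttt{sigma2(np.arange(1e5), L)} reproduces the picket-fence result quoted below.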
In Fig.~\ref{fig:fig4} the mean level spacing has been normalized to one, $\Delta=\pi/l_\mathrm{tot}=1$. In addition, the expectations for a Poissonian ensemble and for the Gaussian orthogonal random matrix ensemble (GOE) are shown. \begin{figure} \includegraphics[width=\columnwidth]{fig4_sigma2}\\ \caption{\label{fig:fig4} Number variance $\Sigma^2(L)$ for the Dirichlet (orange) and Neumann (blue) resonances of a tetrahedral graph (numerics, lengths as in Fig.~\ref{fig:fig2}, with the coupling point removed). Only data for $k>\pi/l_\mathrm{min}$ have been considered. The dash-dotted green and red lines correspond to the expectations for the Poissonian and the Gaussian orthogonal random matrix ensemble, respectively, with the mean level spacing normalized to one. The vertical dashed line marks $L_\mathrm{min}=\pi/l_\mathrm{min}$, the $L$ value associated with the shortest periodic orbit. The left part of the figure shows the range $0<L<2$ in more detail. } \end{figure} In the case of many bonds with incommensurable lengths the Dirichlet spectrum locally approximates Poissonian statistics; the corresponding number variance, however, does not follow the Poissonian expectation. The long-range spectral correlations resulting from the picket-fence structure of the spectra of the individual bonds are responsible for this. The number variance for a spectrum of equidistant resonances with a spacing of one is given by \begin{equation} \label{eq:sigma2_top} \Sigma^2(L)= \{L\}\left(1-\{L\}\right)\,, \quad \{L\}=L-[L]\,. \end{equation} Furthermore, for different families of Dirichlet spectra $\Sigma^2(L)$ is additive as long as the lengths are incommensurable. The curve depicted in Fig.~\ref{fig:fig4} agrees with this prediction, with deviations of the order of the line width. The main message of Fig.~\ref{fig:fig4} is the number variance of the Neumann resonances. For small $L$, $\Sigma^2(L)$ follows the GOE expectation (see the left part of the figure), but already at $L=1.5$ it starts to deviate from the RMT expectation and eventually oscillates slowly about an average value of about 0.5. For $L>\pi/l_\mathrm{min}$ each bond contributes at least one resonance in a spectral range of length $L$; beyond this $L$ value the oscillations in the number variances of the Neumann and the Dirichlet spectrum start to synchronize, a clear indication of the correlation between the two spectra. \begin{figure} \includegraphics[width=\columnwidth]{fig5_sigmas} \caption{\label{fig:fig5} Number variance $\Sigma^2_{n+d}(L)$ for the sum of Neumann and Dirichlet resonances of a tetrahedral graph in a window of length $L$ (dotted green). To improve the statistics the results from $10^4$ tetrahedrons of the same total length as the one shown in Fig.~\ref{fig:fig4} have been superimposed. In addition, the number variance of the Neumann resonances $\Sigma^2_n(L)$ (dashed blue), of the Dirichlet resonances $\Sigma^2_d(L)$ (dash-dotted orange), and the difference $\Delta_{n+d}(L)=\Sigma^2_{n+d}(L)-\Sigma^2_n(L)-\Sigma^2_d(L)$ (solid red) are shown. } \end{figure} The best tool to quantify the correlation between the two spectra is the variance of the sum (or difference) of the number of Neumann and Dirichlet resonances in an interval of length $L$, $\Sigma^2_{n+d}(L)$. For uncorrelated Neumann and Dirichlet spectra, $\Sigma^2_{n+d}(L)$ should be just the sum of the number variances of the Neumann and the Dirichlet spectrum, $\Sigma^2_n(L)$ and $\Sigma^2_d(L)$, respectively.
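This additivity test is straightforward to perform. A minimal sketch (reusing the \texttt{sigma2} helper above; \texttt{kn} and \texttt{kd} are assumed to hold the unfolded Neumann and Dirichlet eigenvalues) estimates the quantity $\Delta_{n+d}(L)$ introduced below:
\begin{verbatim}
import numpy as np

def delta_nd(kn, kd, L):
    merged = np.sort(np.concatenate([kn, kd]))
    return sigma2(merged, L) - sigma2(kn, L) - sigma2(kd, L)

# vanishes for uncorrelated spectra; about -0.2 at large L for the
# interlaced Neumann/Dirichlet spectra (cf. Fig. 5)
\end{verbatim}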
The difference $\Delta_{n+d}(L)=\Sigma^2_{n+d}(L)-\Sigma^2_n(L)-\Sigma^2_d(L)$ is hence a measure of the correlations. Figure~\ref{fig:fig5} shows the various number variances involved for a set of tetrahedrons of the same total length as used in the experiment, obtained by superimposing the results from $10^4$ realizations. With increasing $L$, all number variances begin to saturate, $\Delta_{n+d}(L)$ in particular at a value of about $-0.2$, significantly different from zero, thus illustrating once more the correlation between the two spectra. \section{Discussion} These findings require an explanation. Hitherto there has been no doubt that the spectral statistics of graphs are well described by RMT. There are even proofs of this fact, mentioned in the Introduction. In addition, in all experimental graph studies, including our own, agreement with RMT predictions had been found, with respect both to spectral and to scattering properties. How does this fit together? The standard tool to characterize spectral statistics is the spacing distribution $p(s)$ of neighboring levels. In our experimental results we could not find any deviations from RMT predictions, but the statistical evidence was moderate, with only about 800 levels involved. The same had been done by Kottos and Smilansky \cite{kot97a} in a numerical study with a much larger data ensemble of 80\,000 levels, and in fact they {\em did} observe deviations from RMT predictions, though only on the percent level, comparable in size to the deviation between the Wigner distribution and the exact RMT expression. This is in accordance with our own results as well as with previous results for the number variance. Kottos and Smilansky studied $\Sigma^2(L)$ of graphs as a function of the connectivity \cite{kot99a}. They argued that for totally connected graphs $\Sigma^2(L)$ should approach the random matrix limit. For $L<2$ they found a similar agreement with RMT as exhibited in the left part of Fig.~\ref{fig:fig4}; they did not show, however, results for $L>2$. Deviations from RMT predictions for the number variance have been reported already in Refs.~\cite{die17,lu20}, and for the spectral rigidity in Ref.~\cite{hul04}. It is known from semiclassical quantum mechanics that the shortest periodic orbit shows up in a saturation of the number variance and the spectral rigidity \cite{ber85}; in the present case, however, this cannot be the explanation. In the graph, the shortest periodic orbit runs along the shortest bond, of length $l_\mathrm{min}$. The lowest resonance associated with this bond is found at $\pi/l_\mathrm{min}= l_\mathrm{tot}/l_\mathrm{min}$ [using $l_\mathrm{tot}=\pi$, which follows from the normalization of the mean level spacing to one (see above)]. Saturation is thus expected only beyond this value, i.e., for the present graph at $L=17.6$, far beyond $L=1.5$, where the saturation actually sets in. The saturation must therefore have another origin, and our explanation is the interlacing of the Neumann and the Dirichlet spectrum. This might explain why the consequences of spectral duality have remained unnoticed so far: the picket-fence structure of the Dirichlet spectrum leaves its marks in the long-range correlations but only slightly influences the short-distance Neumann eigenvalue statistics. This is not in contradiction with the proofs that the spectra of incommensurable graphs obey RMT statistics: the theory works in the limit of large vertex numbers, whereas in the experiments as well as in the numerical studies the vertex number is typically below 10--20.
However, even for a large number of vertices $V$ the interlacing of the Dirichlet and Neumann spectra is still present: after at most $V$ Neumann eigenvalues a Dirichlet eigenvalue appears, and vice versa. Thus, on the $k$ axis there are windows containing up to $V$ Neumann eigenvalues alternating with windows containing up to $V$ Dirichlet eigenvalues. Within each Dirichlet window the eigenvalues are Poisson distributed, but (and this is the essential point) a Dirichlet window contributes {\em only once} to the Neumann nearest-level spacing distribution $p(s)$. Thus, in the limit $V\to\infty$ the contribution from the Dirichlet windows to $p(s)$ becomes negligible. This is a qualitative way to reconcile spectral interlacing with the RMT behavior of the Neumann resonances for large $V$, but it shows at the same time that essential features are missed in the RMT approach; it reflects only half of the truth! \section{Conclusion} The conclusion is clear: for graphs the range of validity of RMT is restricted to at most $V$ neighbors, where $V$ is the number of vertices. This does not mean that one has to question all previous graph experiments. Many of them concentrated on level spacing statistics, which in any case is sensitive mainly to the level repulsion of close neighbors. But whenever larger-distance properties are involved, a study of the interplay of Neumann and Dirichlet eigenvalues is mandatory. The discussion of the variance of the sum of Neumann and Dirichlet eigenvalues in a window of a given length $L$, presented in this paper, is a first step in this direction. \begin{acknowledgments} We are grateful to Sven Gnutzmann, Nottingham, and Holger Schanz, Magdeburg, for clarifying discussions, and for calling our attention to a number of relevant references. J.L.~acknowledges financial support from the China Scholarship Council via Grant No.~202006180008. \end{acknowledgments} \section*{Appendix} Here we prove that the zeros of $G$ [see Eq.~(\ref{eq:G2})] are the eigenvalues of the graph obtained by removing vertex $1$, the coupling vertex, and by terminating the emerging dangling bonds with Dirichlet boundary conditions, in the following termed the ``truncated graph''. For the sake of simplicity, we assume that there are just two bonds connecting the coupling vertex, via vertices $L$ and $R$, to the rest of the graph, but the proof holds for an arbitrary number of coupling bonds. Ordering the rows and columns in the sequence $1$, $L$, $R$, followed by the rest of the graph, the secular matrix $h$ [see Eq.~(\ref{eq:hsec})] may be written as \begin{equation} \label{eq:A1} h=\left( \begin{array}{c|c} -f_L-f_R &\quad g^T \\[0.5ex]\hline\vspace{0.5ex} g &\quad \hat{h} \\ \end{array} \right)\,, \end{equation} where \begin{equation} \label{eq:A2} g^T=(g_L,g_R,0,\dots, 0) \end{equation} and $\hat{h}$ is the secular matrix of the truncated graph. Elementary matrix algebra yields \begin{equation} \label{eq:A3} G=\left(h^{-1}\right)_{11}=\left[-f_L-f_R-g^T\hat{h}^{-1} g\right]^{-1}\,. \end{equation} The spectrum of the truncated graph is given by the zeros of $|\hat{h}|$. They show up as poles of the term in brackets, resulting in zeros of $G$. All eigenvalues of the truncated graph thus generate zeros of $G$. To complete the proof we have to show that there are {\em no further} zeros of $G$ not associated with the spectrum of the truncated graph.
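As a side check, the block-inversion (Schur complement) identity behind Eq.~(\ref{eq:A3}) is easily verified numerically. A minimal sketch (Python/NumPy) with a random symmetric matrix standing in for $h$ at a fixed $k$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(6, 6))
H = H + H.T                              # random symmetric stand-in for h
a, g, Hhat = H[0, 0], H[1:, 0], H[1:, 1:]
print(np.linalg.inv(H)[0, 0])            # (h^{-1})_{11}
print(1.0/(a - g @ np.linalg.inv(Hhat) @ g))   # right-hand side of Eq. (A3)
\end{verbatim}
Here the corner element \texttt{a} plays the role of $-f_L-f_R$; the two printed values agree to machine precision.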
The only possible candidates are the poles of $f_L$ and $f_R$; however, these poles are canceled by the corresponding poles of the third term in the brackets, which we now prove. For the rest of this appendix we assume that $\hat{h}$ is invertible, i.e., we stay away from the resonances of the truncated graph. Similarly to the above, we write $\hat{h}$ in block form, \begin{equation} \label{eq:A4} \hat{h}=\left( \begin{array}{c|c} -F-\tilde{F} &\quad \tilde{g}^T \\[0.5ex]\hline\vspace{0.5ex} \tilde{g} &\quad \tilde{h} \\ \end{array} \right)\,. \end{equation} The upper left corner element contains the $L,R$ block with \begin{equation} \label{eq:A5} F=\left( \begin{array}{cc} f_L & 0 \\ 0 & f_R \\ \end{array} \right)\,,\quad \tilde{F}=\left( \begin{array}{cc} \sum f_{Li} & -g_{LR} \\ -g_{LR} & \sum f_{Ri} \\ \end{array} \right)\,, \end{equation} where the sums $\sum f_{Li}$ and $\sum f_{Ri}$ run over all bonds connecting vertices $L$ and $R$, respectively, with the rest of the graph. The $g_{LR}$ term is present only if vertices $L$ and $R$ are directly connected by a bond. Here $\tilde{h}$ is the secular matrix of the graph obtained by removing vertices $L$ and $R$ and terminating the emerging dangling bonds by short ends, and $\tilde{g}$ is the matrix containing the $g_{Li}$ and $g_{Ri}$ terms coupling vertices $L$ and $R$, respectively, to the rest of the graph. It follows that \begin{equation} \label{eq:A6} g^T \hat{h}^{-1} g = (g_L g_R)\left[-F-\tilde{F}-\tilde{g}^T\tilde{h}^{-1}\tilde{g}\right]^{-1}\left( \begin{array}{c} g_L \\ g_R \\ \end{array} \right)\,. \end{equation} Using $g_n^2-f_n^2=1$, $n=L,R$, which follows immediately from the definitions~(\ref{eq:fg}), we obtain \begin{equation} \label{eq:A7} -f_L-f_R= \Tr F^{-1}-(g_L g_R)F^{-1}\left( \begin{array}{c} g_L \\ g_R \\ \end{array} \right)\,. \end{equation} Plugging the expressions (\ref{eq:A6}) and (\ref{eq:A7}) into Eq.~(\ref{eq:A3}), we obtain \begin{equation} \label{eq:A8} G=\left[\Tr F^{-1}+(g_L g_R)D\left( \begin{array}{c} g_L \\ g_R \\ \end{array} \right)\right]^{-1}\,, \end{equation} where \begin{eqnarray} \label{eq:A9} \nonumber D &=&\left[F+\tilde{F}+\tilde{g}^T\tilde{h}^{-1}\tilde{g}\right]^{-1}-F^{-1}\\\nonumber &=&\left[F+\tilde{F}+\tilde{g}^T\tilde{h}^{-1}\tilde{g}\right]^{-1}\\\nonumber &&\times\left[F-(F+\tilde{F}+\tilde{g}^T\tilde{h}^{-1}\tilde{g})\right] F^{-1}\\ &=&-\left(F+\tilde{F}+\tilde{g}^T\tilde{h}^{-1}\tilde{g}\right)^{-1}\!\!\!(\tilde{F}+\tilde{g}^T\tilde{h}^{-1}\tilde{g})\,F^{-1}. \end{eqnarray} This was the crucial step: the singularities resulting from $f_L$ and $f_R$ have dropped out. Inserting the result into Eq.~(\ref{eq:A8}), we get \begin{equation} \label{eq:A10} G=\left[\Tr F^{-1}-(g_L g_R)F^{-1}\tilde{D}F^{-1}\left( \begin{array}{c} g_L \\ g_R \\ \end{array} \right)\right]^{-1}\,, \end{equation} where \begin{eqnarray} \label{eq:A11} \nonumber \tilde{D} &=&\!-FDF\\ &=&\!\left(1+\tilde{F}F^{-1}+\tilde{g}^T\tilde{h}^{-1}\tilde{g}F^{-1}\right)^{-1} \!\!\!(\tilde{F}+\tilde{g}^T\tilde{h}^{-1}\tilde{g})\,. \end{eqnarray} Now the limits $\sin kl_L \to 0$ or $\sin kl_R \to 0$ can be performed, whereby all terms depending on $kl_L$ or $kl_R$, namely $f_n^{-1}=\tan kl_n$ and $g_n/f_n=1/\cos kl_n$ ($n=L,R$), remain regular. Q.\,E.\,D.
\section{Introduction} One of the topical problems of high-$T_c$ cuprate physics is the coexistence and competition of antiferromagnetic, superconducting, and charge orderings~\cite{Fradkin2012}. Recent accurate measurements of various physical characteristics on thousands of cuprate samples~\cite{Bozovic2016} indicate fundamental discrepancies with the ideas based on the canonical Bardeen-Cooper-Schrieffer approach, and rather support a bosonic mechanism of high-$T_c$ superconductivity in cuprates. The study is complicated by the presence of heterogeneity due to dopants or non-isovalent substitution, as well as by an internal electronic tendency to heterogeneity~\cite{Moskvin2019}. A large number of theoretical models have been developed to account for the exotic electronic properties of cuprates in the normal state and to reveal the nature of their unconventional superconductivity. However, to date, the problem of a consistent theoretical description of the cuprate phase diagram is still far from being solved. Earlier, we developed a minimal model of cuprates in which the CuO$_2$ planes are considered as lattices of CuO$_4$ clusters, the main element of the crystal and electronic structure of cuprates. The on-site Hilbert space~\cite{Moskvin2011,Moskvin2013} is formed by three effective valence states of the cluster: [CuO$_4$]$^{7-}$, [CuO$_4$]$^{6-}$, and [CuO$_4$]$^{5-}$. The very possibility of considering these centers on an equal footing is due to the strong electron-lattice relaxation effects in cuprates~\cite{Mallett2013,Moskvin2019-2}. The centers differ in their spin state, $s=1/2$ for the [CuO$_4$]$^{6-}$ center and $s=0$ for the [CuO$_4$]$^{7-}$ and [CuO$_4$]$^{5-}$ centers, and in their orbital symmetry: $B_{1g}$ for the ground state of the [CuO$_4$]$^{6-}$ center, $A_{1g}$ for the [CuO$_4$]$^{7-}$ center, and the Zhang-Rice $A_{1g}$ state or more complicated low-lying non-Zhang-Rice states for the [CuO$_4$]$^{5-}$ center. In these many-electron atomic complexes with strong $p{-}d$ covalence and strong intra-center correlations, the electrons cannot be described within any conventional (quasi)particle approach that addresses the [CuO$_4$]$^{7-,6-,5-}$ centers within the on-site hole representation $|n\rangle$, $n = 0, 1, 2$, respectively. Instead of a conventional quasiparticle $k$-momentum description, we make use of a real-space on-site $S=1$ pseudospin formalism to describe the charge triplets, together with an effective spin-pseudospin Hamiltonian which takes into account both local and nonlocal correlations, single- and two-particle transport, as well as the Heisenberg spin-exchange interaction. Pseudospin approaches have long been used for strongly correlated electron systems~\cite{Castellani1979,Rice1981} and for the superconductivity of cuprates~\cite{Low1994}. The pseudospin formalism opens the possibility of simulations using the well-developed classical Monte Carlo (MC) method for constructing phase diagrams and studying the thermodynamic properties of the system. A similar effective $S=1$ spin-charge model for cuprates and its MC implementation were considered in the recent papers~\cite{Cannas2019,Frantz2021}. We organize the article as follows. In Section 2, we present the $S=1$ pseudospin formalism and the effective spin-pseudospin Hamiltonian of the model. In Section 3, we introduce the quasi-classical approximation for the wave functions and formulate the state selection algorithm.
The results of classical MC simulations of our model and their discussion are presented in Section 4. \section{Model} A minimal model describing the charge degree of freedom in cuprates~\cite{Moskvin2011,Moskvin2013} implies that for the CuO$_4$ centers in the CuO$_2$ plane the on-site Hilbert space is reduced to a charge triplet formed by the three many-electron valence states [CuO$_4$]$^{7-,6-,5-}$ (nominally Cu$^{1+, 2+, 3+}$). These states can be considered as the components of an $S=1$ pseudospin triplet with projections $M_S = {-}1,\, 0,\, {+}1$. The effective pseudospin Hamiltonian of the model cuprate, with the addition of the Heisenberg spin-spin exchange coupling of the $s=1/2$ [CuO$_4$]$^{6-}$ (Cu$^{2+}$) centers, can be written as follows: \begin{equation} \mathcal{H} = \mathcal{H}_{ch} + \mathcal{H}_{ex} + \mathcal{H}_{tr}^{(1)} + \mathcal{H}_{tr}^{(2)} - \mu \sum_i S_{zi} . \label{eq:Ham0} \end{equation} Here, the first term \begin{equation} \mathcal{H}_{ch} = \Delta \sum_i S_{zi}^2 + V \sum_{\left\langle ij\right\rangle} S_{zi} S_{zj} \end{equation} describes the on-site and inter-site nearest-neighbour density-density correlations, respectively, so that $\Delta=U/2$, with $U$ being the correlation parameter, and $V>0$. The sums run over the sites of a 2D square lattice, and $\left\langle ij \right\rangle$ denotes nearest neighbors. The second term \begin{equation} \mathcal{H}_{ex} = Js^2 \sum_{\langle ij \rangle} \boldsymbol{\sigma}_i \boldsymbol{\sigma}_j \end{equation} is the antiferromagnetic ($J>0$) Heisenberg exchange coupling for the [CuO$_4$]$^{6-}$ centers, where the $\boldsymbol{\sigma}=P_0 \mathbf{s}/s$ operators take into account the on-site spin density $P_0 = 1-S_z^2$, and $\mathbf{s}$ is the spin $s=1/2$ operator. The third term has the following form: \begin{multline} \mathcal{H}_{tr}^{(1)} \;=\; - t_p \sum_{\left\langle ij\right\rangle } \big( P_{i}^{+} P_{j}^{} + P_{j}^{+} P_{i}^{} \big) - t_n \sum_{\left\langle ij\right\rangle } \big( N_{i}^{+} N_{j}^{} + N_{j}^{+} N_{i}^{} \big) \\ {} - \frac{t_{pn}}{2} \sum_{\left\langle ij\right\rangle } \big( P_{i}^{+} N_{j}^{} + P_{j}^{+} N_{i}^{} + N_{i}^{+} P_{j}^{} + N_{j}^{+} P_{i}^{} \big) , \end{multline} where the transfer integrals $t_p$, $t_n$, $t_{pn}$ describe three types of correlated ``one-particle'' transport. The $P$ and $N$ operators are combinations of the pseudospin $S=1$ operators~\cite{Moskvin2011}: $P^{+} = \tfrac{1}{2} \left(S_{+} + T_{+}\right)$, $N^{+} = \tfrac{1}{2} \left(S_{+} - T_{+}\right)$, where $T_{+} = S_z S_{+} + S_{+} S_z$. The next term is \begin{equation} \mathcal{H}_{tr}^{(2)} = - t_b \sum_{\left\langle ij\right\rangle} \big( S_{{+}i}^2 S_{{-}j}^2 + S_{{+}j}^2 S_{{-}i}^2 \big) , \end{equation} where the transfer integral $t_b$ describes the two-particle (``composite boson'') transport \cite{Moskvin2011}. The last term, with the chemical potential $\mu$, allows us to account for the constraint on the total charge density $n$: \begin{equation} n = \frac{1}{N}\left\langle \sum_i S_{zi} \right\rangle = const. \label{eq:charge_const} \end{equation} The mean-field approximation (MFA) for the model with the Hamiltonian \eqref{eq:Ham0} allowed us to find~\cite{Panov2019} the critical temperature equations for the antiferromagnetic (AFM) ordering, the charge ordering (CO), the superconducting ordering (SC), and the ``metal'' phase (M). The MFA phase diagram~\cite{Moskvin2020,Moskvin2021} of the model \eqref{eq:Ham0} demonstrates the possibility of correctly describing the features of phase diagrams typical of cuprates.
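In the basis $\{\left|{+}1\right\rangle, \left|0\right\rangle, \left|{-}1\right\rangle\}$ introduced below, the algebra of the $P$ and $N$ operators is easy to make explicit. A minimal sketch (Python/NumPy, using the pseudospin matrices quoted in the next section) shows that $P^{+}$ has a single matrix element, $\left|0\right\rangle \to \left|{+}1\right\rangle$, while $N^{+}$ transfers $\left|{-}1\right\rangle \to \left|0\right\rangle$:
\begin{verbatim}
import numpy as np

Sz = np.diag([1.0, 0.0, -1.0])        # basis {|+1>, |0>, |-1>}
Sp = np.array([[0., 1., 0.],
               [0., 0., 1.],
               [0., 0., 0.]])
Tp = Sz @ Sp + Sp @ Sz                # T_+ = S_z S_+ + S_+ S_z
Pp = 0.5*(Sp + Tp)                    # only the |0>  -> |+1> element survives
Np = 0.5*(Sp - Tp)                    # only the |-1> -> |0>  element survives
print(Pp, Np, sep="\n")
\end{verbatim}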
In the quasi-classical approximation, we write the on-site wave function of the charge triplet as follows \begin{equation} \left| \Psi \right\rangle = c_{{+}1} \left| {+}1 \right\rangle + c_0 \left| 0 \right\rangle \left| \sigma \right\rangle + c_{{-}1} \left| {-}1 \right\rangle , \label{eq:Psi} \end{equation} where $\left| {+}1 \right\rangle$ and $\left| {-}1 \right\rangle$ are the orbital wave functions of [CuO$_4$]$^{5-}$ and [CuO$_4$]$^{7-}$ centers, respectively, $\left| 0 \right\rangle$ is the orbital wave function of [CuO$_4$]$^{6-}$ center, $\left| \sigma \right\rangle$ is the spin $s=1/2$ function: \begin{equation} \left| \sigma \right\rangle = e^{ -i \frac{\chi}{2} } \cos \frac{\eta}{2} \left| \uparrow \right\rangle + e^{ i \frac{\chi}{2} } \sin \frac{\eta}{2} \left| \downarrow \right\rangle , \label{eq:sigma} \end{equation} and the coefficients can be written in the following form: \begin{equation} c_{{+}1} = e^{- i\frac{\alpha}{2}} \sin\frac{\theta}{2}\cos\frac{\phi}{2} ,\quad c_{0} = e^{i\frac{\beta}{2}} \cos\frac{\theta}{2} , \quad c_{{-}1} = e^{ i\frac{\alpha}{2}} \sin\frac{\theta}{2}\sin\frac{\phi}{2} . \label{eq:ccc} \end{equation} Here $0 \le \theta \le \pi$, $0 \le \phi \le \pi$, $0 \le \alpha \le 2\pi$, $0 \le \beta \le 2\pi$. The matrices of pseudospin operators $S_z$ and $S_{\pm}$ in a basis $\left\{ \left| {+}1 \right\rangle , \left| 0 \right\rangle , \left| {-}1 \right\rangle \right\}$ are written as \begin{equation} S_z = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix} ,\quad S_{+} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} ,\quad S_{-} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} . \end{equation} Using equations (\ref{eq:Psi}-\ref{eq:ccc}) we can write average values for all operators in the Hamiltonian~\eqref{eq:Ham0} on the $i$th site: \begin{eqnarray} \left\langle \Psi_i \left| S_{zi} \right| \Psi_i \right\rangle &=& \frac{1}{2} \left( 1- \cos\theta_i \right) \cos\phi_i , \\ \left\langle \Psi_i \left| S_{zi}^2 \right| \Psi_i \right\rangle &=& \frac{1}{2} \left( 1- \cos\theta_i \right) , \\ \left\langle \Psi_i \left| S_{+ i}^2 \right| \Psi_i \right\rangle &=& \frac{1}{4} \, e^{ i \alpha_i} \left( 1- \cos\theta_i \right) \sin\phi_i , \\ \left\langle \Psi_i \left| P_{i}^{+} \right| \Psi_i \right\rangle &=& \frac{1}{2} \, e^{i \frac{\alpha_i+\beta_i}{2}} \sin\theta_i \cos\frac{\phi_i}{2} , \\ \left\langle \Psi_i \left| N_{i}^{+} \right| \Psi_i \right\rangle &=& \frac{1}{2} \, e^{i \frac{\alpha_i-\beta_i}{2}} \sin\theta_i \sin\frac{\phi_i}{2} , \\ \left\langle \Psi_i \left| \boldsymbol{\sigma}_i \right| \Psi_i \right\rangle &=& \frac{1}{2} \left( 1 + \cos\theta_i \right) \big\{ \sin \eta_i \cos \chi_i , \, \sin \eta_i \sin \chi_i , \, \cos \eta_i \big\} , \end{eqnarray} where the $x$-, $y$-, and $z$-components of vector are listed in curly brackets. 
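These expressions can be cross-checked against a direct evaluation with the wave function \eqref{eq:Psi}. A minimal sketch (Python/NumPy, random angles) compares two of them with the corresponding matrix elements:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
theta, phi = rng.uniform(0, np.pi, 2)
alpha, beta = rng.uniform(0, 2*np.pi, 2)
c = np.array([np.exp(-0.5j*alpha)*np.sin(theta/2)*np.cos(phi/2),
              np.exp( 0.5j*beta )*np.cos(theta/2),
              np.exp( 0.5j*alpha)*np.sin(theta/2)*np.sin(phi/2)])
Sz = np.diag([1.0, 0.0, -1.0])
Sp2 = np.zeros((3, 3)); Sp2[0, 2] = 1.0        # S_+^2 in this basis
print(c.conj() @ Sz @ c, 0.5*(1 - np.cos(theta))*np.cos(phi))
print(c.conj() @ Sp2 @ c,
      0.25*np.exp(1j*alpha)*(1 - np.cos(theta))*np.sin(phi))
\end{verbatim}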
These averages allow us to obtain the energy of the model~\eqref{eq:Ham0} in the quasi-classical approximation, \begin{equation} E = \Big\langle \prod_i \Psi_i \Big| \, \mathcal{H} \, \Big| \prod_i \Psi_i \Big\rangle , \end{equation} in the following form: \begin{eqnarray} E &=& \frac{\Delta}{2} \sum_i \left(1 - \cos\theta_i\right) + \frac{V}{4} \sum_{\left\langle ij\right\rangle} \left(1 - \cos\theta_i\right) \left(1 - \cos\theta_j\right) \cos\phi_i \, \cos\phi_j \nonumber\\[0.5em] &&{} +\frac{J}{16} \sum_{\langle ij \rangle} \left( 1 + \cos\theta_i \right) \left( 1 + \cos\theta_j \right) \big( \sin \eta_i \sin \eta_j \cos \left( \chi_i-\chi_j \right) + \cos \eta_i \cos \eta_j \big) \nonumber\\[0.5em] &&{} - \frac{t_p}{2} \sum_{\left\langle ij\right\rangle} \sin\theta_i \, \sin\theta_j \, \cos\frac{\phi_i}{2} \, \cos\frac{\phi_j}{2} \, \cos \frac{\alpha_i-\alpha_j+\beta_i-\beta_j}{2} \nonumber\\[0.5em] &&{} -\; \frac{t_n}{2} \sum_{\left\langle ij\right\rangle} \sin\theta_i \, \sin\theta_j \, \sin\frac{\phi_i}{2} \, \sin\frac{\phi_j}{2} \, \cos \frac{\alpha_i-\alpha_j-\beta_i+\beta_j}{2} \nonumber\\[0.5em] &&{} - \frac{t_{pn}}{4} \, \sum_{\left\langle ij\right\rangle} \sin\theta_i \, \sin\theta_j \, \bigg( \sin\frac{\phi_i+\phi_j}{2} \, \cos \frac{\alpha_i-\alpha_j}{2} \, \cos \frac{\beta_i+\beta_j}{2} \nonumber\\[0.5em] &&{}\qquad + \sin\frac{\phi_i-\phi_j}{2} \, \sin \frac{\alpha_i-\alpha_j}{2} \, \sin \frac{\beta_i+\beta_j}{2} \bigg) \nonumber\\[0.5em] &&{} - \frac{t_b}{8} \sum_{\left\langle ij\right\rangle} \left(1 - \cos\theta_i\right) \left(1 - \cos\theta_j\right) \sin\phi_i \, \sin\phi_j \, \cos(\alpha_i-\alpha_j) \nonumber\\[0.5em] &&{} - \frac{\mu}{2} \sum_i \left(1 - \cos\theta_i\right) \cos\phi_i . \end{eqnarray} The constraint \eqref{eq:charge_const} can be written as: \begin{equation} n = \frac{1}{2N} \sum_i \left(1 - \cos\theta_i\right) \cos\phi_i = const. \end{equation} \section{State selection algorithm} The average value of the spin $s=1/2$ operator, \begin{equation} \left\langle \sigma | \mathbf{s} | \sigma \right\rangle = \frac{1}{2} \left\{ \sin \eta \cos \chi , \, \sin \eta \sin \chi , \, \cos \eta \right\} , \end{equation} maps the spin states onto the unit sphere. A uniform distribution of randomly generated points over the unit sphere is obtained with the following state selection algorithm: \begin{enumerate} \item $\chi = 2 \pi \gamma_1$; \item $ \eta = \arccos \left( 1 - 2\gamma_2 \right)$, \end{enumerate} where $\gamma_{1,2}$ are random numbers in the $[0,1]$ range. The $(\chi,\eta)$-histogram is shown in Fig.\ref{fig:spin-hist}, left panel; this produces the flat $(\chi,z)$-histogram shown in Fig.\ref{fig:spin-hist}, right panel. \begin{figure}[t] \centerline{\includegraphics[width=0.9\textwidth]{eta-z-chi-1.pdf}} \caption{(Color online) The $(\chi,\eta)$-histogram (left panel), $(\chi,z)$-histogram (right panel)} \label{fig:spin-hist} \end{figure} This state selection algorithm is based on the following well-known lemmas~\cite{Sobol1973} of probability theory: \begin{lemma} Let $\gamma \in [0,1]$ be a uniformly distributed random variable, and let $f(x)$ be a probability density function. Then the random variable $\xi$ that satisfies the equation $$\int_{-\infty}^{\xi} f(x)dx = \gamma$$ has the probability density function $f(x)$. \end{lemma} \begin{lemma} Let $\gamma_1$ and $\gamma_2$ be random variables uniformly distributed in $[0,1]$, and let $f(x_1,x_2)=f_1(x_1)f_2(x_2|x_1)$ be the joint density function, where $f_1(x_1)$ is the marginal density and $f_2(x_2|x_1)$ is the conditional density.
Let $\xi_1$ and $\xi_2$ be continuous random variables that satisfy the system of equations $$ \int_{-\infty}^{\xi_1} f_1(x)dx = \gamma_1, \qquad \int_{-\infty}^{\xi_2} f_2(x|\xi_1)dx = \gamma_2. $$ Then $\xi_1$ and $\xi_2$ have the joint density function $f(x_1,x_2)$. \end{lemma} The states with the coefficients~\eqref{eq:ccc} correspond to a point in the octant of the unit sphere. We use the Metropolis algorithm for a system with conservation of the total charge. The charge at site $i$, $n_i$, is related to the parameters of the wave function by the expression \begin{equation} 2 n_i = \left( 1 - \cos \theta_i \right) \cos \phi_i . \end{equation} We require that, when the states of sites 1 and 2 change simultaneously, the total charge of the pair be preserved, $n_1 + n_2 = n_1' + n_2' = 2n$, and that the points representing the states uniformly fill the allowed area in the octant. The state selection algorithm based on Lemmas 1 and 2 consists of the following steps: \begin{enumerate} \item calculation of $n_1$, where $-1+n+|n| \le n_1 \le 1+n-|n|$, from the equation \begin{equation} G_1(n_1;n) = \gamma , \end{equation} where $\gamma \in [0,1]$ is a uniformly distributed random value, \begin{equation} G_1(n_1;n) = \frac{ \Phi(n_1) - \Theta(n) \, \Phi(-1+2|n|) }{ \Phi(1-2|n|) } , \end{equation} \begin{equation} \Phi(x) = \sgn x \Bigg[ \frac{2\sqrt{1+|x|}}{\pi} \bigg( \frac{ 2 \, \Pi\left(-1,\frac{\pi}{2} \,\big|\, m(x) \right) }{ 1+|x| } - m(x) \, K \left( m(x) \right) \bigg) -\frac{1}{2} \Bigg] + \frac{1}{2} , \end{equation} $m(x) = \left(1 - |x|\right) /\left(1 + |x|\right)$, $\Theta(x)$ is the Heaviside step function, $\Pi\left(-1,\frac{\pi}{2} \,\big|\, m \right) = \Pi_1 (1 , \sqrt{m} )$ is the complete elliptic integral of the third kind, and $K(m)$ is the complete elliptic integral of the first kind; \item calculation of the value $n_2 = 2n - n_1$; \item calculation of $ \cos \frac{\theta_i}{2} $ from the equation \begin{equation} \cos \frac{\theta_i}{2} = \sqrt{1 - |n_i|} \; \sn \left( \gamma_i K \left(m(n_i)\right) , m(n_i) \right) , \end{equation} where $\gamma_i \in [0,1]$, $i=1,2$, are uniformly distributed random values and $\sn \left( x , m \right)$ is the Jacobi elliptic function. If $n_i=0$, we take $\cos \frac{\theta_i}{2} = \gamma_i$; \item calculation of $\cos \phi_i$ from the equation \begin{equation} \cos \phi_i = \frac{n_i}{1-\cos^2 \frac{\theta_i}{2}} . \end{equation} If $n_i=0$ and $\cos \frac{\theta_i}{2}=1$, $\phi_i$ is a uniformly distributed random quantity, $0 \le \phi_i \le \pi$. \end{enumerate} The distributions in the case $n=0$ for the angles $(\phi,\theta)$ and for the states on the octant of the unit sphere are shown in Fig.\ref{fig:orb-hist}, left and right panels, respectively. \begin{figure}[t] \centerline{\includegraphics[width=0.35\textwidth]{theta-phi-2.pdf} \qquad \includegraphics[width=0.35\textwidth]{z-phi-2.pdf} } \caption{(Color online) The $(\phi,\theta)$-histogram (left panel), $(\phi,z)$-histogram (right panel)} \label{fig:orb-hist} \end{figure}
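The spin part and step 3 of the algorithm translate directly into code. A minimal sketch (Python with SciPy; the inversion of $G_1$ in step 1 via $\Phi$ and the special cases $n_i=0$ are omitted, so $n_1$ is taken as given here) reads:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipj

rng = np.random.default_rng(3)

# spin part: chi = 2*pi*gamma_1, eta = arccos(1 - 2*gamma_2)
chi = 2*np.pi*rng.random()
eta = np.arccos(1 - 2*rng.random())

def cos_half_theta(n_i, gamma_i):        # step 3 of the algorithm
    m = (1 - abs(n_i))/(1 + abs(n_i))    # m(x)
    sn = ellipj(gamma_i*ellipk(m), m)[0] # Jacobi sn(u, m)
    return np.sqrt(1 - abs(n_i))*sn

n1 = 0.3                                 # would come from inverting G_1 (step 1)
ct = cos_half_theta(n1, rng.random())
cos_phi = n1/(1 - ct**2)                 # step 4
print(chi, eta, ct, cos_phi)
\end{verbatim}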
\section{Results} \begin{figure}[t] \centerline{\includegraphics[width=\textwidth]{fig3.pdf}} \caption{(Color online) Left panel: dependence on the charge doping of the structure factors near the ground state, calculated with the parameters $\Delta$\,=\,0.8, $V$\,=\,0.625, $J$\,=\,1, $t_p$\,=\,0.35, $t_n$\,=\,0, $t_{pn}$\,=\,-0.24 (all in units of $t_b$). Right panel: the $T$\,-\,$x$ ($x$ being the charge doping) phase diagram for the model cuprate calculated with the same parameters as in the left panel. The inset shows a schematic phase diagram of hole-doped cuprates~\cite{Hamidian2016}.} \label{fig3} \end{figure} In the MC simulation, we calculated the structure factors \begin{equation} F_{\mathbf{q}}(A,B) = \frac{1}{N^2} \sum_{lm} e^{i\mathbf{q}\,(\mathbf{r}_l - \mathbf{r}_m)} \left\langle A_{l} B_{m} \right\rangle , \end{equation} where $A_{l}$ and $B_{m}$ are on-site operators and the summation is performed over all sites of the square lattice. To determine the type of ordering, we monitored the following structure factors: \begin{itemize} \item $F_{(\pi,\pi)}(\boldsymbol{\sigma},\boldsymbol{\sigma})$ for the antiferromagnetic (AFM) order, \item $F_{(\pi,\pi)}(S_z,S_z)$ for the charge order (CO), \item $F_{(0,0)}(S_{{+}}^2,S_{{-}}^2)$ for the superconducting order (SC), \item $F_{(0,0)}(P^{+},P)$ for the ``metal'' phase ($M$). \end{itemize} The results of the MC simulation for the doping dependencies of the main structure factors near the ground state, $T/t_b=0.05$, of the model cuprate are presented in Fig.~\ref{fig3}, left panel. The critical temperatures for the AFM, CO, and SC phases were determined from the jump of the corresponding structure factor from zero to a finite value. Fig.~\ref{fig3}, right panel, shows the $T$\,-\,$x$ phase diagram reconstructed from the MC simulations; the inset shows the typical phase diagram of hole-doped cuprates~\cite{Hamidian2016}. The phase diagram obtained for the model cuprate with the Hamiltonian \eqref{eq:Ham0} reproduces some of the most important features of real phase diagrams: with the given parameters, $\Delta=0.8$, $V=0.625$, $J=1$, $t_p=0.35$, $t_n=0$, $t_{pn}=-0.24$ (all in units of $t_b$), near the ``parent'' composition $x=0$ we obtain AFM ordering, which is replaced with increasing $x$ by SC ordering coexisting with CO ordering in the form of phase separation. The SC ordering exists in a finite region of doping and is replaced by the ``metal'' phase at $x\geq0.3$. The main difficulties in our modeling, such as inhomogeneous phase states and the associated problems in identifying them, are predetermined by the enhanced role of fluctuations in low-dimensional systems. At the same time, the obtained phase diagrams show promising possibilities for describing the coexistence and competition of various phase orders in cuprates. \section*{Acknowledgments} The research was supported by the Ministry of Education and Science of the Russian Federation, project FEUZ-2020-0054, and by the scholarship of the President of the Russian Federation No. SP-2278.2019.1.
\begin{document} \title[]{An efficient FV-based Virtual Boundary Method for the simulation of fluid-solid interaction} \author{Michele Girfoglio\textsuperscript{1,*}} \thanks{\textsuperscript{*}Corresponding Author.} \address{\textsuperscript{1}SISSA, International School for Advanced Studies, Mathematics Area, mathLab Trieste, Italy.} \email{mgirfogl@sissa.it} \author{Giovanni Stabile\textsuperscript{1}} \email{gstabile@sissa.it} \author{Andrea Mola\textsuperscript{1}} \email{amola@sissa.it} \author{Gianluigi Rozza\textsuperscript{1}} \email{grozza@sissa.it} \subjclass[2010]{78M34, 97N40, 35Q35} \keywords{Immersed boundary method; virtual boundary method; fluid-structure interaction; finite volume approximation; Navier-Stokes equations.} \date{} \dedicatory{} \begin{abstract} In this work, the Immersed Boundary Method (IBM) with feedback forcing introduced by \cite{Goldstein1993}, often referred to in the literature as the Virtual Boundary Method (VBM), is addressed. The VBM has been extensively applied within both a Spectral and a Finite Difference (FD) framework. Here, we propose to combine the VBM with a computationally efficient Finite Volume (FV) method. We will show that, for similar computational configurations, FV and FD methods provide significantly different results. Furthermore, we propose to modify the standard feedback forcing scheme, based on a Proportional-Integral (PI) controller, with the introduction of a derivative action, in order to obtain a Proportional-Integral-Derivative (PID) controller. The stability analysis for the Backward Differentiation Formula of order 1 (BDF1) time scheme is modified accordingly and extended to the Backward Differentiation Formula of order 2 (BDF2) time scheme. We will show that, for the BDF2 time scheme, the derivative action improves the stability characteristics of the system. Our approach is validated against numerical data available in the literature for a stationary/rigidly moving 2D circular cylinder in several configurations. Finally, a Fluid-Structure Interaction (FSI) benchmark, related to the frequency response of a cantilever beam coupled with a fluid, is presented: we numerically demonstrate that the introduction of the derivative action plays an important role in properly detecting the fluid-structure coupling. \end{abstract} \maketitle \input{sections/section1} \input{sections/section2} \input{sections/section3} \input{sections/section4} \input{sections/section5} \input{sections/section6} \input{sections/section7} \bibliographystyle{amsplain_mod} \section{Conclusions and Perspectives}\label{sec:conclusion} We showed the effectiveness of an FV-based VBM solver in simulating flow problems involving fixed, moving and deforming bodies embedded in a fluid region.
The interest in the Finite Volume approximation is due to the fact that, to the best of our knowledge, the application of the VBM within an FV framework has so far been unexplored. We proposed to modify the standard feedback forcing scheme of the VBM, based on a PI controller, with the introduction of a derivative action, in order to obtain a PID controller. We showed that the derivative action strongly affects the stability region, in different ways depending on the time scheme used. In order to showcase the features of our approach, we presented a computational study of the 2D flow past a moving/fixed rigid cylinder in several configurations and of the eigenfrequencies of a cantilever beam coupled with a surrounding fluid. We showed that, when bodies at rest are considered, the accuracy and efficiency of the computation are essentially governed by the integral controller. On the other hand, when bodies in rigid motion are considered, the proportional and derivative actions help to mitigate the unphysical oscillations that affect the solution at large integral controller coefficients. Finally, in the FSI context, we showed the importance of the derivative action for obtaining proper results when the added mass is comparable to the solid body mass. As a follow-up of the present work, we are going to carry out further investigations in order to better clarify the role played by the derivative action, especially within the FSI context. Moreover, we would like to use the present approach to simulate turbulent flows in both RANS and LES contexts. We are also interested in developing a VBM Reduced Order Model (ROM) within a Finite Volume framework. \section{Stability analysis}\label{sec:stability_analysis} In this section, the stability analysis of the VBM is presented. Concerning the standard formulation of the feedback forcing based on the PI controller, i.e. with $\gamma = 0$, precise stability boundaries in the forcing parameter space were discussed in previous works for several time schemes: BDF1 \cite{Lee2003, Shin2008}, second- and third-order Runge-Kutta schemes \cite{Lee2003}, and second- and third-order Adams-Bashforth schemes \cite{Goldstein1993, Lee2003}. Here, we analyze the influence of the derivative controller term on the stability features of the BDF1 scheme. Furthermore, we extend the analysis to the BDF2 time scheme which, to the best of our knowledge, has not been explored within the VBM framework. Following the approach of \cite{Shin2008}, the maximum magnitude of the Eulerian forcing can be expressed as: \begin{equation}\label{eq:max_forc} f_{max}(t) = C_{max} F(t) = C_{max} \left(\alpha \int_0^t \left({\bm u} - {\bm u}_{b}\right) d \tau + \beta \left({\bm u} - {\bm u}_{b}\right) + \gamma \left(\dot{{\bm u}} - \dot{{\bm u}}_{b}\right)\right), \end{equation} where $C_{max}$ is a coefficient depending on the type of regularized delta function as well as on the dimensionality of the problem \cite{Shin2008}. For the sake of simplicity, we set ${\bm u}_{b} = \dot{{\bm u}}_{b} = 0$. Since the $\alpha$ and $\beta$ terms make the forcing much larger than the other terms of equation \eqref{eq:ns-lapls-1}, we can restrict the stability analysis to the following equation \cite{Shin2008}: \begin{equation}\label{eq:ODE1} \dfrac{d {\bm u}}{dt} = C_{max} \left(\alpha \int_0^t {\bm u} d \tau + \beta {\bm u} + \gamma \dot{{\bm u}}\right).
\end{equation} \subsection{BDF1 time scheme} When the BDF1 scheme is adopted for the temporal discretization, we have \begin{equation}\label{eq:BDF1_classic} {\bm u}^{n+1} - {\bm u}^{n}= \alpha^\star \sum_{i=0}^{n} {\bm u}^{i} + \beta^\star {\bm u}^n + \gamma^\star \left({\bm u}^n - {\bm u}^{n-1}\right), \end{equation} where $\alpha^\star = C_{max}\alpha \Delta t^2$, $\beta^\star = C_{max}\beta \Delta t$ and $\gamma^\star = C_{max}\gamma$. In order to obtain the recurrence formula for the stability analysis, the equation at the previous time step, $n-1$, is subtracted from the equation at the present time step, $n$, resulting in: \begin{equation}\label{eq:BDF1_classic_2} {\bm u}^{n+1} - 2{\bm u}^{n} + {\bm u}^{n-1} = \alpha^\star {\bm u}^{n} + \beta^\star \left( {\bm u}^{n} - {\bm u}^{n-1}\right) + \gamma^\star \left({\bm u}^{n} - 2{\bm u}^{n-1} + {\bm u}^{n-2}\right). \end{equation} The stability features are related to $r = {\bm u}^{i+1}/{\bm u}^{i}$, $i = 0,1,2,...,n$. By recasting equation \eqref{eq:BDF1_classic_2} in terms of $r$, we obtain \begin{equation}\label{eq:BDF1_stab} r^3 - (2 + \alpha^\star + \beta^\star + \gamma^\star)r^2 + (1 + \beta^\star + 2 \gamma^\star) r - \gamma^\star = 0. \end{equation} Notice that, for $\gamma^\star = 0$, we recover the characteristic equation reported in \cite{Shin2008}: \begin{equation}\label{eq:BDF1_stab2} r^2- (2 + \alpha^\star + \beta^\star)r + 1 + \beta^\star = 0. \end{equation} The stability region, $|r|\leq1$, can be found by using the Jury criterion. For $\gamma^\star = 0$, we have \cite{Shin2008}: \begin{equation}\label{eq:stab1} -\alpha^\star - 2\beta^\star \leq 4. \end{equation} Now, we investigate the influence of $\gamma^\star$ on the stability region. Since the stability region is too complicated to be expressed in closed form with all the parameters $\alpha^\star$, $\beta^\star$ and $\gamma^\star$ left free, we fix the value of $\gamma^\star$ and limit ourselves to some specific cases in order to obtain a qualitative trend. Based on equation \eqref{eq:BDF1_stab}, a necessary condition for stability is that $\gamma^\star \in \left[-1, 1\right]$. We are only interested in investigating negative values. We set $-\gamma^\star = 0.25$, $-\gamma^\star = 0.5$, $-\gamma^\star = 0.75$, and $-\gamma^\star = 1$, obtaining, respectively, \begin{equation}\label{eq:stab2} \begin{cases} -\alpha^\star - 2\beta^\star \leq 3, \\ -\alpha^\star - 2\beta^\star \leq 2, \\ -\alpha^\star - 2\beta^\star \leq 1, \\ -\alpha^\star = -\beta^\star = 0. \end{cases} \end{equation} The stability regions \eqref{eq:stab1}-\eqref{eq:stab2} in 2D flow, obtained by setting $C_{max} = \dfrac{1}{2}$ \cite{Shin2008}, \begin{equation}\label{eq:stab2_without_star} \begin{cases} -\alpha \Delta t^2 - 2\beta \Delta t \leq 8 & \text{for $-\gamma = 0$}, \\ -\alpha \Delta t^2- 2\beta \Delta t \leq 6 & \text{for $-\gamma = 0.5$}, \\ -\alpha \Delta t^2- 2\beta \Delta t \leq 4 & \text{for $-\gamma = 1$}, \\ -\alpha \Delta t^2 - 2\beta \Delta t \leq 2 & \text{for $-\gamma = 1.5$}, \\ -\alpha \Delta t^2 = -\beta \Delta t = 0 & \text{for $-\gamma = 2$}, \end{cases} \end{equation} are displayed in Figure \ref{fig:BDF_stability_1} a). The flow is stable in the region below each line and unstable above it. We observe that the widest stability region is obtained for $\gamma = 0$; its size reduces, i.e. the intercept of the line moves downwards, as $-\gamma$ increases, and finally the region degenerates into the origin for $-\gamma = 2$.
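These boundaries are straightforward to check numerically by evaluating the roots of the characteristic polynomial \eqref{eq:BDF1_stab}. A minimal sketch (Python/NumPy), shown for $-\gamma^\star = 0.25$; the printed flags should agree with the corresponding criterion of \eqref{eq:stab2}:
\begin{verbatim}
import numpy as np

def stable(a, b, g):                 # a = alpha*, b = beta*, g = gamma*
    # roots of r^3 - (2+a+b+g) r^2 + (1+b+2g) r - g = 0
    r = np.roots([1.0, -(2 + a + b + g), 1 + b + 2*g, -g])
    return np.all(np.abs(r) <= 1 + 1e-9)

g = -0.25                            # -gamma* = 0.25
for a in (-0.5, -1.5, -2.5):
    for b in (-0.2, -0.8, -1.4):
        print(a, b, stable(a, b, g), -a - 2*b <= 3)
\end{verbatim}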
On the other hand, the shape of the stability region is not affected by $\gamma$. In order to validate these theoretical predictions, simulations of the transverse oscillations of a circular cylinder in a free stream (see Sec. \ref{sec:osc_y_cyl}) are performed for $\gamma$ = 0 (Figure \ref{fig:BDF_stability_2} a)) and $-\gamma$ = 1 (Figure \ref{fig:BDF_stability_3} a)). Numerically stable and unstable cases are denoted by circles and crosses, respectively. We observe that the numerical stability regions are wider than the analytical ones, because the analytical predictions are based on the maximum value of the forcing (eq. \ref{eq:max_forc}). This result is in agreement with \cite{Shin2008}. For $\gamma$ = 0, we observe that the simulation is stable up to the line $-\alpha\Delta t^2 - 2\beta\Delta t = 11$. By contrast, in \cite{Shin2008}, the simulation is stable up to the curve $-\alpha\Delta t^2 - 2\beta\Delta t = 10$. However, the marginal stability line is expected to change depending on the kind of flow. In conclusion, we learned that when the BDF1 time scheme is used, the PI controller is to be preferred to the PID controller in order to obtain larger stability regions. The derivative action therefore deteriorates the stability features of the system. \subsection{BDF2 time scheme} For the BDF2 time scheme, the approximation of equation \eqref{eq:ODE1} yields \begin{equation} 3{\bm u}^{n+1} - 4{\bm u}^{n} + {\bm u}^{n-1} = 2\left(\alpha^\star \sum_{i=0}^{n} {\bm u}^{i} + \beta^\star {\bm u}^n + \gamma^\star \left({\bm u}^n - {\bm u}^{n-1}\right)\right). \end{equation} Notice that for $-\gamma^\star = 0.5$ the scheme degenerates into the BDF1 scheme with $\gamma^\star = 0$, up to a scale factor. The characteristic equation determining $r$ for this scheme is given by \begin{equation}\label{eq:stab_BDF2} 3r^3 - (7 + 2 \alpha^\star + 2 \beta^\star + 2\gamma^\star)r^2 + (5 + 2 \beta^\star + 4 \gamma^\star) r - (1+2\gamma^\star) = 0. \end{equation} For $\gamma^\star = 0$, the stability region $|r|\leq1$ is \begin{equation}\label{eq:stab1BDF2} \begin{cases} - \alpha^\star - 2 \beta^\star \leq 8, \\ \alpha^\star - 2 \beta^\star \geq 0.\\ \end{cases} \end{equation} As we did for the BDF1 scheme, we investigate the influence of $\gamma^\star$ on the stability region. Based on equation \eqref{eq:stab_BDF2}, a necessary condition for stability is $\gamma^\star \in \left[-2, 1\right]$. We consider only negative values. We set $\gamma^\star = -0.4$, $\gamma^\star = -0.5$, $\gamma^\star = -0.6$, $\gamma^\star = -1$, $\gamma^\star = -1.5$, and $\gamma^\star = -2$, obtaining, respectively, \begin{equation}\label{eq:stab2BDF2} \begin{cases} \begin{cases} - \alpha^\star - 2 \beta^\star \leq 6.4, \\ \alpha^\star - 14 \beta^\star \geq 0,\\ \end{cases}\\ -\alpha^\star - 2\beta^\star \leq 6, \\ -\alpha^\star - 2\beta^\star \leq 5.6, \\ -\alpha^\star - 2\beta^\star \leq 4, \\ -\alpha^\star - 2\beta^\star \leq 2,\\ -\alpha^\star = -\beta^\star = 0.
\\ \end{cases} \end{equation} The stability regions \eqref{eq:stab1BDF2}-\eqref{eq:stab2BDF2} in 2D flow, obtained by setting $C_{max} = \dfrac{1}{2}$ \cite{Shin2008}, \begin{equation}\label{eq:stab3BDF2} \begin{cases} - \alpha \Delta t^2 - 2 \beta \Delta t \leq 16 \text{ and } \alpha \Delta t^2 - 2 \beta \Delta t \geq 0 & \text{for $-\gamma = 0$}, \\ - \alpha \Delta t^2 - 2 \beta \Delta t \leq 12.8 \text{ and } \alpha \Delta t^2 - 14 \beta \Delta t \geq 0 & \text{for $-\gamma = 0.8$}, \\ -\alpha \Delta t^2 - 2\beta \Delta t \leq 12 & \text{for $-\gamma = 1$}, \\ -\alpha \Delta t^2 - 2\beta \Delta t \leq 11.2 & \text{for $-\gamma = 1.2$}, \\ -\alpha \Delta t^2 - 2\beta \Delta t \leq 8 & \text{for $-\gamma = 2$}, \\ -\alpha \Delta t^2 - 2\beta \Delta t \leq 4 & \text{for $-\gamma = 3$}, \\ -\alpha \Delta t^2 = -\beta \Delta t = 0 & \text{for $-\gamma = 4$}, \\ \end{cases} \end{equation} are displayed in Figure \ref{fig:BDF_stability_1} b). The flow is stable in the region inside the line(s) and unstable in the region outside the line(s). Unlike the BDF1 time scheme, for the BDF2 time scheme $\gamma$ affects not only the size of the stability region but also its shape. We observe that for $\gamma = 0$ the stability region is bounded by an upper line, which has the same slope as the one obtained for the BDF1 time scheme but a greater intercept, and by a lower line, which has equal and opposite slope with respect to the upper one and null intercept. As $-\gamma$ approaches $1$ from the left, the slope of the lower line and the intercept of the upper one both decrease. It is worth pointing out that for $-\gamma < 1$ it is not possible to choose as forcing gains $\alpha \neq 0$ and $\beta = 0$, because no portion of the $-\alpha\Delta t^2$ axis belongs to the stability region. On the other hand, it is possible to choose values of $-\alpha \Delta t^2$ larger than those allowed by the BDF1 time scheme by introducing small values of $-\beta\Delta t$, as well as much larger values of $-\beta\Delta t$ for medium-low values of $-\alpha\Delta t^2$. For $-\gamma = 1$, the lower line coincides with the portion of the $-\alpha\Delta t^2$ axis for which $-\alpha\Delta t^2 \leq 12$. Then, for $-\gamma > 1$, the shape of the stability region becomes the same as that of the BDF1 time scheme, whilst its size exceeds the widest BDF1 region, the one obtained for $\gamma = 0$. The two regions coincide for $-\gamma = 2$. Finally, for $-\gamma > 2$, the region continues to shrink until it degenerates into the origin of the axes for $-\gamma = 4$. Simulations of transverse oscillations of a circular cylinder are performed for $\gamma$ = 0 (Figure \ref{fig:BDF_stability_2} b)) and $-\gamma$ = 2 (Figure \ref{fig:BDF_stability_3} b)). Numerically stable and unstable cases are denoted by circles and crosses, respectively. Just like for the BDF1 time scheme, we obtain numerical stability regions wider than the analytical ones. By comparing Figure \ref{fig:BDF_stability_2} a) and Figure \ref{fig:BDF_stability_3} b), we observe that, thanks to the derivative action, the BDF2 time scheme allows the maximum time step to be increased by about 13 \% with respect to the BDF1 time scheme for a given $\alpha$ when $\beta = 0$, \begin{equation}\label{eq:BDF2vsBDF1} \dfrac{\left(\Delta t\right)_{BDF2}}{\left(\Delta t\right)_{BDF1}} \approx \sqrt{\dfrac{14}{11}} \approx 1.13.
\end{equation} Of course, just like for the BDF1 scheme, the marginal stability line is expected to change depending on the kind of flow. Now, we investigate in detail how the numerical system behaves around $-\gamma = 1$ because, as previously noted, for this value the BDF2 time scheme degenerates, up to a scale factor, into the BDF1 time scheme, and the shape of the stability region changes. Figure \ref{fig:alpha_max} displays the numerical and analytical evolutions, as functions of $-\gamma$, of the maximum value of $-\alpha \Delta t^2$ for which stability occurs. We can observe the discontinuity exhibited by the analytical curve at $-\gamma = 1$, associated with the change in shape of the stability region. Numerically, the behaviour is smoother, and we observe an early onset of instability at values of $-\alpha \Delta t^2$ lower than the analytical predictions for $-\gamma$ close to $1$ from the right. In conclusion, we learned that when the BDF2 time scheme is used, the PID controller with an \emph{ad hoc} choice of $\gamma$ allows one to obtain larger stability regions than the combination of the standard PI controller with the BDF1 scheme. Therefore, the derivative action provides a general improvement of the stability features of the system when higher order time advancing schemes are considered. \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/BDF1_stability.pdf} \put(50,82){\small{a)}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/BDF2_stability.pdf} \put(50,82){\small{b)}} \end{overpic} \caption{Analytical stability regions in 2D flow: a) BDF1 time scheme, b) BDF2 time scheme.} \label{fig:BDF_stability_1} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/BDF1_0.pdf} \put(50,84){\small{a)}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/BDF2_0.pdf} \put(50,84){\small{b)}} \end{overpic} \caption{Numerical stability regions in 2D flow for $\gamma = 0$. Circles/crosses denote stable/unstable cases, respectively. Continuous lines denote the corresponding analytical curves: a) BDF1 time scheme, b) BDF2 time scheme.} \label{fig:BDF_stability_2} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/BDF1_1.pdf} \put(50,82){\small{a)}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/BDF2_2.pdf} \put(50,82){\small{b)}} \end{overpic} \caption{Numerical stability regions in 2D flow for $\gamma \neq 0$. Circles/crosses denote stable/unstable cases, respectively. Continuous lines denote the corresponding analytical curves: a) BDF1 time scheme and $-\gamma = 1$, b) BDF2 time scheme and $-\gamma = 2$.} \label{fig:BDF_stability_3} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{img/alpha_max.pdf} \caption{Evolution of $\left(-\alpha \Delta t^2\right)_{max}$ over $-\gamma$ for 2D flow related to the BDF2 time scheme: comparison between analytical (black line) and numerical (blue line) curves.} \label{fig:alpha_max} \end{figure} \section{Problem definition}\label{sec:problem_def} \subsection{The Navier-Stokes equations} We consider the motion of an incompressible viscous fluid in a time-independent domain $\Omega$ over a time interval of interest $(t_0, T)$.
The flow is described by the incompressible Navier-Stokes equations: \begin{align} \rho\,\partial_t {\bm u} + \rho\,\nabla\cdot \left({\bm u} \otimes {\bm u}\right) - \nabla\cdot \boldsymbol{\sigma} & = {\bm f}\quad \mbox{ in }\Omega \times (t_0,T),\label{eq:ns-mom}\\ \nabla\cdot {\bm u} & = 0\quad\, \mbox{ in }\Omega \times(t_0,T),\label{eq:ns-mass} \end{align} complemented by the boundary conditions \begin{align} {\bm u} & = {\bm u}_D\quad\ \mbox{ on } \partial\Omega_D \times(t_0,T),\label{eq:bc-ns-d}\\ \boldsymbol{\sigma}\cdot{\bm n} & = {\bm g}\quad \quad\mbox{ on } \partial\Omega_N \times(t_0,T),\label{eq:bc-ns-n} \end{align} and the initial data ${\bm u} = {\bm u}_0$ in $\Omega \times\{t_0\}$. Here $\overline{\partial\Omega_D}\cup\overline{\partial\Omega_N}=\overline{\partial\Omega}$ and $\partial\Omega_D \cap\partial\Omega_N=\emptyset$. In addition, $\rho$ is the fluid density, ${\bm u}$ is the fluid velocity, $\partial_t$ denotes the time derivative, $\boldsymbol{\sigma}$ is the Cauchy stress tensor, ${\bm f}$ is the momentum forcing applied to enforce the no-slip boundary condition along the immersed boundary, and ${\bm u}_D$, ${\bm g}$ and ${\bm u}_0$ are given. Equation (\ref{eq:ns-mom}) represents the conservation of linear momentum, while eq. (\ref{eq:ns-mass}) represents the conservation of mass. For Newtonian fluids $\boldsymbol{\sigma}$ can be written as \begin{equation}\label{eq:newtonian} \boldsymbol{\sigma} ({\bm u}, p) = -p \mathbf{I} +\mu (\nabla{\bm u} + \nabla{\bm u}^T), \end{equation} where $p$ is the pressure and $\mu$ is the constant \emph{dynamic} viscosity. Notice that, by plugging \eqref{eq:newtonian} into eq.~\eqref{eq:ns-mom} and exploiting the incompressibility constraint \eqref{eq:ns-mass}, so that $\nabla\cdot(\nabla{\bm u}^T) = \nabla(\nabla\cdot{\bm u}) = \bm{0}$, the momentum equation can be rewritten as \begin{align} \rho\, \partial_t {\bm u} + \rho\,\nabla\cdot \left({\bm u} \otimes {\bm u}\right) - \mu\,\Delta{\bm u} + \nabla p = {\bm f}\mbox{ in }\Omega \times (t_0,T).\label{eq:ns-lapls-1} \end{align} We define the Reynolds number as \begin{equation} \mbox{Re} = \frac{U_r L_r}{\nu}, \label{eq:re} \end{equation} where $\nu=\mu/\rho$ is the \emph{kinematic} viscosity of the fluid, and $U_r$ and $L_r$ are a characteristic macroscopic velocity and length, respectively. \subsection{The feedback forcing scheme} The interaction force between the fluid and the immersed boundary in the Lagrangian reference frame can be calculated by the feedback law \cite{Goldstein1993}: \begin{equation}\label{eq:PI} \bm F(s,t) = \alpha \int_0^t \left({\bm u}_{ib}(s,\tau) - {\bm u}_{b}(s,\tau)\right) d \tau + \beta \left({\bm u}_{ib}(s,t) - {\bm u}_{b}(s,t)\right), \end{equation} where $s$ is the curvilinear abscissa, $\alpha$ and $\beta$ are large negative free constants, ${\bm u}_{ib}$ is the flow velocity obtained by interpolation at the immersed boundary, and ${\bm u}_{b}$ is the velocity of the immersed boundary. The transfer of variables between the Eulerian and Lagrangian domains is obtained through the Dirac delta function $\delta$ \cite{Peskin2002}: \begin{equation}\label{eq:4} {\bm u}_{ib}\left(s, t\right) = \int_\Omega {\bm u}\left(\bm{x},t\right) \delta \left(\bm{r}(s,t) - \bm{x}\right) d\bm{x}, \qquad {\bm f}\left(\bm{x}, t\right) = \int_\Gamma \bm{F}\left(s,t\right) \delta \left(\bm{x} - \bm{r}(s,t)\right) ds, \end{equation} where $\bm{r}(s,t)$ is the virtual boundary position and $\bm{x}$ are the locations of the nearby fluid grid points. The equation \eqref{eq:PI} provides a Proportional-Integral (PI) feedback control of the velocity near the immersed boundary \cite{Goldstein1993}.
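To make the discrete counterpart of the transfer \eqref{eq:4} concrete, the following C++ sketch interpolates a cell-centred velocity field at one Lagrangian point on a uniform 2D grid, using the 4-point regularized delta function adopted later in Sec.~\ref{sec:space_discrete} (the spreading of $\bm F$ back to the grid uses the same weights). The code is purely illustrative: all names and the uniform-grid layout are assumptions of the sketch, not the actual OpenFOAM-based implementation.
\begin{verbatim}
// Sketch of the Eulerian -> Lagrangian velocity transfer (cf. eq. (4)).
#include <cmath>
#include <vector>

// 1D kernel phi(r) of Peskin's 4-point regularized delta function.
double phi(double r)
{
    const double a = std::fabs(r);
    if (a <= 1.0)
        return 0.125 * (3.0 - 2.0 * a + std::sqrt(1.0 + 4.0 * a - 4.0 * a * a));
    if (a <= 2.0)
        return 0.125 * (5.0 - 2.0 * a - std::sqrt(-7.0 + 12.0 * a - 4.0 * a * a));
    return 0.0;
}

struct Grid2D
{
    int nx, ny;                 // number of cells per direction
    double h;                   // uniform spacing near the immersed boundary
    std::vector<double> ux, uy; // cell-centred velocity components, size nx*ny
    int id(int i, int j) const { return j * nx + i; }
};

// u_ib = sum_m u_m * delta_h(r - x_m) * h^2, with delta_h = phi*phi / h^2,
// so each cell contributes with weight phi*phi (the h^2 factors cancel).
void interpolate(const Grid2D& g, double X, double Y, double& uibx, double& uiby)
{
    uibx = uiby = 0.0;
    const int i0 = static_cast<int>(std::floor(X / g.h)) - 2;
    const int j0 = static_cast<int>(std::floor(Y / g.h)) - 2;
    for (int j = j0; j <= j0 + 4; ++j)
        for (int i = i0; i <= i0 + 4; ++i)
        {
            if (i < 0 || j < 0 || i >= g.nx || j >= g.ny) continue;
            const double xm = (i + 0.5) * g.h, ym = (j + 0.5) * g.h;
            const double w = phi((X - xm) / g.h) * phi((Y - ym) / g.h);
            uibx += w * g.ux[g.id(i, j)];
            uiby += w * g.uy[g.id(i, j)];
        }
}
\end{verbatim}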
The feedback law \eqref{eq:PI} also acts as a spring with damping, where the spring is represented by the integral control term (i.e., $\alpha$ is the stiffness), and the damping is represented by the proportional control term (i.e., $\beta$ is the damping coefficient) \cite{Fadlun2000, Iaccarino2003, Margnat2009}. Note that, to the best of our knowledge, all the works available in the literature focus on the feedback law \eqref{eq:PI} without a derivative action, although the idea of considering other control terms is contemplated in \cite{Goldstein1993}. In this work, we introduce this further contribution in order to obtain a Proportional-Integral-Derivative (PID) controller, \begin{equation}\label{eq:PID} \bm F = \alpha \int_0^t \left({\bm u}_{ib} - {\bm u}_{b}\right) d \tau + \beta \left({\bm u}_{ib} - {\bm u}_{b}\right) + \gamma \left(\dot{{\bm u}}_{ib} - \dot{{\bm u}}_{b}\right). \end{equation} We observe that $\gamma$ can be interpreted as a coefficient that modifies the mass and inertia characteristics of the system. We will show that the derivative action strongly affects the stability limits in the forcing parameter space (see Sec. \ref{sec:stability_analysis}) and plays an important part in FSI problems (see Sec. \ref{sec:FSI}). \section*{Acknowledgements} We acknowledge the support provided by the European Research Council Executive Agency through the Consolidator Grant project AROMA-CFD “Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics” - GA 681447, H2020-ERC CoG 2015 AROMA-CFD and INdAM-GNCS projects. \section{Numerical discretization}\label{sec:space_discrete} \subsection{Space discretization of Navier-Stokes equations: the Finite Volume approximation} In this section we briefly discuss the space discretization of problem \eqref{eq:ns-lapls-1}-\eqref{eq:ns-mass}. We adopt the Finite Volume (FV) approximation, which is derived directly from the integral form of the governing equations. We have chosen to implement the VBM within the finite volume C++ library OpenFOAM\textsuperscript{\textregistered} \cite{Weller1998}. We partition the computational domain $\Omega$ into cells or control volumes $\Omega_i$, with $i = 1, \dots, N_{c}$, where $N_{c}$ is the total number of cells in the mesh. Let \textbf{A}$_j$ be the surface vector of each face of the control volume. The integral form of eq.~\eqref{eq:ns-lapls-1} for each volume $\Omega_i$ is given by: \begin{align}\label{eq:evolveFVtemp-1.1} \rho \int_{\Omega_i} \dfrac{\partial {\bm u}}{\partial t} d\Omega + \rho\, \int_{\Omega_i} \nabla\cdot \left({\bm u} \otimes {\bm u}\right) d\Omega - \mu \int_{\Omega_i} \Delta{\bm u} d\Omega + \int_{\Omega_i}\nabla p d\Omega = \int_{\Omega_i}{\bm f}d\Omega. \end{align} By applying the Gauss-divergence theorem, eq.~\eqref{eq:evolveFVtemp-1.1} becomes: \begin{align}\label{eq:evolveFV-1.1} \rho \int_{\Omega_i} \dfrac{\partial {\bm u}}{\partial t}d\Omega + \rho\, \int_{\partial \Omega_i} \left({\bm u} \otimes {\bm u}\right) \cdot d\textbf{A} - \mu \int_{\partial \Omega_i} \nabla{\bm u} \cdot d\textbf{A} + \int_{\partial \Omega_i}p d\textbf{A} = \int_{\Omega_i}{\bm f} d\Omega. \end{align} Each term in eq.~\eqref{eq:evolveFV-1.1} is approximated as follows: \begin{itemize} \item[-] \textit{Gradient term}: \begin{align}\label{eq:grad} \int_{\partial \Omega_i}p d\textbf{A} \approx \sum_j^{} p_j \textbf{A}_j, \end{align} where $p_j$ is the value of the pressure at the centroid of the $j^{\text{th}}$ face.
The face center pressure values $p_j$ are obtained from the cell center values by means of a linear interpolation scheme. \item[-] \textit{Convective term}: \begin{align}\label{eq:conv} \int_{\partial \Omega_i} \left({\bm u} \otimes {\bm u}\right) \cdot d\textbf{A} \approx \sum_j^{} \left({\bm u}_j \otimes {\bm u}_j\right) \cdot \textbf{A}_j = \sum_j^{} \varphi_j {\bm u}_j, \quad \varphi_j = {\bm u}_j \cdot \textbf{A}_j, \end{align} where ${\bm u}_j$ is the fluid velocity relative to the centroid of each control volume face. In \eqref{eq:conv}, $\varphi_j$ is the convective flux associated with ${\bm u}$ through face $j$ of the control volume. The convective flux at the cell faces is obtained by a linear interpolation of the values from the adjacent cells. Also ${\bm u}$ needs to be approximated at cell face $j$ in order to get the face value ${\bm u}_j$. Different interpolation methods can be applied: central, upwind, second-order upwind and blended differencing schemes \cite{jasakphd}. In this work, we make use of a second-order upwind scheme. \item[-] \textit{Diffusion term}: \begin{align} \int_{\partial \Omega_i} \nabla{\bm u} \cdot d\textbf{A} \approx \sum_j^{} (\nabla{\bm u})_j \cdot \textbf{A}_j, \nonumber \end{align} where $(\nabla{\bm u})_j$ is the gradient of ${\bm u}$ at face $j$. We briefly explain how $(\nabla{\bm u})_j$ is approximated with second-order accuracy on the structured, orthogonal meshes used in this work. Let $P$ and $Q$ be two neighboring control volumes. The term $(\nabla{\bm u})_j$ is evaluated by subtracting the value of the velocity at the cell centroid on the $P$-side of the face, denoted by ${\bm u}_P$, from the value at the centroid on the $Q$-side, denoted by ${\bm u}_Q$, and dividing by the magnitude of the distance vector $\textbf{d}_j$ connecting the two cell centroids: \begin{align} (\nabla{\bm u})_j \cdot \textbf{A}_j = \dfrac{{\bm u}_Q - {\bm u}_P}{|\textbf{d}_j|} |\textbf{A}_j|. \nonumber \end{align} \end{itemize} A partitioned approach has been used to deal with the pressure-velocity coupling. In particular, a Poisson equation for the pressure is solved. This is obtained by taking the divergence of the momentum equation \eqref{eq:ns-lapls-1} and exploiting the divergence-free constraint \eqref{eq:ns-mass}: \begin{equation}\label{eq:Poisson} \Delta p = -\rho\, \nabla\cdot\left(\nabla\cdot \left({\bm u} \otimes {\bm u}\right)\right) + \nabla\cdot {\bm f}. \end{equation} The segregated algorithms available in OpenFOAM\textsuperscript{\textregistered} are SIMPLE \cite{SIMPLE} for steady-state problems, and PISO \cite{PISO} and PIMPLE \cite{PIMPLE} for transient problems. For this work, we choose the PISO algorithm. \subsection{Time discretization} To discretize equation \eqref{eq:evolveFV-1.1} in time, let $\Delta t \in \mathbb{R}$, $t^n = t_0 + n \Delta t$, with $n = 0, ..., N_T$ and $T = t_0 + N_T \Delta t$. Moreover, we denote by ${\bm u}^n$ the approximation of the flow velocity at the time $t^n$. We adopt the Backward Differentiation Formulae of order 1 (BDF1) and order 2 (BDF2), see e.g. \cite{quarteroni2007numerical}. Given ${\bm u}^n$, for $n \geq 0$, we have, respectively, \begin{equation}\label{eq:BDF1_disc} \partial_t {\bm u} \approx \dfrac{{\bm u}^{n+1} - {\bm u}^{n}}{\Delta t}, \end{equation} \begin{equation}\label{eq:BDF2_disc} \partial_t {\bm u} \approx \dfrac{3{\bm u}^{n+1} - 4{\bm u}^{n} + {\bm u}^{n-1}}{2\Delta t}.
\end{equation} \subsection{The interaction force} We represent the virtual boundary as a discrete set of $N_L$ Lagrangian points, with $k = 1, \dots, N_L$. The fluid-solid interaction force term is integrated over time using an explicit scheme: at the time $t^n$, we have \begin{equation}\label{eq:forc1} \bm F_k^{n} = \alpha \sum_{l = 1}^{n} (({\bm u}_{ib})_k^{l} - ({\bm u}_{b})_k^{l})\Delta t + \beta (({\bm u}_{ib})_k^{n} - ({\bm u}_{b})_k^{n}) + \gamma ((\dot{{\bm u}}_{ib})_k^{n} - (\dot{{\bm u}}_{b})_k^{n}), \end{equation} where $({\bm u}_{b})_k^{l}$ and $({\bm u}_{ib})_k^{l}$, $(\dot{{\bm u}}_{b})_k^{l}$ and $(\dot{{\bm u}}_{ib})_k^{l}$, are computed as follows: \begin{equation}\label{eq:forc2} ({\bm u}_{b})_k^{l} = \dfrac{\bm{r}_k^l - \bm{r}_k^{l-1}}{\Delta t}, \qquad ({\bm u}_{ib})_k^{l} = \sum_{\bm{x}_m \in \tilde{g}} {\bm u}_m^l \tilde{\delta}(\bm{r}_k^l - \bm{x}_m)\Omega_m, \end{equation} \begin{equation}\label{eq:forc2_bis} (\dot{{\bm u}}_{b})_k^{l} = \dfrac{({\bm u}_b)_k^l - ({\bm u}_b)_k^{l-1}}{\Delta t}, \qquad (\dot{{\bm u}}_{ib})_k^{l} = \dfrac{({\bm u}_{ib})_k^l - ({\bm u}_{ib})_k^{l-1}}{\Delta t}. \end{equation} Here $\tilde{g}$ is the support of the smoothed delta function $\tilde{\delta}$, defined as \begin{equation}\label{eq:forc3} \tilde{\delta}(\bm{x}) = \dfrac{1}{\Omega_m}\prod_{\omega=1}^2 \varphi \left(\dfrac{x_{\omega}}{h}\right), \end{equation} where $h$ is the uniform mesh size next to the immersed boundary and $\Omega_m$ is given by \begin{equation}\label{eq:vol2D} \Omega_m = h^2. \end{equation} In this work, we use the 4-point regularized delta function \cite{Peskin2002, Shin2008}: \begin{equation} \varphi(r) = \begin{cases} \dfrac{1}{8}\left(3 - 2|r| + \sqrt{1+4|r|-4r^2}\right), & \text{if $0 \leq |r| \leq 1$},\\ \\ \dfrac{1}{8}\left(5 - 2|r| - \sqrt{-7+12|r|-4r^2}\right), & \text{if $1 < |r| \leq 2$},\\ \\ 0, & \text{otherwise}. \end{cases} \end{equation} Finally, the interaction force in the Eulerian reference frame is given by \begin{equation}\label{eq:forc5} \bm f_m^{n} = \sum_{k=1}^{N_L} \bm F_k^{n} \tilde{\delta}(\bm{x}_m - \bm{r}_k^n) \Delta V, \end{equation} where $\Delta V$ is defined as \cite{Shin2008, Uhlmann2005} \begin{equation}\label{eq:forc6} \Delta V = h \cdot \Delta s, \end{equation} and $\Delta s$ is the uniform distance between two Lagrangian points. \section{Numerical results}\label{sec:numerical_results} In this section, we present several numerical results for the VBM. Four 2D benchmarks, involving stationary or rigidly moving bodies, are investigated: flow past a stationary cylinder, oscillatory flow past a stationary cylinder, transverse oscillation of a cylinder in a free-stream and inline oscillation of a cylinder in a fluid at rest. Finally, we present a FSI benchmark related to the natural oscillation of a submerged cantilever rectangular beam in a fluid at rest. The number of PISO \cite{PISO} loops has been fixed to 2 for all the simulations. The linear algebraic system associated with the momentum equation \eqref{eq:evolveFV-1.1} is solved using an iterative solver with a symmetric Gauss-Seidel smoother. For the Poisson problem \eqref{eq:Poisson} we use the geometric agglomerated algebraic multigrid solver (GAMG) with a Gauss-Seidel smoother. The required accuracy is 1e-7 at each time step. \begin{table}[h] \centering \begin{tabular}{ccccc} \multicolumn{2}{c}{} \\ \cline{1-5} mesh name & Domain & $h_{min}$ & $h_{max}$ & No.
of cells \\ \hline $117\mbox{k}$ & $[-50 \quad 50]^2$ & 1.63e-2 & 9.76 & 117,312 \\ $260\mbox{k}$ & $[0 \quad 8]^2$ & 1.56e-2 & 1.56e-2 & 262,144 \\ $1000\mbox{k}$ & $[0 \quad 16]^2$ & 1.56e-2 & 1.56e-2 & 1,048,576 \\ \hline \end{tabular} \caption{Name, size of the computational domain, minimum diameter $h_{min}$, maximum diameter $h_{max}$, and number of cells for all the meshes used for the tests presented in Secs. \ref{sec:staz_cyl}, \ref{sec:osc_y_cyl} and \ref{sec:osc_x_cyl}.} \label{tab:mesh} \end{table} \begin{table}[h] \centering \tiny \begin{tabular}{l ccc ccccc cccccc} \multicolumn{2}{c}{} \\ \cline{1-12} & & Mesh & $-\alpha$ & $-\beta$& $-\gamma$ & $\bar{C}_d$ & $C'_l$ & $\mbox{St}$ & $\Delta t$ & $CFL$ & Time scheme \\ \hline Case 1 & present & $260\mbox{k}$ & 4.8e4 & 0 & 0 & 1.57 & 0.44 & 0.159 & 1.2e-2 & 1.35 & BDF1 \\ & present & $260\mbox{k}$ & 4.8e4 & 0 & 2 & 1.6 & 0.52 & 0.163 & 1.2e-2 & 1.35 & BDF2 \\ & \cite{Shin2008} & $260\mbox{k}$ & 4.8e4 & 0 & 0 & 1.44 & 0.35 & 0.168 & 1.2e-2 & 1.35 & BDF1 \\ \hline Case 2 & present & $260\mbox{k}$ & 4.8e4 & 0 & 0 & 1.56 & 0.45 & 0.162 & 6.0e-3 & 0.7 & BDF1 \\ & present &$260\mbox{k}$ & 4.8e4 & 0 & 2 & 1.58 & 0.49 & 0.164 & 6.0e-3 & 0.7 & BDF2 \\ & \cite{Shin2008} & $260\mbox{k}$ & 4.8e4 & 0 & 0 & 1.44 & 0.35 & 0.168 & 6.0e-3 & 0.7 & BDF1 \\\hline Case 3 & present & $1000\mbox{k}$ & 4.8e4 & 0 & 0 & 1.56 & 0.45 & 0.162 & 6.0e-3 & 0.7 & BDF1 \\ & present & $1000\mbox{k}$ & 4.8e4 & 0 & 2 & 1.57 & 0.48 & 0.165 & 6.0e-3 & 0.7 & BDF2 \\ & \cite{Shin2008} & $1000\mbox{k}$ & 4.8e4 & 0 & 0 & 1.37 & 0.34 & 0.163 & 6.0e-3 & 0.7 & BDF1 \\\hline Case 4 & present & $117\mbox{k}$ & 4.8e4 & 0 & 0 & 1.34 & 0.33 & 0.16 & 1.2e-2 & 1.15 & BDF1 \\ & present & $117\mbox{k}$ & 4.8e4 & 0 & 2 & 1.35 & 0.35 & 0.161 & 1.2e-2 & 1.15 & BDF2 \\ & present & $117\mbox{k}$ & 4.8e4 & 0 & 0 & 1.34 & 0.33 & 0.16 & 1.6e-2 & 1.55 & BDF1 \\ & present & $117\mbox{k}$ & 4.8e4 & 0 & 2 & 1.36 & 0.36 & 0.161 & 1.8e-2 & 1.75 & BDF2 \\\hline & \cite{Lai2000} & $260\mbox{k}$ & 4.8e4 & 0 & 0 & 1.52 & 0.29 & 0.155 & 1.8e-3 & & Crank–Nicolson \\ & \cite{Uhlmann2005} & & & & & 1.50 & 0.35 & 0.172 & 3e-3 \\ & \cite{Kim2001} & & & & & 1.33 & 0.32 & 0.165 \\ & \cite{Constant2017} & & & & & 1.38 & & 0.165 & & 0.5 & BDF1 \\ & \cite{Lee2006} & & & & & 1.33 & 0.28 & 0.166 \\ & \cite{Linnick2005} & & & & & 1.34 & 0.33 & 0.166 \\ & \cite{Huang2007_2} & & & & & 1.36 & 0.33 & 0.167 \\ \hline \end{tabular} \caption{Flow past a stationary cylinder at $\mbox{Re} = 100$: values of aerodynamic coefficients, $C'_l$ and $\bar{C}_d$, and Strouhal number, $\mbox{St}$, compared against the results reported in other studies.} \label{tab:cyl_staz_1} \end{table} \begin{table}[h] \centering \begin{tabular}{l ccc ccccc c} \multicolumn{2}{c}{} \\ \cline{1-4} $-\alpha\Delta t^2$ & $\bar{C}_d$ & $C'_l$ & $\mbox{St}$ \\ \hline 0 & 1.4151 & 0.0532 & 0.1493 \\ 0.01 & 1.3376 & 0.2715 & 0.1672 \\ 0.1 & 1.3147 & 0.2939 & 0.1599 \\ 0.5 & 1.3348 & 0.3274 & 0.1601 \\ 1 & 1.3375 & 0.3311 & 0.16 \\ 3.5 & 1.3389 & 0.3333 & 0.16 \\ \hline \end{tabular} \caption{Flow past a stationary cylinder at $\mbox{Re} = 100$: influence of $-\alpha \Delta t^2$ on values of aerodynamic coefficients, $C'_l$ and $\bar{C}_d$, and Strouhal number, for $-\beta \Delta t= 1.5$ and $-\gamma = 0$, and the BDF1 time scheme. 
} \label{tab:cyl_staz_alpha} \end{table} \begin{table}[h] \centering \begin{tabular}{l ccc ccccc c} \multicolumn{2}{c}{} \\ \cline{1-4} $-\beta\Delta t$ & $\bar{C}_d$ & $C'_l$ & $\mbox{St}$ \\ \hline 0 & 1.3215 & 0.3088 & 0.1602 \\ 0.5 & 1.3219 & 0.3087 & 0.1603 \\ 1 & 1.3221 & 0.3085 & 0.1604 \\ 1.5 & 1.3147 & 0.2939 & 0.1599 \\ 2 & 1.3231 & 0.3092 & 0.1606 \\ 2.5 & 1.3239 & 0.3099 & 0.1607 \\ 3.5 & 1.3253 & 0.3110 & 0.1608 \\ \hline \end{tabular} \caption{Flow past a stationary cylinder at $\mbox{Re} = 100$: influence of $-\beta \Delta t$ on values of aerodynamic coefficients, $C'_l$ and $\bar{C}_d$, and Strouhal number, for $-\alpha \Delta t^2= 0.1$ and $-\gamma = 0$, and the BDF1 time scheme. } \label{tab:cyl_staz_beta} \end{table} \begin{table}[h] \centering \begin{tabular}{l ccc ccccc c} \multicolumn{2}{c}{} \\ \cline{1-4} $-\gamma$ & $\bar{C}_d$ & $C'_l$ & $\mbox{St}$ \\ \hline 0 & 1.3147 & 0.2939 & 0.1599 \\ 0.5 & 1.3156 & 0.2945 & 0.16 \\ 1 & 1.3165 & 0.301 & 0.1603 \\ \hline \end{tabular} \caption{Flow past a stationary cylinder at $\mbox{Re} = 100$: influence of $-\gamma$ on values of aerodynamic coefficients, $C'_l$ and $\bar{C}_d$, and Strouhal number, for $-\alpha \Delta t^2= 0.1$ and $-\beta \Delta t = 1.5$, and the BDF1 time scheme.} \label{tab:cyl_staz_gamma} \end{table} \begin{table}[h] \centering \begin{tabular}{l ccc ccccc c} \multicolumn{2}{c}{} \\ \cline{1-5} $\Delta s/h$ & $N_L$ & $\bar{C}_d$ & $C'_l$ & $\mbox{St}$ \\ \hline 0.1 & 1880 & 1.321 & 0.3271 & 0.1587 \\ 0.25 & 770 & 1.324 & 0.328 & 0.1587 \\ 0.5 & 385 & 1.3268 & 0.3304 & 0.1587 \\ 1 & 194 & 1.3267 & 0.3351 & 0.1613 \\ 2 & 97 & 1.3031 & 0.3156 & 0.1613 \\ \hline \end{tabular} \caption{Flow past a stationary cylinder at Re = 100: influence of the ratio of Lagrangian point distance to Eulerian grid width, $\Delta s/h$, on values of aerodynamic coefficients, $C'_l$ and $\bar{C}_d$, and Strouhal number, $\mbox{St}$, for $-\alpha \Delta t^2= 3.9$ and $-\beta \Delta t= 1.9$, and the BDF1 time scheme.} \label{tab:cyl_staz_ds_h} \end{table} \subsection{Flow past a stationary cylinder}\label{sec:staz_cyl} The first test we consider is the flow past a stationary cylinder at $\mbox{Re} = 100$. Table \ref{tab:mesh} reports details of the orthogonal Cartesian grids that we use. Concerning the meshes with $260\mbox{k}$ and $1000\mbox{k}$ cells, the cylinder has a radius of $0.15$ and its center is located at ($1.85$, $4.0$). On the other hand, for the mesh with $117\mbox{k}$ cells, the cylinder has a radius of $0.5$ and its center is located at ($0$, $0$). All the meshes are uniform, with the exception of the $117\mbox{k}$ mesh, which however has a uniform local refinement next to the region occupied by the cylinder, with a grid spacing comparable with that of the other meshes. These meshes have been selected to compare our results with the computational data reported in \cite{Shin2008}, in which the VBM is combined with a Finite Difference method and the BDF1 time scheme is used. We impose a free-stream boundary condition, ${\bm u} = \left(1, 0\right)$, at the inflow and far-field boundaries, and an advective boundary condition, \begin{equation}\label{ref:advective} \dfrac{\partial {\bm u}}{\partial t} + a \dfrac{\partial {\bm u}}{\partial {\bm n}} = 0, \end{equation} at the outflow, where $a$ is the advection velocity magnitude, computed so that the total mass is conserved, and ${\bm n}$ is the outward normal. The partitioned algorithm we use (see Sec.
\ref{sec:space_discrete}) requires a boundary condition for the pressure too. We choose $p = 0$ at the outflow and $\partial p/ \partial {\bm n} = 0$ on all the other boundaries. We set the density $\rho = 1$, and the viscosity $\mu = 10^{-2}$ for the $117\mbox{k}$ mesh and $\mu = 3\cdot10^{-3}$ for the $260\mbox{k}$ and $1000\mbox{k}$ meshes. We start the simulations from fluid at rest. The quantities of interest for this benchmark are the drag and lift coefficients given by \cite{Lai2000}, \begin{equation} C_d = -\dfrac{2}{\rho L_r U_r^2} \sum_{i} f_x^i, \quad C_l = -\dfrac{2}{\rho L_r U_r^2} \sum_{i} f_y^i, \end{equation} where $f_x$ and $f_y$ are the streamwise and crosswise components of the Eulerian forcing, respectively, $U_r = 1$ is the maximum velocity at the far-field boundaries and $L_r = d$ is the diameter of the cylinder, as well as the Strouhal number, $\mbox{St}$, based on the oscillation frequency $f$ of the lift force, \begin{equation}\label{eq:Str} \mbox{St} = \dfrac{fL_r}{U_r}. \end{equation} Table \ref{tab:cyl_staz_1} shows the temporal average of the drag coefficient, $\bar{C}_d$, the amplitude of the lift coefficient oscillation, $C'_l$, and $\mbox{St}$ obtained using the proposed method. Computational data provided in \cite{Shin2008}, as well as results from other studies \cite{Lai2000, Uhlmann2005, Kim2001, Constant2017, Lee2006, Linnick2005, Huang2007_2}, are also reported. Notice that, when the BDF2 time scheme is used with $\alpha \neq 0$ and $\beta = 0$, one should necessarily set $\gamma \neq 0$, based on the analysis reported in Sec. \ref{sec:stability_analysis}. We observe that, when the Finite Volume method proposed here is adopted, \emph{Case 1} and \emph{Case 2} provide significantly different results with respect to the Finite Difference method used in \cite{Shin2008}. Moreover, in \cite{Shin2008} an improvement of the results is obtained when the domain size is enlarged downstream and on the top (\emph{Case 3}), whilst we do not observe significant differences. On the contrary, we are able to obtain better predictions by enlarging the domain size in all directions (\emph{Case 4}). Results obtained with the two time schemes considered here, BDF1 and BDF2, show a significant difference in the value of $C'_l$ for the $260\mbox{k}$ mesh (\emph{Case 1}). Such a discrepancy decreases when the time step is reduced (\emph{Case 2}), as well as when the domain size is enlarged (\emph{Case 3} and \emph{Case 4}). Numerical experiments at the maximum time steps based on eq. \eqref{eq:BDF2vsBDF1} for both time schemes are also reported for \emph{Case 4}. Notice that the values of the maximum $CFL$ number obtained in the present simulations are in perfect agreement with those reported in \cite{Shin2008}. Figure \ref{fig:vel_mag} shows a snapshot of the velocity field and Figure \ref{fig:coeffs} displays the time evolution of the drag and lift coefficients obtained for \emph{Case 4}. We observe that the results obtained by using different time schemes show differences in terms of phase. It should be noted that the $117\mbox{k}$ mesh was not used in \cite{Shin2008} for the case of a stationary cylinder, but a very similar mesh was adopted for the simulation of inline/transverse oscillations of a circular cylinder (see Secs. \ref{sec:osc_y_cyl} and \ref{sec:osc_x_cyl}).
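For reference, the formulas above translate into a few lines of post-processing. The C++ sketch below (illustrative only; all names are ours) accumulates the drag coefficient from the forcing contributions and estimates $\mbox{St}$ from the upward zero crossings of the mean-removed lift history; an FFT of the lift signal would work equally well.
\begin{verbatim}
// Drag coefficient and Strouhal number from the forcing/lift histories.
#include <cstddef>
#include <vector>

// C_d = -2/(rho*Lr*Ur^2) * sum_i fx_i, with fx_i the streamwise Eulerian
// forcing contributions (the lift coefficient is analogous with fy_i).
double dragCoefficient(const std::vector<double>& fx,
                       double rho, double Lr, double Ur)
{
    double sum = 0.0;
    for (double f : fx) sum += f;
    return -2.0 * sum / (rho * Lr * Ur * Ur);
}

// St = f*Lr/Ur, with f estimated from upward zero crossings of the
// mean-removed lift coefficient history cl, sampled with time step dt.
double strouhal(const std::vector<double>& cl, double dt, double Lr, double Ur)
{
    double mean = 0.0;
    for (double c : cl) mean += c;
    mean /= static_cast<double>(cl.size());

    std::size_t first = 0, last = 0, crossings = 0;
    for (std::size_t n = 1; n < cl.size(); ++n)
        if (cl[n - 1] - mean < 0.0 && cl[n] - mean >= 0.0)
        {
            if (crossings == 0) first = n;
            last = n;
            ++crossings;
        }
    if (crossings < 2) return 0.0; // fewer than two shedding periods sampled
    const double period =
        (last - first) * dt / static_cast<double>(crossings - 1);
    return Lr / (Ur * period);
}
\end{verbatim}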
For further comparison, we also evaluate the error for the virtual boundary velocity in the streamwise direction, $E_x$, defined as \begin{align}\label{eq:error_x} E_x = \left(\dfrac{1}{N_L} \sum_{k = 1}^{N_L} \left((u_{ib})_k - (u_{b})_k\right)^2\right)^{1/2}. \end{align} We investigate the influence of the feedback forcing gains, $\alpha$, $\beta$ and $\gamma$, as well as of the number of Lagrangian points, $N_L$, on the error, for the $117\mbox{k}$ mesh. We select the same time step, $\Delta t$ = $0.01$, as \cite{Shin2008}, and the tested values of $-\alpha \Delta t^2$ and $-\beta \Delta t$ include, among others, those used in \cite{Shin2008}. Moreover, concerning the tests carried out by varying the forcing parameters, we set $\Delta s/h = 1$ as done in \cite{Shin2008}. However, we observe that in \cite{Shin2008} such an analysis was carried out for $\mbox{Re} = 185$ by using a mesh with about half the cells and a 3-point regularized delta function. Figure \ref{fig:Err_x_alpha} shows the evolution over time of $E_x$ varying $-\alpha \Delta t^2$ for $-\beta \Delta t = 1.5$ and $-\gamma = 0$. We observe that the error converges to a smaller value for the larger value of $-\alpha \Delta t^2$. This result is in agreement with the one in \cite{Shin2008}. We note that the difference between $-\alpha \Delta t^2 = 1$ and $-\alpha \Delta t^2 = 3.5$ is very small. This suggests that convergence is reached for low values of $-\alpha \Delta t^2$, far from the stability limits (see Sec. \ref{sec:stability_analysis}). Figure \ref{fig:Err_x_beta} shows the evolution over time of $E_x$ varying $-\beta \Delta t$ for $-\alpha \Delta t^2 = 0.1$ and $-\gamma = 0$. We observe that the transient decay of the error is faster for the larger value of $-\beta \Delta t$. On the other hand, all the tested cases show the same level of convergence of the error, suggesting that $-\alpha \Delta t^2$ is a more critical parameter. These results are in agreement with \cite{Shin2008}. Nevertheless, in our calculations, $-\beta \Delta t$ does not reduce the convergence time, as shown in \cite{Shin2008}. Moreover, the errors are not in phase as in \cite{Shin2008}. Figure \ref{fig:Err_x_gamma} shows the evolution over time of $E_x$ varying $-\gamma$ for $-\alpha \Delta t^2 = 0.1$ and $-\beta \Delta t = 1.5$. We observe a very low sensitivity with respect to $-\gamma$. Figure \ref{fig:Err_N_L} shows the evolution over time of $E_x$ varying $\Delta s/h$. We observe a non-monotonic convergence of the error when $\Delta s/h < 1$. Finally, we observe that the sensitivity of the error with respect to the time scheme is very low. For the sake of completeness, Tables \ref{tab:cyl_staz_alpha}, \ref{tab:cyl_staz_beta}, \ref{tab:cyl_staz_gamma} and \ref{tab:cyl_staz_ds_h} show the influence of the feedback forcing gains, $\alpha$, $\beta$ and $\gamma$, and of the number of Lagrangian points, $N_L$, on the values of $\bar{C}_d$, $C'_l$ and $\mbox{St}$. We highlight that, as in \cite{Shin2008}, the number of Lagrangian points has a negligible effect on the results. Moreover, it is confirmed that $\alpha$ is the forcing parameter that most affects the performance of the method. Results related to the BDF2 time scheme, not reported for brevity, do not show substantial differences with respect to this scenario.
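The slip error \eqref{eq:error_x} itself reduces to a root-mean-square over the Lagrangian points; a minimal illustrative C++ fragment (names are ours) is:
\begin{verbatim}
// RMS streamwise slip error E_x over the Lagrangian points (eq. error_x).
#include <cmath>
#include <cstddef>
#include <vector>

double slipErrorX(const std::vector<double>& uib, const std::vector<double>& ub)
{
    double s = 0.0;
    for (std::size_t k = 0; k < uib.size(); ++k)
        s += (uib[k] - ub[k]) * (uib[k] - ub[k]);
    return std::sqrt(s / static_cast<double>(uib.size()));
}
\end{verbatim}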
We notice that, although a Finite Volume approximation is used in \cite{Lee2006}, a detailed comparison with our results is difficult because computational details, such as the time step, the number of Lagrangian points, and the errors, are not reported. In conclusion, we learned that for this benchmark $-\alpha\Delta t^2$ is the most critical parameter to be properly tuned in order to optimize the accuracy and efficiency of the computation. \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/U_mag_BDF1.png} \put(50,78){\small{a)}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/U_mag_BDF2.png} \put(50,78){\small{b)}} \end{overpic} \caption{Flow past a stationary cylinder at Re = 100. Velocity magnitude for \emph{Case 4}: a) BDF1 time scheme, b) BDF2 time scheme.} \label{fig:vel_mag} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.46\textwidth]{img/coeff_staz_cyl_BDF1.pdf} \put(50,84){\small{a)}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/coeff_staz_cyl_BDF2.pdf} \put(50,85){\small{b)}} \end{overpic} \caption{Flow past a stationary cylinder at Re = 100. Time history of drag and lift coefficients for \emph{Case 4}: a) BDF1 time scheme, b) BDF2 time scheme.} \label{fig:coeffs} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_alpha_BDF1_vers2.pdf} \put(32,78.5){\small{BDF1, -$\beta\Delta t$ = 1.5, $\gamma$ = 0}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_alpha_BDF2.pdf} \put(32,78.5){\small{BDF2, -$\beta\Delta t$ = 1.5, $\gamma$ = 0}} \end{overpic}\\ \caption{Flow past a stationary cylinder at Re = 100: influence of $-\alpha \Delta t^2$ on the time evolution of $E_x$.} \label{fig:Err_x_alpha} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_beta_BDF1.pdf} \put(32,75){\small{BDF1, -$\alpha\Delta t^2$ = 0.1, $\gamma$ = 0}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_beta_BDF2.pdf} \put(32,75){\small{BDF2, -$\alpha\Delta t^2$ = 0.1, $\gamma$ = 0}} \end{overpic}\\ \caption{Flow past a stationary cylinder at Re = 100: influence of $-\beta \Delta t$ on the time evolution of $E_x$.} \label{fig:Err_x_beta} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_gamma_BDF1.pdf} \put(27,75){\small{BDF1, -$\alpha\Delta t^2$ = 0.1, -$\beta\Delta t$ = 1.5}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_gamma_BDF2.pdf} \put(27,75){\small{BDF2, -$\alpha\Delta t^2$ = 0.1, -$\beta\Delta t$ = 1.5}} \end{overpic}\\ \caption{Flow past a stationary cylinder at Re = 100: influence of $-\gamma$ on the time evolution of $E_x$.} \label{fig:Err_x_gamma} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_ds_h_BDF1.pdf} \put(13,75){\small{BDF1, -$\alpha\Delta t^2$ = 3.9, -$\beta\Delta t$ = 1.9, -$\gamma = 0$}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_ds_h_BDF2.pdf} \put(16,75){\small{BDF2, -$\alpha\Delta t^2$ = 3.9, -$\beta\Delta t$ = 1.9, -$\gamma = 2$}} \end{overpic}\\ \caption{Flow past a stationary cylinder at Re = 100: influence of $\Delta s/h$ on the time evolution of $E_x$.} \label{fig:Err_N_L} \end{figure} \subsection{Oscillatory flow past a stationary cylinder}\label{sec:John} In order to provide a further test of the capabilities of the proposed method to simulate flow fields involving bodies at rest, we additionally consider a periodic oscillation of the flow past a stationary cylinder \cite{John2004, turek1996, Girfoglio2019}.
The computational domain is a $2.2 \times 0.41$ rectangular channel with a cylinder of radius $0.05$ centered at ($0.2$, $0.2$), the bottom left corner of the channel being taken as the origin of the axes. We impose a no-slip boundary condition on the upper and lower walls. At the inflow and the outflow we prescribe the following velocity profile: \begin{align}\label{eq:cyl_bc} {\bm u}(0,y,t) = \left(\dfrac{6}{0.41^2} \sin\left(\pi t/8 \right) y \left(0.41 - y \right), 0\right), \quad y \in [0, 0.41], \quad t \in (0, 8]. \end{align} Concerning the pressure, we impose $\partial p/\partial {\bm n} = 0$ on all the boundaries. We set density $\rho = 1$ and viscosity $\mu = 10^{-3}$. We start the simulations from fluid at rest. Note that the Reynolds number is time dependent, with $0 \leq \mbox{Re} \leq 100$ \cite{John2004,turek1996,Girfoglio2019}. Such a benchmark requires a roughly uniform hexahedral/prismatic mesh of about $200\mbox{k}$ cells for a DNS with a standard Finite Volume method \cite{Girfoglio2019}. Here we use a uniform orthogonal Cartesian mesh of $200\mbox{k}$ cells. Based on the results in Sec. \ref{sec:staz_cyl}, we set $-\alpha \Delta t^2 = 1$, $-\beta \Delta t = -\gamma = 0$ and $\Delta s/h = 1$, and we restrict ourselves to the BDF1 time scheme. Figure \ref{fig:John2004} shows the time evolution of the lift and drag coefficients, which are in very good agreement with those reported in \cite{Girfoglio2019}. \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Cl_John.pdf} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Cd_John.pdf} \end{overpic}\\ \caption{Oscillatory flow past a stationary cylinder: time evolution of lift and drag coefficients against the results reported in \cite{Girfoglio2019}.} \label{fig:John2004} \end{figure} \subsection{Transverse oscillation of a circular cylinder in a free-stream}\label{sec:osc_y_cyl} Next, we consider a benchmark involving a body that rigidly moves. We investigate a periodic transverse oscillation of a circular cylinder in a free-stream \cite{Guilmineau2002}. The time-periodic motion of the center of the cylinder is given by \begin{equation}\label{eq:y_osc} \left(x_c(t),y_c(t)\right) = \left(0, A_{m} \cos \left(2\pi f_e t\right)\right), \end{equation} where $A_m$ and $f_e$ are the amplitude and the frequency of the oscillation, respectively. The Reynolds number, based on the free-stream velocity, is 185, and $A_m/d = 0.2$ \cite{Guilmineau2002}. We impose a free-stream boundary condition, ${\bm u} = (1, 0)$, at the inflow and far-field boundaries, and the advective boundary condition \eqref{ref:advective} at the outflow. For comparison with \cite{Shin2008}, we consider the $117\mbox{k}$ mesh and perform a first simulation by setting $-\alpha\Delta t^2 = 3.9$, $-\beta\Delta t = 1.9$, $-\gamma = 0$ and $\Delta s/h = 1$. We investigate the two possible cases, $f_e/f_0 < 1$ and $f_e/f_0 > 1$, where $f_0 = 0.19$ is the natural shedding frequency for a stationary cylinder at $\mbox{Re} = 185$ \cite{Guilmineau2002}. The computational time step, $\Delta t = T/720$, is based on the period of the oscillation $T = 1/f_e$, and the BDF1 time scheme is employed. Finally, we set density $\rho = 1$ and viscosity $\mu = 5.4\cdot10^{-3}$. Figure \ref{fig:mov_y_1} shows the time evolution of the drag and lift coefficients for $f_e/f_0 = 0.9$ (a) and $f_e/f_0 = 1.1$ (b). We can note that the coefficients show a regular trend once vortex shedding is established.
Moreover, we observe that for $f_e/f_0 > 1$ both the drag and lift coefficients exhibit beat phenomena. These results are in good agreement with those reported in \cite{Guilmineau2002, Shin2008}. Concerning the investigation of the error, we focus on $f_e/f_0 = 0.9$. We observe that in \cite{Shin2008} the error analysis was carried out for a mesh having twice the number of cells, i.e. half the grid size, of the $260\mbox{k}$ mesh. Figure \ref{fig:Err_x_alpha_mov_y} shows the evolution over time of $E_x$ varying $-\alpha \Delta t^2$ for $-\beta\Delta t$ = 1 and $-\gamma$ = 0. We observe that the error converges to a smaller value for the larger value of $-\alpha \Delta t^2$, as observed in Sec. \ref{sec:staz_cyl} for problems involving stationary bodies. This result is in agreement with the one in \cite{Shin2008}. We note that there is very little difference between $-\alpha \Delta t^2 = 1$ and $-\alpha \Delta t^2 = 4$. This suggests that also for moving bodies convergence is reached for low values of $-\alpha \Delta t^2$, far from the stability limits (see Sec. \ref{sec:stability_analysis}). High-frequency spurious oscillations increasingly affect the error as $-\alpha \Delta t^2$ grows. Figure \ref{fig:Err_x_beta_mov_y} shows the evolution over time of $E_x$ for different values of $-\beta \Delta t$, for $-\alpha\Delta t^2$ = 0.4 and $-\gamma$ = 0. We observe that the transient decay of the error is faster for the larger value of $-\beta \Delta t$ but, unlike for stationary problems, the error also converges to a slightly smaller value for larger $-\beta \Delta t$. On the other hand, this effect seems to be stronger in \cite{Shin2008}, where moving from $-\beta \Delta t = 0$ to $-\beta \Delta t = 1$ yields a greater reduction of the error than the one obtained in the present study by moving from $-\beta \Delta t = 0$ to $-\beta \Delta t = 3$. Figure \ref{fig:Err_x_gamma_mov_y} shows the evolution over time of $E_x$ varying $-\gamma$ for $-\alpha\Delta t^2$ = 0.4 and $-\beta\Delta t$ = 1. Just like for the stationary problems, we observe a very low sensitivity to $-\gamma$. Figure \ref{fig:Err_x_N_L_mov_y} shows the evolution over time of $E_x$ varying $\Delta s/h$ for $-\alpha\Delta t^2$ = 0.4, $-\beta \Delta t$ = 1, and $-\gamma = 1$. We do not observe convergence for the values of $\Delta s/h$ considered here; however, as for stationary problems, it is evident that the trend of the error is non-monotonic. Finally, as for stationary problems, we observe a very low sensitivity of the error with respect to the time scheme. Figure \ref{fig:non_grow_osc_y} displays a close-up of the time history of the drag coefficient for different forcing gains. We observe that $-\beta \Delta t$ helps to reduce the spurious oscillations that affect the solution at large $-\alpha \Delta t^2$. This result is in agreement with the one in \cite{Shin2008}. Moreover, we see that the same effect can be obtained by using $-\gamma$. We note that when the BDF2 time scheme is used, a larger value of $-\beta \Delta t$ than for the BDF1 time scheme is necessary to damp the numerical oscillations. In conclusion, we learned that, as for stationary problems, even for rigidly moving bodies $-\alpha\Delta t^2$ is the most critical parameter.
Nevertheless, both $-\beta\Delta t$ and $-\gamma$ help to optimize the accuracy and efficiency of the computation. \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/BDF1_mov_y_0_9.pdf} \put(48,80){\small{a)}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/BDF1_mov_y_1_1.pdf} \put(48,80){\small{b)}} \end{overpic}\\ \caption{Transverse oscillation of a circular cylinder in a free-stream: a) $f_e/f_0 < 1$, b) $f_e/f_0 > 1$.} \label{fig:mov_y_1} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_alpha_BDF1_mov_y.pdf} \put(32,80){\small{BDF1, -$\beta\Delta t$ = 1, $\gamma$ = 0}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_alpha_BDF2_mov_y.pdf} \put(32,80){\small{BDF2, -$\beta\Delta t$ = 1, $\gamma$ = 0}} \end{overpic}\\ \caption{Transverse oscillation of a circular cylinder in a free-stream: influence of $-\alpha \Delta t^2$ on the time evolution of $E_x$.} \label{fig:Err_x_alpha_mov_y} \end{figure} \begin{figure} \vspace{3cm} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_beta_BDF1_mov_y.pdf} \put(32,75){\small{BDF1, -$\alpha\Delta t^2$ = 0.4, $\gamma$ = 0}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_beta_BDF2_mov_y.pdf} \put(32,75){\small{BDF2, -$\alpha\Delta t^2$ = 0.4, $\gamma$ = 0}} \end{overpic}\\ \caption{Transverse oscillation of a circular cylinder in a free-stream: influence of $-\beta \Delta t$ on the time evolution of $E_x$.} \label{fig:Err_x_beta_mov_y} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_gamma_BDF1_mov_y.pdf} \put(32,73){\small{BDF1, -$\alpha\Delta t^2$ = 0.4, -$\beta\Delta t$ = 1}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_gamma_BDF2_mov_y.pdf} \put(32,73){\small{BDF2, -$\alpha\Delta t^2$ = 0.4, -$\beta\Delta t$ = 1}} \end{overpic}\\ \caption{Transverse oscillation of a circular cylinder in a free-stream: influence of $-\gamma$ on the time evolution of $E_x$.} \label{fig:Err_x_gamma_mov_y} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/Err_ds_h_BDF1_mov_y.pdf} \put(16,75){\small{BDF1, -$\alpha\Delta t^2$ = 0.4, -$\beta\Delta t$ = 1, -$\gamma = 1$}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/Err_ds_h_BDF2_mov_y.pdf} \put(19,75){\small{BDF2, -$\alpha\Delta t^2$ = 0.4, -$\beta\Delta t$ = 1, -$\gamma = 1$}} \end{overpic}\\ \caption{Transverse oscillation of a circular cylinder in a free-stream: influence of $\Delta s/h$ on the time evolution of $E_x$.} \label{fig:Err_x_N_L_mov_y} \end{figure} \begin{figure} \centering \begin{overpic}[width=0.45\textwidth]{img/BDF1_non_grow_osc.pdf} \put(40,75){\small{BDF1}} \end{overpic} \begin{overpic}[width=0.45\textwidth]{img/BDF2_non_grow_osc.pdf} \put(40,75){\small{BDF2}} \end{overpic}\\ \caption{Transverse oscillation of a circular cylinder in a free-stream: close-up of the time evolution of the drag coefficient for different forcing gains.} \label{fig:non_grow_osc_y} \end{figure} \subsection{Inline oscillation of a circular cylinder in a quiescent environment}\label{sec:osc_x_cyl} Finally, we consider a periodic inline oscillation of a circular cylinder in a fluid at rest \cite{Dutsch1998}. The time-periodic motion of the center of the cylinder is given by \begin{equation}\label{eq:x_osc} \left(x_c(t), y_c(t)\right) = \left(-A_{m} \sin \left(2\pi f_e t\right),0\right), \end{equation} where $A_m$ and $f_e$ are the amplitude and the frequency of the oscillation, respectively.
The dimensionless parameters that characterize the dynamics are the Reynolds number, $\mbox{Re}$, and the Keulegan-Carpenter number, $KC$, defined as \begin{equation}\label{eq:KC} KC = \dfrac{U_r}{f_e L_r}, \end{equation} where $U_r = 2\pi f_e A_m$ and $L_r = d$. We set $\mbox{Re} = 100$ and $KC = 5$, according to \cite{Shin2008,Dutsch1998}. We prescribe do-nothing boundary conditions at all far-field boundaries. We set $\rho = 1$, $\mu = 10^{-2}$, $A_m = 0.796$ and $f_e = 0.2$. We adopt a computational set-up similar to that of \cite{Shin2008} in order to compare the results. We use the $117\mbox{k}$ mesh and set $-\alpha \Delta t^2 = 3.9$, $-\beta \Delta t = 1.9$ and $-\gamma = 0$; the computational time step is $\Delta t = T/720$, where $T = 1/f_e$, and the BDF1 time scheme is employed. Figure \ref{fig:Cd_osc_x} shows the time evolution of the drag coefficient. The amplitude of the signal is about $3.5$, in very good agreement with the value of $3.54$ computed in \cite{Shin2008,Dutsch1998}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{img/Cd_osc_x_BDF1.pdf} \caption{Inline oscillation of a circular cylinder in a quiescent environment: time evolution of the drag coefficient at $\mbox{Re} = 100$ and $KC = 5$.} \label{fig:Cd_osc_x} \end{figure} \subsection{Eigenfrequencies of a rectangular cantilever beam coupled with a surrounding fluid} \label{sec:FSI} In this subsection, we test our algorithm on FSI problems. The VBM, in its standard formulation as a PI controller, has been successfully used within the FSI framework: see, e.g., \cite{Huang2007, Shin2010, Song2011, Qin2012, Uddin2015, Son2017}. Here, the main goal is to show the role played by the derivative controller term. We investigate the natural oscillation frequencies of a rectangular cantilever beam embedded in a fluid domain \cite{VanEysden2006, Sader1998}. The structural model equation used here, including inertia and bending effects, is \begin{equation}\label{eq:structural1} \rho_s A_t \dfrac{\partial^2 r_y}{\partial t^2} = -EI \dfrac{\partial^4 r_y}{\partial x^4} - A_t F_y, \end{equation} where $x$ is the beam axis, $r_y$ is the transversal displacement, $\rho_s$ is the mass density, $A_t = bh$ is the transversal section with $b$ and $h$ the width and thickness, respectively, $E$ is the Young modulus, $I = bh^3/12$ is the moment of inertia, and $F_y$ is the transversal component of the Lagrangian forcing exerted on the body by the surrounding fluid. The boundary conditions for equation \eqref{eq:structural1} explored here refer to a clamped-free configuration. They read \begin{equation}\label{eq:structural2} \begin{cases} r_y\rvert_{x = 0} = {\dfrac{\partial r_y}{\partial x}}\bigg{\rvert}_{x = 0} = 0, \\ \\ \dfrac{\partial^2 r_y}{\partial x^2}\bigg{\rvert}_{x = L} = \dfrac{\partial^3 r_y}{\partial x^3}\bigg{\rvert}_{x = L} = 0. \end{cases} \end{equation} We have chosen to implement the elasticity model \eqref{eq:structural1}-\eqref{eq:structural2} within the finite element C++ library \emph{deal.II}. We adopt linear finite elements for the space discretization. The second-order time derivative is discretized by a finite difference method based on a three-point centered scheme. The associated linear algebraic system is solved by using the Generalized Minimal Residual method (GMRES). The required accuracy is 1e-7 at each time step.
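For concreteness, the fragment below sketches what one explicit structural step looks like. It is a finite-difference analogue of \eqref{eq:structural1}-\eqref{eq:structural2}, with ghost nodes enforcing the clamped-free conditions and a three-point centered scheme in time; it is meant only as an illustration of the discrete update, and differs from the actual linear finite element implementation in \emph{deal.II} described above.
\begin{verbatim}
// Finite-difference analogue of the clamped-free beam model (illustrative).
#include <vector>

struct Beam
{
    int N;            // number of intervals along the beam axis (N >= 3)
    double h;         // grid spacing, h = L / N
    double dt;        // time step
    double rhoA;      // rho_s * A_t (mass per unit length)
    double EI;        // bending stiffness
    std::vector<double> rOld, r; // displacements at t^{n-1} and t^n (size N+1)
};

// One explicit step of rho_s*A_t*r_tt = -EI*r_xxxx - A_t*F_y. The input AF
// holds A_t*F_y at each node (size N+1). Returns r^{n+1}.
std::vector<double> stepBeam(Beam& b, const std::vector<double>& AF)
{
    const int N = b.N;
    std::vector<double> e(N + 5, 0.0); // padded copy: node i -> index i + 2

    for (int i = 0; i <= N; ++i) e[i + 2] = b.r[i];
    e[1] = b.r[1];                        // ghost at i = -1 from r'(0) = 0
    e[N + 3] = 2.0 * b.r[N] - b.r[N - 1]; // ghost at i = N+1 from r''(L) = 0
    e[N + 4] = 2.0 * e[N + 3] - 2.0 * b.r[N - 1] + b.r[N - 2]; // r'''(L) = 0

    std::vector<double> rNew(N + 1, 0.0); // clamped end: r(0) = 0
    const double h4 = b.h * b.h * b.h * b.h;
    for (int i = 1; i <= N; ++i)
    {
        const int m = i + 2;
        // Five-point stencil for the fourth derivative r_xxxx.
        const double d4 = (e[m - 2] - 4.0 * e[m - 1] + 6.0 * e[m]
                           - 4.0 * e[m + 1] + e[m + 2]) / h4;
        const double acc = (-b.EI * d4 - AF[i]) / b.rhoA;
        rNew[i] = 2.0 * b.r[i] - b.rOld[i] + b.dt * b.dt * acc;
    }
    b.rOld = b.r;
    b.r = rNew;
    return rNew;
}
\end{verbatim}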
The coupling process between the flow solver and the structural solver can be summarized as follows: \begin{enumerate} \item At $t^n$, we know ${\bm u}^{n}$, $r_y^n$ and $r_y^{n-1}$. Then we calculate $F_y^n$ by eq. \eqref{eq:forc1}; \item We obtain $f_y^n$ by eq. \eqref{eq:forc5} and we solve the problem \eqref{eq:evolveFV-1.1}-\eqref{eq:Poisson} to obtain ${\bm u}^{n+1}$ and $p^{n+1}$; \item We solve the problem \eqref{eq:structural1}-\eqref{eq:structural2} to obtain $r_y^{n+1}$. \end{enumerate} We compare our computational results with the following analytical predictions, here considered as \emph{true} solutions \cite{Sader1998}: \begin{equation}\label{eq:analytical_frequency} \dfrac{f_{fluid}}{f_{vacuum}} = \left(1 + \dfrac{\pi \rho b}{4 \rho_s h} \right)^{-1/2}, \end{equation} where $f_{fluid}$ and $f_{vacuum}$ are the natural frequencies of the beam in the fluid and in vacuum, respectively. The natural frequencies in vacuum, $f_{vacuum}$, are given by \cite{VanEysden2006, Sader1998} \begin{equation}\label{eq:analytical_frequency_1} f_{vacuum} = \dfrac{1}{2\pi}\dfrac{{K_n}^2}{L^2}\sqrt{\dfrac{EI}{\rho_s A_t}}, \end{equation} where $L$ is the length of the beam and $K_n$ are the dimensionless solutions of the following transcendental equation \cite{VanEysden2006, Sader1998}: \begin{equation}\label{eq:analytical_frequency_2} 1 + \cos(K_n)\cosh(K_n) = 0. \end{equation} For further comparison, we also consider the numerical data provided in \cite{Hengstler2013}, where two-way coupled 3D FSI simulations were performed using the commercial software ANSYS. Geometrical and structural features of the beam are reported in Table \ref{tab:FSI1}. The fluid domain is a $1\times1$ rectangular box. The undisturbed vertical location of the beam is 0.5. The fluid mesh consists of $10\mbox{k}$ cells: 20 cells are uniformly distributed along the beam and the remaining grid is stretched. The beam itself is discretized into 10 elements. The fluid is initially at rest and do-nothing boundary conditions are imposed on all the boundaries. The beam is excited by a sinusoidal forcing applied at its free tip for 10 time steps, with a total length of 0.2 ms \cite{Hengstler2013}. The final computational time is 0.2 s. We remark that in this case the choice of the gain coefficients cannot be based on the stability analysis reported in Sec. \ref{sec:stability_analysis}, because the structural equation should also be taken into account. We set $-\alpha = 2\cdot10^3$ and $-\beta = 7\cdot10^2$, which proved not to lead to stability issues. Then, we carry out a sensitivity analysis with respect to $-\gamma$, ranging from 0 to 1.8.
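As a cross-check of \eqref{eq:analytical_frequency}-\eqref{eq:analytical_frequency_2}, the short illustrative C++ program below computes the first two roots $K_n$ by bisection and evaluates the corresponding in-vacuum and in-fluid frequencies with the beam data of Table \ref{tab:FSI1} (the water density $\rho = 1000$ is an assumption of the sketch); it returns $K_1 \approx 1.875$ and $K_2 \approx 4.694$, and frequencies consistent with Table \ref{tab:FSI2}.
\begin{verbatim}
// First two bending modes of the clamped-free beam, in vacuum and in water.
#include <cmath>
#include <cstdio>

double g(double x) { return 1.0 + std::cos(x) * std::cosh(x); }

// Bisection on [a, b]; assumes g(a) and g(b) have opposite signs.
double bisect(double a, double b)
{
    for (int it = 0; it < 100; ++it)
    {
        const double m = 0.5 * (a + b);
        (g(a) * g(m) <= 0.0 ? b : a) = m;
    }
    return 0.5 * (a + b);
}

int main()
{
    const double pi = std::acos(-1.0);
    // Beam data from Table FSI1.
    const double L = 0.15, bw = 0.01, h = 0.005, E = 6.5e10, rhoS = 2670.0;
    const double A = bw * h, I = bw * h * h * h / 12.0;
    const double rhoF = 1000.0; // assumed water density

    const double brackets[2][2] = {{1.0, 3.0}, {4.0, 6.0}}; // K_1, K_2
    for (const auto& br : brackets)
    {
        const double Kn = bisect(br[0], br[1]);   // 1.875..., 4.694...
        const double fVac = Kn * Kn / (2.0 * pi * L * L)
                            * std::sqrt(E * I / (rhoS * A));
        const double ratio = 1.0 / std::sqrt(1.0 + pi * rhoF * bw
                                             / (4.0 * rhoS * h));
        std::printf("K_n = %.3f  f_vacuum = %.0f Hz  f_water = %.0f Hz\n",
                    Kn, fVac, fVac * ratio);
    }
    return 0;
}
\end{verbatim}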
\begin{table}[h] \centering \begin{tabular}{ccccc} \multicolumn{5}{c}{} \\ \cline{1-5} $\rho_s$ [$kg/m^3$] & $L [m]$ & $b [m]$ & $h [m]$ & $E [Pa]$ \\ \hline 2670 & 0.15 & 0.01 & 0.005 & 6.5e10 \\ \hline \end{tabular} \caption{Geometrical and structural features of the beam.} \label{tab:FSI1} \end{table} \begin{table}[h] \centering \begin{tabular}{ccccc} \multicolumn{5}{c}{} \\ \cline{1-5} & $K_n$ & Vacuum [Hz] \cite{VanEysden2006, Sader1998} & Air (\cite{VanEysden2006, Sader1998}/\cite{Hengstler2013}) [Hz] & Water (\cite{VanEysden2006, Sader1998}/\cite{Hengstler2013}) [Hz] \\ \hline $f_1$ & 1.875 & 177 & 177/178 & 140/135 \\ $f_2$ & 4.694 & 1108 & 1108/1109 & 879/849 \\ \hline \end{tabular} \caption{First two bending modes computed by using the analytical predictions reported in \cite{VanEysden2006, Sader1998} (equation \eqref{eq:analytical_frequency}) and by the numerical simulations reported in \cite{Hengstler2013}, both for air and for water.} \label{tab:FSI2} \end{table} \begin{table}[h] \centering \begin{tabular}{cccccc} \multicolumn{6}{c}{} \\ \cline{1-6} Fluid & $-\gamma$ & $f_1$ (Hz) & $E_{f_1}$ (\%) & $f_2$ (Hz) & $E_{f_2}$ (\%) \\ \hline Air & 0 & 175 & 1.1\% & 1080 & 2.5\% \\ & 0.5 & 175 & 1.1\% & 1080 & 2.5\% \\ & 1 & 175 & 1.1\% & 1080 & 2.5\% \\ & 1.8 & 175 & 1.1\% & 1080 & 2.5\% \\ \hline Water & 0 & 175 & 25\% & 1065 & 21.1\% \\ & 0.5 & 160 & 14.3\% & 985 & 12.1\% \\ & 1 & 150 & 6.7 \% & 925 & 5.2\% \\ & 1.8 & 140 & 0\% & 865 & 1.6\% \\ \hline \end{tabular} \caption{First two bending modes computed by using the present algorithm, both for air and for water, and percentage errors with respect to the analytical predictions reported in \cite{VanEysden2006, Sader1998} (equation \eqref{eq:analytical_frequency}).} \label{tab:FSI3} \end{table} Table \ref{tab:FSI2} reports the first two bending modes, $f_1$ and $f_2$, of the cantilever beam, both in air and in water, computed by using the analytical formula \eqref{eq:analytical_frequency} \cite{VanEysden2006, Sader1998} and by the numerical simulations reported in \cite{Hengstler2013}. Table \ref{tab:FSI3} shows the first two bending modes obtained by applying the FFT algorithm to the average displacement computed in the present simulations, as well as the associated percentage errors with respect to the analytical predictions \eqref{eq:analytical_frequency} \cite{VanEysden2006, Sader1998}. We observe that, for air, the modes predicted with $-\gamma = 0$ are in very good agreement with the \emph{true} values, and that the introduction of the derivative action does not affect the results. On the contrary, for water, we note a large difference with respect to the corresponding \emph{true} values: in particular, the values obtained are practically the natural frequencies of the beam in air. When $-\gamma \neq 0$, we observe a significant improvement of the results, which get closer and closer to the \emph{true} values as $-\gamma$ increases. We speculate that, for water, the derivative action is necessary to properly detect the eigenfrequencies, because the right amount of added mass should be taken into account. On the contrary, for air, the amount of added mass is negligible and the method works well even without the derivative action. To the best of our knowledge, this benchmark has been unexplored within the VBM framework.
However, we observe that very similar problems, such as flexible filaments \cite{Huang2007} and the flapping dynamics of coupled flexible flags in a uniform flow \cite{Son2017}, were successfully simulated with the standard formulation of the VBM. In conclusion, the present findings indicate that the introduction of the derivative action could play an important part in properly taking into account the fluid-structure coupling. \section{Introduction}\label{sec:intro} In the scientific literature we may find works related to Immersed Boundary Methods (IBM) \cite{Mittal2005, Kim2019} since the early 1970s. Many researchers have progressively expanded their attention to these methods for their ability to simulate moving or deforming bodies characterized by complex surface geometries embedded in a fluid region. The main feature that makes the IBM a useful and versatile technique is that the Navier-Stokes equations are discretized over an orthogonal Cartesian grid and the effect of the no-slip boundary condition at the physical surface of the immersed body is obtained by introducing an additional body force term in the momentum equation. The first IBM was introduced by \cite{Peskin1972, Peskin1977, Peskin1981} in order to perform numerical analyses of cardiovascular flows. The fluid flow is described by Eulerian variables defined on a fixed Cartesian mesh; on the other hand, the immersed boundary motion is described on a moving Lagrangian grid. In order to transfer variables between Eulerian and Lagrangian domains, a proper approximation of the Dirac delta function is used. Later, several authors introduced modifications and extensions of the original method. Depending on whether the force is applied to the continuous or discretized Navier-Stokes equations, these methods can be categorized into a continuous forcing approach and a discrete forcing approach, respectively \cite{Mittal2005}. The continuous forcing approach, in addition to the original Peskin method, also includes the Virtual Boundary Method (VBM) introduced by \cite{Goldstein1993}. Since the present work deals with the VBM, we focus on the literature review of this method. \cite{Goldstein1993, Goldstein1995, Goldstein1998} developed and validated the VBM within a Spectral framework. Such works employ control-theory methodologies to impose that, at the body location, the fluid velocity coincides with that of the solid. In particular, the method employs a feedback forcing scheme, based on a Proportional-Integral (PI) controller, to enforce the no-slip condition at boundaries immersed in the fluid domain. Two free (negative) gains, one related to the proportional action and the other related to the integral action, are used in the standard formulation of the virtual forcing. In these works, highly accurate spectral interpolation of the velocities from the grid points to the virtual boundary points was implemented, whilst linear interpolation was used to distribute the effect of the forcing term to the nearby grid points. Examples including 2D flow past a cylinder, a 3D turbulent channel, and turbulent flow over riblets are presented. \cite{Saiki1996, Saiki1997} formulated an improved version of the method. The fluid velocities were here interpolated to the virtual boundary points by means of bilinear interpolation. On the other hand, the effect of the virtual boundary force was extrapolated back to the grid points by area-weighted averages.
The method was coupled with high-order Finite Difference (FD) schemes so as to suppress the numerical oscillations caused by the forcing observed in the Chebyshev Spectral method used in \cite{Goldstein1993, Goldstein1995}. The method was used to simulate stationary, rotating, and oscillating cylinders in uniform flow as well as the transition process in a flat plate boundary layer. In these works \cite{Goldstein1993, Goldstein1995, Goldstein1998, Saiki1996, Saiki1997}, it was highlighted that the VBM is affected by very strict time-step restrictions, since the amplitude of the feedback forcing needs to be large to accurately enforce the boundary conditions, resulting in an extremely stiff system. \cite{Goldstein1993} performed the stability analysis for the case in which the forcing term is computed explicitly with a second-order Adams-Bashforth scheme. Very small Courant-Friedrichs-Lewy ($CFL$) numbers (of order $10^{-2}$) were reported in various cases, making the method unattractive. A partial and preliminary improvement of the stability limit was obtained by \cite{Fadlun2000} when the proportional action in the feedback forcing scheme is computed implicitly in time ($CFL \approx 10^{-1}$). Further improvements were provided by \cite{Lee2003}, which extended the stability analysis performed by \cite{Goldstein1993} to the Backward Euler scheme (BDF1), the third-order Adams-Bashforth scheme, and second-order and third-order Runge-Kutta schemes. Analytical predictions for the stability limits as functions of the forcing gains were provided and validated using a Spectral method against three cases: the 3D turbulent flow caused by a surface-mounted box, the flow around an impulsively starting cylinder, and the rough-wall turbulent boundary layer flow. It was shown that when the third-order Runge-Kutta scheme is adopted the VBM performs properly with $CFL \approx 1$. \cite{Shin2008} coupled Peskin's regularized delta function approach with the VBM in order to relax the stability limits, i.e., to achieve large $CFL$ numbers, and to improve the transfer process between Eulerian and Lagrangian domains. The stability analysis performed by \cite{Lee2003} was significantly simplified, and precise stability limits related to the BDF1 scheme for 2-point, 3-point, 4-point and 6-point regularized delta functions were provided. The method was implemented in a Finite-Difference context and applied to the 2D flow past stationary and oscillating cylinders. \cite{Fadlun2000, Iaccarino2003} showed that from a physical viewpoint the forcing scheme can be interpreted as a simple damped oscillator, where the stiffness and the damping are related to the integral and proportional actions, respectively. Within this framework, \cite{Margnat2009} rigorously investigated the behaviour of the VBM as a second-order damped control system. The natural frequency and the damping coefficient are introduced as driving parameters of the method in place of the usual gain coefficients. Reliable insights related to the role of each parameter, as well as to the time-step optimisation, were provided by considering simulations of flows involving sharp edges. It should be highlighted that the stability of the method depends not only on the values of the free constants but also on the flow geometry \cite{Fadlun2000, Lee2003, Iaccarino2003, Margnat2009}. Thus, at present, no general rule for selecting the optimal values of the forcing gains is available.
We also report the contribution of \cite{Lee2006}, which used the ``area-weighted'' VBM \cite{Saiki1996} within a Finite-Volume framework: simulations related to the 2D flow past a cylinder in several different configurations were provided. In conclusion, we mention contributions focused on special cases of the VBM obtained using only one of the two controller actions: see, e.g., \cite{Lai2000} (proportional controller) and \cite{Khadra2000} (integral controller). As discussed above, the VBM has proved efficient in simulating the presence of fixed or moving solid bodies immersed in a fluid domain. Moreover, the VBM has been successfully used within a Fluid-Structure Interaction (FSI) framework: see, e.g., \cite{Huang2007, Shin2010, Song2011, Qin2012, Uddin2015, Son2017} for examples. Recently, \cite{Park2017} investigated the coupling between fluid-flexible body interactions and heat transfer, opening the door towards new multiphysics scenarios. In all these works, FD methods are used for the numerical discretization of both the flow and the structural governing equations. As shown by the literature review, the VBM has been extensively investigated within Finite Difference and Spectral frameworks, although obviously other space approximations are possible. Here, we focus on Finite Volume (FV) methods, which have been used within discrete forcing approach IBMs \cite{Constant2017, Jasak2014}. To the best of our knowledge, except for \cite{Lee2006}, the application of the VBM in a FV framework has been unexplored. Moreover, we observe that in \cite{Lee2006} some relevant computational parameters, such as the time step size, the $CFL$ number, the number of Lagrangian points, error estimation, and a sensitivity analysis with respect to the gain coefficients, are missing. In this manuscript, we intend to fill this gap by proposing an exhaustive analysis of the features of the VBM within a FV framework. We will show that for similar computational configurations (mesh refinement, time step size, time discretization scheme, gain coefficients, number of Lagrangian points) FV and FD methods provide significantly different results. Also, we propose to modify the classic feedback forcing scheme by introducing a derivative action in order to obtain a Proportional-Integral-Derivative (PID) controller, as contemplated in \cite{Goldstein1993}. In order to highlight the role played by the derivative action, the stability analysis originally performed by \cite{Shin2008} for the BDF1 time scheme will be extended. Next, we will also consider the BDF2 time scheme which, to the best of our knowledge, has not been explored within the VBM framework. We will show that when the BDF1 scheme is used the derivative action deteriorates the stability characteristics of the system. On the contrary, when the BDF2 scheme is used, the derivative action improves the stability characteristics of the system and, in particular, allows one to obtain a stability region wider than the widest one related to the BDF1 scheme, which is obtained with the usual PI controller. Our approach is validated against numerical data available in the literature for a stationary/moving 2D circular cylinder in several configurations. Finally, we will present a FSI benchmark, related to the frequency response of a cantilever beam coupled with a surrounding fluid, where the introduction of the derivative action plays a crucial role in obtaining reliable results.
All the computational results presented in this article have been obtained with OpenFOAM\textsuperscript{\textregistered}\cite{Weller1998}, an open source finite volume C++ library widely used by commercial and academic organizations. See \cite{Constant2017, Jasak2014} for IBM techniques implemented in OpenFOAM\textsuperscript{\textregistered}. The FSI benchmark has been implemented by coupling OpenFOAM with an open source finite element C++ library, \emph{deal.ii} \cite{dealii}, within a partitioned approach framework. An important outcome of this work is that all the software tools used for the preparation of this work are incorporated in open-source libraries\footnote{\url{https://mathlab.sissa.it/cse-software}} and are therefore readily available to the scientific community. This work is organized as follows. In Sec. \ref{sec:problem_def}, we introduce the continuous formulation of the VBM. In Sec. \ref{sec:space_discrete}, we detail our strategy for space discretization, which combines the VBM with a Finite Volume method. The stability analysis is reported in Sec. \ref{sec:stability_analysis}, while numerical results are presented in Sec. \ref{sec:numerical_results}. Finally, conclusions and perspectives are drawn in Sec. \ref{sec:conclusion}.
\section{Introduction} \begin{figure} \centering \includegraphics[width=\linewidth]{punch_v2.pdf} \caption{Qualitative results of our robust object pose estimation on the Occluded-LINEMOD (top and middle) and the YCB-Video (bottom) datasets. \textbf{Left}: prediction of bounding boxes and landmarks of the target object in a test image (zoomed view). \textbf{Right}: prediction of 6DOF poses without post-processing or refinement.} \label{fig:punch} \end{figure} Object pose estimation is the task of inferring the relative orientation and position between the target object and the observer. Such inference is crucial in many vision applications such as robotic manipulation~\cite{Zuo2019craves, zhu2014single, collet2011moped}, augmented reality~\cite{marchand2015pose, crivellaro2018robust}, autonomous driving~\cite{chen2017multi, wu20196d, xu2018pointfusion} and spacecraft navigation~\cite{Cassinis2019review, sharma2018pose}. The problem can be simplified if depth information is available~\cite{michel2017global, wang2019densefusion, He2020pvn3d, chen2021fs}. However, depth sensors are not always practical. Pose estimation from images is thus an important research problem. In this paper we consider the problem of object pose estimation from a single RGB image. Our focus lies in the base estimator, \ie, from input image to output pose, before any refinement step. For the base estimator, a number of works~\cite{Kehl2017ssd,Xiang2018posecnn,poirson2016fast,do2018deep} adopt direct regression approaches which map the input image that contains the target object to its 6DOF pose. However, such approaches tend to be sensitive to occlusions and have been observed to behave similarly to image retrieval~\cite{sattler2019understanding}. Rather than directly regressing the pose, two-stage approaches~\cite{Hu2019segmentation, Jafari2018ipose, Li2019cdpn, Oberweger2018making, Park2019pix2pose, Peng2019pvnet, rad2017bb8, Zakharov2019dpod, song2020hybridpose, pavlakos20176, tekin2018real} first predict landmarks on the object to establish 2D-3D correspondences, then use a Perspective-n-Point (PnP)-like algorithm to solve for the pose. Previous results suggest that two-stage methods are generally more accurate~\cite{Oberweger2018making, Hu2020single}. Their strengths derive from training the model with richer supervision signals (\ie, groundtruth landmarks) rather than just the pose, and from injecting tolerance towards inaccurate landmark predictions via robust PnP. However, two-stage approaches are not intrinsically immune to occlusion. Current works to improve robustness often take a pixel-wise or patch-wise approach~\cite{Peng2019pvnet, Oberweger2018making, Hu2019segmentation, Jafari2018ipose, Li2019cdpn, Park2019pix2pose}, \ie, generating an ensemble of predictions from each image pixel or patch and aggregating them to obtain a more robust final prediction. Although ensembling can mitigate some occlusion-induced inaccuracies, landmark coherence is easily disrupted by large and novel occlusions, because the network predicts landmarks independently and consistency is only imposed by the PnP algorithm, which is not part of the network~\cite{Hu2020single}. In this paper, we aim to address the shortcomings of current two-stage approaches. Firstly, we enforce occlusion-robust feature learning to enable models to deal with novel and severe occlusions. Secondly, a good pose representation should produce landmarks that are consistent with the object shape, rather than predicting individual landmarks independently.
To this end, during model training we encourage holistic pose representation learning in order to strengthen the connections between landmark predictions and enhance their coherence. \paragraph{Our contributions} We propose the Robust Object Pose Estimation (ROPE) framework, which achieves excellent robustness against occlusions without the need for pose refinement. As shown in Figure~\ref{fig:punch}, our model predicts landmarks and pose robustly without any post-processing. To enforce occlusion-robust feature learning, we combine hide-and-seek~\cite{Singh2017hide}, random erasing~\cite{Zhong2020random} and batch augmentation~\cite{Hoffer2020augment} and propose an occlude-and-blackout batch augmentation technique for model training. To encourage the model to learn holistic pose representations, we propose a multi-precision supervision architecture, which boosts the model's ability to extrapolate occluded object parts, leading to spatially more accurate and structurally more coherent landmark predictions. To alleviate the need for pose refinement we further utilise the multi-precision supervision architecture to filter landmark predictions with a simple verification step. We conduct extensive experiments to verify the efficacy of the proposed techniques, and compare our method to SOTA object pose estimators. In terms of the ADD(-S) metric, our method outperforms all contestants on LINEMOD~\cite{Hinterstoisser2012model} and all non-refinement methods on YCB-Video~\cite{Xiang2018posecnn}. Without any refinement, it is also competitive with SOTA methods that include a refinement step. Compared to methods that rely on large amounts of synthetic training images, we show that ROPE is highly data-efficient. \section{Related works} Traditional object pose estimation methods~\cite{gu2010discriminative, hinterstoisser2011gradient, huttenlocher1993comparing, Hinterstoisser2012model, lepetit2005monocular, lowe1999object} rely on hand-crafted features or template matching techniques, which are susceptible to occlusions or other appearance changes. Recent advances in deep learning have nurtured many learning-based methods. We briefly survey a few prominent works from one-stage, two-stage and other methods. PoseNet~\cite{kendall2015posenet} was a pioneering work on using a deep model to directly regress the 6DOF pose from an image. Although it was proposed for camera localisation rather than object pose estimation, its principle applies to both tasks. SSD-6D~\cite{Kehl2017ssd} combines an SSD detector~\cite{Liu2016ssd} and a pose regressor in a single network. RenderForCNN~\cite{su2015render} uses an image renderer to synthesize training images as well as groundtruth poses for training a pose regressor. Compared to one-stage approaches, two-stage methods typically predict intermediate features in the first stage, and then solve for the pose in the second stage. This mechanism receives more attention because its intermediate feature learning facilitates more potential improvements. For example, Tekin \etal~\cite{tekin2018real} apply the YOLO object detector~\cite{redmon2017yolo9000} in the first stage to predict object landmarks. Hu \etal~\cite{Hu2019segmentation} predict landmark locations for each small patch of the input image. They then aggregate all patch predictions to establish 2D-3D correspondences for solving the pose. Oberweger \etal~\cite{Oberweger2018making}, on the other hand, only use patches of images to train the landmark predictor.
The idea is that at least some patches are not corrupted by the occluder and could produce accurate landmark heatmaps. The ensemble of heatmaps predicted from many patches is combined to obtain the final landmarks. PVNet~\cite{Peng2019pvnet} predicts the object mask and, for each pixel within the mask, unit vectors that point to the landmarks. It then utilises a generalised Hough voting scheme~\cite{ballard1981generalizing} to determine the distribution of the landmarks. There are other notable works tackling object pose estimation from different perspectives. Sundermeyer \etal~\cite{sundermeyer2020augmented, sundermeyer2020multi} use autoencoders to learn implicit pose representations by reconstructing the input objects. Cai and Reid~\cite{Cai2020reconstruct} propose a 3D model-free pose estimator via 2D-3D mapping. To turn two-stage methods into a single-stage pipeline, Hu \etal~\cite{Hu2020single} and Wang \etal~\cite{wang2021gdr} propose deep architectures to replace the PnP algorithm in the second stage, while Chen \etal~\cite{Chen2020end} propose a differentiable PnP method to achieve end-to-end learning. \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{hm_arch.png} \caption{Illustration of an occlude-and-blackout augmented example and the architecture of our heatmap prediction network. For clarity, the backbone and the RPN are represented in the RoI Align module; other modules in the Mask R-CNN framework, such as the box head, as well as relevant losses, are not shown. Our model replaces the original mask head with three keypoint heads.} \label{fig:hm_arch} \end{figure*} \section{The ROPE framework} We focus on the problem of 6DOF object pose estimation from a single RGB image. Given an image $I$ and a known 3D point cloud $\{ \bm{z}_i \}^{n}_{i=1}$ of the target object, we first predict a set of 2D landmarks $\{ \bm{x}_i \}^{n}_{i=1}$ in $I$ that correspond to the point cloud, then solve the pose $\bm{y}$ via a RANSAC-based PnP solver from filtered 2D-3D correspondences. \subsection{Robust landmark prediction} Our 2D landmark prediction is based on the Mask R-CNN~\cite{He2018mask} framework. The specific architecture and training scheme are shown in Figure~\ref{fig:hm_arch}. A basic improvement is substituting the original backbone network with HRNet~\cite{Sun2019deep, Wang2020deep} to exploit its high-resolution feature maps, which preserve rich semantic information and increase spatial accuracy. Next, we describe two key innovations to boost occlusion robustness and landmark coherence. \subsubsection{Occlude-and-blackout batch augmentation} Fundamentally, pose estimation for a typical 3D object suffers from the problem of self-occlusion. Landmarks on the opposite side of the object are hard to predict since their visual features are hidden. In fact, a practical pose estimator must also contend with additional occlusions due to, \eg, other objects or scene elements that further conceal part of the target object from view. It is thus important that the landmark predictor infers robust pose information under the potentially different kinds of occlusions imposed on the object. Inspired by the ideas of random erasing~\cite{Zhong2020random}, hide-and-seek~\cite{Singh2017hide}, and batch augmentation~\cite{Hoffer2020augment} (none of which was originally developed for pose estimation), we develop a novel Occlude-and-blackout Batch Augmentation (OBA) to promote robust landmark prediction under occlusion.
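As a preview of the recipe detailed next, the per-image part of OBA amounts to the following minimal sketch (illustrative only; the grid size and replacement probability are hypothetical choices, and the batch-level duplication described below is omitted):
\begin{verbatim}
import numpy as np

def occlude_and_blackout(img, bbox, grid=8, p=0.3, rng=np.random):
    # img: (H, W, 3) uint8 image; bbox: (x0, y0, x1, y1) of the object.
    H, W = img.shape[:2]
    x0, y0, x1, y1 = bbox
    out = np.zeros_like(img)               # blackout everything ...
    out[y0:y1, x0:x1] = img[y0:y1, x0:x1]  # ... except the object box
    ph, pw = max((y1 - y0) // grid, 1), max((x1 - x0) // grid, 1)
    for gy in range(y0, y1, ph):
        for gx in range(x0, x1, pw):
            if rng.rand() >= p:
                continue
            hh, ww = min(ph, y1 - gy), min(pw, x1 - gx)
            if rng.rand() < 0.5:           # replace the patch with noise
                out[gy:gy+hh, gx:gx+ww] = rng.randint(0, 256, (hh, ww, 3))
            else:                          # or with a random image patch
                sy = rng.randint(0, H - hh + 1)
                sx = rng.randint(0, W - ww + 1)
                out[gy:gy+hh, gx:gx+ww] = img[sy:sy+hh, sx:sx+ww]
    return out
\end{verbatim}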
For each training batch, after performing regular data augmentations including rotation, translation, scaling and color jitter, we extend the batch by including a copy of itself with extra augmentations, namely occlude and blackout. Similar to hide-and-seek, we divide the image region enveloped by the object bounding box into a grid of patches and replace each patch, with a certain probability, with either noise or a random patch from elsewhere in the same image. We then black out everything outside the object bounding box. An example is shown in Figure~\ref{fig:hm_arch}. With random occlusions the network is forced to infer the pose information from a partial view of the object. Erasing the background helps reduce overfitting and enhances generalisability. Moreover, the OBA-augmented images are fed to the network together with the original ones in the same batch, and supervised by the same groundtruth labels. This encourages the network to learn occlusion-robust and background-invariant representations. If the potential occluders are known beforehand, injecting occluder-specific information in the training phase can significantly improve performance~\cite{Oberweger2018making}. However, this knowledge is often not available in practice. Compared to methods that augment training images with known objects~\cite{Jafari2018ipose, li2017deep, alhaija2017augmented}, our method is occluder-agnostic, yet it generalises well on the testing sets. \subsubsection{Multi-precision supervision} Current heatmap-based landmark prediction networks use a single groundtruth Gaussian heatmap per landmark for training. The variance of these heatmaps is a hyperparameter which requires careful tuning: a smaller variance may increase prediction accuracy for each individual landmark, but risks structural inconsistency in the case of occlusion due to the lack of a holistic understanding of the object pose. To address this issue we propose a Multi-Precision Supervision (MPS) architecture: using three keypoint heads to predict groundtruth Gaussian heatmaps with different variances. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sps_vs_mps.png} \caption{Conceptual illustration of holistic representation learning via MPS. Note the difference in the information learned by the feature section $S1$.} \label{fig:sps_vs_mps} \end{figure} In Mask R-CNN, the output feature map of the backbone is aligned with RoI proposals and the RoI features are then passed to the mask head. We replace the mask head with three keypoint heads to regress the landmark heatmaps, as shown in Figure~\ref{fig:hm_arch}. Each keypoint head consists of 8 convolutional layers and 2 upsampling layers. In the training phase, the groundtruth heatmaps $\Phi^*$ are constructed as 2D Gaussian feature maps centred on the groundtruth 2D landmarks $\bm{x}^*$ and spread with variance $\sigma^2$. We use $\sigma$ equal to 8, 3 and 1.5 pixels respectively for the three keypoint heads, thus creating low, medium and high precision target heatmaps $\Phi^*$. The loss function is \begin{equation} L_{JS} = \text{JSD}(\phi(\Phi), \Phi^*), \end{equation} where JSD$(\cdot)$ is the Jensen–Shannon divergence~\cite{Fuglede2004jensen} and $\phi(\cdot)$ is the channel-wise softmax function, \ie, each channel is normalised to be a probability distribution over the pixels. In the testing phase, we only use the predicted heatmaps $\Phi$ from the high-precision keypoint head to obtain the landmark coordinates $\bm{x}$.
Instead of simply taking the ``argmax'' of $\Phi$ as $\bm{x}$, we treat the normalised heatmaps $\phi(\Phi)$ as probability maps and take their spatial expectations as $\bm{x}$. This has two advantages over the ``argmax'' approach: it has higher accuracy because it is continuous rather than discrete, and it is more robust to outlying pixel values. Although only the high-precision heatmaps are used to compute the landmark coordinates, the medium and low-precision keypoint heads play an important role in the pipeline. Firstly, having target heatmaps with different variances $\sigma^2$ helps the model adapt to objects of different sizes. This also relieves the need for tuning $\sigma$ as a hyperparameter for each object. Secondly, heatmaps from the medium-precision keypoint head are used as an auxiliary for filtering predicted landmarks, as will be explained in the next subsection. Lastly and most importantly, MPS boosts holistic representation learning in the feature maps and increases landmark coherence. A conceptual illustration is shown in Figure~\ref{fig:sps_vs_mps}. In Figure~\ref{fig:sps_vs_mps}, we take one section of the feature tensor, $S1$, for examination. With single-precision supervision, $S1$ is only responsible for activating the region $A1$ in the predicted heatmap of Landmark 1. It does not learn useful information about Landmark 2. In the MPS scenario, besides learning about Landmark 1 via $A1$ and $A3$, $S1$ is also exposed to the receptive field of $A4$ from Landmark 2. This forces $S1$ to incorporate relevant information and become more ``aware'' of the location of Landmark 2. The overall effect is that each part of the feature tensor not only learns the necessary information to predict a local landmark, but also integrates knowledge of other landmarks to understand a wider context, thus learning a more holistic representation of the target object pose. A holistic representation enables heatmap predictions to be more robust against occlusions. As shown in Figure~\ref{fig:hm_comp}, when trained without MPS, novel occlusions result in confused heatmap activations. On the other hand, a holistic representation learned via MPS is able to produce stable heatmaps for the occluded landmarks. This also boosts the structural consistency of landmark predictions, as shown in Figures~\ref{fig:ablation1} and \ref{fig:mps} and further discussed in Section~\ref{sec:ablation}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{hm_comp.pdf} \caption{The effect of holistic representation learning in heatmap prediction. Predictions of heatmap 1 are from a model (MV1) trained without MPS, while those of heatmap 2 are from the full model (original) with MPS. Details of the models (MV1 and original) are provided in Section~\ref{sec:mvs}. } \label{fig:hm_comp} \end{figure} \subsection{Landmark filtering} Many pose estimation pipelines include a refinement stage which is either optimisation-based \cite{Kehl2017ssd, Chen2019satellite, song2020hybridpose} or learning-based \cite{rad2017bb8, li2020deepim, labbe2020cosypose, Zakharov2019dpod}. While such post-processing is effective in boosting prediction accuracy, it adds an additional computation burden, which is a disadvantage especially for real-time applications.
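Before detailing our filtering step, we note that the spatial-expectation decoding described above amounts to only a few lines (a minimal numpy sketch, not our training code):
\begin{verbatim}
import numpy as np

def soft_argmax(heatmap):
    # heatmap: one channel of phi(Phi), a 2D array summing to 1.
    # Returns (x, y) in (sub-)pixel coordinates; unlike argmax, this
    # is continuous in the heatmap values and tolerant of isolated
    # outlying pixels.
    h, w = heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return float(np.sum(xs * heatmap)), float(np.sum(ys * heatmap))
\end{verbatim}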
In order to boost prediction accuracy while at the same time avoiding heavy post-processing computation, we make use of the multi-head design of MPS to select high-quality landmark predictions before passing them to the PnP solver, thus alleviating the need for significant pose refinement. Specifically, for an image $I$, let $\{\bm{x}_i\}$ denote the set of predicted landmark coordinates from the high-precision keypoint head, and $\{\bm{x}_i^m\}$ denote the set of landmark coordinates predicted from the medium-precision keypoint head. We then select a subset \begin{equation} \{\bm{x}_i \mid \lVert \bm{x}_i - \bm{x}_i^m \rVert_2 \leq \epsilon \} \end{equation} for the PnP solver to compute the pose. In other words, a landmark prediction from the high-precision head will only be selected for the pose solver if it is verified by the corresponding medium-precision prediction, where $\epsilon$ is the verification threshold. In the case that the selected subset has fewer than 4 points, which is the minimum number required by a PnP solver, we instead use the 4 points with the smallest $\lVert \bm{x}_i - \bm{x}_i^m \rVert_2$ values as the subset. While in this work we focus on the base pose estimator and report its performance without any refinement, our pipeline can easily be extended to stack one or multiple refiners such as \cite{Manhardt2018deep, Li2018deepim, sundermeyer2020multi}. \section{Experiments} In this section we conduct experiments to validate the effectiveness of ROPE as well as to compare it to SOTA methods for RGB image-based pose estimation. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{ablation1.pdf} \caption{Comparing performances of model variants on the Occluded-LINEMOD dataset with qualitative examples. } \label{fig:ablation1} \end{figure*} \subsection{Datasets and metrics} We choose the widely used LINEMOD~\cite{Hinterstoisser2012model}, its extension Occluded-LINEMOD~\cite{Brachmann2014learning} and the YCB-Video~\cite{Xiang2018posecnn} datasets for our experiments. For LINEMOD, we follow the convention of previous works~\cite{rad2017bb8, tekin2018real, Peng2019pvnet, Zakharov2019dpod} by using 15\% of the images of each object as the training set and the remaining 85\% as the testing set. The training images are selected in such a way that the relative rotation between them is larger than a threshold. For each object, we additionally use 1312 rendered images of the isolated object for training, which are obtained from~\cite{hodan2018bop}. For Occluded-LINEMOD, the whole dataset is used for testing, while images of the corresponding objects in LINEMOD, as well as the rendered images, are used for training. We also follow the protocol of \cite{Xiang2018posecnn, Oberweger2018making} for the YCB-Video dataset: we use 80 out of the 92 video sequences as well as the 80000 synthetic images for training, and test on 2949 key frames from the reserved 12 sequences. We report the ADD(-S) metric, which combines the ADD metric~\cite{Hinterstoisser2012model} for asymmetric objects and the ADD-S metric~\cite{Xiang2018posecnn} for symmetric ones. The ADD metric computes the percentage of correctly estimated poses. A pose is considered correct if the object model points, when transformed by the predicted and groundtruth poses respectively, have an average distance of less than 10\% of the model diameter. For ADD-S, this distance is instead computed based on the closest point distance.
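For reference, both metrics reduce to a few lines (an illustrative sketch; \texttt{pts} is the $n \times 3$ model point cloud and each pose is given as a rotation matrix plus a translation vector):
\begin{verbatim}
import numpy as np

def add(pts, R_gt, t_gt, R_pr, t_pr):
    # ADD: mean distance between corresponding transformed model points.
    d = (pts @ R_gt.T + t_gt) - (pts @ R_pr.T + t_pr)
    return np.mean(np.linalg.norm(d, axis=1))

def add_s(pts, R_gt, t_gt, R_pr, t_pr):
    # ADD-S (symmetric objects): mean closest-point distance.
    p_gt = pts @ R_gt.T + t_gt
    p_pr = pts @ R_pr.T + t_pr
    d = np.linalg.norm(p_gt[:, None, :] - p_pr[None, :, :], axis=2)
    return np.mean(d.min(axis=1))

# A pose counts as correct if the value is <= 0.1 * model diameter.
\end{verbatim}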
The ADD(-S) metric is preferred over the 2D projection metric~\cite{brachmann2016Uncertainty} because it directly measures the alignment discrepancy in 3D. For the YCB-Video dataset we also report the AUC metric proposed in~\cite{Xiang2018posecnn} and adopted in \cite{Oberweger2018making, Peng2019pvnet}. The AUC metric is the area under the ADD(-S) curve when varying the distance threshold for a pose to be deemed correct. We vary this threshold from 0 to 10 cm, in accordance with~\cite{Xiang2018posecnn}. \subsection{Implementation details} For each object model we apply the farthest point sampling (FPS) algorithm~\cite{Peng2019pvnet} on the 3D point cloud and select 11 landmarks. The groundtruth 2D landmarks are then obtained by projecting the 3D landmarks with the groundtruth camera pose and intrinsics. We use ImgAug~\cite{imgaug} for regular data augmentations including rotation, translation, scaling and color jitter before the OBA. We use the Adam optimizer~\cite{Kingma2015adam} and train the model for 250 epochs on LINEMOD and 200 epochs on Occluded-LINEMOD and YCB-Video. We set the landmark verification threshold $\epsilon$ to 1 pixel for all datasets. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{coherence_example.pdf} \caption{A toy example for the intuition of the incoherence measure $c_i$. The mean residual $r_i$ is 0.608 for both prediction 1 (blue) and prediction 2 (green). However, their mean incoherence measures $c_i$ are 0.604 and 0.074, respectively. Although both predictions are identical in terms of accuracy, prediction 2 has much better coherence, as the green triangle is much more similar in shape to the groundtruth than the blue one.} \label{fig:cohere} \end{figure} \subsection{Ablation studies} \label{sec:ablation} We conduct various ablation tests to investigate the effect of the proposed OBA and MPS. \subsubsection{Model variations} \label{sec:mvs} To verify the efficacy of OBA and MPS, we create two Model Variants (MV) of ROPE: \begin{enumerate} \item (MV1: w/ OBA, w/o MPS) While keeping everything else of the original ROPE unchanged, we remove the low and medium-precision keypoint heads, and train the one-head model with high-precision groundtruth heatmaps. \item (MV2: w/o OBA, w/o MPS) On top of MV1, we further remove OBA in training. Note that \emph{common data augmentations including rotation, translation, scaling and color jitter, are still kept}. \end{enumerate} Figure~\ref{fig:ablation1} shows the overall ADD(-S) on the Occluded-LINEMOD dataset, as well as qualitative results of all model variants. Without both OBA and MPS, object detection can easily fail and landmark prediction is precarious. We can clearly see that the occlusion-robust feature learning enforced by OBA significantly increases the reliability of object detection and landmark prediction. In addition, by comparing MV1 and the original model, it is obvious that MPS boosts the structural consistency of the predicted landmarks, especially in occluded regions. This shows that a holistic representation induced by MPS enhances landmark coherence, strengthening the model's ability to extrapolate to the occluded part of the object. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{bubble.png} \caption{Comparing the results of training with and without MPS on LINEMOD, while keeping all else equal. The vertical location of each bubble represents the mean prediction residual $r_i$ of all landmarks in the testing sets. The size of each bubble indicates the mean incoherence $c_i$.
} \label{fig:mps} \end{figure} \begin{table*}[t] \begin{center} \begin{tabular}{l|cccccc|cccc} \hline \multicolumn{1}{c|}{\multirow{3}{*}{ADD(-S)} } &\multicolumn{6}{c|}{Without refinement} &\multicolumn{4}{c}{With refinement} \\ \cline{2-11} & \multicolumn{1}{c}{PVNet} & \multicolumn{1}{c}{Pix2Pose} & \multicolumn{1}{c}{DPOD} & \multicolumn{1}{c}{CDPN} & \multicolumn{1}{c}{GDR} & \multicolumn{1}{c|}{Ours} & \multicolumn{1}{c}{SSD-6D} & \multicolumn{1}{c}{DPOD+} & \multicolumn{1}{c}{HybridPose} & \multicolumn{1}{c}{DeepIM} \\ & \multicolumn{1}{c}{\cite{Peng2019pvnet}} & \multicolumn{1}{c}{\cite{Park2019pix2pose}} & \multicolumn{1}{c}{\cite{Zakharov2019dpod}} & \multicolumn{1}{c}{\cite{Li2019cdpn}} & \multicolumn{1}{c}{\cite{wang2021gdr}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{\cite{Kehl2017ssd}} & \multicolumn{1}{c}{\cite{Zakharov2019dpod}} & \multicolumn{1}{c}{\cite{song2020hybridpose}} & \multicolumn{1}{c}{\cite{Li2018deepim}} \\ \hline ape & 43.62 & 58.10 & 53.28 & 64.38 & - & \textbf{81.52} & 65.00 & \textbf{87.70} & 63.10 & 77.00 \\ benchvise & 99.90 & 91.00 & 95.34 & 97.77 & - & \textbf{100.00} & 80.00 & 98.50 & \textbf{99.90} & 97.50 \\ cam & 86.86 & 60.90 & 90.36 & 91.67 & - & \textbf{96.86} & 78.00 & \textbf{96.10} & 90.40 & 93.50 \\ can & 95.47 & 84.40 & 94.10 & 95.87 & - & \textbf{98.72} & 86.00 & \textbf{99.70} & 98.50 & 96.50 \\ cat & 79.34 & 65.00 & 60.38 & 83.83 & - & \textbf{94.71} & 70.00 & \textbf{94.70} & 89.40 & 82.10 \\ driller & 96.43 & 76.30 & 97.72 & 96.23 & - & \textbf{99.01} & 73.00 & \textbf{98.80} & 98.50 & 95.00 \\ duck & 52.58 & 43.80 & 66.01 & 66.76 & - & \textbf{85.35} & 66.00 & \textbf{86.30} & 65.00 & 77.70 \\ eggbox* & 99.15 & 96.80 & 99.72 & 99.72 & - & \textbf{100.00} & \textbf{100.00} & 99.90 & \textbf{100.00} & 97.10 \\ glue* & 95.66 & 79.40 & 93.83 & \textbf{99.61} & - & 99.42 & \textbf{100.00} & 96.80 & 98.80 & 99.40 \\ holepuncher & 81.92 & 74.80 & 65.83 & 85.82 & - & \textbf{90.39} & 49.00 & 86.90 & \textbf{89.70} & 52.80 \\ iron & 98.88 & 83.40 & 99.80 & 97.85 & - & \textbf{100.00} & 78.00 & \textbf{100.00} & \textbf{100.00} & 98.30 \\ lamp & 99.33 & 82.00 & 88.11 & 97.89 & - & \textbf{99.42} & 73.00 & 96.80 & \textbf{99.50} & 97.50 \\ phone & 92.41 & 45.00 & 74.24 & 90.75 & - & \textbf{97.64} & 79.00 & 94.70 & \textbf{94.90} & 87.70 \\ \hline average & 86.27 & 72.38 & 82.98 & 89.86 & 93.70 & \textbf{95.61} & 76.69 & \textbf{95.15} & 91.36 & 88.60 \\ \hline \end{tabular} \end{center} \caption{ Test accuracy on the LINEMOD dataset in terms of the ADD(-S) metric. Objects with a ``*'' sign are considered as symmetric objects and the ADD-S metric is used. The result of SSD-6D is obtained from \cite{tekin2018real}.
The result of HybridPose is from its fourth version update in~\cite{song2020hybridposev4}.} \label{tab:lm} \end{table*} \begin{table*}[h] \begin{center} \begin{tabular}{l|cccccccc|cc} \hline \multicolumn{1}{c|}{\multirow{3}{*}{ADD(-S)} } &\multicolumn{8}{c|}{Without refinement} &\multicolumn{2}{c}{With refinement} \\ \cline{2-11} & \multicolumn{1}{c}{HM} & \multicolumn{1}{c}{PVNet} & \multicolumn{1}{c}{Hu} & \multicolumn{1}{c}{Pix2Pose} & \multicolumn{1}{c}{DPOD} & \multicolumn{1}{c}{Hu2} & \multicolumn{1}{c}{GDR} & \multicolumn{1}{c|}{Ours} & \multicolumn{1}{c}{DPOD+} & \multicolumn{1}{c}{HybridPose} \\ & \multicolumn{1}{c}{\cite{Oberweger2018making}} & \multicolumn{1}{c}{\cite{Peng2019pvnet}} & \multicolumn{1}{c}{\cite{Hu2019segmentation}} & \multicolumn{1}{c}{\cite{Park2019pix2pose}} & \multicolumn{1}{c}{\cite{Zakharov2019dpod}} & \multicolumn{1}{c}{\cite{Hu2020single}} & \multicolumn{1}{c}{\cite{wang2021gdr}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{\cite{Zakharov2019dpod}} & \multicolumn{1}{c}{\cite{song2020hybridpose}} \\ \hline ape & 15.30 & 15.81 & 12.10 & 22.00 & - & 19.20 & \textbf{39.30} & 28.03 & - & \textbf{20.90} \\ can & 44.70 & 63.30 & 39.90 & 44.70 & - & 65.10 & \textbf{79.20} & 75.06 & - & \textbf{75.30} \\ cat & 9.33 & 16.68 & 8.20 & 22.70 & - & 18.90 & 23.50 & \textbf{25.53} & - & \textbf{24.90} \\ driller & 55.40 & 65.65 & 45.20 & 44.70 & - & 69.00 & \textbf{71.30} & 61.86 & - & \textbf{70.20} \\ duck & 19.60 & 25.24 & 17.20 & 15.00 & - & 25.30 & \textbf{44.40} & 19.07 & - & \textbf{27.90} \\ eggbox* & 23.00 & 50.17 & 22.10 & 25.20 & - & 52.00 & \textbf{58.20} & 45.62 & - & \textbf{52.40} \\ glue* & 41.40 & 49.62 & 35.80 & 32.40 & - & 51.40 & 49.30 & \textbf{56.92} & - & \textbf{53.80} \\ holepuncher & 20.40 & 39.67 & 36.00 & 49.50 & - & 45.60 & \textbf{58.70} & 55.54 & - & \textbf{54.20} \\ \hline average & 28.64 & 40.77 & 27.06 & 32.03 & 32.80 & 43.30 & \textbf{53.00} & 45.95 & 47.30 & \textbf{47.45} \\ \hline \end{tabular} \end{center} \caption{ Test accuracy on the Occluded-LINEMOD dataset in terms of the ADD(-S) metric. Objects with a ``*'' sign are considered as symmetric objects and the ADD-S metric is used. The result of HybridPose is from its fourth version update in~\cite{song2020hybridposev4}.} \label{tab:lmo} \end{table*} \subsubsection{Accuracy and coherence of landmarks} To formally analyse the effect of holistic representation learning, we quantify the accuracy and structural consistency of landmark predictions and compare them when trained with and without MPS. For accuracy, we define \begin{equation} r_i = \lVert \bm{x}_i - \bm{x}_i^* \rVert_2 \end{equation} as the prediction residual of a 2D landmark $\bm{x}_i$. We also define a measure of incoherence \begin{equation} c_i = \lVert (\bm{x}_i - \bm{x}_i^*)-\bm m \rVert_2 \end{equation} for a landmark prediction $\bm{x}_i$, where $\bm m = \frac{1}{n}\sum_{i=1}^n (\bm{x}_i - \bm{x}_i^*)$ is the mean error vector for an image. The smaller $c_i$ is, the more coherent the prediction $\bm{x}_i$ is, resulting in a prediction structure more consistent with the groundtruth. An intuitive example is shown in Figure~\ref{fig:cohere}. As shown in Figure~\ref{fig:mps}, training with MPS effectively lowers the mean residuals. Furthermore, the mean incoherence is also smaller for all objects. This confirms that a more holistic understanding of the object pose can produce more accurate and structurally consistent landmark predictions.
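Both measures are straightforward to compute; the following illustrative sketch (hypothetical array names) makes the definitions concrete:
\begin{verbatim}
import numpy as np

def residual_and_incoherence(x_pred, x_gt):
    # x_pred, x_gt: (n, 2) predicted / groundtruth 2D landmarks.
    err = x_pred - x_gt                  # per-landmark error vectors
    r = np.linalg.norm(err, axis=1)      # residuals r_i
    m = err.mean(axis=0)                 # mean error vector of the image
    c = np.linalg.norm(err - m, axis=1)  # incoherence c_i
    return r, c

# A pure translation of all landmarks gives r_i > 0 but c_i = 0:
# the predicted shape is identical to the groundtruth shape.
\end{verbatim}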
\begin{table*}[h] \begin{center} \setlength\tabcolsep{5.5pt} \begin{tabular}{l|ccccc||cccc|cc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{} } &\multicolumn{5}{c||}{ADD(-S)} &\multicolumn{6}{c}{AUC of ADD(-S)} \\ \cline{2-12} \multicolumn{1}{c|}{} &\multicolumn{5}{c||}{Without refinement} &\multicolumn{4}{c|}{Without refinement} &\multicolumn{2}{c}{With refinement} \\ \cline{2-12} & \multicolumn{1}{c}{HM} & \multicolumn{1}{c}{Hu} & \multicolumn{1}{c}{Hu2} & \multicolumn{1}{c}{GDR} & \multicolumn{1}{c||}{Ours} & \multicolumn{1}{c}{HM} & \multicolumn{1}{c}{PVNet} & \multicolumn{1}{c}{GDR} & \multicolumn{1}{c|}{Ours} & \multicolumn{1}{c}{DeepIM} & \multicolumn{1}{c}{CosyPose}\\ & \multicolumn{1}{c}{\cite{Oberweger2018making}} & \multicolumn{1}{c}{\cite{Hu2019segmentation}} & \multicolumn{1}{c}{\cite{Hu2020single}} & \multicolumn{1}{c}{\cite{wang2021gdr}} & \multicolumn{1}{c||}{} & \multicolumn{1}{c}{\cite{Oberweger2018making}} & \multicolumn{1}{c}{\cite{Peng2019pvnet}} & \multicolumn{1}{c}{\cite{wang2021gdr}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{\cite{li2020deepim}} & \multicolumn{1}{c}{\cite{labbe2020cosypose}}\\ \hline master chef can & 31.20 & 33.00 & - & - & \textbf{46.52} & 69.00 & - & - & \textbf{71.17} & \textbf{71.20} & - \\ cracker box & 75.00 & 44.60 & - & - & \textbf{92.63} & 80.20 & - & - & \textbf{89.86} & \textbf{83.60} & - \\ sugar box & 47.20 & 75.60 & - & - & \textbf{99.15} & 76.20 & - & - & \textbf{93.21} & \textbf{94.10} & - \\ tomato soup can & 30.20 & 40.80 & - & - & \textbf{60.90} & 70.00 & - & - & \textbf{82.53} & \textbf{86.10} & - \\ mustard bottle & 72.50 & 70.60 & - & - & \textbf{100.00} & 84.80 & - & - & \textbf{95.34} & \textbf{91.50} & - \\ tuna fish can & 4.31 & 18.10 & - & - & \textbf{52.96} & 49.40 & - & - & \textbf{88.01} & \textbf{87.70} & - \\ pudding box & 48.30 & 12.20 & - & - & \textbf{79.91} & 82.20 & - & - & \textbf{90.5} & \textbf{82.70} & - \\ gelatin box & 37.20 & \textbf{59.40} & - & - & 58.88 & 81.80 & - & - & \textbf{89.36} & \textbf{91.90} & - \\ potted meat can & 40.30 & 33.30 & - & - & \textbf{58.62} & 66.20 & - & - & \textbf{74.54} & \textbf{76.20} & - \\ banana & 6.20 & 16.60 & - & - & \textbf{36.94} & 52.90 & - & - & \textbf{58.77} & \textbf{81.20} & - \\ pitcher base & 53.80 & 90.00 & - & - & \textbf{99.65} & 69.90 & - & - & \textbf{92.86} & \textbf{90.10} & - \\ bleach cleanser & 57.20 & 70.90 & - & - & \textbf{75.22} & 73.30 & - & - & \textbf{77.35} & \textbf{81.20} & - \\ bowl* & \textbf{49.50} & 30.50 & - & - & 45.07 & \textbf{80.30} & - & - & 70.81 & \textbf{81.40} & - \\ mug & 10.50 & 40.70 & - & - & \textbf{66.04} & 50.50 & - & - & \textbf{89.1} & \textbf{81.40} & - \\ power drill & 63.00 & 63.50 & - & - & \textbf{94.99} & 78.30 & - & - & \textbf{89.4} & \textbf{85.50} & - \\ wood block* & 48.20 & 27.70 & - & - & \textbf{55.37} & 65.20 & - & - & \textbf{70.62} & \textbf{81.90} & - \\ scissors & 0.55 & 17.10 & - & - & \textbf{71.27} & 28.20 & - & - & \textbf{84.82} & \textbf{60.90} & - \\ large marker & 11.70 & 4.80 & - & - & \textbf{11.73} & 48.20 & - & - & \textbf{53.25} & \textbf{75.60} & - \\ large clamp* & 12.20 & 25.60 & - & - & \textbf{68.12} & 47.20 & - & - & \textbf{77.1} & \textbf{74.30} & - \\ extra large clamp* & 17.30 & 8.80 & - & - & \textbf{56.16} & 47.50 & - & - & \textbf{55.19} & \textbf{73.30} & - \\ foam brick* & 63.80 & 34.70 & - & - & \textbf{68.40} & \textbf{85.60} & - & - & 83.78 & \textbf{81.90} & - \\ \hline average & 37.15 & 38.98 & 53.90 & 60.10 & \textbf{66.59} & 66.04 & 73.40 & \textbf{84.40} & 
79.88 & 81.90 & \textbf{84.50} \\ \hline \end{tabular} \end{center} \caption{ Test accuracy on the YCB-Video dataset. Objects with a ``*'' sign are considered as symmetric objects.} \label{tab:ycbv} \end{table*} \subsection{Comparing to SOTA methods} We report results on the LINEMOD dataset in Table~\ref{tab:lm}. We group methods into two types depending on whether they include a separate refinement step or not. Our method achieves the best average ADD(-S), as well as the best ADD(-S) on most individual objects. Moreover, our method even outperforms all SOTA methods with refinement, further attesting the power of ROPE. The results on the Occluded-LINEMOD dataset are summarised in Table~\ref{tab:lmo}. In the non-refinement group, our method ranked second amongst current SOTA methods overall and best on two individual objects. A sample of qualitative results is provided in Figures~\ref{fig:punch} and \ref{fig:ablation1}. The results on the YCB-Video dataset are reported in Table~\ref{tab:ycbv}. Without refinement, ROPE has the best performance when evaluated with ADD(-S). \subsection{Data efficiency} The LINEMOD dataset has about 1200 images for each object, which results in approximately 180 images (15\%) for the training set. To supplement such a small training set, many methods generate a large number of synthetic images. For example, PVNet~\cite{Peng2019pvnet} renders 20000 images for each object and the same strategy is adopted in~\cite{song2020hybridposev4}. Although we only use a moderate amount of 1312 synthetic images on top of the 180 in training, we test our model's performance in an extremely data-efficient case: only using the $\sim$180 images for training. \begin{table} \begin{center} \begin{tabular}{l|cc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{ADD(-S)} } &\multicolumn{2}{c}{Training images} \\ \cline{2-3} & \multicolumn{1}{c}{$\sim$180} & \multicolumn{1}{c}{$\sim$1500} \\ \hline ape & 78.57 & \textbf{81.52} \\ benchvise & 98.93 & \textbf{100.00} \\ cam & 90.88 & \textbf{96.86} \\ can & 98.03 & \textbf{98.72} \\ cat & 92.22 & \textbf{94.71} \\ driller & 98.02 & \textbf{99.01} \\ duck & 79.06 & \textbf{85.35} \\ eggbox* & 99.72 & \textbf{100.00} \\ glue* & 97.68 & \textbf{99.42} \\ holepuncher & 88.30 & \textbf{90.39} \\ iron & 96.83 & \textbf{100.00} \\ lamp & 98.85 & \textbf{99.42} \\ phone & 94.81 & \textbf{97.64} \\ \hline average & 93.22 & \textbf{95.61} \\ \hline \end{tabular} \end{center} \caption{Comparing performances of ROPE in the extremely data-efficient setting ($\sim$180) and in the original setting ($\sim$1500) on the LINEMOD dataset. Both models are without refinement.} \label{tab:data_effi} \end{table} As shown in Table~\ref{tab:data_effi}, despite having slightly lower ADD(-S) than the baseline, our model achieves an overall accuracy of 93.22\%, which is close to the current SOTA method GDR~\cite{wang2021gdr}. This is accomplished with as few as around 180 training images, demonstrating superior data efficiency for our method. \section{Conclusion} We propose ROPE, a framework for robust object pose estimation against occlusions. We show that enforcing occlusion-robust feature learning and encouraging holistic representation learning are key to achieving occlusion robustness. Evaluations on three widely used benchmark datasets, LINEMOD, Occluded-LINEMOD and YCB-Video, show that ROPE either outperforms or is competitive with SOTA methods, without the need for refinement. Our method is also highly data-efficient. {\small \bibliographystyle{ieee_fullname}
\section{Background and Related Work} \input{sections/characterization} \input{sections/selfish-scam} \input{sections/dark-fee} \input{sections/conclusion} \input{sections/acknowledgments.tex} \bibliographystyle{ACM-Reference-Format} \section{Concluding Discussion}\label{sec:conclusion} At a high level, our analysis of transaction ordering in the Bitcoin blockchain offers three important takeaways. \begin{enumerate} \item {\it Selfish transaction prioritization:} We showed that miners do not fully follow the expected fee-rate based prioritization norms in Bitcoin, especially for transactions in which they have a vested (selfish) interest. \item {\it Dark-fee transaction prioritization:} We demonstrated that not all fees offered by transactions are transparent and public. Miners can accept so-called ``dark-fee'' payments via side channels to accelerate transactions. \item {\it Collusive transaction prioritization:} We showed that miners collude on accelerating transactions in which other miners have a vested interest. \end{enumerate} While the percentage of transactions that are affected by the selfish, non-transparent, and collusive behaviors of miners is relatively small today, if unchecked, the spread of such misbehaviors portends serious trouble for future blockchain systems. The transaction fees offered by Bitcoin users during periods of congestion crucially rely, for instance, on the assumption that the total fees offered by other transactions are public and transparent. If some transactions offer opaque (or dark) fees, it becomes hard for Bitcoin users wishing to get their transactions confirmed before a deadline to offer the correct fee and have their transactions accepted. Similarly, miners receiving dark fees have a clear, unfair advantage over other miners, as they receive higher fees for mining the same transaction. Worse, the dark-fee-receiving miners get to keep the additional fee even when the transaction is mined by other miners. Finally, collusion between mining pools further concentrates the activities of the whole network in the hands of a few large mining pools. Since the mechanism for prioritizing transactions is similar across most popular cryptocurrencies~\cite{strehle2020exclusive,Daian@S&P20,Roughgarden@EC21}, our methodology for studying miners' adherence can be generalized to other blockchains (e.g., Ethereum). \subsection{The Case for Chain Neutrality} Our findings call for a community-wide debate on defining transaction prioritization norms and enforcing them in a transparent manner. Specifically, we highlight three challenging questions that need to be addressed in the future. $\star{}$ {\it What are the desired transaction prioritization norms in public \pow{} blockchains?} What aspects of transactions besides fee-rate should miners be allowed to consider when ordering them? For instance, should the waiting time of transactions also be considered, to avoid indefinitely delaying some transactions? Should the transaction value (i.e., the amount of bitcoins transferred between different accounts) be a factor in ordering, as fee-rate based ordering favors larger-value over smaller-value transactions? Similarly, while we did not find evidence of miners decelerating or censoring (i.e., refusing to mine) transactions, the current protocols do not disallow such discriminatory behaviors by miners.
Should prioritization norms also explicitly disallow discriminating against transactions based on certain transaction features, like the sending or receiving wallet addresses? Such norms would be analogous to the {\it network neutrality} norms for ISPs that disallow flows from being treated differently based on their source/destination addresses or payload. $\star{}$ {\it How can we ensure that the distributed miners are adhering to desired and defined norms?} Miners in public PoW blockchains operate in a distributed manner, over a P2P network. This model of operation results in different miners potentially having distinct views of the state of the system (e.g., the set of outstanding transactions). Given these differences, are there mechanisms (say, based on statistical tests~\cite{lev2020fairledger,Orda2019,Asayag18a}) that any third-party observer could use to verify that a miner adheres to the established norm(s)? $\star{}$ {\it How can we model and analyze the impact of selfish, non-transparent, collusive behaviors of miners?} While the above themes align well with a long-term vision of defining and enforcing well-defined ordering norms in blockchains, in the short term one could focus on examining the implications of the norm violations in today's blockchains. Specifically, how can we characterize the ordering that would result from different miners following different prioritization norms, especially given an estimate of the miners' hashing or mining powers (i.e., their likelihood of mining a block)? Such a characterization has crucial implications for Bitcoin users. \section{On differential observability}\label{sec:diff-obs} In estimating the baseline blocks, we relied \stress{only} on the transactions in our full-node's Mempool\xspace. This ``view'' of the Mempool\xspace, however, could be substantially different from that of another node or miner, simply because of where that miner is geographically located. Information on transactions introduced or blocks mined in the network disseminates through the peer-to-peer network via a gossip protocol, at speeds that depend, at least, on the latency between the nodes in the network. The discrepancies observed, $B_i \setminus \hat{B_i}$ and $\hat{B_i} \setminus B_i$, could, perhaps, be explained by such network delays. The \stress{ignored} transactions\footnote{Ignored by the miner and, hence, missing in the actual block, but included in the baseline.} in $\hat{B_i} \setminus B_i$, for instance, could have been committed in block $B_{i+1}$ or later instead of $B_i$, perhaps because the miner received them later than when we observed them. They could also have been committed in block $B_{i-1}$ or earlier instead of $B_i$, perhaps because our full node observed these transactions later than when the miner observed them. We explore, hence, different scenarios that could explain the discrepancies and ascertain whether to give miners the benefit of the doubt---rather than flag their behavior as a violation of the norm---and to what extent. \begin{figure}[tb] \centering \begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/inter-block/top_5_txs_intersection_dist} \figcap{} \label{fig:ab-bb-common-top5-scatter} \end{subfigure} \begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/inter-block/txs_ignored_cdf} \figcap{} \label{fig:bb-ab-at-diff-time-thresh} \end{subfigure} \figcap{(a) All mining pools occasionally deviate from the norm.
(b) Even with a 2-block cutoff period, miners ignore some transactions half the time.} \end{figure} \subsection{Where ``our view'' is at fault}\label{subsec:our-view-lacking} Let us suppose that the discrepancies between the baseline ($\hat{B_i}$) and actual blocks ($B_i$) are due to our full node ``missing'' some of the transactions observed by the miners. We now examine to what extent this premise holds true. \parai{Were miners privy to certain transactions?} Of the set of transactions in $B_i \setminus \hat{B_i}$, we measured the fraction that our full node \stress{never} observed in its Mempool\xspace. In nearly $80\%$ of the $\num{3079}$ blocks, the full node does not miss even one transaction; stated differently, transactions in $B_i \setminus \hat{B_i}$ were observed, in most cases, at some point in time in our Mempool\xspace (and were included in some baseline block, but not $\hat{B_i}$). Even in the $99$-th percentile, the full node fails to observe fewer than $10\%$ of the transactions in $B_i \setminus \hat{B_i}$. This small number of cases could be explained by network delays resulting in some transactions being received after ``their'' blocks: a transaction received after the block in which it was included causes the full node to drop it (silently), without ever adding it to the Mempool\xspace. \stress{Even if miners were privy to certain transactions, the small fraction of transactions our full node fails to observe cannot explain the large discrepancies.} \parai{Did we miss transactions with high fees?} Recall that in computing the baselines we used the fee-per-byte metric to prioritize transactions. If our full node missed some transactions with higher fee-per-byte values than the minimum across all transactions in a given baseline, these missed transactions would explain some of the discrepancies. Across the $\num{3079}$ blocks in our data set, we observe, however, only $1\%$ of the transactions in $\hat{B_i} \setminus B_i$ (i.e., observed in the baseline but not in the actual block) to have higher fee-per-byte values than the minimum across all transactions in $B_i$. \stress{Therefore, of the $22\%$ discrepancy (i.e., $\scriptstyle\frac{|\hat{B_i} \setminus B_i|}{|B_i|}$) we observe, only $1\%$ can perhaps be attributed to our full node missing some transactions with high fee rates.} In summary, the transactions that our full node either never observes or fails to observe ``on time'' explain, at best, one percent of the transactions ignored (i.e., those in $\hat{B_i} \setminus B_i$). \subsection{Where ``their view'' is at fault}\label{subsec:their-view-lacking} We are now left with only one premise to verify: perhaps it is the miners who do not observe some of the transactions ``on time''. The ``All'' line in Figure~\ref{fig:bb-ab-at-diff-time-thresh} shows, for each block, the number of (ignored) transactions in the baseline but not in the actual block (i.e., $\hat{B_i} \setminus B_i$) as a fraction of that in the actual block. For this comparison to be exact, every node in the peer-to-peer network would have to observe each transaction at the same time (i.e., with zero delay), which is infeasible. To account for delays, we remove from this set of ignored transactions those we received within some \stress{cutoff period}, e.g., one minute, before a given block is mined.
Per Figure~\ref{fig:bb-ab-at-diff-time-thresh}, the ``1 min.'' line, corresponding to a $1$-minute cutoff period, shows a significant reduction: the fraction of ignored transactions drops (from $22\%$ in ``All'') to $12\%$ or less, and $2\%$ of the blocks (compared with \stress{none} in ``All'') have no ignored transactions at all. Increasing the cutoff period to $2$ minutes further reduces the discrepancies, in favor of the miners. The cutoff period accounts for the scenario in which the miners, perhaps, received the transactions ``later'' than our full node. It is unlikely, however, that mining pools would experience a delay as high as one minute: It is in the best interest (economically speaking) of mining pools to equip their infrastructure with low-latency network connections, after having spent millions on hardware~\cite{Zhao-WebArticle2019}; compared to these fixed infrastructure costs, the cost of providing low-latency Internet connectivity (e.g., \cite{FIBRE-URL2019,Basu-Talk2016}) to their nodes is small. The economic argument notwithstanding, even with a 2-minute cutoff period, $50\%$ of the blocks ignore nearly $10\%$ of the transactions.

Rather than use absolute time spans for the cutoff period, we also used block-based cutoff periods. A one-block cutoff period implies that we drop any transaction from the ignored set if it was received anytime before the current block $B_i$ (where it is being flagged as ignored) but after we received the prior block $B_{i-1}$. Recall that block generation times in Bitcoin vary, with an average of about $10$ minutes. Even with a two-block cutoff period (i.e., $20$ minutes on average), we observe, in Figure~\ref{fig:bb-ab-at-diff-time-thresh}, that $50\%$ of the blocks have at least some ignored transactions! \stress{The analyses using (absolute) time-based as well as block-based cutoff periods indicate that miners, for whatever reason, exclude a significant fraction of transactions from immediate inclusion.}

\subsection{Musings on miners' behavior}

For each transaction, regardless of whether it was present in both the baseline and actual blocks ($B_i \cap \hat{B_i}$), only in the actual block ($B_i \setminus \hat{B_i}$), or only in the baseline ($\hat{B_i} \setminus B_i$), we compute the transaction delay as the difference between when the transaction was received in the Mempool\xspace and when it was included in the baseline or actual block, depending on which of the three aforementioned categories the transaction belongs to. Figure~\ref{fig:tx-delay-diff-class} shows the CDF of the delays for all transactions, separately for each of the three categories.

\begin{figure}[tpb]
\centering
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/inter-block/txs_delay_cdf.pdf} \figcap{} \label{fig:tx-delay-diff-class} \end{subfigure}
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/inter-block/txs_feerate_cdf.pdf} \figcap{} \label{fig:tx-feerate-diff-class} \end{subfigure}
\figcap{(a) It is unlikely that the miners are ignoring transactions because these arrived too close to when the block was mined: Some, if not all, of the transactions in $B_i \setminus \hat{B_i}$ arrived minutes later than those in $B_i \cap \hat{B_i}$.
(b) Transactions in $B_i \setminus \hat{B_i}$ have significantly lower fee-per-byte values than those in $B_i \cap \hat{B_i}$, suggesting that fee-per-byte does not completely explain miners' dequeuing policies.}
\end{figure}

Per Figure~\ref{fig:tx-delay-diff-class}, some of the transactions that the miners included in the actual block but that are not in the baseline (i.e., the category $B_i \setminus \hat{B_i}$) arrived several minutes later than those that appear in both the actual and baseline blocks (i.e., the category $B_i \cap \hat{B_i}$). Some of the transactions in the baseline but not in the actual block (i.e., the category $\hat{B_i} \setminus B_i$) arrived much earlier compared to those in $B_i \setminus \hat{B_i}$. \stress{Perhaps the miners are using a different protocol, one that takes other parameters, in addition to fee-per-byte, into consideration.}

\stress{Perhaps the miners are being ``altruistic.''} Said differently, miners might be committing transactions that have been waiting in the Mempool\xspace for a ``long'' time, despite those transactions having comparatively lower fees---the CDF for $B_i \setminus \hat{B_i}$ in Figure~\ref{fig:tx-delay-diff-class} lends some credence to this line of reasoning. Figure~\ref{fig:tx-feerate-diff-class}, which is similar to Figure~\ref{fig:tx-delay-diff-class} except that the x-axis is the transaction fee rate instead of the delay, also shows that the transactions prioritized by miners have comparatively lower fee rates. The observation that $60\%$ or more of the transactions that had been waiting for 10 minutes or longer (per the CDF for $\hat{B_i} \setminus B_i$ in Figure~\ref{fig:tx-delay-diff-class}) were ignored by the miners, however, clearly refutes the claim of ``altruistic'' behavior.

\stress{Are miners adopting other strategies?} We hypothesize that mining pool operators might be sending a different set of transactions to each miner (or perhaps changing this set at fixed time intervals or on certain events) to reduce network overhead, so that miners do not have to request a new task often. This is possible with the Stratum protocol\footnote{Stratum is a pooled mining protocol; see \url{https://stratumprotocol.org}.}: Stratum reduces network communication between the mining pool and its miners by allowing miners to change some bytes of the coinbase transaction, consequently changing the Merkle root. Another possible reason is that some clients use services such as transaction accelerators to speed up the commit time of a particular transaction. They pay the mining pool for this service off-chain (i.e., with another cryptocurrency or via credit cards) to, hopefully, increase the probability of their transactions being included in the next block. One example of such a service is the \textit{BTC.com transaction accelerator}\footnote{\textit{BTC.com} is one of the biggest Bitcoin mining pool operators currently active. Its transaction accelerator service is available at \url{https://pushtx.btc.com/}.}.

\paraib{Implications.} Regardless of whether the miners are altruistic, Figure~\ref{fig:tx-feerate-diff-class} strongly suggests that the dequeuing policy is not simply a function of the fee-per-byte metric. The transactions in $B_i \setminus \hat{B_i}$ have, for instance, significantly smaller fees than those available in $B_i \cap \hat{B_i}$.
Further, the $B_i \cap \hat{B_i}$ and $B_i \setminus \hat{B_i}$ lines in Figure~\ref{fig:tx-feerate-diff-class} suggest that even if users pay a fee significantly higher---one or two orders of magnitude higher---than the lowest fee ($\uTxFee{e-5}$), there is virtually no guarantee that their transaction will be included in the next block. Only beyond an exorbitant fee rate ($\uTxFee{e-1}$) is there, unsurprisingly, a guarantee that the concerned transaction will be immediately committed. Virtually all of today's fee predictors, however, falsely assume that miners follow the fee-per-byte metric when prioritizing transactions for inclusion.

\section{On Transaction Commit Prioritization}\label{sec:interblock}

In a decentralized system, it is vital that all participants follow a ``norm'' to avoid compromising the stability and fairness of the system. Bitcoin Core~\cite{BitcoinCore-2019}, the widely used software for accessing the Bitcoin blockchain, uses the fee-per-byte metric (via the GBT\xspace protocol~\cite{GBT-Bitcoin-2019}) for prioritizing transactions for inclusion. Miners, hence, should follow GBT\xspace as the ``norm.'' In this section, we check the adherence of miners to this norm.

\subsection{Broader deviations from the norm}
\label{subsec:deviations-from-the-norm}
\paraib{Establishing a baseline.} To systematically evaluate how well the fee-per-byte metric explains the dequeuing behavior of miners, we establish a \term{baseline} as follows. We run a full node and stamp each transaction added to the Mempool\xspace with the \stress{chain length}, i.e., the number of blocks already present in the blockchain when the transaction was received in the node's Mempool\xspace. For every block $B_i$ mined (in reality, in Bitcoin), we estimate the \stress{candidate} set of transactions that were available to the miner. More concretely, the candidate set of $B_i$ comprises all transactions that were observed in the Mempool\xspace before block $B_i$ but had not been confirmed yet. We order the transactions within a candidate set using the fee-per-byte metric (the same metric adopted in the GBT\xspace mining protocol and widely believed to be the norm) and create, from the candidate set, a \stress{baseline} block $\hat{B_i}$ of the same size as that of $B_i$, i.e., $|B_i| = |\hat{B_i}|$. To simplify the analyses, we removed child-pays-for-parent transactions prior to creating the baseline block. The number of such transactions, dropped from both the baselines and the actual blocks, represents (in the median) $29.6\%$ of the size of the candidate sets.

\paraib{Deviations from the baseline.} We examined the blocks and transactions in data set \dsa and estimated the baselines for the $\num{3079}$ actual blocks observed during this period. The ratio of the size of the intersection between each actual block ($B_i$) and its corresponding baseline ($\hat{B_i}$), i.e., $|{B_i}\cap\hat{B_i}|$, to the size of the corresponding $B_i$ (or $\hat{B_i}$) quantifies the extent to which miners adhere to the fee-per-byte dequeuing policy; Figure~\ref{fig:ab-bb-common-all} plots the CDF of these ratios across all $\num{3079}$ blocks. In the median, there is a $78\%$ overlap between actual and baseline blocks: The fee-per-byte metric seems, on average, to explain the dequeuing of transactions from the Mempool\xspace. The magnitude of the intersection between the baselines and actual blocks is, however, not $100\%$! $22\%$ of the transactions in the baselines do not appear in the corresponding actual blocks, i.e., $\hat{B_i} \setminus B_i$; by symmetry, $22\%$ of the transactions in the actual blocks do not appear in the corresponding baselines, i.e., $B_i \setminus \hat{B_i}$. Succinctly, $22\%$ of the composition of a block, on average, deviates from the ``norm.'' There exist, hence, a significant number of transactions whose inclusion (or the lack thereof) in the corresponding actual blocks cannot be explained by a GBT\xspace-like strategy where miners rank transactions based on the fee-per-byte metric.
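For concreteness, the sketch below (in Python) outlines the baseline construction and the overlap computation just described; the representation of candidate transactions as (txid, fee, vsize) tuples, and the greedy fill, are illustrative assumptions rather than the exact implementation we used.

\begin{verbatim}
def build_baseline(candidates, actual_vsize):
    # `candidates` is the candidate set of block B_i, listed as
    # (txid, fee, vsize) tuples; CPFP transactions are assumed to have
    # been removed beforehand, as in our analysis. Candidates are ranked
    # by fee-per-byte, and the baseline block is filled greedily until
    # it matches the actual block's (virtual) size.
    ranked = sorted(candidates, key=lambda tx: tx[1] / tx[2], reverse=True)
    baseline, used = set(), 0
    for txid, fee, vsize in ranked:
        if used + vsize > actual_vsize:
            continue  # no longer fits; smaller transactions may still fit
        baseline.add(txid)
        used += vsize
    return baseline

def overlap_ratio(actual_txids, baseline_txids):
    # |B intersect Bhat| / |B|: the extent to which fee-per-byte
    # explains the composition of the actual block.
    return len(actual_txids & baseline_txids) / len(actual_txids)
\end{verbatim}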
\stress{Could the deviating behavior be attributed to a small subset of miners?} Performing the same analysis (of quantifying the overlap between baseline and actual blocks), but only for the blocks mined by the top five\footnote{Based on the number of blocks mined by each pool over the three-week study period.} mining pools (Figure~\ref{fig:ab-bb-common-top5-scatter}), indicates that these pools exhibit almost identical behavior. The CDFs in Figure~\ref{fig:ab-bb-common-top5} are similar to those for the bottom five mining pools as well. The discrepancies between actual and baseline blocks are, hence, consistent across all mining pools (or miners), regardless of their size: deviations from the ``norm'' are not confined to a few pools.

\begin{figure}[tb]
\centering
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/inter-block/txs_intersection_cdf.pdf} \figcap{} \label{fig:ab-bb-common-all} \end{subfigure}
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/inter-block/top_5_txs_intersection_cdf} \figcap{} \label{fig:ab-bb-common-top5} \end{subfigure}
\figcap{(a) In the median, $78\%$ of the transactions in baselines appear in the corresponding actual blocks, and (b) the observations are consistent across the top-5 mining-pool operators.}
\label{fig:interblock}
\end{figure}

\section{Introduction}
\label{sec:introduction}

At its core, a blockchain is an append-only list of cryptographically linked records of transactions, called ``blocks.'' In public blockchains such as Bitcoin~\cite{Nakamoto-WhitePaper2008} and Ethereum~\cite{Wood@Ethereum}, any user can broadcast a transaction to be included in the blockchain. Participants called miners include (or confirm) the issued transactions in a new block and extend the blockchain by solving a cryptographic puzzle. Many blockchains are maintained in a decentralized manner by a peer-to-peer (P2P) network of nodes that follow a well-defined protocol (i.e., ground rules) for validating new blocks. For example, the protocol for maintaining the Bitcoin ledger, laid down by Nakamoto in 2008, is based on a proof-of-work (PoW) scheme~\cite{Nakamoto-WhitePaper2008}. Noticeably absent from Bitcoin and other decentralized blockchain protocols is the requirement of any a priori trust between the users issuing transactions, the miners confirming transactions, and the P2P nodes maintaining the blockchain. Decentralized blockchains, without any notion of trusted entities, have not only been used to implement cryptocurrencies, but are increasingly being adopted as a substrate for a variety of decentralized financial applications (smart contracts) such as exchanges~\cite{Daian@S&P20,UniswapDEX}, lending~\cite{Qin@FC21,Perez@FC21}, and auctions~\cite{NFTs}.

Despite their widespread use in ordering-critical applications~\cite{Mccorry@FC17,Daian@S&P20,pilkington2016blockchain,kharif2017cryptokitties,UniswapDEX,Perez@FC21}, blockchain protocols formally specify \stress{neither} the manner by which miners should select transactions for inclusion in a new block from the set of all available transactions, \stress{nor} the order in which they should be included in the block. While informal conventions or norms for prioritizing transactions exist, to our knowledge, no one has systematically verified whether these norms are followed by miners in practice. In this paper, we present an in-depth analysis of transaction prioritization by Bitcoin miners.
Bitcoin is the largest cryptocurrency in the world, with a market capitalization of over \$742.6B as of May 2021~\cite{CoinMarketCap-URL2021}. It has been observed that the increasing volume of issued Bitcoin transactions introduces \stress{congestion} among transactions competing for confirmation~\cite{Kuzmanovic-QUEUE2019}: Due to size limits on Bitcoin blocks, at any time, there may be more transactions than can be immediately committed or confirmed\footnote{We use the terms `confirmation' and `commit' interchangeably to refer to the inclusion of a transaction in a block.} in the next block. Unconfirmed transactions must, consequently, wait for their ``turn'' to be included in subsequent blocks, thereby introducing \stress{delays}. The order in which miners choose transactions for inclusion in a new block, therefore, crucially determines how long individual transactions (e.g., currency transfers) are delayed. Worse, some transactions may be \stress{conflicting}, meaning that at most one of them can be included in the blockchain; for such transactions, the order in which a miner includes them determines the ultimate state of the system.

The conventional wisdom today is that many miners follow the prioritization norms, implicitly, by using widely shared blockchain software such as Bitcoin Core~\cite{BitcoinCore-2021,CoinDance-2021}. In Bitcoin, the presumed ``norm,'' then, is that miners prioritize a transaction for inclusion based on its offered \stress{fee-rate} or fee-per-byte, which is the transaction's fee divided by the transaction's size in bytes. We show evidence of this presumed norm in Figure~\ref{fig:different-norms-in-btc}. The norm is also justified as ``incentive compatible'' because miners wanting to maximize their rewards, i.e., the fees collected from all transactions packed into a size-limited block, would be incentivized to preferentially include transactions with higher fee-rates. Assuming that miners follow this norm, Bitcoin users are issued a crucial recommendation: To accelerate the confirmation of a transaction, particularly during periods of congestion, they should increase the transaction's fees. Miners are, however, free to deviate from this norm, and we show that such norm violations cause irreparable economic harm to users.

In this paper, we perform an extensive empirical audit of the miners' behavior to check whether they conform to the norms.\footnote{We use the terms ``miners,'' ``mining pool operators (MPOs),'' and ``mining pools'' interchangeably throughout this paper.} At a high level, we find that transactions are indeed primarily prioritized according to the assumed norms. We also, nevertheless, offer evidence of a non-trivial fraction of priority-norm violations amongst confirmed transactions. An in-depth investigation of these norm violations uncovered many highly troubling misbehaviors by miners. Specifically, we present two key findings.

\indent{}{\noindent}~$\blacktriangleright$\,{} Multiple large mining pools tend to {\it selfishly prioritize} transactions in which they have a vested interest, e.g., transactions in which payments are made from or to wallets owned by the mining pool operators. Some even {\it collude} with other large mining pools to prioritize their transactions.

\indent{}{\noindent}~$\blacktriangleright$\,{} Many large mining pools accept additional {\it dark (opaque) fees} to accelerate transactions via non-public side channels (e.g., their websites).
Such dark-fee transactions violate an important but unstated assumption in blockchains: that the confirmation fees offered by transactions are transparent and equal for all miners. While some of the above miner misbehaviors have been conjectured in prior work~\cite{Kelkar@CRIPTO20,Kursawe@AFT20}, to the best of our knowledge, our work is the first to offer strong empirical evidence of such misbehaviors in practice. In the process, we have developed robust tests to detect miner misbehaviors in the Bitcoin blockchain. We view the design of these tests as an important contribution of independent interest to researchers auditing blockchains.

Our findings have important implications for both Bitcoin users and miners. Specifically, when setting fees for their transactions, Bitcoin users (i.e., through their wallet software) assume that the fees offered by all their competing transactions are fully transparent---our findings contradict this assumption. Similarly, when transactions offer different confirmation fees to different miners, it raises significant unfairness concerns. Finally, the collusion we uncovered between mining pools exacerbates the growing concerns about the concentration of hash rates amongst a small number of miners~\cite{Gervais@CCS-16,bahack2013theoretical}. We release the data sets and the scripts used in our analyses to enable others to reproduce our results~\cite{Messias-DataSet-Code-2021}.

\begin{figure}[tb]
\centering
\includegraphics[width={\onecolgrid}]{images/norms/avg-txs-deviation-per-block-scam-2015-2016-cdf-all}
\figcap{CDF of the error in predicting where a transaction would be positioned or ordered within a block according to the greedy fee-rate-based norm. Bitcoin Core shifted completely to the fee-rate-based norm starting April 2016: Transaction ordering in Bitcoin closely tracks the fee-rate-based norm from April 2016 onwards, but differs significantly from it before then, when a different norm was in place.}
\label{fig:different-norms-in-btc}
\end{figure}
\section{Analyzing Norm Adherence}
\label{sec:prioritization-norms}

In this section, we analyze whether Bitcoin miners adhere to the prioritization norms when selecting transactions for confirmation. To this end, we first investigate whether transaction ordering matters to Bitcoin users in practice, i.e., are there times when transactions suffer extreme delays, and do users offer high transaction fees at such times to confirm their transactions faster? We then conduct a progressively deeper investigation of the norm violations, including potential underlying causes, which we examine in greater detail in the subsequent sections.

\input{sections/ordering.tex}

\subsection{Do miners follow the norms?}\label{subsec:mining-prioritization-based-feerate}

Whether miners follow the transaction-prioritization norms (as widely assumed) has implications for both Bitcoin and its users: the software used by users, for instance, assumes adherence to these norms when suggesting a transaction fee to the user~\cite{BitcoinCore-2021,Fees@Coinbase,Lavi-WWW2019}. Deviations from these norms, hence, have far-reaching implications for the blockchain and, crucially, for Bitcoin users.

\subsubsection{Fee-rate based selection when mining new blocks}
Our finding above that transactions offering higher fee-rates experience lower confirmation delays suggests that miners tend to account for transaction fee-rates when choosing transactions for new blocks. We now want to check, however, whether the transaction fee-rate is the primary, or even the sole, determining factor in transaction selection. To this end, we check our data sets for transaction pairs where one transaction was issued earlier and has a higher fee-rate than the other, but was committed later than the other. The existence of such transaction pairs would unequivocally show that fee-rate alone does not explain the order in which transactions are selected. We sampled $30$ Mempool\xspace snapshots, uniformly at random, from the set of all available snapshots in data set \dsa{}. Suppose that, in each snapshot, we denote, for any transaction $i$, the time at which it was received in the Mempool\xspace by $t_i$, its fee-rate by $f_i$, and the block in which it was committed by $b_i$. We then selected, from each snapshot, all pairs of transactions $(i,j)$ such that $t_{i} < t_{j}$ and $f_{i} > f_{j}$, but $b_{i} > b_{j}$. Such pairs clearly constitute a violation of the fee-rate-based transaction-selection norm.
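A brute-force sketch of this pair search (in Python) follows; representing each transaction as a $(t, f, b)$ tuple is an illustrative assumption, and the $\epsilon$ parameter anticipates the tightened time constraint discussed next.

\begin{verbatim}
from itertools import combinations

def count_violations(snapshot, eps=0.0):
    # `snapshot` lists transactions as (t, f, b) tuples: Mempool arrival
    # time, fee-rate, and height of the committing block. A pair (i, j)
    # violates the norm if i arrived (at least eps) earlier and offered
    # a higher fee-rate than j, yet was committed in a later block.
    violations = 0
    for tx_a, tx_b in combinations(snapshot, 2):
        # Order the pair by arrival time, so that i is the earlier one.
        (t_i, f_i, b_i), (t_j, f_j, b_j) = sorted((tx_a, tx_b))
        if t_i + eps < t_j and f_i > f_j and b_i > b_j:
            violations += 1
    return violations
\end{verbatim}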
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/violation/tx-violation-all.pdf} \sfigcap{All transactions} \label{fig:violation-all} \end{subfigure}
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/violation/tx-violation-non_cpfp.pdf} \sfigcap{Only Non-CPFP transactions} \label{fig:violation-non-cpfp} \end{subfigure}
\figcap{There exists a non-trivial fraction of transaction pairs violating the norm across all snapshots, clearly indicating that miners do \underline{not} adhere to the norm.}
\label{fig:violation}
\end{figure}

Figure~\ref{fig:violation-all} shows a cumulative distribution of the fraction of all transaction pairs (line labeled ``$\star$'') violating the norm over all sampled snapshots. Across all snapshots, a small but non-trivial fraction of all transaction pairs violates the norm. One potential explanation for the violations might be that the mining pools receive transactions in a different order than the one in which our Mempool\xspace receives them. To account for such differences, we tighten the time constraint to $t_{i} + \epsilon < t_{j}$ and use an $\epsilon$ of either $10$ seconds or $10$~minutes. Even with the tightened time constraints, Figure~\ref{fig:violation-all} shows that a non-trivial fraction of all transaction pairs violates the norm. Another potential source of violations is Bitcoin's dependent (or parent-and-child) transactions, where the child pays a high fee to incentivize miners to also confirm the parent from which it draws its inputs. This mechanism enables users to ``accelerate'' a transaction that has been ``stuck'' because of a low fee~\cite{CoinStaker-2018}. As the existence of such \newterm{child-pays-for-parent (CPFP)} transactions (formally defined in~\S\ref{sec:cpfp-txs}) would introduce false positives in our analysis, we decided to discard them. Figure~\ref{fig:violation-non-cpfp} shows that the violations persist even after discarding all such dependent transaction pairs.

\subsubsection{Fee-rate based ordering within blocks}

We now turn our attention to transaction ordering within individual (mined) blocks in Bitcoin. If a miner followed GBT, transactions would be ordered based on their fee-rate. In this case, given the set of non-CPFP transactions $T = \{T_1, T_2, \ldots, T_n\}$ included in a block $B$, we should be able to predict their positions in the block by simply ordering the transactions based on their fee-rate (as specified in the GBT implementation in Bitcoin Core). To quantify the deviation from the norm, we compute a measure that we call the \textit{\textbf{position prediction error (PPE)}}: the PPE of a block $B$ is the average absolute difference between the predicted and the observed (actual) positions of all transactions in block $B$, normalized by the size of the block ($n$) and expressed as a percentage. More precisely,
\begin{align*}
PPE(B) = \dfrac{\sum_{i=1}^n |T^{p}_{i} - T^{o}_{i}| \cdot 100}{n}
\end{align*}
where $T^{p}_{i}$ and $T^{o}_{i}$ are the predicted and observed positions of transaction $T_i$, measured as percentile ranks within the block.
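The sketch below (in Python) computes the PPE of a block under these conventions; taking positions as percentile ranks, and the input as a list of fee-rates in observed order, are illustrative assumptions.

\begin{verbatim}
def ppe(fee_rates):
    # `fee_rates[i]` is the fee-per-byte of the i-th transaction in its
    # observed position within a block (non-CPFP transactions only). The
    # predicted position ranks the transactions by descending fee-rate,
    # as the GBT implementation would place them; positions are taken as
    # percentile ranks of the block size n.
    n = len(fee_rates)
    order = sorted(range(n), key=lambda i: fee_rates[i], reverse=True)
    predicted_rank = {tx: rank for rank, tx in enumerate(order)}
    return sum(abs(predicted_rank[i] - i) / n * 100
               for i in range(n)) / n
\end{verbatim}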
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/fig-2020-bitcoin/avg-txs-deviation-per-block-scam-2020-cdf.pdf} \sfigcap{Overall position prediction error}\label{fig:deviation-within-blocks-overall} \end{subfigure}
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/fig-2020-bitcoin/avg-txs-deviation-per-block-miners-scam-2020-cdf.pdf} \sfigcap{Position prediction error of the top-6 MPOs}\label{fig:deviation-within-blocks-top6-mpo} \end{subfigure}
\figcap{Position prediction error (PPE). (a) There are 52,974 (99.55\%) blocks with at least one non-CPFP transaction. The mean PPE is 2.65\%, with a standard deviation of 2.89. 80\% of all blocks have a PPE of less than 4.03\%. (b) The PPEs of blocks mined by the top-6 MPOs, ranked by their normalized hash rates.}
\label{fig:deviation-within-blocks}
\end{figure}

Figure~\ref{fig:deviation-within-blocks-overall} shows the cumulative distribution of PPE values for each block in our data set \dsc{}, containing \num{53214} blocks. $80\%$ of the blocks have PPE values of less than $4.03\%$. The mean PPE across all blocks is $2.65\%$, with a standard deviation of $2.89$. Per this plot, the position of a transaction within a block can be predicted with very high accuracy (within a few percentiles of position error), suggesting that transactions are, by and large, ordered within a block based on their fee-rate. Figure~\ref{fig:deviation-within-blocks-top6-mpo} shows the PPE values separately for each of the $6$ largest mining pools in data set \dsc{}. The plots show that all mining pools by and large follow the norm, though some, like ViaBTC, seem to deviate slightly more from it than the other mining pools.

\subsubsection{Fee-rate threshold for excluding transactions}

In their default configuration, many nodes in the Bitcoin P2P network drop (i.e., ignore) transactions that offer less than a threshold fee-rate (typically, $10^{-5}$ BTC/KB). As miners select transactions for inclusion from their local Bitcoin P2P node, this (default) norm would result in such low-fee transactions never being included in the blockchain, even during periods of non-congestion (when blocks have spare capacity to accommodate additional transactions). We collected data set \dsa{} using a default Bitcoin node, and our node, hence, did not accept or record low-fee transactions. When gathering data set \dsb{}, however, we configured our Bitcoin node to accept all transactions, irrespective of their fee-rates. In data set \dsb{}, our node, consequently, received \num{1084} transactions that offered less than the recommended fee-rate; $489$ ($45.11\%$) of them were zero-fee transactions. Of these low fee-rate transactions, only \num{53} ($4.89\%$) were confirmed in the Bitcoin blockchain, and $9$ ($16.98\%$) of those were confirmed months after they were observed in our data set. In contrast, the vast majority ($99.7\%$) of the transactions that offered at least the recommended fee-rate were (eventually) confirmed. Interestingly, the low-fee transactions were confirmed by just three mining pools: F2Pool, ViaBTC, and BTC.com included $38$, $14$, and $1$ low-fee transactions, respectively.
Our findings suggest that, while the norm of ignoring transactions offering less than the recommended fee-rate is by and large followed by all miners, a few occasionally deviate from it.

\subsection{Transaction prioritization norms}\label{sec:prelim}

A crucial detail absent from the design of a \pow{} blockchain per~\cite{Nakamoto-WhitePaper2008} is any notion of a formal specification of transaction prioritization. Said differently, Nakamoto's design does not formally specify how miners should select a set of candidate transactions for confirmation from all available unconfirmed transactions. Notwithstanding this shortcoming, ``norms'' have originated from the miners' use of a shared software implementation: Miners predominantly use the Bitcoin Core~\cite{BitcoinCore-2021} software for communicating with their peers (e.g., to advertise blocks and learn about new unconfirmed transactions) and for reaching a consensus regarding the chain. Of particular note in the popular Bitcoin Core implementation is the \texttt{GetBlockTemplate (GBT)} mining protocol, implemented by the Bitcoin community around February 2012.\footnote{Even within mining pools, the widely used Stratum protocol internally uses the \texttt{GetBlockTemplate}\xspace mechanism~\cite{Stratum-v1-2021}.} \texttt{GetBlockTemplate}\xspace{} rank-orders transactions based on the fee-per-byte metric (i.e., transaction fees normalized by the transaction's size)~\cite{GBT-Bitcoin-2019}. The term \stress{size}, here and in the rest of the paper, refers to the \newterm{virtual} size, each unit of which corresponds to four \newterm{weight units}, as defined in the Bitcoin improvement proposal BIP-141~\cite{Lombrozo-BIP141-2015}.

The predominant use of GBT (through the use of Bitcoin Core) by miners, coupled with the fact that GBT is maintained by the Bitcoin community, \stress{implicitly} establishes two norms. A~third norm stems from a configuration parameter of the Bitcoin Core implementation. We now elucidate these three norms.

\textbf{I.} \stress{When mining a new block, miners select transactions for inclusion, from the Mempool\xspace{}, based solely on their fee-rates.}

\textbf{II.} \stress{When constructing a block, miners order (place) higher fee-rate transactions before lower fee-rate transactions.}

\textbf{III.} \stress{Transactions with a fee-rate below a minimum threshold are ignored and never committed to the blockchain.}

The GBT protocol implementation in Bitcoin Core is the source of the first two norms. GBT's rank ordering determines both which set of transactions is selected for inclusion (from the Mempool\xspace{}) and in what order they are placed within a block. GBT dictates that a transaction with a higher fee-per-byte \stress{will} be selected before all other transactions with a lower fee-per-byte. It also stipulates that, within a block, the transaction with the highest fee-per-byte appears first, followed by the one with the next-highest fee-per-byte, and so on. The third norm stems from the fee-per-byte threshold configuration parameter: Bitcoin Core, by design, will not accept any transaction with a fee-rate below this threshold, essentially filtering out low-fee-rate transactions from even being accepted into the Mempool\xspace{}.
The default (and recommended) value for this configurable threshold is set to $\DefTxFee{}$.\footnote{One Bitcoin (BTC) is equal to $10^{8}$ satoshi (sat).}

\subsection{Related Work}
\label{sec:related-work}

A few recent papers proposed solutions to enforce that transaction ordering follows a certain norm, mostly based on statistical tests of potential deviations~\cite{Orda2019,Asayag18a,lev2020fairledger}. These works were, however, mostly of a theoretical nature in that they did not contain empirical evidence of deviations by miners, but rather assumed that miners might deviate. Prior efforts also proposed consensus algorithms to guarantee fair transaction selection~\cite{baird2016swirlds,Kursawe@AFT20,Kelkar@CRIPTO20}. Kelkar \textit{et al.}~\cite{Kelkar@CRIPTO20} proposed a consensus property called \textit{transaction order-fairness} and a new class of consensus protocols called \textit{Aequitas} to establish fair transaction ordering, in addition to providing consistency and liveness. A number of prior works focused on enabling miners to select transactions. For instance, SmartPool~\cite{Luu2017} gave transaction selection back to the miners. Similarly, an improvement of Stratum, a widely used mining protocol, allows miners to select their desired transaction set through negotiation with a mining pool~\cite{Stratum-2021}. All these prior works are, again, mostly of a theoretical nature. In contrast, our study provides empirical evidence of deviations from the norm by miners in the current Bitcoin system.

\begin{table*}[tb]
\begin{center}
\small
\tabcap{Bitcoin data sets (\dsa and \dsb) used for testing miners' adherence to transaction-prioritization norms and (\dsc) for investigating the behavior of mining pool operators}\label{tab:datasets}
\begin{tabular}{rrrr}
\toprule
\thead{Attributes} & \thead{Data set \dsa{}} & \thead{Data set \dsb{}} & \thead{Data set \dsc{}}\\
\midrule
\textit{Time span} & Feb. $20\tsup{th}$ -- Mar. $13\tsup{th}$, 2019 & Jun. $1\tsup{st}$ -- $30\tsup{th}$, 2019 & Jan. $1\tsup{st}$ -- Dec. $31\tsup{st}$, 2020\\
\textit{Block height} & \num{563833} -- \num{566951} & \num{578717} -- \num{583236} & \num{610691} -- \num{663904} \\
\textit{Number of blocks} & $\num{3119}$ & $\num{4520}$ & $\num{53214}$ \\
\textit{Count of transactions issued} & $\num{6816375}$ & $\num{10484201}$ & $\num{112489054}$\\
\textit{Percentage of CPFP-transactions} & $26.45\%$ & $23.17\%$ & $19.11\%$ \\
\textit{Count of empty-blocks} & \num{38} & \num{18} & \num{240}\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}

\begin{figure*}[tb]
\centering
\begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/blockchain/miner-blocks-and-txs-distribution-dataset-a-bar.pdf} \sfigcap{Data set \dsa{}}\label{fig:dist-txs-blks-dataset-a} \end{subfigure}
\begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/blockchain/miner-blocks-and-txs-distribution-dataset-b-bar.pdf} \sfigcap{Data set \dsb{}}\label{fig:dist-txs-blks-dataset-b} \end{subfigure}
\begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/fig-2020-bitcoin/miner-blocks-and-txs-distribution-dataset-c-2020-bar.pdf} \sfigcap{Data set \dsc{}}\label{fig:dist-txs-blks-dataset-c} \end{subfigure}
\figcap{Distribution of blocks mined and transactions confirmed by the top-20 MPOs in data sets \dsa{}, \dsb{}, and \dsc{}.
Their combined normalized hash rates account for 94.97\%, 93.52\%, and 98.08\% of all blocks mined in data sets \dsa, \dsb, and \dsc, respectively.}\label{fig:dist-tx-blks-dataset-all}
\end{figure*}

Fairness issues in blockchains have been studied from the point of view of miners. Pass \textit{et al.}~\cite{Pass@PODC17} proposed a fair blockchain where transaction fees and block rewards are distributed fairly among miners, decreasing the variance of mining rewards. Other studies focused on security issues, showing that miners should not mine more blocks than their ``fair share''~\cite{Eyal-CACM2018} and that mining-reward payouts are centralized in mining pools and, therefore, unfairly distributed among their miners~\cite{Romiti2019ADD}. Chen \textit{et al.}~\cite{Chen@AFT19} studied the allocation of block rewards on blockchains, showing that Bitcoin's allocation rule satisfies some desirable properties; these properties do not, however, hold when miners are not risk-neutral, which is the case in Bitcoin. In contrast to these prior works, this paper touches upon fairness issues from the viewpoint of transaction issuers, not miners.

There is a vast literature on incentives in mining. Most of it, however, considers only block rewards~\cite{Romiti2019ADD,Chen@AFT19,Eyal-CACM2018,Pass_Seeman_Shelat_2017,Zhang_Preneel_2019,sompolinsky2015secure,Kiayias@EC16,Fiat@EC19,Goren@EC19,Noda@EC20}. As the block reward halves every four years, some recent works focused on analyzing how incentives will change when transaction fees dominate the rewards. Carlsten \textit{et al.}~\cite{Carlsten@CCS16} showed that having only transaction fees as incentives will create instability. Tsabary and Eyal~\cite{Tsabary@CCS18} extended this result to more general cases, including both block rewards and transaction fees. Easley \textit{et al.}~\cite{Easley19a} proposed a general economic analysis of the system and its welfare with various types of rewards. These prior works, however, assume that miners follow a certain norm for transaction selection and ordering (mostly the fee-rate norm) and look at miners' incentives in terms of how much compute power to exert and when (or some equivalent metric). There are also prior studies on the security implications of having transaction fees as the miners' prime incentive~\cite{Carlsten@CCS16,Li@IV18}, and a vast literature on the security of blockchains more generally (e.g., \cite{Gencer-FC2018,Karame-CCS2016,Vasek-FC2014}). Again, however, these studies focus on miners' incentives to mine and not on transaction ordering; for the latter, they assume that miners follow a norm. These prior studies are, hence, somewhat orthogonal to our work.

Only a few recent works touched upon the issue of how miners select and order transactions, and how this is interlaced with how fees are set. Lavi~\textit{et al.}~\cite{Lavi-WWW2019} and Basu~\textit{et al.}~\cite{Basu-CoRR2019} highlighted the inefficiencies in existing transaction fee-setting mechanisms and proposed alternatives. They argued that miners might not be trustworthy, but without providing empirical evidence. Siddiqui \textit{et al.}~\cite{Siddiqui@AAMAS20} showed through simulations that, with transaction fees as the only incentive, miners would have to select transactions greedily, increasing the latency for most transactions. They proposed an alternative selection mechanism and evaluated it through numerical simulations.
Our work takes a complementary approach: We analyze empirical evidence of miners' deviations from the transaction-ordering norm in the current ecosystem. We also empirically analyze existing collusion at the level of transaction inclusion. To the best of our knowledge, our study is the first of its kind---showing empirical evidence of norm violations in Bitcoin---and our results help motivate the theoretical studies mentioned above.

\section{Investigating Norm Violations}
\label{sec:self-interest-scam-txs}

Our analysis so far showed that, while Bitcoin miners by and large follow the transaction-prioritization norms, there are many clear instances of norm violations. Our next goal is to develop a deeper understanding of the underlying reasons or motivations for miners to deviate from the fee-rate-based norms, at least for some subset of all transactions. To this end, we focus our investigation on the following three types of transactions, for which we hypothesize that miners might have an incentive to deviate from the current norms, which are otherwise well aligned with maximizing their mining rewards.

\begin{enumerate}
\item {\it Self-interest Transactions:} Transactions in which the miners have a vested interest, i.e., where the miners themselves are a party to the transaction as a sender or a receiver of bitcoins. Miners may have an incentive to selfishly accelerate the commitment of such transactions in the blocks they mine themselves.
\item {\it Scam-payment Transactions:} Bitcoins are increasingly being used to launch a variety of ransomware and scam attacks~\cite{Frenkel@nyt17,Mathews@forbest17,Frenkel@nyt20}. A recent scam attack involved using hijacked Twitter accounts of celebrities to encourage their followers to send bitcoins to a specific Bitcoin wallet address~\cite{Frenkel@nyt20}. Given the timely and widespread coverage of this and similar attacks in the popular press and on crowd-sourced websites for reporting scam transactions~\cite{Scam@BitcoinAbuse,Scam@ScamAlert}, and with governments trying to blacklist wallet addresses of entities suspected of illegal activities~\cite{De@Coindesk,Hinkes@Coindesk}, we hypothesize that some miners might decelerate, or even completely exclude, the commitment of scam-payment transactions out of fear or ethical concerns.
\item {\it Dark-fee Transactions:} Recently, some mining pool operators have started offering transaction acceleration services~\cite{BTC@accelerator,ViaBTC@accelerator,Poolin@accelerator,F2Pool@accelerator,AntPool@accelerator}, where anyone wanting to prioritize their transactions can pay an additional fee to a specific mining pool via a side channel (often the MPO's website or a private channel~\cite{strehle2020exclusive}). Such transaction fees are ``dark,'' or opaque, to other mining pools and the public, and we hypothesize that some of the committed low-fee transactions might have been accelerated using such services.
\end{enumerate}

To detect whether a mining pool has accelerated or decelerated the above types of transactions, we first design a robust statistical test. Later, we report our findings from applying the test to the three types of transactions.

\subsection{Statistical test for differential prioritization}

Our goal here is to propose a robust statistical test for detecting whether a given mining pool $m$ prioritizes a given set of committed transactions $c$ \stress{differently} than all other miners. The basic idea behind the statistical test is as follows.
Suppose a mining pool is accelerating (decelerating) the transactions in set $c$. In that case, these transactions will have a disproportionately high (low) chance of being included in blocks mined by this mining pool, relative to the mining pool's hashing power (or rate).

\subsubsection{Test for differential transaction acceleration}

Consider a miner $m$ with normalized hash rate $h = \theta_0$ (estimated as the fraction of blocks mined by $m$). Assume that we are given a set of transactions, denoted as $c$-transactions (for committed transactions), for which we wish to test whether miner $m$ treats them preferentially. To test whether $m$ is prioritizing $c$-transactions, we look at all blocks that include at least one $c$-transaction; call them $c$-blocks. Suppose that there are $y$ such blocks. If $m$ is not prioritizing $c$-transactions, then a fraction $\theta_0$ of all $c$-blocks should be $m$-blocks (i.e., mined by $m$); if $m$ is prioritizing $c$-transactions (compared to other miners), then the fraction will be higher. We want to test whether the true fraction $\theta$ is indeed $\theta_0$ or is higher. We formalize this as follows: We assume that each $c$-block has a probability $\theta$ of being an $m$-block and perform the following test.
\begin{align*}
& H_0: \theta = \theta_0 \\
& H_1: \theta > \theta_0.
\end{align*}
Assuming that the observed number of $c$-blocks mined by $m$ is $x$, the $p$-value of the test is
\begin{align*}
p = Pr (B \ge x),
\end{align*}
where $B$ is a binomially distributed random variable with parameters $\theta_0$ and $y$, that is,
\begin{align*}
p = \sum_{k=x}^y \binom{y}{k} \theta_0^k (1-\theta_0)^{(y-k)}.
\end{align*}
We may fix the size of the test (i.e., the maximal probability of a type I error, which corresponds to rejecting $H_0$ when $H_0$ is true) to $\alpha = 0.01$. Then $H_0$ should be rejected whenever $p<\alpha$. The smaller the $p$, the higher the confidence in rejecting $H_0$, that is, in declaring that $m$ prioritizes $c$-transactions.

The above test is relative, in the sense that we can only detect whether a miner treats $c$-transactions more preferentially than the rest of the miners do. The test cannot determine whether it is the miner accelerating the $c$-transactions (relative to their deserved, i.e., fee-rate-based, priority) or the rest of the miners decelerating them. So, we look at additional empirical evidence from the positions of the $c$-transactions within the $c$-blocks that include them. Specifically, given the set of $c$-transactions $\{c_1, c_2, \ldots, c_n\}$ committed by a miner $m$, we compute a measure that we call the \textit{\textbf{signed position prediction error (SPPE)}}: the average signed difference between the predicted and observed positions (measured as percentile ranks) of all $c$-transactions within the blocks committed by $m$. More precisely,
\begin{align*}
SPPE(m) = \dfrac{\sum_{i=1}^n (c^{p}_{i} - c^{o}_{i}) \cdot 100}{n}
\end{align*}
where $c^{p}_{i}$ and $c^{o}_{i}$ are the predicted and the observed (percentile-rank) positions, respectively, of transaction $c_i$ within the blocks committed by $m$.
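The following sketch (in Python, assuming SciPy is available) implements the acceleration test, its symmetric deceleration counterpart (formalized in the next subsubsection), and the SPPE measure; the sanity check reuses the Poolin row of Table~\ref{tab:twitter-scam-txs}.

\begin{verbatim}
from scipy.stats import binom

def acceleration_p_value(x, y, theta_0):
    # p = Pr(B >= x) for B ~ Binomial(y, theta_0): the probability that
    # a pool with hash rate theta_0 mines at least x of the y c-blocks
    # under the null hypothesis of no preferential treatment.
    return binom.sf(x - 1, y, theta_0)  # sf(k) = Pr(B > k) = Pr(B >= k+1)

def deceleration_p_value(x, y, theta_0):
    # Symmetric test for deceleration: p = Pr(B <= x).
    return binom.cdf(x, y, theta_0)

def sppe(predicted, observed):
    # Signed position prediction error; `predicted` and `observed` are
    # percentile-rank positions (in [0, 1]) of the c-transactions within
    # the blocks committed by the pool.
    return sum((p - o) * 100
               for p, o in zip(predicted, observed)) / len(predicted)

# Sanity check against the Poolin row of the scam-payment table
# (theta_0 = 0.1528, x = 10, y = 53):
print(acceleration_p_value(10, 53, 0.1528))  # ~0.286 (table: 0.2856)
print(deceleration_p_value(10, 53, 0.1528))  # ~0.822 (table: 0.8227)
\end{verbatim}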
\subsubsection{Test for differential transaction deceleration}

While the previous test checks for prioritization (or acceleration), one may also want to test for deceleration. To that end, a symmetric test can be used. Specifically, with the previous notation, the test would be
\begin{align*}
& H_0: \theta = \theta_0 \\
& H_1: \theta < \theta_0;
\end{align*}
and its $p$-value would be
\begin{align*}
p = Pr (B \le x),
\end{align*}
where $B$ is a binomially distributed random variable with parameters $\theta_0$ and $y$, that is,
\begin{align*}
p = \sum_{k=0}^x \binom{y}{k} \theta_0^k (1-\theta_0)^{(y-k)}.
\end{align*}

\begin{figure}[b]
\centering
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/fig-2020-bitcoin/miner-wallet-addresses-log-2020.pdf} \sfigcap{}\label{fig:bar-number-wallet-addresses} \end{subfigure}
\begin{subfigure}[b]{\onecolgrid} \includegraphics[width={\textwidth}]{images/fig-2020-bitcoin/miner-own-txs-distribution-log-bar-2020.pdf} \sfigcap{}\label{fig:bar-num-wallet-addresses-mpo} \end{subfigure}
\figcap{(a) Distribution of the number of wallet addresses used by each of the top-20 MPOs to receive their block rewards; SlushPool and Poolin, for instance, used 56 and 23 distinct wallet addresses, respectively. (b) The counts of inferred MPO transactions; in total, 12,121 transactions were inferred as MPOs' transactions, which corresponds to 0.011\% of all issued transactions recorded in the Bitcoin blockchain. Poolin has the majority with 2232 (18.41\%), followed by Okex with 2089 (17.24\%) and Huobi with 1666 (13.74\%) transactions. BitDeer and Buffett have the same wallet addresses as BTC.com and Lubian.com, respectively; we count the addresses of the former as belonging to the latter.}\label{fig:bar-wallet-addresses}
\end{figure}

\subsubsection{Scaling the tests}

While we did not encounter these issues in the present work, our test has two potential limitations when scaling to large time windows and/or large numbers of transactions.

\begin{table*}[tb]
\tiny
\tabcap{Differential prioritization of self-interest transactions}\label{tab:self-interest-txs}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{@{}lccccccc@{}}
\toprule
\multicolumn{1}{p{0.8cm}}{\thead{Transactions}} & \multicolumn{1}{l}{\thead{mining pool}} & \multicolumn{1}{l}{\thead{norm. hash rate}} & \multicolumn{1}{c}{\multirow{2}{*}{$\thead{x}$}} & \multicolumn{1}{c}{\multirow{2}{*}{$\thead{y}$}} & \multicolumn{2}{c}{\thead{p-value}} & \multicolumn{1}{c}{\thead{\% SPPE}} \\
\multicolumn{1}{c}{\thead{of ...}} & \multicolumn{1}{c}{\thead{(m)}} & \multicolumn{1}{c}{\thead{($\theta_0$)}} & \multicolumn{1}{c}{~} & \multicolumn{1}{c}{~} & \multicolumn{1}{c}{\thead{(accel.)}} & \multicolumn{1}{c}{\thead{(decel.)}} & \multicolumn{1}{c}{\thead{(m)}} \\
\midrule
{\quad\quad \textit{\textbf{F2Pool}}} & F2Pool & 0.1753 & 466 & 839 & \attention{0.0000} & 1.0000 & \attention{78.5494} \\
\arrayrulecolor{gray}\midrule
{\quad\quad \textit{\textbf{ViaBTC}}} & ViaBTC & 0.0676 & 412 & 720 & \attention{0.0000} & 1.0000 & \attention{98.9175} \\
\arrayrulecolor{gray}\midrule
\multirow{2}{*}{\quad\quad \textit{\textbf{1THash \& 58Coin}}} & ViaBTC & 0.0676 & 34 & 201 & \attention{0.0000} & 1.0000 & \attention{81.4516} \\
& 1THash \& 58Coin & 0.0611 & 39 & 201 & \attention{0.0000} & 1.0000 & \attention{96.9143} \\
\arrayrulecolor{gray}\midrule
\multirow{2}{*}{\quad\quad \textit{\textbf{SlushPool}}} & SlushPool & 0.0375 & 214 & 1343 & \attention{0.0000} & 1.0000 & \attention{88.3082} \\
& ViaBTC & 0.0676 & 140 & 1343 & \attention{0.0000} & 1.0000 & \attention{45.1523} \\
\arrayrulecolor{black}\bottomrule
\end{tabular}
}
\end{table*}

First, it may become difficult to compute the $p$-value from the binomial distribution for large values of $y$.
In such cases, we can use the following approximation for our analysis: if $y$ is large enough and $\theta_0$ is not close to zero or one (i.e., $x$ and $y-x$ are large enough), the binomial distribution with parameters $y$ and $\theta_0$ is well approximated by the normal distribution with mean $y\theta_0$ and variance $y\theta_0(1-\theta_0)$. Hence, the $p$-value for the acceleration test can be computed as \begin{align*} p \simeq 1 - \Phi \left( \frac{x-y\theta_0}{\sqrt{y\theta_0(1-\theta_0)}} \right), \end{align*} where $\Phi$ is the CDF of a standard normal random variable. A similar approximation can be used for the deceleration test. Second, the hash rates of miners in our $p$-value test are assumed to be more or less constant (i.e., $\theta_0$ is a constant). This assumption is a limitation of our test as, in reality, the hash rates of miners may vary over time, particularly over large time windows. In such situations, our test results may be affected, particularly when the arrival times of transactions are not spread regularly over the time window of our analysis. We address this issue in the current paper by confirming the results of the $p$-value test through the SPPE test, which is not affected by variable hash rates. It is possible, however, to alleviate this limitation of our analysis. One natural way is to divide the total time window into multiple windows such that the hash rate is more or less constant in each shorter time window, compute a $p$-value in each window, and then combine the obtained $p$-values using Fisher's method \cite{Fisher@1992Statistical,Fisher@1948}. We leave the investigation of such extended test procedures to future work.
\subsection{Self-interest transactions}
\begin{table*}[tb] \begin{center} \tiny \tabcap{Differential prioritization of scam-payment transactions}\label{tab:twitter-scam-txs} \resizebox{0.75\textwidth}{!}{% \begin{tabular}{@{}ccccccr@{}} \toprule \multicolumn{1}{c}{\thead{mining pool}} & \multicolumn{1}{l}{\thead{norm. hash rate}} & \multicolumn{1}{c}{\multirow{2}{*}{$\thead{x}$}} & \multicolumn{1}{c}{\multirow{2}{*}{$\thead{y}$}} & \multicolumn{2}{c}{\thead{p-value}} & \multicolumn{1}{c}{\thead{\% SPPE}} \\ \multicolumn{1}{c}{\thead{(m)}} & \multicolumn{1}{c}{\thead{($\theta_0$)}} & \multicolumn{1}{c}{~} & \multicolumn{1}{c}{~} & \multicolumn{1}{c}{\thead{(accel.)}} & \multicolumn{1}{c}{\thead{(decel.)}} & \multicolumn{1}{c}{\thead{(m)}} \\ \midrule Poolin & 0.1528 & 10 & 53 & 0.2856 & 0.8227 & $-3.9787$ \\ F2Pool & 0.1450 & 10 & 53 & 0.2323 & 0.8629 & $0.8735$ \\ BTC.com & 0.1147 & 9 & 53 & 0.1483 & 0.9233 & $-2.8333$ \\ AntPool & 0.1093 & 4 & 53 & 0.8450 & 0.2989 & $31.5000$ \\ Huobi & 0.0955 & 1 & 53 & 0.9951 & 0.0323 & $-1.6428$ \\ Okex & 0.0698 & 3 & 53 & 0.7248 & 0.4890 & $-5.0000$ \\ 1THash \& 58COIN & 0.0684 & 8 & 53 & 0.0268 & 0.9907 & $-0.5000$ \\ Binance Pool & 0.0590 & 3 & 53 & 0.6120 & 0.6180 & $-2.6000$ \\ ViaBTC & 0.0552 & 1 & 53 & 0.9507 & 0.2020 & $-4.0000$ \\ \bottomrule \end{tabular}% } \end{center} \end{table*}
To identify transactions in which a mining pool is the sender or receiver, we first need to identify the Bitcoin wallets (addresses) that belong to mining pools. In Bitcoin, whenever a mining pool discovers a new block, it specifies a wallet address to receive the mining rewards. This mining pool address is included in the Coinbase transaction (refer~\S\ref{sec:background}) that appears at the start of every block.
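To make the above concrete, the following minimal sketch (ours, not the authors' pipeline) shows how the acceleration and deceleration $p$-values, their large-$y$ normal approximation, the SPPE measure, and the extraction of a Coinbase payout address could be computed. The RPC endpoint, credentials, and helper names are illustrative assumptions; the JSON field names follow recent Bitcoin Core releases.

\begin{verbatim}
# Minimal sketch (ours, not the authors' code). Assumes SciPy and
# the `requests` package; the RPC endpoint is a placeholder.
import requests
from scipy.stats import binom, norm

def accel_p_value(x, y, theta0):
    # p = Pr(B >= x), B ~ Binomial(y, theta0): acceleration test.
    return binom.sf(x - 1, y, theta0)

def decel_p_value(x, y, theta0):
    # p = Pr(B <= x): deceleration test.
    return binom.cdf(x, y, theta0)

def accel_p_value_normal(x, y, theta0):
    # Large-y approximation: p ~ 1 - Phi((x - y*theta0) / sd).
    sd = (y * theta0 * (1.0 - theta0)) ** 0.5
    return 1.0 - norm.cdf((x - y * theta0) / sd)

def sppe(predicted, observed):
    # Average signed difference of percentile-rank positions, in %.
    n = len(predicted)
    return sum(p - o for p, o in zip(predicted, observed)) * 100.0 / n

RPC = "http://user:pass@127.0.0.1:8332"  # placeholder endpoint

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": 0,
               "method": method, "params": list(params)}
    return requests.post(RPC, json=payload).json()["result"]

def coinbase_addresses(height):
    # The Coinbase transaction is always the first one in a block.
    block = rpc("getblock", rpc("getblockhash", height), 2)
    cb = block["tx"][0]
    return [o["scriptPubKey"]["address"] for o in cb["vout"]
            if "address" in o["scriptPubKey"]]

# E.g., for F2Pool's own transactions (first row of the table above),
# accel_p_value(466, 839, 0.1753) is vanishingly small (reported as
# 0.0000), so H0 is rejected.
\end{verbatim}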
In our data set \dsc{}, we gathered all the wallet addresses used by the top-$20$ mining pools to receive their rewards. For each mining pool, we then retrieved all committed transactions in which coins were sent from the mining pool's wallet. Figure~\ref{fig:bar-wallet-addresses} shows the statistics for the mining pool wallets and the transactions spending (sending) coins from (to) the wallets, for each of the top-$20$ mining pools in data set \dsc{}. We found hundreds or thousands of self-interest transactions for most of the mining pools.
\subsubsection{Acceleration of self-interest transactions}
For the self-interest transactions belonging to each of the top-20 mining pools, we separately applied our statistical test to check whether any of the top-10 mining pools (which mined at least $4\%$ of all mined blocks in data set \dsc{}) were preferentially accelerating or decelerating the transactions. In Table~\ref{tab:self-interest-txs}, we report the statistics from our test for mining pools that were found to preferentially treat transactions belonging to their own or other mining pools. Strikingly, Table~\ref{tab:self-interest-txs} shows that 4 out of the top-10 mining pools, namely F2Pool, ViaBTC, 1THash \& 58Coin, and SlushPool, \stress{selfishly accelerated} their own transactions, i.e., coin transfers from or to their own accounts (the p-value for the acceleration test is less than $0.001$). Equally if not more interestingly, Table~\ref{tab:self-interest-txs} shows collusive behavior among mining pools. Specifically, it shows that transactions issued by 1THash \& 58Coin and SlushPool were \stress{collusively accelerated} by ViaBTC (the p-value for the acceleration test is less than $0.001$). That these mining pools were accelerating the transactions is further confirmed by the SPPE measure, which clearly shows that, in each of the above cases, the self-interest transactions were also being included within the blocks ahead of other, higher fee-rate transactions.
\subsection{Scam-payment transactions}
Next, we investigate whether any mining pool attempted to decelerate or exclude scam-payment transactions. On July 15, 2020, multiple celebrities' accounts on Twitter fell prey to a scam attack. The scammers posted the message that anyone who transferred bitcoins to a specific wallet would receive twice the amount in return~\cite{Frenkel@nyt20}. In response, several people sent, in total, $12.87051731$ bitcoins---then worth nearly \num{142000} USD---to the attacker's wallet via $386$ transactions, which were confirmed across $53$ blocks by $12$ miners. To examine the miners' behavior during this scam attack, we selected all blocks mined from July 14 to August 9, 2020 (i.e., \num{3697} blocks in total, containing \num{8318621} issued transactions, as described in~\S\ref{sec:supp-scam-txs}) from our data set \dsc. Once again, we applied our statistical test to check whether any of the top-$9$ mining pools (which mined at least $5\%$ of all mined blocks in this data) were preferentially accelerating or decelerating the transactions. Table~\ref{tab:twitter-scam-txs} shows the test statistics. Interestingly, we find no statistically significant evidence (i.e., p-value less than $0.001$) of scam-payment acceleration or deceleration across all top mining pools. Looking at the SPPE measure across the mining pools, we find no evidence of mining pools (other than AntPool) preferentially ordering the scam-payment transactions within blocks.
In short, our findings show that most mining pool operators today do not distinguish between normal and scam-payment transactions.
\section*{Acknowledgments}\label{sec:ack}
K. P. Gummadi acknowledges support from the European Research Council (ERC) Advanced Grant ``Foundations for Fair Social Computing,'' funded under the European Union's Horizon 2020 Framework Programme (grant agreement no. 789373). P. Loiseau was supported by MIAI @ Grenoble Alpes (ANR-19-P3IA-0003) and by the French National Research Agency through grant ANR-20-CE23-0007. A. Mislove acknowledges support from NSF grants CNS-1900879 and CNS-1955227. J. Messias dedicates this work to his late father, Jota Missias~\cite{JotaMissias@Wiki}.
\section{Data Sets} \label{sec:datasets}
To understand the importance of transaction ordering to users and to investigate when and how miners violate the transaction prioritization ``norms,'' we resort to an empirical, data-driven approach. Below, we briefly describe the three different data sets that we curated from Bitcoin and highlight how we use them in the different analyses in the rest of the paper.
\paraib{Data set \dsa{}.} To check miners' compliance with prioritization norms in Bitcoin, we analyzed all transactions and blocks issued in Bitcoin over a three-week time frame from February 20 through March 13, 2019 (see Table~\ref{tab:datasets}). We obtained the data by running a \term{full} node, Bitcoin software that performs nearly all operations of a miner (e.g., receiving broadcasts of transactions and blocks, validating the data, and re-broadcasting them to peers) with the exception of mining. The data set contains a set of periodic \term{snapshots}, recorded once every $15$ seconds for the entire three-week period, where each snapshot captures the state of the full node's Mempool\xspace{}. We plot the distribution of the count of blocks and transactions mined by the top-20 MPOs for data set \dsa in Figure~\ref{fig:dist-txs-blks-dataset-a}. If we rank the MPOs in data set \dsa by the number of blocks ($B$) mined (or, essentially, the approximate hashing capacity $h$), the top five MPOs turn out to be BTC.com ($B$: $\num{536}$; $h$: $17.18\%$), AntPool ($B$: $\num{399}$; $h$: $12.79\%$), F2Pool ($B$: $\num{352}$; $h$: $11.29\%$), Poolin ($B$: $\num{344}$; $h$: $11.03\%$), and SlushPool ($B$: $\num{279}$; $h$: $8.94\%$). We use this data for checking whether miners adhere to prioritization norms when selecting transactions for confirmation or inclusion in a block (\S\ref{sec:prioritization-norms}).
\paraib{Data set \dsb{}.} Differences in the configuration of the Bitcoin software may subtly affect the inferences drawn from \dsa{}. In the default configuration, for instance, a full node connects to \(8\) peers, and increasing this number may reduce the likelihood of missing a transaction due to a ``slow'' peer. The default configuration also imposes a minimum fee-rate threshold of $\DefTxFee$ for accepting a transaction. We hence instantiated another full node to expand the scope of our data collection. We configured this second node, for instance, to connect to as many as \(125\) peers. We also removed the fee-rate threshold to accept even zero-fee transactions. \dsb{} contains Mempool\xspace{} snapshots of this full node, also recorded once every $15$ seconds, for the entire month of June 2019 (refer Table~\ref{tab:datasets}). We notice that $99.7\%$ of the transactions received by our Mempool\xspace{} were included by miners.
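The snapshotting procedure behind \dsa{} and \dsb{} can be sketched in a few lines. The following minimal, illustrative sketch (ours) assumes a local full node exposing the standard JSON-RPC interface; the endpoint, credentials, and output file are placeholders, and the per-transaction size field is named \texttt{vsize} in recent Bitcoin Core releases (\texttt{size} in older ones).

\begin{verbatim}
# Minimal sketch (ours) of the snapshotting loop: poll the node's
# JSON-RPC interface every 15 seconds and log one timestamped
# Mempool snapshot per poll. `getrawmempool` with verbose=True is
# a standard Bitcoin Core call returning per-transaction metadata.
import json, time, requests

RPC = "http://user:pass@127.0.0.1:8332"  # placeholder endpoint

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": 0,
               "method": method, "params": list(params)}
    return requests.post(RPC, json=payload).json()["result"]

with open("mempool_snapshots.jsonl", "a") as log:
    while True:
        mempool = rpc("getrawmempool", True)   # txid -> metadata
        log.write(json.dumps({
            "t": time.time(),
            "n_txs": len(mempool),
            "bytes": sum(tx["vsize"] for tx in mempool.values()),
        }) + "\n")
        time.sleep(15)
\end{verbatim}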
Figure~\ref{fig:dist-txs-blks-dataset-b} shows the distribution of the count of blocks and transactions mined by the top-20 MPOs for data set \dsb. The top five MPOs are BTC.com ($B$: $\num{889}$; $h$: $19.67\%$), AntPool ($B$: $\num{577}$; $h$: $12.77\%$), F2Pool ($B$: $\num{523}$; $h$: $11.57\%$), SlushPool ($B$: $\num{438}$; $h$: $9.69\%$), and Poolin ($B$: $\num{433}$; $h$: $9.58\%$). As in the case of \dsa, we use this data set in \S\ref{sec:prioritization-norms}.
\begin{figure*}[tbh] \centering \begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/blockchain/amount_of_blocks_and_transactions_created.pdf} \sfigcap{}\label{fig:cdf-tx-blks-btc} \end{subfigure} \begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/mempool/mempool-congestion-both-dataset.pdf} \sfigcap{}\label{fig:mempool-congestion} \end{subfigure} \begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/mempool/mempool-distribution-feb-march.pdf} \sfigcap{}\label{fig:mpool-sz-a} \end{subfigure} \figcap{(a) Volume of transactions issued and blocks mined as a function of time, showing that transactions have been issued at high rates since mid-2017; (b) distributions of Mempool\xspace{} size in both data sets \dsa{} and \dsb{}; and (c) the Mempool\xspace size in \dsa{} as a function of time, both indicating that congestion is typical in Bitcoin.} \end{figure*}
\paraib{Data set \dsc{}.} The insights derived from the above data motivated us to shed light on the aberrant behavior of mining pool operators (MPOs). To this end, we gathered all ($\num{53214}$) Bitcoin blocks mined, and their \num{112542268} transactions, from Jan. 1\tsup{st} to Dec. 31\tsup{st} 2020. These blocks also contain one Coinbase transaction per block, which the MPO creates to receive the block reward and the transaction fees. This data set, labeled \dsc{}, contains $\num{112489054}$ issued transactions (see Table~\ref{tab:datasets}). MPOs typically include a \stress{signature} or \stress{marker} in the Coinbase transaction, probably to claim their ownership of the block. Following prior work (e.g., \cite{judmayer2017merged,Romiti2019ADD}), we use such markers for identifying the MPO (owner) of each block. We failed to identify the owners of $\num{703}$ blocks (approximately $1.32\%$ of the total), although we identified $30$ MPOs in our data set. In this paper, we consider only the top-20 MPOs, whose combined normalized hash rates account for $98.08\%$ of all blocks mined. Figure~\ref{fig:dist-txs-blks-dataset-c} shows the count of blocks mined by the top-20 MPOs according to \dsc{}. The top five MPOs in terms of the number of blocks ($B$) mined are F2Pool ($B$: $\num{9326}$; $h$: $17.53\%$), Poolin ($B$: $\num{7876}$; $h$: $14.80\%$), BTC.com ($B$: $\num{6381}$; $h$: $11.99\%$), AntPool ($B$: $\num{5832}$; $h$: $10.96\%$), and Huobi ($B$: $\num{3990}$; $h$: $7.5\%$). We use this data set in \S\ref{sec:prioritization-norms} and \S\ref{sec:self-interest-scam-txs}.
\subsection{Dark-fee transactions} \label{sec:dark-fee-txs}
We refer to transactions that offer additional fees to specific mining pools through an opaque, non-public side-channel payment as dark-fee transactions. Many large mining pool operators allow such side-channel payments on their websites for users who want to ``accelerate'' the confirmation of their transactions, especially during periods of congestion.
Such private side-channel payments, which hide from others the fees a user pays to miners, have further benefits for users~\cite{BTC@accelerator,Taichi@accelerator,F2Pool@accelerator,Poolin@accelerator,AntPool@accelerator}. One well-known advantage is, for instance, avoiding the fee-rate competition for transaction inclusion, particularly during periods of high Mempool\xspace congestion; private side-channel payments would reduce a user's transaction cost volatility and curb front-running risks~\cite{Daian@S&P20,strehle2020exclusive,Eskandari@FC-2020}. We use the data set \dsc{} to first investigate how such transaction acceleration services work and later propose a simple test for detecting accelerated transactions in the Bitcoin blockchain.
\subsubsection{Investigating transaction acceleration services}
We examined the transaction acceleration services offered by $5$ large Bitcoin mining pools, namely BTC.com~\cite{BTC@accelerator}, AntPool~\cite{AntPool@accelerator}, ViaBTC~\cite{ViaBTC@accelerator}, F2Pool~\cite{F2Pool@accelerator}, and Poolin~\cite{Poolin@accelerator}. Specifically, we queried BTC.com for the prices of accelerating all transactions in a real-time snapshot of the Mempool\xspace in data set \dsc (see~\S\ref{sec:tx-accelerator-comparison}). We found that the dark fee requested by BTC.com to accelerate each transaction is so high that, if it were added to the publicly offered transaction fee, the resulting total fee-rate would be higher than the fee-rate offered by any other transaction in the Mempool\xspace snapshot. Put differently, had users included the requested acceleration fees in the publicly offered fee when issuing the transaction, every miner would have included the transaction with the highest priority. The above observation raises the following question: \stress{why would rational users offer a dark fee to incentivize a subset of miners to prioritize their transaction rather than publicly announce the fee to incentivize all miners to prioritize their transaction?} One potential explanation could be that, as payment senders determine the publicly offered transaction fees, payment receivers might wish to accelerate the transaction confirmation by offering an acceleration fee. Another explanation could be that the user issuing the transaction might want to avoid revealing the true fees they are willing to offer publicly, to avoid a fee-rate battle with transactions competing for inclusion in the chain during congestion. Opaque transaction fees can reduce transaction cost volatility, but they may also unfairly bias the level playing field amongst user transactions attempting to front-run one another~\cite{strehle2020exclusive,Daian@S&P20}. On the other hand, every rational mining pool has clear incentives to offer such acceleration services. They receive a very high fee by mining the accelerated transaction. Better still, they keep the offered fee even if the accelerated transaction is mined by some other miner.
\subsubsection{Detecting accelerated transactions}
\begin{table}[tb] \small \begin{center} \tabcap{For an SPPE $\ge$ 99\%, we observe that 64.98\% of BTC.com transactions were accelerated; the fourth-column values are obtained by dividing the values in the third column by those in the second.
The number of accelerated transactions decreases to 18.12\% for an SPPE $\ge$ 90\% and to 1.06\% for an SPPE $\ge$ 50\%.}\label{tab:sppe-tx-violation-acceleration} \resizebox{.45\textwidth}{!}{% \begin{tabular}{rrrr} \toprule \multicolumn{1}{c}{\thead{SPPE ($\ge$)}} & \thead{\# txs} & \thead{\# acc. txs} & \thead{\% acc. txs} \\ \midrule $100\%$ & \num{628} & \num{464} & $73.89$ \\ $99\%$ & \num{1108} & \num{720} & $64.98$ \\ $90\%$ & \num{5365} & \num{972} & $18.12$ \\ $50\%$ & \num{95282} & \num{1007} & $1.06$ \\ $1\%$ & \num{657423} & \num{1029} & $0.16$ \\ \bottomrule \end{tabular} } \end{center} \end{table}
Given the high fees demanded by acceleration services, we anticipate that \stress{accelerated transactions would be included in the blockchain with the highest priority}, i.e., in the first few blocks mined by the accelerating miner and amongst the first few positions within the block. We would also anticipate that \stress{without the acceleration fee, the transaction would not stand a chance of being included in the block based on its publicly offered transaction fee}. The above two observations suggest a potential method for detecting accelerated transactions in the Bitcoin blockchain: an accelerated transaction would have a very high \textit{\textbf{signed position prediction error (SPPE)}}, as its predicted position based on its public fee would be towards the bottom of the block it is included in, while its actual position would be towards the very top of the block. To test the effectiveness of our method, we analyzed all \num{6381} blocks and \num{13395079} transactions mined by the BTC.com mining pool in data set \dsc{}. We then extracted all transactions with SPPE greater than or equal to $100\%$, $99\%$, $90\%$, $50\%$, and $1\%$, and checked what fraction of such transactions were accelerated. Given a transaction identifier, BTC.com's acceleration service~\cite{BTC@accelerator} allows anyone to verify whether the transaction has been accelerated. Our results are shown in Table~\ref{tab:sppe-tx-violation-acceleration}. We find that more than $64\%$ of the \num{1108} transactions with SPPE greater than or equal to $99\%$ were accelerated, while only $1.06\%$ of transactions with SPPE greater than or equal to $50\%$ were accelerated. In comparison, we found no accelerated transactions in a random sample of \num{1000} transactions drawn from the \num{13395079} transactions mined by BTC.com. Our results show that large values of SPPE for confirmed transactions indicate the potential use of transaction acceleration services. In particular, a transaction with SPPE $\ge 99\%$ (i.e., a transaction that is included in the top $1\%$ of the block positions when it should have been included in the bottom $1\%$ of the block positions based on its public fee-rate) has a high chance of being accelerated.
\if 0 \begin{figure}[tpb] \centering \includegraphics[width={\onecolgrid}]{images/fig-2018--2020-bitcoin/blocks-violation-2018--2020.pdf} \figcap{We observe blocks with accelerated transactions to be quite common among the top 15 MPOs. For an SPPE greater than or equal to $99\%$, the mining pools with a high percentage of blocks containing accelerated transactions are ViaBTC ($41.36\%$), 1THash \& 58COIN ($17.58\%$), SlushPool ($11.58\%$), BTC.com ($10.03\%$), and F2Pool ($9.63\%$).
}\label{fig:block-with-tx-violation} \end{figure} \fi
\if 0
\subsection{Detecting collusive transaction acceleration}
Given the prevalence of transaction acceleration amongst many large mining pools, we want to check whether some of the mining pools collude in including accelerated transactions. On the one hand, collusion makes acceleration services more effective, as transactions would be prioritized by more mining pools with higher combined hash rates. On the other hand, collusion amongst mining pools to share dark fees via side-channels to alter transaction ordering exacerbates the growing concerns about the concentration of hash rates within a small number of large mining pools~\cite{Eyal-CACM2018,Kelkar@CRIPTO20,Siddiqui@AAMAS20,strehle2020exclusive}. To detect collusion between mining pools, we decided to run active real-world experiments. Specifically, we paid ViaBTC~\cite{ViaBTC@accelerator} to accelerate selected transactions during periods of high congestion between November 26\tsup{th} and December 1\tsup{st} 2020. From 10 Mempool\xspace snapshots during this period, we selected transactions that offered a very low fee-rate (i.e., 1--2 sat-per-byte) for acceleration. To keep our acceleration costs low, we selected transactions with the smallest size (which was 110 bytes) within this set. For each of the 10 snapshots, we had multiple transactions with such low fee-rates and small size, for a total of 212 transactions across all the snapshots. We randomly selected one transaction from each snapshot and paid ViaBTC to accelerate it. In total, we paid 205 Euros for all 10 transaction accelerations. \PENDING{Refer App.C for more details.}
\begin{table}[tb] \begin{center} \tabcap{Accelerated transactions have fewer delays and are included at the top of the block, i.e., at higher positions compared to non-accelerated transactions.}\label{tab:active-experiment-delay-position} \resizebox{.45\textwidth}{!}{% \begin{tabular}{rcccc} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\thead{metrics}}} & \multicolumn{2}{c}{\thead{delay in \# of blocks}} & \multicolumn{2}{c}{\thead{perc. position in a block}} \\ \multicolumn{1}{c}{} & \thead{acc.} & \thead{non-acc.} & \thead{acc.} & \thead{non-acc.} \\ \midrule minimum & 1 & 9 & 0.07 & 17.47 \\ 25-perc & 1 & 148 & 0.08 & 75.88 \\ median & 2 & 191 & 0.09 & 87.92 \\ 75-perc & 2 & 247 & 0.20 & 95.00 \\ maximum & 3 & 326 & 4.39 & 99.95 \\ average & 1.8 & 198.5 & 0.79 & 84.46 \\ \bottomrule \end{tabular} } \end{center} \end{table}
We then compare the priority with which the accelerated transactions and the $202$ ($= 212-10$) non-accelerated transactions with similar fee-rates and sizes were included in the Bitcoin blockchain. The impact of acceleration was strikingly apparent, as shown in Table~\ref{tab:active-experiment-delay-position}. All $10$ accelerated transactions were included within $1$--$3$ blocks after their acceleration, with an average delay of $1.8$ blocks. In contrast, the minimum delay for the $202$ non-accelerated transactions of comparable fee-rates and sizes was $9$ blocks, with an average delay of $198.5$ blocks. Interestingly, $38$ of the non-accelerated transactions had yet to be included in the blockchain as of December 4\tsup{th}, 2020. Similarly, the accelerated transactions were included at the top $0.07$--$4.39$ percentile positions, with an average percentile position of $0.79$, while the non-accelerated transactions were included at percentile positions between $17.47$ and $99.95$, with an average of $84.46$.
From the above observations, it is clear that the transactions we accelerated were included with high priority.
\begin{table}[tb] \begin{center} \tabcap{If we rank the miners who confirmed the accelerated transactions based on their daily, weekly, and monthly hash-rate power at the time these experiments were conducted, the combined hash power of these mining pools exceeds 55\% of Bitcoin's total hashing power.}\label{tab:active-experiment-hash-rate} \resizebox{.45\textwidth}{!}{% \begin{tabular}{rccc} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\thead{MPO}}} & \multicolumn{3}{c}{\thead{Hash-rate}} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{\thead{last 24h}} & \multicolumn{1}{c}{\thead{last week}} & \multicolumn{1}{c}{\thead{last month}} \\ \midrule F2Pool & \multicolumn{1}{c}{19.9\%} & \multicolumn{1}{c}{18.7\%} & \multicolumn{1}{c}{19.9\%} \\ AntPool & \multicolumn{1}{c}{12.5\%} & \multicolumn{1}{c}{10.6\%} & \multicolumn{1}{c}{10.2\%} \\ Binance & \multicolumn{1}{c}{9.6\%} & \multicolumn{1}{c}{10.3\%} & \multicolumn{1}{c}{10.0\%} \\ Huobi & \multicolumn{1}{c}{8.1\%} & \multicolumn{1}{c}{9.3\%} & \multicolumn{1}{c}{9.8\%} \\ ViaBTC & \multicolumn{1}{c}{5.1\%} & \multicolumn{1}{c}{7.1\%} & \multicolumn{1}{c}{7.7\%} \\ \thead{Total} & \multicolumn{1}{c}{\attention{55.2\%}} & \multicolumn{1}{c}{\attention{56\%}} & \multicolumn{1}{c}{\attention{57.6\%}} \\ \bottomrule \end{tabular} } \end{center} \end{table}
We then examined the mining pools that confirmed our accelerated transactions. Interestingly, even though we accelerated our transactions using the ViaBTC mining pool, our $10$ transactions were included by $5$ different mining pools, namely F2Pool, AntPool, Binance, Huobi, and ViaBTC. All these colluding pools rank amongst the top-8 mining pools in terms of their hash rates at the time of our experiments. Table~\ref{tab:active-experiment-hash-rate} shows the individual as well as the combined hash rates of these $5$ colluding mining pools over the last day, last week, and last month before the conclusion of our experiment on December 1\tsup{st}, 2020. The most striking, and the most worrisome, fact is that the combined hash rates of these colluding mining pools exceed $55\%$ of the total Bitcoin hash rate.
\if 0 Finally, we investigate whether any mining pool attempted to accelerate or decelerate transactions that offer less than the recommended fees. We used our data set \dsb{}, where we identified 53 low-fee transactions that were committed across 48 blocks mined by 5 different miners. Once again, we applied our statistical test to check whether any of the top-7 mining pools (which mined at least 5\% of all mined blocks in the data set) were preferentially accelerating or decelerating the transactions. Table~\ref{tab:low-fee-txs} shows the test statistics. Interestingly, we find statistically significant evidence (i.e., p-value less than 0.01) that F2Pool and ViaBTC are accelerating the low-fee transactions compared to other mining pools, while the remaining mining pools are decelerating them. Looking at the SPPE measure across the mining pools, we find confirmatory evidence that F2Pool and ViaBTC are preferentially ordering these low-fee transactions within their blocks. One potential explanation of our findings is that the low-fee transactions have been accelerated via the transaction acceleration services offered by F2Pool and ViaBTC.
\fi \fi
\subsection{Does transaction ordering matter?} \label{sec:mempool}
\begin{figure*}[tbh] \centering \begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/mempool/mempool-commit-times-both-dataset.pdf} \sfigcap{}\label{fig:tx-commit-times} \end{subfigure} \begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/blockchain/blockchain-txs-feerate-cdf.pdf} \sfigcap{}\label{fig:cdf-fee-all} \end{subfigure} \begin{subfigure}[b]{\threecolgrid} \includegraphics[width={\textwidth}]{images/mempool/mempool-feerate-congested-feb-march-cdf.pdf} \sfigcap{}\label{fig:fee-cong-rel-a} \end{subfigure} \figcap{(a) Distributions of delays until transaction inclusion show that a significant fraction of transactions experience at least 3 blocks (or approximately 30 minutes) of delay; distributions of fee-rates for (b) all transactions and (c) transactions (in \dsa{}) issued at different congestion levels clearly indicate that users incentivize miners through transaction fees.} \end{figure*}
Congestion in the Mempool\xspace leads to contention among transactions for inclusion in a block. Transactions that fail to contend with others (i.e., to win a spot for inclusion) experience inevitable delays in commit times. Transaction ordering, hence, has crucial implications for users when the Mempool\xspace{} experiences congestion. For instance, the Bitcoin Core code and most wallet software rely on the distribution of the fee-rates of transactions included in previous blocks to suggest to users the fees that they should include in their transactions~\cite{BitcoinCore-2021,Fees@Coinbase,Lavi-WWW2019}. Such transaction-fee predictions assume that miners follow the norm; when miners deviate from it, the predictions from any such predictor will be misleading.\footnote{% Coinbase, one of the top cryptocurrency exchanges, does not allow users to set transaction fees manually. Instead, it charges a fee based on how much it expects to pay for the concerned transaction, which in turn relies on miners following the norm~\cite{Fees@Coinbase}.} Below, we examine whether the Mempool\xspace{} in a real-world blockchain deployment experiences congestion and what its impact is on transaction-commit delays. We then analyze whether, and how, users adjust transaction fees to cope with congestion, and the effect of these fee adjustments on commit delays.
\subsubsection{Congestion and delays} \label{subsec:cong-delays}
Bitcoin's design---specifically, the adjustment of hashing difficulty to enforce a constant mining rate---ensures that there is a steady flow of currency generation in the network. The aggregate number of blocks mined in Bitcoin, consequently, increases linearly over time (Figure~\ref{fig:cdf-tx-blks-btc}). Transactions, however, are \stress{not} subject to such constraints and have been issued at much higher rates, particularly, according to Figure~\ref{fig:cdf-tx-blks-btc}, since mid-2017: $60\%$ of all transactions ever introduced were added in only the last 3.5 years of the nearly decade-long life of the cryptocurrency. Should this growth in transaction issuance continue, transactions will increasingly have to contend with one another for inclusion within the limited space (of $\uMB{1}$) in a block. Below, we empirically show that this contention among transactions is already common in the Bitcoin network.
Using the data sets \dsa{} and \dsb{} (refer~\S\ref{sec:datasets}), we measured the number of unconfirmed transactions in the Mempool\xspace{} at a granularity of $15$ seconds. Per Figure~\ref{fig:mempool-congestion}, congestion in the Mempool\xspace{} is \stress{typical} in Bitcoin: during the three-week period of \dsa, the aggregate size of all unconfirmed transactions was above the maximum block size (of $\uMB{1}$) for nearly $75\%$ of the time; per data set \dsb{}, the Mempool\xspace was congested for nearly $92\%$ of the time period. Figure~\ref{fig:mpool-sz-a} provides a complementary view of the Mempool\xspace congestion in \dsa{}, by plotting the Mempool\xspace{} size as a function of time. The measurements reveal a huge variance in Mempool\xspace{} congestion, with the aggregate size of unconfirmed transactions at times exceeding $15$ times the maximum size of a block. Transactions queued up during such periods of high congestion will have to contend with one another until the Mempool\xspace{} size drains below $\uMB{1}$. These observations also hold in data set \dsb{}, the details of which are in~\S\ref{sec:supp-tx-ord}. The Mempool\xspace congestion, which in turn leads to the contention among transactions for inclusion in a block, has one serious implication for users: delays in transaction-commit times. While $65\%$ ($60\%$) of all transactions in data set \dsa (\dsb) get committed in the next block (i.e., in the block immediately following their arrival in the Mempool\xspace), Figure~\ref{fig:tx-commit-times} shows that nearly $15\%$ ($20\%$) of them wait for at least $3$ blocks (i.e., $30$~minutes on average). Moreover, $5\%$ ($10\%$) of the transactions wait for $10$ or more blocks, or $100$~minutes on average, in data set \dsa (\dsb). While no transaction waited for more than a day in data set \dsa, a~small percentage of transactions waited for up to five days (because of the high levels of congestion in June 2019) in data set \dsb.
\parai{Takeaways.} The Mempool\xspace{} is typically congested in Bitcoin. Transactions, hence, typically contend with one another for inclusion in a block. The Mempool\xspace{} congestion has non-trivial implications for transaction-commit times.
\subsubsection{Transaction fee-rates and delays}\label{subsec:feerates}
To combat the delays and ensure that a transaction is committed ``on time'' (i.e., selected for inclusion in the earliest block), users may include a transaction fee to incentivize the miner. While the block reward since May $11$, 2020 is $\uBTC{6.25}$, the aggregate fees accrued per block are becoming considerable (i.e., $6.29\%$ of the total miner revenue in 2020, per Table~\ref{tab:fee-revenue} in~\S\ref{sec:signif-tx-fees}). Prior work also shows that revenue from transaction fees is clearly increasing~\cite{Easley-SSRN2017}. With the volume of transactions growing aggressively over time (Figure~\ref{fig:cdf-tx-blks-btc}) and the block rewards in Bitcoin halving every four years, it is inevitable that transaction fees will be an important, if not the only, criterion for including a transaction. Below, we analyze whether Bitcoin users incentivize miners via transaction fees and whether such incentives are effective today. Per Figure~\ref{fig:cdf-fee-all}, the transaction fee-rates of committed transactions in both data sets \dsa{} and \dsb{} exhibit a wide range, from $10^{-6}$ to beyond $\uTxFee{1}$.
The fee-rate distributions of committed transactions also do not vary much between different mining pool operators (refer Figure~\ref{fig:cdf-fee-top5} in \S\ref{sec:tx-fees-across-mpos}). A~few transactions ($0.001\%$ in \dsa{} and $0.07\%$ in \dsb{}) were committed despite offering fee-rates less than the recommended minimum of $10^{-5}$~BTC/KB{}. A non-trivial percentage of transactions offered fee-rates that are two orders of magnitude higher than the recommended value; particularly in data set \dsb, perhaps due to the comparatively high levels of congestion (cf. Figure~\ref{fig:mpool-sz-a} and Figure~\ref{fig:mpool-sz-b}), $34.7\%$ of transactions offered fee-rates higher than $10^{-3}$~BTC/KB{}. Approximately $70\%$ ($51.3\%$) of the transactions in data set \dsa{} (\dsb{}) offer fee-rates between $10^{-4}$ and $10^{-3}$~BTC/KB{}, i.e., between one and two orders of magnitude more than the recommended minimum. Such high fee-rates clearly capture the users' intent to incentivize the miners. Our premise is that the (high) fee-rates correlate with the level of Mempool\xspace{} congestion. Said differently, we hypothesize that users increase the fee-rates to curb the delays induced by congestion. To test this hypothesis, we separate the Mempool\xspace{} snapshots (cf.~\S\ref{subsec:cong-delays}) into $4$ different bins. Each bin corresponds to a specific level of congestion identified by the Mempool\xspace{} size as follows: lower than \uMB{1} (\stress{no congestion}), in $(1, 2]$ MB (\stress{lowest congestion}), in $(2, 4]$ MB, and higher than \uMB{4} (\stress{highest congestion}). The fee-rates of transactions observed in the different bins or congestion levels, shown in Figure~\ref{fig:fee-cong-rel-a}, then validate our hypothesis: fee-rates are strictly higher (in distribution, and hence also on average) for higher congestion levels. Figure~\ref{fig:fee-delay-rel-a} shows that users' strategy of increasing fee-rates to combat congestion seems to work well in practice. Here, we compare the CDFs of commit delays of transactions with low (i.e., less than $10^{-4}$~BTC/KB{}), high (i.e., between $10^{-4}$ and $10^{-3}$~BTC/KB{}), and exorbitant (i.e., more than $10^{-3}$) fee-rates, in data set \dsa{}. A similar analysis with data set \dsb{} is provided in~\S\ref{sec:fees-and-cong}. We observe that an increase in the transaction fee-rates is consistently rewarded (by miners) with a decrease in the commit delays. This observation suggests that, at least to some extent, miners prioritize transactions for inclusion based on fee-rates, i.e., the fee-per-byte metric.
\begin{figure}[tb] \centering \includegraphics[width={\onecolgrid}]{images/mempool/mempool-commit-times-feerate-feb.pdf} \figcap{Distributions of transaction-commit delays for different fee-rates for transactions in \dsa{}; incentivizing miners via fee-rates works well in practice.}\label{fig:fee-delay-rel-a} \end{figure}
\parai{Takeaways.} A significant fraction of transactions offer fee-rates that are well above the recommended minimum. Fee-rates are typically higher at higher congestion levels, and they reduce the commit delays. These observations suggest that users are indeed willing to spend money to decrease the commit delays of their transactions during periods of congestion.
\section{Background}\label{sec:background}
A Bitcoin user or client issues transactions that move currency from one or more \newterm{wallets} (i.e., addresses) owned by the client to another.
\newterm{Miners}, who are a~subset of these users, validate the transactions and include them in a \newterm{block}. A~block is a set of zero\footnote{Miners can mine an ``empty'' block without including any transaction in it.} or more transactions in addition to the \newterm{Coinbase} transaction, which moves the rewards to the miner's wallet. Until these transactions are included in a block, they remain \newterm{unconfirmed}. Miners create a block by including such unconfirmed transactions and solving a cryptographic puzzle that includes, among other things, a hash of the most recent block mined in the network. The chain of cryptographic hashes linking each block to an ancestor, all the way to the initial (or \newterm{genesis}) block~\cite{blockchain-2009}, constitutes the blockchain. Miners are rewarded for their work in two ways. First, miners reap a block reward upon mining a block. Second, miners also collect fees, if any, from each transaction; fees are included by users to incentivize the miners to commit their transactions. We refer to the software implementation (along with the hardware) used by a miner as a \newterm{node}. A~node allows a miner to receive broadcasts of transactions and blocks from their peers, validate the data, and mine a block. Nodes queue the unconfirmed transactions received via broadcasts in an in-memory buffer, called the \newterm{Mempool\xspace}, from where they are dequeued for inclusion in a block. One can also configure the node to skip mining and simply use it as an observer.
\input{sections/norms}
\section{Transaction-Acceleration Fees} \label{sec:tx-accelerator-comparison}
In this experiment, we compare the transaction-acceleration fees with the typical transaction fees in Bitcoin. To this end, we retrieved a snapshot containing \num{26332} unconfirmed transactions from our node's Mempool\xspace on November 24\tsup{th} 2020 at 10:08:41 UTC. Then, for each transaction, we queried its respective transaction-accelerator price (or acceleration fee) via the acceleration service provided by BTC.com~\cite{BTC@accelerator}. We inferred the acceleration fees for \num{23341} ($88.64\%$) out of the \num{26332} unconfirmed transactions. Figure~\ref{fig:acceleration-fee-price-comparison} shows the CDFs of both the Bitcoin transaction fees and the acceleration fees quoted by BTC.com. The acceleration fee is on average $566.3$ times higher (std. \num{4734.67}) and on median $116.64$ times higher than the Bitcoin transaction fees. At the time of this experiment, 1~BTC was worth \num{18875.10}~USD.
\section{Congestion in Mempool\xspace{} of \dsb{}} \label{sec:supp-tx-ord}
Congestion in the Mempool\xspace{} is typical not only in \dsa{} (as discussed in~\S\ref{subsec:cong-delays}), but also in \dsb{}. Indeed, Figure~\ref{fig:mpool-sz-b} reveals a huge variance in Mempool\xspace{} congestion, much higher than that observed in \dsa{}. Mempool\xspace{} size fluctuations in \dsb{} are, for instance, approximately three times higher than those in \dsa{}. Around June $22\tsup{nd}$, there was a surge in Bitcoin price following the announcements of Facebook's Libra\footnote{On June 18\tsup{th}, Facebook announced its cryptocurrency, Libra, which was later renamed to Diem.~\url{https://www.diem.com}} and another surge around June $25\tsup{th}$ after the news of US dollar depreciation~\cite{CNN-BITCOIN-2019}. These price surges significantly increased the number of transactions issued, which in turn introduced delays.
As a consequence, at times, the Mempool\xspace{} in \dsb{} takes much longer than in \dsa{} to be drained of all transactions.
\begin{figure}[h] \centering \includegraphics[width={\onecolgrid}]{images/mempool/mempool-distribution-jun-july} \figcap{Mempool\xspace{} size from \dsb{} as a function of time.}\label{fig:mpool-sz-b} \end{figure}
\section{Significance of Transaction Fees} \label{sec:signif-tx-fees}
Table~\ref{tab:fee-revenue} shows the contribution of transaction fees towards miners' revenue across all blocks mined from 2016 to 2020. In 2018, fees accounted for an average of $3.19\%$ of miners' total revenue per block; in 2019 and 2020, the averages were $2.75\%$ and $6.29\%$, respectively. However, if we consider only blocks mined from May 2020 onwards (i.e., blocks with a mining reward of $6.25$ BTC), the fees account for, on average, $8.90\%$ (std. $6.54\%$) of the total revenue. Therefore, revenue from transaction fees is increasing~\cite{Easley-SSRN2017}, and this trend is likely to continue.
\begin{table}[h] \begin{center} \tabcap{Miners' relative revenue from transaction fees (expressed as a percentage of the total revenue) across all blocks mined from 2016 until the end of 2020.}\label{tab:fee-revenue} \resizebox{.45\textwidth}{!}{% \begin{tabular}{rcrccccrc} \toprule \multicolumn{1}{c}{\multirow{1}{*}{\thead{Year}}} & \multicolumn{1}{c}{\thead{\# of blocks}} & \multicolumn{1}{c}{\thead{mean}} & \multicolumn{1}{c}{\thead{std}} & \multicolumn{1}{c}{\thead{min}} & \multicolumn{1}{c}{\thead{25-perc}} & \multicolumn{1}{c}{\thead{median}} & \multicolumn{1}{c}{\thead{75-perc}} & \multicolumn{1}{c}{\thead{max}}\\ \midrule 2016 & \num{54851} & 2.48 & 2.12 & 0 & 0.87 & 1.78 & 3.84 & 92.10 \\ 2017 & \num{55928} & 11.77 & 7.73 & 0 & 6.33 & 10.49 & 15.58 & 86.44 \\ 2018 & \num{54498} & 3.19 & 5.85 & 0 & 0.52 & 1.22 & 2.60 & 44.19 \\ 2019 & \num{54232} & 2.75 & 2.77 & 0 & 0.80 & 1.81 & 3.70 & 24.32 \\ 2020 & \num{53211} & 6.29 & 6.34 & 0 & 1.37 & 4.00 & 9.71 & 39.46 \\ \bottomrule \end{tabular} } \end{center} \end{table}
\begin{figure}[b] \centering \includegraphics[width={\onecolgrid}]{images/blockchain/blockchain-txs-feerate-mp5-dataset-a-cdf} \figcap{Distributions of fee-rates for transactions committed by the top-5 mining pools in data set \dsa.}\label{fig:cdf-fee-top5} \end{figure}
\section{Transaction Fee-rates across MPOs} \label{sec:tx-fees-across-mpos}
The transaction fee-rates of committed transactions in both data sets \dsa{} and \dsb{} exhibit a wide range, from $10^{-6}$ to beyond $\uTxFee{1}$. A comparison of the fee-rates of transactions in \dsa{} committed by the top five mining pool operators (in a rank ordering of mining pool operators based on the number of blocks mined), in Figure~\ref{fig:cdf-fee-top5}, shows no major differences in fee-rate distributions across the different MPOs. Around $70\%$ of the transactions offer fee-rates from $10^{-4}$ to $10^{-3}$~\uTxFee{}, i.e., one to two orders of magnitude more than the recommended minimum of $10^{-5}$~\uTxFee{}. We hypothesize that users increase the fee-rates offered during high Mempool\xspace congestion---they assume that a higher fee-rate implies a lower transaction delay or commit time.
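This hypothesis is tested (here and in Figure~\ref{fig:fee-cong-rel-a}) by binning Mempool\xspace{} snapshots by congestion level. A minimal sketch of that binning is given below; the sketch is ours, the level names are illustrative, and it assumes per-snapshot Mempool\xspace{} sizes in MB together with the fee-rates of the transactions arriving at each snapshot.

\begin{verbatim}
# Minimal sketch (ours) of the congestion-level binning: each
# Mempool snapshot is labeled by its size, and the fee-rates of
# transactions arriving at that snapshot inherit the label, so that
# per-level fee-rate distributions (CDFs) can be compared.
def congestion_level(mempool_mb):
    if mempool_mb <= 1.0:
        return "no congestion"       # below the 1 MB block size
    if mempool_mb <= 2.0:
        return "lowest congestion"   # (1, 2] MB
    if mempool_mb <= 4.0:
        return "medium congestion"   # (2, 4] MB (illustrative name)
    return "highest congestion"      # above 4 MB

def feerates_by_level(snapshots):
    # snapshots: iterable of (mempool_size_mb, [fee-rates of new txs])
    levels = {}
    for mb, rates in snapshots:
        levels.setdefault(congestion_level(mb), []).extend(rates)
    return levels
\end{verbatim}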
\section{On Fee-rates and Congestion} \label{sec:fees-and-cong}
\begin{figure}[t] \centering \includegraphics[width={\onecolgrid}]{images/mempool/mempool-feerate-congested-jun-july-cdf} \figcap{Distributions of transaction-commit delays for transactions in \dsb{} issued at different congestion levels.}\label{fig:fee-cong-rel-b} \end{figure}
In Figure~\ref{fig:fee-cong-rel-b}, we show the fee-rates of transactions observed in $4$ different bins or congestion levels in data set \dsb. Each bin in the plot corresponds to a specific level of congestion identified by the Mempool\xspace size: lower than \uMB{1} (\stress{no congestion}), in $(1, 2]$ MB (\stress{lowest congestion}), in $(2, 4]$ MB, and higher than \uMB{4} (\stress{highest congestion}). Fee-rates at high congestion levels are strictly higher (in distribution, and hence also on average) than those at low congestion levels. Users, therefore, increase transaction fees to mitigate the delays incurred during congestion.
\begin{figure}[h] \centering \includegraphics[width={\onecolgrid}]{images/mempool/mempool-commit-times-feerate-june.pdf} \figcap{Distributions of transaction-commit delays in \dsb{} for different transaction fee-rates.}\label{fig:fee-delay-rel-b} \end{figure}
Figure~\ref{fig:fee-delay-rel-b} shows that users' strategy of increasing fee-rates to combat congestion seems to work well in practice---the higher the fee-rate, the lower the transaction-commit delay. Here, we compare the CDFs of commit delays of transactions with low (i.e., less than $10^{-4}$~BTC/KB{}), high (i.e., between $10^{-4}$ and $10^{-3}$~BTC/KB{}), and exorbitant (i.e., more than $10^{-3}$) fee-rates, in data set \dsb. The commit delays for transactions with exorbitant fee-rates (i.e., greater than $10^{-3}$~BTC/KB{}) are significantly smaller than those with low fee-rates (i.e., less than $10^{-4}$~BTC/KB{}).
\section{Child-Pays-For-Parent Transactions} \label{sec:cpfp-txs}
Given any block $B_i$ containing a set of issued transactions $T = \{t_0, t_1, \cdots, t_n\}$, where each transaction $t_j \in T$ has a set of transaction-input identifiers $V_j = \{v_0, v_1, \cdots, v_m\}$, the transaction $t_j$ is said to be a \newterm{child-pays-for-parent transaction (CPFP-tx)} if and only if there exists at least one input $v_k \in V_j$ that refers to a transaction in $T$. In other words, a transaction is a CPFP-tx if and only if it spends the output of another transaction that was also included in the same block $B_i$. (A minimal computational sketch of this test is given at the end of these appendices.)
\section{Miners' Behavior During the Scam} \label{sec:supp-scam-txs}
To examine the miners' behavior during the Twitter scam attack from July 14\tsup{th} to August 9\tsup{th}, 2020, we selected all blocks mined (\num{3697} in total, containing \num{8318621} issued transactions) during this time period from our data set \dsc. If we rank the MPOs responsible for these blocks by the number of blocks ($B$) mined (or, essentially, the approximate hashing capacity $h$), the top five MPOs (refer Figure~\ref{fig:dist-txs-blks-twitter}) turn out to be Poolin ($B$: $\num{565}$; $h$: $15.28\%$), F2Pool ($B$: $\num{536}$; $h$: $14.5\%$), BTC.com ($B$: $\num{424}$; $h$: $11.47\%$), AntPool ($B$: $\num{404}$; $h$: $10.93\%$), and Huobi ($B$: $\num{353}$; $h$: $9.55\%$).
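The CPFP-tx test of~\S\ref{sec:cpfp-txs} reduces to a simple set-membership check. The following minimal sketch is ours, not the authors' code; it assumes each per-transaction record carries its identifier (\texttt{txid}) and the identifiers referenced by its inputs (\texttt{vin\_txids}).

\begin{verbatim}
# Minimal sketch (ours) of the CPFP-tx definition: a transaction is
# flagged if any of its inputs spends an output of another
# transaction included in the same block. `txid` and `vin_txids`
# are assumed fields of a per-transaction record.
def cpfp_transactions(block_txs):
    in_block = {tx["txid"] for tx in block_txs}
    return [tx["txid"] for tx in block_txs
            if any(p in in_block for p in tx["vin_txids"])]
\end{verbatim}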
\section{Introduction} \label{section: introduction}
\IEEEPARstart{T}{he} social distancing requirements imposed by the COVID-19 pandemic have forced many businesses to adopt online retail platforms. Recent reports \cite{covid_ecommerce} indicate that the pandemic has accelerated the shift away from physical to online stores by roughly 5 years. It is predicted that at the current pace [citation], many ageing societies will not have sufficient human workers within a decade to sort and pack products; manual packing is simply unsustainable for these communities. A feasible strategy to deal with this record increase in demand for products and diverse commodities (whose heterogeneous properties may vary from compliant, to articulated, to deformable \cite{freichel2020role}) is to use dexterous robots that can automate the soft packing process. This approach can also help to optimize the current manual practice in the industry (which tends to use excessive packaging that wastes materials), and to improve social distancing (as no human workers are needed). Our goal in this paper is precisely to develop efficient manipulation strategies that can automate the packing of deformable objects.
\begin{figure}[t!] \centering \includegraphics[width=0.99\columnwidth]{figures/setup.pdf} \caption{Experimental setup with a box frame $\{F_B\}=\{ \mathbf{i}_B, \mathbf{j}_B, \mathbf{k}_B\}$, two robot arms, the linear elastic object and a top-view camera.} \label{experimental setup} \end{figure}
To advance the development of these valuable manipulation skills \cite{zhu_ram,8457261}, in this paper we focus on the challenging problem where a (long) linear elastic object (LEO) needs to be autonomously grasped, shaped/deformed, and placed within a compact box that optimizes its packing space, as depicted in Fig. \ref{experimental setup}. Two main challenges arise in the automation of this task: (i) due to the complexity of the shaping task (which is difficult to perform with a single continuous motion), several coordinated actions by collaborative arms are required to effectively deform and place the object within the box; (ii) the typically occluded view from vision sensors during the task leads to partial observations of the manipulated object and the environment (this results in incomplete geometric information that complicates the real-time guidance of the robot's motion).
\subsection{Related Work}
\subsubsection{Robotic Packing in Logistics}
Although there has been a strong push towards robotizing the processing of products, e.g., with automated guided vehicles in distribution centres \cite{7151854}, packing remains a task entirely performed by human workers. Recently, many methods have been developed for the Amazon Picking Challenge to automatically recognize, collect, and transfer multiple types of products into boxes \cite{schwarz2018fast, yasuda2020packing, yu2016summary}. Note that the majority of these methods do not address (and indeed underestimate) large-scale elastic deformations, e.g., those exhibited by linear elastic objects; to optimize packing space, shape control is needed to transfer and arrange LEOs into compact boxes. The few works that do consider the arrangement of highly deformable materials \cite{DBLP:journals/corr/abs-2012-03385} do not address shape control and are mostly confined to simple numerical simulations.
As the booming e-commerce industry now extends to many non-traditional commodities (e.g., deformable groceries and household products \cite{digital_shopping}), it is essential to develop shape control methods that can deform highly elastic materials and thus save packing space; however, this challenging soft packing problem has not been sufficiently studied in the literature.
\subsubsection{Action Planning for Packing Tasks}
In contrast with traditional (low-level) control methods for manipulating soft objects based on \emph{continuous} trajectories \cite{david2014} (i.e., with a single action), the robotic packing of a LEO requires \emph{discrete} task planning with multiple (high-level) actions. These types of methods decompose and plan the task in terms of a coordinated sequence of action primitives, each of which captures a specific motor behavior (this approach has been used in a wide range of applications, e.g., grasping \cite{felip2009robust}, soccer \cite{allgeuer2018hierarchical}, assembly \cite{wang2018robot}). Action primitive methods have been proposed for packing and object arrangement problems, e.g., \cite{schwarz2017nimbro} develops a controller for robotic picking and stowing tasks based on parametrized motion primitives; \cite{zeng2018learning} proposes a method for manipulating objects into tightly packed configurations by learning pushing/grasping policies; \cite{capitanelli2018manipulation} tackles the problem of reconfiguring articulated objects by using an ordered set of actions executed by a dual-arm robot. Yet, note that the action primitives adopted by these works cannot capture the complex behaviors that are needed to control the shape of a LEO during a packing task.
\subsubsection{Representation of Deformable Objects}
To visually guide the manipulation task, it is necessary for the controller to have a meaningful representation of the object. To this end, researchers have developed a variety of representation methods, e.g., physics-based approaches \cite{kimura2003constructing, essahbi2012soft, 9215039, petit2017using, kaufmann2009flexible} (using mass-spring-damping models and the finite element method), approaches based on visual geometric features \cite{qi2021contour, 9000733, 8676321,laranjeira2020catenary} (using points, angles, curvatures, catenaries, contour moments, etc.), and data-driven representations \cite{navarro2018fourier, hu20193, zhu2021vision, 9410363} (using Fourier series, FPFH, PCA, autoencoders, etc.). Typically, vision-based approaches are strongly affected by occlusions (this is problematic for packing, as a top-view camera will have incomplete observations during the task). To deal with this issue, many works have addressed the estimation and tracking of the object's deformation in real-time \cite{tang2018track, jin2022robotic, chi2019occlusion}; yet, these works only consider 2D scenarios and hence are not applicable to our 3D LEO manipulation problem.
\subsection{Our Contribution}
In this paper, we propose: (1) A new hybrid geometric model that combines online 3D vision with an offline reference template to deal with camera occlusions; (2) A reference point planner that provides intermediate targets to guide the high-level packing actions; (3) A cyclic action planner that coordinates multiple action primitives of a dual-arm robot to perform the complex LEO packing task.
The proposed methodology is original, and its demonstrated capabilities have not (to the best of the authors' knowledge) been previously reported in the literature. To validate this new approach, we report a detailed experimental study with a dual-arm robot performing packing tasks with LEOs of various elastic properties. This paper is organized as follows: Section \ref{section: modeling} presents the mathematical models; Section \ref{section: Hybrid Geometric Model} describes the hybrid geometric model; Section \ref{section: action planner} presents the packing method; Section \ref{section: results} reports the experiments; Section \ref{section: conclusions} gives conclusions.
\section{Modeling}\label{section: modeling}
Table \ref{nomenclature} presents the key nomenclature used in the paper.
\begin{table}[t!] \centering \caption{Key Nomenclature} \begin{tabular}{l m{6.5cm} } \toprule[1pt] $\hspace{-2mm}$Symbol & Quantity \\ \specialrule{0.5pt}{1pt}{2pt} $\hspace{-2mm}\mathcal{O}(\eta, l_O, d_O)$ & LEO of material $\eta$, length $l_O$ and rod diameter $d_O$.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathcal{B}(l\hspace{-0.5mm}_B,w\hspace{-0.5mm}_B,h\hspace{-0.5mm}_B)$ & Cuboid container of dimensions $l\hspace{-0.5mm}_B\times w\hspace{-0.5mm}_B\times h\hspace{-0.5mm}_B$.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{P}^*$ & The offline reference template. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{P}$ & The raw feedback point cloud inside the box. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{P}^{O}$ & The ordered skeleton of the point cloud outside the box. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\hat{\mathbf p}_i$ & The corresponding point in $\mathbf{P}^O$ to the $i$th point in $\mathbf{P}^*$.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}e_{in}$, $e_{out}$ & The shape differences of $\mathbf{P}$ and $\mathbf{P}^O$ to $\mathbf{P}^*$, respectively.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm} e$, ${e}^*$ & Total shape difference and its desired value.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{x}$ & End-effector feedback pose.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{x}^*$ & End-effector reference pose. \\ \specialrule{0.01pt}{1pt}{2pt} \hspace{-2mm}$\mathbf{u}$ & End-effector target pose. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\{ F \}$ & The reference template frame. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\{ F_O \}$ & The object body frame. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\Delta h$, $\Delta f$ & Height offsets for hover and fixing action primitives.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm} \delta_l$, $\delta_f$ & Horizontal distances for the reference point generator.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{p}^{\hspace{-0.3mm}L}\hspace{-0.5mm}$,~ $\hspace{-0.5mm}\mathbf{p}^{\hspace{-0.3mm}G}\hspace{-0.5mm}$,~ $\hspace{-0.5mm}\mathbf{p}^{\hspace{-0.3mm}F}$ & Reference points for object placing, grasping, and fixing.\\ \bottomrule[1pt] \end{tabular} \label{nomenclature} \end{table}
\subsection{Geometric Object Modeling}
In our method, we use an RGB-D camera to capture point clouds of the scene in real-time. During the task, the raw point cloud is split into two structures: $\mathbf{P}^O$, which represents the object's part to be grasped by the robot, and $\mathbf{P}$, which represents the object's part already packed; spatially, these two structures correspond to the object's parts outside and inside the box, respectively. During initialization, the object's length $l_O$ and diameter $d_O$ are computed from the raw point cloud data.
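As an illustration of this inside/outside split, the following minimal sketch (ours, not the authors' implementation) partitions the raw cloud by the box footprint; it assumes the points are already expressed in the box frame $\{F_B\}$, with the footprint spanning $[0, l_B] \times [0, w_B]$ in the horizontal plane, which is a simplification of the actual pipeline.

\begin{verbatim}
# Minimal sketch (ours): partition the raw cloud into the part
# inside the box (P) and the part outside (later ordered into P^O).
# Points are assumed to be expressed in the box frame {F_B}.
import numpy as np

def split_cloud(points, l_B, w_B):
    # points: (N, 3) array; returns (P, outside_part).
    inside = ((points[:, 0] >= 0.0) & (points[:, 0] <= l_B) &
              (points[:, 1] >= 0.0) & (points[:, 1] <= w_B))
    return points[inside], points[~inside]
\end{verbatim}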
To provide an intuitive topology that facilitates the LEO's manipulation, the points in $\mathbf P^O$ are ordered along the linear object's centerline. During initialization, the object's length $l_O$ and width $d_O$ are computed from the raw point cloud data. The offline reference template $\mathbf{P}^*=[\mathbf p_1^*,\ldots,\mathbf p_M^*]\in\mathbb R^{3\times M}$ is a pre-designed geometric curve which represents the final configuration to be given to the object $\mathcal{O}(\eta, l_O,d_O)$ within the box $\mathcal{B}(l_B,w_B,h_B)$. Similarly to the raw feedback point cloud, the reference template $\mathbf{P}^*$ is also separated into two parts at the split point $\mathbf{p}_s$, corresponding to $\mathbf{P}$ and $\mathbf{P}^O$. Given the $i$th point $\mathbf p_i^*$ on $\mathbf{P}^*$, we denote its corresponding point in $\mathbf P^O$ as $\hat{\mathbf{p}}_i$. The length from $\hat{\mathbf{p}}_i$ to the end of the object $\mathbf{p}^O_{N}$ equals the length from $\mathbf{p}_i^*$ to $\mathbf{p}_{M}^*$. For the point clouds $\mathbf{P}^*$ and $\mathbf{P}^O$, two important frames, the reference template frame and the object body frame, are defined. The reference template frame is denoted as $\{ F \}$ at $\mathbf{p}^*_{i}, i = 1,...,M-1$. Its z-axis is the unit vector vertically pointing to the table. The x-axis is the unit vector pointing to $\mathbf{p}^*_{i+1}$. The y-axis is then determined by the right-hand rule. The object body frame is denoted as $\{ F_O \} = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ at a point in $[\mathbf{p}_{i}^O, \mathbf{p}_{i+1}^O), i =\hspace{-1mm} 1,...,N - 1$. Similarly, its z-axis is the unit vector vertically pointing to the table. Then, we introduce the unit tangent vector $\hat{\mathbf{i}}$ pointing from $\mathbf{p}_{i}^O$ to $\mathbf{p}_{i+1}^O$. The y-axis $\mathbf{j}$ is orthogonal to the plane spanned by $\mathbf{k}$ and $\hat{\mathbf{i}}$. Note that, since the object does not necessarily lie in a plane parallel to the table, $\hat{\mathbf{i}}$ is not always orthogonal to $\mathbf{k}$. Hence, the actual x-axis of $\{ F_O \}$ is determined by $\mathbf i = \mathbf j \times \mathbf k$. These definitions are depicted in Fig. \ref{hybrid-geometric-model}. \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/hybrid-geometric-model.pdf}} \caption{The proposed hybrid geometric model of a LEO. Blue indicates the start of the curve and pink indicates its end. The axes $\mathbf i$, $\mathbf k$ and $\hat{\mathbf i}$ are all co-planar.} \label{hybrid-geometric-model} \end{figure} To compute the shape difference $e$ between the feedback point cloud and the reference template, we introduce the inside-the-box and outside-the-box errors between these two structures. For that, we define as $e_{in}$ the average minimum Euclidean distance (i.e., the similarity \cite{tian2017geometric}) between $\mathbf P$ and the first $s$ points of $\mathbf P^*$, and define as $e_{out}$ the average Euclidean distance between the other $M-s$ points of $\mathbf P^*$ and their corresponding points $\hat{\mathbf p}_i$. The total shape difference $e$ is computed as a weighted combination of these two errors: \begin{equation} e = w e_{in} + (1-w) e_{out} \end{equation} for $w = \frac{s}{M}$ as normalization weight. This metric quantifies the accuracy of the automatic shaping process. Note that $e_{in}$ and $e_{out}$ are respectively averaged over the number of points in $\mathbf P$ and $\mathbf P^O$; thus, each represents an average distance between pairs of points. Therefore, these errors are not significantly influenced if some feedback points are lost due to occlusion. Besides, $e$ is weighted based on the number of points on the two sides of $\mathbf p^*_s$; This guarantees that the contributions of $e_{in}$ and $e_{out}$ are normalized. Since the feedback point cloud of the LEO contains points on its surface whereas the reference template represents a centerline, the error $e$ will ideally converge to the desired value ${e}^*= \frac{d_O}{2}$, i.e., half the width of the object.
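As a concrete illustration of this metric, the following NumPy sketch (hypothetical function and variable names; the correspondences $\hat{\mathbf p}_i$ are assumed to have already been matched by arc length) computes $e$:

\begin{verbatim}
import numpy as np

def shape_difference(P, P_star, P_hat, s):
    # P      : (n x 3) raw feedback points inside the box
    # P_star : (M x 3) offline reference template
    # P_hat  : ((M-s) x 3) skeleton points matched by arc length
    #          to the template points p*_{s+1}, ..., p*_M
    # s      : index of the split point p*_s
    # e_in: average minimum distance from each point of P to the
    # first s template points.
    d = np.linalg.norm(P[:, None, :] - P_star[None, :s, :], axis=2)
    e_in = d.min(axis=1).mean()
    # e_out: average distance between the matched point pairs.
    e_out = np.linalg.norm(P_star[s:] - P_hat, axis=1).mean()
    w = s / len(P_star)  # normalization weight w = s / M
    return w * e_in + (1 - w) * e_out
\end{verbatim}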
\subsection{Action Planning} The robotic system considered in this study is composed of two end-effectors, one with an active grasp role and the other with an assistive fix role. It is assumed that the end-effectors are vertically pointing towards the box's plane. The configuration of the robotic arms is represented by a four degrees-of-freedom (DOF) pose vector $\mathbf x=[x,y,z,\theta{]}^\T$ (comprised of position and orientation coordinates) and one DOF for the gripper's open/close configuration. The initial pose $\mathbf x(t_0)$ of the robot arms is assumed to be above the box and object. The action planner that coordinates multiple action primitives performed by the robot arms is modeled with a classic state machine \cite{hudson2019learning}. This state machine is represented by the tuple $(S,T,A,G,R)$, whose elements are defined as: \begin{itemize} \item $A$: collection of action primitives for the end-effectors' 4-DOF pose. \item $G$: collection of action primitives for the grippers' 1-DOF open/close configuration. \item $R$: collection of the robot active and assistant roles. \item $S$: collection of action planner states, each represented by a robot movement. \item $T$: state transition function. \end{itemize} To reuse the modules of the state machine, the designed action primitives compose a periodic action planner \textit{loop} that enables the robot to automatically perform the complex packing task. \subsection{Framework Overview} \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/framework.pdf}} \caption{The framework of the proposed approach for packing long LEOs into common-size boxes. Solid lines indicate data transmission, and dashed lines indicate physical contact.} \label{framework} \end{figure} The overview of the proposed automatic packing approach is shown in Fig. \ref{framework}. It is composed of a hybrid geometric model (green block), a reference point generator (orange block), and an action planner (blue block). The hybrid geometric model provides a robust representation of the object by combining an online part ($\mathbf P$ and $\mathbf P^O$) and an offline part ($\mathbf P^*$). The reference point generator computes reference poses $\mathbf{x}^*$ for the robot to perform grasping, placing, and fixing of the object. The action planner commands the execution of the task based on a series of action primitives. The robotic platform (pink block) receives and executes the kinematic motion commands (i.e., the target poses and gripper configurations) from the action planner, and returns an end flag to the control system after their completion. The state machine repeats a periodic action planner loop, alternating each robot arm between an active and an assistant role, until the task is completed.
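To make the state-machine ingredients concrete, a minimal sketch of the collections $G$, $A$, and $R$ could read as follows (the numeric values and names are illustrative; a state $s \in S$ is one movement $m(R,G,A)$, as detailed in Section \ref{section: action planner}):

\begin{verbatim}
from enum import Enum

class GripperAction(Enum):   # G: 1-DOF gripper primitives
    OPEN = 0
    CLOSE = 1

class ArmAction(Enum):       # A: 4-DOF end-effector primitives
    HOVER = 1                # a_1
    APPROACH = 2             # a_2
    FIX = 3                  # a_3
    LEAVE = 4                # a_4
    RESET = 5                # a_5

class Role(Enum):            # R: active / assistant roles
    ACTIVE = 0
    ASSISTANT = 1

# A state s in S is one robot movement m(R, G, A), e.g.:
s = (Role.ACTIVE, GripperAction.OPEN, ArmAction.HOVER)
\end{verbatim}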
\section{Hybrid Geometric Model}\label{section: Hybrid Geometric Model} The proposed hybrid geometric model consists of $\mathbf P$ (the raw feedback point cloud inside the box), $\mathbf P^O$ (the ordered skeleton of the object's part outside the box), and $\mathbf P^*$ (the generated offline reference template). It extracts the object's geometry in real time and generates a suitable target shape for packing LEOs, which are prerequisites for reference point generation and for measuring the packing progress. On one hand, the reference point generator replaces $\mathbf P$ with the corresponding points ($\mathbf p^*_1$ to $\mathbf p^*_s$) of $\mathbf P^*$ and searches for the reference points in $\mathbf P^O$ and $\mathbf P^*$, to deal with the typical occlusions that result from the grippers blocking the top-view camera. On the other hand, the shape difference $e$ is computed as the combination of $e_{in}$ (the distance from $\mathbf P$ to the template points $\mathbf p^*_i$, $i = 1, \ldots, s$) and $e_{out}$ (the distance from $\mathbf P^O$ to the template points $\mathbf p^*_i$, $i = s, \ldots, M$), to monitor and quantify the object's packing. \subsection{Offline Reference Template} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/shape.pdf} \caption{Target shape $Spiral$ of a LEO in a box. (a) shows the box's bottom. The points in a gradient from pink to blue represent $\mathbf{P}$. (b) illustrates how $Spiral$ is constructed with straight segments (between red dashed lines) and two sets of concentric semicircles (with centers $\mathbf{C}_{od}$ and $\mathbf{C}_{ev}$). (c) illustrates the beginning segment (yellow) and the periodic parts (orange and green) of $Spiral$. } \label{fig:object-model} \vspace{-0.2cm} \end{figure} The offline reference template is needed to perform the packing task, as it provides the final target shape of the object and replaces the occluded parts with its offline 3D points. To optimize packing space, the target shape for the long LEO is designed in the form of a modified spiral, which is composed of straight segments and concentric semicircles, as shown in Fig. \ref{fig:object-model}. This target configuration is separated into periodic and aperiodic parts. The former consists of a semicircle followed by a straight segment; The latter only represents the beginning straight segment of the curve. Given a box-object pair, the maximum number of action planner loops (which equals the number of grasps needed to complete the task) is one more than the total number of semicircles, i.e., $\lfloor \frac{w_B}{d_O}\rfloor$, where $\lfloor \cdot \rfloor$ denotes the floor operator. We compute the maximum object length that can be placed in the box with the spiral shape as: \begin{equation} l_O = l_B - \frac{w_B}{2} + \sum_{j=1}^{\lfloor \tfrac{w_B}{d_O}\rfloor} {\left(l_B - w_B+ \frac{d_O}{2} + \pi \frac{w_B-d_Oj}{2} \right)} \label{capacity} \end{equation} Then, we parameterize the centerline of the spiral shape (see Fig. \ref{fig:object-model} (b)) with a normalized length $\lambda = i/M \in[0, 1]$, for $i=1,\dots,M$. The parameterized centerline is denoted as $\mathbf P^*(\lambda)$, and the length of its curve up to $\lambda = i/M$ is computed as: \begin{equation} l(\lambda) = \lambda\, l_O = \sum_{k=1}^{i} \| \mathbf{p}_k^* - \mathbf{p}_{k-1}^* \|_2, \end{equation} where $\mathbf{p}_0^*$ denotes the start point of the curve. The process for generating the target spiral shape is presented in Algorithm \ref{algorithm3}.
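As a quick numeric check of \eqref{capacity}, the following snippet evaluates the capacity of the smaller box used in our experiments ($\mathcal{B}(270,207,80)$, in mm) for a rod of diameter $d_O = 38$ mm; the resulting maximum length (about $1307$ mm) indeed exceeds the longest object packed in practice ($972$ mm):

\begin{verbatim}
import numpy as np

l_B, w_B, d_O = 270.0, 207.0, 38.0       # box and rod, in mm
n_sc = int(np.floor(w_B / d_O))          # number of semicircles
l_O = l_B - w_B / 2 + sum(
    l_B - w_B + d_O / 2 + np.pi * (w_B - d_O * j) / 2
    for j in range(1, n_sc + 1))
print(n_sc, round(l_O, 1))               # prints: 5 1306.9
\end{verbatim}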
\begin{algorithm}[t!] \small \caption{\small The description of the shape $Spiral$}\label{algorithm3} \KwIn{the box $\mathcal{B}(l_B, w_B, h_B)$, the object $\mathcal{O} (\eta, l_O, d_O)$} \KwOut{the parameterized formula $\mathbf{P}^*(\lambda)$} $\mathbf{C}_{od}(-\frac{l_B}{2}+\frac{w_B}{2}, 0, \frac{d_O}{2})$, $\mathbf{C}_{ev}(\frac{l_B}{2}-\frac{w_B}{2}+\frac{d_O}{2}, \frac{d_O}{2}, \frac{d_O}{2})$\; $l_{line} = l_B-w_B+\frac{d_O}{2}$\; $l_{count} = l_B-\frac{w_B}{2}$\; $j=0$, $\lambda=0$\; \While{$\lambda<1$}{ \eIf{$0 \leq \lambda l_O <l_B-\frac{w_B}{2}$}{ $\mathbf{P}^*(\lambda)=(\frac{l_B}{2},\frac{-w_B+d_O}{2},\frac{d_O}{2}) + (\lambda l_O,0,0)$\; $\lambda=\lambda+\frac{1}{M}$\; } { $j=j+1$\; $r_{sc}=\frac{w_B}{2}-\frac{d_O}{2}j$\; $l_{semicircle}=\pi r_{sc}$\; \While{$\lambda<1 \wedge \lambda l_O-l_{count}<l_{semicircle}$}{ $\phi = \frac{\lambda l_O-l_{count}}{r_{sc}}$\; $\mathbf{P}^*(\lambda)=\mathbf{C}_{od} - r_{sc}(\sin{\phi},\cos{\phi},0)$, $j$ is odd\; $\mathbf{P}^*(\lambda)=\mathbf{C}_{ev} + r_{sc}(\sin{\phi},\cos{\phi},0)$, $j$ is even\; $\lambda=\lambda+\frac{1}{M}$\; } $l_{count} = l_{count} + l_{semicircle}$\; \While{$\lambda<1 \wedge \lambda l_O-l_{count} \leq l_{line}$}{ $\mathbf{P}^*(\lambda)=\mathbf{C}_{od} +(\lambda l_O-l_{count}, r_{sc}, 0)$, $j$ is odd\; $\mathbf{P}^*(\lambda)=\mathbf{C}_{ev} -(\lambda l_O-l_{count}, r_{sc}, 0)$, $j$ is even\; $\lambda=\lambda+\frac{1}{M}$\; } $l_{count} = l_{count} + l_{line}$\; } } \end{algorithm} \subsection{Online 3D Vision}\label{section: perception} \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/perception.pdf}} \caption{Point cloud processing: (a) boundary extraction, (b) ordered skeleton.} \label{perception} \vspace{-0.2cm} \end{figure} To compute the ordered skeleton $\mathbf P^O$, the point cloud processing algorithm extracts the geometric information of the objects in real time. Firstly, it smooths the raw point cloud with a weighted filter \cite{hesterberg1995weighted} and downsamples it to reduce the computational cost. Next, it detects the boundaries of the object from the point cloud. For that, we introduce a polar coordinate system with its origin at the center of the box and its polar axis defined along $\mathbf i_B$ (as depicted in Fig. \ref{experimental setup}); Then, we segment the object into $N$ sections by rotating (clockwise) a ray starting from $\mathbf i_B$ around the center with a fixed angle interval (see Fig. \ref{perception}). Along each ray, we search for the nearest and farthest points in the raw point cloud feedback, which we denote as $\mathbf p_i^{in}$ and $\mathbf p_i^{out}$, respectively; These points serve as the components of the inner and outer boundaries of the object. Lastly, the LEO's ordered skeleton $\mathbf P^O$ is constructed by computing the mean of the raw feedback points between two adjacent rays. The length $l_O$ and width $d_O$ of the linear object are then calculated as follows: \begin{equation} l_O= \hspace{-0.5mm}\sum_{i=2}^{N} \left\|\mathbf{p}_{i}^O - \mathbf{p}_{i-1}^O\right\|_2,\ d_O=\frac{1}{N}\sum_{i=1}^{N} \left\|\mathbf{p}_i^{out} - \mathbf{p}_{i}^{in}\right\|_2. \label{equ:geometry} \end{equation}
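A simplified NumPy sketch of this skeletonization is given below (it omits the smoothing and downsampling steps, and uses the radial extent of each angular sector as a proxy for the per-ray width):

\begin{verbatim}
import numpy as np

def ordered_skeleton(points, N=64):
    # Order the points along the object by binning them into N
    # angular sectors of a polar grid centered at the box.
    ang = np.arctan2(points[:, 1], points[:, 0])
    sector = np.digitize(ang, np.linspace(-np.pi, np.pi, N + 1)) - 1
    sector = np.clip(sector, 0, N - 1)
    skel, widths = [], []
    for i in range(N):
        pts = points[sector == i]
        if len(pts) == 0:
            continue
        r = np.linalg.norm(pts[:, :2], axis=1)
        skel.append(pts.mean(axis=0))       # one point of P^O
        widths.append(r.max() - r.min())    # outer minus inner
    skel = np.asarray(skel)
    l_O = np.linalg.norm(np.diff(skel, axis=0), axis=1).sum()
    d_O = float(np.mean(widths))
    return skel, l_O, d_O
\end{verbatim}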
\section{Automatic Packing Method}\label{section: action planner} \subsection{Reference Points Generator}\label{section Reference Points Generator} Our proposed manipulation method iterates a periodic action planner loop, which is composed of various high-level behaviors. To execute these behaviors, three types of reference points are planned, namely, the grasping reference $\mathbf p^G$, the placing reference $\mathbf p^L$, and the fixing reference $\mathbf p^F$. The reference point generator (the orange block in Fig. \ref{framework}) constructs the reference pose $\mathbf x^*=[x^*,y^*,z^*,\theta^*]^\T$ for the robot based on the object body frame $\{F_O\}$ (which is computed from $\mathbf P^O$ for $\mathbf p^G$) and the reference template frame $\{F\}$ (which is computed from $\mathbf P^*$ for $\mathbf p^L$ and $\mathbf p^F$). Fig. \ref{hybrid-geometric-model} conceptually depicts these frames. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/reference-point-generator.pdf} \caption{Reference point generator. The offline reference template starts from a corner of the box and the point index increases clockwise. (a) the candidate placing points (orange), (b) the candidate fixing point (green) given a placing point (orange). Red and blue regions respectively indicate the workspaces of the left and right arms.} \label{path-planning} \vspace{-0.3cm} \end{figure} The points $\mathbf{p}^{L}$ represent the positions within the box where the LEO is to be placed by the robot. These points are a subset of the offline reference template $\mathbf P^*$ and are defined as $\mathbf{p}^{L} = \mathbf{p}_{k}^*$, where $k \in \{1,\ldots,M\}$ is the index of a point in the template. The index $k$ is chosen such that $\mathbf{p}_{k}^*$ is approximately at a distance $\delta_l$ from the axis $\mathbf{j}_B$ (i.e., along the red dashed lines shown in Fig. \ref{path-planning} (a)). Depending on which robot arm plays the active packing role, $\mathbf{p}_{k}^*$ is automatically chosen to the left or right of $\mathbf{j}_B$. The points $\mathbf{p}^{G}$ indicate the positions on the object to be grasped by the robot. These points are the points corresponding to $\mathbf{p}^L$ on the object's skeleton $\mathbf P^O$, and are computed as $\mathbf{p}^{G}=\hat{\mathbf p}_k$. The LEO's shaping behavior is achieved by driving $\mathbf p^G$ onto $\mathbf p^L$. In contrast with inelastic linear deformable objects such as ropes or cords \cite{tang2018track}, LEOs have an intrinsic elastic energy that restores their shape to the original configuration. Thus, steadily placing the object in the box requires the assistant robot arm to fix the deformed object at a point $\mathbf p^F$ while the active arm moves to a new grasping point $\mathbf p^G$. To compute $\mathbf p^F$, our method first obtains two candidate points along the object that are approximately at a distance $\delta_f$ from the placing point $\mathbf p^L$. The fixing point is then selected on the assistant arm's side with respect to the active robot, see Fig. \ref{path-planning} (b).
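The following sketch illustrates this selection logic under simplifying assumptions (a box frame with $\mathbf j_B$ as the line $x=0$ and the left arm on the side $x<0$; all names, and the prior arc-length matching of $\hat{\mathbf p}_k$, are illustrative):

\begin{verbatim}
import numpy as np

def reference_points(P_star, P_hat, active_left,
                     delta_l=50.0, delta_f=100.0):
    # P_star: (M x 3) template; P_hat: (M x 3) matched skeleton.
    side = -1.0 if active_left else 1.0   # active arm's side
    # p^L: template point about delta_l from j_B, on that side.
    k = int(np.argmin(np.abs(P_star[:, 0] - side * delta_l)))
    p_L, p_G = P_star[k], P_hat[k]        # grasp the matched point
    # p^F: of the points at arc distance ~delta_f from p^L, keep
    # the one lying on the assistant arm's side.
    seg = np.linalg.norm(np.diff(P_star, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg)))
    cands = np.where(np.abs(np.abs(arc - arc[k]) - delta_f)
                     < seg.mean())[0]
    if len(cands) == 0:
        cands = [len(P_star) - 1]
    p_F = min((P_star[c] for c in cands), key=lambda p: p[0] * side)
    return p_G, p_L, p_F
\end{verbatim}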
\subsection{Action Primitives}\label{section:action primitives} As modelled in Sec. II-B, our method adopts two types of action primitives (for grippers and end-effectors) to compose high-level manipulation behaviors. The collection of action primitives for the 1-DOF grippers is as follows: \begin{equation} G = \{\textit{Close}, \textit{Open}\} = \{g_1, g_2\} \end{equation} where the flags $g_1 = 1$ and $g_2 = 0$ define the closing and opening actions of the gripper, respectively. The collection of five action primitives for the robotic end-effectors is as follows: \begin{align} A &= \{ \textit{Hover}, \textit{Approach}, \textit{Fix}, \textit{Leave}, \textit{Reset}\} \nonumber \\ &= \{ a_1, a_2, a_3, a_4, a_5\} \end{align} These end-effector action primitives are defined as follows: \begin{itemize} \item[$a_1$:] \textit{Hover}. The robot moves and stops at an offset $\Delta h$ above the reference point. With this action, the robot is commanded with an end-effector target pose $\mathbf u = \left[x^*, y^*, z^*+\Delta h, \theta^* \right]^{\T}$. This action is needed to avoid collisions with the object, and is done as a preparation step for the fix and grasp actions. \item[$a_2$:] \textit{Approach}. The robot descends to $z^*$ (viz. the height of the object's centerline). With this action, the robot is commanded with an end-effector target pose $\mathbf{u} = \left[x , y , z^*, \theta \right]^{\T}$. This action, in combination with \textit{Hover}, is needed to grasp and/or place the object by changing the gripper's configuration. \item[$a_3$:] \textit{Fix}. The robot descends to the object's surface, whose height is denoted by $z^* + \Delta f$. With this action, the robot is commanded as $\mathbf u = \left[x,y,z^*+\Delta f,\theta \right]^{\T}$. This motion is needed to push the deformed elastic object and keep it inside the box, thus preventing it from returning to its original shape. \item[$a_4$:] \textit{Leave}. The robot returns to its initial height. With this action, the robot is commanded with an end-effector target pose $\mathbf u = \left[x , y , z(t_0) , \theta \right]^{\T}$. This action is needed to provide the robot with an obstacle-free region above the box's packing workspace. \item[$a_5$:] \textit{Reset}. The robot returns to its initial pose $\mathbf u = \mathbf x(t_0) $. This action is needed to visually observe the object with the top-view camera. \end{itemize}
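Since each primitive reduces to a simple substitution in the 4-DOF target pose, a direct transcription of these definitions could look as follows (the offset values are illustrative):

\begin{verbatim}
def target_pose(a, x, x_star, x0, dh=60.0, df=20.0):
    # a      : primitive name; x: current pose [x, y, z, theta]
    # x_star : reference pose [x*, y*, z*, theta*]
    # x0     : initial pose x(t_0); dh, df: Delta_h, Delta_f (mm)
    if a == "Hover":      # a_1: stop above the reference point
        return [x_star[0], x_star[1], x_star[2] + dh, x_star[3]]
    if a == "Approach":   # a_2: descend to z*
        return [x[0], x[1], x_star[2], x[3]]
    if a == "Fix":        # a_3: push down onto the object surface
        return [x[0], x[1], x_star[2] + df, x[3]]
    if a == "Leave":      # a_4: return to the initial height
        return [x[0], x[1], x0[2], x[3]]
    if a == "Reset":      # a_5: return to the initial pose
        return list(x0)
    raise ValueError(a)
\end{verbatim}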
\subsection{State Machine}\label{section: state machine} \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{figures/action-planner.pdf} \caption{Action planner for the packing task. High-level behaviors consist of robotic movements, e.g., grasping the object, placing it into the box, releasing the active robot after placing the object, and changing the identifier of the active robot. The outputs of the reference point generator serve as inputs (references) to the behaviors; the specific action primitives determine the final target poses.} \label{state-machine} \vspace{-0.2cm} \end{figure} The proposed state machine to automatically pack the long linear elastic object has one periodic action planner loop (depicted in Fig. \ref{state-machine}), which is iterated while monitoring the object's state until the task is completed. This sequence of actions is performed by two collaborative robotic arms, identified as \textit{Left} and \textit{Right} (see Fig. \ref{experimental setup}), that can alternate between an active role and an assistant role. The former is in charge of grasping and placing the object into the box; The latter is in charge of immobilizing it while the arms change roles. Our method uses a collection of robot roles $R = \{ r, \overline r\}$, where $r$ specifies which robot takes up the active packing role in a given cycle of the action planner loop. The identifier $r=\textit{Left}/\textit{Right}$ is automatically determined based on the proximity of $\mathbf P^O$ to either the \textit{Left} or \textit{Right} robot. The assistant arm in the same cycle is denoted as $\overline{r}$, which, for our dual-arm configuration, simply represents the opposite arm, e.g., for $r=\textit{Right}$, $\overline{r}=\textit{Left}$. The proposed state machine in Fig. \ref{state-machine} is composed of two layers. The first layer contains four high-level behaviors, namely, grasp the object, place it into the box, release the active robot, and change the active robot. The inputs of these high-level behaviors are the reference points $\mathbf p^G$, $\mathbf p^L$ and $\mathbf p^F$, and the reference poses $\mathbf x^*$. The second layer contains several low-level robot movements; These are modelled as elements in the collection of states: \begin{equation} S = \{ s : s = m(R, G, A) \} \end{equation} where the triple $m(R, G, A)$ defines the robot movements as a sequence of the following two commands: (i) First, the active/assistant robot $r/\overline{r}\in R$ performs the gripper action primitive $g_i\in G$; (ii) Then, the robot performs the end-effector action primitive $a_j\in A$. The result of the robot movement $m(R, G, A)$ corresponding to each state $s$ is evaluated with the transition function: \begin{equation} T(s)= \left\{ \begin{array}{l} 1,\,\, \textrm{once the robot completes the movement}, \\ 0,\,\, \textrm{otherwise.}\\ \end{array} \right. \end{equation} The proposed action planner stops when no object points are detected outside the box. The packing task succeeds when the shape difference $e$ converges to $e^*$.
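A schematic sketch of one cycle of this planner loop is shown below (the decomposition of a behavior into movements is a hypothetical transcription; \texttt{execute} stands in for the robotic platform, which returns the end flag $T(s)$):

\begin{verbatim}
def grasp(r, rb):   # one high-level behavior as movements m(R,G,A)
    return [(r, "Open", "Hover"),
            (r, "Open", "Approach"),
            (r, "Close", "Leave")]

def planner_cycle(active, assistant, behaviors, execute):
    # Run each behavior's movements; T(s) = 1 signals completion.
    for behavior in behaviors:
        for movement in behavior(active, assistant):
            assert execute(movement)      # wait for the end flag
    return assistant, active              # the arms swap roles

# Minimal usage with a stub executor:
planner_cycle("Left", "Right", [grasp], lambda m: True)
\end{verbatim}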
\section{Results}\label{section: results} \subsection{Experimental Setup}\label{section: result: experiments} \begin{table}[t!] \centering \caption{Properties of the objects in the experiments} \begin{tabular}{ ccc } \toprule Material & Density (kg/m$^3$) & Young's Modulus (MPa)\\ \midrule Natural Latex (NL) & 67.23 & 0.032\\ Polyurethane Foam (PUF) & 38.76 & 0.185\\ Silicone Foam (SCF) & 62.50 & 0.325\\ Polyethylene Foam (PEF) & 16.17 & 0.992\\ \bottomrule \end{tabular} \label{properties} \end{table} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/materials.jpg} \caption{The widths/diameters $d_O$ and cross-sections of objects made of different materials. The measurements of $d_O$ are 38.0 mm, 30.0 mm, 34.0 mm, and 98.0 mm. The cross-sections are circle, square, ring, and circle.} \label{material-list} \end{figure} We conduct an experimental study to validate the proposed method. Fig. \ref{experimental setup} shows the developed experimental platform, which is composed of two 6-DOF robot manipulators (UR3) equipped with active grippers (Robotiq) that drive customized object grasping fixtures, and a top-view LiDAR camera (Intel RealSense L515) that captures real-time point clouds of the workspace. A table is placed between the two robot arms, with the packing box rigidly attached to its surface. The robotic arms are controlled with a Linux-based PC (running Ubuntu 16.04), with ROS and RViz used for communication and visualization \cite{ros}. Image processing is performed with the OpenCV libraries \cite{opencv_library}. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/exp-results-2.pdf} \caption{Packing 13 objects $\mathcal{O}(\eta, l_O, d_O)$ of different lengths into two boxes. PEF: Polyethylene Foam, PUF: Polyurethane Foam, SCF: Silicone Foam, NL: Natural Latex.} \label{case-list} \end{figure} To test the robustness of our method for packing LEOs, we use 13 objects with different elastic properties, cross-section shapes, and lengths. The density and Young's modulus of the object materials are listed in Table \ref{properties}; The cross-sections, widths, and diameters of the objects are shown in Fig. \ref{material-list}; The 13 objects and their lengths are shown in Fig. \ref{case-list}. Twelve of these linear elastic objects are made of three materials: polyethylene foam (PEF), polyurethane foam (PUF), and silicone foam (SCF); These objects have four lengths: 558 mm, 600 mm, 830 mm, and 972 mm. The thirteenth object is a pillow made of natural latex (NL) with a length of 600 mm. The objects in this study are all packed into boxes of two different sizes, viz. $\mathcal{B}(270,207,80)$ and $\mathcal{B}(314,232,80)$ (given in mm units). \begin{table} \centering \caption{Accuracy of geometric property estimation} \begin{tabular}{ ccc } \toprule Object & Length (\%) & Width/Diameter (\%)\\ \midrule $\mathcal{O}(PEF, 558, 38)$ & 98.08 $\pm$ 1.68 & 93.16 $\pm$ 6.32\\ $\mathcal{O}(PEF, 600, 38)$ & 97.73 $\pm$ 1.23 & 93.95 $\pm$ 4.74\\ $\mathcal{O}(PEF, 830, 38)$ & 98.41 $\pm$ 1.00 & 91.05 $\pm$ 4.47\\ $\mathcal{O}(PEF, 972, 38)$ & 98.80 $\pm$ 0.81 & 91.58 $\pm$ 5.26\\ $\mathcal{O}(PUF, 558, 30)$ & 97.83 $\pm$ 1.67 & 91.34 $\pm$ 7.33\\ $\mathcal{O}(PUF, 600, 30)$ & 97.77 $\pm$ 1.60 & 96.11 $\pm$ 6.07\\ $\mathcal{O}(PUF, 830, 30)$ & 98.46 $\pm$ 1.22 & 98.08 $\pm$ 5.02\\ $\mathcal{O}(PUF, 972, 30)$ & 98.91 $\pm$ 0.83 & 98.67 $\pm$ 6.10\\ $\mathcal{O}(SCF, 558, 34)$ & 97.80 $\pm$ 1.16 & 93.82 $\pm$ 5.59\\ $\mathcal{O}(SCF, 600, 34)$ & 98.18 $\pm$ 1.23 & 88.82 $\pm$ 7.35\\ $\mathcal{O}(SCF, 830, 34)$ & 98.83 $\pm$ 0.99 & 87.65 $\pm$ 7.65\\ $\mathcal{O}(SCF, 972, 34)$ & 98.89 $\pm$ 0.85 & 92.64 $\pm$ 6.47\\ $\mathcal{O}(NL, 600, 98)$ & 99.27 $\pm$ 1.90 & 96.22 $\pm$ 3.16\\ \bottomrule \end{tabular} \label{tab: initialization} \end{table} \subsection{Vision-Based Computation of the Objects' Geometry} We validate the accuracy of the model \eqref{equ:geometry} describing the objects' geometry (i.e., the length $l_O$ and width $d_O$) by collecting 10 measurements of their initial configuration over the table (similar to the one depicted in Fig. \ref{experimental setup}) and comparing the calculated dimensions with the ground truth, see Table \ref{tab: initialization}. The length $l_O$ is computed from the ordered point cloud $\mathbf{P}^O$, whereas $d_O$ is computed from the raw point cloud. The results in Table \ref{tab: initialization} show that the estimated object length $l_O$ and width $d_O$ are slightly smaller than the ground truth, which is caused by the discretization of the continuous objects' arc length and the partial view of their surface. This, however, does not affect the proposed manipulation strategy, as demonstrated in the experimental results that follow. \subsection{Similarity Analysis of the Reference Template} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/points.pdf} \caption{The comparison of the raw feedback and the offline reference template.} \label{points} \end{figure} \begin{table}[t!]
\centering \caption{Performance of the method in the packing tasks} \begin{tabular}{ cccc } \toprule Object & Mean $\mu$ (mm) & Variance $\sigma^2$ (mm$^2$) & 3-$\sigma$ Confidence Interval (\%)\\ \midrule $\mathcal{O}(PEF, 558, 38)$ & 17.5 & 0.299 & 99.79\\ $\mathcal{O}(PEF, 600, 38)$ & 18.0 & 0.411 & 99.84\\ $\mathcal{O}(PEF, 830, 38)$ & 18.1 & 1.087 & 99.87\\ $\mathcal{O}(PEF, 972, 38)$ & 18.5 & 0.874 & 99.42\\ $\mathcal{O}(PUF, 558, 30)$ & 12.7 & 0.540 & 97.48\\ $\mathcal{O}(PUF, 600, 30)$ & 15.1 & 0.078 & 98.61\\ $\mathcal{O}(PUF, 830, 30)$ & 16.0 & 0.300 & 97.85\\ $\mathcal{O}(PUF, 972, 30)$ & 15.1 & 0.360 & 98.03\\ $\mathcal{O}(SCF, 558, 34)$ & 19.2 & 2.049 & 99.64\\ $\mathcal{O}(SCF, 600, 34)$ & 19.6 & 0.130 & 99.73\\ $\mathcal{O}(SCF, 830, 34)$ & 17.0 & 1.120 & 98.89\\ $\mathcal{O}(SCF, 972, 34)$ & 20.3 & 1.435 & 99.85\\ $\mathcal{O}(NL, 600, 98)$ & 50.9 & 1.984 & 99.94 \\ \bottomrule \end{tabular} \label{tab: performance} \end{table} To verify whether the designed $Spiral$ shape matches the packed object in the box, we compute the similarity between the point clouds of the reference template and the raw feedback of the object. To this end, we compute the set of minimum Euclidean distances from every point in $\mathbf{P}$ to the points in $\mathbf{P}^*$ as follows: \begin{equation} D=\{ \min_j \| \mathbf{p}_i - \mathbf{p}_j^* \|_2: \mathbf{p}_i \in \mathbf{P}, \mathbf{p}_j^* \in \mathbf{P}^* \} \label{Euclidean} \end{equation} If the shape of the packed object matches $Spiral$ well, $D$ approximately follows a Gaussian distribution $N(\mu,\sigma^2)$, with mean close to the radius or half-width of the object, $\mu\approx\frac{d_O}{2}$, and a small standard deviation $\sigma\approx 0$. The average value of the set $D$ is equal to the error $e_{in}$. Fig. \ref{points} presents the raw feedback point clouds $\mathbf P$ and the offline reference template $\mathbf P^*$. A statistical analysis of the similarity is shown in Table \ref{tab: performance}, which demonstrates that the mean distances satisfy $\mu\approx\frac{d_O}{2}$ and that the variances $\sigma^2$ are small.
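With the (illustrative) brute-force implementation below, the statistics of $D$ can be checked directly from the recorded point clouds:

\begin{verbatim}
import numpy as np

def similarity_stats(P, P_star):
    # D: minimum distance from each packed point to the template.
    d = np.linalg.norm(P[:, None, :] - P_star[None, :, :], axis=2)
    D = d.min(axis=1)
    # For a well-packed object, mean(D) should approach d_O / 2.
    return D.mean(), D.var()
\end{verbatim}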
\subsection{Generation of Reference Points}\label{section: result: target planning} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/planning-result.png} \caption{Reference point generator of the action planner. The light green points are the raw feedback points $\mathbf{P}$. Blue indicates the start, red indicates the end, and the color gradients indicate the order of the points. The grasping points and placing points are orange, and the fixing points are light-blue.} \label{path-planning-result} \end{figure} In this section, we take $\mathcal{O}(PEF, 972, 38)$ as an example. The constant distance from $\mathbf{p}^{L}$ to $\mathbf{j}_B$ is set as $\delta_l = 50$ mm, and the distance from $\mathbf{p}^{F}$ to $\mathbf{p}^{L}$ as $\delta_f = 100$ mm. Based on these parameters, the generator automatically computes the placing points $\mathbf p^L=\mathbf p_k^*$ and grasping points $\mathbf p^G=\hat{\mathbf p}_k$ for the indices $k\in\{26,64,110\}$; The fixing points $\mathbf p^F$ are determined within each cycle based on the relative locations of the two robots. Fig. \ref{path-planning-result} depicts the reference template and the ordered skeleton, where $\mathbf p^G$ and $\mathbf p^L$ are represented by orange points, whereas $\mathbf p^F$ is represented by light-blue points. In this figure, we can see how the active robot grasps the object at $\mathbf p^G$ (orange points on the object) and places it at $\mathbf p^L$ (corresponding orange points on the template). The figure also shows how the assistant robot fixes the object by pushing it at $\mathbf p^F$ (corresponding light-blue point on the reference template), which enables the active robot to be released and to conduct the next action. \subsection{Shape Difference} \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{figures/ds_400x640.pdf} \caption{The shape differences (inside the box, outside the box, and total) during manipulation.} \label{shape-difference} \end{figure} This section validates the performance of the proposed automatic packing method with the 13 different objects shown in Fig. \ref{case-list}; Each experiment is conducted 10 times\footnote{\href{https://youtu.be/ZGJcRE2nqBc}{https://youtu.be/ZGJcRE2nqBc}}. To quantify the progress and accuracy of the packing task, we compute the shape difference with visual feedback; This metric is only computed at the beginning of every loop, as there are occlusions and noisy points while the robots are moving. The blue, green, and red solid curves shown in Fig. \ref{shape-difference} respectively represent the errors $e_{in}$, $e_{out}$ and $e$ obtained from ten automatic packing experiments. The red dashed line represents the errors' ideal value $e^* = \frac{d_O}{2}$. The blue curves start from zero, when the object is completely outside the box before the automatic manipulation, and converge to $e^*$ after the object has been fully packed. The green curves start from large initial values, when the objects lie on the table with an undeformed shape, and converge to zero when there are no points outside of the box after packing has been completed. The red curves, which represent the weighted average of the blue and green curves, start from the same initial values as the green curves and monotonically decrease to $e^*$. These results quantitatively demonstrate that the proposed manipulation strategy can successfully deform and manipulate various types of LEOs into compact boxes. \subsection{High-Level Behaviors Constructed with Action Primitives} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/collision.pdf} \caption{Robotic movements with two modes of \textit{Hover}. In (a-1)--(a-3), the robot moves at a constant height, touches the object, and the grasp fails. In (b-1)--(b-3), the robot smoothly moves along the object with the proposed object-following motion.} \label{collision} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/follow-path.pdf} \caption{Path planning for the robots to follow the object without collision.} \label{follow-path} \end{figure} Generally, it is sufficient for the robots to move at a constant height to avoid collisions with the edge of the cuboid-shaped bin. However, as the ordered skeleton $\mathbf P^O$ can lie higher than the box, moving at this constant height may produce collisions with the object, as shown in Fig. \ref{collision} (a-1)--(a-3). To deal with this problem, we designed the \textit{Hover} action primitive to first guide the robot to the nearest point above the object and then move it along the object's curvature until the gripper reaches the target grasping point $\mathbf p^G$, see Fig. \ref{follow-path}. This action primitive enables the robot to successfully grasp LEOs with complex bent geometries while avoiding collisions with them, as demonstrated in Fig. \ref{collision} (b-1)--(b-3).
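A simplified sketch of this object-following approach path is given below (illustrative only; the real implementation operates on the ordered skeleton $\mathbf P^O$ with the offset $\Delta h$):

\begin{verbatim}
import numpy as np

def hover_path(skeleton, p_G, x, dh=60.0):
    # skeleton: (N x 3) ordered skeleton P^O; p_G: grasping point;
    # x: current end-effector position (3,). Move to the nearest
    # point above the object, then follow its curvature, offset
    # by dh, until reaching the point above p_G.
    i0 = int(np.argmin(np.linalg.norm(skeleton - x, axis=1)))
    iG = int(np.argmin(np.linalg.norm(skeleton - p_G, axis=1)))
    step = 1 if iG >= i0 else -1
    lift = np.array([0.0, 0.0, dh])
    return [skeleton[i] + lift for i in range(i0, iG + step, step)]
\end{verbatim}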
The four high-level behaviors (i.e., grasp the object, place it into the box, release the active robot, and change the active robot) are shown in Figs. \ref{ap-grasping}--\ref{ap-changehand}. The experiment in Fig. \ref{ap-grasping} shows how the robot autonomously grasps the object and reaches a safe height from the table. This figure shows that the \textit{Left} robot performs \textit{Hover} above $\mathbf{p}^{G}$, then \textit{Approach} towards the grasping point $\mathbf{p}^{G}$ with an open gripper, and finally \textit{Close} and \textit{Leave} from $\mathbf{p}^{G}$ towards the initial height $z(t_0)$. The second high-level behavior is depicted in Fig. \ref{ap-locating}. The purpose of this sequence of actions is to deform and place the grasped object at a specific position within the box. The figure shows the initial configuration, where the object is grasped by the \textit{Left} robot and the inside-the-box part is fixed by the \textit{Right} robot; Then, the \textit{Left} robot performs \textit{Hover} and \textit{Approach} towards $\mathbf{p}^{L}$, while holding the object. The third high-level behavior (release the active robot) is depicted in Fig. \ref{ap-releasehand}. The purpose of these movements is to fix the object's shape while the active robot that is holding the object opens its gripper. The figure shows the initial configuration, where the object (already inside the box) is grasped by the \textit{Left} robot; Then, the \textit{Right} robot performs \textit{Hover} and \textit{Approach} towards $\mathbf{p}^{F}$ with a closed gripper. The \textit{Left} robot then opens its gripper (\textit{Open}), leaves the object (\textit{Leave}), and returns to the initial height $z(t_0)$. The fourth high-level behavior (change the active robot) is depicted in Fig. \ref{ap-changehand}. The purpose of these movements is to switch the active robot's identifier from \textit{Left} to \textit{Right}, in preparation for the \textit{Right} robot to conduct the next grasp. The figure shows the initial state, where the \textit{Left} robot is free and the \textit{Right} robot is performing \textit{Fix} onto the object; Then, the \textit{Left} robot performs the \textit{Fix} action while the \textit{Right} robot performs \textit{Leave} and then \textit{Reset} to return to its initial position, which completes one cycle of the action planner loop. We take $\mathcal{O}(PEF, 972, 38)$ as a representative example to demonstrate the performance of the method. Fig. \ref{action primitives} depicts the complete process of the packing task, which consists of three cycles, corresponding to the three rows; Each thumbnail in the figure presents a movement $m(R,G,A)$ conducted by the robot arms. The periodic nature of the action planner is illustrated by the fact that the three cycles share the same first three high-level behaviors, viz., grasping the object, placing it into the box, and releasing the active robot. The first and second cycles differ only in the fourth high-level behavior (changing the active robot), as the active robot is the same in these two cycles (thus, there is no need to change the active robot between them). The packing process ends in the third cycle, where the object has been completely packed into the box. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/grasping.pdf} \caption{A grasping manipulation (Loop 1) is composed of action primitives given a grasping point $\mathbf{p}^{G}_1$: (a) $m (\textit{Left}, \textit{Open}, \textit{Hover})$, (b) $m (\textit{Left}, \textit{Open}, \textit{Approach})$, (c) $m (\textit{Left}, \textit{Close}, \textit{Leave})$.} \label{ap-grasping} \end{figure} \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{figures/locating.pdf} \caption{A placing manipulation (Loop 2) is composed of action primitives given a placing point $\mathbf{p}^{L}_2$: (a) initial state, (b) $m (\textit{Left}, \textit{Close}, \textit{Hover})$, (c) $m (\textit{Left}, \textit{Close}, \textit{Approach})$.} \label{ap-locating} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/releasehand.pdf} \caption{The assistant robot helps to release the active robot that is grasping the object after placing it in the box (Loop 2), given a fixing point $\mathbf{p}^{F}_2$: (a) initial state, (b) $m (\textit{Right}, \textit{Close}, \textit{Hover})$, (c) $m (\textit{Right}, \textit{Close}, \textit{Approach})$, (d) $m (\textit{Left}, \textit{Open}, \textit{Leave})$.} \label{ap-releasehand} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/changehand.pdf} \caption{A changing-hand manipulation (Loop 2) is composed of action primitives conducted by both robots according to their current positions: (a) initial state, (b) $m (\textit{Left}, \textit{Close}, \textit{Fix})$, (c) $m (\textit{Right}, \textit{Close}, \textit{Leave})$, (d) $m (\textit{Right}, \textit{Open}, \textit{Reset})$.} \label{ap-changehand} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=18cm]{figures/action-planning-exp.jpg} \caption{Experimental process and action primitives for packing $\mathcal{O}(PEF, 972, 38)$. The rows represent three action planner loops. The columns represent the behaviors of the robots. The thumbnails show the robotic movements. The first two loops are mainly executed by the left arm (grasping and placing) and assisted by the right arm (fixing). The third loop is mainly executed by the right arm (grasping and placing) and assisted by the left arm (fixing). } \label{action primitives} \end{figure*} \section{Conclusions}\label{section: conclusions} In this work, we propose a complete method to pack long linear elastic objects into compact boxes. First, we design a hybrid geometric model that includes an online 3D vision method and an offline reference template to tackle occlusions during packing manipulations under a single-view camera. The online 3D vision method extracts the objects' geometric information in real time; The offline reference template is generated from the designed $Spiral$ shape, whose effectiveness is demonstrated by the high similarity between the template and the shape of the packed object. Then, we propose a method to plan reference points for grasping, placing, and fixing. Next, we propose an action planner that composes the defined action primitives into high-level behaviors and achieves the packing task by repeating a periodic action planner loop. Finally, extensive experiments are conducted to verify the generality of our proposed method for various objects with different elastic materials, lengths, densities, and cross-sections. Although the method is designed for packing tasks, the defined action primitives and the reference point generation method can be used in other manipulation tasks (e.g., object sorting, multi-object assembly, etc.). Also, the proposed perception method works without markers and reduces computation time by extracting only minimal geometric information of the objects. A limitation of our method is that the perception method does not consider situations where the object is outside the camera's view range.
A possible solution is to employ a multi-view vision system to perceive the object. For future work, we plan to explore multi-view vision and to extend the framework to other comprehensive tasks involving more types of objects (e.g., rigid, elastic, articulated), as well as to optimize the packing to save space. Our team is currently working along this challenging direction. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
Note that the majority of these methods do not address (and underestimate) large-scale elastic deformations, e.g., those exhibited by linear elastic objects; To optimize packing space, shape control is needed to transfer and arrange LEOs into compact boxes. The few works that do consider the arrangement of highly deformable materials \cite{DBLP:journals/corr/abs-2012-03385}, do not address shape control and are mostly confined to simple numerical simulations. As the booming e-commerce industry now extends to many non-traditional commodities (e.g., deformable groceries and household products \cite{digital_shopping}), it is essential to develop shape control methods that can deform highly elastic materials and thus save packing space; However, this challenging soft packing problem has not been sufficiently studied in the literature. \subsubsection{Action Planning for Packing Tasks} In contrast with traditional (low-level) control methods for manipulating soft objects based on \emph{continuous} trajectories \cite{david2014} (i.e., with a single action), the robotic packing of a LEO requires to use \emph{discrete} task planning with multiple (high-level) actions. These types of methods decompose and plan the task in terms of a coordinated sequence of action primitives, each of which captures a specific motor behavior (this approach has been used in a wide range of applications, e.g., grasping \cite{felip2009robust}, soccer \cite{allgeuer2018hierarchical}, assembly \cite{wang2018robot}). Action primitives methods have been proposed for packing and object arrangement problems, e.g. \cite{schwarz2017nimbro} develops a controller for robotic picking and stowing tasks based on parametrized motion primitives; \cite{zeng2018learning} proposes a method for manipulating objects into tightly packed configurations by learning pushing/grasping policies; \cite{capitanelli2018manipulation} tackles the problem of reconfiguring articulated objects by using an ordered set of actions executed by a dual-arm robot. Yet, note that the action primitives adopted by these works cannot capture the complex behaviors that are needed to control the shape of a LEO during a packing task. \subsubsection{Representation of Deformable Objects} To visually guide the manipulation task, it is necessary for a controller to have a meaningful representation of the object. To this end, researchers have developed a variety of representation methods, e.g., physics-based approaches \cite{kimura2003constructing, essahbi2012soft, 9215039, petit2017using, kaufmann2009flexible} (using mass-spring-damping models and finite element method), based on visual geometric features \cite{qi2021contour, 9000733, 8676321,laranjeira2020catenary} (using points, angles, curvatures, catenaries, contours moments, etc), or data-driven representations \cite{navarro2018fourier, hu20193, zhu2021vision, 9410363} (using Fourier series, FPFH, PCA, autoencoders, etc). Typically, vision-based approaches are strongly affected by occlusions during the task (this is problematic for packing as a top observing camera will have incomplete observations during the task). To deal with this issue, many works have addressed the estimation and tracking of the object's deformation in real-time \cite{tang2018track, jin2022robotic, chi2019occlusion}; Yet, these works only consider with 2D scenarios, hence, are not applicable to our 3D LEO manipulation problem. 
\subsection{Our Contribution} In this paper, we propose: (1) A new hybrid geometric model that combines online 3D vision with an offline reference template to deal with camera occlusions; (2) A reference point planner that provides intermediate targets to guide the high-level packing actions; (3) A cyclic action planner that coordinates multiple action primitives of a dual-arm robot to perform the complex LEO packing task. The proposed methodology is original, and its demonstrated capabilities have not (to the best of the authors' knowledge) been previously reported in the literature. To validate this new approach, we report a detailed experimental study with a dual-arm robot performing packing tasks with LEOs of various elastic properties. This paper is organized as follows: Section \ref{section: modeling} presents the mathematical models; Section \ref{section: Hybrid Geometric Model} describes the hybrid geometric model; Section \ref{section: action planner} presents the packing method; Section \ref{section: results} reports the experiments; Section \ref{section: conclusions} gives conclusions. \section{Modeling}\label{section: modeling} Table \ref{nomenclature} presents the key nomenclature used in the paper. \begin{table}[t!] \centering \caption{Key Nomenclature} \begin{tabular}{l m{6.5cm} } \toprule[1pt] $\hspace{-2mm}$Symbol & Quantity \\ \specialrule{0.5pt}{1pt}{2pt} $\hspace{-2mm}\mathcal{O}(\eta, l_O, d_O)$ & LEO of material $\eta$, length $l_O$ and rod diameter $d_O$.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathcal{B}(l\hspace{-0.5mm}_B,w\hspace{-0.5mm}_B,h\hspace{-0.5mm}_B)$ & Cuboid container of dimensions $l\hspace{-0.5mm}_B\times w\hspace{-0.5mm}_B\times h\hspace{-0.5mm}_B$.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{P}^*$ & The offline reference template. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{P}$ & The raw feedback point cloud inside the box. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{P}^{O}$ & The ordered skeleton of the point cloud outside the box. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\hat{\mathbf p}_i$ & The corresponding point in $\mathbf{P}^O$ to the $i$th point in $\mathbf{P}^*$.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}e_{in}$, $e_{out}$ & The shape differences of $\mathbf{P}$ and $\mathbf{P}^O$ to $\mathbf{P}^*$, respectively.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm} e$, ${e}^*$ & Total shape difference and its desired value.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{x}$ & End-effector feedback pose.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{x}^*$ & End-effector reference pose. \\ \specialrule{0.01pt}{1pt}{2pt} \hspace{-2mm}$\mathbf{u}$ & End-effector target pose. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\{ F \}$ & The reference template frame. \\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\{ F_O \}$ & The object body frame. 
\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\Delta h$, $\Delta f$ & Height offsets for hover and fixing action primitives.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm} \delta_l$, $\delta_f$ & Horizontal distances for the reference point generator.\\ \specialrule{0.01pt}{1pt}{2pt} $\hspace{-2mm}\mathbf{p}^{\hspace{-0.3mm}L}\hspace{-0.5mm}$,~ $\hspace{-0.5mm}\mathbf{p}^{\hspace{-0.3mm}G}\hspace{-0.5mm}$,~ $\hspace{-0.5mm}\mathbf{p}^{\hspace{-0.3mm}F}$ & Reference points for object placing, grasping, and fixing.\\ \bottomrule[1pt] \end{tabular} \label{nomenclature} \end{table} \subsection{Geometric Object Modeling} In our method, we use an RGB-D camera to capture point clouds of the scene in real-time. During the task, the raw point cloud is splitted into two structures: $\mathbf{P}^O$ which represents the object's part to be grasped by the robot, and $\mathbf{P}$ which represents the object's part already packed; Spatially, these two structures correspond to the object's parts outside and inside the box, respectively. To provide an intuitive topology that facilitates the LEO's manipulation, the points in $\mathbf P^O$ are ordered along the linear object's centerline. During initialization, the object's length $l_O$ and width $d_O$ are computed from the raw point cloud data. The offline reference template $\mathbf{P}^*=[\mathbf p_1^*,\ldots,\mathbf p_M^*]\in\mathbb R^{3\times M}$ is a pre-designed geometric curve which represents the final configuration to be given to the object $\mathcal{O}(\eta, l_O,d_O)$ within the box $\mathcal{B}(l_B,w_B,h_B)$. Similarly to the raw feedback point cloud, the reference template $\mathbf{P}^*$ is also separated into two parts at the split point $\mathbf{p}_s$, correspongding to $\mathbf{P}$ and $\mathbf{P}^O$. Given the $i$th point $\mathbf p_i^*$ on $\mathbf{P}^*$, we denote its corresponding point at $\mathbf P^O$ as $\hat{\mathbf{p}}_i^*$. The length from $\hat{\mathbf{p}}_i^*$ to the end of the object $\mathbf{p}^O_{N}$ equals the length from $\mathbf{p}_i^*$ to $\mathbf{p}_{M}^*$. For the point clouds $\mathbf{P}^*$ and $\mathbf{P}^O$, two important frames, the reference template frame and the object body frame, are defined. The reference template frame is denoted as $\{ F \}$ at $\mathbf{p}^*_{i}, i = 1,...,M-1$. Its z-axis is the unit vector vertically pointing to the table. The x-axis is the unit vector poiting to $\mathbf{p}^*_{i+1}$. And the y-axis is determined by the right-hand principle. The object body frame is denoted as $\{ F_O \} = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ at a point in $[\mathbf{p}_{i}^O, \mathbf{p}_{i+1}^O), i =\hspace{-1mm} 1,...,N - 1$. Similarly, its z-axis is the unit vector vertically pointing to the table. Then, we introduce the unit tangent vector $\hat{\mathbf{i}}$ pointing from $\mathbf{p}_{i}^O$ to $\mathbf{p}_{i+1}^O$. The y-axis $\mathbf{j}$ is orthogonal to the plane $\mathbf{k}-\hat{\mathbf{i}}$. Note that, since the object is not in a plane parallel to the table, $\hat{\mathbf{i}}$ is not always orthogonal to $\mathbf{k}$. So the real x-axis of $\{ F_O \}$ is determined by $\mathbf i = \mathbf j \times \mathbf k$. These definations are depicted in Fig. \ref{hybrid-geometric-model}. \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/hybrid-geometric-model.pdf}} \caption{The the proposed hybrid geometric model of a LEO. Blue indicates the start curve and pink indicates its end. 
The axes $\mathbf i$, $\mathbf k$ and $\hat{\mathbf i}$ are all co-planar.} \label{hybrid-geometric-model} \end{figure} To compute shape difference $e$ between the feedback point cloud and the reference template, we must introduce the inside and outside the box errors between these two structures. For that, we define as $e_{in}$ the average minimum Euclidean distance (i.e., the similarity \cite{tian2017geometric}) between $\mathbf P$ and the first $s$ points of $\mathbf P^*$, and define as $e_{out}$ the average Euclidean distance between the other $M-s$ points of $\mathbf P^*$ and their corresponding points $\hat{\mathbf p}_i$. The total shape difference $e$ is computed as a weighted combination of these two errors: \begin{equation} e = w e_{in} + (1-w) e_{out} \end{equation} for $w = \frac{s}{M}$ as normalization weight. This metric quantifies the accuracy of the automatic shaping process. Note that, $e_{in}$ and $e_{out}$ are respectively averaged based on the number of points in $\mathbf P$ and $\mathbf P^O$, thus, they represent the distances between two pairs of points. Therefore, these errors are not significantly influenced if some feedback points are lost due to occlusion. Besides, $e$ is weighted based on the number of points at the two sides of $\mathbf p^*_s$; This gurantees that the contribution of $e_{in}$ and $e_{out}$ is normalized. As the feedback point clouds of the LEO represent points over its surface whereas the reference template represents a centerline, therefore, the error $e$ will ideally converge to a desired value ${e}^*= \frac{d_O}{2}$, i.e., half width of the object. \subsection{Action Planning} The robotic system considered in this study is composed of two end-effectors, one with an active grasp role and the other with an assistive fix role. It is assumed that the end-effectors are vertically pointing towards the box's plane. The configuration of the robotic arms is represented by a four degrees-of-freedom (DOF) pose vector $\mathbf x=[x,y,z,\theta{]}^\T$ (comprised of position and orientation coordinates) and one DOF for the gripper's open/close configuration. The initial pose $\mathbf x(t_0)$ of the robot arms is assumed to be above the box and object. The action planner that coordinates multiple action primitives performed by the robot arms is modeled with a classic state machine \cite{hudson2019learning}. This state machine is represented by the tuple $(S,T,A,G,R)$, whose elements are defined as: \begin{itemize} \item $A$: collection of action primitives for the end-effectors' 4-DOF pose. \item $G$: collection of action primitives for the grippers' 1-DOF open/close configuration. \item $R$: collection of the robot active and assitant roles. \item $S$: collection of action planner states, each represented by a robot movement. \item $T$: state transition function. \end{itemize} Aiming at recycling the modules of the state machine, the designed action primitives compose a periodic action planner \textit{loop} that enables the robot to automatically perform the complex packing task. \subsection{Framework Overview} \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/framework.pdf}} \caption{The framework of the proposed approach for packing long LEOs into common-size boxes. Solid lines indicate data transmission, and dashed lines indicate physical contact.} \label{framework} \end{figure} The overview of the proposed automatic packing approach is shown in Fig. \ref{framework}. 
\subsection{Action Planning} The robotic system considered in this study is composed of two end-effectors, one with an active grasp role and the other with an assistive fix role. It is assumed that the end-effectors point vertically towards the box's plane. The configuration of the robotic arms is represented by a four degrees-of-freedom (DOF) pose vector $\mathbf x=[x,y,z,\theta]^\T$ (comprised of position and orientation coordinates) and one DOF for the gripper's open/close configuration. The initial pose $\mathbf x(t_0)$ of the robot arms is assumed to be above the box and object. The action planner that coordinates multiple action primitives performed by the robot arms is modeled with a classic state machine \cite{hudson2019learning}. This state machine is represented by the tuple $(S,T,A,G,R)$, whose elements are defined as: \begin{itemize} \item $A$: collection of action primitives for the end-effectors' 4-DOF pose. \item $G$: collection of action primitives for the grippers' 1-DOF open/close configuration. \item $R$: collection of the robot active and assistant roles. \item $S$: collection of action planner states, each represented by a robot movement. \item $T$: state transition function. \end{itemize} By recycling the modules of the state machine, the designed action primitives compose a periodic action planner \textit{loop} that enables the robot to automatically perform the complex packing task. \subsection{Framework Overview} \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/framework.pdf}} \caption{The framework of the proposed approach for packing long LEOs into common-size boxes. Solid lines indicate data transmission, and dashed lines indicate physical contact.} \label{framework} \end{figure} The overview of the proposed automatic packing approach is shown in Fig. \ref{framework}. It is composed of a hybrid geometric model (green block), a reference point generator (orange block), and an action planner (blue block). The hybrid geometric model provides a robust representation of the object by combining an online part ($\mathbf P$ and $\mathbf P^O$) and an offline part ($\mathbf P^*$). The reference point generator computes reference poses $\mathbf{x}^*$ for the robot to perform grasping, placing, and fixing the object. The action planner commands the execution of the task based on a series of action primitives. The robotic platform (pink block) receives and executes the kinematic motion commands (i.e., the target poses and gripper configurations) from the action planner, and returns an end flag to the control system after their completion. The state machine recycles a periodic action planner loop by alternating each robot arm between an active and an assistant role until the task is completed. \section{Hybrid Geometric Model}\label{section: Hybrid Geometric Model} The proposed hybrid geometric model consists of $\mathbf P$ (the raw feedback point cloud inside the box), $\mathbf P^O$ (the ordered skeleton of the object's part outside the box), and $\mathbf P^*$ (the generated offline reference template). It extracts the object's geometry in real time and generates the suitable target shape for packing LEOs, which are preconditions for reference point generation and packing progress measurement. On one hand, the reference point generator replaces $\mathbf P$ with the corresponding points ($\mathbf p^*_1$ to $\mathbf p^*_s$) of $\mathbf P^*$ and searches for the reference points in $\mathbf P^O$ and $\mathbf P^*$, to deal with the typical occlusions that result from the grippers blocking the top-view camera. On the other hand, the shape difference $e$ is computed as the combination of $e_{in}$ (the distance from $\mathbf P$ to the template points $\mathbf p^*_i$, $i = 1, \ldots, s$) and $e_{out}$ (the distance from $\mathbf P^O$ to the template points $\mathbf p^*_i$, $i = s, \ldots, M$), to monitor and quantify the object's packing. \subsection{Offline Reference Template} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/shape.pdf} \caption{Target shape $Spiral$ of a LEO in a box. (a) shows the box's bottom. The points in a gradient from pink to blue represent $\mathbf{P}$. (b) illustrates how $Spiral$ is constructed with straight segments (between red dashed lines) and two sets of concentric semicircles (the centers are $\mathbf{C}_{od}$ and $\mathbf{C}_{ev}$). (c) illustrates the beginning segment (yellow) and periodic parts (orange and green) in $Spiral$. } \label{fig:object-model} \vspace{-0.2cm} \end{figure} The offline reference template is needed to perform the packing task, as it provides the final target shape of the object and replaces the occluded parts with its offline 3D points. To optimize the packing space, the target shape for the long LEO is designed in the form of a modified spiral, which is composed of straight segments and concentric semicircles, as shown in Fig. \ref{fig:object-model}. This target configuration is separated into periodic and aperiodic parts: the former consists of a semicircle followed by a straight segment; the latter only represents the beginning straight segment of the curve.
Given a box-object pair, the maximum number of action planner loops (which equals the number of grasps needed to complete the task) is one more than the total number of semicircles $\lfloor \frac{w_B}{d_O}\rfloor$, where $\lfloor \cdot \rfloor$ denotes the floor (round-down to the nearest integer) operator. We compute the maximum object length that can be placed in the box with the spiral shape as: \begin{equation} l_O = l_B - \frac{w_B}{2} + \sum_{j=1}^{\lfloor \tfrac{w_B}{d_O}\rfloor} {\left(l_B - w_B+ \frac{d_O}{2} + \pi \frac{w_B-d_Oj}{2} \right)} \label{capacity} \end{equation} Then, we parameterize the centerline of the spiral shape (see Fig. \ref{fig:object-model} (b)) with a normalized length $\lambda = i/M \in[0, 1]$, for $i=1,\dots,M$. The parameterized centerline is denoted as $\mathbf P^*(\lambda)$, and the length of its curve is computed as: \begin{equation} l(\lambda) = \lambda\, l_O = \sum_{i=2}^{\lfloor \lambda M \rfloor} \| \mathbf{p}_i^* - \mathbf{p}_{i-1}^* \|_2. \end{equation} The process for generating the target spiral shape is presented in Algorithm \ref{algorithm3}. \begin{algorithm}[t!] \small \caption{\small The description of the shape $Spiral$}\label{algorithm3} \KwIn{the box $\mathcal{B}(l_B, w_B, h_B)$, the object $\mathcal{O} (\eta, l_O, d_O)$} \KwOut{the parameterized formula $\mathbf{P}^*(\lambda)$} $\mathbf{C}_{od}(-\frac{l_B}{2}+\frac{w_B}{2}, 0, \frac{d_O}{2})$, $\mathbf{C}_{ev}(\frac{l_B}{2}-\frac{w_B}{2}+\frac{d_O}{2}, \frac{d_O}{2}, \frac{d_O}{2})$\; $l_{line} = l_B-w_B+\frac{d_O}{2}$\; $l_{count} = l_B-\frac{w_B}{2}$\; $j=0$, $\lambda=0$\; \While{$\lambda<1$}{ \eIf{$0 \leq \lambda l_O <l_B-\frac{w_B}{2}$}{ $\mathbf{P}^*(\lambda)=(\frac{l_B}{2},\frac{-w_B+d_O}{2},\frac{d_O}{2}) + (\lambda l_O,0,0)$\; $\lambda=\lambda+\frac{1}{M}$\; } { $j=j+1$\; $\lambda=(l_B-\frac{w_B}{2})/l_O$\; $r_{sc}=\frac{w_B}{2}-\frac{d_O}{2}j$\; $l_{semicircle}=\pi r_{sc}$\; \If{$\lambda l_O-l_{count}<l_{semicircle}$}{ $\phi = \frac{\lambda l_O-l_{count}}{r_{sc}}$\; $\mathbf{P}^*(\lambda)=\mathbf{C}_{od} - r_{sc}(\sin{\phi},\cos{\phi},0)$, $j$ is odd\; $\mathbf{P}^*(\lambda)=\mathbf{C}_{ev} + r_{sc}(\sin{\phi},\cos{\phi},0)$, $j$ is even\; $\lambda=\lambda+\frac{1}{M}$\; } $l_{count} = l_{count} + l_{semicircle}$\; \If{$\lambda l_O-l_{count} \leq l_{line}$}{ $\mathbf{P}^*(\lambda)=\mathbf{C}_{od} +(\lambda l_O-l_{count}, r_{sc})$, $j$ is odd\; $\mathbf{P}^*(\lambda)=\mathbf{C}_{ev} -(\lambda l_O-l_{count}, r_{sc})$, $j$ is even\; $\lambda=\lambda+\frac{1}{M}$\; } $l_{count} = l_{count} + l_{line}$\; } } \end{algorithm}
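As a sanity check, Eq. \eqref{capacity} transcribes directly into a short script. The sketch below is illustrative (units must be consistent, e.g., millimetres):

\begin{verbatim}
import math

def max_packable_length(l_B, w_B, d_O):
    """Maximum object length for the Spiral template, Eq. (capacity)."""
    n = math.floor(w_B / d_O)      # number of semicircles
    l_O = l_B - w_B / 2.0          # beginning straight segment
    for j in range(1, n + 1):      # periodic parts
        l_O += (l_B - w_B + d_O / 2.0) \
               + math.pi * (w_B - d_O * j) / 2.0
    return l_O

# e.g., the 314 x 232 x 80 mm box and a 38 mm wide object:
# max_packable_length(314, 232, 38)
\end{verbatim}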
\subsection{Online 3D Vision}\label{section: perception} \begin{figure}[t!] \centerline{\includegraphics[width=\columnwidth]{figures/perception.pdf}} \caption{Point cloud processing: (a) boundary extraction, (b) ordered skeleton.} \label{perception} \vspace{-0.2cm} \end{figure} To compute the ordered skeleton $\mathbf P^O$, the point cloud processing algorithm extracts the geometric information of the objects in real time. Firstly, it smoothens the raw point cloud with a weighted filter \cite{hesterberg1995weighted} and downsamples it to reduce the computational cost. Next, it detects the boundaries of the object from the point cloud. For that, we introduce a polar coordinate system with the origin at the center of the box and its axis defined along $\mathbf i_B$ (as depicted in Fig. \ref{experimental setup}). Then, we segment the object into $N$ sections by rotating (clockwise) a ray starting from $\mathbf i_B$ around the center, with a fixed angle interval (see Fig. \ref{perception}). Along each ray, we search for the nearest and farthest points in the raw point cloud feedback, which we denote as $\mathbf p_i^{in}$ and $\mathbf p_i^{out}$, respectively; these points serve as the components of the inner and outer boundaries of the object. Lastly, the LEO's ordered skeleton $\mathbf P^O$ is constructed by computing the mean of the raw feedback points between two adjacent rays. The length $l_O$ and width $d_O$ of the linear object are then calculated as follows: \begin{equation} l_O= \hspace{-0.5mm}\sum_{i=2}^{N} \left\|\mathbf{p}_{i}^O - \mathbf{p}_{i-1}^O\right\|_2,\ d_O=\frac{1}{N}\sum_{i=1}^{N} \left\|\mathbf{p}_i^{out} - \mathbf{p}_{i}^{in}\right\|_2. \label{equ:geometry} \end{equation}
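A minimal sketch of Eq. \eqref{equ:geometry} (illustrative NumPy; the boundary points are assumed to be stacked as $(N,3)$ arrays):

\begin{verbatim}
import numpy as np

def object_geometry(P_O, P_in, P_out):
    """P_O: (N, 3) ordered skeleton; P_in, P_out: (N, 3) inner and
    outer boundary points found along each polar ray."""
    # l_O: sum of distances between consecutive skeleton points.
    l_O = np.linalg.norm(np.diff(P_O, axis=0), axis=1).sum()
    # d_O: mean distance between inner and outer boundary points.
    d_O = np.linalg.norm(P_out - P_in, axis=1).mean()
    return l_O, d_O
\end{verbatim}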
\section{Automatic Packing Method}\label{section: action planner} \subsection{Reference Point Generator}\label{section Reference Points Generator} Our proposed manipulation method recycles a periodic action planner loop, which is composed of various high-level behaviors. To execute these behaviors, three types of points are planned, namely, the grasping reference $\mathbf p^G$, the placing reference $\mathbf p^L$ and the fixing reference $\mathbf p^F$. The reference point generator (the orange block in Fig. \ref{framework}) constructs the reference pose $\mathbf x^*=[x^*,y^*,z^*,\theta^*]^\T$ for the robot based on the object body frame $\{F_O\}$ (which is computed from $\mathbf P^O$ for $\mathbf p^G$) and the reference template frame $\{F\}$ (which is computed from $\mathbf P^*$ for $\mathbf p^L$ and $\mathbf p^F$). Fig. \ref{hybrid-geometric-model} conceptually depicts these frames. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/reference-point-generator.pdf} \caption{Reference point generator. The offline reference template starts from a corner of the box and the point index increases clockwise. (a) the candidate placing points (orange), (b) the candidate fixing point (green) given a placing point (orange). Red and blue regions respectively indicate the workspaces of the left and right arms.} \label{path-planning} \vspace{-0.3cm} \end{figure} The points $\mathbf{p}^{L}$ represent the positions within the box where the LEO is to be placed by the robot. These points are a subset of the offline reference template $\mathbf P^*$ and are defined as $\mathbf{p}^{L} = \mathbf{p}_{k}^*$, for $k=1,\ldots,M$ as the index for the points in the template. The index $k$ is chosen such that $\mathbf{p}_{k}^*$ is approximately at a distance $\delta_l$ from the axis $\mathbf{j}_B$ (i.e., along the red dashed lines shown in Fig. \ref{path-planning} (a)). Depending on which robot arm plays the active packing role, $\mathbf{p}_{k}^*$ is automatically chosen to the left or right of $\mathbf{j}_B$. The points $\mathbf{p}^{G}$ indicate the positions on the object to be grasped by the robot. These points are the corresponding points to $\mathbf{p}^L$ on the object's skeleton $\mathbf P^O$, and are computed as $\mathbf{p}^{G}=\hat{\mathbf p}^*_k$, for $k=1,\ldots,M$. The LEO's shaping behavior is achieved by driving $\mathbf p^G$ into $\mathbf p^L$. In contrast with inelastic linear deformable objects such as ropes or cords \cite{tang2018track}, LEOs have an intrinsic elastic energy that restores their shape to the original configuration. Thus, steadily placing the object in the box requires the assistant robot arm to fix the deformed object at a point $\mathbf p^F$ while the active arm moves to a new grasping point $\mathbf p^G$. To compute $\mathbf p^F$, our method first obtains two candidate points along the object that are approximately at a distance $\delta_f$ from the placing point $\mathbf p^L$. The fixing point is selected on the same side as the assistant robot arm with respect to the active robot, see Fig. \ref{path-planning} (b). \subsection{Action Primitives}\label{section:action primitives} As modelled in Sec. II-B, our method adopts two types of action primitives (for grippers and end-effectors) to compose high-level manipulation behaviors. The collection of action primitives for the 1-DOF grippers is as follows: \begin{equation} G = \{\textit{Open}, \textit{Close}\} = \{g_1, g_2\} \end{equation} where the flags $g_1 = 1$ and $g_2 = 0$ define the opening and closing actions of the grippers, respectively. The collection of five action primitives for the robotic end-effectors is as follows: \begin{align} A &= \{ \textit{Hover}, \textit{Approach}, \textit{Fix}, \textit{Leave}, \textit{Reset}\} \nonumber \\ &= \{ a_1, a_2, a_3, a_4, a_5\} \end{align} These end-effector action primitives are defined as follows: \begin{itemize} \item[$a_1$:] \textit{Hover}. The robot moves and stops above the reference point by an offset $\Delta h$. With this action, the robot is commanded with an end-effector target pose $\mathbf u = \left[x^*, y^*, z^*+\Delta h, \theta^* \right]^{\T}$. This action is needed to avoid collisions with the object, and is done as a preparation step to perform fix and grasp actions. \item[$a_2$:] \textit{Approach}. The robot descends to $z^*$ (viz. the height of the object's centerline). With this action, the robot is commanded with an end-effector target pose $\mathbf{u} = \left[x , y , z^*, \theta \right]^{\T}$. This action, in combination with \textit{Hover}, is needed to grasp and/or place the object by changing the gripper's configuration. \item[$a_3$:] \textit{Fix}. The robot descends to the object's surface, whose height is denoted by $z^* + \Delta f$. With this action, the robot is commanded as $\mathbf u = \left[x,y,z^*+\Delta f,\theta \right]^{\T}$. This motion is needed to push the deformed elastic object and keep it inside the box, thus preventing it from returning to its original shape. \item[$a_4$:] \textit{Leave}. The robot returns to its initial height. With this action, the robot is commanded with an end-effector target pose $\mathbf u = \left[x , y , z(t_0) , \theta \right]^{\T}$. This action is needed to provide the robot with an obstacle-free region above the box's packing workspace. \item[$a_5$:] \textit{Reset}. The robot returns to its initial pose $\mathbf u = \mathbf x(t_0) $. This action is needed to visually observe the object with the top-view camera. \end{itemize}
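The mapping from each primitive to the commanded end-effector pose $\mathbf u$ can be summarised in code. The sketch below is illustrative only; the function and variable names are ours, not the platform's interface:

\begin{verbatim}
def primitive_pose(a, ref_pose, dh, df, z0):
    """Target pose u = [x, y, z, theta] for a given primitive.
    ref_pose: reference pose [x*, y*, z*, theta*];
    dh, df: hover and fixing height offsets; z0: initial height.
    (For simplicity the sketch keeps the reference x, y throughout.)"""
    x, y, z, th = ref_pose
    if a == "Hover":     # stop above the reference point
        return [x, y, z + dh, th]
    if a == "Approach":  # descend to the centerline height z*
        return [x, y, z, th]
    if a == "Fix":       # push down onto the object's surface
        return [x, y, z + df, th]
    if a == "Leave":     # return to the initial height
        return [x, y, z0, th]
    if a == "Reset":     # handled separately: u = x(t0)
        return None
\end{verbatim}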
\subsection{State Machine}\label{section: state machine} \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{figures/action-planner.pdf} \caption{Action planner for the packing task. High-level behaviors consist of robotic movements, e.g., grasping the object, placing it into the box, releasing the active robot after placing the object, and changing the identifier of the active robot. The outputs of the reference point generator serve as reference inputs to the behaviors. The specific action primitives determine the final target poses.} \label{state-machine} \vspace{-0.2cm} \end{figure} The proposed state machine to automatically pack the long linear elastic object has one periodic action planner loop (depicted in Fig. \ref{state-machine}), which is iterated while monitoring the object's state until the task is completed. This sequence of actions is performed by collaborative robotic arms, identified as \textit{Left} and \textit{Right} (see Fig. \ref{experimental setup}), that can alternate between an active role and an assistant role. The former is in charge of grasping and placing the object into the box; the latter is in charge of immobilizing it while the arms change roles. Our method uses a collection of robot roles $R = \{ r, \overline r\}$, where $r$ specifies which robot takes up the active packing role in a given cycle of the action planner loop. The identifier $r=\textit{Left}/\textit{Right}$ is automatically determined based on the proximity of $\mathbf P^O$ to either the \textit{Left} or \textit{Right} robot. The assistant arm in the same cycle is denoted as $\overline{r}$, which, for our dual-arm configuration, simply represents the opposite arm, e.g., for $r=\textit{Right}$, $\overline{r}=\textit{Left}$. The proposed state machine in Fig. \ref{state-machine} is composed of two layers. The first layer contains four high-level behaviors, namely, grasp the object, place it into the box, release the active robot, and change the active robot. The inputs of these high-level behaviors are the reference points $\mathbf p^G$, $\mathbf p^L$ and $\mathbf p^F$, and the reference poses $\mathbf x^*$. The second layer contains several low-level robot movements; these are modelled as elements in the collection of states: \begin{equation} S = \{ s : s = m(R, G, A) \} \end{equation} where the triple $m(R, G, A)$ defines the robot movements as a sequence of the following two commands: (i) first, the active/assistant robot $r/\overline{r}\in R$ performs the gripper action primitive $g_i\in G$; (ii) then, the robot performs the end-effector action primitive $a_j\in A$. The result of the robot movement $m(R, G, A)$ corresponding to each state $s$ is evaluated with the transition function: \begin{equation} T(s)= \left\{ \begin{array}{l} 1,\,\, \textrm{once the robot completes the movement}, \\ 0,\,\, \textrm{otherwise.}\\ \end{array} \right. \end{equation} The proposed action planner stops when no object points are detected outside the box. The packing task succeeds when the shape difference $e$ converges to $e^*$.
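Combining the roles, the gripper flags and the primitives, one cycle of the planner can be outlined as follows. This is only an illustrative sketch: \texttt{execute} stands in for the robot movement $m(R, G, A)$ and blocks until the transition function returns $1$, and the sketch alternates the roles every cycle, whereas the actual planner selects the active arm from the proximity of $\mathbf P^O$.

\begin{verbatim}
def packing_loop(active, assistant, refs, done, execute):
    while not done():                  # object points outside the box?
        p_G, p_L, p_F = refs(active)   # reference point generator
        # Behavior 1: grasp the object.
        execute(active, "Open", "Hover", p_G)
        execute(active, "Open", "Approach", p_G)
        execute(active, "Close", "Leave", p_G)
        # Behavior 2: place it into the box.
        execute(active, "Close", "Hover", p_L)
        execute(active, "Close", "Approach", p_L)
        # Behavior 3: release the active robot.
        execute(assistant, "Close", "Hover", p_F)
        execute(assistant, "Close", "Approach", p_F)
        execute(active, "Open", "Leave", p_L)
        # Behavior 4: change the active robot for the next cycle.
        execute(active, "Close", "Fix", p_F)
        execute(assistant, "Close", "Leave", p_F)
        execute(assistant, "Open", "Reset", None)
        active, assistant = assistant, active
\end{verbatim}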
\section{Results}\label{section: results} \subsection{Experimental Setup}\label{section: result: experiments} \begin{table}[t!] \centering \caption{Properties of the objects in the experiments} \begin{tabular}{ ccc } \toprule Material & Density (kg/m$^3$) & Young's Modulus (MPa)\\ \midrule Natural Latex (NL) & 67.23 & 0.032\\ Polyurethane Foam (PUF) & 38.76 & 0.185\\ Silicone Foam (SCF) & 62.50 & 0.325\\ Polyethylene Foam (PEF) & 16.17 & 0.992\\ \bottomrule \end{tabular} \label{properties} \end{table} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/materials.jpg} \caption{The widths/diameters $d_O$ and cross-sections of objects made of different materials. The measurements of $d_O$ are 38.0 mm, 30.0 mm, 34.0 mm, and 98.0 mm. The cross-sections are circle, square, ring, and circle.} \label{material-list} \end{figure} We conduct an experimental study to validate the proposed method. Fig. \ref{experimental setup} shows the developed experimental platform, which is composed of two 6-DOF robot manipulators (UR3) equipped with active grippers (Robotiq) that drive customized object grasping fixtures, and a top-view LiDAR camera (Intel RealSense L515) that captures real-time point clouds of the workspace. A table is placed between the two robot arms, with the packing box rigidly attached to its surface. The robotic arms are controlled with a Linux-based PC (running Ubuntu 16.04), with ROS and RViz used for communication and visualization \cite{ros}. Image processing is performed with the OpenCV libraries \cite{opencv_library}. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/exp-results-2.pdf} \caption{Packing 13 objects $\mathcal{O}(\eta, l_O, d_O)$ of different lengths into two boxes. PEF: Polyethylene Foam, PUF: Polyurethane Foam, SCF: Silicone Foam, NL: Natural Latex.} \label{case-list} \end{figure} To test the robustness of our method for packing LEOs, we use 13 objects with different elastic properties, cross-section shapes, and object lengths. The density and Young's modulus of the object materials are listed in Table \ref{properties}; the cross-sections, widths and diameters of the objects are shown in Fig. \ref{material-list}; the 13 objects and their lengths are shown in Fig. \ref{case-list}. Twelve of these linear elastic objects are made of three materials: polyethylene foam (PEF), polyurethane foam (PUF), and silicone foam (SCF); these objects have four lengths: 558 mm, 600 mm, 830 mm, and 972 mm. The thirteenth object is a pillow made of natural latex (NL) with a length of 600 mm. The objects in this study are all packed into boxes of two different sizes, viz. $\mathcal{B}(270,207,80)$ and $\mathcal{B}(314,232,80)$ (given in mm units). \begin{table} \centering \caption{Accuracy of geometric property estimation} \begin{tabular}{ ccc } \toprule Object & Length (\%) & Width/Diameter (\%)\\ \midrule $\mathcal{O}(PEF, 558, 38)$ & 98.08 $\pm$ 1.68 & 93.16 $\pm$ 6.32\\ $\mathcal{O}(PEF, 600, 38)$ & 97.73 $\pm$ 1.23 & 93.95 $\pm$ 4.74\\ $\mathcal{O}(PEF, 830, 38)$ & 98.41 $\pm$ 1.00 & 91.05 $\pm$ 4.47\\ $\mathcal{O}(PEF, 972, 38)$ & 98.80 $\pm$ 0.81 & 91.58 $\pm$ 5.26\\ $\mathcal{O}(PUF, 558, 30)$ & 97.83 $\pm$ 1.67 & 91.34 $\pm$ 7.33\\ $\mathcal{O}(PUF, 600, 30)$ & 97.77 $\pm$ 1.60 & 96.11 $\pm$ 6.07\\ $\mathcal{O}(PUF, 830, 30)$ & 98.46 $\pm$ 1.22 & 98.08 $\pm$ 5.02\\ $\mathcal{O}(PUF, 972, 30)$ & 98.91 $\pm$ 0.83 & 98.67 $\pm$ 6.10\\ $\mathcal{O}(SCF, 558, 34)$ & 97.80 $\pm$ 1.16 & 93.82 $\pm$ 5.59\\ $\mathcal{O}(SCF, 600, 34)$ & 98.18 $\pm$ 1.23 & 88.82 $\pm$ 7.35\\ $\mathcal{O}(SCF, 830, 34)$ & 98.83 $\pm$ 0.99 & 87.65 $\pm$ 7.65\\ $\mathcal{O}(SCF, 972, 34)$ & 98.89 $\pm$ 0.85 & 92.64 $\pm$ 6.47\\ $\mathcal{O}(NL, 600, 98)$ & 99.27 $\pm$ 1.90 & 96.22 $\pm$ 3.16\\ \bottomrule \end{tabular} \label{tab: initialization} \end{table} \subsection{Vision-Based Computation of the Objects' Geometry} We validate the accuracy of the model \eqref{equ:geometry} describing the objects' geometry (i.e., the length $l_O$ and width $d_O$) by collecting 10 measurements of their initial configuration over the table (similar to the one depicted in Fig. \ref{experimental setup}) and comparing the calculated dimensions with the ground truth, see Table \ref{tab: initialization}. The length $l_O$ is computed from the ordered point cloud $\mathbf{P}^O$, whereas $d_O$ is computed from the raw point cloud.
The results in Table \ref{tab: initialization} show that the estimated object length $l_O$ and width $d_O$ are slightly smaller than the ground truth, which is caused by the discretization of the continuous objects' arc-length and the partial view of their surface. This, however, does not affect the proposed manipulation strategy, as demonstrated in the experimental results that follow. \subsection{Similarity Analysis of the Reference Template} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/points.pdf} \caption{The comparison of the raw feedback and the offline reference template.} \label{points} \end{figure} \begin{table}[t!] \centering \caption{Performance of the method in packing tasks} \begin{tabular}{ cccc } \toprule Object & Mean $\mu$ (mm) & Variance $\sigma^2$ (mm$^2$) & 3-$\sigma$ Confidence Interval (\%)\\ \midrule $\mathcal{O}(PEF, 558, 38)$ & 17.5 & 0.299 & 99.79\\ $\mathcal{O}(PEF, 600, 38)$ & 18.0 & 0.411 & 99.84\\ $\mathcal{O}(PEF, 830, 38)$ & 18.1 & 1.087 & 99.87\\ $\mathcal{O}(PEF, 972, 38)$ & 18.5 & 0.874 & 99.42\\ $\mathcal{O}(PUF, 558, 30)$ & 12.7 & 0.540 & 97.48\\ $\mathcal{O}(PUF, 600, 30)$ & 15.1 & 0.078 & 98.61\\ $\mathcal{O}(PUF, 830, 30)$ & 16.0 & 0.300 & 97.85\\ $\mathcal{O}(PUF, 972, 30)$ & 15.1 & 0.360 & 98.03\\ $\mathcal{O}(SCF, 558, 34)$ & 19.2 & 2.049 & 99.64\\ $\mathcal{O}(SCF, 600, 34)$ & 19.6 & 0.130 & 99.73\\ $\mathcal{O}(SCF, 830, 34)$ & 17.0 & 1.120 & 98.89\\ $\mathcal{O}(SCF, 972, 34)$ & 20.3 & 1.435 & 99.85\\ $\mathcal{O}(NL, 600, 98)$ & 50.9 & 1.984 & 99.94 \\ \bottomrule \end{tabular} \label{tab: performance} \end{table} To verify whether the designed $Spiral$ shape matches the packed object in the box, we compute the similarity between the point clouds of the reference template and the raw feedback of the object. To this end, we compute the set of minimum Euclidean distances from every point in $\mathbf{P}$ to the points in $\mathbf{P}^*$ as follows: \begin{equation} D=\{ \min_j \| \mathbf{p}_i - \mathbf{p}_j^* \|_2: \mathbf{p}_i \in \mathbf{P}, \mathbf{p}_j^* \in \mathbf{P}^* \} \label{Euclidean} \end{equation} If the shape of the packed object matches $Spiral$ well, $D$ follows a Gaussian distribution $N(\mu,\sigma^2)$, with mean $\mu$ equal to the radius or half-width of the object, $\mu\approx\frac{d_O}{2}$, and standard deviation $\sigma\approx 0$. The average value of the set $D$ is equal to the error $e_{in}$. Fig. \ref{points} presents the raw feedback point clouds $\mathbf P$ and the offline reference template $\mathbf P^*$. A statistical analysis of the similarity is shown in Table \ref{tab: performance}, which demonstrates that the mean distances satisfy $\mu\approx\frac{d_O}{2}$ and the variances $\sigma^2$ are small.
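The computation of $D$ and its statistics is straightforward (an illustrative NumPy sketch):

\begin{verbatim}
import numpy as np

def template_similarity(P, P_star):
    """P: (n, 3) packed-object cloud; P_star: (M, 3) template.
    Returns the mean and variance of the minimum-distance set D."""
    d = np.linalg.norm(P[:, None, :] - P_star[None, :, :], axis=2)
    D = d.min(axis=1)            # distance of each feedback point
    return D.mean(), D.var()     # compare the mean against d_O / 2
\end{verbatim}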
\subsection{Generation of Reference Points}\label{section: result: target planning} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/planning-result.png} \caption{Reference point generator of the action planner. The light green points are the raw feedback points $\mathbf{P}$. Blue indicates the start, red indicates the end, and the color gradient indicates the order of the points. The grasping and placing points are orange, and the fixing points are light-blue.} \label{path-planning-result} \end{figure} In this section, we take $\mathcal{O}(PEF, 972, 38)$ as an example. The constant distance from $\mathbf{p}^{L}$ to $\mathbf{j}_B$ is set as $\delta_l = 50$ mm, and the distance from $\mathbf{p}^{F}$ to $\mathbf{p}^{L}$ as $\delta_f = 100$ mm. Based on these parameters, the generator automatically computes the placing points $\mathbf p^L=\mathbf p_k^*$ and grasping points $\mathbf p^G=\hat{\mathbf p}^*_k$, for the indices $k\in\{26,64,110\}$; the fixing points $\mathbf p^F$ are determined within each cycle based on the locations of the robots relative to each other. Fig. \ref{path-planning-result} depicts the reference template and the ordered skeleton, where $\mathbf p^G$ and $\mathbf p^L$ are represented by orange points, whereas $\mathbf p^F$ by light-blue points. In this figure, we can see how the active robot grasps the object at $\mathbf p^G$ (orange points on the object) and places it at $\mathbf p^L$ (corresponding orange points on the template). The figure also shows how the assistant robot fixes the object by pushing it at $\mathbf p^F$ (corresponding light-blue point on the reference template), which enables the active robot to be released and conduct the next action. \subsection{Shape Difference} \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{figures/ds_400x640.pdf} \caption{The shape differences (inside the box, outside the box, and total) during manipulation.} \label{shape-difference} \end{figure} This section validates the performance of the proposed automatic packing method with the 13 different objects shown in Fig. \ref{case-list}; each experiment is conducted 10 times\footnote{\href{https://youtu.be/ZGJcRE2nqBc}{https://youtu.be/ZGJcRE2nqBc}}. To quantify the progress and accuracy of the packing task, we compute the shape difference with visual feedback; this metric is only computed at the beginning of every loop, as there are occlusions and noisy points while the robots are moving. The blue, green, and red solid curves shown in Fig. \ref{shape-difference} respectively represent the errors $e_{in}$, $e_{out}$ and $e$ obtained from ten automatic packing experiments. The red dashed line represents the errors' ideal value $e^* = \frac{d_O}{2}$. The blue curves start from zero when the object is completely outside the box before the automatic manipulation, and converge to $e^*$ after the object has been fully packed. The green curves start from large initial values when the objects lie on the table with an undeformed shape, and converge to zero when there are no points outside of the box after packing has been completed. The red curves, which represent the weighted average of the blue and green curves, start from the same initial values as the green curves, and monotonically decrease to $e^*$. These results quantitatively demonstrate that the proposed manipulation strategy can successfully deform and manipulate various types of LEOs into compact boxes. \subsection{High-Level Behaviors Constructed with Action Primitives} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/collision.pdf} \caption{Robotic movements with two modes of \textit{Hover}. In (a-1)--(a-3), the robot moves at a constant height and touches the object, causing the grasp to fail. In (b-1)--(b-3), the robot smoothly moves along the object with the proposed strategy.} \label{collision} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/follow-path.pdf} \caption{Path planning for robots following the object without collision.} \label{follow-path} \end{figure} Generally, it is sufficient for the robots to move at a constant height to avoid collisions with the edge of the cuboid-shaped bin. However, as the ordered skeleton $\mathbf P^O$ is higher than the box, this constant height may produce collisions with the object, as shown in Fig. \ref{collision} (a-1)--(a-3). To deal with this problem, we propose the \textit{Hover} action primitive, which guides the robot to move to the nearest point above the object and then to move along the object's curvature until the gripper reaches the target grasping point $\mathbf p^G$, see Fig. \ref{follow-path}. This action primitive enables the robot to successfully grasp LEOs with complex bent geometries while avoiding collisions with them, as demonstrated in Fig. \ref{collision} (b-1)--(b-3). The four high-level behaviors (i.e., grasp the object, place it into the box, release the active robot, and change the active robot) are shown in Figs. \ref{ap-grasping}--\ref{ap-changehand}. The experiment in Fig. \ref{ap-grasping} shows how the robot autonomously grasps the object and reaches a safe height from the table.
This figure shows that the \textit{Left} robot performs \textit{Hover} above $\mathbf{p}^{G}$, then \textit{Approach} towards the grasping point $\mathbf{p}^{G}$ with an open gripper, and finally performs \textit{Close} and \textit{Leave}, moving from $\mathbf{p}^{G}$ towards the initial height $z(t_0)$. The second high-level behavior is depicted in Fig. \ref{ap-locating}. The purpose of this sequence of actions is to deform and place the grasped object at a specific position within the box. The figure shows the initial configuration where the object is grasped by the \textit{Left} robot and the inside-the-box part is fixed by the \textit{Right} robot; then, the \textit{Left} robot performs \textit{Hover} and \textit{Approach} towards $\mathbf{p}^{L}$, while holding the object. The third high-level behavior (release the active robot) is depicted in Fig. \ref{ap-releasehand}. The purpose of these movements is to fix the object's shape while the active robot that is holding the object opens its gripper. The figure shows the initial configuration where the object (already inside the box) is grasped by the \textit{Left} robot; then, the \textit{Right} robot performs \textit{Hover} and \textit{Approach} towards $\mathbf{p}^{F}$ with a closed gripper. The \textit{Left} robot then opens its gripper (\textit{Open}), leaves the object (\textit{Leave}), and returns to the initial height $z(t_0)$. The fourth high-level behavior (change the active robot) is depicted in Fig. \ref{ap-changehand}. The purpose of these movements is to switch the active robot's identifier from \textit{Left} to \textit{Right}, in preparation for the \textit{Right} robot to conduct the next grasping task. The figure shows the initial state where the \textit{Left} robot is free and the \textit{Right} robot is performing \textit{Fix} onto the object; then, the \textit{Left} robot performs the \textit{Fix} action while the \textit{Right} robot performs \textit{Leave} and then \textit{Reset} to return to its initial position, which completes one cycle of the action planner loop. We take $\mathcal{O}(PEF, 972, 38)$ as a representative example to demonstrate the performance of the method. Fig. \ref{action primitives} depicts the complete process of the packing task, which consists of three cycles, corresponding to the three rows; each thumbnail in the figure presents a movement $m(R,G,A)$ conducted by the robot arms. The periodic nature of the action planner is illustrated by the fact that the three cycles share the same first three high-level behaviors, viz., grasp the object, place it into the box, and release the active robot. The first and second cycles only differ in the fourth high-level behavior, i.e., change the active robot, as the active robots in these two cycles are the same (thus, there is no need to change the active robot). The packing process ends in the third cycle, where the object has been completely packed into the box. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/grasping.pdf} \caption{A grasping manipulation (Loop 1) is composed of action primitives given a grasping point $\mathbf{p}^{G}_1$: (a) $m (\textit{Left}, \textit{Open}, \textit{Hover})$, (b) $m (\textit{Left}, \textit{Open}, \textit{Approach})$, (c) $m (\textit{Left}, \textit{Close}, \textit{Leave})$.} \label{ap-grasping} \end{figure} \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{figures/locating.pdf} \caption{A placing manipulation (Loop 2) is composed of action primitives given a placing point $\mathbf{p}^{L}_2$: (a) initial state, (b) $m (\textit{Left}, \textit{Close}, \textit{Hover})$, (c) $m (\textit{Left}, \textit{Close}, \textit{Approach})$.} \label{ap-locating} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/releasehand.pdf} \caption{The assistant robot helps to release the active robot that is grasping the object after placing it in the box (Loop 2), given a fixing point $\mathbf{p}^{F}_2$: (a) initial state, (b) $m (\textit{Right}, \textit{Close}, \textit{Hover})$, (c) $m (\textit{Right}, \textit{Close}, \textit{Approach})$, (d) $m (\textit{Left}, \textit{Open}, \textit{Leave})$.} \label{ap-releasehand} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{figures/changehand.pdf} \caption{The changing-hand manipulation (Loop 2) is composed of action primitives conducted by both robots based on their current positions: (a) initial state, (b) $m (\textit{Left}, \textit{Close}, \textit{Fix})$, (c) $m (\textit{Right}, \textit{Close}, \textit{Leave})$, (d) $m (\textit{Right}, \textit{Open}, \textit{Reset})$.} \label{ap-changehand} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=18cm]{figures/action-planning-exp.jpg} \caption{Experiment process and action primitives of packing $\mathcal{O}(PEF, 972, 38)$. The rows represent three action planner loops. The columns represent the behaviors of the robots. The thumbnails demonstrate the robotic movements. The first two loops are mainly executed by the left arm (grasping and placing) and assisted by the right arm (fixing). The third loop is mainly executed by the right arm (grasping and placing) and assisted by the left arm (fixing). } \label{action primitives} \end{figure*} \section{Conclusions}\label{section: conclusions} In this work, we propose a complete method to pack long linear elastic objects into compact boxes. First, we design a hybrid geometric model, including an online 3D-vision method and an offline reference template, to tackle occlusions during packing manipulations under a single-view camera. The online 3D-vision method extracts the objects' geometric information in real time, while the offline reference template is generated from the designed shape $Spiral$. The effectiveness of $Spiral$ is demonstrated by the high similarity between the offline reference template and the shape of the packed object. Then, we propose a method to plan reference points for grasping, placing, and fixing. Next, we propose an action planner that composes the defined action primitives into high-level behaviors and achieves the packing task by repeating a periodic action planner loop. Finally, extensive experiments are conducted to verify the generality of our proposed method for various objects with different elastic materials, lengths, densities, and cross-sections. Although the method is designed for packing tasks, the defined action primitives and the reference point generator can be used in other manipulation tasks (e.g., object sorting, multi-object assembly, etc.). Also, the proposed perception method works without markers and decreases computation time by extracting minimal geometric information of the objects. A limitation of our method is that the perception method does not consider situations where the object is outside the camera's field of view.
A possible solution is to employ a multi-view visual system to perceive the object. For future work, we plan to explore multi-view vision and to extend the framework to other comprehensive tasks involving more types of objects (e.g., rigid, elastic, articulated), as well as to optimize the packing to save space. Our team is currently working in this challenging direction. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \IEEEPARstart{C}{onvolutional} Neural Networks (CNNs) have been employed to learn local image features through an isotropic mechanism of their receptive fields \cite{luo2016understanding}. Typically, CNNs struggle to deal with global contextual features. Therefore, conventional CNNs are unable to capture useful structural information and to deal with diverse backgrounds and unrepresentative regions. Recent CNN studies have proposed various methods to find the optimal receptive field. However, they implement conventional architecture search methods in coarse search spaces, a mechanism that loses important fine-grained inner structures \cite{Liu_2021_CVPR}. The flexibility of CNNs offers an excellent opportunity to solve this problem by designing different architectures \cite{szegedy2015going,he2016deep,huang2017densely,tan2019efficientnet}. These models make convolutional neural networks deeper and wider. They are trained on large-scale image datasets, e.g., ImageNet, and employ different data augmentation techniques to achieve high accuracy and mitigate overfitting \cite{masi2016we}. However, networks of increasing depth and width are hard to train due to the vanishing gradient problem. Moreover, patch-based models, especially those based on Graph Convolutional Networks (GCNs), have been introduced to capture global visual features \cite{zhou2018graph,gao2019graph,hamdi2020flexgrid2vec}. However, recent GCN models typically suffer from the increasing size and complexity of the network parameters and computations. In this paper, we propose a novel patch-based method to compute global context features that capture significant complex structures in images. The proposed Global Context Convolutional Network (GCCN) offers a powerful yet straightforward approach to augment and normalise the classical CNN feature vectors. GCCN computes features from the local maxima of image patches based on CNN feature maps, as visualised in Fig. \ref{GCCN2}. Local maxima convolutional features represent pixels with high visual sharpness after the convolution and pooling operations. Therefore, connecting local maxima features from different image regions tends to produce a discriminative feature vector. GCCN achieves high accuracy in image classification. We explore the potential impact of using GCCN to achieve accurate few-shot learning, as in Fig. \ref{GCCN1}. Learning from a few samples is a challenging task in visual representation learning. Current methods do not offer satisfactory solutions for few-shot learning~\cite{vinyals2016matching}. Naive methods that retrain the model on the new data overfit severely \cite{snell2017prototypical}. The overfitting problem leads to limited scalability to learn new classes and poor applicability to fit new unseen or rare examples \cite{sung2018learning}. Existing methods that mitigate overfitting, such as batch and layer normalisation \cite{ioffe2015batch,ba2016layer}, usually fail in the few-shot setting \cite{antoniou2017data}. GCCN produces discriminative visual features with in-depth attention to local and global contexts. The learnt feature space is utilised to compute the centroid of each class, similar to \cite{snell2017prototypical,mensink2013distance,rippel2015metric}.
The proposed \textit{GCCN} outperforms recent works such as VAMPIRE (WACV, 2020) \cite{nguyen2020uncertainty}, APL (ICLR, 2019) \cite{ramalho2019adaptive}, SImPa (TPAMI, 2020) \cite{nguyen2020pac}, LaplacianShot (ICML, 2020) \cite{ziko2020laplacian}, and Hyperbolic ProtoNets (CVPR, 2020) \cite{khrulkov2020hyperbolic}. {Figure \ref{GCCN2} shows the components of the proposed GCCN. We extract global contextual features from convolutional maps. First, GCCN uses these maps to compute the visual embedding and selects global context features from image patches. We then apply feature vector augmentation and normalisation. A fully connected layer uses the final GCCN vectors to perform image classification. On the other hand, a head model for few-shot learning computes the member class distributions based on the metric distance measures, as shown in Fig. \ref{GCCN2} (d).} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.99\linewidth]{GCCN-2.png} \end{center} \caption{GCCN extracts global context information from CNN feature maps. a) Convolutional feature maps are utilised to produce the feature embedding and global context features. b) Feature vector augmentation and normalisation. c) Global context feature vector extraction from image patches. d) A head model for few-shot learning computes the member class distributions based on the metric distance measures.} \label{GCCN2} \end{figure*} The proposed GCCN is simple yet efficient, enhancing CNN accuracy by computing small context feature vectors. GCCN is also designed to be flexibly applied to different CNN architectures. The main contributions are as follows: \begin{itemize} \item A novel method to augment and normalise CNN features with global context information. GCCN tends to produce useful feature embeddings with attention to both local and global image structures. \item An implementation of the proposed \textit{GCCN} as a base model for the state-of-the-art prototypical and matching networks. \textit{GCCN} improves their accuracy by up to $30\%$ in a variety of few-shot learning benchmarks. \item A comparative study on image classification and few-shot learning tasks with well-known baseline architectures and the state-of-the-art on seven benchmark datasets, namely CIFAR-10, CIFAR-100, STL-10, SVHN, Omniglot, MiniImageNet and CUB-200. \end{itemize} {The rest of this paper is organised as follows. Section \ref{rw} reviews the related literature on image classification and few-shot learning. Section \ref{gccn} introduces the proposed GCCN and its components. Section \ref{results} presents the experimental work and discusses the accuracy of GCCN compared to baseline and state-of-the-art methods.} \section{Related Work}\label{rw} \paragraph{Image Classification.} Visual representation learning has advanced image classification tasks. Recent models offer multiple deep learning methods to compute representative visual features, such as ResNets \cite{he2016deep,huang2016deep,he2016identity}, DenseNet \cite{huang2017densely}, and EfficientNet \cite{tan2019efficientnet}. Such models offer alternative methods to compute better convolutional features in comparison to the classical CNN. They propose new architecture designs based on the CNN depth (number of layers) and width (number of neurons in each layer). These models are typically pre-trained on the ImageNet dataset and accompanied by image augmentation techniques. Recent methods have also introduced useful loss functions to learn a discriminative feature space through effective gradients.
Deep networks, such as ResNet and EfficientNet, have achieved state-of-the-art accuracy in various image classification tasks. They have also inspired multiple recent studies to propose new CNN-based visual representation learning methods for different vision applications. Although they offer deep, wide, and effective architectures, they are still limited by complex visual structures and the available resources. \paragraph{Few-Shot Learning.} Few-shot learning has different research directions, including metric learning \cite{vinyals2016matching,snell2017prototypical,sung2018learning}, transfer learning \cite{chen2019closer}, meta-learning \cite{finn2017model}, and data augmentation approaches \cite{antoniou2017data,wang2018low}. Matching networks \cite{vinyals2016matching} train an attention memory-based classifier over episodes of support and query sets. They use an LSTM to update the few-shot classifier at each episode to generalise to the test set. Although this approach is complex, it still relies on distance similarity metrics. Ravi et al. \cite{Sachin2017Optimization} employed this episodic strategy for meta-learning in few-shot learning. Prototypical networks \cite{snell2017prototypical} are proposed to overcome the few-shot overfitting problem. They are designed to learn the class centroids, or prototypes, in the feature space. These prototypes are computed as the means of non-linear CNN-based feature embeddings. Prototypical networks have been employed in multiple recent works, such as Hyperbolic ProtoNets \cite{khrulkov2020hyperbolic}. Their simple design enables further research to extend them. In~\cite{wang2018low}, both prototypical and matching networks are combined with hallucinated image augmentation via a generative model. However, data augmentation methods offer limited realistic image variations; new research is needed to develop much broader augmentations. \section{Global Context Convolutional Network}\label{gccn} This paper proposes a novel method to learn global and local visual features through a straightforward yet effective approach. {In this section, we introduce the proposed architecture for two visual recognition tasks, namely image classification and few-shot learning, as visualised in Figs. \ref{GCCN2} and \ref{GCCN1}, respectively.} GCCN computes attention features from different image regions without the need for complex, wide or large architectures. The proposed \textit{GCCN} combines the CNN feature vector with global context features. Inspired by the metric learning of distances between the query and the support centroids \cite{snell2017prototypical}, our proposed methodology enhances few-shot learning by augmenting and normalising the CNN feature vectors with important global information. Our main contribution enables informative convolutional embeddings that represent both the local appearance and the global context of an image. This simple yet powerful representation tends to be helpful for both image classification and metric learning. Figs. \ref{GCCN1} and \ref{GCCN2} show the input images and the different components of the proposed algorithm. \subsection{GCCN for Visual Representation Learning} We propose a novel base model for visual feature extraction. This model augments the CNN feature embeddings with global context information to overcome the CNN limitation of ignoring global structural features due to the local receptive fields.
{We compute the global contextual feature vector as a concatenation of the local maxima from different regions of the image feature maps. This global vector is utilised to augment and normalise the conventional CNN embedding vector. The proposed \textit{GCCN} is an algorithmic framework that includes multiple components. Fig. \ref{GCCN2} describes the GCCN architecture, as follows: \begin{enumerate}[(a)] \item Dividing the CNN feature maps into non-overlapping patches. \item Extracting the classical CNN feature vectors. \item Computing the CNN-based global feature vectors, as defined in Definition \ref{GC}. \item Performing vector augmentation and normalisation between the convolutional and global context features. \end{enumerate} } {\begin{definition}\label{GC} \textbf{Global Context (GC)} features vector is defined as a set of key visual points computed based on CNN feature maps. These key points are selected as local maxima after the convolutional and max-pooling operations. \end{definition}} First, we extract the CNN feature vectors for both the $S$ and $Q$ images. Then, we extract the global context convolution feature vectors for each image. We divide the convolution feature map into small equal patches via a sliding window. The CNN feature maps of an image $I$ are computed by a filter kernel $K$ as follows: \begin{equation} conv(I,K)_{x,y} = \sum^h_{i=1} \sum^w_{j=1} \sum^c_{k=1} K_{i,j,k} I_{x+i-1,y+j-1,k} \end{equation} where $x$ and $y$ denote the coordinates of the image and $h$, $w$ and $c$ are the height, width and number of image channels. {This feature map is divided into small patches, and the local maxima, i.e., pixels with maximum values, are selected to form the global context feature vector $GC$, as defined in Definition \ref{GC}. The selected features are concatenated in one vector: \begin{equation} GC = \bigoplus_{W_i \in W} \max(W_i) \end{equation} where $W$ is the set of patches of the feature map and $\bigoplus$ denotes concatenation. The output $GC$ is concatenated with the CNN feature vector to perform the feature vector augmentation, as in Eq. \ref{GCconcat}. \begin{equation}\label{GCconcat} GCCN(I) = conv(I) \bigoplus GC \end{equation} This feature extraction mechanism effectively enables the CNN-based model to augment its local features with a set of globally informative context features.} {We propose to utilise the extracted GCCN features to normalise the CNN features. The Euclidean (also called Frobenius) norm is computed as in Eq. \ref{norm}: \begin{equation}\label{norm} \|V\|_{F}=\left[\sum_{i, j} \operatorname{abs}\left(a_{i, j}\right)^{2}\right]^{1 / 2} \end{equation} where $F$ denotes the Frobenius norm, $V$ is the feature vector and $a_{i, j}$ is an element of $V$. This vector normalisation method calculates the square root of the sum of the absolute squares of the elements. The output of the norm process is utilised to normalise the original or augmented CNN vector, as in Eq. \ref{GCnorm}: \begin{equation}\label{GCnorm} GCCN(I) = \frac{conv(I) \bigoplus GC}{\|GC\|_{F}} \end{equation}} We provide extensive experimental work comparing these different setups on four image classification datasets, including CIFAR-10, CIFAR-100, STL-10 and SVHN. Next, we use the augmented feature vectors to compute each class's prototype or feed them to the matching networks.
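To make this pipeline concrete, the patch-wise extraction, the augmentation of Eq. \ref{GCconcat} and the normalisation of Eq. \ref{GCnorm} can be sketched in a few lines of NumPy. The sketch assumes a single $(H, W)$ feature map and a fixed patch size; it is an illustration rather than the exact implementation:

\begin{verbatim}
import numpy as np

def gccn_vector(feature_map, conv_vector, patch=4):
    """feature_map: (H, W) CNN map after pooling;
    conv_vector: flattened classical CNN embedding."""
    H, W = feature_map.shape
    gc = []
    # Slide a non-overlapping window and keep each local maximum.
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            gc.append(feature_map[i:i + patch, j:j + patch].max())
    gc = np.asarray(gc)                    # global context vector GC
    v = np.concatenate([conv_vector, gc])  # augmentation
    return v / np.linalg.norm(gc)          # Frobenius-norm scaling
\end{verbatim}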
\subsection{GCCN for Few-shot Learning} {Deep learning methods require large datasets to be trained and fitted across a large number of parameters. The available training datasets are limited in many realistic scenarios, leading to poor model generalisation on other test data. Few-shot learning enables classification models to be trained on a few samples; it can be one-shot or more, depending on how many samples are available per class. It offers data-efficient learning and a more efficient approach with regard to fine-tuning and model adaptation \cite{antoniou2019how}.} We define few-shot learning as mapping a support set of small $k$ samples $S = \{(x_i, y_i)\}^k_{i=1}$ to a classifier $c_S(\hat x)$. That classifier is given a test image $\hat x$ to find the probability distribution over the $\hat y$ output labels. Fig. \ref{GCCN1} shows the main components of the proposed few-shot image classification architecture using \textit{GCCN}. The figure incorporates four main sections: the input query and support image sets, the extraction of the CNN features, the vector augmentation, and the learning process of the probability distribution between the query vector and the support prototypes. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.99\linewidth]{GCCN.png} \end{center} \caption{{The proposed architecture: global context convolutional neural networks for few-shot learning. a) Support $S$ and query $Q$ sets. b) Extraction of the convolutional features. c) Global context feature vector based on the CNN feature maps. d) CNN feature vector augmentation with the global feature vector. e) A head model for few-shot learning. Prototypical networks are used to compute the class prototypes.}} \label{GCCN1} \end{figure*} \paragraph{Prototypical Networks.} The output GCCN feature vectors are fed into a metric-based few-shot learning model. We use the state-of-the-art prototypical networks for their simplicity and high accuracy. Prototypes are defined as the estimated mean of the vectors of each class. These vectors are computed over the support set $S$ to measure the distance between them and the vector of the query image $Q$. The latter is assigned to the closest class mean or prototype. The works in \cite{mensink2013distance,rippel2015metric} use multiple prototypes per class. However, having more than one prototype in each class requires a partitioning function to group each class of support points. In this paper, we only select one prototype per class, similar to \cite{snell2017prototypical}. The embedded query vector is classified through a softmax over the distances to the class prototypes: \begin{equation} p_\phi(y=k \mid x) = \frac{\exp\left(-d(f_\phi(x), \mu_k)\right)}{\sum_{k'} \exp\left(-d(f_\phi(x), \mu_{k'})\right)} \end{equation} where $d$ is a distance function, $f_\phi$ a feature vector embedding function and $\mu_k$ is the prototype, as in Eq. \ref{mu}: \begin{equation}\label{mu} \mathbf{\mu}_{k}=\frac{1}{\left|S_{k}\right|} \sum_{\left(\mathbf{x}_{i}, y_{i}\right) \in S_{k}} f_{\phi}\left(\mathbf{x}_{i}\right) \end{equation}
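A minimal sketch of this prototypical head (illustrative NumPy; the embedding arrays and label encoding are assumptions):

\begin{verbatim}
import numpy as np

def prototypes(support_vecs, support_labels, n_classes):
    """Class prototypes: mean embedded support vector per class."""
    return np.stack([support_vecs[support_labels == k].mean(axis=0)
                     for k in range(n_classes)])

def classify(query_vec, protos):
    """Softmax over negative Euclidean distances to the prototypes."""
    logits = -np.linalg.norm(protos - query_vec, axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()
\end{verbatim}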
\paragraph{Matching Networks.} We have also implemented \textit{GCCN} under the state-of-the-art matching networks. They utilise a weighted nearest-neighbour classifier over the embedding space. $S$ is modelled as a sequence, and $Q$ is embedded within it by a bidirectional long short-term memory (LSTM) network. As above, the one-shot learning task maps a support set of small $k$ samples $S = \{(x_i, y_i)\}^k_{i=1}$ to a classifier $c_S(\hat x)$ that, given a test image $\hat x$, finds the probability distribution over the $\hat y$ output labels. The mapping is defined as follows: \begin{equation}\label{P} S \rightarrow c_S(\hat x) \Longleftrightarrow P(\hat y|\hat x, S) = \sum^k_{i=1} a(\hat x, x_i)y_i \end{equation} where $P$ is parameterised by a neural network to make the label $\hat y$ prediction, and $a$ is an attention mechanism acting as a kernel density estimation over $X \times X$. The $P$ in Eq. \ref{P} depends on $a$, the attention mechanism that fully controls the classifier. This is simply computed using the softmax over the cosine distance $c$, as in Eq. \ref{a}: \begin{equation}\label{a} a(\hat x,x_i) = \frac{e^{c(f(\hat x),g(x_i))}}{\sum^k_{j=1} e^{c(f(\hat x),g(x_j))}} \end{equation} where $f$ and $g$ are embedding functions through neural networks to embed $\hat x$ and $x_i$. \paragraph{Distance and Similarity Metrics.} We tested the Euclidean distance method as follows: \begin{equation}\label{euclidean} E(p,q) = \sqrt {\sum _{i=1}^{n} \left( q_{i}-p_{i}\right)^2} \end{equation} where $p$ and $q$ are the support/prototype and query vectors. The distance function $E$ is the Pythagorean formula, and $q$ and $p$ are normalised by the Euclidean norm $L_2$: \begin{equation} ||p|| = \sqrt{p^2_1+p^2_2+...+p^2_n} = \sqrt{p\cdot p} \end{equation} Eq. \ref{euclidean} can now be written as follows: \begin{equation} ||q-p|| = \sqrt{(q-p)\cdot (q-p)} = \sqrt{||p||^2+||q||^2-2p\cdot q} \end{equation} We have also tested the cosine similarity method, as in Eq. \ref{cos}: \begin{equation}\label{cos} \cos ({\bf p},{\bf q})= {{\bf p} \cdot {\bf q} \over \|{\bf p}\| \|{\bf q}\|} = \frac{ \sum_{i=1}^{n}{{\bf p}_i{\bf q}_i} }{ \sqrt{\sum_{i=1}^{n}{({\bf p}_i)^2}} \sqrt{\sum_{i=1}^{n}{({\bf q}_i)^2}} } \end{equation} The distance function $d$ in the above-mentioned head models can be $E$ or $\cos$. We introduce an extensive experimental discussion on both distance methods in the next section.
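The attention classifier of Eqs. \ref{P} and \ref{a} reduces to a softmax over cosine similarities. A compact sketch (illustrative NumPy; one-hot support labels are assumed):

\begin{verbatim}
import numpy as np

def matching_prediction(query_vec, support_vecs, support_onehot):
    """support_vecs: (k, d) embedded support set;
    support_onehot: (k, n_classes) one-hot label matrix."""
    sims = support_vecs @ query_vec / (
        np.linalg.norm(support_vecs, axis=1) * np.linalg.norm(query_vec))
    a = np.exp(sims - sims.max())
    a /= a.sum()               # attention weights a(x_hat, x_i)
    return a @ support_onehot  # P(y_hat | x_hat, S)
\end{verbatim}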
\item SVHN contains $630,420$ house number images of size $32 \times 32$ pixels. The official SVHN split contains $73,257$ and $26,032$ images for training and testing, respectively. \end{itemize} We tested GCCN using ResNet-50 and Efficient-Net. GCCN is implemented over one convolutional block. We designed the convolutional block to return one feature map after the max pooling; this feature map is utilised to compute the global context vectors. Thus, GCCN can be repeated after each convolutional stage over the output pooled feature maps. We tested GCCN with one, two and three layers to explore the effect of depth, as listed in Table \ref{gccn_comp}. \paragraph{CIFAR-10.} Table \ref{CIFAR-10} lists the benchmark results of using GCCN on the CIFAR-10 dataset. GCCN achieves a high accuracy of $94.6\%$. GCCN outperforms state-of-the-art models such as DeepInfoMax \cite{hjelm2018learning}, which achieves 75.57\%. DeepInfoMax is a patch-based approach similar to our GCCN; although DeepInfoMax has a more complex architecture, GCCN achieves higher accuracy. Other recent methods are also outperformed by GCCN, such as ANODE \cite{NIPS2019_8577} (NeurIPS, 2019), CLS-GAN \cite{qi2020loss} (IJCV, 2020) and Mish \cite{misra2020mish} (BMVC, 2020). \begin{table}[!ht] \small \begin{center} \caption{Classification accuracy (top 1) results on CIFAR-10.}\label{CIFAR-10} \small \begin{tabular}{|p{5.5cm}|l|} \hline Model & Test Accuracy\\ \hline ANODE \cite{NIPS2019_8577} & 60.6\% \\ \hline DeepInfoMax (infoNCE) \cite{hjelm2018learning} & 75.57\%\\ \hline DenseNet \cite{huang2017densely} & 77.79\% \\ \hline DCGAN \cite{radford2015unsupervised} & 82.8\% \\ \hline Baikal \cite{gonzalez2020improved} & 84.53\% \\ \hline Scat + FC \cite{oyallon2017scaling} & 84.7\% \\ \hline MP \cite{hendrycks2016baseline} & 89.07\%\\ \hline CapsNet \cite{sabour2017dynamic} & 89.4\% \\ \hline ResNet-34 \cite{he2016deep} & 89.56\% \\ \hline APAC \cite{sato2015apac} & 89.70\% \\ \hline MIM \cite{liao2016importance} & 91.5\% \\ \hline CLS-GAN \cite{qi2020loss} & 91.7\% \\ \hline BinaryConnect \cite{courbariaux2015binaryconnect} & 91.7\% \\ \hline DSN \cite{lee2015deeply} & 91.8\% \\ \hline Mish \cite{misra2020mish} & 92.20\% \\ \hline \textbf{GCCN (ours)} & \textbf{94.6\%} \\ \hline \end{tabular} \end{center} \end{table} \paragraph{CIFAR-100.} Table \ref{CIFAR-100} shows experiment results comparing GCCN to state-of-the-art image classification methods on the CIFAR-100 dataset. We utilised ResNet-50 as a base model to compute the CNN features within GCCN. GCCN improves the ResNet-50 accuracy from $67.06\%$ to $79.77\%$. GCCN also outperforms deeper versions of ResNet, such as ResNet-1001, which achieves $77.3\%$. In the same fashion, GCCN outperforms recent state-of-the-art methods, such as MixMatch \cite{NEURIPS2019_MixMatch} with 74.10\%, Mish \cite{misra2020mish} with 74.41\% and DIANet \cite{huang2020dianet} with 76.98\%.
\begin{table}[!ht] \small \begin{center} \caption{Classification accuracy (top 1) results on CIFAR-100.}\label{CIFAR-100} \small \begin{tabular}{|p{5.5cm}|l|} \hline Model & Test Accuracy\\ \hline DSN \cite{lee2015deeply} & 65.4\% \\ \hline ResNet-50 \cite{he2016identity} & 67.06\% \\ \hline MIM \cite{liao2016importance} & 70.8\% \\ \hline MixMatch \cite{NEURIPS2019_MixMatch} & 74.10\% \\ \hline Mish \cite{misra2020mish} & 74.41\% \\ \hline Stochastic Depth \cite{huang2016deep} & 75.42\% \\ \hline Exponential Linear Units \cite{clevert2015fast} & 75.7\% \\ \hline DIANet \cite{huang2020dianet} & 76.98\% \\ \hline Evolution \cite{real2017large} & 77\% \\ \hline ResNet-1001 \cite{he2016identity} & 77.3\% \\ \hline \textbf{GCCN} & \textbf{79.77\%} \\ \hline \end{tabular} \end{center} \end{table} \paragraph{STL-10.} Table \ref{STL-10} lists benchmark results on the STL-10 dataset. GCCN achieves $95.41\%$ accuracy, outperforming state-of-the-art methods. It has better accuracy than different versions of DeepInfoMax, FixMatch and NSGANetV2. The ResNet accuracy also improved from $72.66\%$ to $95.41\%$ using the proposed GCCN. \begin{table}[!ht] \small \caption{Classification accuracy (top 1) results on the STL-10 dataset.}\label{STL-10} \centering \begin{tabular}{|p{6cm}|l|} \hline Model & Test Accuracy \\ \hline DeepInfoMax (JSD) \cite{hjelm2018learning} & 65.93\% \\ \hline DeepInfoMax (infoNCE) \cite{hjelm2018learning} & 67.08\% \\ \hline ResNet \cite{luo2020extended} & 72.66\% \\ \hline Second-order Hyperbolic CNN \cite{ruthotto2019deep} & 74.3\% \\ \hline SOPCNN (RA) \cite{NEURIPS20_FixMatch} & 88.08\% \\ \hline FixMatch \cite{NEURIPS20_FixMatch} & 89.59\% \\ \hline SESN \cite{sosnovik2019scale} & 91.49\% \\ \hline NSGANetV2 \cite{lu2020nsganetv2} & 92\% \\ \hline FixMatch (RA) \cite{NEURIPS20_FixMatch} & 92.02\% \\ \hline \textbf{GCCN} & \textbf{95.41\%}\\ \hline \end{tabular} \end{table} \paragraph{SVHN.} Table \ref{SVHN} compares the accuracy of GCCN and state-of-the-art methods on the SVHN dataset. GCCN ranks slightly below ReNet+GRU \cite{Moser2020DartsReNet}, FPID \cite{pmlr-v80-hoffman18a} and SE-b \cite{french2017self}. However, GCCN outperforms ReNet+LSTM, which processes the image as a sequence of patches. GCCN also outperforms other recent studies, such as DenseNet \cite{huang2017densely} with 94.19\%, WRN \cite{zagoruyko2016wide} with 94.50\%, E-ABS \cite{ju2020abs} with 89.20\% and DANN \cite{ganin2016domain} with 91.00\%.
\begin{table}[!ht] \small \begin{center} \caption{Classification accuracy (top 1) results on SVHN data.}\label{SVHN} \small \begin{tabular}{|p{6cm}|l|} \hline Model & Test Accuracy \\ \hline E-ABS \cite{ju2020abs} & 89.20\% \\ \hline Asymmetric Tri-Training \cite{saito2017asymmetric} & 90.83\% \\ \hline DANN \cite{ganin2016domain} & 91.00\% \\ \hline Associative Domain Adaptation \cite{haeusser2017associative} & 91.80\% \\ \hline SE-a \cite{french2017self} & 91.92\% \\ \hline CLS-GAN \cite{qi2020loss} & 94.02\% \\ \hline ReNet+LSTM \cite{Moser2020DartsReNet} & 94.10\% \\ \hline DenseNet \cite{huang2017densely} & 94.19\% \\ \hline WRN-OE \cite{hendrycks2019deep} & 94.19\% \\ \hline WRN \cite{zagoruyko2016wide} & 94.50\% \\ \hline DWT-MEC \cite{roy2019unsupervised} & 94.62\% \\ \hline Farhadi et al. \cite{farhadi2019novel} & 94.62\% \\ \hline \textbf{GCCN} & \textbf{94.65\%} \\ \hline ReNet+GRU \cite{Moser2020DartsReNet} & 95.16\% \\ \hline FPID \cite{pmlr-v80-hoffman18a} & 95.67\% \\ \hline \end{tabular} \end{center} \end{table} \paragraph{Vector Augmentation and Normalisation.} Table \ref{gccn_comp} lists the experiment results of using GCCN vector augmentation and normalisation. We tested GCCN on different image sizes, using different CNN backbones (Efficient-Net and ResNet) and various numbers of GCCN layers (one, two and three). Each experiment was run under three different settings based on the method of combining the global context features with the CNN vector: vector augmentation (Aug), normalisation (Norm) and augmentation followed by normalisation (Aug+Norm), as explained in the methodology section. Using GCCN with ResNet-50 always produced better accuracy than with Efficient-Net. Furthermore, GCCN works better when increasing the image crop size from $32$ to $96$ for CIFAR-10 and SVHN, and from $96$ to $224$ for STL-10. This insight highlights the effectiveness of GCCN in capturing better global attention at higher resolutions of the same images. In most cases, using normalisation after GCCN augmentation improves the classification accuracy. In multiple experiments, the model overfits if only augmentation or only normalisation is utilised; the combined method, however, does not show any overfitting.
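To illustrate the pipeline end to end, the following minimal NumPy sketch (our own illustration, not the paper's code; the function names and the exact construction of the global context vector are assumptions) augments a CNN feature vector with a global context vector, applies the combined Aug+Norm step, and classifies a query image through a softmax over its distances to the class prototypes, using either metric from the previous section:
\begin{verbatim}
import numpy as np

def augment(cnn_vec, gc_vec, normalise=True):
    # Aug: concatenate CNN features with the global context vector;
    # Aug+Norm: additionally rescale the result to unit L2 norm.
    v = np.concatenate([cnn_vec, gc_vec])
    return v / (np.linalg.norm(v) + 1e-12) if normalise else v

def prototypes(support_vecs, support_labels, n_classes):
    # mu_k: mean augmented embedding of each class (Eq. for mu)
    return np.stack([support_vecs[support_labels == k].mean(axis=0)
                     for k in range(n_classes)])

def classify(query_vec, protos, metric="euclidean"):
    # softmax over negative Euclidean distances or cosine similarities
    if metric == "euclidean":
        logits = -np.linalg.norm(protos - query_vec, axis=1)
    else:
        logits = protos @ query_vec / (np.linalg.norm(protos, axis=1)
                                       * np.linalg.norm(query_vec) + 1e-12)
    e = np.exp(logits - logits.max())
    return e / e.sum()
\end{verbatim}
In an episode, \texttt{support\_vecs} would hold the augmented embeddings of the $W \times K$ support images, and \texttt{query\_vec} the augmented embedding of a query image.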
\begin{table*}[!ht] \centering \small \caption{GCCN vector augmentation and normalisation.} \label{gccn_comp} \begin{tabular}{|p{4cm}|l|l|l|l|l|c|} \hline Dataset & Size & CNN & L & Aug & Norm & Aug+Norm \\ \hline \multirow{9}{*}{CIFAR10} & \multirow{6}{*}{$32\times 32$} & \multirow{3}{*}{Eff.Net} & 1 & 86.7\% & 83.09\% & 84.62\% \\ \cline{4-7} & & & 2 & 86.04\% & 81.51\% & 84.54\% \\ \cline{4-7} & & & 3 & 85.87\% & 81.86\% & 83.86\% \\ \cline{3-7} & & \multirow{3}{*}{ResNet} & 1 & 87.97\% & 86.32\% & 88.07\% \\ \cline{4-7} & & & 2 & 87.81\% & 84.54\% & \textbf{86.86\%} \\ \cline{4-7} & & & 3 & 87.82\% & 84.81\% & 86.56\% \\ \cline{2-7} & \multirow{3}{*}{$96\times 96$} & \multirow{3}{*}{ResNet} & 1 & 90.15\% & 93.35\% & \textbf{94.60\%} \\ \cline{4-7} & & & 2 & 47\% & 91.65\% & 93.42\% \\ \cline{4-7} & & & 3 & 37\% & 92.54\% & 94.01\% \\ \hline \multirow{6}{*}{STL10} & \multirow{3}{*}{$96\times 96$} & \multirow{3}{*}{ResNet} & 1 & 91.65\% & 91.6\% & \textbf{92.26\%} \\ \cline{4-7} & & & 2 & 90.55\% & 91.44\% & 91.73\% \\ \cline{4-7} & & & 3 & 91.09\% & 90.97\% & 91.4\% \\ \cline{2-7} & \multirow{3}{*}{$224\times 224$} & \multirow{3}{*}{ResNet} & 1 & 59.54\% & 59.07\% & \textbf{95.41\%} \\ \cline{4-7} & & & 2 & 94.45\% & 92.81\% & 94.76\% \\ \cline{4-7} & & & 3 & 80.87\% & 69.81\% & 95.24\% \\ \hline \multirow{6}{*}{SVHN} & \multirow{3}{*}{$32\times 32$} & \multirow{3}{*}{ResNet} & 1 & \textbf{94.45\%} & 93.66\% & 94.26\% \\ \cline{4-7} & & & 2 & 44.03\% & 93.07\% & 93.63\% \\ \cline{4-7} & & & 3 & 34.59\% & 93.18\% & 93.89\% \\ \cline{2-7} & \multirow{3}{*}{$96\times 96$} & \multirow{3}{*}{ResNet} & 1 & 94.6\% & 94.61\% & \textbf{94.65\%} \\ \cline{4-7} & & & 2 & 47.59\% & 94.3\% & 93.89\% \\ \cline{4-7} & & & 3 & 94.14\% & 93.91\% & 93.74\% \\ \hline \end{tabular} \end{table*} \subsection{Few-Shot Image Classification} We followed the state-of-the-art episode composition of \cite{snell2017prototypical,vinyals2016matching}. We choose a set of $W$ ways, or classes, and $K$ support shots per class. Specifically, we tested $5-ways$ with $1-shot$ and $5-shots$, and $20-ways$ with $1-shot$ and $5-shots$. We tested both $Cosine$ and $Euclidean$ distance functions, similar to the literature \cite{vinyals2016matching,Sachin2017Optimization}. We utilised a CNN encoder of four convolutional blocks. Each convolutional block contains $64$ $3 \times 3$ convolution filters, normalised by batch normalisation \cite{ioffe2015batch} and followed by a ReLU non-linearity. Each block also has a $2\times 2$ max-pooling layer. We then add our proposed global context convolutional layer. \paragraph{Few-Shot Datasets.} We use three benchmark datasets: Omniglot, MiniImageNet and CUB-200. The training sets are randomly divided into training episodes. Each training episode has a support set and a query set. \begin{itemize} \item MiniImageNet dataset \cite{Sachin2017Optimization} has $60,000$ images of $100$ classes ($600$ images each) from the original ImageNet. Each image in MiniImageNet is $84\times 84$. To compare with the state-of-the-art, we use the standard split of $64$, $16$ and $20$ classes for training, validation and testing, respectively. It is one of the most difficult datasets for few-shot learning. \item CUB-200 dataset \cite{wah2011caltech} has $200$ categories of birds, with around $6,000$ images for training and $6,000$ images for testing. \item Omniglot dataset \cite{lake2015human} has $1,623$ handwritten characters from $50$ different alphabets.
The dataset is augmented with different rotations, resulting in a total of $6,492$ classes. We used $4,800$ classes ($1,200$ characters with their rotations) for training and $1,692$ classes for testing, following \cite{vinyals2016matching}. \end{itemize} \paragraph{MiniImageNet.} Table \ref{tab:mI5w} shows benchmarking results using the MiniImageNet dataset for $5-ways$ with $1-shot$ and $5-shot$. \textit{GCCN} ranks first in the $5-shot$ task with $84.8\%$ and second in the $1-shot$ task with $65.6\%$, behind LaplacianShot, otherwise outperforming the state-of-the-art. LaplacianShot \cite{ziko2020laplacian} achieved $75.57\%$ and $84.7\%$ in the $1-shot$ and $5-shot$ setups, respectively. Hyperbolic prototypical networks are a similar work that proposed a new method for prototypical networks \cite{khrulkov2020hyperbolic}. GCCN outperforms the hyperbolic prototypical networks by $6\%$ and $8\%$ in $1-shot$ and $5-shot$, respectively. Other recent works also rank below \textit{GCCN}, such as Meta-SGD \cite{Li2017MetaSGDLT}, iMAML HF \cite{rajeswaran2019meta}, Meta-Net \cite{munkhdalaiY17Meta} and iMAML GD \cite{rajeswaran2019meta} with $50.5\%$, $49.3\%$, $49.2\%$ and $49\%$, respectively. \textit{GCCN} outperforms the prototypical and matching networks, which achieved $48\%$ and $43.4\%$, respectively. These results show the impact of using the augmented vectors over the conventional CNN vector embeddings. The proposed vector-level augmentation algorithm increased the accuracy of the prototypical networks from $48\%$ to $53\%$ in the $1-shot$ task, and from $66.2\%$ to $84.8\%$ in the $5-shot$ task. \textit{GCCN} has also outperformed state-of-the-art methods in the $1-shot$ task, such as Meta-learner LSTM, MAML \cite{finn2017model} and GPNet + Polynomial \cite{patacchiola2019deep}. \begin{table}[!ht] \centering \small \caption{Benchmark of GCCN on MiniImageNet few-shot.} \label{tab:mI5w} \begin{tabular}{|p{5cm}|l|l|} \hline Model & 1-shot & 5-shot \\ \hline PIXELS (Cosine) & 23\% & 26.6\% \\\hline Baseline nearest neighbours (Cosine) & 28.8\% & 49.8\% \\\hline Matching networks (Cosine) \cite{vinyals2016matching} & 43.4\% & 51\% \\\hline Meta-learner LSTM \cite{Sachin2017Optimization} & 43.4\% & 60.6\% \\ \hline ProtoNet \cite{snell2017prototypical} (Euclid) & 48\% & 66.2\% \\ \hline MAML \cite{finn2017model} & 48.7\% & 63.1\% \\\hline Hyperbolic ProtoNet \cite{khrulkov2020hyperbolic} & 51.6\% & 66\% \\\hline Reptile + Trans \cite{nichol2018first} & 49.9\% & 65.9\% \\ \hline Relation Net \cite{Sung_2018_CVPR} & 50.4\% & 65.32\% \\ \hline DN4 \cite{Li_2019_CVPR} & 51.2\% & 71.02\% \\ \hline VAMPIRE \cite{nguyen2020uncertainty} & 51.5\% & 64.31\% \\ \hline Hyperbolic ProtoNet (4 Conv) \cite{khrulkov2020hyperbolic} & 54.43\%& 72.67\% \\ \hline Hyperbolic ProtoNet (ResNet) \cite{khrulkov2020hyperbolic} & 59.47\%& 76.84\% \\ \hline SImPa (4 Conv) \cite{nguyen2020pac} & 52.1\% & 63.87\% \\ \hline SImPa (ResNet) \cite{nguyen2020pac} & 63.8\%& 78.04\% \\ \hline MAML++ \cite{antoniou2019train} & 52.4\% & 68.32\% \\ \hline DSN-MR (4 Conv) \cite{simon2020adaptive} & 55.88\%& 70.5\% \\ \hline DSN-MR (ResNet) \cite{simon2020adaptive} & 64.60\%& 79.51\% \\ \hline EPNet (ResNet) \cite{rodriguez2020embedding} & 66.50\%& 81.06\% \\ \hline LaplacianShot \cite{ziko2020laplacian} & \textbf{75.57\%} & 84.7\% \\ \hline GCCN (ours) & 65.6\% & \textbf{84.8\%} \\\hline \end{tabular} \end{table} \paragraph{CUB-200-2011.} Table \ref{tab:CUB} shows benchmarking results comparing \textit{GCCN} with state-of-the-art methods using the CUB-200-2011 dataset.
GCCN outperformed most state-of-the-art methods, ranking first in the $5-shot$ task and second in the $1-shot$ task, behind DEML+Meta-SGD. GCCN has $1.6\%$ and $8.56\%$ better accuracy than the hyperbolic prototypical networks. GCCN also outperforms MAML and MAML++. GCCN has improved the accuracy of the baseline prototypical network, which achieved $51.31\%$ and $70.77\%$, around $14\%$ and $10\%$ lower than GCCN. \begin{table}[!ht] \centering \small \caption{5-ways benchmark results on the CUB datasets.} \label{tab:CUB} \begin{tabular}{|p{5cm}|l|l|} \hline Method & 1-shot & 5-shot \\ \hline Matching Nets \cite{vinyals2016matching} & 56.53\% & 63.54\% \\ \hline DEML+Matching Nets \cite{vinyals2016matching,zhou2018deep} & 63.47\% & 64.86\% \\ \hline MAML \cite{finn2017model} & 50.45\% & 59.60\% \\ \hline DEML+MAML \cite{zhou2018deep} & 64.63\% & 66.75\% \\ \hline Meta-SGD \cite{li2017meta} & 53.34\% & 67.59\% \\ \hline DEML+Meta-SGD \cite{li2017meta,zhou2018deep} & \textbf{66.95\%} & 77.11\% \\ \hline Hyperbolic ProtoNet \cite{khrulkov2020hyperbolic} & 64.02\% & 72.22\% \\ \hline ProtoNet \cite{snell2017prototypical} & 51.31\% & 70.77\% \\ \hline MACO \cite{hilliard2018few} & 60.76\% & 74.96\% \\ \hline RelationNet \cite{Sung_2018_CVPR} & 62.45\% & 76.11\% \\ \hline Baseline++ \cite{chen2018a} & 60.53\% & 79.34\% \\ \hline GCCN (ours) & 65.62\% & \textbf{80.74\%} \\ \hline \end{tabular} \end{table} \paragraph{Omniglot.} Table \ref{tab:Og520w} shows the benchmarking results comparing \textit{GCCN} with the state-of-the-art and baseline methods using the Omniglot dataset, first in the experimental setup of $5-ways$ for both $1-shot$ and $5-shot$ learning. GCCN achieves state-of-the-art results with $99.4\%$ and $99.9\%$ in the $1-shot$ and $5-shot$ tasks, respectively. \textit{GCCN} outperformed the original versions of the utilised head models (prototypical and matching networks). The prototypical networks achieved $98.8\%$ and $99.7\%$, and the matching networks $98.1\%$ and $98.9\%$, in the $1-shot$ and $5-shot$ tasks, respectively. Hyperbolic prototypical networks \cite{khrulkov2020hyperbolic} have recently been published to enhance the prototypical networks, with $99\%$ and $99.4\%$; \textit{GCCN} outperformed the hyperbolic version in both $1-shot$ and $5-shot$ learning. \textit{GCCN} also outperforms recent works such as VAMPIRE (WACV, 2020) \cite{nguyen2020uncertainty}, APL (ICLR, 2019) \cite{ramalho2019adaptive} and hyperbolic prototypical networks (CVPR, 2020) \cite{khrulkov2020hyperbolic}. \textit{GCCN} using the $Euclidean$ method has the best results over the $Cosine$ variant and the other state-of-the-art networks. \textit{GCCN} proved its superiority in increasing the accuracy of the utilised head models (prototypical and matching networks) using either $Euclidean$ or $Cosine$ distance metrics. For example, \textit{GCCN} increased the prototypical network accuracy from $90\%$ to $99.2\%$ in the $1-shot$ task using $Cosine$. Table \ref{tab:Og520w} also lists benchmarking results of $20-ways$ with both $1-shot$ and $5-shot$ using Omniglot. \textit{GCCN} ranks first with $99.1\%$ in the $5-shot$ learning task and second in the $1-shot$ task with $96.4\%$, behind APL \cite{ramalho2019adaptive}, which achieved $97.2\%$. The utilised head models achieved $95.8\%$ and $98.6\%$ (prototypical), and $62.6\%$ and $74.3\%$ (matching), in the $1-shot$ and $5-shot$ tasks, respectively. \textit{GCCN} outperformed both the prototypical and matching networks in the $1-shot$ and $5-shot$ setups.
Hyperbolic prototypical networks \cite{khrulkov2020hyperbolic} are also outperformed by \textit{GCCN}, achieving $95.9\%$ and $98.2\%$. \textit{GCCN} also outperforms other state-of-the-art works, such as VAMPIRE \cite{nguyen2020uncertainty}, adaCNN \cite{munkhdalai2018rapid}, APL \cite{ramalho2019adaptive} and Reptile + Transduction \cite{nichol2018first}. \textit{GCCN} using the $Euclidean$ method has the best results over the $Cosine$ variant and the other state-of-the-art networks: it scores $21.1\%$ and $12.6\%$ higher than the $Cosine$ variant in the $1-shot$ and $5-shot$ tasks of the $20-ways$ setting, respectively. \begin{table*}[!ht] \centering \small \caption{Benchmark results on Omniglot on 5-way and 20-way.} \label{tab:Og520w} \begin{tabular}{|p{7cm}|p{1cm}|p{1cm}|p{1cm}|p{1cm}|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{l|}{5-way} & \multicolumn{2}{l|}{20-way} \\ \cline{2-5} & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline Matching Net \cite{vinyals2016matching} (Cos.) & 86.4\% & 90.6\% & 62.6\% & 74.3\% \\ \hline ProtoNet \cite{snell2017prototypical} (Cos.) & 90\% & 91\% & 68.8\% & 80.1\% \\ \hline Reptile + Trans \cite{nichol2018first} & 97.7\% & 99.5\% & 89.4\% & 97\% \\ \hline \textbf{APL} \cite{ramalho2019adaptive} & 97.9\% & \textbf{99.9\%} & \textbf{97.2\%} & 97.6\% \\ \hline Matching Net \cite{vinyals2016matching} (Euclid.) & 98.1\% & 98.9\% & 92.8\% & 97.8\% \\ \hline Neural statistician \cite{edwards2016towards} & 98.1\% & 99.5\% & 93.2\% & 98.1\% \\ \hline VAMPIRE \cite{nguyen2020uncertainty} & 98.4\% & 99.6\% & 93.2\% & 98.5\% \\ \hline adaCNN \cite{munkhdalai2018rapid} & 98.4\% & 99.4\% & 96.1\% & 98.4\% \\ \hline \textbf{MAML} \cite{finn2017model} & 98.7\% & \textbf{99.9\%} & 95.8\% & 98.9\% \\ \hline ProtoNet \cite{snell2017prototypical} (Euclid.) & 98.8\% & 99.7\% & 95.8\% & 98.6\% \\ \hline Hyperbolic ProtoNet \cite{khrulkov2020hyperbolic} & 99.0\% & 99.4\% & 95.9\% & 98.2\% \\ \hline GCCN (ours) (Cos.) & 99.2\% & 97\% & 75.3\% & 86.5\% \\ \hline \textbf{GCCN (ours)} (Euclid.) & \textbf{99.4\%} & \textbf{99.9\%} & 96.4\% & \textbf{99.1\%} \\ \hline \end{tabular} \end{table*} \begin{table}[!ht] \centering \small \caption{Impact of \textit{GCCN} on the prototypical networks.} \label{PN} \begin{tabular}{|l|l|l|l|l|} \hline Distance & K-ways & N-shots & Base Model & Accuracy \\ \hline \multirow{8}{*}{Euclidean} & 5 & 1 & \textbf{GCCN} & \textbf{99.4\%} \\ \cline{2-5} & 5 & 1 & Conv. & 98.2\% \\ \cline{2-5} & 5 & 5 & \textbf{GCCN} & \textbf{99.9\%} \\ \cline{2-5} & 5 & 5 & Conv. & 99.4\% \\ \cline{2-5} & 20 & 1 & \textbf{GCCN} & \textbf{96.4\%} \\ \cline{2-5} & 20 & 1 & Conv. & 95.8\% \\ \cline{2-5} & 20 & 5 & \textbf{GCCN} & \textbf{99.1\%} \\ \cline{2-5} & 20 & 5 & Conv. & 98.6\% \\ \hline \multirow{8}{*}{Cosine} & 5 & 1 & \textbf{GCCN} & \textbf{99.2\%} \\ \cline{2-5} & 5 & 1 & Conv. & 90\% \\ \cline{2-5} & 5 & 5 & \textbf{GCCN} & \textbf{97\%} \\ \cline{2-5} & 5 & 5 & Conv. & 91\% \\ \cline{2-5} & 20 & 1 & \textbf{GCCN} & \textbf{75.3\%} \\ \cline{2-5} & 20 & 1 & Conv. & 68.8\% \\ \cline{2-5} & 20 & 5 & \textbf{GCCN} & \textbf{86.5\%} \\ \cline{2-5} & 20 & 5 & Conv. & 80.1\% \\ \hline \end{tabular} \end{table} \begin{table}[!ht] \centering \small \caption{Impact of \textit{GCCN} on the matching networks.} \label{MN} \begin{tabular}{|l|l|l|l|l|} \hline Distance & K-ways & N-shots & Base Model & Accuracy \\ \hline \multirow{8}{*}{Euclidean} & 5 & 1 & \textbf{GCCN} & \textbf{98.8\%} \\ \cline{2-5} & 5 & 1 & Conv. & 98.1\% \\ \cline{2-5} & 5 & 5 & \textbf{GCCN} & \textbf{99.9\%} \\ \cline{2-5} & 5 & 5 & Conv.
& 98.9\% \\ \cline{2-5} & 20 & 1 & GCCN & 91.7\% \\ \cline{2-5} & 20 & 1 & Conv. & 92.8\% \\ \cline{2-5} & 20 & 5 & \textbf{GCCN} & \textbf{98.7\%} \\ \cline{2-5} & 20 & 5 & Conv. & 97.8\% \\ \hline \multirow{8}{*}{Cosine} & 5 & 1 & \textbf{GCCN} & \textbf{89.0\%} \\ \cline{2-5} & 5 & 1 & Conv. & 86.4\% \\ \cline{2-5} & 5 & 5 & \textbf{GCCN} & \textbf{95.8\%} \\ \cline{2-5} & 5 & 5 & Conv. & 90.6\% \\ \cline{2-5} & 20 & 1 & \textbf{GCCN} & \textbf{71.4\%} \\ \cline{2-5} & 20 & 1 & Conv. & 62.6\% \\ \cline{2-5} & 20 & 5 & \textbf{GCCN} & \textbf{77.7\%} \\ \cline{2-5} & 20 & 5 & Conv. & 74.3\% \\ \hline \end{tabular} \end{table} \paragraph{GCCN Impact on Prototypical and Matching Networks.} In this section, we discuss the impact of the proposed \textit{GCCN} used as a base model within the prototypical and matching networks. Tables \ref{PN} and \ref{MN} list the experiment results using both the $Euclidean$ and $Cosine$ distance measures with the prototypical and matching networks, respectively. Prototypical networks using the proposed vector augmentation algorithm (\textit{GCCN}) have better accuracy than the original implementation using classical convolutional networks. The vector augmentation method has successfully enriched the CNN feature embeddings. The \textit{GCCN} performance is better under both the $Euclidean$ distance and the $Cosine$ similarity measure. Prototypical networks using \textit{GCCN} outperformed the original prototypical networks in all tasks. For example, \textit{GCCN} using the $Cosine$ method significantly improved the accuracy from $90\%$, $91\%$, $68.8\%$ and $80.1\%$ to $99.2\%$, $97\%$, $75.3\%$ and $86.5\%$ in the tasks of $5-ways-1-shot$, $5-ways-5-shots$, $20-ways-1-shot$ and $20-ways-5-shots$, respectively. The performance also improved using the $Euclidean$ distance in all tasks. We also implemented the matching networks using the proposed \textit{GCCN} as a base model instead of a CNN. Matching networks with \textit{GCCN} outperform the original version of matching networks with the CNN base model. The matching networks ($Cosine$) performance is significantly improved by the proposed vector embedding augmentation, which utilises global context structural information. Table \ref{MN} shows that the accuracy of matching networks increased from $90.6\%$ to $95.8\%$ for the $5-ways-5-shots$ task and from $62.6\%$ to $71.4\%$ for the $20-ways-1-shot$ task. The matching networks ($Euclidean$) performance also improved, with \textit{GCCN} outperforming the original version in most cases. For example, the accuracy increased from $98.1\%$, $98.9\%$ and $97.8\%$ to $98.8\%$, $99.9\%$ and $98.7\%$ in the tasks of $5-ways-1-shot$, $5-ways-5-shots$, and $20-ways-5-shots$, respectively. \section{Conclusion} We have introduced \textit{GCCN}, a novel embedding vector augmentation and normalisation method. The proposed \textit{GCCN} aims to overcome the limitation of traditional CNNs, which ignore important structural information by relying on local receptive fields. We extract useful global context information to augment the CNN features. This augmentation has proved to be a simple yet effective approach. In this paper, we have experimented with this methodology on both image classification and few-shot learning datasets. We have also introduced an in-depth performance evaluation of the proposed vector embedding method under state-of-the-art few-shot methods.
\section*{Acknowledgments} Ali Hamdi is supported by an RMIT Research Stipend Scholarship. This research is partially supported by the Australian Research Council (ARC) Discovery Project \textit{DP190101485}. \bibliographystyle{elsarticle-num}
\section{Introduction} K (\url{https://kframework.org}) is a well-established framework for programming languages, which brings a different perspective on what such a framework should be. K provides the means to give formal definitions for programming languages and aims to automatically derive a series of practical tools for those languages: a parser, an interpreter, a debugger, a symbolic execution tool, a deductive verifier, a model-checker, and others. A formal semantics of a language defined in K consists of syntax declarations, a language configuration, and a set of rewriting rules. The configuration is a constructor term which holds the semantic information needed to execute programs (e.g., the code, environment, stack, program counter, etc.). The rewriting rules are pairs $\varphi \Rightarrow \varphi'$ of program configurations with variables, which specify how program configurations transit to other program configurations. An example of a tool automatically generated by K is the interpreter, which works as follows: the user provides a concrete configuration (which includes the program and an initial state of that program) and K applies rewriting rules to this configuration as long as possible. Another tool is the K prover, which uses symbolic execution to prove reachability properties. The theoretical foundation of K is Matching Logic~\cite{rosu-2017-lmcs,chen-lucanu-rosu-2021-jlamp} (hereafter shorthanded as \ensuremath{\textsc{ML}}\xspace), a logical framework where the formal definition of programming languages~\cite{ellison-rosu-2012-popl,DBLP:conf/pldi/HathhornER15,park-stefanescu-rosu-2015-pldi} and program reasoning~\cite{stefanescu-park-yuwen-li-rosu-2016-oopsla,stefanescu-ciobaca-mereuta-moore-serbanuta-rosu-2014-rta,DBLP:conf/wrla/RusuA16,DBLP:conf/birthday/LucanuRAN15,arusoaie:hal-01627517} can be done in a uniform way. \ensuremath{\textsc{ML}}\xspace formulas are called \emph{patterns} and they are used to uniformly specify the syntax and semantics of programming languages, as well as the properties of program executions. \ensuremath{\textsc{ML}}\xspace has a \emph{pattern matching semantics}: a pattern is interpreted as the set of elements that \emph{match} it. For example, if a pattern $t$ encodes a symbolic program configuration (with variables), then $t$ is interpreted as the set of concrete program configurations that match it. \ensuremath{\textsc{ML}}\xspace has a minimal, but expressive, syntax. For example, if one wants to specify configurations $t$ that satisfy a first-order constraint $\phi$, then $t \land \phi$ is the pattern that captures this intent. Constraints are not the only patterns that can be combined in this way: conjunctions $t_1 \land t_2$ (and disjunctions $t_1 \lor t_2$) are interpreted as the intersection (and union, respectively) of the elements that match $t_1$ and $t_2$. Moreover, it is easy to explain K rules $\varphi \Rightarrow \varphi'$ as implications of patterns $\varphi\limplies {\bullet}\varphi'$: the program configurations that match $\varphi$ can transit in one step to program configurations that match $\varphi'$, where ${\bullet}$ is a special symbol used to specify one-step transitions. \ensuremath{\textsc{ML}}\xspace is equipped with a sound proof system which can derive sequents of the form $\Gamma \vdash \varphi$, where $\varphi$ is a pattern and $\Gamma$ is an \emph{\ensuremath{\textsc{ML}}\xspace theory} (i.e., a set of axiom patterns).
The \ensuremath{\textsc{ML}}\xspace proof system is the key ingredient of another tool that K aims to generate: a deductive verifier.\\ \textbf{Motivation.} A fair question that needs to be posed is \emph{how can we trust the proofs produced by the deductive verifier generated by K?} Given the size of the K codebase (about half a million lines of code~\cite{chen-lin-trinh-rosu-2021-cav}) and its dynamics (new code is committed every week), the formal verification of the implementation of K is out of the question. The solution here is to do what other formal verification tools do: instrument K so that its automatically generated tools produce \emph{proof objects} that can be independently checked by a trusted kernel. In our context, proof objects are just proofs that use the \ensuremath{\textsc{ML}}\xspace proof system. It turns out that the tools that K aims to generate share several components. For example, \emph{matching} algorithms are useful for concrete execution (the interpreter), while \emph{unification} and \emph{antiunification} algorithms are needed for symbolic execution and program verification. Therefore, we can have a uniform approach: if we can find proof object generation mechanisms for each component, then we can simply instantiate those mechanisms whenever needed. Symbolic execution is a key component in program verification and it has been used in K as well (e.g.,~\cite{arusoaie-2014,JSC2016,stefanescu-park-yuwen-li-rosu-2016-oopsla}). Generating proof objects for symbolic execution is difficult because the parameters of an execution step must carry more proof information than in the concrete execution case. First, instead of matching information, proof parameters must include unification information. Second, path conditions need to be carried along the execution. In \ensuremath{\textsc{ML}}\xspace, there is a natural way to deal with symbolic execution. \ensuremath{\textsc{ML}}\xspace patterns $\varphi$ have a \emph{normal form} $t \land \phi$, where $t$ is a term pattern and $\phi$ is a predicate pattern expressing a constraint on the variables in $t$. In particular, $t$ can be the program configuration and $\phi$ the path condition. Patterns $t \land \phi$ are evaluated to the set of values that \emph{match} $t$ and satisfy $\phi$. To compute the symbolic successors of a pattern, say $t \land \phi$, with respect to a rule, say $t_1 \land \phi_1 \Rightarrow t_2 \land \phi_2$, we need to unify the patterns $t \land \phi$ and $t_1 \land \phi_1$. Because unification can be expressed as a conjunction in \ensuremath{\textsc{ML}}\xspace~\cite{fm,rosu-2017-lmcs}, we can say that only the states matched by $(t \land \phi) \land (t_1 \land \phi_1) \equiv (t \land t_1) \land (\phi \land \phi_1)$ transit to states matched by $t_2 \land \phi_2$. Expressing unification as a conjunction $(t \land t_1)$ is a nice feature of \ensuremath{\textsc{ML}}\xspace, but, in practice, unification algorithms are still needed to compute the most general unifying substitution, since it is used to build the symbolic successor patterns. The symbolic successors are obtained by applying the unifying substitution to the right-hand side of a rule (e.g., $t_2 \land \phi_2$) and adding the substitution (as an \ensuremath{\textsc{ML}}\xspace formula) to the path condition. Also, unification algorithms are used to normalise conjunctions of the form $t \land t_1$, so that they consist of only one term and a constraint, $t'\land \phi'$.
Therefore, unification algorithms are parameters of the symbolic execution steps and they must be used to generate the corresponding proof objects. It is often the case that more than one rule can be applied during symbolic execution. For instance, if an additional rule $t_1' \land \phi_1' \Rightarrow (t_2' \land \phi_2')$ can be applied to $t \land \phi$, then the set of target states must match $t_2 \land \phi_2$ or $t_2' \land \phi_2'$. This set of states is matched by the disjunction $(t_2 \land \phi_2) \vee (t_2' \land \phi_2')$. For the case when $\phi_2 \land \phi'_2$ holds, the disjunction reduces to $t_2 \lor t'_2$, which is not a normal form, but it can be normalised using antiunification.\\ \textbf{Related work.} The literature on (anti)unification is vast. Due to the space limit, we recall here only the closest related work, which addresses proof object generation for concrete and symbolic executions, to place our work in context. In~\cite{chen-lin-trinh-rosu-2021-cav}, the authors propose a method to generate proof objects for program executions $\varphi_\mathit{init} \Rightarrow \varphi_\mathit{final}$, where $\varphi_\mathit{init}$ is the formula that specifies the initial state of the execution, $\varphi_\mathit{final}$ specifies the final state, and ``$\Rightarrow$'' denotes the rewriting/reachability relation between states. The correctness of an execution,\\ \centerline{$ \Gamma \vdash \varphi_\mathit{init} \Rightarrow \varphi_\mathit{final}, $} \noindent is witnessed by a formal proof, which uses the \ensuremath{\textsc{ML}}\xspace proof system. The K interpreter computes the parameters (e.g., execution traces, matching information) needed to generate the proof object. In~\cite{rosu-2017-lmcs}, the author shows that unification in \ensuremath{\textsc{ML}}\xspace can be represented as a conjunction of \ensuremath{\textsc{ML}}\xspace patterns. A first step towards generating proof objects for unification was done in~\cite{fm}, where we proposed a method to normalise conjunctions of patterns $t_1 \land t_2$. The K implementation works with patterns in normal form $t \land \phi$, which are more efficient: matching/unification algorithms are executed only once on normalised patterns, rather than multiple times on patterns having multiple structural components (e.g., $t_1 \land t_2$). In~\cite{fm}, we use the syntactic unification algorithm~\cite{martelli} to (1) find an equivalent normal form $t \land \phi$ for conjunctions $t_1 \land t_2$, and (2) generate proof objects for the equivalence between $t \land \phi$ and $t_1 \land t_2$. The unification algorithm provides the needed parameters (e.g., unifying substitutions) for proof generation.\\ \textbf{Contributions.} A lesson that we learned from~\cite{fm} and~\cite{chen-lin-trinh-rosu-2021-cav} is that the algorithms implemented in various components of K can be used to compute the parameters needed to generate proof objects. In this paper we address the problem of \emph{generating proof objects for antiunification}, which is used, e.g., in symbolic execution and verification. In \ensuremath{\textsc{ML}}\xspace, the \emph{least general generalisation} of two term patterns $t_1$ and $t_2$ is given by their disjunction $t_1 \lor t_2$. We use Plotkin's antiunification algorithm~\cite{Plo:72,Plotkin70} to find normal forms $t \land \phi$ for disjunctions $t_1 \lor t_2$, and to generate proof objects for the equivalences between $t \land \phi$ and $t_1 \lor t_2$.
The execution of the antiunification algorithm provides the parameters (the intermediate generalisations and substitutions computed at each step) needed to generate the proof objects. Our~contributions~are: \begin{enumerate} \item We express Plotkin's antiunification algorithm in \ensuremath{\textsc{ML}}\xspace terms and we show that its steps produce equivalent patterns (Lemma~\ref{lem:stepaunif} and Theorem~\ref{th:antiunif}). \item We propose a proof object generation mechanism for the equivalences computed by the algorithms used in symbolic execution and verification, and we show how it works in the case of antiunification (Section~\ref{sec:aunifgen}). \item We provide a prototype implementation of our proof object generation mechanism and a proof checker (Section~\ref{sec:prototype}). \item We test our prototype on interesting examples, including inputs inspired by the K definitions of C~\cite{ellison-rosu-2012-popl,DBLP:conf/pldi/HathhornER15} and Java~\cite{DBLP:conf/popl/BogdanasR15}. \end{enumerate} Indeed, the most challenging part of this work is the proof object generation mechanism. The main difficulty was to find the right proof object generation schema that precisely captures one step of the antiunification algorithm. Another tricky part was to design the proof object schema so that the proofs generated for each step can be easily composed. The size of the resulting proofs depends on the number of steps performed by the antiunification algorithm.\\ \textbf{Paper organisation.} Section~\ref{sec:aml} presents \ensuremath{\textsc{ML}}\xspace, its proof system and the \ensuremath{\textsc{ML}}\xspace theory of many-sorted term algebras. In Section~\ref{sec:aunif} we present antiunification in an \ensuremath{\textsc{ML}}\xspace setting, and we prove that Plotkin's antiunification can be safely used to normalise disjunctions of term patterns. Our proof object generation methodology is presented in Section~\ref{sec:aunifgen}. The prototype implementation is described in Section~\ref{sec:prototype} and we conclude in Section~\ref{sec:conclusions}. \section{Matching Logic} \label{sec:aml} Matching logic (\ensuremath{\textsc{ML}}\xspace)~\cite{rosu-2017-lmcs,CR19,chen-lucanu-rosu-2021-jlamp} started as a logic over a particular case of constrained terms~\cite{DBLP:conf/lics/RosuSCM13,stefanescu-ciobaca-mereuta-moore-serbanuta-rosu-2014-rta,arusoaie:hal-01627517,DBLP:conf/birthday/LucanuRAN15}, but it is now developed as a full logical framework. We recall from~\cite{chen-lucanu-rosu-2021-jlamp} the definitions and notions that we use in this paper. A matching logic \emph{signature} is a triple $(\mathit{EVar}, \mathit{SVar}, \Sigma)$, where $\mathit{EVar}$ is a set of \emph{element variables} $x, y, \ldots$, $\mathit{SVar}$ is a set of \emph{set variables} $X,Y,\ldots$, and $\Sigma$ is a set of \emph{constant symbols} (or \emph{constants}). The set \textsc{Pattern} of \emph{$\Sigma$-patterns} is generated by the grammar below, where $x \in \mathit{EVar}$, $X \in \mathit{SVar}$, and $\sigma \in \Sigma$: \\[1ex] \centerline{$ \varphi ::= x \mid X \mid \sigma \mid \varphi_1~\varphi_2 \mid \bot \mid \dfness{\varphi} \mid \varphi_1 \rightarrow \varphi_2 \mid \exists x . \varphi \mid \mu X . \varphi \textit{ if $\varphi$ is positive in $X$} $}\\[1ex] A pattern $\varphi$ is \emph{positive} in $X$ if all free occurrences of $X$ in $\varphi$ are under an even number of negations.
The patterns below are derived constructs:\\[1ex] \centerline{$ \begin{aligned} \top &\equiv \lnot \bot & \tlness{\varphi} &\equiv \lnot \dfness{\lnot \varphi} & \varphi_1 \lor \varphi_2 &\equiv \lnot \varphi_1 \rightarrow \varphi_2 \\ \lnot \varphi &\equiv \varphi \rightarrow \bot & \varphi_1 = \varphi_2 &\equiv \tlness{\varphi_1 \leftrightarrow \varphi_2} & \varphi_1 \land \varphi_2 &\equiv \lnot (\lnot \varphi_1 \lor \lnot \varphi_2) & \\ \forall x. \varphi &\equiv \lnot\exists x . \lnot \varphi & \varphi_1 \neq \varphi_2 &\equiv \lnot (\varphi_1 = \varphi_2) & \varphi_1 \leftrightarrow \varphi_2 &\equiv (\varphi_1 \rightarrow \varphi_2) \land (\varphi_2 \rightarrow \varphi_1) \end{aligned} $\vspace{-1ex}} \begin{example} \label{ex:signature} Let $\Sigma = \{ \mathit{zero}, \mathit{succ}, \mathit{nil}, \mathit{cons} \}$ be an \ensuremath{\textsc{ML}}\xspace signature. Then $x$, $\mathit{zero}$, $\mathit{succ}\,\mathit{zero}$, $\mathit{succ}\,x$, $\exists x. \mathit{zero} = x$, $\mu X. \mathit{zero} \lor (\mathit{succ}\, X)$ are examples of \ensuremath{\textsc{ML}}\xspace patterns\footnote{Of course, $\mathit{succ}\,\mathit{nil}$ or $\mathit{cons}\,\mathit{nil}\,\mathit{zero}$ are also \ensuremath{\textsc{ML}}\xspace patterns, but these can be handled properly using sorts (see Section~\ref{sec:termalg}).}. \vspace{-1ex} \end{example} \ensuremath{\textsc{ML}}\xspace has a pattern matching semantics, where patterns are interpreted over a given carrier set, say $M$. Each pattern is interpreted as the set of elements that match it. \emph{Element variables} $x$ are matched by a singleton set, while \emph{set variables} $X$ are matched by a subset of $M$. The pattern $\bot$ is matched by the empty set (and hence $\top$ by $M$). The implication pattern $\varphi_1 \rightarrow \varphi_2$ is matched by the elements that do not match $\varphi_1$ or match $\varphi_2$. A pattern $\exists x\mathord{.\,}\varphi$ is matched by the instances of $\varphi$ when $x$ ranges over $M$. In particular, $\exists x.x$ is matched by $M$. Note that $\exists$ binds only element variables. Symbols $\sigma$ (e.g., $\mathit{zero}$, $\mathit{succ}$) are interpreted as subsets $\sigma_M\subseteq M$ and, usually, the needed interpretation for them is obtained using axioms. For instance, the pattern $\exists x.\mathit{zero}=x$ is matched by $M$ if $\mathit{zero}_M$ is a singleton, and by $\bot$ otherwise. This type of pattern is often used as an axiom to restrict the interpretation of symbols to singletons. The pattern $\varphi_1~\varphi_2$ is an \emph{application} and its interpretation is given by means of a function $M\times M\to \mathcal{P}(M)$, which is pointwise extended to a function $\mathcal{P}(M)\times \mathcal{P}(M)\to \mathcal{P}(M)$. Applications are useful to build various structures or relations. For instance, $\forall x\mathord{.\,} \exists y\mathord{.\,} \mathit{succ}\,x=y$ says that $\mathit{succ}$ has a functional interpretation (recall that the element variable $y$ is matched by a singleton set). Applications are left associative. The pattern $\mu X\mathord{.\,}\varphi$ is matched by the least fixpoint of the functional defined by $\varphi$ when $X$ ranges over $\mathcal{P}(M)$. An example is $\mu X. (\mathit{zero}\lor \mathit{succ}\,X)$, which is matched by the natural numbers $\mathbb{N}$ (up to a surjection), when both $\mathit{zero}$ and $\mathit{succ}$ have a functional interpretation (as above). Note that $\mu$ binds only set variables in \emph{positive} patterns.
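As a worked illustration of the fixpoint construct (our own addition, assuming the functional interpretations of $\mathit{zero}$ and $\mathit{succ}$ above), let $F(X) = \mathit{zero}_M \cup \mathit{succ}_M(X)$ be the functional defined by $\mathit{zero}\lor \mathit{succ}\,X$; since $F$ is continuous, its least fixpoint is reached as the limit of the ascending iteration:\\[1ex] \centerline{$ F(\emptyset)=\mathit{zero}_M, \quad F^2(\emptyset)=\mathit{zero}_M\cup \mathit{succ}_M(\mathit{zero}_M), \quad \ldots, \quad \mu X.(\mathit{zero}\lor \mathit{succ}\,X)=\bigcup_{n\ge 1}F^n(\emptyset), $}\\[1ex] \noindent which is $\mathbb{N}$ up to a surjection.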
The pattern $\dfness{\varphi}$ is called \emph{definedness}\footnote{For convenience, we introduce it directly in the syntax of patterns, but it can be axiomatised as in~\cite{rosu-2017-lmcs}.} and it is matched by $M$ if $\varphi$ is matched by at least one element, and by $\emptyset$ otherwise. Such patterns are called \emph{predicate patterns}. The syntax priorities of the \ensuremath{\textsc{ML}}\xspace constructs are given by this ordered list:\\[0.5ex] \centerline{$ \lnot\_, \dfness{\_}, \tlness{\_}, \_=\_, \_\land\_, \_\lor\_, \_\rightarrow\_, \_\leftrightarrow\_, \exists\_.\_, \forall\_.\_, \mu\_.\_, $}\\[0.5ex] where $\lnot$ has the highest priority and $\mu\_.\_$ has the lowest priority. By convention, the scope of the binders extends as much as possible to the right, and parentheses can be used to restrict the scope of the binders. We often write $\varphi[\psi/x]$ and $\varphi[\psi/X]$ to denote the pattern obtained by substituting $\psi$ for all free occurrences of $x$ and $X$, respectively, in $\varphi$. In order to avoid variable capture, we consider that $\alpha$-renaming happens implicitly. \emph{The \ensuremath{\textsc{ML}}\xspace proof system}~\cite{chen-lucanu-rosu-2021-jlamp} is shown in Figure~\ref{fig:proofsystem}. It contains four categories of rules: propositional tautologies, frame reasoning over application contexts, standard fixpoint reasoning, and two rules needed for completeness. An \emph{application context} $C$ is a pattern with a distinguished placeholder variable $\square$ s.t. the path from the root of $C$ to $\square$ contains only applications. $C[\varphi/\square]$ is shorthanded as $C[\varphi]$ and $\mathit{free}(\varphi)$ denotes the set of free variables of $\varphi$. \begin{figure}[t] \selectfont \centering \renewcommand{\arraystretch}{1.15} \begin{tabular}{|lrl|} \hline \multicolumn{3}{|c|}{\textbf{Hilbert-style proof system}}\\ \hline \textsc{Propositional} && $\varphi$, if $\varphi$ is a propositional tautology over patterns\\ \textsc{Modus Ponens} && $\infer{\varphi_2}{\varphi_1 & \varphi_1 \rightarrow \varphi_2}$ \\ \textsc{$\exists$-Quantifier} && $\varphi[y/x]\rightarrow \exists x . \varphi$ \\ \textsc{$\exists$-Generalisation} && \begin{minipage}{.2\textwidth}$\infer{(\exists x . \varphi_1) \rightarrow \varphi_2}{\varphi_1 \rightarrow \varphi_2}$\end{minipage} if $x \not\in \mathit{free}(\varphi_2)$ \\ \hline \textsc{Propagation$_\bot$} && $C[\bot] \rightarrow \bot$ \\ \textsc{Propagation$_\lor$} && $C[\varphi_1 \lor \varphi_2] \rightarrow C[\varphi_1] \lor C[\varphi_2]$ \\ \textsc{Propagation$_\exists$} && $C[\exists x . \varphi] \rightarrow \exists x . C[\varphi] $ ~~~ if $x \not\in \mathit{free}(C)$\\ \textsc{Framing} && $\infer{C[\varphi_1] \rightarrow C[\varphi_2]}{\varphi_1 \rightarrow \varphi_2}$\\ \hline \textsc{Set Variable Substitution} && $\infer{\varphi[\psi/X]}{\varphi}$\\ \textsc{Pre-Fixpoint} && $\varphi[\mu X. \varphi / X] \rightarrow \mu X . \varphi$\\ \textsc{Knaster-Tarski} && $\infer{\mu X . \varphi \rightarrow \psi}{\varphi[\psi/X] \rightarrow \psi}$\\ \hline \textsc{Existence} && $\exists x . x$\\
\textsc{Singleton} && $\lnot (C_1[x \land \varphi] \land C_2[x \land \lnot \varphi])$\\ \hline \end{tabular} \caption{The Hilbert-style \ensuremath{\textsc{ML}}\xspace proof system.} \label{fig:proofsystem} \end{figure} \subsection{\ensuremath{\textsc{ML}}\xspace Specification of the Term Algebra} \label{sec:termalg} A complete \ensuremath{\textsc{ML}}\xspace axiomatization of the many-sorted term algebra is given in~\cite{chen-lucanu-rosu-2020-tr}; we briefly recall it in Figure~\ref{spec:msa}. The specification $\mathsf{SORTS}$ introduces symbols for sorts and their inhabitant sets, together with some usual notations for them. The specification $\mathsf{MSA}$ includes the axioms corresponding to a given algebraic signature. Finally, $\mathsf{TERM(S, F)}$ includes the "no confusion" and "no junk" (inductive domains) properties that characterize the (initial) term algebra. "No confusion" says that the function symbols $F$ are constructors: \begin{itemize}[topsep=0pt, partopsep=0pt, itemsep=0pt] \item two different constructors define different terms ($\axname{NoConfusion I}$); \item a constructor is injective (i.e., the same constructor with different arguments defines different terms), which is captured by $\axname{NoConfusion II}$. \end{itemize} The "no junk" property says that all inhabitants of a sort are generated using the constructors $F$, and this is captured by the axiom \axname{Induct. Domain}. For the sake of presentation, this axiom does not include the case of mutually recursive sorts\footnote{See~\cite{chen-lucanu-rosu-2021-jlamp} for a complete definition.}. \begin{figure}[h] \renewcommand\figurename{\textsf{\textbf{Specification}}} \small \begin{tabular}{|c|c|} \hline \begin{minipage}{0.44\textwidth}\vspace{-1\baselineskip} \begin{lstlisting}[mathescape] spec $\mathsf{SORTS}$ (*{\textsf{Symbol}:}*) $\mathit{inh},\mathit{Sort}$ (*{\textsf{Notation}:}*) $\inh{s} \equiv \mathit{inh} \ s$ $\forall x {:} s \mathord{.\,} \varphi \equiv \forall x \mathord{.\,} x \in \inh{s} \limplies \varphi$ $\exists x {:} s \mathord{.\,} \varphi \equiv \exists x \mathord{.\,} x \in \inh{s} \land \varphi$ $\varphi {:} s \equiv \exists z {:} s \mathord{.\,} \varphi = z$ $\forall x_1,{\ldots},x_n {:} s \mathord{.\,} \varphi \equiv \forall x_1 {:} s {\ldots} \forall x_n {:} s \mathord{.\,} \varphi$ $\exists x_1,{\ldots},x_n {:} s \mathord{.\,} \varphi \equiv \exists x_1 {:} s {\ldots} \exists x_n {:} s \mathord{.\,} \varphi$ endspec \end{lstlisting}\vspace{-1\baselineskip} \end{minipage} & \begin{minipage}{0.51\textwidth}\vspace{\baselineskip}\vspace{-0.75\baselineskip} \begin{lstlisting}[mathescape] spec $\mathsf{MSA}(\ensuremath{S,F})$ (*{\textsf{Import}:}*) $\mathsf{SORTS}$ (*{\textsf{Symbol}:}*) $s \in S$, $f \in F$ (*{\textsf{Notation}:}*) $f(\varphi_1,\ldots,\varphi_n)\equiv f\,\varphi_1\,\ldots\,\varphi_n$ (*{\textsf{Axiom}:}*) $\begin{array}{ll} \axname{Sort}& s{:} \mathit{Sort} \textrm{~for~each~}s\in S\\ \axname{NonEmpty}& \llbracket s \rrbracket \neq \bot\textrm{~for~each~}s\in S\\ \axname{Function} & \forall x_1{:} s_1{\ldots}\forall x_n{:} s_n .
(f\,x_1\ldots x_n) {:} s\\ & \textrm{~for~each~}f \in F_{s_1\ldots s_n, s} \end{array}$ endspec \end{lstlisting} \end{minipage} \\ \hline $\mathsf{SORTS}$ & $\mathsf{MSA}$ \\ \hline \end{tabular} \centering \\ \begin{minipage}{1.0\textwidth} \begin{mdframed}[innerleftmargin=0.25em, innerrightmargin=0.25em, innertopmargin=.25em, innerbottommargin=.25em, skipabove=0.25cm, skipbelow=0.25cm] \begin{lstlisting}[mathescape] spec $\mathit{TERM}(\ensuremath{S,F})$ (*{\textsf{Import}:}*) $\textsf{MSA}(\ensuremath{S,F})$ (*{\textsf{Axiom}:}*) $\begin{array}{ll} \axname{NoConfusion I} & f\!\not=\!f'\!\rightarrow\!\forall x_1{:}s_1. .. \forall x_n{:}s_n.\forall x'_1{:}s'_1. .. \forall x'_m{:}s'_m. f\,x_1\,..\,x_n \neq f'\,x'_1\,..~x'_m\\ \axname{NoConfusion II} & \forall x_1,x'_1{:}s_1. \ldots \forall x_n,x'_n{:}s_n. (f\,x_1\cdots x_n)=(f\,x'_1\cdots x'_n) \rightarrow\\ & ~\hfill(x_1{=}x'_1)\land\cdots \land (x_n{=}x'_n)\\ \axname{Induct. Domain} &\displaystyle \llbracket s \rrbracket = \mu X.\bigvee_{f:s_1\ldots s_n, s} f~Y_1\cdots Y_n,\textrm{~where~} Y_i=\begin{cases} X, &\textrm{if~} s_i=s \\ \llbracket s_i\rrbracket, & \textrm{~otherwise}\end{cases} \end{array}$ endspec \end{lstlisting}\vspace{-0.25\baselineskip} \hrule\vspace{.1\baselineskip} \centerline{$\mathsf{TERM(S, F)}$} \end{mdframed}\vspace{-0.75\baselineskip} \end{minipage} \caption{\ensuremath{\textsc{ML}}\xspace specifications for sorts, many-sorted algebras, and term algebra} \label{spec:sort} \label{spec:term} \label{spec:msa} \end{figure} \begin{theorem}[\cite{chen-lucanu-rosu-2020-tr}] The specification $\mathsf{MSA}(S,F)$ captures the many-sorted $(S,F)$-algebras in the following sense: \begin{itemize} \item from each $\mathsf{MSA}(S,F)$-model $M$ we may extract an $(S,F)$-algebra $\alpha(M)$, and \item for each $(S,F)$-algebra $A$ there is an $\mathsf{MSA}(S,F)$-model $M$ s.t. $\alpha(M)=A$. \end{itemize} If $M$ is a $\mathit{TERM}(\ensuremath{S,F})$-model, then $\alpha(M)$ is the term $(S,F)$-algebra (up to isomorphism). \end{theorem} \begin{restatable}{proposition}{proprulestermalg} \label{prop:rulestermalg} The next patterns are semantic consequences of $\mathit{TERM(S, F)}$:\\[1ex] \centerline{$ \begin{aligned} \exists z. t \land (z = u) \leftrightarrow t[u/z] && \textrm{if~}z \not\in\mathit{var}(u) \\ z=(f\, \overline{t}) \leftrightarrow \exists \overline{y}. z=(f\, \overline{y}) \wedge \overline{y} = \overline{t} && \textrm{if~}\overline{y} \not\in \mathit{var}\big((f\, \overline{t})\big) \cup \{z\} \end{aligned}$}\\[1ex] \end{restatable} \noindent The equivalences in Proposition~\ref{prop:rulestermalg} are later used as macro rules for proof object generation. The notation $(f\,\overline{t})$ means $(f\,t_1\,\ldots\,t_n)$. We also use $\exists \overline{y}$ or $\exists \{y_1, \ldots, y_n\}$ instead of $\exists y_1.\ldots \exists y_n$. The equality $\overline{y} = \overline{t}$ is sugar syntax for $\bigwedge_{i=1}^n y_i = t_i$. \section{Antiunification in \ensuremath{\textsc{ML}}\xspace} \label{sec:aunif} \vspace{-1ex} Antiunification is a process dual to unification~\cite{martelli}, which computes a \emph{generalisation} $t$ of two input terms $t_1$ and $t_2$. A term $t$ is an \emph{antiunifier} of $t_1$ and $t_2$ if there are two substitutions $\sigma_1$ and $\sigma_2$ such that $t \sigma_1 = t_1$ and $t \sigma_2 = t_2$.
There is at least one antiunifier for any $t_1$ and $t_2$: we can always choose a variable $x \not\in \mathit{var}(t_1, t_2)$ and substitutions $\sigma_1 = \{ x \mapsto t_1 \}$ and $\sigma_2 = \{ x \mapsto t_2 \}$ s.t. $x\sigma_1 = t_1$ and $x\sigma_2= t_2$. A term $t'$ is more \emph{general} than a term $t$ if there is a substitution $\sigma$ such that $t'\sigma = t$. Given $t_1$ and $t_2$, their \emph{least general} antiunifier $t$ satisfies: for any antiunifier $t'$ of $t_1$ and $t_2$ we have that $t$ is less general than $t'$ (a.k.a., \emph{least general generalisation}, shorthanded as \textit{lgg}\xspace). \begin{remark} In other words, $t$ is less {general} than $t'$ iff the set of ground instances of $t$ is included in that of $t'$. In terms of matching logic, this can be expressed by $\exists \mathit{var}(t).t \subseteq \exists \mathit{var}(t').t'$, where $\varphi_1\subseteq \varphi_2$ is defined as $\lfloor \varphi_1 \rightarrow \varphi_2\rfloor$. \end{remark} Now we present Plotkin's antiunification algorithm~\cite{Plotkin70} for computing the \textit{lgg}\xspace over \ensuremath{\textsc{ML}}\xspace term patterns. First, we define \emph{antiunification problems}: \begin{definition} An \emph{antiunification problem} is a pair $\langle t, P \rangle$ consisting of a term pattern $t$ and a non-empty set $P$ of elements of the form $z \mapsto u \sqcup v$, where $z$ is a variable, and $u$ and $v$ are term patterns. \end{definition} Plotkin's algorithm~\cite{Plotkin70} for computing the \textit{lgg}\xspace consists in applying a decomposition rule over antiunification problems as much as possible:\\ \noindent $$\langle t, P \cup \{z\mapsto (f\,u_1\,\ldots\,u_n)\sqcup (f\,v_1\,\ldots\,v_n)\}\rangle \rightsquigarrow\langle t[(f\,z_1\,\ldots\,z_n)/z], P \cup \{z_1\mapsto u_1\sqcup v_1,\ldots, z_n\mapsto u_n\sqcup v_n\}\rangle,$$ \noindent where $z_1,\ldots,z_n$ are fresh variables. If we want to compute the \textit{lgg}\xspace of $t_1$ and $t_2$, we build the initial antiunification problem $\langle z , \{ z \mapsto t_1 \sqcup t_2 \} \rangle$ with $z \not\in \mathit{var}(t_1) \cup \mathit{var}(t_2)$ and we apply Plotkin's rule repeatedly. When this rule cannot be applied anymore, we say that the obtained antiunification problem $\langle t', P' \rangle$ is in \emph{solved form}. The obtained $t'$ is the \textit{lgg}\xspace of $t_1$ and $t_2$, while $P'$ defines the two substitutions $\sigma_1 =\{ z \mapsto u \mid z \mapsto u \sqcup v \in P' \}$ and $\sigma_2 = \{ z \mapsto v \mid z \mapsto u \sqcup v \in P' \}$ such that $t'\sigma_1 = t_1$ and $t' \sigma_2 = t_2$. Note that the pairs $u \sqcup v$ are not commutative. \label{appendix:runningex} \begin{example} \label{ex:aunif} Let $t_1 = (\mathit{cons}\,(\mathit{succ}\,x_1)\,(\mathit{cons}\,\mathit{zero}\,l_1))$ and $t_2 = (\mathit{cons}\,x_2\,(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2))$. Using Plotkin's algorithm on the input $\langle z, \{ z \mapsto t_1 \sqcup t_2 \} \rangle$ (note that $z$ is fresh w.r.t. 
$\mathit{var}(t_1) \cup \mathit{var}(t_2)$) we obtain: \vspace{-1ex} \begin{align*} \langle z, \{ z \mapsto t_1 \sqcup t_2 \} \rangle &=\\ \langle z, \{ z \mapsto (\mathit{cons}\,(\mathit{succ}\,x_1)\,(\mathit{cons}\,\mathit{zero}\,l_1)) \sqcup (\mathit{cons}\,x_2\,(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2)) \} \rangle &\rightsquigarrow\\ \langle z[(\mathit{cons}\,z_1\, z_2)/z], \{ z_1 \mapsto (\mathit{succ}\,x_1) \sqcup x_2, z_2\mapsto (\mathit{cons}\,\mathit{zero}\,l_1) \sqcup (\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \} \rangle & =\displaybreak[0]\\ \langle (\mathit{cons}\,z_1\, z_2), \{ z_1 \mapsto (\mathit{succ}\,x_1) \sqcup x_2, z_2 \mapsto (\mathit{cons}\,\mathit{zero}\,l_1) \sqcup (\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \} \rangle& \rightsquigarrow\\ \langle (\mathit{cons}\,z_1\, z_2)[(\mathit{cons}\,z_3\,z_4)/z_2], \{ z_1 {\mapsto} (\mathit{succ}\,x_1)\sqcup x_2, z_3 {\mapsto} \mathit{zero} \sqcup\! (\mathit{succ}\,x_2), z_4 {\mapsto}\, l_1 {\sqcup}\, l_2 \} \rangle& = \\ \langle (\mathit{cons}\,z_1\, (\mathit{cons}\,z_3\,z_4)), \{ z_1 \mapsto (\mathit{succ}\,x_1) \sqcup x_2, z_3 \mapsto \mathit{zero} \sqcup (\mathit{succ}\,x_2), z_4 \mapsto l_1 \sqcup l_2 \}\rangle \not\rightsquigarrow & \end{align*} The \textit{lgg}\xspace of the term patterns $t_1$ and $t_2$ is the term pattern $t \triangleq (\mathit{cons}\,z_1\, (\mathit{cons}\,z_3\,z_4))$, while the substitutions $\sigma_1 = \{z_1 \mapsto (\mathit{succ}\,x_1), z_3 \mapsto \mathit{zero}, z_4 \mapsto l_1 \}$ and $\sigma_2 = \{z_1 \mapsto x_2, z_3 \mapsto (\mathit{succ}\,x_2), z_4 \mapsto l_2 \}$ satisfy $t\sigma_1 = t_1$ and $t\sigma_2 = t_2$. The generated variables $z_1, z_2, z_3, z_4$ occur at most once in the computed \textit{lgg}\xspace, and $\mathit{var}(t) = \mathit{dom}(\sigma_1) = \mathit{dom}(\sigma_2)$. \end{example} The notation $\rightsquigarrow^!$ (used, e.g., in Theorem~\ref{th:antiunif}) means that $\rightsquigarrow$ has been applied repeatedly until the antiunification problem $\langle t, P \rangle$ is in solved form; the final $\not\rightsquigarrow$ in Example~\ref{ex:aunif} marks that no further step applies.\\ Antiunification problems are encoded as \ensuremath{\textsc{ML}}\xspace patterns as follows: \begin{definition}[Antiunification problem] \label{def:auaspattern} For each antiunification problem $\langle t, P \rangle$ we define an \ensuremath{\textsc{ML}}\xspace pattern $$\phi^{\langle t, P\rangle} \triangleq \exists\overline{z}. t \land \big(\phi^{\sigma_1} \vee \phi^{\sigma_2}\big),$$ \noindent where $\sigma_1 =\{ z \mapsto u \mid z \mapsto u \sqcup v \in P \}$, $\sigma_2 = \{ z \mapsto v \mid z \mapsto u \sqcup v \in P \}$, and $\mathit{var}(t) = \mathit{dom}(\sigma_1) = \mathit{dom}(\sigma_2) = \overline{z}$. \end{definition} \begin{example} \label{ex:step1} Here are the corresponding encodings of the intermediate antiunification problems generated during the execution shown in Example~\ref{ex:aunif}: \begin{enumerate} \item $\langle z, \{ z \mapsto (\mathit{cons}\,(\mathit{succ}\,x_1)\,(\mathit{cons}\,\mathit{zero}\,l_1)) \sqcup (\mathit{cons}\,x_2\,(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2)) \} \rangle$ is encoded as\\ \hfill $\exists z. z \land \big(z = (\mathit{cons}\,(\mathit{succ}\,x_1)\,(\mathit{cons}\,\mathit{zero}\,l_1)) \vee z = (\mathit{cons}\,x_2\,(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2)) \big)$; \item $\langle (\mathit{cons}\,z_1\, z_2), \{ z_1 \mapsto (\mathit{succ}\,x_1) \sqcup x_2, z_2 \mapsto (\mathit{cons}\,\mathit{zero}\,l_1) \sqcup (\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \} \rangle$ is encoded as: $\exists\{ z_1, z_2\}.
(\mathit{cons}\,z_1\, z_2) \land \Big( \big(z_1 = (\mathit{succ}\,x_1) \land z_2 = (\mathit{cons}\,\mathit{zero}\,l_1) \big) \vee \big(z_1 = x_2 \land z_2 = (\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \big)\Big)$; \item $\langle (\mathit{cons}\,z_1\, (\mathit{cons}\,z_3\,z_4)), \{ z_1 \mapsto (\mathit{succ}\,x_1) \sqcup x_2, z_3 \mapsto \mathit{zero} \sqcup (\mathit{succ}\,x_2), z_4 \mapsto l_1 \sqcup l_2 \}\rangle$ is encoded as: $\exists \{z_1, z_3, z_4\}. (\mathit{cons}\,z_1\, (\mathit{cons}\,z_3\,z_4)) \land$ \hfill $ \Big( \big(z_1 = (\mathit{succ}\,x_1) \land z_3 = \mathit{zero} \land z_4 = l_1\big) \vee \big(z_1 = x_2 \land z_3 = (\mathit{succ}\,x_2) \land z_4 = l_2\big) \Big)$. \end{enumerate} \end{example}\vspace{-1ex}
Note that the encodings shown in Example~\ref{ex:step1} are all equivalent. Also, remember that the scope of the quantifiers extends as much as possible to the right. \begin{restatable}{lemma}{stepaunif} \label{lem:stepaunif} If $\langle t_i, P_i \rangle \rightsquigarrow \langle t_{i+1}, P_{i+1} \rangle$ is a step performed using Plotkin's antiunification rule, then\\ $$\mathit{TERM(S, F)} \models\phi^{\langle t_i, P_i \rangle} \leftrightarrow \phi^{\langle t_{i+1}, P_{i+1} \rangle}.$$ \end{restatable}
The soundness theorem shown below is a direct consequence of Lemma~\ref{lem:stepaunif}:\\ \vspace{-1ex} \begin{restatable}{theorem}{antiunif}{\bf (Soundness)} \label{th:antiunif} Let $t_1$ and $t_2$ be two term patterns and $z$ a variable such that $z \not\in \mathit{var}(t_1) \cup \mathit{var}(t_2)$. If $\langle z, \{ z\mapsto t_1 \sqcup t_2 \} \rangle \rightsquigarrow^! \langle t, P \rangle$, then $\mathit{TERM(S, F)} \models (t_1 \lor t_2) \leftrightarrow \phi^{\langle t, P \rangle}$. \end{restatable}
The above results are proved using the semantic \ensuremath{\textsc{ML}}\xspace satisfaction relation ($\models$). Recall that our goal is to generate proof objects, and thus, we want to prove the above results using the \ensuremath{\textsc{ML}}\xspace proof system. We address this challenge in the following section.
\section{Generating Proof Objects} \label{sec:aunifgen} Our method for generating proof objects is generic in the sense that it can be used for a larger class of term-algebra-based algorithms (e.g., unification, antiunification). A proof object is represented by a sequence of lines of the form:\\[2ex] \centerline{ \begin{tabular}{|c|c|c|} \hline $~~~k~~~$ & ~~~derived pattern ~~~ &~~~ justification~~~\\ \hline \end{tabular} }\\[2ex] where $k$ is the step index and the justification mentions the applied inference rule and the step indices of the premises of the rule (if any). The step indices of the premises must be smaller than $k$, i.e., the premises are justified by the previous lines.\\
Our method is sketched as follows: \begin{enumerate} \item We consider algorithms that transform a pattern $\varphi$ into an equivalent one $\varphi'$. So, a proof object $\mathit{proofObj}$ has to be generated for $\varphi\leftrightarrow \varphi'$. \item The execution of such algorithms for an input $\varphi$ produces a sequence of intermediate patterns $\varphi_1,\ldots,\varphi_{n-1}$ such that $\varphi_{i-1}\leftrightarrow\varphi_i$, $0 < i \le n$, where $\varphi_0 \triangleq \varphi$ and $\varphi_n \triangleq\varphi'$. So, a proof object for each $\varphi_{i-1}\leftrightarrow\varphi_i$ has to be generated in order to build $\mathit{proofObj}$.
\item Assuming that $\varphi_i$ is obtained from $\varphi_{i-1}$ by applying a generic step of the algorithm, we design a proof schema for this step s.t. the instance of this schema for $\varphi_{i-1}$ and $\varphi_i$ produces a proof object $\mathit{proofObj}_i$ for $\varphi_{i-1}\leftrightarrow\varphi_i$. \item The proof object $\mathit{proofObj}$ for the equivalence $\varphi\leftrightarrow \varphi'$ is obtained by the composition $$\mathit{proofObj}_1;\ldots ;\mathit{proofObj}_n;\mathit{proofObj}_{n+1},$$ \noindent where $\mathit{proofObj}_{n+1}$ connects the other proof objects by transitivity, so that the result is itself a proof object (see the sketch below). \end{enumerate} \noindent The approach from~\cite{fm} for unification can now be formalised as an instance of this method. \vspace{-1ex}
\subsection{Generating proof objects for antiunification} The goal is to generate proof objects for the equivalence $\mathit{TERM(S, F)} \models (t_1 \lor t_2) \leftrightarrow \phi^{\langle t, P \rangle}$ (cf. Theorem~\ref{th:antiunif}). For an input antiunification problem $\langle t_0, P_0 \rangle \triangleq \langle z , \{ z \mapsto t_1 \sqcup t_2 \}\rangle$, Plotkin's algorithm generates a sequence of antiunification problems until it reaches the final pair $\langle t_k, P_k \rangle \triangleq \langle t, P \rangle$, which contains the \textit{lgg}\xspace:\\[1ex] \centerline{$\langle t_0, P_0 \rangle \rightsquigarrow \cdots \rightsquigarrow\langle t_i, P_i \rangle \rightsquigarrow \cdots \rightsquigarrow \langle t_k, P_k \rangle.$}\\[1ex] The key observation is that the \ensuremath{\textsc{ML}}\xspace encodings of the antiunification problems from the above sequence are all equivalent (cf. Lemma~\ref{lem:stepaunif}):\\[1ex] \centerline{$\phi^{\langle t_0, P_0 \rangle} \leftrightarrow \cdots \leftrightarrow\phi^{\langle t_i, P_i \rangle} \leftrightarrow \cdots \leftrightarrow \phi^{\langle t_k, P_k \rangle}.$}\\[1ex] \noindent The proof object for $t_1\lor t_2 \leftrightarrow \phi^{\langle t, P \rangle}$ is obtained by instantiating the following schema:\\[1ex] \centerline{ $\vee_\mathit{gen}$\xspace;($\rightsquigarrow_\mathit{step}$\xspace)$^k$;($\leftrightarrow_{\mathit{tranz}}$)$^k,$ }\\[1ex] \noindent where: \begin{itemize}[topsep=0pt, partopsep=1pt, itemsep=1pt] \item $k$ is the number of applications of Plotkin's rule; \item $\vee_\mathit{gen}$\xspace is the proof schema which corresponds to the initial equivalence $t_1 \vee t_2 \leftrightarrow \phi^{\langle t_0, P_0\rangle}$. Recall that $\phi^{\langle t_0, P_0\rangle} = \phi^{\langle z, \{ z \mapsto t_1 \sqcup t_2 \}\rangle} \triangleq\exists z. z \land (z = t_1 \vee z = t_2)$; \item $\rightsquigarrow_\mathit{step}$\xspace is the proof schema corresponding to the equivalences $\phi^{\langle t_i, P_i \rangle} \leftrightarrow \phi^{\langle t_{i+1}, P_{i+1} \rangle}$, with $i \in \{0, \ldots, k-1\}$. All these equivalences are obtained by applying a generic step of the algorithm. The schema $\rightsquigarrow_\mathit{step}$\xspace corresponds to this generic step. \item Finally, using the transitivity of $\leftrightarrow$ $k$ times we obtain a proof object for $t_1 \vee t_2 \leftrightarrow \phi^{\langle t_k, P_k \rangle}$.\\ \end{itemize} The proof schemata for $\vee_\mathit{gen}$\xspace and $\rightsquigarrow_\mathit{step}$\xspace are presented in Sections~\ref{sec:orgen} and~\ref{sec:decaunif}. Both use the proof rules in Figure~\ref{fig:proofsystem} and the macro rules in Section~\ref{sec:macrorules}.
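To make the composition step concrete, the following Python sketch shows how the per-step proof objects are chained by transitivity into a single proof object. The data structures are hypothetical (the actual generator is implemented in Maude, cf. Section~\ref{sec:prototype}); each \texttt{ProofLine} mirrors one row of the tables used throughout this section.
{\fontsize{8}{10}\selectfont
\begin{lstlisting}[language=Python]
from dataclasses import dataclass
from typing import List

@dataclass
class ProofLine:
    index: int          # step index k
    lhs: str            # the derived pattern is the equivalence  lhs <-> rhs
    rhs: str
    justification: str  # applied rule plus premise indices (all < index)

def compose(step_proofs: List[List[ProofLine]]) -> List[ProofLine]:
    """Concatenate proofObj_1;...;proofObj_n (each ending with a line that
    proves phi_{i-1} <-> phi_i) and append proofObj_{n+1}: the transitivity
    lines that finally yield phi <-> phi'."""
    lines: List[ProofLine] = []
    goals: List[int] = []            # 1-based indices of each step's conclusion
    for proof in step_proofs:        # premise indices are assumed re-based
        lines.extend(proof)
        goals.append(len(lines))
    acc = goals[0]
    for g in goals[1:]:
        # from  phi_0 <-> phi_i  and  phi_i <-> phi_{i+1}  derive  phi_0 <-> phi_{i+1}
        lhs, rhs = lines[acc - 1].lhs, lines[g - 1].rhs
        lines.append(ProofLine(len(lines) + 1, lhs, rhs,
                               f"<->-tranz: {acc}, {g}"))
        acc = len(lines)
    return lines
\end{lstlisting}
}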
The proof schema for $\rightsquigarrow_\mathit{step}$\xspace uses two additional (sub)schemata \textsc{$\exists$-{Gen}}\xspace' (shown in Section~\ref{sec:existsgenprim}) and \textsc{Dec}\xspace (shown in Section~\ref{sec:adec}). Example~\ref{ex:toplevelproof} shows a high-level proof object where we apply $\vee_\mathit{gen}$\xspace once, $\rightsquigarrow_\mathit{step}$\xspace and $\leftrightarrow_{\mathit{tranz}}$ twice. This is because the antiunification rule has been applied twice to obtain the \textit{lgg}\xspace. The exact (low-level) proof object corresponding to this example is obtained by instantiating the proof schemata for $\vee_\mathit{gen}$\xspace and $\rightsquigarrow_\mathit{step}$\xspace (presented later in Sections~\ref{sec:orgen} and~\ref{sec:decaunif}).
\begin{example} \label{ex:toplevelproof} We show here the proof object corresponding to the execution from Example~\ref{ex:aunif}, where we use the encodings in Example~\ref{ex:step1}. The structure of the proof object follows the schema $\vee_\mathit{gen}$\xspace;($\rightsquigarrow_\mathit{step}$\xspace)$^k$;($\leftrightarrow_{\mathit{tranz}}$)$^k$, where $k = 2$: \begin{center} \small \begin{tabular}{|l|l|l|} \hline $(1)$ &$ t_1 \vee t_2 \leftrightarrow$ & \\ & \hfill $\exists z. z \land \big(z=(\mathit{cons}\,(\mathit{succ}\,x_1)\,(\mathit{cons}\,\mathit{zero}\,l_1)) \vee z=(\mathit{cons}\,x_2\,(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2))\big)$ & $\vee_\mathit{gen}$\xspace\\ \hline $(2.1)$ &$ \exists z. z \land$ & \\ &\hfill $ \big(z=(\mathit{cons}\,(\mathit{succ}\,x_1)\,(\mathit{cons}\,\mathit{zero}\,l_1)) \vee z=(\mathit{cons}\,x_2\,(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2))\big) \leftrightarrow $ & \\ & $\exists \{z_1,z_2\}. (\mathit{cons}\,z_1\, z_2) \land$ &\\ & \hfill$\Big( \big(z_1=(\mathit{succ}\,x_1) \land z_2=(\mathit{cons}\,\mathit{zero}\,l_1) \big) \vee \big(z_1=x_2 \land z_2=(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \big)\Big)$ & $\rightsquigarrow_\mathit{step}$\xspace\\ \hline $(2.2)$ & $\exists \{z_1, z_2\}. (\mathit{cons}\,z_1\, z_2) \land$&\\ &\hfill$\Big( \big(z_1\!=\!(\mathit{succ}\,x_1) \land z_2\!=\!(\mathit{cons}\,\mathit{zero}\,l_1) \big) \vee \big(z_1\!=\!x_2\land z_2\!=\!(\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \big)\Big) \leftrightarrow$ & \\ & $\exists \{z_1, z_3, z_4\}. (\mathit{cons}\,z_1\, (\mathit{cons}\,z_3\,z_4)) \land $ &\\ & \hfill$\Big( \big(z_1 = (\mathit{succ}\,x_1) \land z_3 = \mathit{zero} \land z_4 = l_1\big) \vee \big(z_1 = x_2 \land z_3 = (\mathit{succ}\,x_2) \land z_4 = l_2\big) \Big)$ & $\rightsquigarrow_\mathit{step}$\xspace\\ \hline $(3.1)$ & $t_1 \vee t_2 \leftrightarrow$ & \\ & $\exists \{z_1, z_2\}. (\mathit{cons}\,z_1\, z_2) \land$& $\leftrightarrow_{\mathit{tranz}}$:\\ &\hfill$\Big( \big(z_1 = (\mathit{succ}\,x_1) \land z_2 = (\mathit{cons}\,\mathit{zero}\,l_1) \big) \vee \big(z_1 = x_2 \land z_2 = (\mathit{cons}\,(\mathit{succ}\,x_2)\,l_2) \big)\Big)$ & $1, 2.1$ \\ \hline $(3.2)$ & $t_1 \vee t_2 \leftrightarrow$ & $\leftrightarrow_{\mathit{tranz}}$:\\ & $\exists \{z_1, z_3, z_4\}. (\mathit{cons}\,z_1\, (\mathit{cons}\,z_3\,z_4)) \land $ & $3.1, 2.2$ \\ & \hfill $\Big( \big(z_1 = (\mathit{succ}\,x_1) \land z_3 = \mathit{zero} \land z_4 = l_1\big) \vee \big(z_1 = x_2 \land z_3 = (\mathit{succ}\,x_2) \land z_4 = l_2\big) \Big)$ & \\ \hline \end{tabular} \end{center} \end{example} \bigskip
Generating proof objects for antiunification turns out to be more complex than in the unification case from~\cite{fm}.
Plotkin's algorithm generates fresh variables at each step. These variables are existentially quantified in the corresponding \ensuremath{\textsc{ML}}\xspace encodings, and handling these quantifiers in proofs is difficult. The main difficulty is that, most of the time, applying an \ensuremath{\textsc{ML}}\xspace proof rule requires considerable preparation work: one has to isolate the goal that can be proved using a particular \ensuremath{\textsc{ML}}\xspace proof rule, and then find a way to reintroduce the existential quantifiers. One also has to make sure that the proof objects generated by $\rightsquigarrow_\mathit{step}$\xspace remain composable, so that $\leftrightarrow_{\mathit{tranz}}$ can be applied. This is why we use several macro rules in addition to the \ensuremath{\textsc{ML}}\xspace proof system. \vspace{-1ex}
\subsection{The Macro Rules} \label{sec:macrorules} Our approach uses the additional rules shown in Figure~\ref{fig:macro-rules}\footnote{Proving these rules using the proof system in Figure~\ref{fig:proofsystem} is beyond the scope of this paper.}. The first part includes three macro rules. $\textsc{$\exists$-Ctx}\xspace$ enables the replacement of a formula with an equivalent one under the $\exists$ quantifier. $\textsc{$\exists$-Scope}\xspace$ extends the scope of $\exists$ over formulas that do not contain variables that can be captured. $\textsc{$\exists$-Collapse}\xspace$ is useful when existentially quantified formulas can be collapsed under a single quantifier. The second part includes two macro rules which are consequences of the specification $\mathit{TERM(S, F)}$ (cf. Proposition~\ref{prop:rulestermalg}). \textsc{$\exists$-{Subst}}\xspace states that $t \land (z = u)$ is equivalent to $t[u/z]$ under the existential quantifier which binds $z$. \textsc{$\exists$-{Gen}}\xspace allows one to replace subterms $\overline{t} \triangleq t_1 \ldots t_n$ of a term with existentially quantified fresh variables $\overline{y} \triangleq y_1 \ldots y_n$. To obtain an equivalent formula, the constraints $\overline{y} = \overline{t}$ are added.
\begin{figure}[h] \centering \renewcommand{\arraystretch}{1.25} \vspace{-2ex} \begin{tabular}{|lrl|} \hline \multicolumn{3}{|c|}{\bf Additional proof rules}\\ \hline \textsc{$\exists$-Ctx}\xspace && \infer{\big(\exists \overline{x}.\varphi_1 \land \varphi_2\big) \leftrightarrow \exists \overline{x}.(\varphi_1 \land \varphi'_2)}{\varphi_2 \leftrightarrow \varphi'_2} \\ \textsc{$\exists$-Scope}\xspace && $\big(\big(\exists \overline{x}. \varphi_1\big) \odot \varphi_2\big) \leftrightarrow \exists \overline{x}. (\varphi_1 \odot \varphi_2), \textrm{if~}\overline{x} \not\in \mathit{free}(\varphi_2)$\\ \textsc{$\exists$-Collapse}\xspace && $\big((\exists \overline{x}. \varphi_1) \vee (\exists \overline{x}. \varphi_2)\big) \leftrightarrow \exists \overline{x}. (\varphi_1 \vee \varphi_2)$\\ \hline \multicolumn{3}{|c|}{\bf Term algebra specific proof rules}\\ \hline $\textsc{$\exists$-{Subst}}\xspace$ && $\exists z. t \land (z = u) \leftrightarrow t[u/z], \textrm{if~}z \not\in\mathit{var}(u)$\\ $\textsc{$\exists$-{Gen}}\xspace$ && $z=(f\, \overline{t}) \leftrightarrow \exists \overline{y}. z=(f\, \overline{y}) \wedge \overline{y} = \overline{t}$, $\textrm{if~}\overline{y} \not\in \mathit{var}\big((f\, \overline{t})\big) \cup \{z\}$ \\ \hline \end{tabular} \caption{The macro rules used to generate proof objects for antiunification. The $\odot$ is a placeholder for one logical operator in the set $\{\land, \leftrightarrow\}$.
Also, unless explicitly delimited using parentheses, the scope of the quantifiers extends as much as possible to the right.} \label{fig:macro-rules} \end{figure}
\subsection{Proof object schema $\vee_\mathit{gen}$\xspace} \label{sec:orgen} The first step of our method is to establish the equivalence between the disjunction $t_1 \vee t_2$ and the encoding of the initial antiunification problem $\langle z , \{ z \mapsto t_1 \sqcup t_2 \}\rangle$, that is, $\exists z. z \land \big((z = t_1) \vee (z = t_2)\big)$. This is done via the proof object schema $\vee_\mathit{gen}$\xspace, which is shown below. To shorten our presentation, the \textsc{ModusPonens} rule from Figure~\ref{fig:proofsystem} is applied here directly over a double implication $\leftrightarrow$ (instead of $\rightarrow$). For the steps k$\mathop{+}$6 and k$\mathop{+}$8 we use \textsc{Propositional} to justify two trivial equivalences: the first says that $\phi_1 \vee \phi_2 \leftrightarrow \phi_1' \vee \phi_2'$ if $\phi_1 \leftrightarrow \phi_1'$ and $\phi_2 \leftrightarrow \phi_2'$; the second is just a well-known distributivity property.\\
\begin{center} \begin{tabular}{|l|l|l|} \hline (k) &$\big(\exists z. t_1 \leftrightarrow z \land (z = t_1)\big)$ & \textsc{$\exists$-{Subst}}\xspace (note: $z[t_1/z] = t_1$) \\ \hline (k$\mathop{+}$1) &$\big(\exists z. t_1 \leftrightarrow z \land (z = t_1)\big) \leftrightarrow \big(t_1 \leftrightarrow \exists z. z \land (z = t_1)\big) $ & \textsc{$\exists$-Scope}\xspace \\ \hline (k$\mathop{+}$2) & $t_1 \leftrightarrow \exists z. z \land (z = t_1)$ & \textsc{ModusPonens}: k, k+1\\ \hline (k$\mathop{+}$3) &$\big(\exists z. t_2 \leftrightarrow z \land (z = t_2)\big)$ & \textsc{$\exists$-{Subst}}\xspace (note: $z[t_2/z] = t_2$) \\ \hline (k$\mathop{+}$4) &$\big(\exists z. t_2 \leftrightarrow z \land (z = t_2)\big) \leftrightarrow \big(t_2 \leftrightarrow \exists z. z \land (z = t_2)\big)$ & \textsc{$\exists$-Scope}\xspace \\ \hline (k$\mathop{+}$5) & $t_2 \leftrightarrow \exists z. z \land (z = t_2)$ & \textsc{ModusPonens}: k+3, k+4\\ \hline (k$\mathop{+}$6) & $t_1 \vee t_2 \leftrightarrow \big(\exists z. z \land (z = t_1)\big) \vee \big(\exists z. z \land (z = t_2)\big)$ & \textsc{Propositional}: k+2, k+5\\ \hline (k$\mathop{+}$7) & $\Big(\big(\exists z. z \land (z = t_1)\big) \vee \big(\exists z. z \land (z = t_2)\big) \Big)\leftrightarrow$ &\\ & \hfill$\exists z. \big(z \land (z = t_1)\big) \vee \big( z \land (z = t_2)\big)$ & \textsc{$\exists$-Collapse}\xspace \\ \hline (k$\mathop{+}$8) & $\big( z \land (z = t_1)\big) \vee \big(z \land (z = t_2)\big) \leftrightarrow $ & \\ & $ z \land \big((z = t_1) \vee (z = t_2)\big)$ & \textsc{Propositional} \\ \hline (k$\mathop{+}$9) & $\exists z. \big( z \land (z = t_1)\big) \vee \big(z \land (z = t_2)\big) \leftrightarrow$ & \\ & $ \exists z. z \land \big((z = t_1) \vee (z = t_2)\big)$ & \textsc{$\exists$-Ctx}\xspace: k$\mathop{+}$8 \\ \hline (k$\mathop{+}$10) & $t_1 \vee t_2 \leftrightarrow \exists z. \big(z \land (z = t_1)\big) \vee \big( z \land (z = t_2)\big)$ & $\leftrightarrow_{\mathit{tranz}}$: k$\mathop{+}$6, k$\mathop{+}$7 \\ \hline (k$\mathop{+}$11) & $t_1 \vee t_2 \leftrightarrow \exists z. z \land \big((z = t_1) \vee (z = t_2)\big)$ & $\leftrightarrow_{\mathit{tranz}}$: k$\mathop{+}$10, k$\mathop{+}$9\\ \hline \end{tabular} \end{center}
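To illustrate how such a schema is instantiated mechanically, the following Python sketch (hypothetical; the real generator is written in Maude, cf. Section~\ref{sec:prototype}) emits the twelve $\vee_\mathit{gen}$\xspace lines for concrete term patterns given as plain strings; the produced indices and justifications mirror the table above.
{\fontsize{8}{10}\selectfont
\begin{lstlisting}[language=Python]
def or_gen(t1: str, t2: str, k: int = 1):
    """Emit the 12 (index, pattern, justification) rows of the \/gen schema."""
    lines = []
    def emit(pattern, just):
        lines.append((k + len(lines), pattern, just))
        return lines[-1][0]
    def subst_scope(t):                 # rows (k)..(k+2), resp. (k+3)..(k+5)
        b = f"z /\\ (z = {t})"
        i = emit(f"exists z . {t} <-> {b}", "E-Subst")
        j = emit(f"(exists z . {t} <-> {b}) <-> ({t} <-> exists z . {b})",
                 "E-Scope")
        return emit(f"{t} <-> exists z . {b}", f"ModusPonens: {i}, {j}")
    i2, i5 = subst_scope(t1), subst_scope(t2)
    d1, d2 = f"z /\\ (z = {t1})", f"z /\\ (z = {t2})"
    i6 = emit(f"{t1} \\/ {t2} <-> (exists z . {d1}) \\/ (exists z . {d2})",
              f"Propositional: {i2}, {i5}")
    i7 = emit(f"((exists z . {d1}) \\/ (exists z . {d2})) <-> "
              f"exists z . ({d1}) \\/ ({d2})", "E-Collapse")
    i8 = emit(f"({d1}) \\/ ({d2}) <-> z /\\ ((z = {t1}) \\/ (z = {t2}))",
              "Propositional")
    i9 = emit(f"exists z . ({d1}) \\/ ({d2}) <-> "
              f"exists z . z /\\ ((z = {t1}) \\/ (z = {t2}))", f"E-Ctx: {i8}")
    i10 = emit(f"{t1} \\/ {t2} <-> exists z . ({d1}) \\/ ({d2})",
               f"<->-tranz: {i6}, {i7}")
    emit(f"{t1} \\/ {t2} <-> exists z . z /\\ ((z = {t1}) \\/ (z = {t2}))",
         f"<->-tranz: {i10}, {i9}")
    return lines

# e.g., or_gen("t1", "t2", k=1) reproduces the table above with k = 1
\end{lstlisting}
}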
\subsection{Proof schema $\rightsquigarrow_\mathit{step}$\xspace} \label{sec:decaunif} The proof schema of $\rightsquigarrow_\mathit{step}$\xspace uses two other (sub)schemata \textsc{$\exists$-{Gen}}\xspace' and \textsc{Dec}\xspace. We explain these first, and then we present the proof schema for $\rightsquigarrow_\mathit{step}$\xspace.
\subsubsection{Proof object schema \textsc{$\exists$-{Gen}}\xspace'} \label{sec:existsgenprim} Recall that $\textsc{$\exists$-{Gen}}\xspace$ (Figure~\ref{fig:macro-rules} -- term algebra specific rule) establishes an equivalence between $z=(f\, \overline{t})$ and $\exists \overline{y}. z=(f\, \overline{y}) \wedge \overline{y} = \overline{t}$, $\textrm{if~}\overline{y} \not\in \mathit{var}\big((f\, \overline{t})\big) \cup \{z\}$, which basically describes a generalisation of $(f\, \overline{t})$. However, most of the time $\textsc{$\exists$-{Gen}}\xspace$ is applied under a conjunction. The proof schema $\textsc{$\exists$-{Gen}}\xspace'$ generalises the $\textsc{$\exists$-{Gen}}\xspace$ macro rule as follows:\\[1ex] \centerline{$\big(\varphi \land z = (f\,\overline{t})\big) \leftrightarrow \exists \overline{z} . \varphi \land z = (f\,\overline{z}) \land \overline{z} = \overline{t},$}\\[1ex] \noindent where $(f\,\overline{t})$ denotes $(f\,t_1\,\ldots\,t_n)$, $ (f\,\overline{z})$ stands for $(f\,z_1\,\ldots\,z_n)$, and $\overline{z} = \overline{t}$ denotes the conjunction $\bigwedge_{i=1}^n z_i = t_i$. Note that $\varphi$ is safely introduced under the existential quantifier. The proof schema \textsc{$\exists$-{Gen}}\xspace' is shown below:
\begin{center} \begin{tabular}{|l|l|l|} \hline (k) &$z = (f\,\overline{t}) \leftrightarrow \exists \overline{z} . z = (f\,\overline{z}) \land \overline{z} = \overline{t}$ & $\textsc{$\exists$-{Gen}}\xspace$ \\ \hline (k$\mathop{+}$1) &$\big(\varphi \land z = (f\,\overline{t})\big) \leftrightarrow $ & \\ & \hfill $ \varphi \land \exists \overline{z} . z = (f\,\overline{z}) \land \overline{z} = \overline{t}$ & \textsc{Propositional}: k \\ \hline (k$\mathop{+}$2) &$\big(\varphi \land \exists \overline{z} . z = (f\,\overline{z}) \land \overline{z} = \overline{t}\big) \leftrightarrow $ & \\ & \hfill $ \exists \overline{z} . \varphi \land z = (f\,\overline{z}) \land \overline{z} = \overline{t}$ & $\textsc{$\exists$-Scope}\xspace$, $\mathit{var}(\varphi)\cap\{z_1, \ldots, z_n\} = \emptyset$ \\ \hline (k$\mathop{+}$3) &$\big(\varphi \land z = (f\,\overline{t})\big) \leftrightarrow $ & \\ & \hfill $ \exists \overline{z} . \varphi \land z = (f\,\overline{z}) \land \overline{z} = \overline{t}$ & $\leftrightarrow_{\mathit{tranz}}$: k$\mathop{+}$1, k$\mathop{+}$2\\ \hline \end{tabular} \end{center}
\noindent At step $k\mathop{+}1$ we use \textsc{Propositional}; in particular, we use the property that if $\varphi_1 \leftrightarrow \varphi_2$ then $\varphi \land \varphi_1 \leftrightarrow \varphi \land \varphi_2$. This schema is applied in a certain context, where $\mathit{var}(\varphi)\cap\{z_1, \ldots, z_n\} = \emptyset$, because $z_1, \ldots, z_n$ are always fresh variables introduced by Plotkin's antiunification algorithm.
\subsubsection{Proof schema {\textsc{Dec}\xspace}} \label{sec:adec} Once we have equivalent forms for $z=(f\, \overline{t})$ (i.e., $\exists \overline{y}. z=(f\, \overline{y}) \wedge \overline{y} = \overline{t}$, cf. $\textsc{$\exists$-{Gen}}\xspace'$), we are now ready to tackle disjunctions $(f\,\overline{u}) \vee (f\,\overline{v})$.
\textsc{Dec}\xspace captures a decomposition: $(f\,\overline{u}) \vee (f\,\overline{v})$ is equivalent to a conjunction between $(f\,\overline{z})$ and $(\overline{z} = \overline{u}) \vee (\overline{z} = \overline{v})$, where again $\overline{z} = \{ z_1,\ldots,z_n \}$ are existentially quantified. In addition, \textsc{Dec}\xspace performs the decomposition under a conjunction:\\[1ex] \centerline{$ \big((\varphi \land z = (f\,\overline{u})) \vee (\varphi' \land z = (f\,\overline{v})) \big) \leftrightarrow \exists \overline{z} . z = (f\,\overline{z}) \land \big((\varphi \land \overline{z} = \overline{u}) \vee (\varphi' \land \overline{z} = \overline{v})\big), $}\\[1ex] \noindent where $(f\,\overline{u})$ means $(f\,u_1\,\ldots\,u_n)$, $ (f\,\overline{z})$ stands for $(f\,z_1\,\ldots\,z_n)$, and the equality $\overline{z} = \overline{u}$ denotes $\bigwedge_{i=1}^n z_i = u_i$. Similarly, $(f\,\overline{v})$ denotes $(f\,v_1\,\ldots\,v_n)$ and $\overline{z} = \overline{v}$ is $\bigwedge_{i=1}^n z_i = v_i$. The schema for \textsc{Dec}\xspace uses \textsc{$\exists$-{Gen}}\xspace':\\
\begin{center} \begin{tabular}{|l|l|l|} \hline (k) & $\big(\varphi \land z\mathop{=}(f\,\overline{u})\big) \leftrightarrow \exists \overline{z} . \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}$ & \textsc{$\exists$-{Gen}}\xspace'\\ \hline (k$\mathop{+}$1) & $\big(\varphi' \land z\mathop{=}(f\,\overline{v})\big) \leftrightarrow \exists \overline{z} . \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v}$ & \textsc{$\exists$-{Gen}}\xspace'\\ \hline (k$\mathop{+}$2) & $\big(\varphi \land z\mathop{=}(f\,\overline{u})\big) \lor \big(\varphi' \land z\mathop{=}(f\,\overline{v})\big) \leftrightarrow $ & \textsc{Propositional}:\\ & \hfill $\big(\exists \overline{z} . \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}\big) \vee \exists \overline{z} . \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v}$ & k, k$\mathop{+}$1\\ \hline (k$\mathop{+}$3) & $\big(\exists \overline{z} . \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}\big) \vee \exists \overline{z} . \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v} \leftrightarrow $ & \\ & \hfill $\exists \overline{z} . \big( \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}\big) \vee \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v}$ & \textsc{$\exists$-Collapse}\xspace \\ \hline (k$\mathop{+}$4) & $\big(\varphi \land z\mathop{=}(f\,\overline{u})\big) \lor \big(\varphi' \land z\mathop{=}(f\,\overline{v})\big) \leftrightarrow $ & \\ & \hfill $\exists \overline{z} . \big( \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}\big) \vee \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v}$ & $\leftrightarrow_{\mathit{tranz}}$: k$\mathop{+}$2, k$\mathop{+}$3 \\ \hline (k$\mathop{+}$5) & $\big( \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}\big) \vee \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v} \leftrightarrow$ & \\ & \hfill $z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & \textsc{Propositional} \\ \hline (k$\mathop{+}$6) & $\exists \overline{z}.
\big( \varphi \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{u}\big) \vee \varphi' \land z\mathop{=}(f\,\overline{z}) \land \overline{z}\mathop{=}\overline{v} \leftrightarrow$ & \\ & \hfill $\exists \overline{z}. z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & \textsc{$\exists$-Ctx}\xspace: k$\mathop{+}$5 \\ \hline (k$\mathop{+}$7) & $\big(\varphi \land z\mathop{=}(f\,\overline{u})\big) \lor \big(\varphi' \land z\mathop{=}(f\,\overline{v})\big) \leftrightarrow $ & \\ & \hfill $\exists \overline{z}. z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & $\leftrightarrow_{\mathit{tranz}}$: k$\mathop{+}$4, k$\mathop{+}$6 \\ \hline \end{tabular} \end{center}
\subsubsection{Proof schema $\rightsquigarrow_\mathit{step}$\xspace} \label{sec:stepschema} Recall that in each step $\langle t_i,P_i \rangle \rightsquigarrow \langle t_{i+1}, P_{i+1} \rangle$, $t_{i+1}$ is an instance of $t_{i}$ (i.e., $t_i$ is more general than $t_{i+1}$), and both $t_{i}$ and $t_{i+1}$ are generalisations of the initial term patterns. Also, recall that both $\phi^{\langle t_i,P_i \rangle}$ and $\phi^{\langle t_{i+1},P_{i+1} \rangle}$ are existentially quantified conjunctions between a term pattern (e.g., the generalisations $t_i$, $t_{i+1}$) and a predicate (cf. Definition~\ref{def:auaspattern}). Our final step is to add the missing existential quantifiers and the missing term patterns (generalisations) to the equivalences obtained using \textsc{Dec}\xspace. The schema which does all the above is summarised below:\\
\begin{center} \begin{tabular}{|l|l|l|} \hline (k$\mathop{+}$1) & $\big(\varphi \land z\mathop{=}(\mathit{f}\,\overline{u})\big) \vee \big(\varphi' \land z\mathop{=}(\mathit{f}\,\overline{v})\big) \leftrightarrow $ & \\ & \hfill $ \exists \overline{z}. z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & \textsc{Dec}\xspace \\ \hline (k$\mathop{+}$2) & $\exists\overline{x}.t \land \big(\varphi \land z\mathop{=}(\mathit{f}\,\overline{u})\big) \vee \big(\varphi' \land z\mathop{=}(\mathit{f}\,\overline{v})\big) \leftrightarrow$ & $z \in \overline{x}\mathop{=}\mathit{var}(t)$ \\ & \hfill $\exists\overline{x}.t \land \exists \overline{z}. z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & \textsc{$\exists$-Ctx}\xspace: k+1 \\ \hline (k$\mathop{+}$3) & $ t \land \exists \overline{z}. z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) \leftrightarrow$ & $\overline{z}$ fresh \\ & \hfill $ \exists \overline{z}. t \land z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & \textsc{$\exists$-Scope}\xspace \\ \hline (k$\mathop{+}$4) & $ \exists \overline{x}. t \land \exists \overline{z}. z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) \leftrightarrow$ & $\overline{x} = \mathit{var}(t)$ \\ & \hfill $ \exists \overline{x}. \exists \overline{z}.
t \land z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & \textsc{$\exists$-Ctx}\xspace: k+3\\ \hline (k$\mathop{+}$5) & $\exists\overline{x}.t \land \big(\varphi \land z\mathop{=}(\mathit{f}\,\overline{u})\big) \vee \big(\varphi' \land z\mathop{=}(\mathit{f}\,\overline{v})\big) \leftrightarrow$ & $\leftrightarrow_{\mathit{tranz}}$:\\ & \hfill $ \exists \overline{x}. \exists \overline{z}. t \land z\mathop{=}(f\,\overline{z}) \land \big( (\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big)$ & k+2, k+4 \\ \hline (k$\mathop{+}$6) & $\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z}) \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) \leftrightarrow$ & \\ & \hfill $\big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z})\big) \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) $ &\textsc{$\exists$-Scope}\xspace \\ \hline (k$\mathop{+}$7) & $\exists \{\overline{x}, \overline{z}\}.t \land z\mathop{=}(\mathit{f}\,\overline{z}) \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) \leftrightarrow$ & \\ & \hfill $\exists \{\overline{x}, \overline{z}\}\setminus\{z\}.\big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z})\big) \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) $ &\textsc{$\exists$-Ctx}\xspace: k+6 \\ \hline (k$\mathop{+}$8) & $\exists\overline{x}.t \land \big(\varphi \land z\mathop{=}(\mathit{f}\,\overline{u})\big) \vee \big(\varphi' \land z\mathop{=}(\mathit{f}\,\overline{v})\big) \leftrightarrow$ & $\leftrightarrow_{\mathit{tranz}}$: \\ & \hfill $\exists \{\overline{x}, \overline{z}\}\setminus\{z\}.\big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z})\big) \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) $ & k+5, k+7 \\ \hline (k$\mathop{+}$9) & $\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z}) \leftrightarrow t[(\mathit{f}\,\overline{z})/z]$ & \textsc{$\exists$-{Subst}}\xspace \\ \hline (k$\mathop{+}$10) & $\big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z}) \leftrightarrow t[(\mathit{f}\,\overline{z})/z]\big) \leftrightarrow \big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z})\big) \leftrightarrow t[(\mathit{f}\,\overline{z})/z] $ & \textsc{$\exists$-Scope}\xspace\\ \hline && \textsc{ModusPon.}:\\ (k$\mathop{+}$11) & $\big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z})\big) \leftrightarrow t[(\mathit{f}\,\overline{z})/z]$ & k+9, k+10\\ \hline (k$\mathop{+}$12) & $\exists \{\overline{x}, \overline{z}\}\setminus\{z\}.\big(\exists z.t \land z\mathop{=}(\mathit{f}\,\overline{z})\big) \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) \leftrightarrow $ & \\ &\hfill $ \exists \{\overline{x}, \overline{z}\}\setminus\{z\}.
t[(\mathit{f}\,\overline{z})/z] \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) $ & \textsc{$\exists$-Ctx}\xspace: k+11\\ \hline (k$\mathop{+}$13) & $\exists\overline{x}.t \land \big(\varphi \land z\mathop{=}(\mathit{f}\,\overline{u})\big) \vee \big(\varphi' \land z\mathop{=}(\mathit{f}\,\overline{v})\big) \leftrightarrow$ & $\leftrightarrow_{\mathit{tranz}}$:\\ &\hfill $ \exists \{\overline{x}, \overline{z}\}\setminus\{z\}. t[(\mathit{f}\,\overline{z})/z] \land \big((\varphi \land \overline{z}\mathop{=}\overline{u}) \vee (\varphi' \land \overline{z}\mathop{=}\overline{v})\big) $ & k+8, k+12 \\ \hline \end{tabular} \end{center} \medskip
\noindent This schema uses the fact that Plotkin's algorithm generates fresh variables at each antiunification step. Here, $\overline{x} = \{x_1, \ldots, x_n\}=\mathit{var}(t)$ and $\exists \{\overline{x}, \overline{z}\}$ stands for $\exists \{x_1, \ldots, x_n, z_1, \ldots, z_m\}$. At k$\mathop{+}$7, we directly use $\exists \{\overline{x}, \overline{z}\}$ instead of $\exists \{\overline{x}, \overline{z}\}\setminus\{z\}.\exists z$.
\section{A tool for certifying antiunification} \label{sec:prototype} We implement a prototype~\cite{maude-tool-for-certif-aintiunif} of our proof object generation mechanism and a checker for the generated proof objects. The proof generator and the proof checker are implemented in Maude~\cite{allAboutMaude}. Both tools can be used directly in Maude, but we also created a Python interface to facilitate user interaction with the Maude tools. The Python script takes easy-to-write specifications as input, automatically calls the Maude proof generator and the proof checker behind the scenes, and outputs a proof and a checking status. Each problem is specified minimally in an input file whose content is self-explanatory. For example, the antiunification problem from Example~\ref{ex:aunif} is specified as:
{\fontsize{8}{10}\selectfont
\begin{lstlisting}
variables: x1, x2, l1, l2
symbols: cons, succ, zero
problem: cons(succ(x1),cons(zero,l1))=?cons(x2,cons(succ(x2),l2))
\end{lstlisting}
}
The Python script parses the input and extracts the variables, the symbols, and the antiunification problem. It automatically infers the arities of the symbols and reports errors when the input is not well-formed or the arities are inconsistent (e.g., the same symbol used with different numbers of arguments). Then it calls the Maude proof generator and checker in the background. The output from Maude is post-processed in Python, and the user can inspect the proof and the checking status ({\tt true} or {\tt false}):
{\fontsize{8}{10}\selectfont
\begin{lstlisting}[language=c++]
> python3 ml-antiunify.py tests/samples/13_paper_cons_succ.in
Proof of: // goal: ...
// generated proof...
Checked: true
\end{lstlisting}
}
\noindent For this particular example, the generated proof has 84 proof lines, as shown in the first row of Table~\ref{tbl:results}. We tested our prototype on larger inputs as well. Our goal was to see if the size of the generated proof objects for real-life language configurations is indeed manageable (e.g., the size should increase linearly, not exponentially). Also, we wanted to check whether our proof-generation schemas are correctly instantiated and that they compose as expected into the final proof objects.
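As an illustration of the front-end just described, the following Python sketch (with hypothetical helper names, not the actual \texttt{ml-antiunify.py} code) parses terms written in the input syntax and infers the symbol arities, rejecting ill-formed inputs:
{\fontsize{8}{10}\selectfont
\begin{lstlisting}[language=Python]
import re

def parse_term(s: str, pos: int = 0):
    """Tiny recursive-descent parser for terms like cons(succ(x1),l1)."""
    m = re.match(r"\s*([A-Za-z][A-Za-z0-9]*)\s*", s[pos:])
    name, pos = m.group(1), pos + m.end()
    args = []
    if pos < len(s) and s[pos] == "(":
        pos += 1
        while True:
            arg, pos = parse_term(s, pos)
            args.append(arg)
            if s[pos] == ",":
                pos += 1
            else:
                break
        assert s[pos] == ")", "unbalanced parentheses"
        pos += 1
    return (name, args), pos

def infer_arities(term, variables, arities):
    """Record each symbol's arity; reject variables applied to arguments
    and symbols used with two different arities."""
    name, args = term
    if name in variables:
        if args:
            raise ValueError(f"variable {name} used with arguments")
    elif arities.setdefault(name, len(args)) != len(args):
        raise ValueError(f"symbol {name} used with inconsistent arities")
    for a in args:
        infer_arities(a, variables, arities)

# For the specification shown above:
variables = {"x1", "x2", "l1", "l2"}
arities = {}
for side in ("cons(succ(x1),cons(zero,l1))", "cons(x2,cons(succ(x2),l2))"):
    term, _ = parse_term(side)
    infer_arities(term, variables, arities)
print(arities)  # {'cons': 2, 'succ': 1, 'zero': 0}
\end{lstlisting}
}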
The inputs that we use are inspired by the K definitions of several languages, including the K definitions of C~\cite{ellison-rosu-2012-popl} and Java~\cite{DBLP:conf/popl/BogdanasR15}. In these K definitions, the configurations are quite big (i.e., $\sim$130 nodes for C, $\sim$65 nodes for Java). From these definitions we extracted some large term patterns which we use as inputs for antiunification. Table~\ref{tbl:results} shows some of the results that we obtained\footnote{Details can be found here: \url{https://github.com/andreiarusoaie/certifying-unification-in-aml/tree/master/tests/samples\#readme}}. The input terms in our specifications represent C and Java symbolic configurations converted into our input format. We use the specification file size to measure the input size and the number of proof lines to measure the size of the proof object. As expected, the size of the generated proof objects depends on the input size. For language definitions that have larger configurations the generated proofs are big (e.g., $\sim$5000 lines for C and $\sim$2300 lines for Java).
\begin{table} \caption{The results obtained when generating proof objects for large inputs inspired by K definitions of real-life languages (C and Java).\\[1ex]} \centering \begin{tabular}{|l|l|c|c|} \hline Language & File & File size & Proof object size \\ & name & (kB) & (no. of lines) \\ \hline list of nats & \texttt{13\_paper\_cons\_succ.in} & 0.122& 84\\ C& \texttt{18\_c\_declare\_local.in}& 14& 5052\\ Java&\texttt{19\_java\_method\_invoke.in}& 6& 2352\\ \hline \end{tabular} \label{tbl:results} \end{table}
For each step of the antiunification algorithm, the proof generator (which implements the schemas discussed in Section~\ref{sec:aunifgen}) produces a fixed number of proof lines. Therefore, the size of the proof object is directly proportional to the number of execution steps of the antiunification algorithm. In the worst case, when executed on an input $\langle z , \{ z \mapsto t_1 \sqcup t_2 \}\rangle$, the number of execution steps of the antiunification algorithm is given by the size of the smaller of the two term patterns $t_1$ and $t_2$. An example of such a worst case is when one term is an instance of the other, i.e., the \textit{lgg}\xspace of $t_1$ and $t_2$ is $t_i$ for some $i \in \{1,2\}$.\\
The tests that we performed on large inputs show that our proof object generation method can be implemented and used in practice. Our prototype produces correct output (i.e., the proof objects are successfully checked by the checker), which gives us confidence that no corner cases were missed in our approach. \vspace{-1ex}
\section{Conclusions} \label{sec:conclusions} \vspace{-1ex} In order to obtain certified symbolic execution, the components of an execution step have to carry proof objects for their actions. Two examples of such components are the unification and antiunification algorithms. In this paper we proposed a generic method for generating proof objects for the tasks that the K tool performs.
We showed how this method works for the case of antiunification by providing schemata for generating proof objects. More precisely, we used Plotkin's antiunification to normalise disjunctions and to generate the corresponding proof objects. We also provided a prototype implementation of our proof object generation technique and a checker for the generated proof objects. The prototype generates proof objects whose size depends on the number of steps performed by the antiunification algorithm, which is in turn bounded by the size of the inputs. We successfully used our prototype on complex inputs inspired by the K semantics of C and Java. This indicates that our approach is practical even for large inputs.\\ \textbf{Future work} In order to obtain a proof object that uses only the Hilbert-style proof rules, the next step is to find proof schemata for the macro rules in Figure~\ref{fig:macro-rules}. This will allow us to use the newest proof checker for \ensuremath{\textsc{ML}}\xspace implemented in Metamath~\cite{checkermm}. Both checkers (the existing Metamath one and our Maude implementation) have the same functionality w.r.t. the \ensuremath{\textsc{ML}}\xspace proof system. So far, we have preferred our Maude implementation because it made handling the macro rules easier. Also, the \ensuremath{\textsc{ML}}\xspace syntax module written in Maude is reused by both the checker and the proof generator. However, the short-term goal is to use the Metamath checker so that the proof object generator and the checker are completely independent. \bibliographystyle{eptcs}
\section{Introduction} Semantic segmentation is a foundational technology in autonomous driving applications, because it provides the vehicle with information about its surroundings that is critical to safely and reliably navigate the environment. Great strides have been made to improve this technology in the autonomous driving scenario, using supervised deep learning methods trained on large quantities of data with pixel-wise annotations. However, data collection and annotation are time-consuming, expensive, and hard to scale. A successful strategy to mitigate this issue is to rely on simulators to generate massive amounts of synthetic data \cite{Richter2016GTA, Ros2016Synthia, tavera2020idda}. This solution has the benefit that synthetic data is easy and cheap to collect, and the semantic annotations generated automatically by the graphics engines are perfect. The downside is that there is a significant shift between the synthetic domain of the training data and the real domain of the application. There are unsupervised and semi-supervised solutions that address this domain shift \cite{survey_uda1, survey_uda2}; however, they still need large numbers of images from the real domain, thus falling back into the data collection problem. A more viable solution is to consider a few-shot setting where only a few annotated images from the real target are needed, rather than many target images without annotation.
The few-shot learning problem has been studied in several visual learning scenarios (see \cref{sec:related} for a review of previous works). One of its main challenges is dealing with the intrinsic imbalance between source and target data \cite{sun2019not}. When few-shot learning is considered within the semantic segmentation scenario, this issue is exacerbated by the intrinsic pixel-wise imbalance among segmented classes: some classes are both extremely frequent and spatially extended (\eg, sky, road), while others appear seldom and are small in size (\eg, traffic sign). This implies that there can be a great disproportion in the number of pixels per class available in the target domain, with some classes scarcely represented or even missing. This imbalance is more pronounced than in other problem settings, causing image-wise adversarial training methods to align large and well-represented classes, resulting in less accurate mapping of those that are under-represented in the target domain (see \cref{fig:teaser}).
We argue that to successfully address cross-domain few-shot learning in semantic segmentation, it is imperative to embed in the solution the intrinsic pixel-wise nature of the segmentation task. To do this, we introduce the Pixel-By-Pixel Cross-Domain Alignment framework ({\our}), which uses a novel pixel-wise discriminator and modulates the adversarial loss for each pixel to: (i) align the source and target domains pixel-wise; (ii) avoid further aligning correctly represented pixels, reducing negative transfer; (iii) regularize the training of under-represented classes to avoid overfitting. The pixel-wise adversarial training is assisted by a sample selection procedure that handles the imbalance between source and target data by progressively eliminating samples from the source domain. The two mechanisms coexist within an end-to-end training process.
Summarizing, the main contributions of this paper are: \begin{itemize} \itemsep -0.2em \item we propose the first algorithm for cross-domain few-shot semantic segmentation able to deal with classes scarcely represented in the training data by spatially aligning the domains pixel by pixel; \item we define a new pixel-wise adversarial loss that aligns source and target domains locally while reducing negative transfer and avoiding overfitting the under-represented classes; \item we evaluate our architecture on the two standard synthetic-to-real scenarios, \ie, GTA5$\to$Cityscapes and SYNTHIA$\to$Cityscapes, where it sets new state-of-the-art scores. Additionally, an in-depth ablation study analyzes the influence of all the features introduced by our method. \end{itemize}
\section{Related Works} \label{sec:related} \vspace{-5pt} \myparagraph{Semantic Segmentation.} Over the last few years semantic segmentation has achieved remarkable results thanks to the widespread use of deep learning \cite{long2015fully, chen2018encoder, zhao2017pyramid, lin2017refinenet, zhang2018exfuse}. The current state-of-the-art methods differentiate themselves in the strategy applied to condition the semantic information on the global context. Methods like RefineNet~\cite{lin2017refinenet}, PSPNet~\cite{zhao2017pyramid}, ExFuse~\cite{zhang2018exfuse} or DeepLab~\cite{chen2017deeplab, chen2017rethinking, chen2018encoder} are designed to capture objects as well as image context at multiple scales. Other works model the hierarchical or the spatial dependencies to boost the pixel-level classifier \cite{chen2017rethinking,ghiasi2016laplacian}. One problem with all these methods is that they require a large number of densely annotated images, which are expensive and time-consuming to obtain. This issue has spurred the creation of synthetic datasets~\cite{tavera2020idda, Ros2016Synthia, Richter2016GTA} that offer high-quality images with automatically generated semantic labels. Despite the clear advantages in terms of data availability and quality of the annotations, models trained using synthetic datasets face a drastic domain gap when tested on real images. \vspace{-5pt}
\myparagraph{Domain Adaptation.} Domain Adaptation (DA) refers to the study of solutions to bridge the domain gap that is present when the data used to develop the model (source) and the data the model is applied to (target) come from different distributions. Some of these solutions seek to minimize a measure of the discrepancy across domains, like the MMD in \cite{geng2011daml, pmlr-v37-long15}. Other methods exploit generative networks and image-to-image translation algorithms to generate target images conditioned on the source domain, or vice-versa~\cite{hoffman18cycada,wu2018dcan,yang2020fda}. Strategies like~\cite{li2019bidirectional, kim2020learning, yang2020fda} combine image-to-image translation with self-learning, using the predictions of a previously trained model as pseudo-labels to fine-tune and reinforce the model itself. Finally, the most popular approach for domain adaptation in semantic segmentation is adversarial training~\cite{vu2019advent,luo2019taking,chang2019all}. In the Unsupervised DA setting, Luo \etal~\cite{luo2019taking} introduce the negative transfer problem caused by the common global-level adversarial alignment strategy, addressing it with a co-training strategy and an alignment performed at category level.
Conversely, we focus on Few-Shot DA for the autonomous driving scenario and propose a novel training strategy that strengthens domain alignment at pixel level while addressing both negative transfer and overfitting on the target domain. \vspace{-5pt}
\myparagraph{Few Shot.} Few-shot learning deals with novel classes given only a few images \cite{shaban2017one, rakelly2018few, dong2018few, zhang2019canet, snell2017prototypical, koch2015siamese, finn2017model, hariharan2017low, sung2018learning}. The problem has been extensively studied in the context of image classification \cite{ snell2017prototypical, koch2015siamese, finn2017model, hariharan2017low,sung2018learning} and, only recently, in the context of semantic segmentation \cite{shaban2017one, rakelly2018few, dong2018few, zhang2019canet}. In contrast to few-shot learning, the purpose of few-shot domain adaptation is to transfer the knowledge from a well-annotated source dataset to a target one containing only a few annotated images \cite{dong2018few,luo2020adversarial,motiian2017few,zhang2019few}. FSDA~\cite{zhang2019few} tackles this problem in semantic segmentation with a two-stage method: the first stage implements a static label filtering that guides the learning towards the pixels that are difficult to classify; the second stage performs domain adaptation at image level via two image-wise domain discriminators, using all the source images, which forces a negative transfer to the target domain. Conversely, our work achieves domain alignment at pixel-level granularity using a new pixel-wise discriminator and a new loss function exploiting the semantic and visual information of each individual pixel. Moreover, we use a novel sample selection strategy to limit the number of source images used and avoid negative transfer.
\myparagraph{Knowledge Distillation.} Knowledge distillation (KD) \cite{hinton_2015} is applied to transfer knowledge from a cumbersome model to a lighter model, with the aim of improving the performance of the latter by forcing the predictions of the two networks to match. It was first applied to image classification \cite{NIPS2014_ea8fcd92, Romero2015FitNetsHF, Zagoruyko2017AT} and object detection \cite{li_2017}, and only in recent years has it been deployed for semantic segmentation \cite{Xie2018ImprovingFS, liu2019structured, Shu2020ChannelwiseDF} and incremental learning \cite{cermelli2020modeling, Douillard2020PLOPLW, michieli2019}. With {\our} we use KD as a regularization term \cite{yuan2020revisiting} to avoid catastrophic forgetting of the acquired knowledge and to avoid overfitting the small number of target images provided by the considered few-shot setting.
\section{Method} \subsection{Problem Setting} \label{sec:problem} We consider the semantic segmentation task in the cross-domain few-shot setting that was formulated by Zhang \etal in \cite{zhang2019few}. The problem setting defines $\mathcal{K}$-shot as a task providing $\mathcal{K}$ real images randomly selected from each of the $\mathcal{N}$ cities of the target dataset. For example, in the 1-shot setting with Cityscapes as the target dataset, the whole target data consists of 18 annotated frames, because Cityscapes is composed of 18 different cities. This problem setting is tailored for the autonomous driving application, where a single self-driving solution is usually deployed to a finite number of designated cities.
Although not all the datasets available in the literature provide meta-information regarding the division into different cities, this formulation from \cite{zhang2019few} gives a precise and well-established experimental protocol. To tackle the problem, let us denote as $\mathcal{X}$ the set of RGB images composed of the set of pixels $\mathcal{I}$, and as $\mathcal{Y}$ the set of semantic masks associating to each pixel $i \in \mathcal{I}$ a class from the set of semantic classes $\mathcal{C}$. At training time we have available two sets of semantically annotated images: $X_{s} = \{(x^{s}, y^{s})\}$, which is a collection of $N_{s}$ images, with $x^{s} \in \mathcal{X}$ from a synthetic domain (\emph{source}), and $X_{t} = \{(x^{t}, y^{t})\}$, which contains a small number of samples $x^{t} \in \mathcal{X}$ from the real-world domain (\emph{target}). Similarly to~\cite{zhang2019few}, the evaluations discussed in Sec. \ref{sec:experiments} are carried out in the (1-5)-shot setting. In this notation, $y^{s}, \; y^{t} \in \mathcal{Y}$ denote the annotation masks associated with the source and target images, respectively.
In this problem the goal is to use the datasets $X_s$ and $X_t$ to learn a function $f$, parameterized by $\theta$, from the input space $\mathcal{X}$ to a pixel-wise probability, \ie, $f_\theta: \mathcal{X} \rightarrow \mathbb{R}^{|\mathcal{I}|\times|\mathcal{C}|}$, and to evaluate it on unseen images from the target domain. In the following, we indicate the model output at a pixel $i$ for a class $c$ as $p_i^c$, \ie, $p_i^c(x) = f_\theta(x)[i,c]$. Without domain adaptation, the parameters $\theta$ are optimized to minimize the segmentation loss $L_{seg}$: \begin{equation} L_{\text{seg}}(x, y) = - \frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} \alpha(1-p^{y_{i}}_i(x))^{\gamma} \log(p^{y_{i}}_i(x)) \label{eq:focal} \end{equation} where $L_{seg}$ corresponds to a focal loss \cite{focal} and $\alpha(1-p^{y_{i}}_i(x))^{\gamma}$ is its modulating factor.
\subsection{Pixel-by-Pixel Adversarial Training} \label{sec:main_method} Many approaches~\cite{vu2019advent,luo2019taking,chang2019all} in domain adaptation deal with the domain shift problem by aligning the features extracted from the source and target domains in an adversarial manner. The common solution, first introduced by \cite{hoffman2016fcns}, is to play a min-max game between the segmentation network and an image-wise domain discriminator, in which the discriminator predicts the domain a feature belongs to, and the segmentation network tries to deceive it by making source and target features indistinguishable. Since the domains are analyzed and aligned from a global perspective, the discriminator may disregard portions of the scene that expose few pixels of the small classes, focusing mainly on the well-represented ones. As a result, adversarial training would mostly align big and well-represented classes while inducing a negative transfer \cite{luo2019taking} on the others, which leads to poor adaptation. This problem is amplified in the few-shot scenario, since there is a discrepancy between the number of images in the source and target domains, and some target semantic classes may be under-represented or even absent.
\myparagraph{The {\ourloss} Loss.} To address the imbalance among classes and reduce the negative transfer, we propose a novel adversarial loss that analyzes each pixel individually rather than operating on a global level (see \cref{fig:framework}).
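As a reference point for the adversarial losses introduced next, the segmentation objective of \cref{eq:focal} can be sketched in PyTorch as follows; $\alpha$ and $\gamma$ are placeholder values, and the handling of void/ignored pixels is omitted.
\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def focal_seg_loss(logits, target, alpha=1.0, gamma=2.0):
    """Focal loss, averaged over pixels.
    logits: (N,C,H,W) raw scores; target: (N,H,W) class indices."""
    logp = F.log_softmax(logits, dim=1)                      # log p_i^c
    logp_y = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # log p_i^{y_i}
    p_y = logp_y.exp()
    return -(alpha * (1.0 - p_y) ** gamma * logp_y).mean()
\end{lstlisting}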
Our goal is to prioritize and improve pixel alignment using three criteria: (i) align the source and target domains, (ii) avoid further aligning correctly represented pixels, limiting negative transfer, and (iii) regularize the training of infrequent classes, forcing the domain alignment to avoid overfitting. To accomplish this, we use a pixel-wise discriminator whose goal is to discern, for each pixel, which domain it belongs to. The domain discriminator is a computationally less expensive version of the common Fully Convolutional discriminator found in DCGANs~\cite{radford2016unsupervised} (see Sec.~\ref{sec:implementation} for more details). The discriminator $D$ is trained to classify whether the features come from the source or the target domain. Formally, we minimize the following loss: \begin{equation} L_{D}(x^s, x^t) = -\sum_{i\in \mathcal{I}} \log\ D_{i}(f_\theta(x^s)) + \log(1-D_{i}(f_\theta(x^t))), \end{equation} where $D$ is the discriminator, and $D_{i}(x)$ indicates the output probability for the pixel $i$ to belong to the source domain.
However, using a pixel-wise discriminator without considering the class imbalance problem does not prevent a negative transfer effect. Hence, we introduce a novel adversarial loss function (\emph{{\ourloss} Loss}), denoted as $L_{{\ourloss}}$, designed to align each pixel according to its importance. In particular, to determine the strength with which each pixel is aligned, $L_{\text{{\ourloss}}}$ modulates the adversarial loss according to a combination of two terms, each with a specific purpose: \begin{equation} \begin{multlined} \label{eq:fuse} L_{\text{{\ourloss}}}(x^t,y^t) = - \frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} S_{i}(x^t,y^t) B_{i}(y^t) \log D_{i}(f_\theta(x^t)). \end{multlined} \end{equation}
The term $S$ in eq. (\ref{eq:fuse}) is related to the network's classification confidence, and is taken as a per-pixel measure of the ability of the network to represent that pixel: \begin{equation} \label{eq:fuse_S} S_{i}(x^t,y^t) = - \log p^{y_i}_i(x^t), \end{equation} where $p^{y_i}_{i}(x)$ denotes the probability for class ${y_{i}}$ at pixel $i$. High values of $S_{i}$ indicate that the network misrepresents the pixel $i$, whereas a small value indicates that the network is able to correctly represent and classify it. The term $B$ in eq. (\ref{eq:fuse}) represents the imbalance of the pixels and aims to re-balance the classes' contributions based on their frequency in the target dataset: \begin{equation} \label{eq:fuse_B} B_{i}(y^t) = 1- \frac{1}{{|\mathcal{I}|}} \sum_{j \in \mathcal{I}} \mathbbm{1}_{y_j = y_i}, \end{equation} where $\mathbbm{1}$ is the indicator function, being one when $y_j$ and $y_i$ are equal and zero otherwise. Values of $B_{i}$ close to 1 indicate an under-represented class, while values close to 0 indicate a well-represented one. The term $B$ is crucial since the target domain exposes many pixels of some classes (\eg, road, sidewalk) but very few of others (\eg, train, person). Through $B$ we are able to balance the classes, resulting in a more heterogeneous and effective adaptation. We point out that the terms $S$ and $B$ are not used in backpropagation, but rather as a pixel-by-pixel map that modulates the adversarial loss.
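A minimal PyTorch sketch of \cref{eq:fuse} is given below. The tensor shapes are assumptions, void-label handling is again omitted, and \texttt{d\_out} denotes the pixel-wise discriminator's per-pixel probability that a pixel comes from the source domain.
\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def pixadv_loss(logits, target, d_out, eps=1e-7):
    """Pixel-wise adversarial loss. Assumed shapes: logits (N,C,H,W) on
    target images, target (N,H,W) labels, d_out (N,H,W) per-pixel P(source)
    from the pixel-wise discriminator."""
    n, c, h, w = logits.shape
    # S term (Eq. 4): per-pixel cross-entropy, large where the network
    # misrepresents the pixel
    s = F.cross_entropy(logits, target, reduction="none")        # (N,H,W)
    # B term (Eq. 5): one minus the in-image frequency of the pixel's class
    freq = F.one_hot(target, c).float().mean(dim=(1, 2))         # (N,C)
    b = 1.0 - freq.gather(1, target.view(n, -1)).view(n, h, w)   # (N,H,W)
    # S and B only modulate the loss: no gradients flow through them
    weight = (s * b).detach()
    # fool the discriminator: push target pixels towards the source label
    return -(weight * torch.log(d_out + eps)).mean()
\end{lstlisting}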
Summing up, the overall segmentation network training loss function is expressed as follows: \begin{equation} \begin{multlined} \frac{1}{|X_s^k|} \sum_{x^s \in X_s^k} L_{\text{seg}}(x^s, y^s) + \\ \frac{1}{|X_t|} \sum_{x^t \in X_t} L_{\text{seg}}(x^t, y^t) + \lambda L_{\text{{\ourloss}}}(x^t, y^t), \end{multlined} \label{scenetotloss} \end{equation} where $L_{\text{{\ourloss}}}$ is the proposed adversarial pixel-wise {\ourloss} loss and $X_s^k$ is a subset of the source dataset $X_s$ selected with the sample selection procedure.
\subsection{Sample Selection} \label{sec:sample_selection} Due to the extent and variety of the synthetic source dataset, there will be source samples far away and detached from the target domain (\eg, with different perspectives or illumination conditions). Forcing the alignment to these samples can result in negative transfer on the target dataset, lowering the network's overall performance. With this in mind, we propose a sample selection procedure that, working side by side with the {\ourloss} loss, enhances the use of the source data by identifying and selecting source samples that are better aligned with the target semantic distribution. Without affecting the segmentation model, and without taking part in the adversarial learning process, we simultaneously train a global image-wise domain discriminator $D_{g}$ with the following loss: \begin{equation} \begin{multlined} L_{D_g}(x^s, x^t) = - \log\ D_g(f_\theta(x^s)) - \log (1 - D_g(f_\theta(x^t))). \end{multlined} \end{equation}
The main reason for using such a discriminator is to distinguish source from target and to capture both semantic and visual domain information. Formally, at each epoch $k$ we exploit $D_{g}$ to predict the likelihood that a source image carries information that is worth transferring to the target domain, and use this prediction to select a subset $X_s^k$ of source images to be retained from the previous epoch, \ie, $|X_s^{k}| \leq |X_s^{k-1}|$. Following this intuition, an image $x^s \in X_s^{k-1}$ is added to $X_s^{k}$ if $D_{g}(f_\theta(x^s)) < \delta$, where $\delta$ is a predefined threshold. After each epoch we raise the threshold, following the increasing ability of the image-wise discriminator to correctly classify the target data as training progresses, thus selecting an ever-decreasing number of relevant samples (see \cref{fig:sample_selection_example}).
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Images/SampleSelection.pdf} \caption{Illustration of the sample selection mechanism (top-left). At each epoch $k$ the source dataset is subsampled, selecting images that carry worthy information for the target domain. For example, samples with different perspectives or lighting conditions w.r.t. the target data (bottom-right) are discarded.} \label{fig:sample_selection_example} \vspace{-10pt} \end{figure}
\subsection{Fine-Tuning and Knowledge Distillation} \label{sec:fine_tuning} Once the adversarial training is completed and the {\ourloss} loss has aligned the representations of source and target pixels, we can further exploit the available semantic information on the target data to enhance the network representation. However, na\"ively fine-tuning on the target data ignores the domain alignment obtained previously and may lead to overfitting the few target images. To avoid this problem we use a Knowledge Distillation (KD) strategy.
\subsection{Fine-Tuning and Knowledge Distillation} \label{sec:fine_tuning} Once the adversarial training is completed and the {\ourloss} loss has aligned the representations of pixels in the source and target domains, we can further exploit the available semantic information on the target data to enhance the network representation. However, na\"ively fine-tuning on the target data ignores the previously obtained domain alignment and may lead to overfitting the few target images. To avoid this problem we use a Knowledge Distillation (KD) strategy. KD \cite{hinton_2015} was designed to regularize the training of a student network using the output of a teacher network. In our framework the student $f_{\theta_{S}}$ corresponds to the segmentation model being fine-tuned, while the teacher, denoted as $f_{\theta_{T}}$, is a frozen copy of the same network after the adversarial learning process. Formally, we optimize $\theta_{S}$ with the following: \begin{equation} \begin{multlined} \frac{1}{|X_t|} \sum_{x^t \in X_t} L_{\text{seg}}(x^t, y^t) + \lambda_{kd} L_{\text{kd}}(x^t, f_{\theta_{T}}, f_{\theta_{S}}), \end{multlined} \end{equation} where $L_{seg}$ is the segmentation loss from \cref{eq:focal} and $\lambda_{kd}$ is a weighting parameter. $L_{kd}$ is the distillation loss, expressed as follows: \begin{equation} \begin{multlined} L_{kd} = - \sigma(\frac{f_{\theta_{T}}(x^t)}{\tau}) \log \sigma(f_{\theta_{S}}(x^t)), \end{multlined} \end{equation} where $\sigma$ indicates the softmax function and $\tau$ is a temperature, as in \cite{hinton_2015}. \section{Experiments} \label{sec:experiments} \subsection{Datasets and metric} We assess the performance of our method on the two standard synthetic-to-real benchmarks used in the domain adaptation literature: GTA5 \cite{Richter2016GTA} to Cityscapes \cite{Cordts2016Cityscapes}, and SYNTHIA \cite{Ros2016Synthia} to Cityscapes \cite{Cordts2016Cityscapes}. \myparagraph{GTA5}. It consists of 24966 images synthesized from the homonymous video game. The original image size is $1914 \times 1052$. For training and evaluation we use the standard 19 semantic classes in common with Cityscapes. \myparagraph{SYNTHIA}. We use the "RAND-CITYSCAPES" subset, which consists of 9400 images synthesized from a virtual world simulator. The original image resolution is $1280 \times 760$. The 19 classes in common with Cityscapes are considered for training, while the evaluation, following the standard protocol used in \cite{yang2020fda} and \cite{vu2019advent}, is performed on subsets of 13 and 16 classes. \myparagraph{Cityscapes}. It is a real-world dataset collected across several German cities. It consists of 2975 images, but for the experiments we use only a subset according to the standard $K$-shot selection ($K$ images from each of the $M$ cities). We use the whole validation set, made of 500 images, to test our network. The original resolution is $2048 \times 1024$. We evaluate {\our} in these two domain adaptation scenarios: GTA5$\to$Cityscapes and SYNTHIA$\to$Cityscapes. In all tests we use the standard Intersection over Union metric \cite{miou} to measure performance. \subsection{Implementation and training details} \label{sec:implementation} \myparagraph{Architecture.} The segmentation module of our method is DeepLab V2 \cite{chen2017deeplab} with ResNet101 \cite{he2016deep} pre-trained on ImageNet. The pixel-wise discriminator we built is a Fully Convolutional discriminator with $2$ convolutional layers with kernel $3 \times 3$, stride 1 and padding 1, followed by a last convolutional layer with kernel $1\times 1$, stride 1 and padding 0. The channel numbers of the three layers are $\{64, 128, 1\}$. The image-wise discriminator is a common Fully Convolutional discriminator with $5$ convolutional layers with kernel $4\times4$, channel numbers $\{64, 128, 256, 512, 1\}$ and stride 2. For both discriminators, each layer except the last one is followed by a Leaky ReLU with a negative slope of $0.2$.
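The two discriminators are simple enough to write down directly. The following PyTorch sketch follows the layer specification above; only the input channel count and the padding of the image-wise discriminator are our assumptions:

\begin{verbatim}
import torch.nn as nn

def pixel_discriminator(in_ch):
    # Two 3x3 convs (stride 1, padding 1) and a final 1x1 conv,
    # channels {64, 128, 1}; Leaky ReLU after all but the last layer.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, stride=1, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 1, 1, stride=1, padding=0),
    )

def image_discriminator(in_ch):
    # Five 4x4 convs with stride 2, channels {64, 128, 256, 512, 1}.
    chans = [in_ch, 64, 128, 256, 512, 1]
    layers = []
    for i in range(5):
        layers.append(nn.Conv2d(chans[i], chans[i + 1], 4, stride=2,
                                padding=1))
        if i < 4:
            layers.append(nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)
\end{verbatim}

A sigmoid (or an equivalent binary cross-entropy with logits) is applied on top of both outputs to obtain the probabilities used in $L_D$ and $L_{D_g}$.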
\myparagraph{Training.} We implement our method in PyTorch and deploy it on two NVIDIA Tesla V100 GPUs with 16GB of memory each. The segmentation model is trained using batch size $4$ and SGD with an initial learning rate of $2.5 \cdot 10^{-4}$, adjusted at each iteration with a "poly" learning rate decay with a power of $0.9$, momentum $0.9$ and weight decay $0.0005$. The discriminators are trained using the Adam optimizer with learning rate $10^{-5}$ and the same decay schedule as the segmentation model. The momentum parameters for Adam are set to $(0.9, 0.99)$. To reduce the low-level visual domain shift (\eg, color, brightness, etc.) between the source and target domains (both in the adversarial training and in the sample selection phases), we apply to each source image the FFT style translation algorithm from FDA~\cite{yang2020fda}, which is parameterless and computationally light (a minimal sketch is given at the end of this section). {\our} training starts from a version of the segmentation model pre-trained on source data and continues as long as the sample selection module selects relevant source images for the next epoch. The last fine-tuning and knowledge distillation phase lasts 200 iterations. We set $\lambda$ equal to $0.1$ for GTA5 and $1$ for SYNTHIA. The sample selection threshold $\delta$ is set to $0.4$ and doubled at every epoch. Finally, $\lambda_{kd}=0.5$ and $\tau=0.5$. Testing is performed without any post-processing. \myparagraph{Baselines.} Our method is compared to several baselines. The first baseline that we consider is the Source Only model, \ie, the network trained only on the source dataset. The Joint Training (JT) baseline trains the model for 4 epochs on the concatenation of the source and target images. The Fine-Tuning (FT) baseline fine-tunes the Source Only model on the target domain for 30k iterations. Our method, JT and FT use the Focal Loss as the segmentation loss. We then report results for three state-of-the-art methods: FDA \cite{yang2020fda}, NAAE \cite{sun2019not}, and FSDA \cite{zhang2019few}. FDA \cite{yang2020fda} and "Not All Areas are Equal" (NAAE) \cite{sun2019not} are implemented using the same hyper-parameters proposed in their original papers, replacing only the target train set with the $K$-shot selection. For FSDA \cite{zhang2019few} we report the results and follow the implementation details given by the authors. DeepLabV2 with ResNet101 is used as the backbone for all the baselines, with the only exception of NAAE which, as provided by its authors, uses an FCN \cite{long2015fully} with VGG16 \cite{vgg}.
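For completeness, here is a minimal numpy sketch of the FFT style translation of FDA~\cite{yang2020fda}: the low-frequency amplitude of the source spectrum is replaced with that of a (randomly chosen) target image, while the source phase is kept. The band size \texttt{beta} is our assumption, since its exact value is not restated here:

\begin{verbatim}
import numpy as np

def fda_style_transfer(src, trg, beta=0.01):
    # src, trg: (H, W, 3) float arrays with the same shape.
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_trg = np.fft.fft2(trg, axes=(0, 1))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)
    # Center the spectra and swap the low-frequency amplitude band.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = src.shape[:2]
    b = max(1, int(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = \
        amp_trg[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))
    # Recombine the swapped amplitude with the source phase.
    out = np.fft.ifft2(amp_src * np.exp(1j * phase_src), axes=(0, 1))
    return np.real(out)
\end{verbatim}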
We also note that, in the 1-shot setting, the accuracy of FSDA in a few classes (traffic light, motorcycle) drops below the Source Only baseline, which is indicative of negative transfer. This result confirms that {\our} exploits the information from the source images more effectively, depending on the content of the target images. Finally, we observe that our method not only works well on predominant classes such as "road", "sky" and "building", but on average it also improves the recognition of semantic categories that are under-represented, either because they contain few pixels (\eg, "traffic light", where we achieve a $+9.39\%$ \wrt the Source Only) or because they rarely appear (\eg, "train", where we achieve $+15.51\%$ \wrt the Source Only). Overall, on under-represented classes (last column in \cref{table:gta}) we outperform FSDA by $+6.58\%$, demonstrating our ability to correctly align the pixels related to these categories. These results are qualitatively confirmed in Fig. \ref{fig:qualitatives}, where we show that the {\ourloss} loss provides a stronger adaptation for small and rare classes, such as "traffic sign" and "bicycle"; hence, these categories are predicted quite accurately even in the 1-shot setting. \input{Tables/SYNTHIA_Experiments} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{Images/Qualitatives.pdf} \caption{Qualitative results for the GTA$\to$Cityscapes and SYNTHIA$\to$Cityscapes scenarios. The boxes on the images highlight some examples of under-represented classes. Qualitative comparison to the other methods is left to the Supplementary Material.} \label{fig:qualitatives} \end{figure*} \myparagraph{SYNTHIA to Cityscapes.} Results for this scenario are reported in Table \ref{table:synthia} and shown in \cref{fig:qualitatives}, and they confirm what we observed in the first set of experiments. NAAE, FDA and Joint Training lag behind all other methods, confirming that they are not viable solutions to address the cross-domain few-shot problem. Fine-Tuning and FSDA show similar accuracies, although it must be noted that the results of FSDA are reported only for the protocol with 13 classes, where the difficult categories "pole", "fence" and "wall" are excluded from the evaluation. Our method is designed to better handle such categories as well and, on these classes, it improves its performance \wrt the Source Only model by $+22.88\%$, $+13.60\%$ and $+15.16\%$, respectively. Overall, {\our} displays the best mIoU in all (1-5)-shot settings, both with the 13 and 16 classes protocols. It outperforms the Source Only model by a minimum of $+25.72\%$ in the 1-shot scenario and a maximum of $+31.66\%$ in the 5-shot. Compared to the current state-of-the-art, \ie, FSDA, {\our} scores an average accuracy improvement of $+1.69\%$ within the 13 classes protocol and of $+1.86\%$ when considering only the rare classes. The closest method to {\our}, \ie, Fine-Tuning, achieves good results on these three categories but is much less consistent than our solution. In the 16-classes protocol, {\our} achieves an average boost of $+2.56\%$ \wrt the second best competitor, \ie, Fine-Tuning. \subsection{Ablation Study} \label{sec:ablation} \vspace{-2mm} \myparagraph{Contribution of terms in the PixAdv loss.}\label{sec:fuse_loss_ablation} In \cref{table:loss_ablation} we provide an in-depth ablation study to prove the effectiveness of performing domain alignment at the pixel level.
The results are computed with sample selection, fine-tuning and knowledge distillation turned off. The table shows that aligning the source and target domains with an image-wise discriminator yields a substantial improvement ($+11.14\%$) over joint training (no adversarial loss). However, since it discriminates the images globally, it tends to align well-represented classes while ignoring the others. In comparison, the pixel-level adversarial loss, which aligns each pixel separately, further enhances performance by $+2.17\%$. Nevertheless, merely aligning the pixels does not prevent negative transfer and overfitting to the few-shot images. Indeed, re-weighting the pixels based on their frequency ($B$ term) yields a further improvement of $+1.46\%$. Furthermore, decreasing the weight of well-represented pixels ($S$ term) to prevent negative transfer is crucial, providing a $+1.64\%$ improvement \wrt the pixel-wise adversarial loss. Finally, the $B$ and $S$ terms complement each other and boost performance even further when used together. The resulting loss ({\ourloss}) outperforms the image-wise adversarial loss by $+5.86\%$ and the pixel-wise adversarial loss by $+3.69\%$, suggesting that weighting the contribution of each pixel is advantageous to prevent negative transfer and overfitting. \input{Tables/ablation_loss} \myparagraph{Contribution of each component in PixDA.} \vspace{-1pt} In this section we assess to what extent each component of our framework contributes to the final performance. We proceed bottom-up, examining six different cases: (a) the Source Only model; (b) Joint Training; (c) training with the {\ourloss} loss; (d) adding our Sample Selection mechanism; (e) adding the fine-tuning step; and finally, (f) the knowledge distillation that completes the {\our} framework. From the results in \cref{table:da}, it is evident that each component brings an improvement to the overall framework. In particular, the addition of the {\ourloss} Loss improves Joint Training by $+17\%$, indicating that domain alignment is necessary to obtain good performance. The Sample Selection provides an additional improvement of $+1.45\%$, indicating that removing samples far from the target distribution is beneficial. Finally, while na\"ively fine-tuning the network on the target brings a small improvement ($+0.31\%$), using knowledge distillation raises that difference to $+1.42\%$. We remark that adding the {\ourloss} Loss alone already surpasses the state-of-the-art. As a follow-up test, we replaced the Focal Loss with a standard Cross Entropy, yielding a lower but still state-of-the-art result ($48.89\%$) and confirming the effectiveness of our loss. Additional studies to assess the impact of the hyperparameters of our framework are included in the supplementary material due to lack of space. \input{Tables/ablation_da} \subsection{Application to Unsupervised DA} \vspace{-1pt} Although this research targets the few-shot DA setting, which is the most relevant for the autonomous driving application, the problem of class imbalance is also present in Unsupervised DA (UDA). Additional experiments are included in the supplementary material to demonstrate that the PixAdv loss is also effective in the UDA setting. \section{Conclusion} \vspace{-5pt} In this work we address the task of Cross-Domain Few-Shot Semantic Segmentation.
We present a pixel-by-pixel adversarial training strategy that uses a new pixel-wise loss and discriminator to better align the source and target domains and to reduce the negative transfer problem. We also assist the adversarial training with a sample selection procedure that handles the discrepancy between the source and target domains. Our framework achieves state-of-the-art performance in all the 1-to-5 shot settings of the two standard synthetic-to-real benchmarks. Future work will analyze a modified version of the sample selection strategy that selects only the top $K$ most confident source samples rather than increasing the threshold, as well as the application of the {\ourloss} Loss to other settings, such as Unsupervised DA, for which a preliminary assessment is provided in the Supplementary Material, and Multi-Source DA.
\section{Introduction} Lane detection is a pivotal element in driver assistance systems and autonomous vehicles, as lane marker information is essential in maneuvering the vehicle safely on roads. Detecting lanes in real-world scenarios is a challenging task due to adverse weather, lighting conditions and occlusions. As the computational budget available for lane detection in the aforementioned systems is limited, a light-weight, fast and accurate lane detection system is crucial. Recent lane detection approaches fall into two broad classes: semantic segmentation based methods and row-wise classification based methods. While semantic segmentation based methods \cite{pan2018SCNN,CurveLane-NAS,hou2019learning} provide competitive results in terms of accuracy, a common drawback is the reduced speed due to per-pixel classification and large backbones. On the other hand, row-wise classification based methods \cite{yoo2020end,qin2020ultra} focus on improving speed and obtaining real-time performance. However, the inherent limitation of a grid-based representation in row-wise classification methods and the bias towards overfitting due to the similar structure of lanes in the training set may result in reduced accuracy, highlighting the speed-accuracy trade-off in lane detection models. In this work, we propose a simple, light-weight, end-to-end deep learning based lane detection framework with a smaller backbone and fewer multiply-accumulate operations (MACs), following the row-wise classification approach. The inference speed is significantly increased by reducing the computational complexity, and the light-weight network architecture is less prone to overfitting. Moreover, we also introduce a false positive suppression algorithm based on the length of the lane segment and the Pearson correlation coefficient, and a second-order polynomial fitting method, as post-processing techniques to improve the overall accuracy of the system. Comprehensive experimental results are shown on the CULane \cite{pan2018SCNN} benchmark dataset, accompanied by a comparison of our results with other state-of-the-art approaches. An ablation study shows how each of the proposed methods contributes to the speed and the accuracy. Furthermore, we deploy our lane detection framework on an Nvidia Jetson AGX Xavier integrated with the Robot Operating System (ROS) \cite{ros} to demonstrate the capability of our light-weight network architecture to perform real-time lane detection in an embedded system. The trained model is optimized and quantized using TensorRT to increase the inference speed. We also provide qualitative results for locally captured street view images to showcase how well our model generalizes for the task of lane detection. In summary, our contributions are as follows: we introduce a novel, light-weight, end-to-end deep learning architecture supplemented with two effective post-processing techniques for fast and efficient lane detection. Our proposed method drastically improves the inference speed, reaching 411 frames per second (FPS) and surpassing the state-of-the-art while achieving comparable accuracy. We further optimize the trained model using TensorRT and implement it on an embedded system in the ROS ecosystem. The overall system achieves an inference speed of 56 FPS, demonstrating the capability of our method to perform real-time lane detection.
\section{Related Work} \label{sec:related_works} Initially, lane detection research mainly focused on classical image processing algorithms, such as basic hand-crafted features \cite{kluge1995deform,yu1997lane,Ghazali2012road}, color-based approaches \cite{chiu2005lane,lee2009effective}, and traditional feature extraction methods combined with machine learning algorithms such as decision trees and support vector machines \cite{gonzalez2000lane,lanesvm2012}. Although these methods are computationally less expensive, their performance is poor in complex scenarios with occlusions, shadows and different lighting conditions. Recent deep learning based approaches outperform classical methods and can be further divided into two broad classes: semantic segmentation based methods and row-wise classification based methods. In semantic segmentation based methods \cite{pan2018SCNN,CurveLane-NAS,hou2019learning}, classification is done on a per-pixel basis by classifying each pixel as lane or background. A special convolution method known as slice-by-slice convolution is proposed in SCNN \cite{pan2018SCNN}, which enables information propagation within the same layer to improve the detection of long thin structures such as lanes. CurveLane-NAS \cite{CurveLane-NAS} focuses on capturing long-range contextual information and short-range curved trajectory information using a lane-sensitive neural architecture search framework. Attention maps extracted from different layers of a trained model, which contain important contextual information, are used as distillation targets for the lower layers in SAD \cite{hou2019learning}. The pixel-wise computation in semantic segmentation based approaches increases the computational complexity and reduces the inference speed drastically. Row-wise classification based methods \cite{yoo2020end, qin2020ultra} have been able to progress towards real-time lane detection by addressing the computational complexity problem. In these approaches, the input image is divided into a grid and, for each row, the model outputs the probability of each cell belonging to a lane. This approach is first introduced in E2E-LMD \cite{yoo2020end} by converting the output of the segmentation backbone to a row-wise representation using a special module called the horizontal reduction module. The no-visual-clue problem in lane detection is addressed in UltraFast \cite{qin2020ultra} using a low-cost, row-wise classification based network which utilizes global and structural information. Although their approach achieves a state-of-the-art speed of 322.5 FPS, its accuracy is low when compared with other methods. \begin{figure*} \begin{center} \input{fig/arch} \caption{Proposed model architecture. ResNet-14 backbone generates feature maps from the input image. A $ 2 \times 2 $ max pooling layer and a $ 1 \times 1 $ convolutional layer are used to reduce the spatial dimensions and the number of channels. Resulting feature maps are flattened and passed through two fully-connected layers with dropout layers in between. The model predictions are fed through false positive suppression and curve fitting modules to obtain the lane output.} \label{fi:architecture} \end{center} \vspace{-2ex} \end{figure*} \begin{figure} \centering \input{graphics/culane_diagram} \caption{Lane Representation.
The region comprising lanes is divided into a pre-defined number of row anchors ($h$) and gridding cells ($w$).} \vspace{-0.3cm} \label{fig:lane_rep} \end{figure} Almost all of the above-mentioned algorithms have been implemented on high-end computational platforms, and the implementation of lane detectors in embedded systems is a comparatively less researched area. A lane detection algorithm optimized for the PXA255 embedded device has been introduced by \cite{Ming07real}, which achieves a frame rate of 13 FPS. PathMark \cite{pathmark} is another lane detection algorithm, running at 13 FPS on a TI-OMAP4430 based embedded system. An Nvidia Jetson-TK1 board has been used in \cite{lanedeparture} for implementing a real-time lane detection and departure warning system at 44 FPS. In \cite{marcos18fast}, a lane detection and modeling pipeline is presented for embedded platforms, which delivers real-time performance on a Jetson-TX2 embedded device. All of these approaches rely on classical image processing based techniques and do not perform well in complex scenarios when compared with deep learning based approaches. \section{Methodology} \label{sec:method} In this section, we present the lane representation mechanism, a detailed explanation of our model architecture and the algorithms used to further increase the model accuracy. \subsection{Lane Representation} \label{ssec:laneRep} We address the lane detection task as a row-wise classification problem, following the formulation introduced by \cite{qin2020ultra}. The region of the image which contains lanes is divided into a pre-defined number of row anchors ($h$), and each row anchor is divided into a pre-defined number of gridding cells ($w$), as shown in Fig. \ref{fig:lane_rep}. The number of lanes ($c$) is pre-defined, and for each lane, the lane locations are represented by a $h \times w$ grid. An additional cell is attached to the end of each row anchor to indicate the absence of a particular lane in that row anchor. \subsection{Model Architecture} \label{ssec:Arch} We propose a simple, end-to-end, light-weight convolutional neural network based model architecture for the lane detection task, as shown in Fig. \ref{fi:architecture}. The first stage of the proposed model is the backbone, which extracts features from the input image. As the backbone we use ``ResNet-14'', which is obtained by dropping the last four convolutional layers of ResNet-18 \cite{he2016deep} to increase the speed by reducing the computational complexity. The output of the backbone is a feature representation of the image, which is then fed into a $2 \times 2$ max pooling layer for dimensionality reduction in the spatial dimensions. For dimensionality reduction in the channel dimension, a $1 \times 1$ convolution layer is applied. This output is flattened to obtain a one-dimensional tensor, which is then passed through two fully connected layers to obtain the output tensor. Dropout layers are placed in between to further prevent the network from overfitting. The output tensor represents the score of each gridding cell (including the no-lane cell) belonging to each lane in each row anchor.
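The architecture is small enough to sketch in a few lines of PyTorch. The channel width of the $1\times1$ convolution, the hidden size of the first fully connected layer and the dropout rate are our assumptions, since they are not restated here:

\begin{verbatim}
import torch.nn as nn
from torchvision.models import resnet18

class LaneNet(nn.Module):
    def __init__(self, c=4, h=36, w=150, hidden=2048, p_drop=0.1):
        super().__init__()
        r = resnet18(pretrained=True)
        # "ResNet-14": ResNet-18 without its last residual stage
        # (layer4 holds the last four convolutional layers).
        self.backbone = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool,
                                      r.layer1, r.layer2, r.layer3)
        self.pool = nn.MaxPool2d(2)              # 2x2 spatial reduction
        self.reduce = nn.Conv2d(256, 8, 1)       # 1x1 channel reduction
        self.head = nn.Sequential(               # two FC layers + dropout
            nn.Flatten(), nn.Dropout(p_drop),
            nn.LazyLinear(hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, c * h * (w + 1)),
        )
        self.c, self.h, self.w = c, h, w

    def forward(self, x):                        # x: (N, 3, 288, 800)
        f = self.reduce(self.pool(self.backbone(x)))
        return self.head(f).view(-1, self.c, self.h, self.w + 1)
\end{verbatim}

The output tensor directly matches the scores $S_{i,j,k}$ defined next.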
$S_{i,j,k}$ represents the score of the $k$\textsuperscript{th} gridding cell in the $j$\textsuperscript{th} row anchor belonging to the $i$\textsuperscript{th} lane, which can be obtained by, \begin{equation} \label{eq:lane_rep} S_{i,j,k}=f(X), \; \text{s.t.} \; i\in[1,c], \; j\in[1,h], \; k\in[1,w+1] \end{equation} \normalsize Here, $f$, $X$, $c$, $h$ and $w$ stand for the classification model, the input image, the number of lanes, the number of row anchors and the number of gridding cells, respectively. The lane points can then be extracted by choosing the gridding cell with the highest score in each row anchor for each lane. If the last gridding cell is not the cell with the highest score, the location of the $i$\textsuperscript{th} lane in the $j$\textsuperscript{th} row anchor is given by, \begin{equation} \label{eq:location} Loc_{i,j}=\mathrm{argmax}_{k}\; (S_{i,j,k})\;, \; \text{s.t.} \; k \in [1,w] \end{equation} Having the highest score in the last gridding cell implies that the considered lane is not present in the selected row anchor. For training the model, we define the classification loss as the negative log-likelihood loss, which is given by, \begin{equation} \label{eq:classLoss} L_{cls}= \sum_{i=1}^{c} \sum_{j=1}^{h} -\alpha_{i,j,T_{i,j}} \cdot \log( P_{i,j,T_{i,j}}) \end{equation} Here, $T_{i,j}$ denotes the correct location (gridding cell) of the $i$\textsuperscript{th} lane in the $j$\textsuperscript{th} row anchor as per the ground truth, and $P_{i,j,k}$ denotes the probability of the $k$\textsuperscript{th} gridding cell in the $j$\textsuperscript{th} row anchor belonging to the $i$\textsuperscript{th} lane, which can be obtained by, \begin{equation} \label{eq:prob} P_{i,j,k}=\mathrm{softmax}(S_{i,j,k}) \end{equation} $\alpha_{i,j,k}$ is the modulating factor for the focal loss adjustment, as mentioned in \cite{lin2017focal}. \begin{equation} \label{eq:alpha} \alpha_{i,j,k}=(1-P_{i,j,k})^{\gamma} \end{equation} \subsection{False Positive Suppression} \label{ssec:FPSup} We propose two post-processing techniques to reduce false detections in the model output. First, we remove all instances of small lane segments which have fewer detected lane points than a threshold value. Second, we remove all instances of lanes which have a considerable deviation from a straight line. The Pearson correlation coefficient measures the linear correlation between two variables and is given by (\ref{eq:pearson}), where $x_i$ and $y_i$ are the sample data points of the variables $x$ and $y$, and $\bar{x}$ and $\bar{y}$ are the respective means. \begin{equation} \label{eq:pearson} r= \frac{\sum_{i}(x_i-\bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i-\bar{x})^{2}\sum_{i}(y_i - \bar{y})^{2}}} \end{equation} In our case, the Pearson correlation coefficient between the row anchors and gridding cells of an identified lane segment is used to measure how well the lane points can be represented by a straight line. Since the majority of lanes deviate only slightly from a straight line, the Pearson correlation coefficient should be close to one in magnitude. Therefore, we remove all instances of lanes whose Pearson correlation coefficient is below a threshold value in magnitude. \subsection{Curve Fitting} \label{ssec:LSQLF} In most scenarios, lanes are straight lines or curve segments with small curvature values. Therefore, lanes can be approximated to a great extent by second-order polynomials. Since we use a finite number of gridding cells, lanes in the model output are represented in the discrete domain. Second-order polynomial fitting can be used to replace these discrete gridding cell numbers with continuous values, which results in smooth lane segments. A combined sketch of the lane decoding and both post-processing steps is given below.
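A minimal numpy sketch of the full decoding pipeline, combining eq. (\ref{eq:location}) with the two post-processing techniques; the thresholds follow the values reported in Sec. 4.2:

\begin{verbatim}
import numpy as np

def decode_and_refine(scores, min_points=12, r_thresh=0.995):
    # scores: (c, h, w+1) array of S_{i,j,k}; the last cell means "no lane".
    c, h, w1 = scores.shape
    w = w1 - 1
    lanes = []
    for i in range(c):
        cells = scores[i].argmax(axis=1)      # per row anchor, eq. (2)
        rows = np.where(cells < w)[0]         # rows where the lane exists
        if len(rows) < min_points:            # drop short lane segments
            continue
        xs = cells[rows].astype(float)
        r = np.corrcoef(rows, xs)[0, 1]       # Pearson coefficient, eq. (6)
        if abs(r) < r_thresh:                 # drop strongly non-linear hits
            continue
        coeffs = np.polyfit(rows, xs, deg=2)  # second-order curve fitting
        lanes.append((rows, np.polyval(coeffs, rows)))
    return lanes
\end{verbatim}

Note that taking the argmax over all $w+1$ cells and treating the last cell as "no lane" is equivalent to eq. (\ref{eq:location}) combined with the absence check.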
\section{Experiments} \label{sec:Experiments} In this section, we present the details of the dataset used to evaluate our model, the training process and a detailed description of the embedded system implementation for real-time applications. \subsection{Dataset Description} \label{ssec:datasets} For the training and quantitative evaluation of our model, we use the publicly available CULane \cite{pan2018SCNN} benchmark dataset, which is one of the largest lane detection datasets, with 133,235 frames in total at a resolution of $1640 \times 590$. The dataset is divided into the train set, the validation set and the test set, which comprise 88,880 frames, 9,675 frames and 34,680 frames, respectively. The dataset covers several complex scenarios, and the test images are divided into 9 categories: Normal, Crowded, Dazzle light, Shadow, No line, Arrow, Curve, Crossroad and Night. As the evaluation metric, the F1-measure is used to compare performance on the CULane benchmark. Each lane is represented by a 30-pixel-wide line, and each prediction which has an intersection over union (IoU) greater than 0.5 with the ground truth is considered a true positive. The F1-measure is then calculated as follows, where $TP$, $FP$ and $FN$ stand for true positives, false positives and false negatives, respectively. \begin{equation} precision = \frac{TP}{TP+FP} \end{equation} \begin{equation} recall = \frac{TP}{TP+FN} \end{equation} \begin{equation} F_{1}\text{-}measure = \frac{2\times precision \times recall}{precision+recall} \end{equation} \begin{table*}[t] \addtolength{\tabcolsep}{-2.5pt} \begin{center} \caption{Comparison of F1-measure and speed (FPS) on CULane with state-of-the-art methods} \label{ta:results} \normalsize \input{tables/lane_results} \vspace{-2ex} \end{center} \end{table*} \subsection{Model Training} \label{ssec:imp_details} Each image in the CULane dataset is resized from the input resolution of $590 \times 1640$ to $288 \times 800$. We use 36 row anchors ($h$) and 150 gridding cells ($w$) to represent the area which contains lanes (height ranging from 260 to 590 in the original image). The number of lanes ($c$) is set to 4. The threshold for false positive suppression using the number of lane points is set to 12, and the threshold for false positive suppression using the Pearson correlation coefficient is set to 0.995. As the optimization algorithm, SGD with momentum \cite{pmlr-v28-sutskever13} is used, with an initial learning rate of 0.1, a momentum of 0.9 and a weight decay of $1 \times 10^{-4}$. The model is trained for 50 epochs, and at the 15\textsuperscript{th}, 25\textsuperscript{th}, 35\textsuperscript{th} and 45\textsuperscript{th} epochs the learning rate is multiplied by a factor of 0.3. For training and testing our model we use a computational platform comprising an Intel Core i9-9900K CPU and an Nvidia RTX-2080 Ti GPU. All experiments are carried out using PyTorch \cite{paszke2017automatic}, based on the implementation of \cite{qin2020ultra}. \begin{figure}[t] \begin{center} \input{fig/optimize} \vspace{1ex} \caption{Optimization of the lane detection model.
The trained PyTorch model is converted to a TensorRT engine.} \label{fi:oppt} \end{center} \vspace{-0.5em} \end{figure} To make the model more robust and generalized without overfitting, we apply two data augmentation techniques while training the model. First, we apply a random affine transformation to each image, comprising a random rotation, a random horizontal shift and a random vertical shift. Second, we use the colour jitter augmentation technique to randomly change the brightness and the contrast of the input image. \subsection{Implementation on the Embedded System} \label{ssec:embedded} As the embedded system, we use an Nvidia Jetson AGX Xavier, which possesses the required processing power to run deep learning based algorithms with the help of CUDA and Tensor cores. We further optimize our lane detection model for the embedded system by generating a TensorRT engine, as shown in Fig. \ref{fi:oppt}. First, the trained PyTorch model is converted to the ONNX file format, and the ONNX model is then used by the ONNX parser in the TensorRT Python API to generate the TensorRT engine (a minimal sketch of this conversion is given at the end of this section). We evaluate the use of both single-precision floating point (FP32) and half-precision floating point (FP16) formats for building the TensorRT engine. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{fig/rqt_lane.png} \caption{RQT graph for the implementation of the lane detection system in the ROS ecosystem.} \vspace{-0.3cm} \label{fig:rqt_graph} \end{figure} We implement the lane detection system in the Robot Operating System (ROS) \cite{ros} ecosystem, as shown in Fig. \ref{fig:rqt_graph}. The \textit{image\_feeder} node retrieves frames from a given video file and publishes each frame to the \textit{input\_frame} topic. The \textit{lane\_detector} node detects lanes in the current frame and publishes the detections to the \textit{lane\_detections} topic. For a faster inference speed, we use the FP16-quantized TensorRT engine for the lane detection task. The \textit{visualizer} node marks the detected lane points in the current frame and publishes the resultant image to the \textit{output\_frame} topic. The RViz visualization tool is used to visualize the lane detections in real-time.
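The conversion itself takes only a few lines. The sketch below assumes \texttt{model} is the trained network and follows the TensorRT 7.x/8.x Python API; exact calls vary between TensorRT versions:

\begin{verbatim}
import torch
import tensorrt as trt

# 1) Export the trained PyTorch model to ONNX (input size from Sec. 4.2).
model.eval()
dummy = torch.randn(1, 3, 288, 800)
torch.onnx.export(model, dummy, "lane.onnx", opset_version=11)

# 2) Parse the ONNX file and build the engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)
parser = trt.OnnxParser(network, logger)
with open("lane.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # omit this line for an FP32 engine
engine = builder.build_engine(network, config)
with open("lane.trt", "wb") as f:
    f.write(engine.serialize())
\end{verbatim}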
\section{Results} \label{sec:results} \normalsize \begin{figure*} \centering \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/1_lanes_highway.jpg} \end{subfigure}% \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/5_lanes_rural.jpg} \end{subfigure}% \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/2_lanes_highway.jpg} \end{subfigure}% \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/6_lanes_fp.jpg} \end{subfigure}% \vspace{0.5em} \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/4_lanes_rural.jpg} \end{subfigure}% \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/3_lanes_urban.jpg} \end{subfigure}% \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/7_lanes_fp.jpg} \end{subfigure}% \begin{subfigure}[b]{0.24\linewidth} \centering \includegraphics[width=0.98\linewidth]{LaneQualitativeResults/lane_images/8_lanes_undetected.jpg} \end{subfigure}% \vspace{0.5em} \caption{ Visualization of lane detection results on locally captured images. The first six images show accurate detections, while the last two show failure cases, including false detections and undetected lanes.} \label{fi:local_lane_qualitative_results} \end{figure*} \begin{figure} \centering \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/normal.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/normalgt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/normalpred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/crowd.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/crowdgt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/crowdpred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/highlight.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/highlightgt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/dazzlepred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/shadow.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/shadowgt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/shadowpred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/noline.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/nolinegt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/nolinepred.jpg}
\end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/arrow.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/arrowgt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/arrowpred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/curve.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/curvegt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/curvepred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/cross.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/crossgt.jpg} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/crosspred.jpg} \end{subfigure}% \vspace{.3ex} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/night.jpg} \caption{Input Image} \label{sfi:sf1} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/nightgt.jpg} \caption{Ground Truth} \label{sfi:sf2} \end{subfigure}% \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[width=0.98\linewidth]{fig/nightpred.jpg} \caption{Prediction} \label{sfi:sf3} \end{subfigure}% \vspace{.3ex} \caption{ Visualization of results on CULane. The nine rows represent the nine scenarios in CULane: Normal, Crowded, Dazzle light, Shadow, No line, Arrow, Curve, Crossroad and Night, respectively.\\} \label{fi:pictorial_results} \vspace{-5ex} \end{figure} The performance of our method on the CULane benchmark dataset is compared against state-of-the-art lane detection approaches in Table \ref{ta:results}. The number of false positives is displayed under the ``Cross'' category, since there are no true positives in the ground truth for that category. The inference speed is measured by taking the average frames per second (FPS) value over 1000 runs, including the forward pass of the model and the post-processing steps. The number of multiply-accumulate operations in billions is reported in the ``GMACs'' column. For a fairer comparison, we measured the speed of \cite{qin2020ultra} under the same conditions as ours. \begin{table} \caption{Performance on the embedded system} \vspace{-1ex} \label{ta:opt} \begin{center} \normalsize \begin{tabular}{|l|c|c|} \hline \textbf{Model} & \textbf{F1-measure} & \textbf{Speed (FPS)} \\ \hline PyTorch Model & 74.02 & 23 \\ \hline TensorRT Engine (FP32) & 74.02 & 35 \\ \hline TensorRT Engine (FP16) & 74.03 & 56 \\ \hline \end{tabular} \end{center} \vspace{-4ex} \end{table} It can be observed that, while being the fastest, our method achieves results competitive with other state-of-the-art methods in F1-measure. Our method also uses the lowest number of multiply-accumulate operations (MACs), which highlights the efficiency of our formulation. The low number of false positives in the ``Cross'' category validates the effectiveness of our false positive suppression technique. Compared to the segmentation based methods \cite{pan2018SCNN, hou2019learning}, our method improves the inference speed substantially while providing better results at the same time.
When compared with \cite{qin2020ultra}, which is the fastest among the other approaches, our method achieves better results with a 6.6\% increase in F1-measure. While we obtain comparable performance with \cite{CurveLane-NAS} and \cite{yoo2020end}, a direct comparison cannot be made in terms of speed, as their inference speeds are not reported. Although \cite{pinet_2021}, \cite{resa_2020}, \cite{tabelini2021keep} and \cite{FOLO_2021_CVPR} achieve on-par or better results than our method, the low inference speeds of their best performing models act as a barrier to real-time implementation, especially in resource-constrained environments. The performance of the PyTorch model and the generated FP32 and FP16 TensorRT engines on the Nvidia Jetson AGX Xavier is shown in Table \ref{ta:opt} in terms of the F1-measure and speed. The inference speed is calculated as the average frames per second value for running inference on a locally captured video within the ROS ecosystem. It can be observed that, while the accuracy stays almost the same, the inference speed increases significantly by optimizing and quantizing the model through TensorRT. Qualitative results obtained by our lane detector model are visualized in Fig. \ref{fi:pictorial_results} for the nine categories in the CULane dataset. In addition, locally captured street view images that encompass a range of road scenarios, including urban, rural and expressway conditions, are processed in order to assess the robustness of our trained model. Some of those results are shown in Fig. \ref{fi:local_lane_qualitative_results}. \begin{table}[t] \caption{Ablation study results on CULane} \vspace{-1ex} \label{ta:Ablation} \begin{center} \normalsize \input{tables/ablation_culane} \end{center} \vspace{-3ex} \end{table} \subsection{Ablation Study} \label{ssec:ablation} As an ablation study, each of the proposed methods is evaluated in terms of the speed and the F1-measure, as given in Table \ref{ta:Ablation}. The first line contains the results of the base model, and the FPS value is calculated based on the forward inference time on the GPU. The high FPS value shows the efficiency of our proposed light-weight network architecture with reduced multiply-accumulate operations (MACs). The rest of the lines show how the proposed post-processing techniques contribute towards increasing the accuracy. However, each additional method reduces the FPS value, especially because these algorithms run on the CPU. \section{Conclusion} \label{sec:conc} In this work, we proposed a simple, light-weight, end-to-end deep learning based network architecture coupled with the row-wise classification formulation for fast and efficient lane detection. Furthermore, we introduced a false positive suppression algorithm, based on the length of the lane segment and the Pearson correlation coefficient, and a second-order polynomial fitting method as post-processing techniques. Collectively, our approach surpasses the state-of-the-art with regard to speed, reaching up to 411 FPS, while achieving competitive results in terms of accuracy, as demonstrated by the qualitative and quantitative experiments carried out on the CULane benchmark dataset. We further demonstrated the capability of our light-weight network architecture to perform in real-time by optimizing and quantizing our trained model using TensorRT and deploying it on an embedded system integrated with ROS, achieving a high inference speed of 56 FPS.
The inference results for the locally captured street view images show how well our method generalizes for the task of lane detection. \bibliographystyle{ieeetran}
\section{Introduction} In this paper, we isolate and study \emph{name principles}. They express that for names $\sigma$ such that a certain property is forced for $\sigma$, there exists a filter $g$ in the ground model $V$ such that $\sigma^g$ already has this property in $V$. In general, we fix a class $\Sigma$ of names, for example nice names for sets of ordinals. Given a forcing $\mathbb{P}$ and a formula $\varphi(x)$, one can then study the principle: \begin{quote} \emph{``If $\sigma\in \Sigma$ and $\mathbb{P}\Vdash \varphi(\sigma)$ holds, then there exists a filter $g\in V$ on $\mathbb{P}$ such that $\varphi(\sigma^g)$ holds in $V$.''} \end{quote} Such principles are closely related to Bagaria's work on generic absoluteness and forcing axioms \cite{bagaria2000bounded}. Recall that the forcing axiom $\mathsf{FA}_{\mathbb{P},\kappa}$ associated to a forcing $\mathbb{P}$ and an uncountable cardinal $\kappa$ states: \begin{quote} \emph{``For any sequence $\vec{D}=\langle D_\alpha \mid \alpha<\kappa\rangle$ of predense subsets of $\mathbb{P}$, there is a filter $g\in V$ on $\mathbb{P}$ such that $g\cap D_\alpha \neq\emptyset$ for all $\alpha<\kappa$.''} \end{quote} Often, proofs from forcing axioms can be formulated by first proving a name principle and then obtaining the desired result as an application. \begin{example} $\mathsf{FA}_{\mathbb{P},\omega_1}$ implies that for any stationary subset $S$ of $\omega_1$, $\mathbb{P}$ does not force that $S$ is nonstationary. We sketch an argument via a name principle. We shall show in Section \ref{Section correspondence and applications} that $\mathsf{FA}_{\mathbb{P},\omega_1}$ implies the name principle for any nice name $\tau$ for a subset of $\omega_1$ and any $\Sigma_0$-formula $\varphi$. So towards a contradiction, suppose there is a name $\tau$ for a club with $\Vdash_\mathbb{P} \tau\cap \check{S}=\emptyset$. Apply the name principle for the formula ``$\tau$ is a club in $\omega_1$ and $\tau\cap \check{S} =\emptyset$''. Hence there is a filter $g\in V$ such that $\tau^g$ is a club and $\tau^g\cap S=\emptyset$. However, the existence of a club disjoint from $S$ contradicts the assumption that $S$ is stationary. \end{example} Name principles for stationary sets have appeared implicitly in combination with forcing axioms. \begin{example} The forcing axiom $\axiomft{PFA}^+$ states: \begin{quote} For any proper forcing $\mathbb{P}$, any sequence $\vec{D}=\langle D_\alpha \mid \alpha<\omega_1\rangle$ of predense subsets of $\mathbb{P}$ and any nice name $\sigma$ for a stationary subset of $\omega_1$, there is a filter $g$ on $\mathbb{P}$ such that \begin{itemize} \item $g\cap D_\alpha \neq\emptyset$ for all $\alpha<\omega_1$ and \item $\sigma^g$ is stationary. \end{itemize} \end{quote} Thus $\axiomft{PFA}^+$ is a combination of the forcing axiom $\axiomft{PFA}$ with a name principle for stationary sets. Note that the formula ``$\sigma$ is stationary'' is not $\Sigma_0$. \end{example} We aim for an analysis of name principles for their own sake. The main result of this paper is that name principles are more general than forcing axioms. In other words, all known forcing axioms can be reformulated as name principles.
For instance, we have: \begin{theorem} \label{special case of main theorem} (see Theorem \ref{correspondence forcing axioms name principles}\footnote{This follows from Theorem \ref{correspondence forcing axioms name principles} \ref{correspondence forcing axioms name principles 2} for $X=\kappa$ and $\alpha=1$.}) Suppose that $\mathbb{P}$ is a forcing and $\kappa$ is a cardinal. Then the following statements are equivalent: \begin{enumerate-(1)} \item $\mathsf{FA}_{\mathbb{P},\kappa}$ \item The name principle $\mathsf{N}_{\mathbb{P},\kappa}$ for nice names $\sigma$ and the formula $\sigma=\check{\kappa}$. \item The simultaneous name principle $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}$ for nice names $\sigma$ and all first-order formulas over the structure $(\kappa,\in,\sigma)$. \end{enumerate-(1)} \end{theorem} The main Theorems \ref{correspondence forcing axioms name principles} and \ref{correspondence bounded forcing axioms name principles} are more general and cover: (i) arbitrary names instead of nice names and (ii) bounded forcing axioms. Bagaria proved an equivalence between bounded forcing axioms and generic absoluteness principles \cite{bagaria1997characterization, bagaria2000bounded}. The following corollary of Theorem \ref{correspondence bounded forcing axioms name principles} has Bagaria's result as a special case. Here $\mathsf{BFA}_{\mathbb{P},\kappa}$ denotes the usual bounded forcing axiom, i.e. for $\kappa$ many predense sets of size at most $\kappa$, and the principle in \ref{Bagaria's characterisation 2c} denotes the name principle for names of the form $ \{ (\check{\alpha},p_\alpha)\mid \alpha\in \kappa \}$ and for all $\Sigma_0$-formulas simultaneously. \begin{theorem} \label{Theorem intro Bagaria} (see Theorems \ref{Bagaria's characterisation} and \ref{variant of Bagaria's characterisation for countable cofinality}) Suppose that $\kappa$ is an uncountable cardinal, $\mathbb{P}$ is a complete Boolean algebra and $\dot{G}$ is a $\mathbb{P}$-name for the generic filter. The following conditions are equivalent: \begin{enumerate-(1)} \item \label{Bagaria's characterisation 1c} $\mathsf{BFA}_{\mathbb{P},\kappa}$ \item \label{Bagaria's characterisation 2c} $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^1_{\mathbb{P},\kappa}$ \item \label{Bagaria's characterisation 3c} $\Vdash_\mathbb{P} V \prec_{\Sigma^1_1(\kappa)}V[\dot{G}]$ \end{enumerate-(1)} If $\mathrm{cof}(\kappa)>\omega$ or there is no inner model with a Woodin cardinal, then the next condition is equivalent to \ref{Bagaria's characterisation 1c}, \ref{Bagaria's characterisation 2c} and \ref{Bagaria's characterisation 3c}: \begin{enumerate-(1)} \setcounter{enumi}{3} \item \label{Bagaria's characterisation 5c} $\Vdash_\mathbb{P} H_{\kappa^+}^V \prec_{\Sigma_1} H_{\kappa^+}^{V[\dot{G}]}$ \end{enumerate-(1)} If $\mathrm{cof}(\kappa)=\omega$ and $2^{<\kappa}=\kappa$, then the next condition is equivalent to \ref{Bagaria's characterisation 1c}, \ref{Bagaria's characterisation 2c} and \ref{Bagaria's characterisation 3c}: \begin{enumerate-(1)} \setcounter{enumi}{4} \item \label{Bagaria's characterisation 4c} $1_\mathbb{P}$ forces that no new bounded subsets of $\kappa$ are added. \end{enumerate-(1)} \end{theorem} The second topic of this paper is the study of name principles for \emph{specific} formulas $\varphi(x)$. In particular, we will consider these principles when $\varphi(x)$ denotes a notion of largeness for subsets of $\kappa$, such as being unbounded, stationary, or in the club filter.
For each of these notions, we also study the corresponding forcing axiom. For instance, the \emph{unbounded forcing axiom} $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ states: \begin{quote} \emph{``For any sequence $\vec{D}=\langle D_\alpha \mid \alpha<\kappa\rangle$ of predense subsets of $\mathbb{P}$, there is a filter $g$ on $\mathbb{P}$ such that $g\cap D_\alpha \neq\emptyset$ for unboundedly many $\alpha<\kappa$.''} \end{quote} All these principles are defined formally in Section \ref{Section definitions}. The next diagram displays some results about them. Solid arrows denote non-reversible implications, dotted arrows stand for implications whose converse remains open, and dashed lines indicate that no implication is provable. The numbers indicate where to find the proofs. \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{N}_\kappa$}} \ar@{<->}[r]^{\ref{Corollary equivalence of FA, NP, clubFA, clubNP}} \ar@{<->}[d]_{\ref{Lemma_FA iff N}} & {\txt{$\mathsf{club}\text{-}\mathsf{N}_\kappa$}} \ar@{<->}[d]_{\ref{Corollary equivalence of FA, NP, clubFA, clubNP}} \ar@{--}[r]^{\ref{Lemma stat-NP for sigma-centred},\ \ref{Remark_FACohen_meagre} }_{\ref{Remark_strength of statNsigma-closed}} & \txt{$\mathsf{stat}\text{-}\mathsf{N}_\kappa$} \ar@{->}[r] \ar@{->}[d]^{\ref{Lemma statN to statFA}} & \txt{$\mathsf{ub}\text{-}\mathsf{N}_\kappa$} \ar@{<->}[d]_{\ref{Lemma ubN to ubFA}}^{\ref{Lemma ubFA to ubN}} & \\ {\txt{$\mathsf{FA}_\kappa$}} \ar@{<->}[r]^{\ref{Lemma basic FA implications}}_{\ref{Lemma equivalence of clubFA and FA}} & \txt{$\mathsf{club}\text{-}\mathsf{FA}_\kappa$} \ar@{->}[r]^{\ref{Lemma basic FA implications}} & \txt{$\mathsf{stat}\text{-}\mathsf{FA}_\kappa$} \ar@{.>}[r]^{\ref{Lemma basic FA implications}} & \txt{$\mathsf{ub}\text{-}\mathsf{FA}_\kappa$} & \\ }\] \caption{Forcing axioms and name principles for regular $\kappa$} \label{diagram of implications} \end{figure} We also investigate whether similar implications hold for $\lambda$-bounded name principles and forcing axioms, where $\lambda$ is any cardinal. The results about the cases $\kappa\leq\lambda$, $\omega\leq\lambda<\kappa$ and $1\leq\lambda<\kappa$ are displayed in the next diagrams. Here a CBA is a complete Boolean algebra. \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{BN}_\kappa^\lambda$}} \ar@{<.}[r]^{\leftrightarrow \text{ for CBAs}} \ar@{<->}[d] & {\txt{$\mathsf{club}\text{-}\mathsf{BN}_\kappa^\lambda$}} \ar@{.>}[d]_{\leftrightarrow \text{ for}}^{\text{CBAs}} & \txt{$\mathsf{stat}\text{-}\mathsf{BN}_\kappa^\lambda$} \ar@{.>}[r] \ar@{.>}[d] & \txt{$\mathsf{ub}\text{-}\mathsf{BN}_\kappa^\lambda$} \ar@{<->}[d] & \\ {\txt{$\mathsf{BFA}_\kappa^\lambda$}} \ar@{<->}[r] & \txt{$\mathsf{club}\text{-}\mathsf{BFA}_\kappa^\lambda$} \ar@{->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{BFA}_\kappa^\lambda$} \ar@{.>}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{BFA}_\kappa^\lambda$} & \\ }\] \caption{$\lambda$-bounded forcing axioms and name principles for regular $\kappa$ and $\lambda\geq\kappa$ } \label{diagram of implications - bounded with lambda>kappa} \end{figure} It is open whether $\mathsf{club}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$ implies $\mathsf{stat}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$. 
Conversely, there are forcings $\mathbb{P}$ where $\mathsf{stat}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$ holds for all $\lambda$, but $\mathsf{club}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$ fails for all $\lambda\geq\omega$ (see Section \ref{section ccc forcings}, Lemma \ref{Lemma stat-NP for sigma-centred} and Remark \ref{Remark_FACohen_meagre}). \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{BN}_\kappa^\lambda$}} \ar@{<->}[d] \ar@{<-}[r] & {\txt{$\mathsf{club}\text{-}\mathsf{BN}_\kappa^\lambda$}} \ar@{->}[d]^{\ref{Lemma separating BFA from clubBN} } & \txt{$\mathsf{stat}\text{-}\mathsf{BN}_\kappa^\lambda$} \ar@{.>}[d] & \txt{$\mathsf{ub}\text{-}\mathsf{BN}_\kappa^\lambda$} \ar@{.>}[d] & \\ {\txt{$\mathsf{BFA}_\kappa^\lambda$}} \ar@{<->}[r] & \txt{$\mathsf{club}\text{-}\mathsf{BFA}_\kappa^\lambda$} \ar@{->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{BFA}_\kappa^\lambda$} \ar@{.>}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{BFA}_\kappa^\lambda$} & \\ }\] \caption{$\lambda$-bounded forcing axioms and name principles for regular $\kappa$ and $\omega\leq\lambda<\kappa$ } \label{diagram of implications - bounded with lambda<kappa} \end{figure} Again, it is open whether $\mathsf{club}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$ implies $\mathsf{stat}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$, but the converse implication does not hold by the previous remarks. \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{BN}_\kappa^n$}} \ar@{<->}[d] \ar@{<-}[r] & {\txt{$\mathsf{club}\text{-}\mathsf{BN}_\kappa^n$}} \ar@{->}[d] & \txt{$\mathsf{stat}\text{-}\mathsf{BN}_\kappa^n$} \ar@{->}[d]^{\ref{Suslin trees}} & \txt{$\mathsf{ub}\text{-}\mathsf{BN}_\kappa^n$} \ar@{->}[d]^{\ref{Suslin trees}} & \\ {\txt{$\mathsf{BFA}_\kappa^n$}} \ar@{<->}[r] & \txt{$\mathsf{club}\text{-}\mathsf{BFA}_\kappa^n$} \ar@{<->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{BFA}_\kappa^n$} \ar@{<->}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{BFA}_\kappa^n$} & \\ }\] \caption{$n$-bounded forcing axioms and name principles for regular $\kappa$ and $1\leq n<\omega$} \label{diagram of implications - bounded with finite lambda} \end{figure} The principles in the bottom row and $\mathsf{BN}^n_\kappa$ are all provable. \medskip The implications and separations in the previous diagrams are proved using specific forcings such as Cohen forcing, random forcing and Suslin trees. For instance, we have the following results: \begin{proposition}(see Lemma \ref{Lemma_Random ubFA}) Let $\mathbb{P}$ denote random forcing. The following are equivalent: \begin{enumerate-(1)} \item $\mathsf{FA}_{\mathbb{P},\omega_1}$ \item $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ \item $2^\omega$ is not the union of $\omega_1$ many null sets \end{enumerate-(1)} \end{proposition} \begin{proposition} (see Corollary \ref{Suslin trees}) Suppose that a Suslin tree exists. Then there exists a Suslin tree $T$ such that $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails. \end{proposition} For some forcings, most of Figure \ref{diagram of implications} collapses. In particular, if $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ implies $\mathsf{FA}_{\mathbb{P},\kappa}$, then all entries other than $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\kappa}$ are equivalent. We investigate when this implication holds. 
For instance:
\begin{proposition} (see Lemma \ref{ubFA implies FA for sigma-distributive forcings}) For any ${<}\kappa$-distributive forcing $\mathbb{P}$, we have $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa} \implies \mathsf{FA}_{\mathbb{P},\kappa}$. \end{proposition}
In a broader range of cases, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ implies most of the entries in Figure \ref{diagram of implications - bounded with lambda>kappa}:
\begin{proposition} (see Lemma \ref{ubFA implies BFA}) If $\kappa$ is an uncountable cardinal and $\mathbb{P}$ is a complete Boolean algebra that does not add bounded subsets of $\kappa$, then $$(\forall q\in \mathbb{P}\ \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_q,\kappa}) \Longrightarrow \mathsf{BFA}_{\mathbb{P},\kappa}^\kappa.$$ \end{proposition}
The previous result is a corollary to the proof of Theorem \ref{Theorem intro Bagaria}.
\medskip
We collect some definitions in Section \ref{Section definitions}. In Section \ref{Section results for rank 1}, we prove the positive implications in Figure \ref{diagram of implications}. In Section \ref{Section correspondence and applications}, we prove a general correspondence between forcing axioms and name principles; Theorem \ref{special case of main theorem} is a special case. We further derive results about generic absoluteness and other consequences of the correspondence. In Section \ref{Section specific classes of forcings}, we study the principles in Figures \ref{diagram of implications}-\ref{diagram of implications - bounded with finite lambda} for specific classes of forcings such as $\sigma$-distributive and c.c.c.\ forcings, and for specific forcings such as Cohen and random forcing. We use these results to separate some of the principles in the figures.
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}} \subsection*{Acknowledgements} The authors would like to thank Joel David Hamkins for a discussion in February 2020 and for permission to include his proof of Lemma \ref{lemma_failure of lambda-bounded and 1-bounded name principle} and Corollary \ref{lemma_failure of lambda-bounded and 1-bounded name principle for trees} from June 2021. They are further grateful to Philip Welch for several discussions and to Joan Bagaria for an email exchange in July 2021. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No 794020 of the first-listed author. He was partially supported by EPSRC grant number EP/V009001/1, FWF grant number I4039 and the RIMS Research Center at Kyoto University, where the idea for this project originated in November 2019. \addtocontents{toc}{\protect\setcounter{tocdepth}{3}}
\section{Some definitions} \label{Section definitions}
In this section, we introduce the axioms we will be finding equivalences between. We will also define a few concepts that we will want to use repeatedly.
\begin{definition}\label{P^alpha(X)} Let $X$ be a set and $\alpha$ an ordinal. We recursively define $\mathcal{P}^\alpha(X)$ and $\mathcal{P}^{<\alpha}(X)$:
\begin{itemize}
\item $\mathcal{P}^0(X)=X$,
\item $\mathcal{P}^{<\alpha}(X)=\bigcup_{\beta<\alpha} \mathcal{P}^\beta(X)$,
\item $\mathcal{P}^\alpha(X) = \mathcal{P}(\mathcal{P}^{<\alpha}(X))$ for $\alpha>0$.
\end{itemize}
\end{definition}
The axioms we are working with come under two headings: \textit{forcing axioms} and \textit{name principles}. Within these headings there are a variety of different axioms we will be working with. A \emph{forcing} is a partial order with a largest element $1$.
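Typical examples in this paper are Cohen forcing, random forcing and Suslin trees with the reversed order. Unwinding Definition \ref{P^alpha(X)}, we have $\mathcal{P}^1(X)=\mathcal{P}(X)$ and $\mathcal{P}^2(X)=\mathcal{P}(X\cup \mathcal{P}(X))$; the rank $\alpha$ names defined below will be the name-theoretic analogues of elements of $\mathcal{P}^\alpha(X)$.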
Throughout this section, assume that $\mathbb{P}$ is a forcing and $\mathcal{C}$ is a class of forcings. $G$ will be a generic filter (on $\mathbb{P}$); $g$ will be a filter on $\mathbb{P}$ which is contained in the ground model $V$ (and therefore certainly not generic, if $\mathbb{P}$ is atomless).
\subsection{Forcing axioms}
\begin{notation} \label{notation_trace} In the following, $\vec{D}=\langle D_\gamma:\gamma<\kappa\rangle$ always denotes a sequence of dense (or predense) subsets of a forcing $\mathbb{P}$. If $g$ is a subset of $\mathbb{P}$, then its \emph{trace} with respect to $\vec{D}$ is defined as the set $$\mathrm{Tr}_{g,\vec{D}}=\{\alpha<\kappa\mid g\cap D_\alpha \neq \emptyset\}.$$ \end{notation}
\begin{definition}\label{Defn_FA} Let $\kappa$ be a cardinal. The forcing axiom $\mathsf{FA}_{\mathbb{P},\kappa}$ says: \begin{quote} ``For any $\vec{D}$, there exists a filter $g\in V$ with $\mathrm{Tr}_{g,\vec{D}}=\kappa$.'' \end{quote} The forcing axiom $\mathsf{FA}_{\mathcal{C},\kappa}$ asserts that $\mathsf{FA}_{\mathbb{P},\kappa}$ holds for all $\mathbb{P}\in \mathcal{C}$. \end{definition}
Of course, we could just as well have written ``predense'' instead of ``dense'' in the above definition. We will suppress the $\mathbb{P}$ or $\mathcal{C}$ in the above notation when it is clear which forcing we are referring to. If $\kappa=\omega_1$ we will suppress it too, just writing $\mathsf{FA}_\mathbb{P}$ (or just $\mathsf{FA}$ if $\mathbb{P}$ is clear as well).
We can weaken this axiom: instead of insisting that $g$ must meet every $D_\gamma$, we could insist only that it meets ``many'' of them in some sense. The following forcing axioms do exactly that, for various senses of ``many''.
\begin{definition}\label{Defn_SpecialFA} Suppose that $\kappa$ is a cardinal and $\varphi(x)$ is a formula. The axiom $\varphi\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ states: \begin{quote} ``For any $\vec{D}$, there is a filter $g$ on $\mathbb{P}$ such that $\varphi(\mathrm{Tr}_{g,\vec{D}})$ holds.'' \end{quote} In particular, we will consider the following formulas:
\begin{enumerate-(1)}
\item $\mathsf{club}(x)$ states that $x$ contains a club in $\kappa$. $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ is called the \textit{club forcing axiom}.
\item $\mathsf{stat}(x)$ states that $x$ is stationary in $\kappa$. $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ is called the \textit{stationary forcing axiom}.
\item $\mathsf{ub}(x)$ states that $x$ is unbounded in $\kappa$. $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ is called the \textit{unbounded forcing axiom}.
\item $\omega\text{-}\mathsf{ub}(x)$ states that $x$ contains $\omega$ as a subset and is also unbounded in $\kappa$. $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ is called the \textit{$\omega$-unbounded forcing axiom}.
\end{enumerate-(1)}
We define $\mathsf{club}\text{-}\mathsf{FA}_{\mathcal{C},\kappa}$, $\mathsf{stat}\text{-}\mathsf{FA}_{\mathcal{C},\kappa}$, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathcal{C},\kappa}$ and $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_{\mathcal{C},\kappa}$ in the same way as we defined $\mathsf{FA}_{\mathcal{C},\kappa}$ in Definition \ref{Defn_FA}.
\end{definition}
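To illustrate the trace terminology: if a filter $g$ meets $D_\alpha$ for every limit ordinal $\alpha<\kappa$, then $\mathrm{Tr}_{g,\vec{D}}$ contains the club of limit ordinals below $\kappa$, so $g$ witnesses $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ for the given sequence $\vec{D}$; if $g$ merely meets $D_\alpha$ for cofinally many $\alpha<\kappa$, then it only witnesses $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$.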
$\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}$ can also be expressed as a combined version of two forcing axioms: given a sequence $\vec{D}$ of length $\kappa$ and a separate sequence $\vec{E}$ of length $\omega$ of (pre)dense sets, we can find a filter $g$ such that $\mathrm{Tr}_{g,\vec{D}}$ is unbounded and $\mathrm{Tr}_{g,\vec{E}}=\omega$. Again, we will suppress $\mathbb{P}$ or $\mathcal{C}$ where they are obvious, and will suppress $\kappa$ when $\kappa=\omega_1$.
We can also weaken the axiom by insisting that every dense set $D_\gamma$ be \textit{bounded} in cardinality by some small cardinal.
\begin{definition}\label{Defn_BFA} Let $\kappa$ and $\lambda$ be cardinals. The bounded forcing axiom $\mathsf{BFA}_{\mathbb{P},\kappa}^\lambda$ says \begin{quote}``Whenever $\langle D_\gamma\mid\gamma<\kappa\rangle$ is a sequence of \textit{predense} subsets of $\mathbb{P}$, \textit{and for all} $\gamma$ \textit{we have} $\lvert D_\gamma\rvert \leq \lambda$, then there is a filter $g\in V$ such that for all $\gamma<\kappa$, $g\cap D_\gamma\neq \emptyset$.'' \end{quote} We define $\mathsf{BFA}_{\mathcal{C}, \kappa}^\lambda$, $\mathsf{club}\text{-}\mathsf{BFA}_{\mathbb{P},\kappa}^\lambda$ and so forth in the natural way, using definitions analogous to those in Definitions \ref{Defn_FA} and \ref{Defn_SpecialFA}. \end{definition}
Again, we will suppress notation as described above. We will suppress the $\lambda$ if $\lambda=\kappa$. Note that we are definitely looking at predense sets here: actual dense sets are likely to be rather large, so the axiom would likely be trivial if we had to use dense sets. These bounded forcing axioms are only really of interest when $\mathbb{P}$ is a Boolean algebra, since Boolean algebras always contain (nontrivial) predense sets with as few as two elements, so the axiom will not be vacuous.
There is one more forcing axiom we want to introduce, but it requires some additional notation so we will postpone it until later in this section.
\subsection{Name principles}
We ought to define name principles at this point, but we need to cover some other terminology first in order to express the definitions. As one might expect, name principles are about different $\mathbb{P}$ names, and it will be useful to have some measure of how complex a name is. The following three definitions are all different ways of doing this; we will be using all of them.
\begin{definition}\label{Defn_Ranks} Let $X$ be a set (in $V$). We recursively define a name's rank as follows. $\sigma$ is an $\alpha$ rank $X$ name (or a rank $\alpha$ name for short) if either:
\begin{itemize}
\item $\alpha=0$ and $\sigma=\check{x}$ for some $x\in X$; or
\item $\sigma$ is not rank $0$ and $\alpha=\sup\{\mathrm{rank}(\tau)+1: \exists p\in \mathbb{P}\ (\tau,p)\in \sigma\}$.
\end{itemize}
\end{definition}
We also call a $1$ (or $0$) rank $X$ name a \textit{good} name. Of course, we will also talk about rank $\leq \alpha$ names, meaning names which are either rank ${<}\alpha$ or rank $\alpha$. This definition is a name analogue to saying that $\sigma \in \mathcal{P}^\alpha(X)$, where $X$ is transitive. Most of the time, we will be interested in the case where $X$ is some cardinal, most often either $0$ or $\omega_1$. Note that every $\mathbb{P}$ name is an $\alpha$ rank $X$ name for some $\alpha$.
\begin{definition}\label{Defn_kappasmall} Let $\sigma$ be a $\mathbb{P}$ name and $\kappa$ be a cardinal.
We say $\sigma$ is \emph{locally $\kappa$ small} if there are at most $\kappa$ many names $\tau$ such that for some $p\in \mathbb{P}$, we have $(\tau,p)\in \sigma$. A name $\sigma$ is \textit{$\kappa$ small} if it is locally $\kappa$ small, and every name $\tau$ in the above definition is $\kappa$ small. \end{definition}
If being rank $\alpha$ is analogous to being in $\mathcal{P}^\alpha$ (or $\mathcal{P}^\alpha(X)$) then the analogue of being $\kappa$ small would be being in $H_{\kappa^+}$. We could also easily define a version of this for $H_{\kappa^+}(X)$ if we wanted. However, we don't actually need to: in all the cases we're going to be interested in, $X$ will have cardinality $\leq \kappa$ and the definition would be equivalent to the above one.
The following proposition says that we only really need to worry about $\kappa$ smallness when we go above rank $1$ names.
\begin{proposition} Let $X$ be transitive, and of size at most $\kappa$. Let $\sigma$ be a $0$ rank or $1$ rank $X$ name. Then $\sigma$ is $\kappa$ small. \end{proposition}
On the other hand, if $X$ has size greater than $\kappa$ then no interesting rank $1$ name will be $\kappa$ small.
The next definition does not have an easy analogue, but is a kind of complement to the previous one and is critical when we work with bounded forcing axioms.
\begin{definition}\label{Defn_lambdabounded} Let $\sigma$ be a $\mathbb{P}$ name and $\lambda$ be a cardinal. We say $\sigma$ is \textit{locally $\lambda$ bounded} if it can be written as \begin{equation*} \sigma = \{ (\tau,p): \tau \in T, p\in S_\tau\} \end{equation*} where $T$ is some set of names, and for $\tau \in T$ the set $S_\tau$ is a subset of $\mathbb{P}$ of size at most $\lambda$. A name $\sigma$ is \textit{$\lambda$ bounded} if it is locally $\lambda$ bounded, and every name $\tau\in T$ in the above definition is $\lambda$ bounded.\end{definition}
A good name which is $1$ bounded is known as a very good name. A check name $\check{x}$ has the form $\{(\check{y},1): y\in x\}$ and is therefore guaranteed to be $\lambda$ bounded for any $\lambda>0$.
We will be talking about interpreting names with respect to a filter. Unfortunately, the literature uses two different meanings of the word ``interpretation'', which only coincide if the filter is generic. For clarity:
\begin{definition}\label{Defn_Interpretation} Let $\sigma$ be a name, and $g$ a filter. (Here, $g$ may be inside $V$ or in some larger model.) When we refer to the \textit{interpretation} $\sigma^g$ of $\sigma$, we mean the recursive interpretation: \begin{equation*} \sigma^g := \{ \tau^g: \exists p\in g\ (\tau,p)\in \sigma\} \end{equation*} When we refer to the \textit{quasi-interpretation} $\sigma^{(g)}$, we mean the following set: \begin{equation*} \sigma^{(g)}:=\{x\in V: \exists p\in g\ (p\mathrel{\Vdash} \check{x}\in \sigma)\} \end{equation*} \end{definition}
\begin{proposition} $\sigma^g=\sigma^{(g)}$ if either \begin{enumerate-(1)} \item $g$ is generic; or \item $\sigma$ is a $1$ rank $X$ name (for any $X$) and is $1$ bounded. \end{enumerate-(1)} \end{proposition}
\begin{proposition} Suppose $\mathbb{P}$ is a complete Boolean algebra, and $\sigma$ is a $1$ rank $X$ name. Then we can find a name $\tau$ such that for every filter $g$, $\tau^g=\tau^{(g)}=\sigma^{(g)}$. \end{proposition}
\begin{proof} For $x\in X$ let $p_x=\sup\{p\in \mathbb{P}: (\check{x},p)\in \sigma\}$ (so $p_x\in \mathbb{P}\cup \{0\}$). Let $\tau=\{(\check{x},p_x): x\in X, p_x\neq 0\}$. Since $\mathbb{P}$ is a complete Boolean algebra, $p\mathrel{\Vdash} \check{x}\in\sigma$ holds exactly when $p\leq p_x$; as filters are upward closed, it follows that for any filter $g$ we have $x\in \tau^g$ iff $p_x\in g$ iff some $p\in g$ forces $\check{x}\in\sigma$, so $\tau^g=\tau^{(g)}=\sigma^{(g)}$. \end{proof}
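To see that the two interpretations can differ for non-generic filters, suppose $\mathbb{P}$ is a complete Boolean algebra, $b\in \mathbb{P}\setminus\{0,1\}$ and $\sigma=\{(\check{0},b),(\check{0},\neg b)\}$. Then $1\mathrel{\Vdash} \check{0}\in\sigma$, so $\sigma^{(g)}=\{0\}$ for every filter $g$; but for the trivial filter $g=\{1\}$ we have $\sigma^g=\emptyset$. Note that $\sigma$ here is a good name which is not $1$ bounded, so this does not contradict the first proposition above. We can now define our name principles.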
Here, we take $\mathbb{P}$ to be a forcing, $\mathcal{C}$ a class of forcings, and $X$ an arbitrary set.
\begin{definition}\label{Defn_nameprinciple} Let $\alpha$ be an ordinal, $\kappa$ a cardinal and $X$ a transitive set of size at most $\kappa$. The name principle $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$ says the following: \begin{quote} ``Whenever $\sigma$ is a $\kappa$ small ${\leq}\alpha$ rank $X$ name, and $A\in H_{\kappa^+}\cap \mathcal{P}^\alpha(X)$ is a set such that $\mathbb{P}\mathrel{\Vdash} \sigma=\check{A}$, there is a filter $g\in V$ such that $\sigma^g=A$.'' \end{quote} $\mathsf{N}_{\mathcal{C},X,\kappa}(\alpha)$ is the statement that $\mathsf{N}_{\mathbb{Q},X,\kappa}(\alpha)$ holds for all $\mathbb{Q}\in \mathcal{C}$. $\mathsf{N}_{\mathbb{P},\kappa}(\infty)$ (resp. $\mathsf{N}_{\mathcal{C},\kappa}(\infty)$) is the statement that $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$ (resp. $\mathsf{N}_{\mathcal{C},X,\kappa}(\alpha)$) holds for all $\alpha\in \mathrm{Ord}$ and all $X\in H_{\kappa^+}$. (Equivalently, we could just require that it holds for $\alpha\leq\kappa^+$ and all $X\in H_{\kappa^+}$.) \end{definition}
Some comments on this definition: It is easy to see that if $\sigma$ is a $\kappa$ small $X$ name, and $g\in V$, then $\sigma^g\in H_{\kappa^+}$. If $\sigma$ is rank ${\leq} \alpha$, then it is also easy to see that $\sigma^g\in \mathcal{P}^\alpha(X)$. So if we didn't require that $A\in H_{\kappa^+}\cap \mathcal{P}^\alpha(X)$, then the principle would fail trivially for most forcings. The only forcings on which it could hold would be those which don't force any names to be equal to such large $A$ anyway.
This argument also shows that the name principle fails trivially if, for some $\lambda<\kappa$, there is a $\lambda$ small $\sigma$ which is forced to be equal to some $A\not \in H_{\lambda^+}$. So we might think we should exclude such names from the principle as well. But in fact, we shall see in Section \ref{Section correspondence and applications} that it makes little difference: the proof of Theorem \ref{correspondence forcing axioms name principles} shows that if a name principle fails because of such a name, then it also fails for non-trivial reasons.
We can easily see that if $\sigma$ is a $\kappa$ small $1$ rank $X$ name, and is forced to be equal to $A$, then $A\subseteq X$ and $\lvert A \rvert \leq \kappa$. Hence, when we're dealing with $\mathsf{N}(1)$, we don't need to worry about checking that the sets our names are forced to be equal to lie in $H_{\kappa^+}\cap \mathcal{P}(X)$, as this is automatically true. On the other hand, once we go above rank $1$, names which are forced to be equal to sets outside $H_{\kappa^+}$ can exist, even for small values of $\alpha$ and $\kappa$. For example, \cite[Lemma 7.1]{holy2019sufficient} has an $\omega$ bounded rank $2$ name which is forced to be equal to $(2^\omega)^V$.
One might ask why we allowed $X$-names for all $X\in H_{\kappa^+}$ in the definition of $\mathsf{N}_{\mathbb{P},\kappa}(\infty)$. This is because any such name can be understood as an $\emptyset$-name of some high rank, so these principles already follow from the conjunction of $\mathsf{N}_{\mathbb{P},\emptyset,\kappa}(\alpha)$ for all $\alpha\in \mathrm{Ord}$.
As with the forcing axioms, we will sometimes omit part of this notation. We will drop $\mathbb{P}$ and $\mathcal{C}$ when they are clear from context. We will omit $\alpha$ when $\alpha=1$.
While $X$ is formally just some arbitrary set, most of the time it can be thought of as a cardinal; we will omit it in the case that $X=\kappa$, and will then omit $\kappa$ as well if $\kappa=\omega_1$.
Most often, these omissions will come up when we're assuming $\alpha=1$ and taking $X$ to be some cardinal. In that situation, $\kappa$ smallness is essentially trivial: if $\kappa<X$ then our class of names is too restrictive to do anything interesting, and if $\kappa \geq X$ then every $1$ rank $X$ name will be $\kappa$ small, automatically. So when $\alpha=1$ and $X$ is a cardinal we can find out everything we need to know just by looking at the case $X=\kappa$.
We can also define variations analogous to $\mathsf{club}\text{-} \mathsf{FA}$, $\mathsf{stat}\text{-} \mathsf{FA}$, etc. However, this only really makes sense when we know $\sigma$ is a name for a subset of some cardinal. For this reason, we only define these variations for the case where $\alpha=1$ (also dropping the requirement of $\kappa$ smallness) and where $X$ is a cardinal.
\begin{definition}\label{Defn_SpecialN} Let $\kappa$ be a cardinal and $\varphi(x)$ a formula. The axiom $\varphi\text{-}\mathsf{N}_{\mathbb{P},\kappa}$ states: \begin{quote} ``For any $1$ rank $\kappa$ name $\sigma$, if $\mathbb{P}\mathrel{\Vdash} \varphi(\sigma)$ then there is a filter $g$ on $\mathbb{P}$ such that $\varphi(\sigma^g)$ holds in $V$.'' \end{quote} In particular, we shall consider the axioms for the formulas $\mathsf{club}(x)$, $\mathsf{stat}(x)$, $\mathsf{ub}(x)$ and $\omega\text{-}\mathsf{ub}(x)$ given in Definition \ref{Defn_SpecialFA}:
\begin{enumerate-(1)}
\item The \emph{club name principle} $\mathsf{club}\text{-} \mathsf{N}_{\mathbb{P},\kappa}$.
\item The \emph{stationary name principle} $\mathsf{stat}\text{-} \mathsf{N}_{\mathbb{P},\kappa}$.
\item The \emph{unbounded name principle} $\mathsf{ub}\text{-} \mathsf{N}_{\mathbb{P},\kappa}$.
\item The \emph{$\omega$-unbounded name principle} $\omega\text{-} \mathsf{ub}\text{-} \mathsf{N}_{\mathbb{P},\kappa}$.
\end{enumerate-(1)}
\end{definition}
As usual, we also define similar axioms with $\mathcal{C}$ in place of $\mathbb{P}$. Note that we could also express $\omega\text{-}\mathsf{ub}\text{-}\mathsf{N}$ as an axiom about two names, one of which is forced to be an unbounded subset of $\kappa$ while the other is forced to be equal to $\omega$.
\begin{remark} The axioms $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$, $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ and $\omega\text{-} \mathsf{ub}\text{-} \mathsf{FA}_{\mathbb{P},\kappa}$ in Definition \ref{Defn_SpecialFA} can be understood as a more general form of name principles for two formulas $\varphi(x)$ and $\psi(x)$: \begin{quote} ``For any $1$ rank $\kappa$ name $\sigma$, if $\mathbb{P}\mathrel{\Vdash} \varphi(\sigma)$ then there is a filter $g$ on $\mathbb{P}$ such that $\psi(\sigma^g)$ holds in $V$.'' \end{quote} For instance, $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ is equivalent to the statement: \begin{quote} ``If $\sigma$ is a rank $1$ name which is forced to be equal to $\check{\kappa}$, then there is a filter $g\in V$ such that $\sigma^g$ is stationary.'' \end{quote} \end{remark}
We can also generalise the ideas here: rather than simply working with a single statement like ``$\sigma$ is unbounded'' or ``$\sigma$ is some particular set in $V$'', we could ask to be able to find a filter to correctly interpret every reasonable statement.
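Here, ``reasonable'' will mean $\Sigma_0$: for instance, ``$x$ is unbounded in $\kappa$'' can be written using only quantifiers bounded by $x$ and $\kappa$, while ``$x$ is stationary in $\kappa$'' quantifies over arbitrary club subsets of $\kappa$ and is therefore excluded.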
In the following definition, we allow bounded quantifiers in our $\Sigma_0$ formulas.
\begin{definition}\label{Defn_foN} Let $\alpha$ be an ordinal and $\kappa$ a cardinal. The simultaneous name principle $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$ says the following: \begin{quote} ``Whenever $\sigma_0,\ldots, \sigma_n$ are $\kappa$ small ${\leq}\alpha$ rank $X$ names, we can find a filter $g$ in $V$ such that $\varphi(\sigma_0^g,\dots,\sigma_n^g)$ holds for \textit{every} $\Sigma_0$ formula $\varphi$ such that $\mathbb{P} \mathrel{\Vdash} \varphi(\sigma_0,\ldots,\sigma_n)$.'' \end{quote} Moreover:
\begin{itemize}
\item The \emph{simultaneous name principle} $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)$ is the same statement, except that the names are $X$ names for some $X\in H_{\kappa^+}$ and there is no restriction on their rank.
\item $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathcal{C},X,\kappa}(\alpha)$ is the statement that $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{Q},X,\kappa}(\alpha)$ holds for all $\mathbb{Q}\in \mathcal{C}$.
\item $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathcal{C},\kappa}(\infty)$ is defined similarly.
\item The bounded name principles $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},X,\kappa}(\alpha)$ are defined similarly.
\end{itemize}
\end{definition}
The $\Sigma_0$ requirement on $\varphi$ is necessary, because otherwise the axiom would say that any sentence which is forced to be true by $\mathbb{P}$ is already true in $V$. This would make the axiom trivially false for almost all interesting forcings. Again we will suppress $X$, $\kappa$ and $\alpha$ as described earlier.
All of these name principles also have bounded variants:
\begin{definition}\label{Defn_NamePrincipleBounded} Let $\alpha$ be an ordinal and $\kappa, \lambda$ cardinals. The bounded name principle $\mathsf{BN}_{\mathbb{P},X,\kappa}^{\lambda}(\alpha)$ says the following: \begin{quote} ``Whenever $\sigma$ is a $\kappa$ small $\lambda$ bounded $\leq\alpha$ rank $X$ name, and $A$ is a set such that $\mathbb{P}\mathrel{\Vdash} \sigma=\check{A}$, we can find a filter $g\in V$ such that $\sigma^g=A$.'' \end{quote} \end{definition}
We define similar bounded forms of all the other name principles we have introduced so far. Again, we will suppress $\lambda$ when $\lambda=\kappa$ and will suppress other notation as described above.
\subsection{Hybrid axioms}
There is one more group of axioms which are worth mentioning, because of their frequent use in the literature. They are a hybrid of forcing axiom and name principle. The axioms $\axiomft{MA}^+$ and $\axiomft{PFA}^+$ were introduced by Baumgartner in \cite[Section 8]{baumgartner1984applications}.
\begin{definition} The forcing axiom $\mathsf{FA}^+_{\mathbb{P},\kappa}$ says: \begin{quote} Suppose $\vec{D}=\langle D_\gamma: \gamma<\kappa\rangle$ is a sequence of dense subsets of $\mathbb{P}$ and let $\sigma$ be a $1$ rank $\kappa$ name such that $\mathbb{P}\mathrel{\Vdash} ``\sigma$ is stationary''. Then there is a filter $g$ such that
\begin{enumerate-(1)}
\item For all $\gamma$, $D_\gamma \cap g \neq \emptyset$; and
\item $\sigma^g$ is stationary.
\end{enumerate-(1)}
\end{quote}
The forcing axiom $\mathsf{FA}^{++}_{\mathbb{P},\kappa}$ says: \begin{quote} Let $\langle D_\gamma\mid \gamma<\kappa\rangle$ be a sequence of dense subsets of $\mathbb{P}$ and let $\langle\sigma_\gamma\mid \gamma<\kappa\rangle$ be $1$ rank $\kappa$ names such that $\mathbb{P}\mathrel{\Vdash} ``\sigma_\gamma$ is stationary'' for every $\gamma$. Then we can find a filter $g$ such that
\begin{enumerate-(1)}
\item For all $\gamma$, $D_\gamma \cap g \neq \emptyset$; and
\item For all $\gamma$, $\sigma_\gamma^g$ is stationary.
\end{enumerate-(1)}
\end{quote}
\end{definition}
As usual, we will also use versions of the above with $\mathcal{C}$ in place of $\mathbb{P}$, and bounded versions. We have actually gone against convention slightly here: the literature generally uses the quasi-interpretation $\sigma^{(g)}$ when defining $\mathsf{FA}^+$ and $\mathsf{FA}^{++}$ style axioms. However, our version is in fact equivalent, as the following theorem shows:
\begin{theorem} Let $\mathsf{FA}^{(+)}$ and $\mathsf{FA}^{(++)}$ be defined in the same way as $\mathsf{FA}^+$ and $\mathsf{FA}^{++}$ above, but with $\sigma^{(g)}$ and $\sigma_\gamma^{(g)}$ in place of $\sigma^g$ and $\sigma_\gamma^g$ respectively. Then $\mathsf{FA}_{\mathbb{P},\kappa}^+\iff \mathsf{FA}_{\mathbb{P},\kappa}^{(+)}$ and $\mathsf{FA}_{\mathbb{P},\kappa}^{++}\iff \mathsf{FA}_{\mathbb{P},\kappa}^{(++)}.$ \end{theorem}
\begin{proof} We will prove the $\mathsf{FA}^+$ case; the $\mathsf{FA}^{++}$ version is similar. The $\Rightarrow$ direction is trivial, since $\sigma^g\subseteq \sigma^{(g)}$ holds for every filter $g$ and stationarity is preserved by supersets.
$\Leftarrow$: Let $D_\gamma$, $\gamma<\kappa$, be a collection of $\kappa$ many dense subsets of $\mathbb{P}$. Let $\sigma$ be a rank $1$ name with $\mathbb{P}\mathrel{\Vdash} ``\sigma$ is stationary''. For $\gamma\in \kappa$, let \begin{equation*} E_\gamma:=\{p\in \mathbb{P}\colon p\mathrel{\Vdash} \check{\gamma}\not \in \sigma \text{ or } \exists q\geq p \, (\check{\gamma},q) \in \sigma\} \end{equation*} We can see that $E_\gamma$ is dense: given $p\in \mathbb{P}$, either we can find some $q\parallel p$ with $(\check{\gamma},q) \in \sigma$, in which case any $r$ below both $p$ and $q$ lies in $E_\gamma$; or no such $q$ is compatible with $p$, in which case $p\mathrel{\Vdash} \check{\gamma}\not \in \sigma$ since all the elements of $\sigma$ are check names.
\begin{claim} If $g$ is any filter which meets all the $E_\gamma$, then $\sigma^g=\sigma^{(g)}$. \end{claim}
\begin{proof} $\subseteq$: Let $\gamma \in \sigma^g$. Then there is a $q\in g$ with $(\check{\gamma},q) \in \sigma$. Clearly $q\mathrel{\Vdash} \check{\gamma}\in \sigma$, so $\gamma \in \sigma^{(g)}$.
$\supseteq$: Let $\gamma\in \sigma^{(g)}$. Then we can find $r\in g$ with $r\mathrel{\Vdash} \check{\gamma}\in \sigma$. Certainly, then, there is no $p\in g$ with $p\mathrel{\Vdash} \check{\gamma}\not \in \sigma$. Since nonetheless $g$ meets $E_\gamma$, there must be some $q\in g$ with $(\check{\gamma},q) \in \sigma$. Hence $\gamma \in \sigma^g$. \end{proof}
Now we simply use $\mathsf{FA}^{(+)}$ to take a filter $g$ which meets all the $D_\gamma$, all the $E_\gamma$, and which is such that $\sigma^{(g)}$ is stationary. By the claim, $\sigma^g=\sigma^{(g)}$ is stationary, as required. \end{proof}
In defining the $E_\gamma$ in the above proof, we used a technique which we will be invoking many times. It will save us a lot of time if we give it a name now.
\begin{definition} Let $\tau$ and $\sigma$ be names, and $p\in \mathbb{P}$. We say $p$ \textit{strongly forces} $\tau \in \sigma$, and write $p\sforces \tau \in \sigma$, if there exists $q\geq p$ with $(\tau,q)\in \sigma$. \end{definition}
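For instance, with $\sigma=\{(\check{0},b),(\check{0},\neg b)\}$ as in the example after Definition \ref{Defn_Interpretation}, every $r\leq b$ (and every $r\leq \neg b$) strongly forces $\check{0}\in\sigma$, but $1$ does not, even though $1\mathrel{\Vdash} \check{0}\in\sigma$. The value of this definition is shown in the following two propositions.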
\begin{proposition}\label{Prop_sforcingAndForcing} Let $\sigma$ and $\tau$ be names, and $p\in \mathbb{P}$.
\begin{enumerate-(1)}
\item If $p\mathrel{\Vdash} \tau \in \sigma$ then there exist densely many $r\leq p$ such that for some name $\tilde{\tau}$, $r\mathrel{\Vdash} \tilde{\tau}=\tau$ and $r\sforces \tilde{\tau}\in \sigma$.
\item If $p\sforces \tau \in \sigma$ then $p\mathrel{\Vdash} \tau \in \sigma$.
\end{enumerate-(1)}
\end{proposition}
\begin{proposition}\label{Prop_sforcingAndInterpretation} Let $\sigma$ and $\tau$ be names, let $p\in \mathbb{P}$ and let $g$ be any filter containing $p$.
\begin{enumerate-(1)}
\item If $p\sforces \tau \in \sigma$ then $\tau^g\in \sigma^g$.
\item If for all $\tilde{\tau}$ with $(\tilde{\tau},q)\in \sigma$ (for some $q\in \mathbb{P}$) we either know $\tau^g\neq \tilde{\tau}^g$ or have $p\mathrel{\Vdash} \tilde{\tau}\not \in \sigma$ then $\tau^g \not \in \sigma^g$.
\end{enumerate-(1)}
\end{proposition}
\section{Results for rank $1$} \label{Section results for rank 1}
We will start by looking at the positive results we can prove in general about forcing axioms and rank 1 name principles. We again take $\mathbb{P}$ to be an arbitrary forcing. We also take $\kappa$ to be an uncountable cardinal, although we're mostly interested in the case where $\kappa=\omega_1$. Since $\mathbb{P}$ is arbitrary, we could just as easily replace it with a class $\mathcal{C}$ of forcings in all our results.
\subsection{Basic implications}
All the positive results expressed in Figure \ref{diagram of implications} are proved in this section. The negative results will be proved later, when we look at the specific forcings that provide counterexamples. We will not need that $\kappa$ is regular. In the case of $\mathrm{cof}(\kappa)=\omega$, a club is, as usual, a closed unbounded subset of $\kappa$; note that a subset of $\kappa$ then contains a club if and only if it is unbounded, since any increasing cofinal sequence of order type $\omega$ is closed in $\kappa$.
\begin{lemma}\label{Lemma_FA iff N} $\mathsf{FA}_{\mathbb{P},\kappa}\iff \mathsf{N}_{\mathbb{P},\kappa}$ \end{lemma}
\begin{proof} $\Rightarrow$: Assume $\mathsf{FA}_\kappa$. (That is, $\mathsf{FA}_{\mathbb{P},\kappa}$; recall that we said we would suppress the $\mathbb{P}$ whenever it is clear.) Let $\sigma$ be a rank $1$ name for a subset of $\kappa$, and suppose that $1\mathrel{\Vdash} \sigma=A$ for some $A\subseteq \kappa$. For $\gamma \in A$, let \begin{equation*} D_\gamma=\{p\in\mathbb{P}: p\sforces \check{\gamma} \in \sigma\} \end{equation*} It is clear that $D_\gamma$ is dense by Proposition \ref{Prop_sforcingAndForcing}. For $\gamma \in \kappa \setminus A$, let $D_\gamma=\mathbb{P}$. Using $\mathsf{FA}_\kappa$, take a filter $g$ that meets every $D_\gamma$. We claim that $\sigma^g=A$. For $\gamma\in A$, we know that some $p\in g$ strongly forces $\check{\gamma} \in \sigma$. By Proposition \ref{Prop_sforcingAndInterpretation} then, $\gamma \in\sigma^g$. Conversely, if $\gamma \not \in A$ then $1\mathrel{\Vdash} \check{\gamma}\not \in \sigma$ and by the same proposition $\gamma \not \in \sigma^g$.
$\Leftarrow$: Assume $\mathsf{N}_\kappa$. Let $D_\gamma$, $\gamma<\kappa$, be a collection of dense subsets of $\mathbb{P}$. Let \begin{equation*} \sigma=\{(\check{\gamma},p): \gamma<\kappa, p\in D_\gamma\} \end{equation*} It is easy to see that $1\mathrel{\Vdash} \sigma=\check{\kappa}$. Take a filter $g$ such that $\sigma^g=\kappa$; then $D_\gamma \cap g \neq \emptyset$ for all $\gamma<\kappa$. \end{proof}
\begin{lemma} \label{Lemma FA bracket interpretation} $\mathsf{FA}_{\mathbb{P},\kappa}$ holds if and only if for every rank $1$ name $\sigma$ for a subset of $\kappa$, there is some $g$ with $\sigma^{(g)}=\sigma^g$.
\end{lemma}
\begin{proof} First suppose that $\mathsf{FA}_{\mathbb{P},\kappa}$ holds and $\sigma$ is a rank $1$ $\mathbb{P}$-name for a subset of $\kappa$. Note that $\sigma^g\subseteq \sigma^{(g)}$ holds for all filters $g$ on $\mathbb{P}$. For each $\alpha<\kappa$, $$ D_\alpha= \{ p\in \mathbb{P} \mid p\mathrel{\Vdash} \check{\alpha}\notin \sigma \vee p \sforces \check{\alpha}\in \sigma \} $$ is dense. By $\mathsf{FA}_{\mathbb{P},\kappa}$, there is a filter $g$ with $g\cap D_\alpha\neq\emptyset$ for all $\alpha<\kappa$. To see that $\sigma^{(g)}\subseteq \sigma^g$ holds, suppose that $\alpha\in \sigma^{(g)}$. Thus there is some $p\in g$ which forces $\check{\alpha}\in \sigma$. Take any $q\in g\cap D_\alpha$. Since $p\parallel q$, the condition $q$ cannot force $\check{\alpha}\notin\sigma$, so $q \sforces \check{\alpha}\in \sigma$ by the definition of $D_\alpha$ and thus $\alpha\in \sigma^g$.
On the other hand, $\mathsf{N}_{\mathbb{P},\kappa}$, and thus $\mathsf{FA}_{\mathbb{P},\kappa}$ (by Lemma \ref{Lemma_FA iff N}), follows trivially from this principle, since for any rank $1$ name $\sigma$ with $\mathrel{\Vdash} \sigma=\check{A}$, we have $\sigma^{(g)}=A$ for any filter $g$. \end{proof}
\begin{lemma} \ \label{Lemma basic FA implications}
\begin{enumerate-(1)}
\item \label{Lemma basic FA implications 1} $\mathsf{FA}_{\mathbb{P},\kappa}\implies \mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}\implies \mathsf{ub}\text{-} \mathsf{FA}_{\mathbb{P},\kappa}$
\item \label{Lemma basic FA implications 2} $\mathsf{FA}_{\mathbb{P},\kappa}\implies \mathsf{stat}\text{-} \mathsf{FA}_{\mathbb{P},\kappa}\implies \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$
\item \label{Lemma basic FA implications 3} $\mathsf{FA}_{\mathbb{P},\kappa}\implies\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}\implies \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$
\item \label{Lemma basic FA implications 4} If $\mathrm{cof}(\kappa)>\omega$, then $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}\implies \mathsf{stat}\text{-} \mathsf{FA}_{\mathbb{P},\kappa}$
\end{enumerate-(1)}
\end{lemma}
\begin{proof} Follows immediately from the definitions of the axioms. \end{proof}
\begin{lemma} \label{Lemma equivalence of clubFA and FA} $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}\Longleftrightarrow\mathsf{FA}_{\mathbb{P},\mathrm{cof}(\kappa)}$. \end{lemma}
\begin{proof} For $\mathrm{cof}(\kappa)=\omega$, the statements are both provably true. So assume $\mathrm{cof}(\kappa)>\omega$.
$\Longleftarrow$: Let $\pi\colon \mathrm{cof}(\kappa)\rightarrow \kappa$ be a continuous cofinal function. Let $\vec{D}=\langle D_\alpha\mid \alpha<\kappa\rangle$ be a sequence of dense open subsets of $\mathbb{P}$. Let $\vec{E}=\langle E_\alpha\mid \alpha<\mathrm{cof}(\kappa)\rangle$, where $E_\alpha=D_{\pi(\alpha)}$ for $\alpha<\mathrm{cof}(\kappa)$. By $\mathsf{FA}_{\mathbb{P},\mathrm{cof}(\kappa)}$, there is a filter $g$ with $g\cap E_\alpha\neq\emptyset$ for $\alpha<\mathrm{cof}(\kappa)$. Thus for all $\beta=\pi(\alpha)\in \mathrm{ran}(\pi)$, $g\cap D_\beta=g\cap E_\alpha\neq\emptyset$. This suffices since $\mathrm{ran}(\pi)$ is club in $\kappa$.
$\Longrightarrow$: We first claim that $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ implies $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\mathrm{cof}(\kappa)}$. To see this, let $\pi\colon \mathrm{cof}(\kappa)\rightarrow \kappa$ be a continuous cofinal function. Let $\vec{D}=\langle D_\alpha\mid \alpha<\mathrm{cof}(\kappa)\rangle$ be a sequence of dense open subsets of $\mathbb{P}$.
Let $E_{\pi(\alpha)}=D_\alpha$ for $\alpha<\mathrm{cof}(\kappa)$ and $E_\gamma=\mathbb{P}$ for all $\gamma\notin \mathrm{ran}(\pi)$. By $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$, there is a filter $g$ and a club $C$ in $\kappa$ such that $g\cap E_\gamma\neq\emptyset$ for all $\gamma\in C$. Since $C\cap \mathrm{ran}(\pi)$ is club in $\kappa$ and $\pi$ is continuous, $\pi^{-1}(C)$ is club in $\mathrm{cof}(\kappa)$ and $g\cap D_\alpha =g \cap E_{\pi(\alpha)}\neq \emptyset$ for all $\alpha\in \pi^{-1}(C)$ as required.
It now suffices to prove $\mathsf{club}\text{-}\mathsf{FA}_{\mathbb{P},\lambda}\Longrightarrow\mathsf{FA}_{\mathbb{P},\lambda}$ for regular $\lambda$. Given a sequence $\vec{D}=\langle D_\alpha\mid \alpha<\lambda\rangle$ of dense open subsets, partition $\lambda$ into disjoint stationary sets $S_\alpha$ for $\alpha<\lambda$. Let $\vec{E}=\langle E_\beta\mid \beta<\lambda\rangle$, where $E_\beta=D_\alpha$ for $\beta\in S_\alpha$. By $\mathsf{club}\text{-}\mathsf{FA}_\lambda$, there is a filter $g$ and a club $C$ in $\lambda$ with $g\cap E_\beta\neq\emptyset$ for $\beta\in C$. Since each $S_\alpha$ is stationary and $C$ is club, $S_\alpha\cap C\neq \emptyset$ for all $\alpha<\lambda$. Thus $g\cap D_\alpha=g\cap E_\beta\neq\emptyset$ for any $\beta\in S_\alpha\cap C$. \end{proof}
\begin{lemma} \ \label{Lemma FA, club-N and club-FA}
\begin{enumerate-(1)}
\item \label{Lemma FA, club-N and club-FA 1} $\mathsf{FA}_\kappa\implies \mathsf{club}\text{-}\mathsf{N}_\kappa$
\item \label{Lemma FA, club-N and club-FA 2} $\mathsf{club}\text{-}\mathsf{N}_\kappa \implies \mathsf{club}\text{-}\mathsf{FA}_\kappa$
\end{enumerate-(1)}
\end{lemma}
\begin{proof} \ref{Lemma FA, club-N and club-FA 1}: Let $\sigma$ be a rank $1$ name such that $1\mathrel{\Vdash} ``\sigma$ contains a club in $\kappa$''. Then we can find a rank $1$ name $\tau$ such that $1\mathrel{\Vdash} \tau\subseteq \sigma$ and $1\mathrel{\Vdash} ``\tau$ is a club in $\kappa$''. For $\gamma<\kappa$, let $D_\gamma$ denote the set of $p\in\mathbb{P}$ such that either
\begin{enumerate-(a)}
\item $p\sforces \check{\gamma}\in\tau$, or
\item for all sufficiently large $\alpha<\gamma$, $p \mathrel{\Vdash} \check{\alpha} \notin \tau$.
\end{enumerate-(a)}
We claim $D_\gamma$ is dense. Let $p\in \mathbb{P}$. If $p\mathrel{\Vdash} \check{\gamma} \in \tau$ then by Proposition \ref{Prop_sforcingAndForcing} we can find $q\leq p$ strongly forcing this, and then $q\in D_\gamma$. Otherwise, take $q\leq p$ with $q\mathrel{\Vdash} \check{\gamma}\not \in \tau$. Then $q\mathrel{\Vdash} ``\tau \cap \gamma$ is bounded in $\gamma$''. Take $r\leq q$ deciding that bound, and then $r$ satisfies condition (b) above. For any filter $g$ with $g\cap D_\gamma \neq \emptyset$, $\tau^g$ is closed at $\gamma$ by Proposition \ref{Prop_sforcingAndInterpretation}.
Let $E_\gamma$ denote the set of $p\in \mathbb{P}$ such that for some $\delta\geq \gamma$, $p\sforces \check{\delta}\in \tau$. Again, this is dense since $\tau$ is forced to be unbounded. For any filter $g$ with $g\cap E_\gamma \neq \emptyset$ for all $\gamma<\kappa$, $\tau^g$ is unbounded.
Let $F_\gamma$ denote the set of $p\in\mathbb{P}$ such that $p\sforces \check{\gamma}\in \sigma$ or $p\mathrel{\Vdash}\check{\gamma}\notin\tau$. Once again, $F_\gamma$ is dense: given $p\in \mathbb{P}$ take $q\leq p$ deciding whether $\check{\gamma} \in \tau$. If it decides $\check{\gamma} \not \in \tau$ then we're done; otherwise $q\mathrel{\Vdash} \check{\gamma} \in \sigma$ and we can find $r\leq q$ with $r\sforces\check{\gamma}\in \sigma$. For any filter $g$ with $g\cap F_\gamma\neq\emptyset$, $\gamma\in\tau^g \Rightarrow \gamma\in\sigma^g$.
Putting things together, if we find a filter $g$ which meets every $D_\gamma$, $E_\gamma$ and $F_\gamma$, then $\tau^g$ will be both a club and a subset of $\sigma^g$, so $\sigma^g$ contains a club; $\mathsf{FA}_\kappa$ provides such a filter.
\ref{Lemma FA, club-N and club-FA 2}: This works much like the proof that $\mathsf{N}\Rightarrow \mathsf{FA}$ above. Let $D_\gamma$, $\gamma<\kappa$, be a collection of dense sets. Let \begin{equation*} \sigma=\{(\check{\gamma},p):\gamma<\kappa,p\in D_\gamma\} \end{equation*} Clearly $1\mathrel{\Vdash} \sigma=\check{\kappa}$, and hence $1\mathrel{\Vdash} ``\sigma$ contains a club''. Take a filter $g$ where $\sigma^g$ contains a club. Then $\sigma^g=\{\gamma<\kappa: D_\gamma\cap g\neq \emptyset\}$, so $g$ meets $D_\gamma$ for club many $\gamma<\kappa$. \end{proof}
Putting together the previous results, we complete the top left corner of Figure \ref{diagram of implications}.
\begin{corollary} \label{Corollary equivalence of FA, NP, clubFA, clubNP} The following are all equivalent for all uncountable regular cardinals $\kappa$: $\mathsf{FA}_\kappa$, $\mathsf{N}_\kappa$, $\mathsf{club}\text{-}\mathsf{FA}_\kappa$, $\mathsf{club}\text{-}\mathsf{N}_\kappa$. \end{corollary}
The second half of the previous lemma also applies for the other special name principles.
\begin{lemma}\label{Lemma statN to statFA} $\mathsf{stat}\text{-}\mathsf{N}_\kappa \implies \mathsf{stat}\text{-}\mathsf{FA}_\kappa$ \end{lemma}
\begin{proof} As for the $\mathsf{club}$ case, except that we just insist on $\sigma^g$ being stationary. \end{proof}
\begin{lemma} \label{Lemma ubN to ubFA} $\mathsf{ub}\text{-}\mathsf{N}_\kappa \implies \mathsf{ub}\text{-}\mathsf{FA}_\kappa$ \end{lemma}
\begin{proof} As for the $\mathsf{club}$ case, except that we insist on $\sigma^g$ being unbounded. \end{proof}
\begin{lemma} $\omega\text{-}\mathsf{ub}\text{-}\mathsf{N}_\kappa \implies \omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_\kappa$ \end{lemma}
\begin{proof} Define $\sigma$ as in the club case, from the dense sets $\langle D_\gamma\mid \gamma<\kappa\rangle$ of which we want to meet unboundedly many. Using the reformulation of the $\omega$-unbounded principles via two sequences, let $\langle E_n\mid n<\omega\rangle$ be the dense sets all of which we want to meet, and define \begin{equation*} \tau=\{(\check{n},p):n<\omega,p\in E_n\} \end{equation*} Take $g$ such that $\tau^g=\omega$ and $\sigma^g$ is unbounded. \end{proof}
We can also get converses for these in the case of $\mathsf{ub}$ and $\omega\text{-}\mathsf{ub}$.
\begin{lemma} \ \label{Lemma ubFA to ubN}
\begin{enumerate-(1)}
\item \label{Lemma ubFA to ubN 1} $\mathsf{ub}\text{-}\mathsf{FA}_\kappa\implies \mathsf{ub}\text{-}\mathsf{N}_\kappa$
\item \label{Lemma ubFA to ubN 2} $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_\kappa \implies \omega\text{-}\mathsf{ub}\text{-}\mathsf{N}_\kappa$
\end{enumerate-(1)}
\end{lemma}
\begin{proof} \ref{Lemma ubFA to ubN 1}: Assume $\mathsf{ub}\text{-}\mathsf{FA}_\kappa$. Let $\sigma$ be a rank $1$ name for an unbounded subset of $\kappa$. For $\gamma<\kappa$ let $D_\gamma$ be the set of all $p\in\mathbb{P}$ such that for some $\delta>\gamma$, $p\sforces \check{\delta}\in \sigma$. Let $g$ be a filter meeting unboundedly many $D_\gamma$; then $\sigma^g$ is unbounded.
\ref{Lemma ubFA to ubN 2}: Let $\sigma$ be a rank $1$ name for an unbounded subset of $\kappa$ and $\tau$ be a good name for $\omega$. Define $D_\gamma$ as above, and for $n<\omega$ let $E_n$ be the set of all $p\in\mathbb{P}$ which strongly force $n\in \tau$. Find $g$ meeting unboundedly many $D_\gamma$ and every $E_n$; then $\sigma^g$ is unbounded and $\tau^g=\omega$. \end{proof}
This proves every implication in the left two columns of Figure \ref{diagram of implications}.
\subsection{Extremely bounded name principles}
Now, we address the rightmost column of Figure \ref{diagram of implications - bounded with finite lambda}.
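Recall from Definition \ref{Defn_lambdabounded} that a $1$ bounded good name attaches at most one condition to each potential element; for instance, a very good name for a subset of $\kappa$ has the form $\sigma=\{(\check{\gamma},p_\gamma) : \gamma\in I\}$ for some $I\subseteq\kappa$ and conditions $p_\gamma\in\mathbb{P}$.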
These axioms are more interesting if $\mathbb{P}$ is a complete Boolean algebra, since they can be trivial otherwise.
\begin{lemma} $\mathsf{BN}^1_\kappa$ is provable in $\axiomft{ZFC}$. \end{lemma}
\begin{proof} Let $\sigma$ be a $1$-bounded rank $1$ name such that $1\mathrel{\Vdash} \sigma=\check{A}$ for some set $A$. Then for $\gamma\in\kappa\setminus A$, there is no $p\in \mathbb{P}$ such that $(\check{\gamma},p)\in \sigma$. For $\gamma \in A$ there is a unique $p\in \mathbb{P}$ such that $(\check{\gamma},p)\in \sigma$ (at most one by $1$ boundedness, and at least one since $1\mathrel{\Vdash}\check{\gamma}\in\sigma$); and $p$ is contained in every generic filter. Assuming $\mathbb{P}$ is atomless, it follows that $p=1$ and hence that, if we let $g$ be any filter at all, $\sigma^g=A$. It is also possible to adjust this proof to work for forcings with atoms; this is left as an exercise for the reader. \end{proof}
All of these results also hold if we work with bounded name principles and forcing axioms, provided that the bound is at least $\kappa$. For bounds below $\kappa$, we can almost get an equivalence between the different bounds for the stationary and unbounded name principles.
A forcing is called \emph{well-met} if any two compatible conditions $p,q$ have a greatest lower bound $p\wedge q$. The next result and proof are due to Hamkins for trees (see Corollary \ref{lemma_failure of lambda-bounded and 1-bounded name principle for trees}). We noticed that his proof shows a more general fact.
\begin{lemma}[with Hamkins] \label{lemma_failure of lambda-bounded and 1-bounded name principle} Suppose $\lambda<\kappa$ and $\mathbb{P}$ is well-met.
\begin{enumerate-(1)}
\item If $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^\lambda$ fails, then there are densely many conditions $p\in \mathbb{P}$ such that $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P}_p,\kappa}^1$ fails, where $\mathbb{P}_p:=\{q\in \mathbb{P}: q\leq p\}$.
\item The same result holds with $\mathsf{ub}$ in place of $\mathsf{stat}$.
\end{enumerate-(1)}
\end{lemma}
\begin{proof} We prove the $\mathsf{stat}$ case; the $\mathsf{ub}$ case is identical. The key fact the proof uses is that if we partition a stationary/unbounded subset of $\kappa$ into $\lambda<\kappa$ many parts, then one of those parts must be stationary/unbounded.
Let $\sigma$ be a $\lambda$-bounded (rank $1$) name for a stationary set, such that there is no $g\in V$ with $\sigma^g$ stationary. Then, without loss of generality, we can enumerate the elements of $\sigma$: \begin{equation*} \sigma=\{ (\check{\gamma},p_{\gamma,\delta}) : \gamma<\kappa, \delta<\lambda\} \end{equation*} For $\delta<\lambda$, we define: \begin{equation*} \sigma_\delta=\{(\check{\gamma},p_{\gamma,\delta}) : \gamma<\kappa\} \end{equation*} Clearly, $\sigma_\delta$ is $1$-bounded. For any generic filter $G$, $\bigcup_{\delta<\lambda} \sigma_\delta^G=\sigma^G$ is stationary in $V[G]$. Hence, $\mathbb{P}$ forces ``There is some $\delta<\lambda$ such that $\sigma_\delta$ is stationary.'' Now, let $p\in \mathbb{P}$ be one of the densely many conditions which decide which $\delta$ this is. Then \begin{equation*} \sigma_{\delta,p}=\{(\check{\gamma},p_{\gamma,\delta}\wedge p) : \gamma<\kappa,\ p_{\gamma,\delta}\parallel p\} \end{equation*} is a $1$-bounded $\mathbb{P}_p$-name and $\mathbb{P}_p\mathrel{\Vdash} ``\sigma_{\delta,p}$ is stationary''. If $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P}_p,\kappa}^1$ held, there would be a filter $g$ such that $\sigma_{\delta,p}^g$ is stationary. Then $g$ generates a filter $h$ on $\mathbb{P}$ (its upward closure) with $\sigma^h\supseteq \sigma_{\delta,p}^g$, so $\sigma^h$ is stationary, contradicting the choice of $\sigma$.
\end{proof}
\begin{corollary}[Hamkins] \label{lemma_failure of lambda-bounded and 1-bounded name principle for trees} Suppose that $T$ is a tree, $\mathbb{P}_T$ is $T$ with reversed order and $\lambda<\kappa$.
\begin{enumerate-(1)}
\item If $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P}_T,\kappa}^\lambda$ fails, then there are densely many conditions $p\in \mathbb{P}_T$ such that $\mathsf{stat}\text{-} \mathsf{BN}_{(\mathbb{P}_T)_p,\kappa}^1$ fails, where $(\mathbb{P}_T)_p:=\{q\in \mathbb{P}_T: q\leq p\}$.
\item The same result holds with $\mathsf{ub}$ in place of $\mathsf{stat}$.
\end{enumerate-(1)}
\end{corollary}
\begin{corollary} \label{corollary_equivalence of lambda-bounded and 1-bounded name principles} Suppose $\lambda<\kappa$ and $\mathbb{P}$ is a well-met forcing such that for every $p\in \mathbb{P}$, $\mathbb{P}_p$ embeds densely into $\mathbb{P}$. Then $$\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^\lambda \Longleftrightarrow \mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^1$$ $$\mathsf{ub}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^\lambda \Longleftrightarrow \mathsf{ub}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^1$$ \end{corollary}
\begin{proof} We show that a failure of $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^\lambda$ implies the failure of $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^1$. The converse direction is clear and the proof for the unbounded name principles is analogous. By Lemma \ref{lemma_failure of lambda-bounded and 1-bounded name principle}, there is some $p\in \mathbb{P}$ such that $ \mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P}_p,\kappa}^1$ fails. Let $i\colon \mathbb{P}_p \rightarrow \mathbb{P}$ be a dense embedding and $\mathbb{Q}:=i(\mathbb{P}_p)$. Since $ \mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{Q},\kappa}^1$ fails, let $\sigma$ be a $1$-bounded $\mathbb{Q}$-name witnessing this failure. We claim that there is no filter $g$ on $\mathbb{P}$ such that $\sigma^g$ is stationary. Assume otherwise. Using that $\mathbb{Q}$ is well-met, let $h$ denote the set of all $q\in\mathbb{Q}$ such that $q\geq p_0\wedge_\mathbb{Q} \dots \wedge_\mathbb{Q} p_n$ for some $p_0,\dots,p_n \in g\cap \mathbb{Q}$. It is easy to check that $h$ is a well-defined filter on $\mathbb{Q}$ and contains $g\cap \mathbb{Q}$. Then $\sigma^h\supseteq \sigma^g$ is stationary. But this contradicts the choice of $\sigma$. Hence $\sigma$ also witnesses the failure of $\mathsf{stat}\text{-} \mathsf{BN}_{\mathbb{P},\kappa}^1$. \end{proof}
\subsection{Extremely bounded forcing axioms}
We next study forcing axioms for very small predense sets. The next lemmas show that $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1}$ has some of the same consequences as $\mathsf{BFA}$.
\begin{lemma} If $\mathbb{P}$ is a complete Boolean algebra such that $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1}$ holds, then $1_\mathbb{P}$ does not force that $\omega_1$ is collapsed. \end{lemma}
\begin{proof} Suppose towards a contradiction that $\mathrel{\Vdash} \dot{f}\colon \omega_1\rightarrow \omega$ is injective. Let $A_\alpha=\{ \llbracket \dot{f}(\alpha)=n \rrbracket \mid n\in\omega\}\setminus\{0\}$. Since each $A_\alpha$ is a countable maximal antichain, $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1}$ yields a filter $g$ with $g\cap A_\alpha\neq\emptyset $ for all $\alpha<\omega_1$. Define $f'\colon \omega_1\rightarrow \omega$ by letting $f'(\alpha)=n$ if $\llbracket \dot{f}(\alpha)=n \rrbracket \in g$. Since $g$ is a filter, $f'\colon \omega_1\rightarrow \omega$ is well-defined and injective, which is impossible. \end{proof}
\begin{lemma} If $\mathbb{P}$ is a complete Boolean algebra such that $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1}$ holds and $\mathbb{P}$ adds a real, then $\axiomft{CH}$ fails.
\end{lemma}
\begin{proof} Suppose $\axiomft{CH}$ holds and let $\langle x_\alpha\mid \alpha<\omega_1\rangle$ be an enumeration of all reals. Let $\sigma$ be a name for a new real added by $\mathbb{P}$. For $\alpha<\omega_1$, let \begin{equation*} D_\alpha = \{ \llbracket t^\smallfrown \langle n\rangle \subseteq \sigma\rrbracket \colon t\in 2^{<\omega}, n\in 2, t\subseteq x_\alpha, t^\smallfrown \langle n\rangle \not \subseteq x_\alpha\} \end{equation*} For $n<\omega$, let \begin{equation*} E_n=\{\llbracket \sigma(n)=m\rrbracket \mid m\in 2\} \end{equation*} Then the $D_\alpha$ and $E_n$ are all predense and countable. By $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1}$, take a filter $g$ which meets every $D_\alpha$ and $E_n$. The $E_n$ ensure that $g$ defines a real $x$ (by $x(n)=m$ where $\llbracket \sigma(n)=m\rrbracket \in g$). But if $x=x_\alpha$ then $g\cap D_\alpha=\emptyset$, so $x$ does not appear in the enumeration, a contradiction. \end{proof}
There exist forcings $\mathbb{P}$ such that the implication $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1} \Rightarrow \mathsf{BFA}^{\omega_1}_{\mathbb{P},\omega_1}$ fails. To see this, suppose that $\mathbb{Q}$ is a forcing such that $\mathsf{BFA}^{\omega_1}_{\mathbb{Q},\omega_1}$ fails. Let $\mathbb{P}$ be a lottery sum of $\omega_1$ many copies of $\mathbb{Q}$. Since $\mathsf{BFA}^{\omega_1}_{\mathbb{Q},\omega_1}$ fails, $\mathsf{BFA}^{\omega_1}_{\mathbb{P},\omega_1}$ fails as well. On the other hand, $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1}$ holds trivially since any countable predense subset of $\mathbb{P}$ contains $1_\mathbb{P}$ and is therefore met by every filter.
\begin{question} Does the implication $\mathsf{BFA}^\omega_{\mathbb{P},\omega_1} \Rightarrow \mathsf{BFA}^{\omega_1}_{\mathbb{P},\omega_1}$ hold for all complete Boolean algebras $\mathbb{P}$? \end{question}
By the previous lemmas, any forcing which is a counterexample cannot force that $\omega_1$ is collapsed, and if it adds reals then $\axiomft{CH}$ fails.
\subsection{Basic results on $\mathsf{ub}\text{-}\mathsf{FA}$}
In this section, we collect some observations about weak forcing axioms. We aim to prove some consequences of these axioms. We first consider $\mathsf{ub}\text{-}\mathsf{FA}$ and $\mathsf{stat}\text{-}\mathsf{FA}$.
How strong is $\mathsf{ub}\text{-}\mathsf{FA}$? The next lemmas show that it has some of the same consequences as $\mathsf{FA}$.
\begin{lemma} If $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ holds, then $\mathbb{P}$ does not force that $\omega_1$ is collapsed. \end{lemma}
\begin{proof} Towards a contradiction, suppose $\mathbb{P}$ forces that $\omega_1$ is collapsed. Let $\dot{f}$ be a $\mathbb{P}$-name for an injective function $ \omega_1\rightarrow \omega$. For $\alpha<\omega_1$, let $D_\alpha=\{ p \in \mathbb{P} \mid \exists n\in\omega\ p\Vdash \dot{f}(\alpha)=n \}$. By $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$, there is a filter $g$ and an unbounded subset $A$ of $\omega_1$ such that $g\cap D_\alpha\neq\emptyset$ for all $\alpha\in A$. Define $f\colon A\rightarrow \omega$ by letting $f(\alpha)=n$ if there is some $p\in g\cap D_\alpha$ with $p\Vdash \dot{f}(\alpha)=n$. Since $g$ is a filter, $f$ is injective, which is impossible since $A$ is uncountable. \end{proof}
\begin{lemma} \label{Lemma - ubFA and no new reals imply stationary set preserving} If $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ holds and $\mathbb{P}$ does not add reals, then for each stationary subset $S$ of $\omega_1$, $\mathbb{P}$ does not force that $S$ is nonstationary. \end{lemma}
\begin{proof} Suppose that $\dot{C}$ is a name for a club such that $\Vdash_\mathbb{P} \check{S}\cap \dot{C}=\emptyset$.
Let $\dot{f}$ be a name for the characteristic function of $\dot{C}$. For each $\alpha<\omega_1$, $$D_\alpha=\{ p\in \mathbb{P} \mid \exists t \in 2^\alpha\ p\Vdash \check{t} \subseteq \dot{f}\}$$ is dense in $\mathbb{P}$, since $\mathbb{P}$ does not add reals. By $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$, there is a filter $g$ and an unbounded subset $A$ of $\omega_1$ such that $g\cap D_\alpha\neq\emptyset$ for all $\alpha\in A$. Since $g$ is a filter, $C:=\{\alpha<\omega_1 \mid \exists p\in g\ p\Vdash \check{\alpha} \in \dot{C} \}$ is a club in $\omega_1$. As $S$ is stationary, there is some $\alpha\in S\cap C$; but then some $p\in g$ forces $\check{\alpha}\in \dot{C}$, contradicting $\Vdash_\mathbb{P} \check{S}\cap \dot{C}=\emptyset$. \end{proof}
The previous lemma also follows from Theorem \ref{Bagaria's characterisation} and Lemma \ref{ubFA implies BFA} below via an absoluteness argument, assuming $\mathbb{P}$ is a homogeneous complete Boolean algebra. It is open whether the lemma holds for forcings $\mathbb{P}$ which add reals.
What is the relationship between $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ and other forcing axioms? We find two opposite situations. For any $\sigma$-centred forcing $\mathbb{P}$, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ and $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ are provable in $\axiomft{ZFC}$ by Lemma \ref{Lemma stat-NP for sigma-centred} below. For many other forcings though, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ implies nontrivial axioms such as $\mathsf{FA}_{\mathbb{P},\omega_1}$ or $\mathsf{BFA}^{\omega_1}_{\mathbb{P},\omega_1}$. For instance, the implication $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ $\Rightarrow$ $\mathsf{FA}_{\mathbb{P},\omega_1}$ holds for all $\sigma$-distributive forcings by Lemma \ref{ubFA implies FA for sigma-distributive forcings} below. We will further see in Lemma \ref{ubFA implies BFA} below that for any complete Boolean algebra $\mathbb{P}$ which does not add reals, $(\forall q\in \mathbb{P}\ \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_q,\omega_1})$ implies $\mathsf{BFA}^{\omega_1}_{\mathbb{P},\omega_1}$. Moreover, the implication $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ $\Rightarrow$ $\mathsf{FA}_{\mathbb{P},\omega_1}$ also holds for some forcings that add reals, for instance for random forcing by Lemma \ref{Lemma_Random ubFA}.
We do not have any examples of forcings where $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ and $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ sit between these two extremes: strictly weaker than $\mathsf{FA}_{\mathbb{P},\omega_1}$, but not provable in $\axiomft{ZFC}$. In particular, we have not been able to separate the two axioms:
\begin{question} \label{Question ubFA versus statFA} Can forcings $\mathbb{P}$ exist such that $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ holds, but $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ fails? \end{question}
For instance, we would like to know if these axioms hold for the following forcings:
\begin{question} \label{Question Baumgartner's forcing} Do Baumgartner's forcing to add a club in $\omega_1$ with finite conditions \cite[Section 3]{baumgartner1984applications} and Abraham's and Shelah's forcing for destroying stationary sets with finite conditions \cite[Section 2]{abraham1983forcing} satisfy $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ and $\mathsf{stat}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$?
\end{question}
\subsection{Characterisations of $\mathsf{FA}^+$ and $\mathsf{FA}^{++}$}
The proof of the equivalence of $\mathsf{FA}$ and $\mathsf{N}$ still goes through fine if we change the axioms slightly, demanding some extra property to be true of the filter $g$ we're looking for. This gives us a nice way to express $\mathsf{FA}^+$ and $\mathsf{FA}^{++}$.
\begin{lemma} \label{Lemma_characterisation_of_FA+} $\mathsf{FA}_{\mathcal{C},\kappa}^+$ is equivalent to the following statement: \begin{quote} For all $\mathbb{P}\in \mathcal{C}$, for all rank $1$ names $\sigma$ and $\tau$ for subsets of $\kappa$ such that $\mathbb{P}$ forces ``$\sigma=\check{A}$'' for some $A$ and $``\tau$ is stationary'', there is some filter $g$ with $\sigma^g=A$ and $\tau^g$ stationary. \end{quote} Similarly, $\mathsf{FA}_{\mathcal{C},\kappa}^{++}$ is equivalent to being able to correctly interpret $\kappa$ many rank $1$ names which are forced to be stationary, together with a single rank $1$ name for a specific set $A$. \end{lemma}
\begin{proof} Analogous to the proof of Lemma \ref{Lemma_FA iff N} in the previous section. \end{proof}
In the case of $\mathsf{FA}^{++}$ this result can be sharpened further, getting rid of the name for $A$:
\begin{lemma} $\mathsf{FA}_{\mathcal{C},\kappa}^{++}$ is equivalent to the statement: \begin{quote} For all collections of $\kappa$ many rank $1$ names $\langle \sigma_\gamma\mid \gamma<\kappa\rangle$ with $\mathbb{P} \mathrel{\Vdash} ``\sigma_\gamma$ is stationary'' for all $\gamma$, there is a filter $g\in V$ such that for all $\gamma$, $\sigma_\gamma^g$ is stationary. \end{quote} \end{lemma}
\begin{proof} $\Rightarrow$: By the previous lemma.
$\Leftarrow$: Let $\sigma$ be a rank $1$ name, such that $\mathbb{P}\mathrel{\Vdash} \sigma=\check{A}$ for some $A\subseteq \kappa$. We claim there is a collection $\langle \tau_\gamma\mid \gamma<\kappa\rangle$ of rank $1$ names, which are forced to be stationary in $\kappa$, such that any filter $g$ which interprets every $\tau_\gamma$ as stationary will interpret $\sigma$ as $A$. Once we have proved this claim, the lemma follows immediately from the second part of Lemma \ref{Lemma_characterisation_of_FA+}.
For $\gamma \in A$, let $\tau_\gamma=\{( \check{\alpha},p) : \alpha \in \kappa, p\sforces \check{\gamma} \in \sigma\}$. For $\gamma \not \in A$, let $\tau_\gamma=\check{\kappa}$. We first check that $\mathbb{P}\mathrel{\Vdash} \tau_\gamma =\check{\kappa}$ for $\gamma\in A$: since $\mathbb{P}\mathrel{\Vdash} \sigma=\check{A}$ by assumption, every generic filter contains some $p$ with $p\sforces \check{\gamma} \in \sigma$, and any such $p$ puts every $\alpha<\kappa$ into the interpretation of $\tau_\gamma$.
Now suppose $g$ is a filter such that $\tau_\gamma^g$ is stationary for all $\gamma<\kappa$; such a filter exists by assumption. If $\gamma \in A$, then in particular $\tau_\gamma^g\neq \emptyset$, so some $p\in g$ strongly forces $\check{\gamma}\in\sigma$ and hence $\gamma \in \sigma^g$. Thus $\sigma^g\supseteq A$. Conversely, if $\gamma \in \sigma^g\setminus A$, then there is some $p\in g$ with $( \check{\gamma},p) \in \sigma$; but then $p\mathrel{\Vdash}\check{\gamma}\in\sigma$, which is impossible as $\mathbb{P}\mathrel{\Vdash} \check{\gamma}\not \in \sigma$. Hence $\sigma^g=A$, as claimed. \end{proof}
\section{A correspondence for arbitrary ranks} \label{Section correspondence and applications}
We now move on to discuss higher ranked name principles, including those of the ranked or unranked simultaneous variety. It turns out that even at high ranks, a surprising variety of these are equivalent to one another and to a suitable forcing axiom.
These are summarised in the following theorems. \subsection{The correspondence} \label{Section_correspondence} \begin{theorem} \label{correspondence forcing axioms name principles} Let $\mathbb{P}$ be a forcing and let $\kappa$ be a cardinal. The following implications hold, given the assumptions noted at the arrows: \begin{enumerate-(1)} \item \label{correspondence forcing axioms name principles 1} \[ \xymatrix@R=1em{ & & \mathsf{FA}_\kappa \ar@/_-0.3cm/@{->}[rdd] & & \\ & & & & \\ & \ \ \ \ \ \ \mathsf{N}_{\mathbb{P},\kappa}(\infty) \ar@{<-}[rr] \ar@/_-0.3cm/@{->}[ruu] & & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty) & }\] \item \label{correspondence forcing axioms name principles 2} For any ordinal $\alpha>0$, and any transitive set $X$ of size at most $\kappa$: \footnote{Recall that $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$ is only defined if $X$ has size at most $\kappa$.} \[ \xymatrix@R=1em{ & & \mathsf{FA}_\kappa \ar@/_-0.3cm/@{->}[rdd] & & \\ & & & & \\ & \ \ \ \ \ \ \mathsf{N}_{\mathbb{P},X,\kappa}(\alpha) \ar@{<-}[rr] \ar@/_-0.3cm/@{->}[ruu]^{\lvert \mathcal{P}^{<\alpha}(X)\rvert \geq\kappa} & & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha) & }\] \end{enumerate-(1)} \end{theorem} As usual, we can generally think of $X$ as being a cardinal. There is also a bounded version of this theorem. \begin{theorem} \label{correspondence bounded forcing axioms name principles} Let $\mathbb{P}$ be a complete Boolean algebra, and let $\kappa,\lambda$ be cardinals. The following implications hold, given the assumptions noted at the arrows: \begin{enumerate-(1)} \item \[ \xymatrix@R=1em{ & & \mathsf{BFA}^\lambda_\kappa \ar@/_-0.3cm/@{->}[rdd]^{\kappa\leq \lambda} & & \\ & & & & \\ & \ \ \ \ \ \ \mathsf{BN}_{\mathbb{P},\kappa}^{\lambda}(\infty) \ar@{<-}[rr] \ar@/_-0.3cm/@{->}[ruu] & & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}_{\mathbb{P},\kappa}^{\lambda}(\infty) & }\] \item For any ordinal $\alpha>0$, and transitive set $X$ of size at most $\kappa$: \[ \xymatrix@R=1em{ & & \mathsf{BFA}^\lambda_\kappa \ar@/_-0.3cm/@{->}[rdd]^{\kappa\leq\lambda} & & \\ & & & & \\ & \ \ \ \ \ \ \mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha) \ar@{<-}[rr] \ar@/_-0.3cm/@{->}[ruu]^{\lvert \mathcal{P}^{<\alpha}(X)\rvert \geq\kappa} & & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha) & }\] \end{enumerate-(1)} \end{theorem}
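To make the bounded version concrete, here is one instance, which is used again in Section \ref{subsection - generic absoluteness}: taking $X=\kappa$, $\alpha=1$ and $\lambda=\kappa$, the assumptions at both arrows of the second diagram are satisfied (note that $\lvert \mathcal{P}^{<1}(\kappa)\rvert=\kappa$), so all three principles are equivalent: $$\mathsf{BFA}^\kappa_{\mathbb{P},\kappa} \Longleftrightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\kappa_{\mathbb{P},\kappa}(1) \Longleftrightarrow \mathsf{BN}^\kappa_{\mathbb{P},\kappa}(1).$$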
\begin{remark} For the $\infty$ case it suffices to look only at $\emptyset$ names, as we discussed after Definition \ref{Defn_nameprinciple}. Moreover, for the implication $\mathsf{N}_{\mathbb{P},\kappa}(\infty) \Rightarrow\mathsf{FA}_{\mathbb{P},\kappa}$ (and the corresponding ones in the other diagrams), we need only rank $1$ $\kappa$-names for $\kappa$. These can be understood as rank $\kappa$ $\emptyset$-names for $\kappa$. For $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha) \Rightarrow\mathsf{FA}_{\mathbb{P},\kappa}$, rank $1$ $Y$-names for a fixed set $Y$ of size $\kappa$ suffice. These can be understood as rank ${\leq}\alpha$ $X$-names. These remarks are also true for the bounded versions. Note that for $\mathsf{N}_{\mathbb{P},\kappa}(1) \Rightarrow\mathsf{FA}_{\mathbb{P},\kappa}$, rank $1$ $\kappa$-names for $\kappa$ suffice by Lemma \ref{Lemma_FA iff N}. \end{remark} We give some simple instances of Theorem \ref{correspondence forcing axioms name principles} \ref{correspondence forcing axioms name principles 2} and postpone the proofs to Section \ref{Section proofs}. The variant for bounded forcing axioms has similar consequences. The next result follows by letting $X=\kappa$ and $\alpha=1$.
\begin{corollary} For any forcing $\mathbb{P}$ and any cardinal $\kappa$, $\mathsf{FA}_{\mathbb{P},\kappa}$ $\Longleftrightarrow$ $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}$ $\Longleftrightarrow$ $\mathsf{N}_{\mathbb{P},\kappa}$. \end{corollary} To illustrate this, we note how some concrete forcing axioms can be characterized by name principles. For example, we can characterize $\axiomft{PFA}$ as follows: $$\axiomft{PFA} \Longleftrightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathrm{proper},\omega_1} \Longleftrightarrow \mathsf{N}_{\mathrm{proper},\omega_1}.$$ In other words, rank $1$ names for $\omega_1$ can be interpreted correctly. For higher ranks, it is useful to choose $\alpha$, $\kappa$ and $X$ such that $|\mathcal{P}^{<\alpha}(X)\rvert \geq\kappa$ holds to get an equivalence in Theorem \ref{correspondence forcing axioms name principles} \ref{correspondence forcing axioms name principles 2}. This condition holds for $\kappa\leq 2^\omega$, $X=\omega$ and $\alpha=2$. \begin{corollary} For any cardinal $\kappa\leq 2^\omega$ and any forcing $\mathbb{P}$, we have $\mathsf{FA}_{\mathbb{P},\kappa}$ $\Longleftrightarrow$ $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\omega,\kappa}(2)$ $\Longleftrightarrow$ $\mathsf{N}_{\mathbb{P},\omega,\kappa}(2)$. \end{corollary} For example, we can characterize $\axiomft{PFA}$ as follows: $$\axiomft{PFA} \Longleftrightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathrm{proper},\omega,\omega_1}(2) \Longleftrightarrow \mathsf{N}_{\mathrm{proper},\omega,\omega_1}(2).$$ In other words, rank $2$ names for sets of reals can be interpreted correctly. We leave open how to characterise higher rank (e.g. rank $2$) principles for names for reals. \subsection{The proofs} \label{Section proofs} \begin{proof}[Proof of Theorem \ref{correspondence forcing axioms name principles}] We prove both parts of the theorem simultaneously, by fixing $X$ and $\alpha$ and proving all the implications in the following diagram: \smallskip \[ \xymatrix@R=1em{ & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty) \ar@{->}[r] \ar@{->}[dd] & \mathsf{N}_{\mathbb{P},\kappa}(\infty) \ar@{->}[rd] \ar@{->}[dd] & \\ \mathsf{FA}_{\mathbb{P},\kappa} \ar@{->}[ru] \ar@{->}[rd] & & & \mathsf{FA}_{\mathbb{P},\kappa} \\ \labelmargin{10pt} & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha) \ar@{->}[r] & \mathsf{N}_{\mathbb{P},X,\kappa}(\alpha) \ar@{->}[ru]_{\ \ \lvert \mathcal{P}^{<\alpha}(X)\rvert\geq\kappa} & }\] \bigskip Of these, the first $\mathsf{FA}_{\mathbb{P},\kappa}\Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)$ is the hardest to prove, and the main work of the theorem. We'll leave it to the end, and prove the other implications first. Note that $\mathsf{FA}_{\mathbb{P},\kappa}\Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$ follows from the rest of the diagram. \begin{proof}[Proof of $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)\Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$] The latter is a special case of the former. \end{proof} \begin{proof}[Proof of $\mathsf{N}_{\mathbb{P},\kappa}(\infty)\Rightarrow \mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$] Again, this is a special case.
\end{proof} \begin{proof}[Proof of $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)\Rightarrow \mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$] Given a $\kappa$-small name $\sigma$ of rank $\alpha$ or less, and a set $A$ as called for by $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$, we know $A\in \mathcal{P}^\alpha(X)\cap H_{\kappa^+}$. Hence $\check{A}$ is a $\kappa$-small ${\leq}\alpha$-rank $X$-name, so ``$\sigma=\check{A}$'' is one of the formulas discussed by the simultaneous name principle. \end{proof} \begin{proof}[Proof of $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)\Rightarrow \mathsf{N}_{\mathbb{P},\kappa}(\infty)$] Similar to the previous proof: if $\sigma$ is any $\kappa$-small name, and $A\in H_{\kappa^+}$ is such that $\mathbb{P}\mathrel{\Vdash} \sigma=\check{A}$, then since $\check{A}$ is $\kappa$-small we know from $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)$ that we can find a filter $g$ such that $\sigma^g=\check{A}^g=A$. \end{proof} \begin{proof}[Proof of $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)\Rightarrow \mathsf{FA}_{\mathbb{P},\kappa}$] We assume $\lvert \mathcal{P}^{<\alpha}(X)\rvert \geq \kappa$. The idea is similar to the proof of $\mathsf{N}_{\mathbb{P},\kappa}\Rightarrow \mathsf{FA}_{\mathbb{P},\kappa}$ from Lemma \ref{Lemma_FA iff N}, but first we must prove a technical claim. \begin{claim} $\mathcal{P}^{<\alpha}(X)$ contains at least $\kappa$ many elements whose check names are $\kappa$-small ${<}\alpha$-rank $X$-names. \end{claim} \begin{proof}[Proof (Claim)] Let $\alpha'\leq \alpha$ be minimal such that $\lvert \mathcal{P}^{<\alpha'}(X)\rvert \geq \kappa$. Let $A\in \mathcal{P}^{<\alpha'}(X)$. Then $A\in \mathcal{P}^\epsilon(X)$ for some $\epsilon<\alpha'$. We show by induction on $\epsilon$ that $\check{A}$ is in fact a $\kappa$-small $\epsilon$-rank $X$-name. From this and the assumption on the size of $\kappa$, it of course follows that there are at least $\kappa$ many elements of $\mathcal{P}^{<\alpha'}(X)\subseteq \mathcal{P}^{<\alpha}(X)$ whose check names are $\kappa$-small ${<}\alpha$-rank $X$-names. The case $\epsilon=0$ is trivial. Suppose $\epsilon>0$. By inductive hypothesis, we know that all the names which are contained in $\check{A}$ are $\kappa$-small ${<}\epsilon$-rank $X$-names. It remains to check that there are at most $\kappa$ many of them; that is, that $\lvert A \rvert \leq \kappa$. But this is obvious, since $A\subseteq \mathcal{P}^{<\epsilon}(X)$ and $\lvert \mathcal{P}^{<\epsilon}(X)\rvert < \kappa$ by our choice of $\alpha'$. \end{proof} Given the claim, we can now take a set of $\kappa$ many distinct sets $A:=\{A_\gamma: \gamma<\kappa\}\subseteq \mathcal{P}^{<\alpha}(X)$, such that for all $\gamma$, the name $\check{A}_\gamma$ is a $\kappa$-small ${<}\alpha$-rank $X$-name. Let $\langle D_\gamma\rangle_{\gamma<\kappa}$ be a sequence of dense sets in $\mathbb{P}$. We define a name $\sigma$: \begin{equation*} \sigma=\{\langle \check{A}_\gamma,p\rangle : \gamma<\kappa, p\in D_\gamma\} \end{equation*} Then $\sigma$ is a $\kappa$-small ${\leq}\alpha$-rank $X$-name, and $\mathbb{P}\mathrel{\Vdash} \sigma=\check{A}$. Hence, if we assume $\mathsf{N}_{\mathbb{P},X,\kappa}(\alpha)$ we can choose a filter $g$ such that $\sigma^g=A$. It is easy to see that $g$ must meet every $D_\gamma$.
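To spell this out: fix $\gamma<\kappa$. Since $A_\gamma\in A=\sigma^g$, there is some pair $\langle \check{A}_\delta,p\rangle \in \sigma$ with $p\in g$ and $\check{A}_\delta^g=A_\gamma$. As the sets $A_\delta$ are pairwise distinct, this means $\delta=\gamma$, and hence $p\in g\cap D_\gamma$.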
\end{proof} \begin{proof}[Proof of $\mathsf{N}_{\mathbb{P},\kappa}(\infty)\Rightarrow \mathsf{FA}_{\mathbb{P},\kappa}$] Essentially the same as the previous proof, but since we're no longer required to make sure $\sigma$ has rank $\alpha$ we can omit the technical claim and just take $A_\gamma:=\gamma$ for all $\gamma<\kappa$. \end{proof} \begin{proof}[Proof of $\mathsf{FA}_{\mathbb{P},\kappa}\Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)$] This is the main work of the theorem. By a delicate series of inductions, we will prove the following lemma: \begin{lemma} \label{collection of dense sets to witness first order statement} Let $\varphi(\vec{\sigma})$ be a $\Sigma_0$ formula where $\vec{\sigma}$ is a tuple of $\kappa$-small names. Then there is a collection $\mathcal{D}_{\varphi(\vec{\sigma})}$ of at most $\kappa$ many dense sets, which has the following property: if $g$ is any filter meeting every set in $\mathcal{D}_{\varphi(\vec{\sigma})}$ and $g$ contains some $p$ such that $p\mathrel{\Vdash}\varphi(\vec{\sigma})$, then in fact $\varphi(\vec{\sigma}^g)$ holds in $V$. \end{lemma} The result we're trying to show follows easily from this lemma: Fix a tuple $\vec{\sigma}=\langle \sigma_0,\ldots,\sigma_n\rangle$ of $\kappa$-small names, and let $\mathcal{D}:=\bigcup\{\mathcal{D}_{\varphi(\vec{\sigma})} : \varphi(v_0,\ldots,v_n) \text{ is }\Sigma_0\}$. $\mathcal{D}$ is a collection of at most $\kappa$ many dense sets. Using $\mathsf{FA}_{\mathbb{P},\kappa}$, take a filter $g$ meeting every dense set in $\mathcal{D}$. If $\varphi(v_0,\ldots,v_n)$ is a $\Sigma_0$ formula and $1\mathrel{\Vdash} \varphi(\vec{\sigma})$ then since $1\in g$ we know that $\varphi(\vec{\sigma}^g)$ holds. We will work our way up to proving the lemma, by first proving it in simpler cases. We opt for a direct proof of the name principle $\mathsf{N}_{\mathbb{P},\kappa}(\infty)$ in the next Claim \ref{claim sigma equal emptyset}. This and Claim \ref{claim sigma neq emptyset} could be replaced by shorter arguments for $\kappa$-small $\emptyset$-names, since it suffices to deal with $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\emptyset,\kappa}(\infty)$ as discussed after Definition \ref{Defn_nameprinciple}. \begin{claim}\label{sigma=A} \label{claim sigma equal emptyset} The lemma holds when $\varphi$ is of the form $\sigma=\check{A}$ for some set $A\in H_{\kappa^+}$ and ($\kappa$-small) name $\sigma$. \end{claim} Note that since $A\in H_{\kappa^+}$, we know that $\check{A}$ is a $\kappa$-small name. So the statement in the claim does make sense. \begin{proof} We use induction on the rank of $\sigma$. If $\sigma$ has rank $0$ then it is a check name, and so the lemma is trivial: we can just take $\mathcal{D}_{\sigma=\check{A}}=\emptyset$. So say $\sigma$ has rank $\alpha>0$ and the lemma is proved for all names of rank ${<}\alpha$. Since $\sigma$ is $\kappa$-small, we can write $\sigma=\{(\sigma_\gamma,p): \gamma<\kappa, p\in S_\gamma\}$ for some $\kappa$-small names $\sigma_\gamma$ and sets $S_\gamma\subseteq \mathbb{P}$. First, let $B\in A$. We shall define a set $D_B$, whose ``job'' is to ensure $B$ ends up in $\sigma^g$.
\begin{equation*} D_B=\big\{p\in \mathbb{P} : \big(p\mathrel{\Vdash} \sigma \neq \check{A}\big)\vee\big(\exists \gamma<\kappa \: (p\mathrel{\Vdash} \sigma_\gamma=\check{B})\wedge (p\sforces \sigma_\gamma \in \sigma)\big) \big\} \end{equation*} $D_B$ is dense: if we take $p\in \mathbb{P}$ then either we can find $r\leq p$ with $r\mathrel{\Vdash} \sigma \neq \check{A}$, or else $p\mathrel{\Vdash} \sigma = \check{A}$. In the first case, we're done. In the second, given any (truly) generic filter $G$ containing $p$, there will be some $\gamma<\kappa$ and $q\in G$ such that\footnote{Note the somewhat delicate nature of this statement: we cannot first take an arbitrary $\gamma$ such that $\sigma_\gamma^G=B$ then try to find $q$ such that $q\sforces \sigma_\gamma\in \sigma$.} $\sigma_\gamma^G=B$ and $(\sigma_\gamma,q)\in \sigma$, so $q\sforces \sigma_\gamma \in \sigma$. Take $r\in G$ such that $r\mathrel{\Vdash} \sigma_\gamma=\check{B}$, and take $s$ below $p,q$ and $r$ by compatibility; then $s\in D_B$. Now let $\gamma<\kappa$. In a similar way, we define a set $E_\gamma$, which is designed to ensure that $\sigma_\gamma$ ends up in $A$ if it's going to be in $\sigma$. \begin{equation*} E_\gamma=\big\{p\in \mathbb{P}: (p\mathrel{\Vdash} \sigma \neq \check{A}) \vee (p\mathrel{\Vdash} \sigma_\gamma \not \in \sigma) \vee \big(\exists B\in A, p\mathrel{\Vdash} \sigma_\gamma=\check{B}\big)\big\} \end{equation*} Again, $E_\gamma$ is dense: Let $p\in \mathbb{P}$. We can assume that $p\mathrel{\Vdash} \sigma=\check{A}$ and $p\mathrel{\Vdash} \sigma_\gamma\in \sigma$; otherwise we're done immediately. But now we can strengthen $p$ to some $r\leq p$ which forces $\sigma_\gamma=\check{B}$ for some $B\in A$ and again we're done. We define \begin{equation*} \mathcal{D}_{\sigma=\check{A}}:=\{D_B\colon B\in A\}\cup \{E_\gamma \colon \gamma<\kappa\} \cup \bigcup_{\gamma<\kappa}\bigcup_{B\in A} \mathcal{D}_{\sigma_\gamma=\check{B}} \end{equation*} Every $\sigma_\gamma$ is a $\kappa$-small name of rank less than $\alpha$, and every $B\in H_{\kappa^+}$, so this is well defined by inductive hypothesis. By assumption, $\lvert A \rvert \leq \kappa$. Hence $\mathcal{D}_{\sigma=\check{A}}$ contains at most $\kappa$ many dense sets. Fix a filter $g$ which meets every element of $\mathcal{D}_{\sigma=\check{A}}$, and which contains some $p$ forcing $\sigma=\check{A}$. We must verify that $\sigma^g=A$. First, let $B\in A$. Find $q\in g\cap D_B$, and without loss of generality say $q\leq p$. Then clearly $q \mathrel{\Vdash} \sigma =\check{A}$, so (by definition of $D_B$) we can find $\gamma$ such that $q\mathrel{\Vdash} \sigma_\gamma=\check{B}$ and $q\sforces \sigma_\gamma\in \sigma$. The latter means that $\sigma_\gamma^g\in \sigma^g$. Since $g$ also meets every element of $\mathcal{D}_{\sigma_\gamma=\check{B}}$, the fact that $q\in g$ forces $\sigma_\gamma=\check{B}$ implies that $\sigma_\gamma^g=\check{B}^g=B$. Hence $B\in \sigma^g$. Now let $B\in \sigma^g$. Then we can find $\gamma<\kappa$ such that $B=\sigma_\gamma^g$ and such that for some $q\in g$ we have $q\sforces \sigma_\gamma\in \sigma$. Without loss of generality, say $q\leq p$. Then $q\mathrel{\Vdash} \sigma=\check{A}$. Let $r\in g\cap E_\gamma$, and again without loss of generality say $r\leq q$. Then for some $B'\in A$, $r\mathrel{\Vdash} \sigma_\gamma=\check{B}'$. Since $g$ meets every element of $\mathcal{D}_{\sigma_\gamma=\check{B}'}$, this tells us that $\sigma_\gamma^g=B'$. But then $B=\sigma_\gamma^g=B'\in A$. Hence $\sigma^g=A$ as required.
\end{proof} Next, we go up one step in complexity, by allowing both sides of the equality to be nontrivial. \begin{claim} The lemma holds when $\varphi$ has the form $\sigma=\tau$ for two ($\kappa$-small) names $\sigma$ and $\tau$. \end{claim} \begin{proof} We use induction on the ranks of $\sigma$ and $\tau$. Without loss of generality, let us assume the rank of $\sigma$ is $\alpha$, and the rank of $\tau$ is $\leq\alpha$. If $\rank(\tau)=0$ then $\tau$ is a check name. Since $\tau$ is $\kappa$-small, it can only be a check name for some $A\in H_{\kappa^+}$, so we are already done by the previous claim. So suppose $\rank(\sigma)=\alpha \geq \rank(\tau)>0$, and the result is proven for all $\tau',\sigma'$ where $\rank(\sigma')<\rank(\sigma)$ and $\rank(\tau')<\rank(\tau)$. Let us write $\sigma=\{(\sigma_\gamma,p):\gamma<\kappa,p\in S_\gamma\}$ and $\tau=\{(\tau_\delta,q): \delta<\kappa,q\in T_\delta\}$. For $\gamma \in \kappa$, we define a set $D_\gamma$, whose job is to ensure that if $\sigma_\gamma$ ends up being put in $\sigma$ by $g$, then it will also be equal to some element of $\tau$. \begin{align*} D_\gamma=\Big\{p\in \mathbb{P}: &(p\mathrel{\Vdash} \sigma \neq \tau) \vee (p\mathrel{\Vdash} \sigma_\gamma\not\in \sigma) \\ &\vee\exists \delta<\kappa\Big((p\mathrel{\Vdash}\sigma_\gamma=\tau_\delta) \wedge (p\sforces\tau_\delta\in\tau)\Big)\Big\} \end{align*} We claim $D_\gamma$ is dense: Let $p\in \mathbb{P}$. If $p\not \mathrel{\Vdash} \sigma_\gamma\in \sigma$ or $p\not \mathrel{\Vdash} \sigma=\tau$ then take some $q\leq p$ forcing the negation of one of these statements, and we are done. If $p\mathrel{\Vdash} \sigma_\gamma\in \sigma\wedge \sigma=\tau$ then take a generic filter $G$ containing $p$. We know $\sigma_\gamma^G\in \tau^G$, so $\sigma_\gamma^G=\tau_\delta^G$ for some $\tau_\delta$ which is strongly forced to be in $\tau$ by some $q\in G$. Since $\sigma_\gamma^G=\tau_\delta^G$, we can also take $r'\in G$ with $r'\mathrel{\Vdash} \sigma_\gamma=\tau_\delta$. Then take $r\in G$ below $p$, $q$ and $r'$; we know $r\mathrel{\Vdash} \sigma_\gamma=\tau_\delta$ and $r\sforces\tau_\delta\in \tau$. Hence $r\in D_\gamma$. Symmetrically, for $\delta<\kappa$ let \begin{align*} E_\delta=\Big\{p\in \mathbb{P}: &(p\mathrel{\Vdash} \sigma \neq \tau) \vee (p\mathrel{\Vdash} \tau_\delta\not\in \tau)\\ &\vee\exists \gamma<\kappa \Big((p\mathrel{\Vdash}\sigma_\gamma=\tau_\delta) \wedge (p\sforces\sigma_\gamma\in\sigma)\Big)\Big\} \end{align*} Again, $E_\delta$ is dense. We now let \begin{equation*} \mathcal{D}_{\sigma=\tau}:=\{D_\gamma: \gamma<\kappa\}\cup \{E_\delta:\delta<\kappa\} \cup \bigcup_{\gamma,\delta<\kappa} \mathcal{D}_{\sigma_\gamma=\tau_\delta} \end{equation*} Note that for all $\gamma,\delta<\kappa$, we know $\rank(\sigma_\gamma)<\rank(\sigma)$ and $\rank(\tau_\delta)<\rank(\tau)$, so $\mathcal{D}_{\sigma_\gamma=\tau_\delta}$ is already defined. Clearly, $\mathcal{D}_{\sigma=\tau}$ contains at most $\kappa$ many dense sets. Let $g$ be a filter meeting every element of it, and let $p\in g$ force $\sigma=\tau$. Suppose $B\in \sigma^g$. Then for some $q\in g$ and $\gamma<\kappa$, $B=\sigma_\gamma^g$ and $q\sforces \sigma_\gamma\in \sigma$ (and hence $q\mathrel{\Vdash} \sigma_\gamma \in \sigma$). We can also find some $r\in g\cap D_\gamma$. Without loss of generality, say $r$ is below both $p$ and $q$. Certainly $r$ cannot force $\sigma \neq \tau$, nor that $\sigma_\gamma \not \in \sigma$. Hence, for some $\delta<\kappa$, we know $r\mathrel{\Vdash} \sigma_\gamma=\tau_\delta$ and $r\sforces\tau_\delta\in \tau$.
But then $\tau_\delta^g\in \tau^g$, and since $g$ meets every element of $\mathcal{D}_{\sigma_\gamma=\tau_\delta}$, we also know that $B=\sigma_\gamma^g=\tau_\delta^g$. Hence $B\in \tau^g$. Hence $\sigma^g\subseteq \tau^g$, and by a symmetrical argument $\tau^g\subseteq \sigma^g$. \end{proof} \begin{claim}The lemma holds when $\varphi$ has the form $\tau\in\sigma$. \end{claim} \begin{proof} Write $\sigma=\{(\sigma_\gamma,p): \gamma<\kappa,p\in S_\gamma\}$ as usual. Let \begin{equation*} D=\Big\{p\in \mathbb{P}: (p\mathrel{\Vdash} \tau\not\in \sigma) \vee \exists \gamma<\kappa \Big( (p\mathrel{\Vdash} \tau=\sigma_\gamma)\wedge(p\sforces \sigma_\gamma\in \sigma)\Big)\Big\} \end{equation*} As usual, $D$ is dense. Let \begin{equation*} \mathcal{D}_{\tau\in\sigma}:=\{D\}\cup \bigcup_{\gamma<\kappa}\mathcal{D}_{\tau=\sigma_\gamma} \end{equation*} Let $g$ meet every element of $\mathcal{D}_{\tau\in\sigma}$ and contain some $p$ forcing $\tau\in\sigma$. Let $q\in g\cap D$, and assume $q\leq p$. Then for some $\gamma$, $q\mathrel{\Vdash} \tau=\sigma_\gamma$ and $q\sforces \sigma_\gamma\in \sigma$, so $\sigma_\gamma^g\in \sigma^g$. Since $g$ meets every element of $\mathcal{D}_{\tau=\sigma_\gamma}$ we know $\tau^g=\sigma_\gamma^g\in \sigma^g$. \end{proof} We next need to prove similar claims about the negations of all these formulas. \begin{claim} \label{claim sigma neq emptyset} The lemma holds when $\varphi$ is of the form $\sigma \neq \check{A}$ for $A\in H_{\kappa^+}$. \end{claim} \begin{proof} As before, this is trivial if $\sigma$ has rank $0$. Otherwise, let us write {$\sigma=\{(\sigma_\gamma,p):\gamma<\kappa,p\in S_\gamma\}$} and let \begin{align*} D=\Big\{p\in \mathbb{P}: &(p\mathrel{\Vdash} \sigma=\check{A})\vee \Big(\exists \gamma<\kappa (p\sforces \sigma_\gamma\in \sigma) \wedge (p\mathrel{\Vdash} \sigma_\gamma\not \in \check{A})\Big)\\ & \vee (\exists B\in A: p\mathrel{\Vdash} \check{B}\not \in \sigma)\Big\} \end{align*} As usual, $D$ is dense. We then let \begin{equation*} \mathcal{D}_{\sigma\neq \check{A}}:=\{D\}\cup \bigcup_{\gamma<\kappa}\bigcup_{B\in A} \mathcal{D}_{\sigma_\gamma\neq \check{B}} \end{equation*} By induction, this is well defined, and since $A\in H_{\kappa^+}$ has cardinality at most $\kappa$, $\mathcal{D}_{\sigma\neq \check{A}}$ contains at most $\kappa$ many dense sets. Let $g$ be a filter meeting all of $\mathcal{D}_{\sigma\neq \check{A}}$ with $p\in g$ forcing $\sigma\neq \check{A}$. Take $q\in g \cap D$ below $p$. There are two cases to consider. \begin{enumerate-(1)} \item For some $\gamma$, $q\sforces \sigma_\gamma\in\sigma$ and $q\mathrel{\Vdash} \sigma_\gamma\not\in\check{A}$. Then certainly $\sigma_\gamma^g\in\sigma^g$. Let $B\in A$. Then $q\mathrel{\Vdash} \sigma_\gamma\neq \check{B}$. Since $g$ meets all of $\mathcal{D}_{\sigma_\gamma \neq \check{B}}$, we know $\sigma_\gamma^g\neq B$. Hence $\sigma_\gamma^g\in \sigma^g\setminus A$ so $\sigma^g\neq A$. \item For some $B\in A$, $q\mathrel{\Vdash} \check{B}\not \in\sigma$. Let $B'\in \sigma^g$. Then for some $\gamma<\kappa$ and $r\leq q$ in $g$, $\sigma_\gamma^g=B'$ and $r\sforces \sigma_\gamma\in \sigma$. Hence $r\mathrel{\Vdash} \sigma_\gamma\in\sigma$. But also $r\mathrel{\Vdash} \check{B}\not\in\sigma$ since $r\leq q$. Therefore $r\mathrel{\Vdash} \sigma_\gamma\neq \check{B}$, and so $B'=\sigma_\gamma^g\neq B$ since $g$ meets $\mathcal{D}_{\sigma_\gamma \neq \check{B}}$. Hence $B\in A\setminus \sigma^g$, so again $\sigma^g\neq A$. \end{enumerate-(1)} \end{proof} \begin{claim} The lemma holds when $\varphi$ is of the form $\sigma \neq \tau$.
\end{claim} \begin{proof} The dense sets we need to use are very similar to the ones in the previous claim. We assume $\rank(\sigma)\geq \rank(\tau)$ and note that if $\rank(\tau)=0$ we're looking at the previous case. So let us assume $\rank(\sigma)\geq \rank(\tau)>0$ and that we have proved the statement for all $\sigma'$ and $\tau'$ with lower ranks than $\sigma$ and $\tau$ respectively. As usual, write $\sigma=\{(\sigma_\gamma,p):\gamma<\kappa,p\in S_\gamma\}$ and $\tau=\{(\tau_\delta,q): \delta<\kappa,q\in T_\delta\}$. Let \begin{align*} D=\Big\{p\in\mathbb{P}: &(p\mathrel{\Vdash} \sigma=\tau) \vee \Big(\exists \gamma<\kappa (p \sforces \sigma_\gamma \in \sigma) \wedge (p\mathrel{\Vdash} \sigma_\gamma \not \in \tau)\Big)\\ &\vee \Big(\exists \delta<\kappa (p\sforces\tau_\delta\in \tau) \wedge (p\mathrel{\Vdash} \tau_\delta \not \in \sigma)\Big)\Big\} \end{align*} Once again $D$ is dense. We define \begin{equation*} \mathcal{D}_{\sigma \neq \tau} := \{D\}\cup \bigcup_{\gamma,\delta<\kappa} \mathcal{D}_{\sigma_\gamma \neq \tau_\delta} \end{equation*} Letting $g$ be our usual filter meeting all of $\mathcal{D}_{\sigma \neq \tau}$ and containing some $p$ forcing $\sigma \neq \tau$, we can find $q\in g\cap D$ below $p$. Without loss of generality, there exists $\gamma<\kappa$ such that $q\sforces \sigma_\gamma\in \sigma$ and $q\mathrel{\Vdash} \sigma_\gamma \not \in \tau$. As always, the first statement implies $\sigma_\gamma^g\in \sigma^g$. If $\sigma_\gamma^g\in \tau^g$ then for some $\delta<\kappa$ and $r\in g$ (which we can take to be below $q$), $\sigma_\gamma^g=\tau_\delta^g$ and $r\sforces \tau_\delta \in \tau$. But then we know $r\mathrel{\Vdash} \sigma_\gamma \neq \tau_\delta$. Since $g$ meets all of $\mathcal{D}_{\sigma_\gamma\neq \tau_\delta}$ this implies $\sigma_\gamma^g\neq \tau_\delta^g$. Contradiction. Hence $\sigma_\gamma^g \in \sigma^g\setminus\tau^g$, so $\sigma^g\neq \tau^g$. \end{proof} \begin{claim} The lemma holds when $\varphi$ has the form $\tau\not\in\sigma$. \end{claim} \begin{proof} Write $\sigma=\{(\sigma_\gamma,p):\gamma<\kappa,p\in S_\gamma\}$ as usual. Let \begin{equation*} \mathcal{D}_{\tau\not\in\sigma} := \bigcup_{\gamma<\kappa}\mathcal{D}_{\tau\neq\sigma_\gamma} \end{equation*} Suppose $g$ meets all of $\mathcal{D}_{\tau\not\in\sigma}$ and contains some $p$ forcing $\tau \not \in \sigma$. Let $B\in\sigma^g$. For some $\gamma<\kappa$ and some $q\in g$ below $p$, $B=\sigma_\gamma^g$ and $q\sforces \sigma_\gamma\in\sigma$. Then $q\mathrel{\Vdash} \tau \neq \sigma_\gamma$, so $ \tau^g\neq\sigma_\gamma^g=B$. Hence $\tau^g\not \in \sigma^g$. \end{proof} We can now finally prove the full lemma. \begin{claim}The lemma holds in all cases. \end{claim} \begin{proof} We use induction on the length of the formula $\varphi$. By rearranging $\varphi$, we can assume that all the $\neg$'s in $\varphi$ are in front of atomic formulas. Throughout this proof, we will suppress the irrelevant variables $\vec{\sigma}$ of formulas $\psi(\vec{\sigma})$, and will write $\psi^g$ to denote $\psi(\vec{\sigma}^g)$. The base case, where $\varphi$ is either atomic or the negation of an atomic formula, was covered in the previous claims. $\varphi=\psi \wedge \chi$: We let $\mathcal{D}_{\varphi}:=\mathcal{D}_\psi \cup \mathcal{D}_\chi$. If $p\in g$ forces $\varphi$ then it also forces $\psi$ and $\chi$, so if also $g$ meets all of $\mathcal{D}_\varphi$ then $\psi^g$ and $\chi^g$ hold.
$\varphi=\psi\vee \chi$: We let $D=\{p\in\mathbb{P}: (p\mathrel{\Vdash} \neg\varphi) \vee (p\mathrel{\Vdash} \psi) \vee (p\mathrel{\Vdash} \chi)\}$, and let $\mathcal{D}_\varphi:=\{D\}\cup \mathcal{D}_\psi\cup \mathcal{D}_\chi$. If $g$ meets all of $\mathcal{D}_\varphi$ and contains some $p$ which forces $\varphi$ then take $q\leq p$ in $g \cap D$. Then $q\mathrel{\Vdash} \psi$ or $q\mathrel{\Vdash} \chi$, and by definition of $\mathcal{D}_\psi$ and $\mathcal{D}_\chi$ this implies $\psi^g$ or $\chi^g$ respectively. $\varphi=\forall x \in \sigma \ \psi(x)$: Write $\sigma=\{(\sigma_\gamma,p):\gamma<\kappa,p\in S_\gamma\}$, and let $\mathcal{D}_\varphi:=\bigcup_{\gamma<\kappa} \mathcal{D}_{\psi(\sigma_\gamma)}$. Suppose, as usual, that $g$ meets all of $\mathcal{D}_\varphi$ and contains some $p$ forcing $\varphi$. Let $B\in \sigma^g$. Then we have some $\gamma<\kappa$ and $q\in g$ such that $\sigma_\gamma^g=B$ and $q\sforces \sigma_\gamma\in \sigma$. Taking (without loss of generality) $q\leq p$, we then have that $q\mathrel{\Vdash} \psi(\sigma_\gamma)$. Hence $\psi^g(\sigma_\gamma^g)$ holds. But we know $\sigma_\gamma^g=B$. Hence $\psi^g(B)$ holds for all $B\in \sigma^g$, so $\varphi^g$ holds. $\varphi=\exists x\in \sigma \ \psi(x)$: Again we write $\sigma=\{(\sigma_\gamma,p):\gamma<\kappa,p\in S_\gamma\}$. Let $D$ be the dense set $\{p\in\mathbb{P}: (p\mathrel{\Vdash} \neg \varphi) \vee \exists \gamma<\kappa \ ( p\sforces \sigma_\gamma\in\sigma \wedge p\mathrel{\Vdash} \psi(\sigma_\gamma) )\}$, and let $\mathcal{D}_\varphi := \{D\}\cup \bigcup_{\gamma<\kappa} \mathcal{D}_{\psi(\sigma_\gamma)}$. If $g$ meets all of $\mathcal{D}_\varphi$ and contains $p$ forcing $\varphi$ then we can take some element $q$ of $g\cap D$ below $p$. Then for some $\gamma<\kappa$, we know $q\mathrel{\Vdash} \psi(\sigma_\gamma)$ and $q\sforces \sigma_\gamma\in \sigma$. Then $\psi^g(\sigma_\gamma^g)$ holds, and $\sigma_\gamma^g\in \sigma^g$. \end{proof} This completes the proof of Lemma \ref{collection of dense sets to witness first order statement}. Hence $\mathsf{FA}_{\mathbb{P},\kappa}$ implies $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{N}_{\mathbb{P},\kappa}(\infty)$, as discussed earlier. \end{proof} This completes the proof of Theorem \ref{correspondence forcing axioms name principles}. \end{proof} In fact, this proof works even if we allow formulas to have conjunctions and disjunctions of $\kappa$ many formulas (and accordingly let formulas have $\kappa$ many variables). The proof of Theorem \ref{correspondence bounded forcing axioms name principles} is essentially the same: \begin{proof}[Proof of Theorem \ref{correspondence bounded forcing axioms name principles}] We prove all the implications in the following diagram. 
\smallskip \[ \xymatrix@R=1em{ & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}(\infty) \ar@{->}[r] \ar@{->}[dd] & \mathsf{BN}^\lambda_{\mathbb{P},\kappa}(\infty) \ar@{->}[rd] \ar@{->}[dd] & \\ \mathsf{BFA}^\lambda_{\mathbb{P},\kappa} \ar@{->}[ru]^{\kappa\leq\lambda \ \ \ } \ar@{->}[rd]_{\kappa\leq\lambda \ \ \ } & & & \mathsf{BFA}^\lambda_{\mathbb{P},\kappa} \\ \labelmargin{10pt} & \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},X,\kappa}(\alpha) \ar@{->}[r] & \mathsf{BN}^\lambda_{\mathbb{P},X,\kappa}(\alpha) \ar@{->}[ru]_{\ \ \lvert \mathcal{P}^{<\alpha}(X)\rvert\geq\kappa} & }\] \bigskip Note that $\mathsf{BFA}^\lambda_{\mathbb{P},\kappa}\Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},X,\kappa}(\alpha)$ for $\kappa\leq\lambda$ follows from the rest of the diagram. \begin{proof}[Proof of $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}_{\mathbb{P},\kappa}^\lambda(\infty)\Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-} \mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha)$ and $\mathsf{BN}_{\mathbb{P},\kappa}^\lambda(\infty) \Rightarrow \mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha)$] \mbox{}\\* The latter are special cases of the former. \end{proof} \begin{proof}[Proof of $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha)\Rightarrow\mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha)$ and $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}_{\mathbb{P},\kappa}^\lambda(\infty)\Rightarrow \mathsf{BN}_{\mathbb{P},\kappa}^\lambda(\infty)$] \mbox{}\\* As before, similar to the proofs in Theorem \ref{correspondence forcing axioms name principles}. \end{proof} \begin{proof}[Proof of $\mathsf{BN}_{\mathbb{P},X,\kappa}^\lambda(\alpha)\Rightarrow \mathsf{BFA}_{\mathbb{P},\kappa}^\lambda$ and $\mathsf{BN}_{\mathbb{P},\kappa}^\lambda(\infty)\Rightarrow \mathsf{BFA}_{\mathbb{P},\kappa}^\lambda$] Letting $\langle D_\gamma \mid \gamma<\kappa\rangle$ be a sequence of predense sets of cardinality at most $\lambda$, we define a name $\sigma$ exactly as in the corresponding proof from Theorem \ref{correspondence forcing axioms name principles}. Since the $D_\gamma$ have cardinality at most $\lambda$, and all the names that appear in $\sigma$ are $1$-bounded check names, $\sigma$ is $\lambda$-bounded. As in the earlier proof, a filter $g$ such that $\sigma^g=A$ will meet all of the $D_\gamma$. \end{proof} \begin{proof}[Proof of $\mathsf{BFA}_{\mathbb{P},\kappa}^\lambda \Rightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}_{\mathbb{P},\kappa}^\lambda(\infty)$] Assume $\lambda \geq \kappa$. We prove the following lemma (very similar to Lemma \ref{collection of dense sets to witness first order statement}). \begin{lemma} Let $\varphi(\vec{\sigma})$ be a $\Sigma_0$ formula where $\vec{\sigma}$ is a tuple of $\kappa$-small $\lambda$-bounded names. Then there is a collection $\mathcal{D}_{\varphi(\vec{\sigma})}$ of at most $\kappa$ many predense sets each of cardinality at most $\lambda$, which has the following property: if $g$ is any filter meeting every set in $\mathcal{D}_{\varphi(\vec{\sigma})}$ and $g$ contains some $p$ such that $p\mathrel{\Vdash}\varphi(\vec{\sigma})$, then in fact $\varphi(\vec{\sigma}^g)$ holds in $V$. \end{lemma} We use the same proof as in Theorem \ref{correspondence forcing axioms name principles}, adjusting the dense sets we work with. Whenever a dense set appears, we will replace it with a predense set of size at most $\lambda$ which fulfills all the same functions.
To obtain these sets, we use a few techniques. First, whenever the original proof calls for an arbitrary condition which forces some desirable property, we replace it with the supremum of all such conditions (exploiting the fact that we are in a complete Boolean algebra). For example, in place of \begin{equation*} E_\gamma = \Big\{p\in \mathbb{P}: (p\mathrel{\Vdash} \sigma \neq \check{A}) \vee (p\mathrel{\Vdash} \sigma_\gamma \not \in \sigma) \vee \Big(\exists B\in A, p\mathrel{\Vdash} \sigma_\gamma=\check{B}\Big)\Big\} \end{equation*} in Claim \ref{sigma=A}, we would take the set \begin{equation*} E_\gamma^* := \{q_0,q_1\}\cup \{q_B: B\in A\} \end{equation*} where \begin{align*} q_0&=\sup \{p\in \mathbb{P}: p\mathrel{\Vdash} \sigma \neq \check{A}\} \\ q_1&=\sup \{p\in \mathbb{P}: p\mathrel{\Vdash} \sigma_\gamma\not \in \sigma\} \end{align*} and for $B\in A$, \begin{equation*} q_B=\sup \{p\in \mathbb{P}: p\mathrel{\Vdash} \sigma_\gamma=\check{B}\}. \end{equation*} $E_\gamma^*$ has cardinality at most $\lambda$, since $\lvert A \rvert \leq \kappa \leq \lambda$. When the original set calls for a condition which strongly forces $\tau\in \sigma$ for some $\tau$ and $\sigma$, simply taking suprema won't work. Instead, we ask for a condition $q$ such that $(\tau,q)\in \sigma$. Since all the names $\sigma$ we deal with in the proof are $\lambda$-bounded, there will be at most $\lambda$ many such conditions. For example, in the same claim, \begin{equation*} D_B:=\Big\{p\in \mathbb{P} : (p\mathrel{\Vdash} \sigma \neq \check{A})\vee\Big(\exists \gamma<\kappa \: (p\mathrel{\Vdash} \sigma_\gamma=\check{B})\wedge (p\sforces \sigma_\gamma \in \sigma)\Big) \Big\} \end{equation*} will be replaced by \begin{equation*} D_B^*:=\{r\} \cup \{r_{\gamma,q}: \gamma<\kappa, q\in \mathbb{P}, (\sigma_\gamma,q)\in \sigma, r_{\gamma,q}\neq 0\} \end{equation*} where \begin{equation*} r=\sup\{p\in\mathbb{P}: p\mathrel{\Vdash} \sigma \neq \check{A}\} \end{equation*} and for $\gamma<\kappa$, $q\in \mathbb{P}$, \begin{equation*} r_{\gamma,q}=\sup \{p\leq q: p\mathrel{\Vdash} \sigma_\gamma=\check{B}\}. \end{equation*} Checking that we can indeed apply these techniques to turn all the dense sets in the proof into predense sets of cardinality at most $\lambda$ is left as an exercise for the particularly thorough reader. \end{proof} This completes the proof of Theorem \ref{correspondence bounded forcing axioms name principles}. \end{proof} \subsection{Generic absoluteness} \label{subsection - generic absoluteness} In this section, we derive generic absoluteness principles from the above correspondence. Fix a cardinal $\kappa$. We start by defining the class of $\Sigma^1_1(\kappa)$-formulas. To this end, work with a two-sorted logic with two types of variables, interpreted as ranging over ordinals below $\kappa$ and over subsets of $\kappa$, respectively. The language contains a binary relation symbol $\in$ and a binary function symbol $p$ for a pairing function $\kappa\times\kappa\rightarrow \kappa$. Thus, atomic formulas are of the form $\alpha=\beta$, $x=y$, $\alpha\in x$ and $p(\alpha,\beta)=\gamma$, where $\alpha,\beta,\gamma$ denote ordinals and $x,y$ denote subsets of $\kappa$. \begin{definition} A $\Sigma^1_1(\kappa)$ formula is of the form $$\exists x_0,\dots,x_m\ \varphi(x_0,\dots,x_m,y_0,\dots,y_n),$$ where the $x_i$ are variables for subsets of $\kappa$, the $y_i$ are variables of either type, and $\varphi$ is a formula which only quantifies over variables for ordinals.
\end{definition} For example, the statement ``$\kappa$ is not a cardinal'' can be expressed by a $\Sigma^1_1(\kappa)$-formula: it asserts the existence of a subset of $\kappa$ which codes, via the pairing function $p$, a surjection from some ordinal $\alpha<\kappa$ onto $\kappa$. As a corollary to the results in Section \ref{Section_correspondence}, we obtain Bagaria's characterisation of bounded forcing axioms \cite[Theorem 5]{bagaria2000bounded} as the equivalence \ref{Bagaria's characterisation 1} $\Leftrightarrow$ \ref{Bagaria's characterisation 4} of the next theorem. It also shows that the principles $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}$ for $\lambda<\kappa$ are all equivalent to $\mathsf{BFA}^\kappa_{\mathbb{P},\kappa}$. \begin{theorem} \label{Bagaria's characterisation} Suppose that $\kappa$ is a cardinal with $\mathrm{cof}(\kappa)>\omega$, $\mathbb{P}$ is a complete Boolean algebra and $\dot{G}$ is a $\mathbb{P}$-name for the generic filter. Then the following conditions are equivalent:\footnote{The equivalence \ref{Bagaria's characterisation 1} $\Leftrightarrow$ \ref{Bagaria's characterisation 4} is equivalent to Bagaria's version, since his definition of $\mathsf{BFA}$ refers to Boolean completions.} \begin{enumerate-(1)} \item \label{Bagaria's characterisation 1} $\mathsf{BFA}^\kappa_{\mathbb{P},\kappa}$ \item \label{Bagaria's characterisation 2} $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^1_{\mathbb{P},\kappa}(1)$ \footnote{The version $\Sigma_0\text{-}\mathsf{BN}^1_{\mathbb{P},\kappa}(1)$ for single $\Sigma_0$-formulas is also equivalent by the proof below.} \item \label{Bagaria's characterisation 3} $\mathrel{\Vdash}_\mathbb{P} V \prec_{\Sigma^1_1(\kappa)} V[\dot{G}]$ \item \label{Bagaria's characterisation 4} $\mathrel{\Vdash}_\mathbb{P} H_{\kappa^+}^V \prec_{\Sigma_1} H_{\kappa^+}^{V[\dot{G}]}$ \end{enumerate-(1)} \end{theorem} \begin{proof} The implication \ref{Bagaria's characterisation 1} $\Rightarrow$ \ref{Bagaria's characterisation 2} holds since $\mathsf{BFA}^\kappa_{\mathbb{P},\kappa} \Leftrightarrow \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\kappa_{\mathbb{P},\kappa}(1)$ by Theorem \ref{correspondence bounded forcing axioms name principles} and $ \Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\kappa_{\mathbb{P},\kappa}(1) $ implies $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^1_{\mathbb{P},\kappa}(1)$. \ref{Bagaria's characterisation 2} $\Rightarrow$ \ref{Bagaria's characterisation 3}: To simplify the notation, we will only work with $\Sigma^1_1(\kappa)$-formulas of the form $\exists x\ \varphi(x,y)$, where $x$ and $y$ range over subsets of $\kappa$. Suppose that $y$ is a subset of $\kappa$ and $p\mathrel{\Vdash} \exists x\ \varphi(x,\check{y})$. Let $\sigma$ be a $1$-bounded rank $1$ $\mathbb{P}$-name with $p \Vdash_\mathbb{P} \varphi(\sigma,\check{y})$. Note that $\check{y}$ is a $1$-bounded rank $1$ name, too. By $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^1_{\mathbb{P},\kappa}(1)$, there exists a filter $g\in V$ on $\mathbb{P}$ such that $V \models \varphi(\sigma^g,y)$. Hence $V\models \exists x\ \varphi(x,y)$. The implication \ref{Bagaria's characterisation 3} $\Rightarrow$ \ref{Bagaria's characterisation 1} works just like in the proof of \cite[Theorem 5]{bagaria2000bounded}. In short, the existence of the required filter is equivalent to a $\Sigma^1_1(\kappa)$-statement. For \ref{Bagaria's characterisation 3} $\Rightarrow$ \ref{Bagaria's characterisation 4}, suppose that $\psi=\exists x\ \varphi(x,y)$ is a $\Sigma_1$-formula with a parameter $y\in H_{\kappa^+}$. Then $$ H_{\kappa^+} \models \psi \Longleftrightarrow H_{\kappa^+} \models ``\exists M \text{ transitive s.t. } y\in M \wedge M\models \psi".
$$ We express the latter by a $\Sigma^1_1(\kappa)$-formula $\theta$ with a parameter $A\subseteq \kappa$ which codes $y$ in the sense that $f(0)=y$ for the transitive collapse $f$ of $(\kappa,p^{-1}[A])$. $\theta$ states the existence of a subset $B$ of $\kappa$ such that $\in_M:=p^{-1}[B]$ has the following properties: \begin{itemize} \item $\in_M$ is wellfounded and extensional \item For all $\alpha<\beta<\kappa$, $2 \cdot \alpha \in_M 2\cdot \beta$ and for all $\alpha,\beta<\kappa$, $2\cdot \alpha+1 \not\in_M 2\cdot \beta$. \item There is some $\hat{\kappa}< \kappa$ with $\{ \alpha<\kappa \mid \alpha \in_M \hat{\kappa}\} = \{ 2\cdot \alpha \mid \alpha<\kappa \}$ \item There exists some $\hat{A}<\kappa$ such that for all $\beta<\kappa$, $\beta \in_M \hat{A} \Leftrightarrow \exists \alpha\in A\ 2\cdot \alpha=\beta$ \item There exists some $\hat{y}<\kappa$ such that in $(\kappa,\in_M)$, $\hat{A}$ codes $\hat{y}$ \item $\psi(\hat{y})$ holds in $(\kappa,\in_M)$ \end{itemize} The transitive collapse $f$ of $(\kappa,\in_M)$ to a transitive set $M$ will satisfy $f(2\cdot \alpha)=\alpha$ for all $\alpha<\kappa$, $f(\hat{\kappa})=\kappa$, $f(\hat{A})=A$, $f(\hat{y})=y$ and $M\models \psi(y)$. All the above conditions apart from wellfoundedness of $\in_M$ are first order over $(\kappa,\in,p,A,\in_M)$. It remains to express wellfoundedness of $\in_M$ in a $\Sigma^1_1(\kappa)$ way.\footnote{$\mathrm{cof}(\kappa)>\omega$ is in fact necessary to ensure that the set of codes on $\kappa$ for elements of $H_{\kappa^+}$ is $\Sigma^1_1(\kappa)$-definable with parameters in $\mathcal{P}(\kappa)$. If $\mathrm{cof}(\kappa)=\omega$ and $\kappa$ is a strong limit, then this set is $\Pi^1_1(\kappa)$-complete and hence not $\Sigma^1_1(\kappa)$ by a result of Dimonte and Motto Ros \cite{dimontemottoros}.} To see that we can do this, suppose that $R$ is a binary relation on $\kappa$. Since $\mathrm{cof}(\kappa)>\omega$, $R$ is wellfounded if and only if for all $\gamma<\kappa$, $R{\upharpoonright}\gamma$ is wellfounded. Since $\gamma<\kappa$, $R{\upharpoonright}\gamma$ is wellfounded if and only if there exists a map $f\colon \gamma\rightarrow \kappa$ such that for all $\alpha,\beta<\gamma$, $(\alpha,\beta)\in R \Rightarrow f(\alpha)<f(\beta)$. The existence of such a map $f$ is a $\Sigma^1_1(\kappa)$ statement. Finally, \ref{Bagaria's characterisation 4} $\Rightarrow$ \ref{Bagaria's characterisation 3} holds since every $\Sigma^1_1(\kappa)$-formula is equivalent to a $\Sigma_1$-formula over $H_{\kappa^+}$ with parameter $\kappa$. \end{proof} \begin{remark} Note that for rank $1$, $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^\lambda_{\mathbb{P},\kappa}(1)$ implies the simultaneous $\lambda$-bounded rank $1$ name principle for all $\Sigma^1_1(\kappa)$-formulas (see Definition \ref{Defn_foN}) by picking $1$-bounded names for witnesses. \end{remark} \begin{remark} The previous results cannot be extended to higher complexity. To see this, recall that a $\Pi^1_1(\kappa)$-formula is the negation of a $\Sigma^1_1(\kappa)$-formula. We claim that there exists a $\Pi^1_1(\omega_1)$-formula $\varphi(x)$ such that the $1$-bounded rank $1$ name principle for $\varphi$ fails for the class of c.c.c.\ forcings. Otherwise $\mathsf{MA}_{\omega_1}$ would hold by \ref{Bagaria's characterisation 2} $\Rightarrow$ \ref{Bagaria's characterisation 1} of Theorem \ref{Bagaria's characterisation}. So in particular, there are no Suslin trees.
Since adding a Cohen real adds a Suslin tree, let $\sigma$ be a $1$-bounded rank $1$ $\mathbb{P}$-name for it, where $\mathbb{P}$ denotes the Boolean completion of Cohen forcing, and apply the name principle to the statement ``$\sigma$ is a Suslin tree''. But then we would have a Suslin tree in $V$. \end{remark} \begin{remark} Fuchs and Minden show in \cite[Theorem 4.21]{fuchs2018subcomplete} assuming $\axiomft{CH}$ that the bounded subcomplete forcing axiom $\mathsf{BSCFA}$ can be characterised by the preservation of $(\omega_1,{\leq}\omega_1)$-Aronszajn trees. The latter can be understood as the $1$-bounded name principle for statements of the form ``$\sigma$ is an $\omega_1$-branch in $T$'', where $T$ is an $(\omega_1,{\leq}\omega_1)$-Aronszajn tree. (See \cite{fuchs2018subcomplete,jensen2014subcomplete} for more about subcomplete forcing.) \end{remark} We now consider forcing axioms at cardinals $\kappa$ of countable cofinality. To our knowledge, these have not been studied before. $\mathsf{BFA}^\kappa_{c.c.c.,\kappa}=\axiomft{MA}_\kappa$ is an example of a consistent forcing axiom of this form. We fix some notation. If $\kappa$ is an uncountable cardinal with $\mathrm{cof}(\kappa)=\mu$, we fix a continuous strictly increasing sequence $\langle \kappa_i \mid i\in\mu \rangle $ of ordinals with $\kappa_0=0$ and $\sup_{i\in\mu} \kappa_i=\kappa$. We assume that each $\kappa_i$ is closed under the pairing function $p$.\footnote{If $\kappa_i$ is multiplicatively closed, i.e.\ $\forall \alpha<\kappa_i\ \alpha\cdot \alpha<\kappa_i$, then this holds for G\"odel's pairing function.} For each $x\in 2^\kappa$, we define a function $f_x\colon \mu\rightarrow 2^{<\kappa}$ by letting $f_x(i)=x{\upharpoonright}\kappa_i$. \begin{lemma} \label{tree projecting to a set} Suppose that $\kappa$ is an uncountable cardinal with $\mathrm{cof}(\kappa)=\mu$. Suppose that $\varphi(x,y)$ is a formula with quantifiers ranging over $\kappa$ and $y\in 2^\kappa$ is fixed. Then there is a subtree $T\in V$ of $((2^{<\kappa})^{<\mu})^2$ such that in all generic extensions $V[G]$ of $V$ \footnote{This includes the case $V[G]=V$.} which do not add new bounded subsets of $\kappa$, $$\varphi(x,y) \Longleftrightarrow \exists g\in (2^{<\kappa})^\mu\ (f_x,g)\in [T]$$ holds for all $x\in (2^\kappa)^{V[G]}$. Moreover, for any branch $(\vec{s},\vec{t})\in [T]$ in $V[G]$ with $\vec{s}=\langle s_i\mid i\in\mu\rangle$, we have $\vec{s}=f_x$, where $x=\bigcup_{i\in\mu} s_i\in (2^\kappa)^{V[G]}$. \end{lemma} \begin{proof} We construct the $i$-th levels $\mathrm{Lev}_i(T)$ by induction on $i\in\mu$. Let $\mathrm{Lev}_0(T)=\{(\emptyset,\emptyset)\}$. If $j\in\mu$ is a limit, let $(\vec{s},\vec{t}) \in \mathrm{Lev}_j(T)$ if $(\vec{s}{\upharpoonright}i,\vec{t}{\upharpoonright}i) \in \mathrm{Lev}_i(T)$ for all $i<j$. For the successor step, suppose that $\mathrm{Lev}_j(T)$ has been constructed. Write $\vec{s}=\langle s_i \mid i\leq j\rangle$ and $\vec{t}=\langle t_i \mid i\leq j\rangle$. Let $(\vec{s},\vec{t})\in \mathrm{Lev}_{j+1}(T)$ if the following conditions hold: \begin{enumerate-(1)} \item $(\vec{s}{\upharpoonright}j,\vec{t}{\upharpoonright}j)\in \mathrm{Lev}_j(T)$. \item $s_j\in 2^{\kappa_j}$ and $\forall i < j\ s_j{\upharpoonright}\kappa_i=s_i$. \item $t_j\in 2^{\kappa_j}$ codes the following two objects. \begin{enumerate-(i)} \item A truth table $p_j$ which assigns to each formula $\psi(\xi_0,\dots,\xi_k)$ and parameters $\alpha_0,\dots,\alpha_k<\kappa_j$ a truth value $0$ or $1$.
\item A function $q_j$ which assigns a value in $\mu$ to each existential formula $\exists \eta\ \psi(\xi_0,\dots,\xi_k,\eta)$ and associated parameters $\alpha_0,\dots,\alpha_k<\kappa_j$. \end{enumerate-(i)} They satisfy $p_i \subseteq p_j$ and $q_i\subseteq q_j$ for all $i<j$ and the following conditions: \begin{enumerate-(a)} \item $p_j(\varphi)=1$. \item $p_j$ satisfies the equality axioms: $$ \big(p_j(\psi(\vec{\xi}),\vec{\alpha}) =1 \wedge \vec{\alpha}=\vec{\beta}\big) \Longrightarrow p_j(\psi(\vec{\xi}),\vec{\beta}) =1$$ \item $p_j$ is correct about atomic formulas $\psi(\vec{\xi})$ which do not mention $\dot{x}$ and $\dot{y}$: $$ p_j(\psi(\vec{\xi}),\vec{\alpha}) =1 \Longleftrightarrow \psi(\vec{\alpha})$$ \item The truth in $p_j$ of all atomic formulas of the form $\xi\in \dot{x}$, $\xi\in \dot{y}$ is fixed according to $s_j$ and $y$, respectively: $$ p_j ((\xi \in \dot{x}),\alpha) =1 \Longleftrightarrow \alpha\in s_j$$ $$ p_j ((\xi \in \dot{y}),\alpha)=1 \Longleftrightarrow \alpha\in y$$ \item $p_j$ respects propositional connectives: $$ p_j(\psi\wedge \theta,\vec{\alpha})=1 \Longleftrightarrow p_j(\psi,\vec{\alpha})=1 \wedge p_j( \theta,\vec{\alpha})=1$$ $$ p_j(\neg \psi ,\vec{\alpha}) =1 \Longleftrightarrow p_j (\psi,\vec{\alpha}) =0 $$ \item $p_j$ respects witnesses of existential formulas $\exists \eta\ \psi(\vec{\xi},\eta)$ with parameters $\vec{\alpha}$ which it has identified: $$\exists \beta <\kappa_j\ p_j(\psi(\vec{\xi},\eta),\vec{\alpha},\beta)=1 \Longrightarrow p_j(\exists \eta\ \psi(\vec{\xi},\eta),\vec{\alpha})=1.$$ \item $q_j$ promises the existence of existential witnesses: for any existential formula $\exists \eta\ \psi(\vec{\xi},\eta)$ and any tuple $\vec{\alpha}$ of parameters, if ${p_j(\exists \eta\ \psi(\vec{\xi},\eta),\vec{\alpha})=1}$ and $q_j(\exists \eta\ \psi(\vec{\xi},\eta),\vec{\alpha})\leq j$, then there exists some $\beta<\kappa_j$ such that $p_j(\psi(\vec{\xi},\eta), \vec{\alpha},\beta)=1$. \end{enumerate-(a)} \end{enumerate-(1)} Let $V[G]$ be a generic extension of $V$ with no new bounded subsets of $\kappa$. Work in $V[G]$. $\Rightarrow$: Suppose that $\varphi(x,y)$ holds. We define $s_j=x{\upharpoonright}\kappa_j$ for each $j\in\mu$ and $p_j(\psi(\vec{\xi}),\vec{\alpha})=1$ if $(\kappa,\in,p,x,y)\models \psi (\vec{\alpha})$. We further define $q_j(\exists \eta\ \psi(\vec{\xi},\eta),\vec{\alpha})=0$ if $p_j(\exists \eta\ \psi(\vec{\xi},\eta),\vec{\alpha})=0$. Otherwise, $q_j(\exists \eta\ \psi(\vec{\xi},\eta),\vec{\alpha})$ is defined as the least $l\in\mu$ such that for some $\beta<\kappa_l$, $(\kappa,\in,p,x,y)\models \psi (\vec{\alpha},\beta)$. Let $t_j$ code $p_j$ and $q_j$ (via the pairing function $p$). Note that $s_j$, $p_j$ and $q_j$ are in $V$, since $V[G]$ has no new bounded subsets of $\kappa$. Hence $\langle (s_j,t_j)\mid j\in\mu\rangle$ is a branch through $T$. $\Leftarrow$: Suppose that $\langle (s_j,t_j)\mid j\in\mu\rangle$ is a branch through $T$. Let $x=\bigcup_{j\in\mu} s_j$. By induction on the complexity of formulas, the $p_j$ and $q_j$ are correct about $x$ and $y$. Therefore, since $p_j(\varphi)=1$, we get $(\kappa,\in,p,x,y)\models \varphi (x,y)$. \end{proof} \begin{theorem} \label{variant of Bagaria's characterisation for countable cofinality} Suppose that $\kappa$ is an uncountable cardinal with $\mathrm{cof}(\kappa)=\omega$, $\mathbb{P}$ is a complete Boolean algebra and $\dot{G}$ is a $\mathbb{P}$-name for the generic filter.
Then the following conditions are equivalent: \begin{enumerate-(1)} \item \label{Bagaria's characterisation 1b} $\mathsf{BFA}^\kappa_{\mathbb{P},\kappa}$ \item \label{Bagaria's characterisation 2b} $\Sigma_0^{\mathrm{(sim)}}\text{-}\mathsf{BN}^1_{\mathbb{P},\kappa}$ \item \label{Bagaria's characterisation 3b} $\Vdash_\mathbb{P} V \prec_{\Sigma^1_1(\kappa)}V[\dot{G}]$ \end{enumerate-(1)} If moreover $2^{<\kappa}=\kappa$ holds,\footnote{The assumption $2^{<\kappa}=\kappa$ is not needed for \ref{Bagaria's characterisation 4b} $\Rightarrow$ \ref{Bagaria's characterisation 3b}.} then the next condition is equivalent to \ref{Bagaria's characterisation 1b}, \ref{Bagaria's characterisation 2b} and \ref{Bagaria's characterisation 3b}: \begin{enumerate-(1)} \setcounter{enumi}{3} \item \label{Bagaria's characterisation 4b} $1_\mathbb{P}$ forces that no new bounded subsets of $\kappa$ are added. \end{enumerate-(1)} If there exists no inner model with a Woodin cardinal,\footnote{The assumption that there is no inner model with a Woodin cardinal is not used for \ref{Bagaria's characterisation 5b} $\Rightarrow$ \ref{Bagaria's characterisation 3b}.} then the next condition is equivalent to \ref{Bagaria's characterisation 1b}, \ref{Bagaria's characterisation 2b} and \ref{Bagaria's characterisation 3b}: \begin{enumerate-(1)} \setcounter{enumi}{4} \item \label{Bagaria's characterisation 5b} $\Vdash_\mathbb{P} H_{\kappa^+}^V \prec_{\Sigma_1} H_{\kappa^+}^{V[\dot{G}]}$ \end{enumerate-(1)} \end{theorem} \begin{proof} The proofs of \ref{Bagaria's characterisation 1b} $\Leftrightarrow$ \ref{Bagaria's characterisation 2b} $\Leftrightarrow$ \ref{Bagaria's characterisation 3b} and of \ref{Bagaria's characterisation 5b} $\Rightarrow$ \ref{Bagaria's characterisation 3b} work exactly as in Theorem \ref{Bagaria's characterisation}, for all uncountable cardinals $\kappa$. \ref{Bagaria's characterisation 3b} $\Rightarrow$ \ref{Bagaria's characterisation 4b}: We assume $2^{<\kappa}=\kappa$. Towards a contradiction, suppose that $V[G]$ is a generic extension that adds a new subset of $\gamma<\kappa$. Note that $2^\gamma\leq \kappa$. Let $\vec{y}=\langle y_i \mid i<2^\gamma\rangle$ list all subsets of $\gamma$ in $V$. We define $x \subseteq \gamma\cdot 2^\gamma \subseteq \kappa$ by letting $\gamma\cdot i + j \in x \Leftrightarrow j\in y_i$. The next formula expresses ``there is a new subset of $\gamma<\kappa$'' as a $\Sigma^1_1(\kappa)$-statement in the parameter $x$ and parameters coding the $+$ and $\cdot$ operations: $$\exists z\ [z\subseteq \gamma \wedge \neg\exists i\ \forall j<\gamma\ ( j\in z \Leftrightarrow \gamma\cdot i + j \in x)].$$ This contradicts $\Sigma^1_1(\kappa)$-absoluteness. \ref{Bagaria's characterisation 4b} $\Rightarrow$ \ref{Bagaria's characterisation 3b}: Suppose that $\exists x\ \psi(x,y)$ is a $\Sigma^1_1(\kappa)$-formula and $y\in (2^\kappa)^V$. Let $T$ be a subtree of $((2^{<\kappa})^{<\omega})^2$ as in Lemma \ref{tree projecting to a set}. Let $G$ be $\mathbb{P}$-generic over $V$ with $V[G]\vDash \exists x\ \psi(x,y)$. $V[G]$ does not have new bounded subsets of $\kappa$ by assumption. Then $T$ has a branch in $V[G]$ by the property of $T$ in Lemma \ref{tree projecting to a set}. Since wellfoundedness is absolute, $T$ has a branch $\langle s_n, t_n\mid n\in\omega \rangle$ in $V$. Then $\vec{s}=f_x$, where $x=\bigcup_{n\in\omega} s_n\in (2^\kappa)^V$, by the properties of $T$. Since $$\psi(x,y) \Longleftrightarrow \exists g\ (f_x,g)\in [T],$$ we have $V\models \psi(x,y)$.
\ref{Bagaria's characterisation 3b} $\Rightarrow$ \ref{Bagaria's characterisation 5b}: Note that the implication holds vacuously if $\kappa$ is collapsed in some $\mathbb{P}$-generic extension of $V$. In this case, both \ref{Bagaria's characterisation 3b} and \ref{Bagaria's characterisation 5b} fail, since the statement ``$\kappa$ is not a cardinal'' is $\Sigma^1_1(\kappa)$. We next show: if $q\in \mathbb{P}$ forces that $\kappa^+$ is preserved, then $q \Vdash H_{\kappa^+}^V \prec_{\Sigma_1} H_{\kappa^+}^{V[\dot{G}]}$ holds. To see this, let $G$ be $\mathbb{P}$-generic over $V$ with $q\in G$. Suppose $\psi=\exists x\ \varphi(x,y)$ is a $\Sigma_1$-formula with a parameter $y\in H_{\kappa^+}$. We follow the proof of \ref{Bagaria's characterisation 3} $\Rightarrow$ \ref{Bagaria's characterisation 4} in Theorem \ref{Bagaria's characterisation} to construct a $\Sigma^1_1(\kappa)$-formula $\theta$ that is equivalent to $\psi$. However, we replace the first condition by: \begin{itemize} \item $\in_M$ is extensional and wellfounded of rank $\gamma$ \end{itemize} for a fixed $\gamma<(\kappa^+)^V=(\kappa^+)^{V[G]}$. If $\psi$ is true, then for sufficiently large $\gamma$, $\theta$ will be true. Now we only need to modify the last step of the above proof. Let $C$ be a subset of $\kappa$ such that $(\kappa,p^{-1}[C])\cong (\gamma,<)$. Suppose $R$ is a binary relation on $\kappa$. The condition ``$R$ is wellfounded of rank ${\leq}\gamma$'' is $\Sigma^1_1(\kappa)$ in $C$, since it is equivalent to the existence of a function $f\colon \kappa\rightarrow \gamma$ such that for all $\alpha,\beta<\kappa$, $(\alpha,\beta)\in R \Rightarrow f(\alpha)<f(\beta)$. Towards a contradiction, suppose that there is no inner model with a Woodin cardinal and in some $\mathbb{P}$-generic extension $V[G]$ of $V$, $H_{\kappa^+}^V \prec_{\Sigma_1} H_{\kappa^+}^{V[G]}$ fails. By the previous remarks, $\kappa$ is preserved and $\kappa^+$ is collapsed in $V[G]$. Since there is no inner model with a Woodin cardinal, the Jensen--Steel core model $K$ from \cite{jensen2013k} is generically absolute and satisfies $(\lambda^+)^K=\lambda^+$ for all singular cardinals $\lambda$ by \cite[Theorem 1.1]{jensen2013k}. Therefore any generic extension $V[G]$ of $V$ which does not collapse $\lambda$ satisfies $(\lambda^+)^V=(\lambda^+)^{V[G]}$. For $\lambda=\kappa$, this contradicts our assumption. \end{proof} Can one remove the assumption that there is no inner model with a Woodin cardinal? A forcing $\mathbb{P}$ that witnesses the failure of \ref{Bagaria's characterisation 3b} $\Rightarrow$ \ref{Bagaria's characterisation 5b} must preserve $\kappa$ and collapse $\kappa^+$ by the above proof. The existence of a forcing $\mathbb{P}$ with these two properties is consistent relative to the existence of a $\lambda^+$-supercompact cardinal $\lambda$ by a result of Adolf, Apter and Koepke \cite[Theorem 7]{adolf2018singularizing}. Their forcing does not add new bounded subsets of $\kappa$ as in \ref{Bagaria's characterisation 4b} and thus also satisfies \ref{Bagaria's characterisation 1b}--\ref{Bagaria's characterisation 3b}. However, we do not know if it satisfies \ref{Bagaria's characterisation 5b}. \begin{question} Is it consistent that there exist an uncountable cardinal $\kappa$ with $\mathrm{cof}(\kappa)=\omega$ and a forcing $\mathbb{P}$ with the following properties: \begin{enumerate-(a)} \item $\mathbb{P}$ does not add new bounded subsets of $\kappa$ and \item $\Vdash_\mathbb{P} H_{\kappa^+}^V \prec_{\Sigma_1} H_{\kappa^+}^{V[\dot{G}]}$ fails?
\end{enumerate-(a)} (Thus $\mathbb{P}$ necessarily collapses $\kappa^+$.) \end{question} \subsection{Boolean ultrapowers} In this section, we translate the above correspondence to Boolean ultrapowers and use this to characterise forcing axioms via elementary embeddings. The Boolean ultrapower construction generalises ultrapowers with respect to ultrafilters on the power set of a set to ultrafilters on arbitrary Boolean algebras. We recall the basic definitions from the work of Hamkins and Seabold on Boolean ultrapowers \cite[Section 3]{hamkins2012well}. Suppose that $\mathbb{P}$ is a forcing and $\mathbb B$ its Boolean completion. Fix an ultrafilter $U$ on $\mathbb B$, which may or may not be in the ground model. We define two relations $=_U$ and $\in_U$ on $V^\mathbb B$: $$\sigma =_U \tau :\Leftrightarrow \llbracket \sigma=\tau \rrbracket \in U$$ $$\sigma \in_U \tau :\Leftrightarrow \llbracket \sigma\in\tau \rrbracket \in U$$ Let $[\sigma]_U$ denote the equivalence class of $\sigma\in V^\mathbb B$ with respect to $=_U$. Let $V^{\mathbb B}/U=\{ [\sigma]_U \mid \sigma\in V^\mathbb B \}$ denote the quotient with respect to $=_U$. $\in_U$ is well-defined on equivalence classes and $(V^\mathbb B/U, \in_U)$ is a model of $\axiomft{ZFC}$ \cite[Theorem 3]{hamkins2012well}. It is easy to see from these definitions that for any $\mathbb{P}$-generic filter $G$ over $V$, $V^\mathbb B/G$ is isomorphic to the generic extension $V[G]$. Moreover, we can determine the truth of sentences in $V^\mathbb B/U$ via \L o\'s' theorem \cite[Theorem 10]{hamkins2012well}: $V^\mathbb B/U \models \varphi ([\sigma_0]_U,\dots,[\sigma_n]_U) \Longleftrightarrow \llbracket \varphi(\sigma_0,\dots,\sigma_n) \rrbracket \in U$. In other words, the forcing theorem holds. The \emph{Boolean ultrapower} is the subclass $$ \check{V}_U= \{ [\sigma]_U \mid \llbracket\sigma\in \check{V}\rrbracket\in U \} $$ of $V^{\mathbb B}/U$. It is isomorphic to $V$ if and only if $U$ is generic over $V$. The \emph{Boolean ultrapower embedding} is the elementary embedding $$j_U\colon V \rightarrow \check{V}_U, \text{ \ } j_U(x) = [\check{x}]_U.$$ We are interested in the case that $U$ is an ultrafilter in the ground model. In particular, $U$ is then not $\mathbb{P}$-generic over $V$. $j_U$ has the following properties: \begin{itemize} \item If $U$ is generic, then $j_U$ is an isomorphism. \item If $U$ is not generic, then $\check{V}_U$ is ill-founded and $\crit(j_U)$ equals the least size of a maximal antichain in $\mathbb B$ not met by $U$ \cite[Theorem 17]{hamkins2012well}. For example, if $\mathbb{P}$ is c.c.c. then $\crit(j_U)=\omega$. \end{itemize} For any $x\in V^\mathbb B/U$, let $x^{\in_U}=\{y \in V^\mathbb B/U\mid y \in_U x\}$ denote the set of all $\in_U$-elements of $x$. If $\kappa$ is a cardinal and $\sigma$ is a name for a subset of $\kappa$, then $[\sigma]_U^{\in_U}\cap j_U[\kappa]=j_U[\sigma^{(U)}]$, since $V^\mathbb B/U \models j_U(\alpha)= [\check{\alpha}]_U \in [\sigma]_U \Leftrightarrow \llbracket \check{\alpha}\in \sigma\rrbracket \in U \Leftrightarrow \alpha\in \sigma^{(U)}$ for all $\alpha<\kappa$.
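As a minimal illustration (a routine instance of the above definitions, included for orientation), fix $p\in\mathbb B$ with $0<p<1$ and let $\sigma$ be a mixed name with $\llbracket \sigma=\check{0}\rrbracket=p$ and $\llbracket \sigma=\check{1}\rrbracket=\neg p$. Then $$[\sigma]_U=\begin{cases} j_U(0) & \text{if } p\in U,\\ j_U(1) & \text{if } \neg p\in U,\end{cases}$$ since, for instance, $p\in U$ means precisely that $\sigma=_U\check{0}$. In particular, the value of $[\sigma]_U$ is determined by $U$ alone, with no reference to a generic filter.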
\begin{theorem} \label{characterisation of forcing axioms by Boolean ultrapowers} The following statements are equivalent: \begin{enumerate-(1)} \item \label{characterisation of forcing axioms by Boolean ultrapowers 1} $\mathsf{FA}_{\mathbb{P},\kappa}$ \item \label{characterisation of forcing axioms by Boolean ultrapowers 2} For any transitive set $M\in H_{\kappa^+}$ and for every $\kappa$-small $M$-name $\sigma$, there is an ultrafilter $U\in V$ on $\mathbb{P}$ such that $$ j_U{\upharpoonright} M \colon M \rightarrow j_U(M)^{\in_U}$$ is an elementary embedding from $(M,\in,\sigma^U)$ to $(j_U(M)^{\in_U},\in_U,[\sigma]_U)$. \item \label{characterisation of forcing axioms by Boolean ultrapowers 3} For any transitive set $M\in H_{\kappa^+}$ and for any $\kappa$-small $M$-name $\sigma$, there is an ultrafilter $U\in V$ on $\mathbb{P}$ such that $$(M,\in,\sigma^U)\equiv (j_U(M)^{\in_U},\in_U,[\sigma]_U),$$ i.e. these structures are elementarily equivalent. \end{enumerate-(1)} \end{theorem} \begin{proof} \ref{characterisation of forcing axioms by Boolean ultrapowers 1} $\Rightarrow$ \ref{characterisation of forcing axioms by Boolean ultrapowers 2}: Recall from Lemma \ref{collection of dense sets to witness first order statement} that for any finite sequence $\vec{\sigma}= \sigma_0,\dots,\sigma_k$ of $\kappa$-small names and every $\Sigma_0$-formula $\varphi(x_0,\dots,x_k)$, there is a collection $\mathcal{D}_{\varphi(\vec{\sigma})}$ of ${\leq}\kappa$ many dense subsets of $\mathbb{P}$ with the following property: if $g$ is any filter meeting every set in $\mathcal{D}_{\varphi(\vec{\sigma})}$ and $g$ contains some $p$ such that $p\mathrel{\Vdash}\varphi(\vec{\sigma})$, then in fact $\varphi(\sigma_0^g,\dots,\sigma_k^g)$ holds in $V$. Let $\mathcal{D}$ be the union of all collections $\mathcal{D}_{\varphi(\vec{\sigma})}$, where $k\in \omega$, $\varphi(x_0,\dots,x_k)$ is a $\Sigma_0$-formula and each $\sigma_i$ is $\sigma$, $\check{M}$ or $\check{x}$ for some $x\in M$. By $\mathsf{FA}_{\mathbb{P},\kappa}$, there is a filter $g$ which meets all sets in $\mathcal{D}$. We extend $g$ to an ultrafilter $U$. Suppose that $\psi(x_0,\dots,x_k)$ is a formula such that $(j_U(M)^{\in_U},\in_U,[\sigma]_U) \models \psi(j_U(y_0),\dots,j_U(y_k))$. We obtain $\varphi(x_0,\dots,x_{k+2})$ by replacing the unbounded quantifiers in $\psi$ by quantifiers bounded by $x_{k+1}$, and any occurrence of $[\sigma]_U$ by $x_{k+2}$. Then $$(V^\mathbb B/U,\in_U) \models \varphi(j_U(y_0),\dots,j_U(y_k),j_U(M),[\sigma]_U).$$ Recall that $j_U(y)=[\check{y}]_U$ for all $y\in M$. Therefore by \L o\'s' theorem, we have $\llbracket \varphi(\check{y}_0,\dots,\check{y}_k,\check{M},\sigma) \rrbracket\in U$. So there is some $p\in U$ with $p\mathrel{\Vdash} \varphi(\check{y}_0,\dots,\check{y}_k,\check{M},\sigma)$. Since $U$ meets all dense sets in $\mathcal{D}_{\varphi(\check{y}_0,\dots,\check{y}_k,\check{M},\sigma)}$, $$(V,\in)\models \varphi(y_0,\dots,y_k,M,\sigma^U).$$ Hence $(M,\in,\sigma^U)\models \psi(y_0,\dots,y_k)$. \ref{characterisation of forcing axioms by Boolean ultrapowers 2} $\Rightarrow$ \ref{characterisation of forcing axioms by Boolean ultrapowers 3}: This is clear. \ref{characterisation of forcing axioms by Boolean ultrapowers 3} $\Rightarrow$ \ref{characterisation of forcing axioms by Boolean ultrapowers 1}: Let $M=\kappa$ and suppose that $\sigma$ is a rank $1$ $M$-name such that $\mathbb{P}\Vdash \sigma=\check{\kappa}$. Then $\sigma^{(g)}=\kappa$ for any filter $g$ on $\mathbb{P}$.
It suffices to find a filter $g$ with $\sigma^g=\kappa$ by Lemma \ref{Lemma FA bracket interpretation}. Let $U$ be an ultrafilter as in \ref{characterisation of forcing axioms by Boolean ultrapowers 3}. Since $M=\kappa$ and $j_U(M)=j_U(\kappa)=[\check{\kappa}]_U=[\sigma]_U$, we have $(j_U(M)^{\in_U},\in_U,[\sigma]_U)\models \forall x\ x\in [\sigma]_U$. Thus $(\kappa,\in,\sigma^{U})\models \forall x\ x\in \sigma^{U}$ by elementary equivalence. Hence $\sigma^{U}=\kappa$. \end{proof} A version of Theorem \ref{characterisation of forcing axioms by Boolean ultrapowers} for $\mathsf{BFA}^\lambda_{\mathbb{P},\kappa}$ and $\lambda$-bounded names also holds for any cardinal $\lambda\geq\kappa$. The proof is essentially the same. \subsection{An application to $\mathsf{ub}\text{-}\mathsf{FA}$} \begin{lemma} \label{ubFA implies BFA} If $\mathbb{P}$ is a complete Boolean algebra that does not add reals, then $$(\forall q\in \mathbb{P}\ \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_q,\omega_1}) \Longrightarrow \mathsf{BFA}_{\mathbb{P},\omega_1}^{\omega_1}.$$ More generally, if $\kappa$ is an uncountable cardinal and $\mathbb{P}$ is a complete Boolean algebra that does not add bounded subsets of $\kappa$, then $$(\forall q\in \mathbb{P}\ \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_q,\kappa}) \Longrightarrow \mathsf{BFA}_{\mathbb{P},\kappa}^\kappa.$$ Here $\mathbb{P}_q$ denotes the restricted forcing $\{p\in\mathbb{P}\mid p\leq q\}$. \end{lemma} \begin{proof} If $\mathrm{cof}(\kappa)=\omega$, then adding no new bounded subsets of $\kappa$ already implies $\mathsf{BFA}^\kappa_{\mathbb{P},\kappa}$ by the proof of \ref{Bagaria's characterisation 4b} $\Rightarrow$ \ref{Bagaria's characterisation 3b} in Theorem \ref{variant of Bagaria's characterisation for countable cofinality}. Now suppose that $\mathrm{cof}(\kappa)>\omega$. Towards a contradiction, suppose that $\mathsf{BFA}^\kappa_{\mathbb{P},\kappa}$ fails. Then $\Sigma^1_1(\kappa)$-absoluteness fails for some $\Sigma^1_1(\kappa)$-formula $\exists x\ \psi(x,y)$ and some $y\in (2^\kappa)^V$ by Theorem \ref{Bagaria's characterisation}. Take a subtree $T$ of $(2^{<\kappa}\times \kappa^{<\kappa})^{<\mathrm{cof}(\kappa)}$ for $\psi$ as in Lemma \ref{tree projecting to a set}. Then $[T]\neq \emptyset$ in some $\mathbb{P}$-generic extension $V[G]$, but $[T]=\emptyset$ in $V$. Let $\sigma$ denote a rank $1$ $T$-name and let $q\in \mathbb{P}$ be such that $q \Vdash_\mathbb{P} \sigma \in [T]$. Let $$ \tau = \{ (\alpha,p) \mid p\leq q\wedge \exists s\in \mathrm{Lev}_\alpha(T)\ p \sforces_\mathbb{P} \check{s}\in \sigma \} $$ Then $\Vdash_{\mathbb{P}_q} \tau=\check{\kappa}$. For any filter $g\in V$ on $\mathbb{P}_q$ we have $\tau^g = \dom(\sigma^g)$. But $\dom(\sigma^g)\in \kappa$, since $[T]=\emptyset$. Therefore $\mathsf{ub}\text{-}\mathsf{N}_{\mathbb{P}_q,\kappa}$ fails and hence $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_q,\kappa}$ fails by Lemma \ref{Lemma ubFA to ubN}. \end{proof} We will see in Lemma \ref{ubFA implies FA for sigma-distributive forcings} below that for any ${<}\kappa$-distributive forcing $\mathbb{P}$, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ implies $\mathsf{FA}_{\mathbb{P},\kappa}$. In combination with the previous lemma, this raises the following question: \begin{question} \label{question ubFA BFA} If $\lambda>\kappa$ is a cardinal and $\mathbb{P}$ is a complete Boolean algebra that does not add new elements of ${}^{<\kappa}\lambda$, then does the implication $$(\forall q\in \mathbb{P}\ \mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_q,\omega_1}) \Longrightarrow \mathsf{BFA}^\lambda_{\mathbb{P},\omega_1}$$ hold?
\end{question} \section{Specific classes of forcings} \label{Section specific classes of forcings} \subsection{Classes of forcings} We now move on to look, over the next few sections, at what further results we can prove if we assume that $\mathbb{P}$ belongs to some specific class of forcings. We shall mostly return to the rank $1$ cases for this and discuss the $\mathsf{club}$, $\mathsf{stat}$, $\mathsf{ub}$ and $\omega\text{-}\mathsf{ub}$ axioms in Figure \ref{diagram of implications}. \subsubsection{$\sigma$-distributive forcings} We begin with a relatively simple case, where $\mathbb{P}$ is ${<}\kappa$-distributive. In this case, several of our axioms turn out to be equivalent to one another. The implications for the class of ${<}\kappa$-distributive forcings are summarised in the next diagram. \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{N}_\kappa$}} \ar@{<->}[r] \ar@{<->}[d] & {\txt{$\mathsf{club}\text{-}\mathsf{N}_\kappa$}} \ar@{<->}[d] \ar@{<-}[r]^{\ref{Remark_strength of statNsigma-closed}} & \txt{$\mathsf{stat}\text{-}\mathsf{N}_\kappa$} \ar@{->}[r] \ar@{->}[d] & \txt{$\mathsf{ub}\text{-}\mathsf{N}_\kappa$} \ar@{<->}[d]& \\ {\txt{$\mathsf{FA}_\kappa$}} \ar@{<->}[r]& \txt{$\mathsf{club}\text{-}\mathsf{FA}_\kappa$} \ar@{<->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{FA}_\kappa$} \ar@{<->}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{FA}_\kappa$}\ar@/^0.8cm/@{<->}[lll]_{\ref{distributive stat}} \\ }\] \caption{Forcing axioms and name principles for any ${<}\kappa$-distributive forcing for regular $\kappa$. Lemma \ref{Remark_strength of statNsigma-closed} shows that $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\omega_1}$ is strictly stronger than the remaining principles for some $\sigma$-closed forcing $\mathbb{P}$. } \label{diagram of implications for distributive forcings} \end{figure} \begin{lemma} \label{ubFA implies FA for sigma-distributive forcings} For any ${<}\kappa$-distributive forcing $\mathbb{P}$, $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa} \implies\mathsf{FA}_{\mathbb{P},\kappa}$. \end{lemma} \begin{proof} Given a sequence $\vec{D}=\langle D_i\mid i<\kappa\rangle$ of open dense subsets of $\mathbb{P}$, let $E_j=\bigcap_{i\leq j}D_i$ for $j<\kappa$. Each $E_j$ is open dense, since $\mathbb{P}$ is ${<}\kappa$-distributive. If a filter $g$ satisfies $g\cap E_j\neq\emptyset$ for unboundedly many $j<\kappa$, then $g\cap D_i\neq \emptyset$ for all $i<\kappa$. Hence any filter witnessing $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\kappa}$ for $\langle E_j\mid j<\kappa\rangle$ witnesses $\mathsf{FA}_{\mathbb{P},\kappa}$ for $\vec{D}$. \end{proof} \begin{lemma} \label{distributive stat} Let $\mathbb{P}$ be ${<}\kappa$-distributive. Then $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\kappa}\implies\mathsf{FA}^+_{\mathbb{P},\kappa}$. \end{lemma} \begin{proof} Suppose that $\vec{D}=\langle D_i\mid i<\kappa\rangle$ is a sequence of open dense subsets of $\mathbb{P}$ and $\sigma=\{(\check{\alpha},p)\mid p\in S_\alpha\}$ is a name with $1\mathrel{\Vdash}_\mathbb{P} ``\sigma$ is stationary''. For each $\alpha<\kappa$, let $E_\alpha=\bigcap_{i\leq \alpha}D_i$; as before, each $E_\alpha$ is open dense by ${<}\kappa$-distributivity. For $\alpha<\kappa$ and $p\in\mathbb{P}$, let $E_{\alpha,p}$ denote a subset of $\{q\in E_\alpha\mid q\leq p\}$ that is dense below $p$. Let $$\tau=\{(\check{\alpha},q)\mid \alpha<\kappa,\ \exists p\in S_\alpha\ q\in E_{\alpha,p}\}.$$ $1\Vdash_\mathbb{P} ``\tau$ is stationary'', since $1\Vdash_\mathbb{P} \sigma=\tau$. By $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\kappa}$, there is a filter $g$ such that $\tau^g$ is stationary. By the definition of $\tau$, $\tau^g\subseteq \sigma^g$. Thus $\sigma^g$ is stationary. We further have $g\cap E_\alpha\neq\emptyset$ for every $\alpha\in\tau^g$, hence for stationarily many and in particular unboundedly many $\alpha<\kappa$, and therefore $g\cap D_i\neq\emptyset$ for all $i<\kappa$. \end{proof} An analogous argument can be made with names for unbounded sets, or for sets containing a club.
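We record, for convenience, the standard characterisation of distributivity that the proofs above rely on (this recall introduces no new assumptions): a forcing $\mathbb{P}$ is ${<}\kappa$-distributive if and only if $$\bigcap_{i<\gamma} D_i \text{ is open dense for every } \gamma<\kappa \text{ and every sequence } \langle D_i\mid i<\gamma\rangle \text{ of open dense subsets of } \mathbb{P}.$$ In particular, such a forcing adds no new functions $f\colon\gamma\rightarrow V$ for $\gamma<\kappa$.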
\subsubsection{$\sigma$-closed forcings} Note that $\mathsf{FA}_{\mathbb{P},\omega_1}$ fails for some $\sigma$-distributive forcings, for instance for Suslin trees. But $\mathsf{FA}_{\sigma\text{-}\mathrm{closed},\omega_1}$ is provable: if $\langle D_\alpha \mid \alpha<\omega_1 \rangle$ is a sequence of dense subsets of a $\sigma$-closed $\mathbb{P}$, let $\langle p_\alpha\mid \alpha<\omega_1\rangle$ be a decreasing sequence of conditions in $\mathbb{P}$ with $p_\alpha\in D_\alpha$ and let $g=\{q \in \mathbb{P} \mid \exists \alpha<\omega_1\ p_\alpha\leq q\}$. Therefore, the other principles in Figure \ref{diagram of implications for distributive forcings} are provable, with the exception of $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\omega_1}$ by the next lemma. The lemma follows from known results. \begin{lemma} \label{Remark_strength of statNsigma-closed} It is consistent that there is a $\sigma$-closed forcing $\mathbb{P}$ such that $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P}}$ fails. \end{lemma} \begin{proof} It suffices to argue that $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P}}$ has large cardinal strength for some $\sigma$-closed forcing $\mathbb{P}$. Note that $\mathsf{stat}\text{-}\mathsf{N}_\mathbb{P}$ implies $\mathsf{FA}^+_\mathbb{P}$ for any $\sigma$-closed forcing $\mathbb{P}$ by Lemma \ref{distributive stat}. There is a cardinal $\mu\geq\omega_2$ such that $\mathsf{FA}^+_{\mathrm{Col}(\omega_1,\mu)}$ implies the failure of $\square(\kappa)$ for all regular $\kappa\geq\omega_2$ by \cite[Page 20 \& Proposition 14]{foreman1988martin} and \cite[Theorem 2.1]{sakai2015stationary}.\footnote{A more direct argument using \cite[Page 20]{foreman1988martin} and \cite[Theorem 3.8]{velickovic1992forcing} should be possible, but the required results are not explicitly mentioned there.} The proofs show that a single collapse suffices for the conclusion. The failure of $\square(\kappa^+)$, and thus of Jensen's $\square_\kappa$, at a singular strong limit cardinal $\kappa$ implies the existence of an inner model with a proper class of Woodin cardinals by \cite[Theorem 0.2]{trang2016pfa} and \cite[Corollary 0.7]{sargsyan2012strength}. \end{proof} Presaturation of the nonstationary ideal on $\omega_1$ is another interesting consequence of $\mathsf{stat}\text{-}\mathsf{N}_{\sigma\text{-}\mathrm{closed},\omega_1}$ (equivalently, of $\mathsf{FA}^+_{\sigma\text{-closed},\omega_1}$) \cite[Theorem 25]{foreman1988martin}. Even for very well-behaved $\sigma$-closed forcings $\mathbb{P}$, $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\omega_1}$ is an interesting axiom. For instance, Sakai showed in \cite[Section 3]{sakaiMA} that $\mathsf{FA}^+_{\Add(\omega_1),\omega_1}$, and thus $\mathsf{stat}\text{-}\mathsf{N}_{\Add(\omega_1),\omega_1}$, is not provable in $\axiomft{ZFC}$. We have not studied the weakest stationary name principle for $\sigma$-closed forcings: \begin{question} Is $\mathsf{stat}\text{-}\mathsf{BN}^1_{\sigma\text{-}\mathrm{closed}}$ provable in $\axiomft{ZFC}$? \end{question} \subsubsection{c.c.c. forcings} \label{section ccc forcings} The class of c.c.c. forcings is rather more interesting. It has also historically been a class where forcing axioms have been frequently used; for example, $\mathsf{FA}_{\text{c.c.c.},\omega_1}$ is the well-known Martin's Axiom $\axiomft{MA}_{\omega_1}$. Note that for any c.c.c. forcing $\mathbb{P}$, $\mathsf{FA}_{\mathbb{P},\kappa}$ is equivalent to $\mathsf{BFA}^\omega_{\mathbb{P},\kappa}$, since every maximal antichain of a c.c.c. forcing is countable.
\begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{N}_{\omega_1}$}} \ar@{<->}[r] \ar@{<->}[d] & {\txt{$\mathsf{club}\text{-}\mathsf{N}_{\omega_1}$}} \ar@{<->}[d] \ar@{<->}[r]^{\ref{Baumgartner's lemma}} & \txt{$\mathsf{stat}\text{-}\mathsf{N}_{\omega_1}$} \ar@{<->}[r] \ar@{<->}[d] & \txt{$\mathsf{ub}\text{-}\mathsf{N}_{\omega_1}$} \ar@{<->}[d]& \\ {\txt{$\mathsf{FA}_{\omega_1}$}} \ar@{<->}[r]& \txt{$\mathsf{club}\text{-}\mathsf{FA}_{\omega_1}$} \ar@{<->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{FA}_{\omega_1}$} \ar@{<->}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$}\ar@/^0.8cm/@{<->}[lll]_{\ref{characterisation of precaliber}} \\ }\] \caption{Forcing axioms and name principles at $\omega_1$ for the class of all c.c.c. forcings. } \label{diagram of implications for ccc forcings} \end{figure} All principles in Figure \ref{diagram of implications} for $\kappa=\omega_1$ turn out to be equivalent to $\mathsf{FA}_{\omega_1}$. The implications are valid for the class of all c.c.c. forcings, but not for every individual c.c.c. forcing. For instance, for the class of $\sigma$-centred forcings, the right side of Figure \ref{diagram of implications} is provable in $\axiomft{ZFC}$ by Lemma \ref{Lemma stat-NP for sigma-centred}, but the left side is not. We first derive the implication $\mathsf{ub}\text{-}\mathsf{FA}_{c.c.c.,\omega_1} \Longrightarrow \mathsf{FA}_{c.c.c.,\omega_1}$ from well-known results. Note that this implication does not hold for individual c.c.c. forcings; for instance, it fails for Cohen forcing by Lemma \ref{Lemma stat-NP for sigma-centred} and Remark \ref{Remark_FACohen_meagre}.
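In particular, Lemma \ref{Lemma stat-NP for sigma-centred} applies to Cohen forcing for a simple reason, which we record here for convenience: every countable forcing $\mathbb{P}=\{p_n\mid n\in\omega\}$ is trivially $\sigma$-centred, since $$\mathbb{P}=\bigcup_{n\in\omega}\{p_n\}$$ is a union of countably many singletons, and singletons are centred.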
We need the following definition: \begin{definition} \label{definition centred} Suppose that $\mathbb{P}$ is a forcing. \begin{enumerate-(1)} \item A subset $A$ of $\mathbb{P}$ is \emph{centred} if every finite subset of $A$ has a lower bound in $\mathbb{P}$. $A$ is \emph{$\sigma$-centred} if it is a union of countably many centred sets. \item $\mathbb{P}$ is \emph{precaliber $\kappa$} if, whenever $A\in [\mathbb{P}]^\kappa$, there is some $B\in [A]^\kappa$ that is centred. \end{enumerate-(1)} \end{definition} The hard implications in the next lemma are due to Todor\v{c}evi\'c and Veli\v{c}kovi{\'c} \cite{todorcevic1987martin}. \begin{lemma} \label{characterisation of precaliber} The following conditions are equivalent: \begin{enumerate-(1)} \item \label{characterisation of precaliber 1} $\mathsf{ub}\text{-}\mathsf{FA}_{\mathrm{c.c.c.},\omega_1}$ holds. \item \label{characterisation of precaliber 2} Every c.c.c. forcing is precaliber $\omega_1$. \item \label{characterisation of precaliber 3} Every c.c.c. forcing of size $\omega_1$ is $\sigma$-centred. \item \label{characterisation of precaliber 4} $\mathsf{FA}_{\mathrm{c.c.c.},\omega_1}$ holds. \end{enumerate-(1)} \end{lemma} \begin{proof} \ref{characterisation of precaliber 1}$\Rightarrow$\ref{characterisation of precaliber 2}: This follows immediately from the proof of \cite[Theorem 16.21]{jech2013set}. The proof only requires meeting unboundedly many dense sets. \ref{characterisation of precaliber 2}$\Rightarrow$\ref{characterisation of precaliber 3}: See \cite[Corollary 2.7]{todorcevic1987martin}. \ref{characterisation of precaliber 3}$\Rightarrow$\ref{characterisation of precaliber 4}: See \cite[Theorem 3.3]{todorcevic1987martin}. \ref{characterisation of precaliber 4}$\Rightarrow$\ref{characterisation of precaliber 1}: This is immediate. \end{proof} Given Lemma \ref{characterisation of precaliber}, one wonders whether the characterisation also holds for $\sigma$-centred forcings instead of c.c.c. forcings. The next lemma, together with the fact that $\mathsf{FA}_{\sigma\text{-centred}}$ is equivalent to $\mathfrak{p}>\omega_1$ (see \cite[Theorem 3.1]{todorcevic1987martin}), shows that this is not the case. \begin{lemma} \label{Lemma stat-NP for sigma-centred} For any cardinal $\kappa$ with $\mathrm{cof}(\kappa)>\omega$, $\mathsf{stat}\text{-}\mathsf{N}_{\sigma\text{-}\mathrm{centred},\kappa}$ holds. \end{lemma} \begin{proof} Suppose that $\sigma$ is a name for a stationary subset of $\kappa$. Let $f\colon \mathbb{P}\rightarrow \omega$ witness that $\mathbb{P}$ is $\sigma$-centred. Let $S$ be the set of those $\alpha<\kappa$ such that $(\check{\alpha},p)\in \sigma$ for some $p\in\mathbb{P}$; $S$ is stationary, since $\sigma^G\subseteq S$ for any generic filter $G$. For each $\alpha\in S$, let $p_\alpha$ be such that $(\check{\alpha},p_\alpha)\in \sigma$. Since $\mathrm{cof}(\kappa)>\omega$, there is a stationary subset $R$ of $S$ and some $n\in\omega$ with $f(p_\alpha)=n$ for all $\alpha\in R$. The set $\{p_\alpha\mid \alpha\in R\}$ is centred, so it extends to a filter $g$ containing $p_\alpha$ for all $\alpha\in R$. Then $R\subseteq \sigma^g$, as required. \end{proof} This suggests the question whether $\mathsf{FA}_{\sigma\text{-}\mathrm{centred}}$ implies $\mathsf{FA}^+_{\sigma\text{-}\mathrm{centred}}$ as well. A long-standing open question asks whether one can replace precaliber $\omega_1$ by Knaster in the implication \ref{characterisation of precaliber 2}$\Rightarrow$\ref{characterisation of precaliber 4} of Lemma \ref{characterisation of precaliber}. Recall that a subset of $\mathbb{P}$ is \emph{linked} if it consists of pairwise compatible conditions.
$\mathbb{P}$ is called \emph{Knaster} if, whenever $A\in [\mathbb{P}]^{\omega_1}$, there is some $B\in [A]^{\omega_1}$ that is linked. \begin{question} \cite[Problem 11.1]{todorvcevic2011forcing} Does the statement that every c.c.c. forcing is Knaster imply $\mathsf{FA}_{\mathrm{c.c.c.},\omega_1}$? \end{question} We now turn to the implication $\mathsf{FA}_{\mathrm{c.c.c.},\omega_1} \Longrightarrow \mathsf{stat}\text{-}\mathsf{N}_{\mathrm{c.c.c.},\omega_1}$. To this end, we reconstruct Baumgartner's unpublished result $\mathsf{FA}_{\mathrm{c.c.c.},\kappa} \Longrightarrow\mathsf{FA}_{\mathrm{c.c.c.},\kappa}^{+n}$ that is mentioned without proof in \cite[Section 8]{baumgartner1984applications} and \cite[Page 14]{beaudoin1991proper}. Here $\mathsf{FA}_\kappa^{+n}$ denotes the version of $\mathsf{FA}^+$ with $n$ many names for stationary subsets of $\kappa$. \begin{lemma}[Baumgartner] \label{Baumgartner's lemma} For any uncountable cardinal $\kappa$ and for any $n\in\omega$, $\mathsf{FA}_{\mathrm{c.c.c.},\kappa}$ implies $\mathsf{FA}_{\mathrm{c.c.c.},\kappa}^{+n}$. \end{lemma} \begin{proof} Suppose that for each $i<n$, $\sigma_i$ is a rank $1$ $\mathbb{P}$-name for a stationary subset of $\kappa$. For each $\vec{\alpha}=\langle \alpha_i\mid i< n\rangle \in \kappa^n$, let $A_{\vec{\alpha}}$ be a maximal antichain of conditions which strongly decide $\check{\alpha}_i\in\sigma_i$ for each $i<n$. Let $A=\bigcup_{\vec{\alpha}\in \kappa^n}A_{\vec{\alpha}}$. Since $\mathbb{P}$ satisfies the c.c.c. and $|A|\leq\kappa$, there exists a subforcing $\mathbb{Q} \subseteq \mathbb{P}$ with $A\subseteq \mathbb{Q}$ and $|\mathbb{Q}|\leq\kappa$ such that compatibility is absolute between $\mathbb{P}$ and $\mathbb{Q}$. In particular, $\mathbb{Q}$ is c.c.c. Since every c.c.c. forcing of size at most $\kappa$ is $\sigma$-centred by $\mathsf{FA}_{\mathrm{c.c.c.},\kappa}$ (see \cite[Theorem 4.5]{weiss1984versions}), there is a sequence $\vec{g}=\langle g_k\mid k\in\omega\rangle$ of filters $g_k$ on $\mathbb{P}$ with $\mathbb{Q}\subseteq \bigcup_{k\in\omega} g_k$. Moreover, it follows from the proof of \cite[Theorem 4.5]{weiss1984versions} (by a density argument) that we can choose the filters $g_k$ such that $g_k\cap B_\alpha\neq\emptyset$ for all $(k,\alpha)\in \omega\times \kappa$, where $\vec{B}=\langle B_\alpha\mid \alpha<\kappa\rangle$ is any sequence of dense subsets of $\mathbb{P}$. (The conditions in the c.c.c. forcing consist of finite approximations to finitely many filters.) It remains to find some $k\in\omega$ such that for all $i<n$, the set $\sigma_i^{g_k}$ is stationary. Let $G$ be $\mathbb{P}$-generic over $V$. We claim that $$\prod_{i<n} \sigma_i^G \subseteq \bigcup_{k\in\omega}\ \prod_{i<n}\sigma_i^{g_k}.$$ To see this, suppose that $\vec{\alpha}=\langle \alpha_i\mid i<n\rangle \in \prod_{i<n} \sigma_i^G $ and let $p \in A_{\vec{\alpha}}\cap G$. Then $p\sforces \check{\alpha}_i\in \sigma_i$ for all $i<n$. Since $p\in\mathbb{Q}$, we have $p\in g_k$ for some $k\in\omega$. Hence $\vec{\alpha} \in \prod_{i<n} \sigma_i^{g_k}$. Since $\sigma_i^G$ is stationary for all $i<n$, the above inclusion easily yields that there is some $k\in\omega$ such that $\sigma_i^{g_k}$ is stationary for all $i<n$. \end{proof} Our proof of the previous lemma does not work for $\axiomft{MA}^{+\omega}$. In fact, Baumgartner asked in \cite[Section 8]{baumgartner1984applications}: \begin{question}[Baumgartner 1984] Does $\axiomft{MA}_{\omega_1}$ imply $\axiomft{MA}^{+\omega_1}_{\omega_1}$? \end{question}
We finally turn to bounded name principles for c.c.c. forcings. \begin{lemma} \ \label{clubBNccc} \begin{enumerate-(1)} \item \label{clubBNccc 1} $\mathsf{club}\text{-}\mathsf{BN}_{\mathrm{c.c.c.}}^1$ holds. \item \label{clubBNccc 2} For any c.c.c. forcing $\mathbb{P}$, $\mathsf{ub}\text{-}\mathsf{BN}_{\mathbb{P}}^1$ implies $\mathsf{ub}\text{-}\mathsf{FA}_\mathbb{P}$. \end{enumerate-(1)} \end{lemma} \begin{proof} \ref{clubBNccc 1} If $\sigma$ is a $\mathbb{P}$-name for a set that contains a club, then by the c.c.c. there is a club $C$ with $1\mathrel{\Vdash} \check{C}\subseteq \sigma$. Since $\sigma$ is $1$-bounded, $(\alpha,1)\in \sigma$ for all $\alpha\in C$. Thus for every filter $g$, we have $C\subseteq \sigma^g$. \ref{clubBNccc 2} Suppose that $\mathbb{P}$ satisfies the c.c.c. Suppose that $\vec{D}=\langle D_\alpha\mid \alpha<\omega_1\rangle$ is a sequence of dense subsets of $\mathbb{P}$. Let $A_\alpha$ be a maximal antichain in $D_\alpha$ and let $\vec{a}_\alpha=\langle a^n_\alpha\mid n\in\omega\rangle$ enumerate $A_\alpha$. (For ease of notation, we assume that each $A_\alpha$ is infinite.) Let $\sigma=\{(\omega\cdot\alpha + n, a^n_\alpha)\mid \alpha<\omega_1,\ n\in\omega\}$. Then $\sigma$ is a $1$-bounded name which is forced to be unbounded, since each $A_\alpha$ is a maximal antichain. By $\mathsf{ub}\text{-}\mathsf{BN}^1_\mathbb{P}$, there is a filter $g$ such that $\sigma^g$ is unbounded. Hence $D_\alpha\cap g\neq\emptyset$ for unboundedly many $\alpha<\omega_1$. \end{proof} For any c.c.c. forcing $\mathbb{P}$, the principles $\mathsf{ub}\text{-}\mathsf{BN}_{\mathbb{P}}^1$, $\mathsf{ub}\text{-}\mathsf{N}_{\mathbb{P}}$ and $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}}$ are equivalent by Lemma \ref{clubBNccc} \ref{clubBNccc 2} and the implications in Figure \ref{diagram of implications for ccc forcings}. We do not know their relationship with $\mathsf{stat}\text{-}\mathsf{BN}^1_{\mathrm{c.c.c.}}$. However, we will show in Lemma \ref{CH implies failure of statN(random)} below that $\mathsf{stat}\text{-}\mathsf{BN}^1_{\mathrm{random},\omega_1}$ is not provable in $\axiomft{ZFC}$. Regarding Lemma \ref{clubBNccc} \ref{clubBNccc 1}, it is also easy to see that $\mathsf{club}\text{-}\mathsf{BN}_{\sigma\text{-}\mathrm{closed}}^1$ is provable. This suggests the following question: \begin{question} Is $\mathsf{club}\text{-}\mathsf{BN}_{\mathbb{P}}^1$ provable for every proper forcing $\mathbb{P}$? \end{question} \subsection{Specific forcings} \subsubsection{Cohen forcing} \label{section - Cohen forcing} We now move from classes of forcings to forcing axioms for specific forcings. This is also where we prove most of the negative results in the diagrams above. We start with the simplest case, Cohen forcing, and let $\kappa=\omega_1$.
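Throughout this subsection we use the following presentation of Cohen forcing, matching the notation in the proofs below: conditions are finite binary strings ordered by reverse inclusion, $$\mathbb{P}=2^{<\omega},\qquad p\leq q :\Longleftrightarrow q\subseteq p,$$ so that every real $x\in 2^\omega$ induces the filter $g_x=\{x{\upharpoonright}n\mid n\in\omega\}$ of its initial segments, and every filter containing conditions of unbounded length arises in this way.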
For Cohen forcing, all principles in the right part of the next diagram are provable in $\axiomft{ZFC}$ by Lemma \ref{Lemma stat-NP for sigma-centred} (on $\sigma$-centred forcings) and the basic implications in Figure \ref{diagram of implications}. The left part is not provable, by Remark \ref{Remark_FACohen_meagre} below. \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{N}_{\omega_1}$}} \ar@{<->}[r] \ar@{<->}[d] & {\txt{$\mathsf{club}\text{-}\mathsf{N}_{\omega_1}$}} \ar@{<->}[d] \ar@{->}[r]^{\ref{Lemma stat-NP for sigma-centred}} & \txt{$\mathsf{stat}\text{-}\mathsf{N}_{\omega_1}$} \ar@{<->}[r] \ar@{<->}[d] & \txt{$\mathsf{ub}\text{-}\mathsf{N}_{\omega_1}$} \ar@{<->}[d]& \\ {\txt{$\mathsf{FA}_{\omega_1}$}} \ar@{<->}[r]& \txt{$\mathsf{club}\text{-}\mathsf{FA}_{\omega_1}$} \ar@{->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{FA}_{\omega_1}$} \ar@{<->}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$} \\ }\] \caption{Forcing axioms and name principles at $\omega_1$ for Cohen forcing. } \label{diagram of implications for Cohen forcing} \end{figure} Our first result is an improvement of Lemma \ref{Lemma stat-NP for sigma-centred}. It shows that a simultaneous version of the stationary forcing axiom for countably many sequences of dense sets holds. \begin{lemma} Let $\mathbb{P}$ be Cohen forcing and $\kappa$ a cardinal with $\mathrm{cof}(\kappa)>\omega$. For each $n\in\omega$, let $\vec{D}_n=\langle D^n_\alpha \mid \alpha<\kappa \rangle$ be a sequence of dense subsets of $\mathbb{P}$. Then there exists a filter $g\in V$ such that for all $n$, the trace $\mathrm{Tr}_{g,\vec{D}_n}$ is stationary in $\kappa$.\footnote{See Definition \ref{notation_trace}.} \end{lemma} \begin{proof} Suppose that there is no filter $g$ as described. For $x\in 2^\omega$, let us write $g_x$ to denote the filter $\{x{\upharpoonright}n : n\in\omega\}$. Then for each $x\in 2^\omega$, the filter $g_x$ does not have the required property. So there is a natural number $n_x$ and a club $C_x\subseteq \kappa$ with $g_x\cap D^{n_x}_\alpha=\emptyset$ for all $\alpha\in C_x$. Then the sets $A_n:=\{x\in 2^\omega\mid n_x=n\}$ partition $2^\omega$. By the Baire Category Theorem, not all $A_n$ are nowhere dense. So there is some $n\in\omega$ and some basic open subset $N_t=\{x \in 2^\omega \mid t \subseteq x\}$, $t\in 2^{<\omega}$, such that $A_n\cap N_t$ is dense in $N_t$. Fix a countable set $D\subseteq A_n\cap N_t$ which is dense in $N_t$. Let $\alpha$ be an element of the club $\bigcap_{x\in D}C_x$. Let further $u\in D^n_\alpha$ with $u\leq t$. Since $D$ is dense in $N_t$, there is some $x\in D\cap N_u$. Then $u\in g_x\cap D^n_\alpha$ and hence $g_x \cap D^n_\alpha\neq\emptyset$. On the other hand, we have $x\in A_n$ and hence $n_x=n$. Since also $\alpha\in C_x$, we have $g_x \cap D^n_\alpha=\emptyset$, a contradiction. \end{proof} Using a variant of the previous proof, we can also improve $\mathsf{stat}\text{-}\mathsf{N}_\mathbb{P}$ to work for finitely many names. \begin{lemma} \label{lemma_omega-FA-Cohen} Let $\mathbb{P}$ be Cohen forcing and $\kappa$ a cardinal with $\mathrm{cof}(\kappa)>\omega$. Suppose that $\vec{\sigma}=\langle \sigma_i \mid i\leq n\rangle$ is a sequence of rank $1$ $\mathbb{P}$-names such that for each $i\leq n$, $\mathbb{P}\Vdash ``\sigma_i$ is stationary in $\kappa$''. Then there is a filter $g$ on $\mathbb{P}$ such that for all $i\leq n$, $\sigma_i^g$ is stationary in $\kappa$. In particular, $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\kappa}$ holds.
\end{lemma} \begin{proof} As in the previous proof, let $g_x=\{x{\upharpoonright}n : n\in\omega\}$ for $x\in 2^\omega$. The result will follow from the next claim. \begin{claim} \label{claim_omega-FA-Cohen} If $D$ is any dense subset of $2^\omega$, then there is some $x\in D$ such that $\sigma_i^{g_x}$ is stationary in $\kappa$ for all $i\leq n$. \end{claim} \begin{proof} We can assume that $D$ is countable. If the claim fails, then for each $x\in D$, there is some $i\leq n$ and a club $C_x$ such that $\sigma_i^{g_x} \cap C_x=\emptyset$. Then $C:=\bigcap_{x\in D} C_x$ is a club. Moreover, for each $x\in D$, there is some $i\leq n$ such that $\sigma_i^{g_x} \cap C=\emptyset$. There is some $p\in \mathbb{P}$ such that for each $i\leq n$, there is some $\alpha_i\in C$ such that $p\mathrel{\Vdash} \check{\alpha}_i\in\sigma_i$. By Lemma \ref{Prop_sforcingAndForcing}, we can assume that $p\sforces \check{\alpha}_i\in\sigma_i$ for all $i\leq n$. Now, since $D$ is dense, we can find some $x\in D$ with $p\subseteq x$. Then $p\in g_x$, so by Lemma \ref{Prop_sforcingAndInterpretation} we conclude $\alpha_i\in \sigma_i^{g_x}$ for all $i\leq n$. This contradicts the above property of $C$. \end{proof} This completes the proof of Lemma \ref{lemma_omega-FA-Cohen}. \end{proof} Given the previous result about $\mathsf{stat}\text{-}\mathsf{N}$, we might expect to be able to correctly interpret $\omega$ many names. But the above proof does not work: it breaks down where we introduce $p$. For each $i$, we can find $p_i$ strongly forcing $\alpha_i\in \sigma_i$; but then we would want to take some $p$ that is below every $p_i$, and that is in general only possible in $\sigma$-closed forcings. We can, however, apply the same technique in the presence of $\mathsf{FA}$ to prove $\mathsf{FA}^+$. \begin{lemma} \label{lemma_FA_implies_FA+} Let $\mathbb{P}$ be Cohen forcing and $\kappa$ a cardinal with $\mathrm{cof}(\kappa)>\omega$. Then $\mathsf{FA}_\mathbb{P}$ implies $\mathsf{FA}^+_\mathbb{P}$. \end{lemma} \begin{proof} We will in fact prove a stronger version for finitely many names. Suppose that $\vec{\sigma}=\langle \sigma_i \mid i\leq n\rangle$ is a sequence of rank $1$ $\mathbb{P}$-names such that for each $i\leq n$, $\mathbb{P}\Vdash ``\sigma_i$ is stationary in $\kappa$''. Suppose that $\vec{D}=\langle D_\alpha\mid \alpha<\kappa\rangle$ is a sequence of dense open sets. Then $$D:=\{x\in 2^\omega\mid \forall \alpha<\kappa\ \exists p\in D_\alpha\ p\subseteq x\}$$ consists of all reals $x$ such that $g_x \cap D_\alpha\neq\emptyset$ for all $\alpha<\kappa$. The next claim completes the proof: by Claim \ref{claim_omega-FA-Cohen}, it implies that there is some $x\in D$ such that $\sigma_i^{g_x}$ is stationary for all $i\leq n$, while $g_x$ meets every $D_\alpha$ since $x\in D$. \begin{claim} $D$ is dense in $2^\omega$. \end{claim} \begin{proof} Fix $q\in \mathbb{P}$; we will find some $x\in D$ with $q\subseteq x$. Since the forcing $\mathbb{P}_q:=\{p\in \mathbb{P}\mid p\leq q\}$ is isomorphic to Cohen forcing via the map $r\mapsto q^\smallfrown r$, $\mathsf{FA}_{\mathbb{P}_q}$ holds. Hence, we can find a filter $g$ on $\mathbb{P}_q$ which meets $D_\alpha \cap \mathbb{P}_q$ for every $\alpha<\kappa$. $\bigcup g$ is an element of $2^{\leq \omega}$ with $q\subseteq \bigcup g$ by compatibility of the elements of a filter. Then any real $x$ with $\bigcup g \subseteq x$ satisfies $x\in D$ and $q\subseteq x$. \end{proof} Lemma \ref{lemma_FA_implies_FA+} follows.
\end{proof} \begin{remark} \label{Remark_FACohen_meagre} Note that $\mathsf{FA}_{\text{Cohen},\omega_1}$ also has a well known characterisation via sets of reals: it is equivalent to the statement that the union of $\omega_1$ many meagre sets does not cover $2^\omega$. In particular, $\mathsf{FA}_{\text{Cohen},\omega_1}$ is not provable in $\axiomft{ZFC}$. \end{remark} \subsubsection{Random forcing} The product topology on $2^\omega$ is induced by the basic open sets $N_t= \{ x\in 2^\omega \mid t\subseteq x \}$ for $t\in 2^{<\omega}$. \emph{Lebesgue measure} is by definition the unique measure $\mu$ on the Borel subsets of $2^\omega$ with $\mu(N_t)=2^{-n}$ for $t\in 2^n$. \begin{definition} \emph{Random forcing} $\mathbb{P}$ is the set of Borel subsets of $2^\omega$ with positive Lebesgue measure. $\mathbb{P}$ is quasi-ordered by inclusion, i.e. $p\leq q :\Leftrightarrow p\subseteq q$ for $p,q \in \mathbb{P}$. \end{definition} Strictly speaking, random forcing is the partial order obtained by taking the quotient of the preorder, where two conditions are equivalent if their symmetric difference has measure $0$. To simplify notation, we will talk about Borel sets of positive measure as if they were conditions in random forcing. \begin{figure}[H] \[ \xymatrix@R=3.5em{ {\txt{$\mathsf{N}_{\omega_1}$}} \ar@{<->}[r] \ar@{<->}[d] & {\txt{$\mathsf{club}\text{-}\mathsf{N}_{\omega_1}$}} \ar@{<->}[d] \ar@{<..}[r]^{\ref{Lemma random statN implies FA+}} & \txt{$\mathsf{stat}\text{-}\mathsf{N}_{\omega_1}$} \ar@{..>}[r] \ar@{..>}[d] & \txt{$\mathsf{ub}\text{-}\mathsf{N}_{\omega_1}$} \ar@{<->}[d]& \\ {\txt{$\mathsf{FA}_{\omega_1}$}} \ar@{<->}[r]& \txt{$\mathsf{club}\text{-}\mathsf{FA}_{\omega_1}$} \ar@{<->}[r] & \txt{$\mathsf{stat}\text{-}\mathsf{FA}_{\omega_1}$} \ar@{<->}[r] & \txt{$\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$}\ar@/^0.8cm/@{<->}[lll]_{\ref{Lemma_Random ubFA}} \\ }\] \caption{Forcing axioms and name principles at $\omega_1$ for random forcing. } \label{diagram of implications for random forcing} \end{figure} We have seen in Lemma \ref{Lemma stat-NP for sigma-centred} and Remark \ref{Remark_FACohen_meagre} that $\mathsf{ub}\text{-}\mathsf{FA}_\mathbb{P}$ does not imply $\mathsf{FA}_\mathbb{P}$ for all $\sigma$-centred forcings $\mathbb{P}$. Random forcing is not $\sigma$-centred by \cite[Lemma 3.7]{brendle2009forcing}, and here the implication does hold: \begin{lemma} \label{Lemma_Random ubFA} Let $\mathbb{P}$ denote random forcing. The following are equivalent: \begin{enumerate-(1)} \item \label{Lemma_Random ubFA 1} $\mathsf{FA}_{\mathbb{P},\omega_1}$ \item $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P},\omega_1}$ \label{Lemma_Random ubFA 2} \item $2^\omega$ is not the union of $\omega_1$ many null sets \label{Lemma_Random ubFA 3} \end{enumerate-(1)} \end{lemma} The equivalence of \ref{Lemma_Random ubFA 1} and \ref{Lemma_Random ubFA 3} is a well-known fact, but we are really interested in the equivalence of \ref{Lemma_Random ubFA 1} and \ref{Lemma_Random ubFA 2}. The proof of \ref{Lemma_Random ubFA 2}$\Rightarrow$\ref{Lemma_Random ubFA 3} also works for certain forcings of the form $\mathbb{P}_I$. Here $\mathbb{P}_I$ consists of all Borel subsets $B\notin I$ of $2^\omega$, where $I$ is a $\sigma$-ideal on the Borel subsets of the Cantor space, ordered by inclusion up to sets in $I$. For \ref{Lemma_Random ubFA 2}$\Rightarrow$\ref{Lemma_Random ubFA 3}, it suffices that the set of closed $p\in \mathbb{P}$ is dense in $\mathbb{P}$ and $N_t\notin I$ for all $t\in 2^{<\omega}$.
If additionally \ref{Lemma_Random ubFA 3}$\Rightarrow$\ref{Lemma_Random ubFA 1} holds, then $\mathsf{ub}\text{-}\mathsf{FA}_{\mathbb{P}_I,\omega_1}$ implies $\mathsf{FA}_{\mathbb{P}_I,\omega_1}$. \begin{proof} \ref{Lemma_Random ubFA 1}$\Rightarrow$\ref{Lemma_Random ubFA 2}: Immediate. \ref{Lemma_Random ubFA 2}$\Rightarrow$\ref{Lemma_Random ubFA 3}: We prove the contrapositive. Suppose $2^\omega = \bigcup_{\alpha<\omega_1} S_\alpha$, where $S_\alpha \subseteq 2^\omega$ has measure $0$. Without loss of generality, we may assume that $\langle S_\alpha \mid \alpha<\omega_1 \rangle$ is an increasing sequence, i.e. $\alpha<\beta<\omega_1\Rightarrow S_\alpha \subseteq S_\beta$. Then $$D_\alpha=\{B\in \mathbb{P}\mid B\subseteq 2^\omega\setminus S_\alpha \text{ and } B \text{ is closed}\}$$ is dense. Let $g\in V$ be a filter. Without loss of generality, assume $g$ is an ultrafilter. Then for any $n\in\omega$, there is some $t\in 2^n$ with $N_t\in g$. It follows that there is a unique $x\in 2^\omega$ such that $N_t\in g$ for all $t\subseteq x$. It is easy to check that $x$ is in the closure of any element of $g$. Towards a contradiction, suppose that for unboundedly many $\alpha$ we can find $B_\alpha\in D_\alpha\cap g$. Then $B_\alpha$ is closed, so $x\in B_\alpha\subseteq 2^\omega \setminus S_\alpha$ and hence $x\notin S_\alpha$. This contradicts the assumptions that $2^\omega = \bigcup_{\alpha<\omega_1} S_\alpha$ and that the $S_\alpha$ are increasing. \ref{Lemma_Random ubFA 3}$\Rightarrow$\ref{Lemma_Random ubFA 1}: Again we prove the contrapositive. Let $\langle D_\alpha\mid \alpha<\omega_1\rangle$ be a sequence of predense sets such that there is no filter in $V$ meeting all of them. $\mathbb{P}$ has the c.c.c., so without loss of generality we may assume every $D_\alpha$ is countable. Recall that $x\in 2^\omega$ is a \emph{density point} of $B$ if $\frac{\mu(B\cap N_{x{\upharpoonright}k})}{\mu(N_{x{\upharpoonright}k})}$ tends to $1$ as $k$ tends to infinity. For $B\in\mathbb{P}$, let $D(B)$ be the set of density points of $B$. For $\alpha<\omega_1$, let $$T_\alpha=\bigcup_{B\in D_\alpha}D(B) \text{ \ \ \ and \ \ \ } S_\alpha =2^\omega\setminus T_\alpha.$$ We first show that $S_\alpha$ is a null set. To see this, suppose that $S_\alpha$ has positive measure. Then we can find a closed subset $C\subseteq S_\alpha$ with positive measure. Since $D_\alpha$ is predense, we can find some $B\in D_\alpha$ with $\mu(B\cap C)>0$. Since $B\triangle D(B)$ is null by Lebesgue's Density Theorem, we have $\mu(D(B)\cap C)>0$. This contradicts $D(B)\cap C\subseteq T_\alpha \cap C= \emptyset$. We now show $\bigcup_{\alpha<\omega_1}S_\alpha=2^\omega$. To see this, take any $x\in 2^\omega$ and let $$g_x=\{B\in \mathbb{P}\mid x\in D(B)\}$$ denote the filter generated by $x$. By our assumption, we can take $\alpha<\omega_1$ such that $g_x\cap D_\alpha=\emptyset$. We show that $x\in S_\alpha$, as required. Otherwise $x\in T_\alpha$, so we can find $B\in D_\alpha$ with $x\in D(B)$. But then $B\in g_x\cap D_\alpha$. This contradicts $g_x\cap D_\alpha=\emptyset$. \end{proof} Combining the proofs of \ref{Lemma_Random ubFA 2}$\Rightarrow$\ref{Lemma_Random ubFA 3} and \ref{Lemma_Random ubFA 3}$\Rightarrow$\ref{Lemma_Random ubFA 1}, we can obtain the following refinement: \begin{lemma} \label{corollary_dense subsets for ubFA and FA} Let $\mathbb{P}$ be random forcing. Let $\langle D_\alpha\mid \alpha<\omega_1\rangle$ be a collection of predense sets.
There exists another collection $\langle D'_\alpha\mid \alpha<\omega_1\rangle$ of dense sets such that if a filter $g$ meets unboundedly many $D'_\alpha$, then it can be extended to a filter $g'$ which meets every $D_\alpha$. \end{lemma} \begin{proof} Define $S_\alpha$ as in the proof of \ref{Lemma_Random ubFA 3}$\Rightarrow$\ref{Lemma_Random ubFA 1}. Then for any $x\in 2^\omega$, we have $g_x\cap D_\alpha\neq\emptyset$ or $x \in S_\alpha$. Consider the null sets $S'_\alpha=\bigcup_{\beta<\alpha}S_\beta$. Then define $D'_\alpha$ from $S'_\alpha$ in the same way we defined $D_\alpha$ from $S_\alpha$ in the proof of \ref{Lemma_Random ubFA 2}$\Rightarrow$\ref{Lemma_Random ubFA 3}. As in the proof of \ref{Lemma_Random ubFA 2}$\Rightarrow$\ref{Lemma_Random ubFA 3}, we obtain the following for any $x \in 2^\omega$ and $\alpha<\omega_1$: if $g_x\cap D'_\alpha \neq\emptyset$, then $x\notin S'_\alpha$. Let $g$ be a filter which meets unboundedly many $D'_\alpha$. Then $g\subseteq g_x$ for some $x\in 2^\omega$. We have seen that $x\notin S'_\alpha$ for unboundedly many $\alpha$. Therefore $x$ misses all $S'_\alpha$ and all $S_\alpha$. By the choice of the $S_\alpha$, we have $g_x\cap D_\alpha\neq\emptyset$ for all $\alpha<\omega_1$. \end{proof} This then allows us to prove that $\mathsf{stat}\text{-}\mathsf{N}$ alone gives us the full $\mathsf{FA}^+$. \begin{lemma} \label{Lemma random statN implies FA+} Let $\mathbb{P}$ be random forcing. Then $\mathsf{stat}\text{-}\mathsf{N}_\mathbb{P}\implies\mathsf{FA}^+_\mathbb{P}$. \end{lemma} \begin{proof} Suppose that $\langle D_\alpha \mid \alpha <\omega_1\rangle$ is a sequence of dense subsets of $\mathbb{P}$. Suppose further that $\sigma$ is a rank $1$ name which is forced to be stationary. Let $\langle D'_\alpha \mid \alpha<\omega_1\rangle$ be a sequence as in Lemma \ref{corollary_dense subsets for ubFA and FA} and $$ \tau = \{ (\check{\alpha}, p) \mid p\in D'_\alpha \wedge p \sforces \check{\alpha}\in \sigma \}. $$ Note that $\mathbb{P}\mathrel{\Vdash} \sigma=\tau$. By $\mathsf{stat}\text{-}\mathsf{N}_\mathbb{P}$, we obtain a filter $g$ such that $\tau^g$ is stationary. Since $\tau^h \subseteq \sigma^h$ for all filters $h$, $\sigma^g$ is stationary as well. Moreover, $g \cap D'_\alpha\neq \emptyset$ for stationarily many $\alpha$. By the choice of $\langle D'_\alpha \mid \alpha<\omega_1\rangle$, we can extend $g$ to a filter $g'$ such that $g'\cap D_\alpha\neq \emptyset$ for all $\alpha<\omega_1$. Moreover, $\sigma^g \subseteq \sigma^{g'}$, so $\sigma^{g'}$ is stationary. \end{proof} The missing link in Figure \ref{diagram of implications for random forcing} is: \begin{question} If $\mathbb{P}$ denotes random forcing, does $\mathsf{FA}_{\mathbb{P},\omega_1}$ imply $\mathsf{stat}\text{-}\mathsf{N}_{\mathbb{P},\omega_1}$? \end{question} We finally show that the $1$-bounded stationary name principle for random forcing is non-trivial, as discussed at the end of Section \ref{section ccc forcings}. \begin{lemma} \label{CH implies failure of statN(random)} Let $\kappa=2^{\aleph_0}$ and assume that every set of reals of size ${<}\kappa$ is null.\footnote{This assumption is equivalent to $\mathrm{non}(\mathrm{null})=2^{\aleph_0}$. It follows from $\axiomft{MA}$, but not from $\mathsf{FA}_{\mathrm{random}}$ by known facts about Cicho\'n's diagram.} Then $\mathsf{stat}\text{-}\mathsf{BN}_{\mathbb{P},\kappa}^1$ fails for random forcing $\mathbb{P}$. In particular, $\axiomft{CH}$ implies that $\mathsf{stat}\text{-}\mathsf{BN}_{\mathbb{P},\omega_1}^1$ fails.
\end{lemma} \begin{proof} It suffices to show that $\mathsf{stat}\text{-}\mathsf{BN}_{\mathbb{P},\kappa}^\omega$ fails. To see this, apply Corollary \ref{corollary_equivalence of lambda-bounded and 1-bounded name principles} and use the fact that random forcing is well-met and that for any $q\in \mathbb{P}$, the forcing $\mathbb{P}_q$ is isomorphic to $\mathbb{P}$ by \cite[Theorem 17.41]{kechris2012classical}. Let $\vec{x}=\langle x_\alpha \mid \alpha<\kappa\rangle$ enumerate all reals. Then $C_\beta:= \{ x_\alpha \mid \alpha<\beta \}$ is null for all $\beta<\kappa$ by our assumption. For each $\alpha<\kappa$, let $A_\alpha$ be a countable set of approximations to the complement of $C_\alpha$ in the following sense: \begin{enumerate-(a)} \item Each element of $A_\alpha$ is a closed set of positive measure disjoint from $C_\alpha$, and \item \label{condition for random forcing 2} For all $\epsilon>0$, $A_\alpha$ contains a set $C$ with $\mu(C)\geq 1-\epsilon$. \end{enumerate-(a)} Such sets exist by regularity of Lebesgue measure, since $C_\alpha$ is null. Let $\sigma= \{ (\check{\alpha},p) \mid p \in A_\alpha \}$. Then $\Vdash_{\mathbb{P}} ``\sigma$ is stationary'', since each $A_\alpha$ is predense by \ref{condition for random forcing 2} and hence $\Vdash_\mathbb{P} \sigma=\check{\kappa}$. We claim that there is no filter $g$ in $V$ such that $\sigma^g$ is unbounded. If $g$ were such a filter, then by extending $g$ we could assume that for every $n\in\omega$, $g$ contains $N_{t_n}$ for some (unique) $t_n\in 2^n$. (Clearly $\sigma^g$ will remain unbounded.) Let $x=\bigcup_{n\in\omega} t_n$ and suppose that $x=x_\alpha$. Since $\sigma^g$ is unbounded, there is some $\gamma>\alpha$ in $\sigma^g$. Find some $p\in A_\gamma$ with $p\in g$. By the definition of $A_\gamma$, $p$ is a closed set with $x_\alpha\notin p$. Hence $p \cap N_{t_n}=\emptyset$ for some $n\in\omega$. But this contradicts the fact that both $p$ and $N_{t_n}$ are in $g$. \end{proof} \subsubsection{Hechler forcing} For $\sigma$-centred forcings $\mathbb{P}$, the principles on the right side of Figure \ref{diagram of implications} are provable in $\axiomft{ZFC}$ (see Lemma \ref{Lemma stat-NP for sigma-centred}). A subtle difference appears when we add the requirement that the filter has to meet countably many fixed dense sets. We write $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}$ for this axiom (see Definition \ref{Defn_SpecialFA}). For some forcings, this axiom is stronger than $\mathsf{ub}\text{-}\mathsf{FA}$. To see this, we will make use of the fact that for Hechler forcing, a filter that meets certain countably many dense sets corresponds to a real. Recall that a subset $A\subseteq \omega^\omega$ is \emph{unbounded} if no $y\in \omega^\omega$ eventually strictly dominates all $x\in A$, where $y$ \emph{eventually strictly dominates} $x$ if $\exists m\ \forall n\geq m\ x(n)< y(n)$. The next result shows that $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$ for Hechler forcing implies the negation of the continuum hypothesis. \begin{lemma} Let $\mathbb{P}$ denote Hechler forcing. If $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_\mathbb{P}$ holds, then the size of any unbounded family is at least $\omega_2$. \end{lemma} \begin{proof} Towards a contradiction, suppose $\omega\text{-}\mathsf{ub}\text{-}\mathsf{FA}_\mathbb{P}$ holds and $A$ is an unbounded family of size $\omega_1$. Let us enumerate its elements as $\vec{x}=\langle x_\alpha \mid \alpha<\omega_1 \rangle$. For each $\alpha<\omega_1$, we first define a real $y_\alpha$ by taking a sort of ``diagonal maximum'' of $\vec{x}$.
Let $\pi\colon\alpha \rightarrow \omega$ be an injection and let $$y_\alpha(n)=\max \{x_\gamma(n)\mid \pi(\gamma)\leq n\},$$ with the convention $\max\emptyset=0$. It is easy to check that $y_\alpha$ is well defined, and that it eventually dominates $x_\gamma$ for all $\gamma<\alpha$. We now define \begin{equation*} D_\alpha = \{(s,x)\in \mathbb{P}\mid x \text{ eventually strictly dominates }y_\alpha\}, \end{equation*} and for $n<\omega$, \begin{equation*} E_n=\{(s,x)\in \mathbb{P}\mid \text{length}(s)\geq n\}. \end{equation*} These sets are easily seen to be dense. Now let $g\in V$ be a filter meeting unboundedly many $D_\alpha$ and all $E_n$. Since $g$ meets all $E_n$, the first components of its conditions are arbitrarily long. Since all its elements are compatible, this means that the union $\bigcup \{s\mid (s,x)\in g\}$ is a real $y$. Moreover, $y$ must eventually strictly dominate $x$ for every $(s,x)\in g$. But there are unboundedly many $\alpha$ such that $g$ meets $D_\alpha$. For any such $\alpha$, there is some $(s,x)\in g\cap D_\alpha$, where $x$ eventually strictly dominates $y_\alpha$. Hence, $y$ must eventually strictly dominate unboundedly many $y_\alpha$ and hence every $x \in A$, a contradiction, since $A$ was assumed to be unbounded. \end{proof} \subsubsection{Suslin trees} A Suslin tree is a tree of height $\omega_1$ with no uncountable branches or antichains. The existence of Suslin trees is not provable from $\axiomft{ZFC}$, but follows from $\diamondsuit_{\omega_1}$. We can of course think of a Suslin tree $T$ as a forcing (with stronger conditions sitting higher in the tree); it will add a cofinal branch through the tree. We use Suslin trees as test cases for the weakest principles defined above. As expected, we can show that $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails in most cases. \begin{lemma} Suppose $T$ is a Suslin tree. Then $\mathsf{stat}\text{-}\mathsf{BN}^\omega_{T,\omega_1}$ fails. \end{lemma} \begin{proof} Let $\sigma=\{\langle \check{\alpha},p\rangle\mid \alpha<\omega_1,\ p\in T,\ \text{height}(p)=\alpha\}$. It is easy to see that $\sigma$ is $\omega$-bounded, since each level of a Suslin tree is countable, and that $\sigma$ is forced to be equal to $\omega_1$. But any filter $g\in V$ is a subset of a branch in $V$, and therefore countable. So $\sigma^g$ is bounded and in particular not stationary. \end{proof} \begin{corollary} \label{Suslin trees} Suppose that a Suslin tree exists. Then there exists a Suslin tree $T$ such that $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails. \end{corollary} \begin{proof} Let $T$ be any Suslin tree. By the previous lemma we know that $\mathsf{stat}\text{-}\mathsf{BN}^\omega_{T,\omega_1}$ fails. But then by Corollary \ref{lemma_failure of lambda-bounded and 1-bounded name principle for trees}, $T$ contains a subtree $S$ such that $\mathsf{stat}\text{-}\mathsf{BN}^1_{S,\omega_1}$ fails. \end{proof} This also tells us that $\mathsf{stat}\text{-}\mathsf{BN}^1_{\mathbb{P},\omega_1}$ is not equivalent to $\mathsf{stat}\text{-}\mathsf{BFA}^1_{\mathbb{P},\omega_1}$, since the latter is trivially provable in $\axiomft{ZFC}$ for any forcing. In fact, if we assume $\diamondsuit_{\omega_1}$ (which is somewhat stronger than the existence of a Suslin tree, see \cite[Section 3]{rinot2011jensen}), then we can do better than this: we can show that $\mathsf{stat}\text{-}\mathsf{BN}^1_{\omega_1}$ fails for every Suslin tree. \begin{lemma} \label{Lemma diamond, Suslin tree and failure of statBN1} Suppose $\diamondsuit_{\omega_1}$ holds. If $T$ is a Suslin tree, then $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails. \end{lemma} \begin{proof} Let $(A_\gamma)_{\gamma<\omega_1}$ be a sequence given by $\diamondsuit_{\omega_1}$.
\subsubsection{Suslin trees} A Suslin tree is a tree of height $\omega_1$ with no uncountable branches and no uncountable antichains. The existence of Suslin trees is not provable from $\axiomft{ZFC}$, but follows from $\diamondsuit_{\omega_1}$. We can of course think of a Suslin tree $T$ as a forcing; it adds a cofinal branch through the tree. We use Suslin trees as test cases for the weakest principles defined above. As expected, these principles tend to fail: $\mathsf{stat}\text{-}\mathsf{BN}^\omega_{T,\omega_1}$ fails for every Suslin tree $T$, and $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails for at least some Suslin trees $T$. \begin{lemma} Suppose $T$ is a Suslin tree. Then $\mathsf{stat}\text{-}\mathsf{BN}^\omega_{T,\omega_1}$ fails. \end{lemma} \begin{proof} Let $\sigma=\{\langle \check{\alpha},p\rangle: \alpha<\omega_1, p\in T, \text{height}(p)=\alpha\}$. Since every level of a Suslin tree is countable, $\sigma$ is $\omega$-bounded, and $\sigma$ is forced to be equal to $\omega_1$, since the generic branch is cofinal. But any filter $g\in V$ is a subset of a branch in $V$, and therefore countable. So $\sigma^g$ is bounded and, in particular, neither unbounded nor stationary. \end{proof} \begin{corollary} \label{Suslin trees} Suppose that a Suslin tree exists. Then there exists a Suslin tree $T$ such that $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails. \end{corollary} \begin{proof} Let $T$ be any Suslin tree. By the previous lemma, $\mathsf{stat}\text{-}\mathsf{BN}^\omega_{T,\omega_1}$ fails. But then by Corollary \ref{lemma_failure of lambda-bounded and 1-bounded name principle for trees}, $T$ contains a subtree $S$ such that $\mathsf{stat}\text{-}\mathsf{BN}^1_{S,\omega_1}$ fails. \end{proof} This also tells us that $\mathsf{stat}\text{-}\mathsf{BN}^1_{\mathbb{P},\omega_1}$ is not equivalent to $\mathsf{stat}\text{-}\mathsf{BFA}^1_{\mathbb{P},\omega_1}$, since the latter is provable in $\axiomft{ZFC}$ for every forcing. In fact, if we assume $\diamondsuit_{\omega_1}$ (which is somewhat stronger than the existence of a Suslin tree, see \cite[Section 3]{rinot2011jensen}), then we can do better than this: we can show that $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails for every Suslin tree $T$. \begin{lemma} \label{Lemma diamond, Suslin tree and failure of statBN1} Suppose $\diamondsuit_{\omega_1}$ holds. If $T$ is a Suslin tree, then $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails. \end{lemma} \begin{proof} Let $(A_\gamma)_{\gamma<\omega_1}$ be a $\diamondsuit_{\omega_1}$-sequence; that is, $A_\gamma \subseteq \gamma$ for every $\gamma$, and for any $S\subseteq \omega_1$, the set $\{\gamma<\omega_1:S\cap \gamma=A_\gamma\}$ is stationary. We build a rank $1$ name $\sigma=\{\langle \check{\alpha},p\rangle:\alpha<\omega_1, p\in B_\alpha\}$ by defining the sets $B_\alpha$ recursively as follows. Suppose we have defined $B_\gamma$ for all $\gamma<\alpha$. Consider $\bigcup_{\gamma\in A_\alpha}B_\gamma$. If this union is predense, then we let $B_\alpha=\emptyset$. Otherwise, choose a condition $p\in T$ above level $\alpha$ of the tree that is incompatible with every element of that union, and let $B_\alpha=\{p\}$. If $G$ is a generic filter, then every club $C'\subseteq \omega_1$ in $V[G]$ contains a club $C\in V$, as $T$ is c.c.c. Hence, to show that $T\Vdash``\sigma \text{ is stationary}"$, we only need to show that for every club $C\in V$, the set $\bigcup_{\alpha\in C}B_\alpha$ is predense. Suppose that this fails for some club $C$, and fix a condition $q$ incompatible with every element of $\bigcup_{\alpha\in C}B_\alpha$. The set of $\alpha\in C$ with $C\cap \alpha=A_\alpha$ is stationary. For any such $\alpha$, the union considered in the definition of $B_\alpha$ is $\bigcup_{\gamma\in A_\alpha}B_\gamma=\bigcup_{\gamma\in C\cap \alpha} B_\gamma$; since $q$ witnesses that this union is not predense, $B_\alpha$ consists of a single condition that is incompatible with every element of $\bigcup_{\gamma\in C\cap \alpha} B_\gamma$. Now if $\alpha<\alpha'$ both lie in this stationary set, then $\alpha\in C\cap\alpha'$, so the condition in $B_{\alpha'}$ is incompatible with the condition in $B_\alpha$. This yields an uncountable antichain. Since a Suslin tree is by definition c.c.c., this is a contradiction. Hence $T\mathrel{\Vdash} ``\sigma \text{ is stationary}"$. But now let $g\in V$ be a filter. By extending it if necessary, we can assume without loss of generality that $g$ is a maximal branch of the tree. Since $g\in V$ and $T$ has no uncountable branches, $g$ is countable; let $\gamma$ be the supremum of the heights of its elements. Let $\alpha>\gamma$, and let $q\in g$. Since $B_\alpha$ is at most a singleton $\{p\}$ with $\text{height}(p)\geq \alpha >\gamma \geq \text{height}(q)$, and since $T$ is atomless, we know there is some $r\leq q$ with $r\mathrel{\Vdash} \alpha \not \in \sigma$. Hence $q \not\mathrel{\Vdash} \alpha \in \sigma$. Since this holds for all $q\in g$, it follows that $\alpha \not \in \sigma^{(g)}$. Hence, far from being stationary, $\sigma^{(g)}$ is not even unbounded! \end{proof} So (assuming the existence of Suslin trees) there are certainly some Suslin trees for which $\mathsf{stat}\text{-}\mathsf{BN}^1$ fails. And with strong enough assumptions, we can show that $\mathsf{stat}\text{-}\mathsf{BN}^1$ fails for every Suslin tree. So it is natural to ask: \begin{question} Can we show in $\axiomft{ZFC}$ that $\mathsf{stat}\text{-}\mathsf{BN}^1_{T,\omega_1}$ fails for every Suslin tree $T$? \end{question} Note that we can show the failure of $\mathsf{ub}\text{-}\mathsf{BN}^1_{T,\omega_1}$ for any Suslin tree $T$. Enumerate its level $\alpha$ elements as $\{p_{\alpha,n} : n\in \omega\}$ (possibly with repetitions, since each level is countable). Now let $$ \sigma = \{(\check{\beta}, p_{\alpha,n}) : \alpha<\omega_1,\ n\in \omega,\ \beta=\omega\cdot\alpha+n\}.$$ Then $\sigma$ is forced to be unbounded, but if $g\in V$ is such that $\sigma^g$ is unbounded, then $g$ contains conditions of unboundedly many heights and therefore defines an uncountable branch through $T$. \subsubsection{Club shooting} The next lemma provides a counterexample to the implication $\mathsf{club}\text{-}\mathsf{BFA}_\kappa^\lambda$ $\Rightarrow$ $\mathsf{club}\text{-}\mathsf{BN}_\kappa^\lambda$ in Figure \ref{diagram of implications - bounded with lambda<kappa}. It is open whether there is such a counterexample for complete Boolean algebras. Suppose that $S$ is a stationary and co-stationary subset of $\omega_1$. Let $\mathbb{P}_S$ denote the forcing that shoots a club through $S$. Its conditions are the closed bounded subsets of $S$, ordered by end extension.
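In symbols, one standard way to formalise this order (with the convention that the empty condition is the maximal element) is $$\mathbb{P}_S=\{p\subseteq S \mid p \text{ is closed and bounded in } \omega_1\}, \qquad q\leq p \ \text{ iff } \ q\cap (\max(p)+1)=p$$ for nonempty $p$, and $q\leq \emptyset$ for every $q$; thus $1_{\mathbb{P}_S}=\emptyset$.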
\begin{lemma} \ \label{Lemma separating BFA from clubBN} \begin{enumerate-(1)} \item \label{Lemma separating BFA from clubBN 1} $\mathsf{BFA}^\omega_{\mathbb{P}_S,\omega_1}$ holds. \item \label{Lemma separating BFA from clubBN 2} $\mathsf{club}\text{-}\mathsf{BN}^1_{\mathbb{P}_S,\omega_1}$ fails. \end{enumerate-(1)} In particular, for no $1\leq \lambda\leq \omega$ does $\mathsf{BFA}^\lambda_{\mathbb{P}_S,\omega_1}$ imply $\mathsf{club}\text{-}\mathsf{BN}^\lambda_{\mathbb{P}_S,\omega_1}$. \end{lemma} \begin{proof} \ref{Lemma separating BFA from clubBN 1}: We claim that every maximal antichain $A\neq \{1_{\mathbb{P}_S} \}$ is uncountable. (This shows that $\mathsf{BFA}^\omega_{\mathbb{P}_S,\omega_1}$ holds vacuously.) To see this, suppose that $A$ is countable. Note that every $p\in A$ is nonempty, since $1_{\mathbb{P}_S}=\emptyset$ is compatible with every condition. Let $\alpha=\sup\{ \min(p)\mid p\in A\}$ and find some $\beta>\alpha$ in $S$. Then $q=\{\beta\}$ is incompatible with all $p\in A$: any common extension $r$ of $q$ and $p$ would satisfy $r\cap(\beta+1)=\{\beta\}$ and $p\subseteq r$, so $\min(p)\in r\cap (\beta+1)=\{\beta\}$, contradicting $\min(p)<\beta$. So $A$ cannot be maximal. \ref{Lemma separating BFA from clubBN 2}: $\sigma=\check{S}$ is $1$-bounded and $\mathbb{P}_S \mathrel{\Vdash} ``\sigma \text{ contains a club}"$, since the generic club is a subset of $S$. But for every filter $g$, $\sigma^g=S$ does not contain a club, since $S$ is co-stationary. \end{proof}
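Written out as a set of pairs, with the usual convention for check names of sets of ordinals, the name used in \ref{Lemma separating BFA from clubBN 2} is $$\sigma=\check{S}=\{(\check{\alpha},1_{\mathbb{P}_S})\mid \alpha\in S\}.$$ Each ordinal is attached to a single condition, which is exactly the $1$-boundedness required above; and since every filter contains $1_{\mathbb{P}_S}$, indeed $\sigma^g=S$ for every filter $g$.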
\section{Conclusion} The above results show that name principles are often equivalent to forcing axioms. This provides an understanding of the basic name principles $\mathsf{N}_{\mathbb{P},\kappa}$ and of simultaneous name principles for $\Sigma_0$-formulas. For bounded names, the results provide new characterisations of the bounded forcing axioms $\mathsf{BFA}^\lambda$ for $\lambda\geq\kappa$. Name principles are closely related to generic absoluteness and can be used to reprove Bagaria's equivalence between bounded forcing axioms of the form $\mathsf{BFA}^\kappa$ and generic absoluteness principles. Bagaria's result has recently been extended by Fuchs \cite{fuchs2021aronszajn}. He introduced a notion of $\Sigma^1_1(\kappa,\lambda)$-absoluteness for cardinals $\lambda\geq\kappa$ and proved that it is equivalent to $\mathsf{BFA}^\lambda_\kappa$. It remains to be seen whether this can be derived from our results.

Several problems about the unbounded forcing axiom $\mathsf{ub}\text{-}\mathsf{FA}_\kappa$ remain open. The results in Lemmas \ref{ubFA implies BFA} and \ref{ubFA implies FA for sigma-distributive forcings} about obtaining (bounded) forcing axioms from $\mathsf{ub}\text{-}\mathsf{FA}_\kappa$ for forcings that do not add reals or ${<}\kappa$-sequences, respectively, hint at possible generalisations (see Question \ref{question ubFA BFA}). For forcings which add reals, we have that $\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$ is trivial for all $\sigma$-linked forcings and implies $\mathsf{FA}_{\omega_1}$ for random forcing. In all these cases, $\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$ and $\mathsf{stat}\text{-}\mathsf{FA}_{\omega_1}$ are either both trivial or both equivalent to $\mathsf{FA}_{\omega_1}$. Can we separate $\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$ from $\mathsf{stat}\text{-}\mathsf{FA}_{\omega_1}$ (see Question \ref{Question ubFA versus statFA})? Can $\mathsf{ub}\text{-}\mathsf{FA}_{\omega_1}$ be nontrivial but not imply $\mathsf{FA}_{\omega_1}$? It remains to study other forcings adding reals and Baumgartner's forcing \cite[Section 3]{baumgartner1984applications} (see Question \ref{Question Baumgartner's forcing}).

The stationary name principle $\mathsf{stat}\text{-}\mathsf{N}_{\omega_1}$ follows from the forcing axiom $\mathsf{FA}_{\omega_1}$ for some classes of forcings. For example, for the class of c.c.c. forcings, both $\mathsf{stat}\text{-}\mathsf{N}_{\omega_1}$ and $\mathsf{FA}^+_{\omega_1}$ are equivalent to $\mathsf{FA}_{\omega_1}$ by results of Baumgartner (see Lemma \ref{Baumgartner's lemma}) and of Todor\v{c}evi\'c and Veli\v{c}kovi{\'c} \cite{todorcevic1987martin} (see Lemma \ref{characterisation of precaliber}). In general, $\mathsf{FA}^+$ goes beyond the forcing axiom, since being stationary is not first-order over $(\kappa,\in)$. For example, for the class of proper forcings, $\axiomft{PFA}^+$ is strictly stronger than $\axiomft{PFA}$ by results of Beaudoin \cite[Corollary 3.2]{beaudoin1991proper} and Magidor (see \cite{shelah1987semiproper}). So $\mathsf{FA}^+$ and $\mathsf{BFA}^+$ do not fall within the scope of generic absoluteness principles, unless one artificially adds a predicate for the nonstationary ideal. Can one formulate $\axiomft{PFA}^+$ as a generic absoluteness or name principle for a logic beyond first order?

Some questions remain about the weak variant $\mathsf{stat}\text{-}\mathsf{BN}_{\mathbb{P},\omega_1}^1$ of $\mathsf{stat}\text{-}\mathsf{N}_{\omega_1}$. It is nontrivial for random forcing (see Lemma \ref{CH implies failure of statN(random)}) and for Suslin trees (see Corollary \ref{Suslin trees}). What is its relation to other principles? Does $\mathsf{stat}\text{-}\mathsf{BN}_{c.c.c.,\omega_1}^1$ imply $\axiomft{MA}_{\omega_1}$?

\bibliographystyle{plain}