\section{Introduction}
\noindent The vertical dynamics of a free falling ball on a moving racket is considered. The racket is supposed to
move periodically in the vertical direction according to a regular periodic function $f(t)$ and
the ball is reflected according to the law of elastic bouncing when hitting the
racket. The only force acting on the ball is the gravity $g$. Moreover, the mass of the racket is assumed to be large with respect to the mass of the
ball so that the impacts do not affect the motion of the racket.
\noindent This model has inspired many authors as it represents a simple model exhibiting complex dynamics, depending on the properties of the function $f$. The first results were given by Pustyl'nikov in \cite{pust}, who studied the possibility of motions with velocity tending to infinity for $\df$ large enough. On the other hand, KAM theory implies that if the $C^k$ norm of $\df$ is small for $k$ large, then all motions are bounded. Along these lines, some recent results are given in \cite{ma_xu,maro4,maro6}. Bounded motions can be regular (periodic and quasiperiodic, see \cite{maro3}) or chaotic (see \cite{maro5,maro2,ruiztorres}). Moreover, the non-periodic case is studied in \cite{kunzeortega2,ortegakunze}, the case of different potentials is considered in \cite{dolgo}, and recent results on ergodic properties are presented in \cite{studolgo}.
In this paper we are concerned with $(p,q)$-periodic motions, understood as $p$-periodic motions with $q$ bounces in each period. Here $p,q$ are supposed to be positive coprime integers. In \cite{maro3} it is proved that if $p/q$ is sufficiently large, then there exists at least one $(p,q)$-periodic motion. This result comes from an application of Aubry-Mather theory as presented in \cite{bangert}. Actually, the bouncing motions correspond to the orbits of an exact symplectic twist map of the cylinder. The orbits of such maps can be found as critical points of an action functional, and the $(p,q)$-periodic orbits found in \cite{maro3} correspond to minima. Here we first note that a refined version of Aubry-Mather theory (see \cite{katok_hass}) gives, for each pair of coprime integers $p,q$ such that $p/q$ is large, the existence of another $(p,q)$-periodic orbit that is not a minimum, since it is found via a minimax argument. This gives the existence of two different $(p,q)$-periodic motions for fixed values of $p,q$, with $p/q$ large.
We are first interested in the stability in the sense of Lyapunov of such periodic motions. This is related to the structure of the set of $(p,q)$-periodic orbits of the corresponding exact symplectic twist map, for fixed $p,q$. It follows from Aubry-Mather theory that the $(p,q)$-periodic orbits that are minima can be ordered, and if there are two with no other in the middle, then they are connected by heteroclinic orbits. In this case they are unstable. On the other hand, $(p,q)$-periodic orbits can form an invariant curve. In this case they are all minima, but their stability cannot be determined as before since we are in a degenerate scenario. However, in the real analytic case, a topological argument (see \cite{ortega_fp,ortega_book}) can be used to deduce instability. More precisely, we will use the fact that for a real analytic area and orientation preserving embedding of the plane that is not the identity, every stable fixed point is isolated. This is where the hypothesis of $f$ being real analytic comes into play.
Concerning the structure of the set of $(p,q)$-periodic motions, we prove that in the real analytic case they can only be either isolated or degenerate, in the sense that the corresponding orbits form an invariant curve that is the graph of a real analytic function. As before, in the isolated case at least one is unstable, and in the degenerate case they all are minima and all are unstable. Note that this result differs from Aubry-Mather theory since we are not requiring the orbits to be minima of a functional. To prove this result we need the $q$-th iterate of the map to be twist. For $q=1$ this is true for every real analytic $f$, while for the general case $q>1$ we need to restrict to $\norm{\ddf}$ being small.
The paper is organized as follows. In Section \ref{sec:theory} we recall some known facts about exact symplectic twist maps together with the results for the analytic case. In Section \ref{sec:tennis} we introduce the bouncing ball map and describe its main properties. Finally, the results on the existence of two $(p,q)$-periodic motions, the instability and the structure of the set are given in Section \ref{sec:per}.
\section{Some results on periodic orbits of exact symplectic twist maps}\label{sec:theory}
Let us denote by $\Sigma=\RR\times(a,b)$ with $-\infty\leq a<b\leq +\infty$ a possibly unbounded strip of $\RR^2$. We will deal with $C^k$ ($k\geq 1$) or real analytic embeddings $\tilde{S}:\Sigma \rightarrow \RR^2$ such that
\begin{equation}\label{def_cyl}
\tilde{S}\circ\sigma=\sigma\circ \tilde{S}
\end{equation}
where $\sigma:\RR^2\rightarrow\RR^2$ and $\sigma(x,y)=(x+1,y)$. By this latter property, $\tilde{S}$ can be seen as the lift of an embedding $S:\Sigma\rightarrow\AA$ where $\AA = \TT\times \RR$ with $\TT = \RR/\ZZ$ and $\Sigma$ is now understood as the corresponding strip of the cylinder. We denote $\tilde{S}(x,y)=(\bar{x},\bar{y})$ and the corresponding orbit by $(x_n,y_n)_{n\in\ZZ}$.
We say that $\tilde{S}$ is exact symplectic if there exists a $C^1$ function $V:\Sigma\rightarrow \RR$ such that $V\circ\sigma=V$ and
\[
\bar{y} d \bar{x} -y dx = dV(x,y) \quad\mbox{in }\Sigma.
\]
Moreover, by the (positive) twist condition we understand
\[
\frac{\partial \bar{x}}{\partial y} >0 \quad\mbox{in }\Sigma.
\]
A negative twist condition would give analogous results. The exact symplectic condition implies that $\tilde{S}$ preserves the two-form $dy\wedge dx$ so that it is area and orientation preserving.
An equivalent characterization is the existence of a generating function, i.e. a $C^2$ function $h:\Omega\subset \RR^2\rightarrow \RR$ such that $h(x+1,\bar{x}+1) = h(x,\bar{x})$ and $h_{12}(x,\bar{x}) <0$ in $\Omega$, and such that for $(x,y)\in\Sigma$ we have $\tilde{S}(x,y) = (\bar{x},\bar{y})$ if and only if
\[
\left\{
\begin{split}
h_1(x,\bar{x})&=-y \\
h_2(x,\bar{x})&=\bar{y}.
\end{split}
\right.
\]
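As a simple illustration of these notions (a standard example, independent of the rest of the paper), consider the lift of the integrable twist map, $\tilde{S}(x,y)=(x+y,y)$. It is exact symplectic with $V(x,y)=\frac{1}{2}y^2$, since
\[
\bar{y}\, d\bar{x}-y\, dx = y\, d(x+y)-y\, dx = y\, dy = d\left(\tfrac{1}{2}y^2\right),
\]
it satisfies the twist condition since $\partial\bar{x}/\partial y=1>0$, and it is generated by $h(x,\bar{x})=\frac{1}{2}(\bar{x}-x)^2$: indeed $h_1(x,\bar{x})=-(\bar{x}-x)=-y$, $h_2(x,\bar{x})=\bar{x}-x=\bar{y}$ and $h_{12}=-1<0$.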
Moreover, $\tilde{S}$ preserves the ends of the cylinder if, uniformly in $x$,
\[
\bar{y}(x,y) \rightarrow \pm\infty \quad\mbox{as }y\rightarrow \pm\infty
\]
and twists each end infinitely if, uniformly in $x$,
\[
\bar{x}(x,y)-x \rightarrow \pm\infty \quad\mbox{as }y\rightarrow \pm\infty.
\]
Finally we will say that an embedding of the cylinder $S:\Sigma\rightarrow\AA$ satisfies any of these properties if so does its corresponding lift.
These maps enjoy several properties and many interesting orbits are proved to exist. Here we recall some results concerning periodic orbits. We start with the following
\begin{definition}
Fix two coprime integers $p,q$ with $q\neq 0$. An orbit $(x_n,y_n)_{n\in\ZZ}$ of $\tilde{S}$ is said to be $(p,q)$-periodic if $(x_{n+q},y_{n+q})=(x_n+p,y_n)$ for every $n\in\ZZ$. Moreover, we say that it is stable (in the sense of Lyapunov) if for every $\varepsilon>0$ there exists $\delta>0$ such that for every $(\hat{x}_0,\hat{y}_0)$ satisfying $|(x_0,y_0)-(\hat{x}_0,\hat{y}_0)|<\delta$ we have $|\tilde{S}^n(\hat{x}_0,\hat{y}_0)-({x}_n,{y}_n)|<\varepsilon$ for every $n\in\ZZ$.
\end{definition}
\begin{remark}\label{rem_periodic}
Note that $(p,q)$-periodic orbits correspond to fixed points of the map $\sigma^{-p}\circ \tilde{S}^q$. This follows from the fact that $\tilde{S}$ is a diffeomorphism defined on the cylinder. Each point of the orbit is a fixed point of $\sigma^{-p}\circ\tilde{S}^q$, and a fixed point of $\sigma^{-p}\circ\tilde{S}^q$ is the initial condition of a $(p,q)$-periodic orbit. Note that different fixed points may correspond to the same orbit but not vice versa. Moreover, if an orbit is $(p,q)$-periodic then it cannot also be $(p',q')$-periodic unless $p/q=p'/q'$. Indeed, let $\xi=(x,y)$ and suppose that $\xi=\sigma^{-p}\circ\tilde{S}^q(\xi)=\sigma^{-p'}\circ \tilde{S}^{q'}(\xi)$. Then $\sigma^{p}(\xi)=\tilde{S}^q(\xi)$ and $\sigma^{p'}(\xi)=\tilde{S}^{q'}(\xi)$, from which $\tilde{S}^{qq'}(\xi)=\sigma^{pq'}(\xi)=\sigma^{p'q}(\xi)$ and hence $pq'=p'q$.
Finally, the stability of a $(p,q)$-periodic orbit corresponds to the stability in the sense of Lyapunov of the corresponding fixed point of the map $\sigma^{-p}\circ\tilde{S}^q$.
\end{remark}
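The correspondence of Remark \ref{rem_periodic} can be checked numerically. The following sketch uses the integrable twist lift $\tilde{S}(x,y)=(x+y,y)$ as a hypothetical example (it is not the bouncing-ball map): the orbit with $y=p/q$ is $(p,q)$-periodic, and its initial point is fixed by $\sigma^{-p}\circ\tilde{S}^q$.

```python
# Integrable twist lift S(x, y) = (x + y, y); here sigma(x, y) = (x + 1, y).
def S(x, y):
    return x + y, y

p, q = 5, 2                      # coprime integers
x0, y0 = 0.25, p / q             # initial condition with rotation number p/q
x, y = x0, y0
for _ in range(q):               # compute S^q (x0, y0)
    x, y = S(x, y)
# S^q(x0, y0) = sigma^p(x0, y0): the point is fixed by sigma^{-p} o S^q,
# hence it is the initial condition of a (p, q)-periodic orbit.
fixed = (x - p, y) == (x0, y0)
```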
A particular class of periodic orbits are the so called Birkhoff periodic orbits.
\begin{definition}
Fix two coprime integers $p,q$ with $q\neq 0$. An orbit $(x_n,y_n)_{n\in\ZZ}$ of $\tilde{S}$ is said to be a Birkhoff $(p,q)$-periodic orbit if there exists a sequence $(s_n,u_n)_{n\in\ZZ}$ such that
\begin{itemize}
\item $(s_0,u_0)=(x_0,y_0)$
\item $s_{n+1}>s_n$
\item $(s_{n+q},u_{n+q})=(s_{n}+1,u_{n})$
\item $(s_{n+p},u_{n+p})=\tilde{S}(s_{n},u_{n})$
\end{itemize}
\end{definition}
\begin{remark}
Note that a Birkhoff $(p,q)$-periodic orbit is a $(p,q)$-periodic orbit since
\[
(x_{n+q},y_{n+q})=(s_{np+qp},u_{np+qp})=(s_{np}+p,u_{np})=\tilde{S}^n(s_0,u_0)+(p,0)=(x_{n}+p,y_{n}).
\]
\end{remark}
The existence of Birkhoff $(p,q)$-periodic orbits comes from Aubry-Mather theory. Here we give a related result, see \cite[Th.~9.3.7]{katok_hass}.
\begin{theorem}\label{birk_orbits}
Let $S:\AA\rightarrow\AA$ be an exact symplectic twist diffeomorphism that preserves and twists the ends infinitely and let $p,q$ be two coprime integers. Then there exist at least two Birkhoff $(p,q)$-periodic orbits for $S$.
\end{theorem}
\begin{remark}
The theorem is proved via variational techniques. The periodic orbits correspond to critical points of an action defined in terms of the generating function. One of these points is a minimum and the other is a minimax if the critical points are isolated.
\end{remark}
In the analytic case, something more can be said on the topology of these orbits.
\begin{proposition}\label{unstable}
Let $\tilde{S}:\Sigma\rightarrow\RR^2$ be a real analytic exact symplectic twist embedding satisfying condition \eqref{def_cyl} and admitting a $(p,q)$-periodic orbit. Then there exists at least one $(p,q)$-periodic orbit that is unstable.
\end{proposition}
\begin{proof}
The proof is essentially given in \cite{maroTMNA,ortega_fp,ortega_book}; we give here a sketch. It is enough to prove that there exists at least one unstable fixed point of the area and orientation preserving one-to-one real analytic map $\sigma^{-p}\circ \tilde{S}^q$. Let us first note that since $\tilde{S}$ is twist, the map $\sigma^{-p}\circ \tilde{S}^q$ is not the identity. Actually, it is known (see for example \cite{herman}) that if $\tilde{S}$ is twist, then the image of a vertical line under $\tilde{S}^q$ is a positive path, i.e. a curve such that the angle between the tangent vector and the vertical is always positive. This implies that $\tilde{S}^q$ cannot be a horizontal translation.\\
By hypothesis the set of fixed points of $\sigma^{-p}\circ \tilde{S}^q$ is not empty so that applying \cite[Chapter 4.9, Theorem 15]{ortega_book} we deduce that every stable fixed point is an isolated fixed point.\\
Hence, if there exists some non-isolated fixed point, it must be unstable. Finally, suppose that we only have isolated fixed points that are all stable. From \cite[Chapter 4.5, Theorem 12]{ortega_book} they must all have index $1$. On the other hand, the Euler characteristic of the cylinder is null, so by the Poincar\'e-Hopf index formula we have a contradiction. Hence, there must exist at least one fixed point that is unstable.
\end{proof}
\begin{corollary}
In the conditions of Theorem \ref{birk_orbits}, if $S$ is real analytic then there exists at least one unstable $(p,q)$-periodic orbit.
\end{corollary}
In the analytic case, the twist condition gives information on the structure of the set of $(p,q)$-periodic orbits. Actually, the following result has been proved in \cite{maroTMNA,ortega_pb}.
\begin{theorem}\label{maro_teo}
Consider a $C^1$-embedding $\tilde{S}:\Sigma\rightarrow\RR^2$ satisfying property \eqref{def_cyl} and suppose it is exact symplectic and twist. Fix a positive integer $p$ and assume that for every $x\in\RR$ there exists $y\in (a,b)$ such that
\begin{equation}\label{maro_cond}
\bar{x}(x,a)<x+p<\bar{x}(x,y).
\end{equation}
Then the map $\sigma^{-p}\circ\tilde{S}$ has at least two fixed points in $[0,1)\times (a,b)$. Moreover, if $\tilde{S}$ is real analytic then the set of fixed points is finite or the graph of a real analytic $1$-periodic function. In the first case the index of such fixed points is either $-1$, $0$, or $1$, and at least one is unstable. In the second case, all the fixed points are unstable.
\end{theorem}
\begin{remark}
Aubry-Mather theory gives a description, for fixed $p,q$, of those $(p,q)$-periodic orbits that are global minimizers. They can be ordered, and if two of them are neighbouring, in the sense that there is no other minimal $(p,q)$-periodic orbit in the middle, then there are heteroclinic connections between them (see \cite{bangert,katok_hass}). In this case, the $(p,q)$-periodic orbits are unstable. On the other hand, they can form an invariant curve. We stress that in the analytic case Theorem \ref{maro_teo} gives the description of the set of all $(p,q)$-periodic orbits, not only those that are action minimizing.
\end{remark}
\section{The bouncing ball map and its properties}\label{sec:tennis}
Consider the motion of a bouncing ball on a vertically
moving racket. We assume that the impacts do not affect the racket, whose vertical position is described by a
$1$-periodic $C^k$, $k\geq 2$, or real analytic function $f:\RR\rightarrow\RR$. Let us start by deriving the equations of motion, following \cite{maro5}. In an inertial frame, denote by $(\to,w)$ the time of an impact and the corresponding velocity just after the bounce, and by
$(\Pto,\bar w)$ the corresponding values at the subsequent bounce.
From
the free falling condition we have
\begin{equation}\label{timeeq}
f(t) + w(\Pto-\to) - \frac{g}{2}(\Pto -\to)^2 = f(\Pto) \,,
\end{equation}
where $g$ stands for the standard acceleration due to gravity.
Noting that the velocity just before the impact at time $\Pto$ is $w-g(\Pto-\to)$, using
the elastic impact condition and recalling that the racket is not affected by the ball, we obtain
\begin{equation}
\label{veleq}
\bar{w}+w-g(\Pto-\to) = 2\dot{f}(\Pto)\,,
\end{equation}
where $\dot{}$ stands for the derivative with respect to time. From conditions (\ref{timeeq}-\ref{veleq}) we can define a bouncing motion given an initial condition $(t,w)$ in the following way. If $w\leq\df(t)$ then we set $\bar{t}=t$ and $\bar{w}=w$. If $w>\df(t)$, we claim that we can choose $\bar{t}$ to be the smallest solution $\bar{t}> t$ of \eqref{timeeq}. Bolzano's theorem gives the existence of a solution of \eqref{timeeq}: consider
\[
F_t(\bar{t})=f(t)-f(\Pto) + w(\Pto-\to) - \frac{g}{2}(\Pto -\to)^2
\]
and noting that $F_t(\bar{t})<0$ for $\Pto-\to$ large and $F_t(\bar{t})>0$ for $\Pto-\to\rightarrow 0^+$. Moreover, the infimum $\bar{t}$ of all these solutions is strictly larger than $t$ since if there exists a sequence $\bar{t}_n\rightarrow t$ satisfying \eqref{timeeq} then,
\[
w - \frac{g}{2}(\Pto_n -\to) = (f(\Pto_n)- f(t)) /(\Pto_n-\to)
\]
which, as $n\rightarrow\infty$, contradicts $w>\df(t)$ by the mean value theorem.
For this value of $\bar{t}$, condition \eqref{veleq} gives the updated velocity $\bar{w}$.
For $\Pto-\to>0$, we introduce the notation
\[
f[\to,\Pto]=\frac{f(\Pto)-f(\to)}{\Pto-\to},
\]
and write
\begin{equation}
\label{w1}
\Pto = \to + \frac 2g w -\frac 2g f[\to,\Pto]\,,
\end{equation}
that also gives
\begin{equation}
\label{w2}
\bar{w}= w -2f[\to,\Pto] + 2\dot{f}(\Pto).
\end{equation}
Now we change to the moving frame attached to the racket, where the velocity after the impact is expressed as $v=w-\dot{f}(t)$,
and we get
the equations
\begin{equation}\label{eq:unb}
\left\{
\begin{split}
\Pto = {} & \to + \frac 2g \vo-\frac 2g f[\to,\Pto]+\frac 2g \df(\to)\,
\\
\Pvo = {} & \vo - 2f[\to,\Pto] + \df (\Pto)+\df(\to)\,.
\end{split}
\right.
\end{equation}
By the periodicity of the
function $f$, the coordinate $t$ can be seen as an angle. Hence, equations \eqref{eq:unb} formally define a map
\[
\begin{array}{rcl}
\tilde\Psi:
\RR^2 & \longrightarrow & \RR^2 \\
(\to,\vo) & \longmapsto & (\Pto, \Pvo),
\end{array}
\]
satisfying $\tilde\Psi\circ\sigma=\sigma\circ\tilde\Psi$ and the associated map of the cylinder $\Psi:\AA\rightarrow\AA$.
This is the formulation considered by Kunze and Ortega
\cite{kunzeortega2}. Another approach was considered by Pustyl'nikov in
\cite{pust} and leads to a map that is equivalent to
\eqref{eq:unb}, see \cite{maro3}.
Noting that $w>\df(t)$ if and only if $v>0$, we can define a bouncing motion as before and denote it as a sequence $(t_n,v_n)_{n\in\ZZ^+}$ with $\ZZ^+=\{n\in\ZZ \::\: n\geq 0\}$ such that $(t_n,v_n)\in \TT\times [0,+\infty)$ for every $n\in\ZZ^+$.
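Before discussing well-posedness, the iteration defined by \eqref{timeeq} and \eqref{veleq} can be sketched numerically. The following Python fragment is an illustration only: the racket profile $f(t)=A\sin(2\pi t)$ with $A=0.01$ and the value $g=9.8$ are assumptions of this sketch, not data from the paper. It computes the smallest impact time by bracketing and bisection, mirroring the Bolzano argument above, and then updates the velocity by the elastic reflection rule.

```python
import math

g = 9.8          # gravity (illustrative value)
A = 0.01         # racket amplitude (illustrative assumption)

def f(t):        # 1-periodic racket position
    return A * math.sin(2 * math.pi * t)

def df(t):       # racket velocity \dot f(t)
    return 2 * math.pi * A * math.cos(2 * math.pi * t)

def next_impact(t, w):
    """Smallest s = tbar - t > 0 solving f(t) + w s - (g/2) s^2 = f(t + s)."""
    F = lambda s: f(t) + w * s - 0.5 * g * s ** 2 - f(t + s)
    # F > 0 for small s > 0 (since w > \dot f(t)) and F < 0 for s large:
    # bracket the sign change, then bisect.
    s_lo, s_hi = 1e-9, 1e-3
    while F(s_hi) > 0:
        s_lo, s_hi = s_hi, 1.1 * s_hi
    for _ in range(100):
        s_mid = 0.5 * (s_lo + s_hi)
        if F(s_mid) > 0:
            s_lo = s_mid
        else:
            s_hi = s_mid
    return 0.5 * (s_lo + s_hi)

def bounce(t, w):
    """One step (t, w) -> (tbar, wbar) of the bouncing-ball iteration."""
    s = next_impact(t, w)
    tbar = t + s
    wbar = 2.0 * df(tbar) - w + g * s   # elastic reflection, eq. (veleq)
    return tbar, wbar

t, w = 0.0, 5.0                         # sample initial impact time and velocity
for _ in range(10):
    t, w = bounce(t, w)
```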
The map $\Psi$ and its lift $\tilde{\Psi}$ are so far only defined formally. In the following lemma we state that they are well defined and enjoy some regularity.
Let us introduce the notation $\RR_{v_*}=\{v\in\RR \: :\: v>v_* \}$, $\AA_{v_*} = \TT\times \RR_{v_*}$ and $\RR^2_{v_*}=\RR\times\RR_{v_*}$. We will denote the $\sup$ norm by $\norm{\cdot}$ and recall that $f\in C^k(\TT), k\geq 2$ or real analytic.
\begin{lemma}
\label{well_def}
There exists $v_*>4\norm{\df}$ such that the map $\Psi:\AA_{v_*}\rightarrow \AA$ is a $C^{k-1}$ embedding. If $f$ is real analytic, then $\Psi$ is a real analytic embedding.
\end{lemma}
\begin{proof}
The proof is essentially given in \cite{maro5}. We give here a sketch.
To prove that the map is well defined and $C^{k-1}$ we denote $v_{**}=4\norm{\df}$ and apply the implicit function theorem to the $C^{k-1}$ function $F :\{(\to,\vo,\Pto,\Pvo)\in \AA_{v_{**}} \times \RR^2 \: :\:\to\neq\Pto \} \rightarrow \RR^2$
given by
\begin{equation*}
F(\to,\vo,\Pto,\Pvo):=
\left(
\begin{aligned}
& \Pto - \to - \frac 2g \vo + \frac 2g f[\to,\Pto]-\frac 2g \df (\to) \\
& \Pvo - \vo + 2f[\to,\Pto] - \df(\Pto)-\df(\to)
\end{aligned}
\right).
\end{equation*}
This gives the existence of a $C^{k-1}$ map $\Psi$ defined in $\AA_{v_{**}}$ such that $F(\to,\vo,\Psi(\to,\vo) )=0$. If $f$ is real analytic, we get that $\Psi$ is real analytic by applying the analytic version of the implicit function theorem. \\
One can easily check that $\Psi$ is a local diffeomorphism since
\[
\det \Dif_{\to,\vo} \Psi (t,v) =-\frac{\det (\Dif_{\to,\vo} F(\to,\vo,\Psi(\to,\vo) ))}{ \det (\Dif_{\Pto,\Pvo} F(\to,\vo,\Psi(\to,\vo) ))} \neq 0 \quad\mbox{on }\AA_{v_{**}}.
\]
To prove that $\Psi$ is a global embedding we need to prove that it is injective in $\AA_{v_*}$ for some $v_*$ possibly larger than $v_{**}$. This can be done as in \cite{maro5}.
\end{proof}
\begin{remark}
Note that we cannot guarantee that if $(\to_0,\vo_0) \in \AA_{v_*}$ then $\Psi(\to_0,\vo_0) \in \AA_{v_*}$. This is reasonable, since the ball can slow down, decreasing its velocity at every bounce.
However, a bouncing motion is defined for $v\geq 0$.
\end{remark}
\begin{remark}
From the physical point of view, the condition $\Psi^n(\to_0,\vo_0)\in\AA_{v_*}$ for every $n$ implies that we can only hit the ball when it is falling. To see this, suppose that $\to_0 =0$ and let us see what happens at the first iterate. The time at which the ball reaches its maximum height is $t^{max}=\frac{\vo_0}{g}$. On the other hand, the first impact time $\Pto$ satisfies
\[
\Pto \geq \frac{2}{g}\vo_0 - \frac{4}{g}\norm{\df}= t^{max}\left( 2-\frac{4}{\vo_0}\norm{\df} \right) > t^{max},
\]
where the last inequality comes from $\vo_0\in\RR_{v_*}$ and $v_*>4\norm{\df}$.
\end{remark}
The map $\tilde\Psi$ is exact symplectic if we pass to the time-energy variables $(\to,\Eo)$ defined by
\[
(\to,\Eo) = \left(\to,\frac{1}{2}\vo^2\right),
\]
obtaining the conjugated map
\[
\Phi :\AA_{\Eo_*}
\longrightarrow \AA, \qquad \Eo_*=\frac{1}{2}v_*^2
\]
defined by
\begin{equation}\label{eq:unbe}
\left\{
\begin{split}
\Pto = & \to + \frac 2g \sqrt{2\Eo}-\frac 2g f[\to,\Pto]+\frac 2g \df(\to)
\\
\PEo = & \frac{1}{2}\left( \sqrt{2\Eo} - 2f[\to,\Pto] + \df (\Pto)+\df(\to) \right)^2,
\end{split}
\right.
\end{equation}
which by Lemma \ref{well_def} is a $C^{k-1}$ embedding, and real analytic if $f$ is real analytic. More precisely, we have the following
\begin{lemma}
\label{lemma:exact}
The map $\Phi$ is exact symplectic and twist in $\AA_{e_*}$.
Moreover, $\Phi$ preserves and twists infinitely the upper end.
\end{lemma}
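Since exact symplecticity implies that $\Phi$ preserves the two-form $de\wedge dt$, its Jacobian determinant equals one wherever the map is defined, and the twist condition means $\partial\Pto/\partial\Eo>0$. Both facts can be sanity-checked numerically with central finite differences; the sketch below again assumes the illustrative choices $f(t)=0.01\sin(2\pi t)$ and $g=9.8$, which are not taken from the paper.

```python
import math

g = 9.8          # gravity (illustrative value)
A = 0.01         # racket amplitude (illustrative assumption)

def f(t):  return A * math.sin(2 * math.pi * t)
def df(t): return 2 * math.pi * A * math.cos(2 * math.pi * t)

def next_impact(t, w):
    """Smallest s > 0 with f(t) + w s - (g/2) s^2 = f(t + s), by bisection."""
    F = lambda s: f(t) + w * s - 0.5 * g * s ** 2 - f(t + s)
    s_lo, s_hi = 1e-9, 1e-3
    while F(s_hi) > 0:
        s_lo, s_hi = s_hi, 1.1 * s_hi
    for _ in range(100):
        s_mid = 0.5 * (s_lo + s_hi)
        if F(s_mid) > 0:
            s_lo = s_mid
        else:
            s_hi = s_mid
    return 0.5 * (s_lo + s_hi)

def Phi(t, e):
    """Time-energy map (t, e) -> (tbar, ebar)."""
    v = math.sqrt(2.0 * e)       # racket-frame velocity after the bounce
    w = v + df(t)                # inertial velocity
    s = next_impact(t, w)
    tbar = t + s
    wbar = 2.0 * df(tbar) - w + g * s
    vbar = wbar - df(tbar)
    return tbar, 0.5 * vbar ** 2

# Central finite-difference Jacobian at a sample point: det should be ~1,
# and the twist entry d(tbar)/d(e) should be positive.
t0, e0, h = 0.3, 12.0, 1e-6
a11 = (Phi(t0 + h, e0)[0] - Phi(t0 - h, e0)[0]) / (2 * h)
a12 = (Phi(t0, e0 + h)[0] - Phi(t0, e0 - h)[0]) / (2 * h)
a21 = (Phi(t0 + h, e0)[1] - Phi(t0 - h, e0)[1]) / (2 * h)
a22 = (Phi(t0, e0 + h)[1] - Phi(t0, e0 - h)[1]) / (2 * h)
det = a11 * a22 - a12 * a21
```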
The map $\Phi$ is not defined in the whole cylinder. However, it is possible to extend it to the whole cylinder preserving its properties. More precisely:
\begin{lemma}
\label{extension}
There exists a $C^{k-2}$ exact symplectic and twist diffeomorphism $\bar{\Phi}:\AA\rightarrow\AA$ such that $\bar{\Phi} \equiv \Phi$ on $\AA_{e_*}$ and $\bar{\Phi}\equiv\Phi_0$ on $\AA\setminus\AA_{\frac{e_*}{2}}$ where $\Phi_0$ is the integrable twist map $\Phi_0(t,e)=(t+e,e)$. Moreover, $\bar{\Phi}$ preserves the ends of the cylinder and twists them infinitely. If $f$ is real analytic, then the extension $\bar{\Phi}$ is $C^\infty$.
\end{lemma}
Due to Lemma \ref{well_def} and the fact that the maps $\Phi$ and $\Psi$ are conjugated we can consider the lift $\tilde{\Phi}:\RR^2_{e_*}\rightarrow\RR^2$ and give the following
\begin{definition}
A complete bouncing motion $(t_n,e_n)_{n\in\ZZ}$ is a complete orbit of the map $\tilde{\Phi}$.
\end{definition}
In the following section we will study the existence and properties of periodic complete bouncing motions as orbits of the map $\tilde{\Phi}$ defined in \eqref{eq:unbe}.
\section{Periodic bouncing motions}\label{sec:per}
The existence of periodic complete bouncing motions follows from an application of Aubry-Mather theory. In this section we prove it and in the analytic case we give some results on the structure of such motions and their stability.
We start by saying that a complete bouncing motion is $(p,q)$-periodic if in time $p$ the ball makes $q$ bounces before repeating the motion; more precisely:
\begin{definition}
Given two coprime integers $p,q\in\ZZ^+$, a complete bouncing motion $(t_n,e_n)_{n\in\ZZ}$ is $(p,q)$-periodic if the corresponding orbit of $\tilde{\Phi}$ is $(p,q)$-periodic. Moreover, we say that it is stable if the corresponding orbit is stable.
\end{definition}
The existence of two $(p,q)$-periodic complete bouncing motions comes from an application of Theorem \ref{birk_orbits}.
\begin{theorem}\label{pre_bouncing}
For every $f\in C^3$ there exists $\alpha>0$ such that for every pair of positive coprime integers $p,q$ satisfying $p/q>\alpha$ there exist two different $(p,q)$-periodic complete bouncing motions. Moreover, if $f$ is real analytic, then at least one of the $(p,q)$-periodic complete bouncing motions is unstable.
\end{theorem}
\begin{proof}
By Lemma \ref{lemma:exact}, the map $\Phi$ defined in \eqref{eq:unbe} is a $C^2$ exact symplectic twist embedding in $\AA_{e_*}$ for some large $e_*$ depending on $\norm{\df}$. Moreover, $\Phi$ preserves and twists infinitely the upper end. Its extension $\bar{\Phi}$ coming from Lemma \ref{extension} satisfies the hypotheses of Theorem \ref{birk_orbits} and admits, for every pair of coprime integers $p,q$, two Birkhoff $(p,q)$-periodic orbits. Consider the Birkhoff $(p,q)$-periodic orbits for $p,q$ positive and $p/q$ large enough that
\begin{equation}\label{choice_pq}
\frac{p}{q}-1-\frac{4}{g}\norm\df>\frac{2}{g}\sqrt{2e_*}.
\end{equation}
Since Birkhoff periodic orbits are cyclically ordered, from \cite[Lemma 9.1]{gole} we have that they satisfy the estimate $t_{n+1}-t_n>p/q - 1$. On the other hand, from \eqref{eq:unbe},
\[
t_{n+1}-t_n\leq\frac{2}{g}\sqrt{2e_n}+\frac{4}{g}\norm\df
\]
so that necessarily
\[
\frac{2}{g}\sqrt{2e_n}+\frac{4}{g}\norm\df>\frac{p}{q}-1.
\]
By the choice of $p/q$ in \eqref{choice_pq} we have that $e_n>e_*$ for every $n\in\ZZ$ so that these Birkhoff periodic orbits are all contained in $\AA_{e_*}$ and so they are orbits of the original map $\Phi$. If $f$ is real analytic the result on the instability follows from Proposition \ref{unstable}.
\end{proof}
Theorem \ref{pre_bouncing} gives the existence of $(p,q)$-periodic bouncing motions but does not give information on the topological structure of the set of $(p,q)$-periodic bouncing motions for fixed values of $(p,q)$. This is a complicated issue and some results come from Aubry-Mather theory. Here, however, we will see which results can be obtained using Theorem \ref{maro_teo}. To state them we give the following
\begin{definition}
We say that the set of $(p,q)$-periodic complete bouncing motions is (analytically) degenerate if there exists a real analytic curve $(t(s),e(s))$ such that $(t(s+1),e(s+1))=(t(s)+1,e(s))$ for every $s\in\RR$, the function $t(s)$ is bijective for $s\in[0,1)$ and $(t_n,e_n)_{n\in\ZZ}$ is a $(p,q)$-periodic complete bouncing motion if and only if there exist $n_0,s_0$ such that $(t_{n_0},e_{n_0})=(t(s_0),e(s_0))$.
\end{definition}
The following result is a quite direct consequence of Lemma \ref{lemma:exact}.
\begin{proposition}
If $f$ is real analytic, then there exists $\alpha>0$ such that for every $p>\alpha$ the set of $(p,1)$-periodic complete bouncing motions is either finite or degenerate. In the first case at least one $(p,1)$-periodic complete bouncing motion is unstable. In the degenerate case, all $(p,1)$-periodic complete bouncing motions are unstable.
\end{proposition}
\begin{proof}
By Lemma \ref{lemma:exact} the map $\tilde{\Phi}$ is exact symplectic and twist on $\RR^2_{e_*}$. Moreover, let us choose $a$ such that $e_*<\sqrt{2a}$ and $p$ such that
\[
\frac{gp}{2}-2\norm{\dot{f}}>\sqrt{2a}.
\]
Let us start with the following estimates for the lift $\tilde{\Phi}$ that can be easily proved by induction on $n$ from \eqref{eq:unbe}:
\begin{equation}\label{stim_t_e}
|\sqrt{2e_n}-\sqrt{2e}|\leq 4n\norm{\dot{f}}, \qquad \left|t_n-t-\frac{2}{g}n\sqrt{2e}\right|\leq 4n^2\frac{\norm{\dot{f}}}{g}.
\end{equation}
These give
\[
\bar{t}(t,a)-t\leq \frac{2}{g}\sqrt{2a}+4\frac{\norm{\dot{f}}}{g}<p.
\]
On the other hand, Lemma \ref{lemma:exact} also gives that $\tilde{\Phi}$ twists the upper end infinitely, i.e. $\lim_{e\rightarrow +\infty}\bar{t}(t,e)-t=+\infty$ uniformly in $t$. Hence, condition \eqref{maro_cond} is satisfied in the strip $\RR\times(a,+\infty)$ for every $p$. The conclusion comes from the application of Theorem \ref{maro_teo} and the fact that $(p,1)$-periodic complete bouncing motions correspond to the fixed points of the map $\sigma^{-p}\circ\tilde{\Phi}$.
\end{proof}
This result does not trivially extend to $(p,q)$-periodic motions for $q>1$, since $\Phi^q$ needs to be exact symplectic and twist. The twist condition is in general not preserved by composition, while exactness is, as shown in the following result, inspired by \cite{bosc_ort}.
\begin{lemma}\label{lemma_isot_exact}
For every $q>0$ there exists $e_\#\geq e_*$ such that for every $p>0$ the map $\sigma^{-p}\circ\tilde{\Phi}^q:\RR^2_{e_\#}\rightarrow\RR^2$ is exact symplectic.
\end{lemma}
\begin{proof}
Since the map $\tilde{\Phi}$ is defined in $\RR^2_{e_*}$, the image $\tilde{\Phi}(\RR^2_{e_*})$ need not be contained in $\RR^2_{e_*}$, so that the iterate might not be defined. From \eqref{eq:unb} we have $|\bar{v}-v|\leq 4\norm{\dot{f}}$, from which the map $\tilde{\Psi}^q$ is well defined in $\RR^2_{v_\#}$ with $v_\#=v_* + 4 q \norm{\dot{f}}$. Hence, passing to the variables $(t,e)$, the map $\tilde{\Phi}^q$ is defined in $\RR^2_{e_\#}$ with $e_\#=\frac{1}{2}v_\#^2$.
Since $\Phi$ is exact symplectic, there exists $V:\RR^2_{e_\#}\rightarrow\RR$ such that, defining $\lambda=e\, dt$, we have $\tilde{\Phi}^*\lambda-\lambda=dV$. Hence, denoting $V_1=V+V\circ\tilde{\Phi}+\dots+V\circ\tilde{\Phi}^{q-1}$, it holds that $V_1\circ\sigma=V_1$ on $\RR^2_{e_\#}$ and
\begin{align*}
dV_1 &= dV+\tilde{\Phi}^*dV+\dots+(\tilde{\Phi}^{q-1})^*dV \\
& = \tilde{\Phi}^*\lambda-\lambda + (\tilde{\Phi}^2)^*\lambda-\tilde{\Phi}^*\lambda +\dots+(\tilde{\Phi}^q)^*\lambda-(\tilde{\Phi}^{q-1})^*\lambda \\
&= (\tilde{\Phi}^q)^*\lambda-\lambda
\end{align*}
from which $\tilde{\Phi}^q$ is exact symplectic. Finally, we conclude noting that by the definition of $\sigma^{-p}$, $(\sigma^{-p}\circ\tilde{\Phi}^q)^*\lambda = (\tilde{\Phi}^q)^*\lambda$.
\end{proof}
Concerning the twist condition, the following technical result holds.
\begin{lemma}\label{twist_q}
Let $f$ be $C^2$. For every $q\geq 1$ there exist $\epsilon_q>0$, $e^q>e_\#$, such that if $\norm{\ddf}<\epsilon_q$ then
\[
\frac{\partial t_q}{\partial e}=\frac{2q}{g\sqrt{2e}}(1+\tilde{f}_q(t,e))
\]
where $|\tilde{f}_q(t,e)|< 1/2$ on $\RR^2_{e^q}$.
\end{lemma}
\begin{proof}
To simplify the computation, let us perform the change of variables $y=\sqrt{2e}+\df(t)$, i.e.\ $y=v+\df(\to)$ and $\bar{y}=\bar{v}+\df(\Pto)$ in terms of the racket-frame velocity $v$, so that \eqref{eq:unbe} becomes
\begin{equation}\label{eq:unby}
\left\{
\begin{split}
\Pto = & \to + \frac 2g y-\frac 2g f[\to,\Pto]
\\
\bar{y} = & y - 2f[\to,\Pto] + 2\df (\Pto).
\end{split}
\right.
\end{equation}
Since $\partial t_q/\partial e=(\partial t_q/\partial y) (\partial y/\partial e)$ it is enough to prove that for every $q\geq 1$ there exist $\epsilon_q>0$ and $y^q$ large, such that if $\norm{\ddf}<\epsilon_q$ then
\begin{equation}\label{new_th}
\frac{\partial t_q}{\partial y}=\frac{2q}{g}(1+\tilde{f}_q(t,y))
\end{equation}
where $|\tilde{f}_q(t,y)|< 1/2$ on $\RR^2_{y^q}$.
Let us start with some estimates that hold for every $q\geq 1$. It follows from \eqref{eq:unby} that
\begin{equation}\label{y_q}
y_q=y+2\sum_{i=1}^{q}\df(t_i)-2\sum_{i=1}^{q}f[t_{i-1},t_{i}]
\end{equation}
so that
\[
|y_q-y|\leq 4q\norm\df.
\]
Using this,
\begin{equation}\label{t_q}
|t_q-t_{q-1}|\geq \frac{2}{g}y-\frac{2}{g}(4q+1)\norm\df,
\end{equation}
from which there exist $y^q$ large enough and $C_q>0$ such that
\begin{equation}\label{fqq-11}
\left|\partial_{t_q}f[t_{q-1},t_q]\right|=\left|\frac{\dot{f}(t_q)-f[t_{q-1},t_q]}{t_q-t_{q-1}}\right|\leq
\frac{g\norm{\dot{f}}}{y-(4q+2)\norm{\dot{f}}}<\frac{C_q}{y} \qquad \mbox{on } \RR^2_{y^q}
\end{equation}
and analogously
\begin{equation}\label{fqq-12}
\left|\partial_{t_{q-1}}f[t_{q-1},t_q]\right|<\frac{C_{q-1}}{y} \qquad \mbox{on } \RR^2_{y^{q-1}}
\end{equation}
Now we can start the proof by induction on $q\geq 1$. For $q=1$, differentiating the first equation in \eqref{eq:unby} we have
\begin{equation}\label{t_1_y}
\frac{\partial t_1}{\partial y}\left(1+\frac{2}{g}\partial_{t_{1}}f[t,t_1]\right)=\frac{2}{g}
\end{equation}
from which, using \eqref{fqq-11} we get the initial step taking a suitably larger value of $y^1$.\\
For the inductive step, let us suppose \eqref{new_th} to be true for $i=1,\dots,q-1$. By implicit differentiation
\begin{equation}\label{t_q_e}
\frac{\partial t_q}{\partial y}\left(1+\frac{2}{g}\partial_{t_{q}}f[t_{q-1},t_q]\right)=\frac{2}{g}\frac{\partial y_{q-1}}{\partial y}+\frac{\partial t_{q-1}}{\partial y}\left(1-\frac{2}{g}\partial_{t_{q-1}}f[t_{q-1},t_q]\right)
\end{equation}
From \eqref{y_q} and the inductive hypothesis we have
\begin{align*}
\frac{\partial y_{q-1}}{\partial y}&=1+2\sum_{i=1}^{q-1}\left(\ddf(t_i) \frac{\partial t_i}{\partial y}-\partial_{t_{i}}f[t_{i-1},t_{i}]\frac{\partial t_i}{\partial y} -\partial_{t_{i-1}}f[t_{i-1},t_{i}]\frac{\partial t_{i-1}}{\partial y}\right)\\
&=1+2\sum_{i=1}^{q-1}\left(\ddf(t_i) (1+\tilde{f}_i)-\partial_{t_{i}}f[t_{i-1},t_{i}](1+\tilde{f}_i) -\partial_{t_{i-1}}f[t_{i-1},t_{i}](1+\tilde{f}_{i-1})\right).
\end{align*}
Since by (\ref{fqq-11}-\ref{fqq-12}), for every $i$, $|\partial_{t_{i}}f[t_{i-1},t_{i}]|$ tends to zero uniformly as $y \rightarrow +\infty$ and $|\tilde{f}_i|<1/2$ for $y$ large, we can find new constants $C_{q-1}$ and $y^{q-1}$ such that on $\RR^2_{y^{q-1}}$,
\begin{equation}\label{stim_y}
\frac{\partial y_{q-1}}{\partial y} = 1 + \bar{f}_{q-1}(t,y) \qquad \mbox{with } |\bar{f}_{q-1}|\leq C_{q-1}\norm\ddf.
\end{equation}
Using it and the inductive hypothesis in \eqref{t_q_e} we get
\begin{align}\label{final}
\frac{\partial t_q}{\partial y}\left(1+\frac{2}{g}\partial_{t_{q}}f[t_{q-1},t_q]\right)&=\frac{2}{g}(1 + \bar{f}_{q-1}(t,y))+\frac{2(q-1)}{g}(1+\tilde{f}_{q-1}(t,y))\left(1-\frac{2}{g}\partial_{t_{q-1}}f[t_{q-1},t_q]\right)\\
&=\frac{2q}{g}\left(1+ \tilde{f}_q(t,y)\right)
\end{align}
where
\[
\tilde{f}_q(t,y)=\frac{1}{q}\bar{f}_{q-1}(t,y)+\frac{q-1}{q}\tilde{f}_{q-1}(t,y)+\frac{2(q-1)}{gq}\partial_{t_{q-1}}f[t_{q-1},t_q](1+\tilde{f}_{q-1}(t,y)).
\]
Now (\ref{fqq-11}-\ref{fqq-12}) and \eqref{stim_y} imply
\[
|\tilde{f}_q|<\frac{C_{q-1}}{q}\norm\ddf+\frac{q-1}{2q}+\frac{C'_{q-1}}{y}
\]
so that we can find $\epsilon_q$ and $y^q$ such that if $\norm\ddf<\epsilon_q$ then $|\tilde{f}_q|<\frac{1}{2}-\frac{1}{2q}$ on $\RR^2_{y^{q}}$. Plugging this into \eqref{final} and using again (\ref{fqq-11}-\ref{fqq-12}) we get the thesis, eventually increasing $y^q$.
\end{proof}
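The twist estimate of the lemma, $\partial t_q/\partial y=\frac{2q}{g}(1+\tilde f_q)$ with $\tilde f_q$ small, can also be checked numerically on the bouncing map itself. The sketch below is only an illustration: $g=1$ and $f(t)=\varepsilon\sin t$ are arbitrary test choices, and the bisection bracket for the next impact time is an ad-hoc assumption valid for small $\varepsilon$ and large $y$.

```python
import math

g = 1.0                       # gravity (test value)
eps = 0.005                   # small forcing amplitude
f  = lambda t: eps * math.sin(t)
df = lambda t: eps * math.cos(t)

def next_impact(t, y):
    """One step of the bouncing map: the ball leaves the racket at time t
    with upward velocity y; return the next impact time t1 and velocity y1."""
    F = lambda s: f(t) + y*(s - t) - 0.5*g*(s - t)**2 - f(s)
    lo, hi = t + 1e-6, t + 2*y/g + 1.0    # F(lo) > 0 > F(hi) for small eps
    for _ in range(200):                   # bisection
        mid = 0.5*(lo + hi)
        if F(mid) > 0:
            lo = mid
        else:
            hi = mid
    t1 = 0.5*(lo + hi)
    v_before = y - g*(t1 - t)              # velocity just before impact
    y1 = -v_before + 2*df(t1)              # elastic reflection off the moving racket
    return t1, y1

def t_q(t, y, q):
    for _ in range(q):
        t, y = next_impact(t, y)
    return t

q, y0, h = 3, 50.0, 1e-4
fd = (t_q(0.0, y0 + h, q) - t_q(0.0, y0 - h, q)) / (2*h)
# the lemma predicts dt_q/dy = (2q/g)(1 + small correction)
```

For these values the finite difference stays within a few percent of $2q/g$, well inside the $|\tilde f_q|<\frac12$ bound.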
This lemma is used to prove the following result.
\begin{proposition}
Suppose that $f$ is real analytic. For every $q>0$ there exist $\alpha>0$ and $\epsilon_q>0$ such that if $p>\alpha$ and $\norm{\ddf}<\epsilon_q$ then the set of $(p,q)$-periodic complete bouncing motions is either finite or degenerate. In the first case, at least one $(p,q)$-periodic complete bouncing motion is unstable. In the degenerate case, all $(p,q)$-periodic complete bouncing motions are unstable.
\end{proposition}
\begin{proof}
We would like to apply Theorem \ref{maro_teo} to the map $\tilde{\Phi}^q$, noting that $(p,q)$-periodic bouncing motions correspond to fixed points of the map $\sigma^{-p}\circ\tilde{\Phi}^q$. Let us fix $q>0$. In Lemma \ref{lemma_isot_exact} we proved that $\tilde{\Phi}^q$ is exact symplectic in $\RR^2_{e_\#}$ for some $e_\#$ depending on $q$. Moreover, by Lemma \ref{twist_q}, there exist $\epsilon_q$ and $e^q>e_\#$ such that if $\norm{\ddf}<\epsilon_q$ then $\tilde{\Phi}^q$ is also twist on $\RR^2_{e^q}$. Now choose $p>0$ such that
\[
\frac{gp}{2q}-2q\norm{\dot{f}}>e^q.
\]
Hence, there exist $b>a$ such that
\[
e^q<\sqrt{2a}<\frac{gp}{2q}-2q\norm{\dot{f}}<\frac{gp}{2q}+2q\norm{\dot{f}}<\sqrt{2b}.
\]
This choice for $a,b$ gives condition \eqref{maro_cond} on the strip $\Sigma=\RR\times (a,b)$. Actually, from \eqref{stim_t_e},
\begin{align*}
t_q(t,b)-t & \geq\frac{2}{g}q\sqrt{2b}-4q^2\frac{\norm{\dot{f}}}{g}>p \\
t_q(t,a)-t & \leq\frac{2}{g}q\sqrt{2a}+4q^2\frac{\norm{\dot{f}}}{g}<p.
\end{align*}
This concludes the proof.
\end{proof}

\subsection{Deligne's category $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$}
From the perspective of the
Killing-Cartan-Weyl classification of simple Lie algebras
and their representation theory in terms of highest weights, root systems, Weyl groups and associated combinatorics
it is not so easy to
understand the extreme uniformity in the representation theory
that exists among different Lie groups. With possible application
to a universal Chern-Simons type knot invariant in mind, P. Vogel \cite{Vog1999}
tried to define a universal Lie algebra, $\mathfrak{g}(\alpha:\beta:\gamma)$
depending on three {\em Vogel parameters} that determine a point
$(\alpha:\beta:\gamma)$ in the {\em Vogel plane}, in which all simple
Lie algebras find their place. The dimension of the Lie algebra
$\mathfrak{g}(\alpha:\beta:\gamma)$ is given by a universal rational expression
\begin{equation*}
\dim \mathfrak{g}(\alpha:\beta:\gamma)\, = \, \frac{(\alpha-2t)(\beta-2t)(\gamma-2t)}{\alpha\beta\gamma},\qquad
t=\alpha+\beta+\gamma ,
\end{equation*} and similar universal rational formulas can be
given for the dimensions of irreducible constituents of $S^2\mathfrak{g}, S^3\mathfrak{g}$ and
$S^4\mathfrak{g}$. Although the current status of Vogel's suggestions is unclear to us,
these ideas have led to many interesting developments, such as the discovery of
$E_{7\frac{1}{2}}$ by Landsberg and Manivel,
\cite{LM2002}, \cite{LM2004}, \cite{LM2006}, \cite{LM2006a}.
In order to interpolate within the classical $A,B,C,D$ series of Lie algebras,
Deligne has defined $\otimes$-categories
\[ \mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t),\;\; \mathop{\underline{\smash{\mathrm{Rep}}}}(O_t), \]
where $t$ is a parameter that can take on any complex value.
(The category $\mathop{\underline{\smash{\mathrm{Rep}}}}(Sp_{2t})$
is usually not discussed as it can be
expressed easily in terms of the category $\mathop{\underline{\smash{\mathrm{Rep}}}}(O_T)$ with $T=-2t$.)
If $n$ is an
integer, there are natural surjective functors
\[\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_n) \to \mathop{\mathrm{Rep}}(GL_n).\]
In the tannakian setup one would attempt to reconstruct a group $G$ from its
$\otimes$-category of representations $\mathop{\mathrm{Rep}}(G)$ using a fibre functor to
the $\otimes$-category $Vect$ of vector spaces, but Deligne's category has no fibre functor and is not tannakian, or, in general, even abelian. (However, when $t$ is not an integer, the category \emph{is}
abelian semisimple.)
According to the axioms, in an arbitrary rigid $\otimes$-category $\mathcal{R}$ there exist a unit object~${\bf 1}$ and
canonical evaluation and coevaluation morphisms
\[ \epsilon: V \otimes V^* \to {\bf 1},\qquad \delta: {\bf 1} \to V \otimes V^*\]
so that we can assign to any object a dimension by setting
\[ \dim V =\epsilon \circ \delta \in \mathop{\mathrm{End}}({\bf 1}) \cong \mathbb{C}. \]
A simple diagrammatic description of $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$ can be found in
\cite{CW2012}. One first constructs a skeletal category ${\mathop{\underline{\smash{\mathrm{Rep}}}}\,}_0(GL_t)$,
whose objects are words in the alphabet $\{\bullet, \circ\}$. The letter
$\bullet$ corresponds to the fundamental representation $V$ of $GL_t$,
$\circ$ to its dual $V^*$. A~$\otimes$-structure is induced by concatenation
of words. The space of morphisms between two such words is the $\mathbb{C}$-span of
a set of admissible graphs, with vertices the circles and dots of the two
words. Such an admissible graph consists of edges between the letters of
the two words. Each letter is contained in one edge. Such an edge connects
different letters of the same word or the same letter if the words are
different.
$$\vcenter{ \xymatrix{
\bullet \ar@/_2ex/@{-}[rr] & \bullet \ar@{-}[ld] & \circ & \circ \ar@{-}[d] \\
\bullet \ar@{-}[rrd] & \circ \ar@/^/@{-}[r] \ar@/_/@{-}[r]& \bullet & \circ \ar@{-}[lld] \\
& \circ & \bullet & \\
} }
= t \cdot
\left( \vcenter{ \xymatrix{\bullet \ar@/_2ex/@{-}[rr] & \bullet \ar@{-}[ddr] & \circ & \circ \ar@{-}[ddll] \\
& & \\
& \circ & \bullet & \\
}} \right) $$
The composition of two morphisms is the juxtaposition of the two graphs, followed by
the elimination of closed loops, each loop contributing a factor $t$.\\
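This composition rule can be made concrete with a small program. The sketch below is an illustration, not part of the text: a morphism is stored as a perfect matching on labelled endpoints, `('T', i)` / `('B', i)` for the letters of the top and bottom word; to compose, the bottom row of the first diagram is glued to the top row of the second (both relabelled `('M', i)`), open strands are traced through, and each closed loop of middle points is removed and counted as a factor $t$.

```python
def compose(f_edges, g_edges):
    """Compose two diagrams glued along a middle row.
    f_edges matches points labelled ('T',i) and ('M',i);
    g_edges matches points labelled ('M',i) and ('B',i).
    Returns (edges of the composite diagram, number of removed loops)."""
    edges = [tuple(e) for e in f_edges] + [tuple(e) for e in g_edges]
    inc = {}
    for idx, (a, b) in enumerate(edges):
        inc.setdefault(a, []).append(idx)
        inc.setdefault(b, []).append(idx)
    used = [False] * len(edges)
    result, loops = [], 0
    # trace open strands starting from every top/bottom endpoint
    for start in [p for p in inc if p[0] in ('T', 'B')]:
        e = inc[start][0]
        if used[e]:
            continue
        cur = start
        while True:
            used[e] = True
            a, b = edges[e]
            nxt = b if a == cur else a
            if nxt[0] in ('T', 'B'):
                result.append((start, nxt))
                break
            e1, e2 = inc[nxt]            # middle point: exactly two incident edges
            e = e2 if e == e1 else e1
            cur = nxt
    # whatever remains lies on closed loops of middle points
    while not all(used):
        e = used.index(False)
        loops += 1
        cur = edges[e][0]
        while not used[e]:
            used[e] = True
            a, b = edges[e]
            cur = b if a == cur else a
            e1, e2 = inc[cur]
            e = e2 if e == e1 else e1
    return result, loops

# cap-cup diagram on the word "bullet circ": composing it with itself
# closes one loop, recovering (cap-cup)^2 = t * (cap-cup)
cap_cup_f = [(('T', 0), ('T', 1)), (('M', 0), ('M', 1))]
cap_cup_g = [(('M', 0), ('M', 1)), (('B', 0), ('B', 1))]
res, loops = compose(cap_cup_f, cap_cup_g)
```

The example reproduces the idempotent-up-to-$t$ behaviour of the evaluation/coevaluation pair.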
Deligne's
category is now obtained by first forming the additive hull, introducing
formal direct sums, and then passing to the
Karoubian hull, i.e.\ forming a category of pairs $(W,e)$ consisting of
an object together with an idempotent:
\[\mathop{\underline{\smash{\mathrm{Rep}}}} (GL_t) =({\mathop{\underline{\smash{\mathrm{Rep}}}}\,}_0(GL_t)^{\text{add}})^\text{Karoubi}. \]
\bf Example. \rm Consider the word $\bullet \bullet$ and
the morphisms $\mathrm{Id}$ and $\mathrm{Swap}$ with the obvious meaning.
One then can put
\[ S^2V=(\bullet \bullet, s), \;\;\wedge^2 V=(\bullet\bullet, a),\]
where
\[ s=\frac{1}{2}(\mathrm{Id}+\mathrm{Swap}),\;\;a=\frac{1}{2}(\mathrm{Id}-\mathrm{Swap})\]
so that in $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$ one has:
\[ V \otimes V=(\bullet \bullet ,\mathrm{Id})=S^2V \oplus \wedge^2V,\]
which upon taking dimensions is the identity
\[ t^2 = \frac{t(t+1)}{2}+\frac{t(t-1)}{2} .\]
\subsection{`Spaces of sections' as objects in Deligne's category and the beta integral.}
As above, we assume that $n$ is a natural number. Write $t=N+1$ and let $V_t=V$ be the fundamental object of $\mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t)$ so that
$\dim V_t=t$. We do not define the projective space $\P =\P^N$,
but we can pretend that, in the sense of Deligne, the space of global sections is
\[ H(\mathcal{O}_{\P}(n)) :=\mathop{\mathrm{Sym}^n}(V_t^*) \in \mathop{\underline{\smash{\mathrm{Rep}}}}(GL_t) .\]
Its dimension is then, as expected
\begin{equation}
\chi(\mathcal{O}_{\P}(n)) :=\dim H (\mathcal{O}_{\P}(n))={N+n \choose n} \label{chi-interpret},
\end{equation}
(interpreted in the obvious way as a polynomial
in $N$ if $N\not\in\mathbb{Z}$),
so that e.g.
\[\chi(\mathcal{O}_{\P^{1/2}}(2))=\frac{3}{8}.\]
The Poincar\'e series is
\[P(y):=\sum_{n=0}^{\infty} \chi(\mathcal{O}_{\P}(n)) y^n =\frac{1}{(1-y)^{N+1}},\]
which is consistent with the idea that $\dim V_t = N+1$.
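The interpolated Euler characteristics indeed resum to this closed form for non-integer $N$. A quick numerical check (Python; $N=1/2$ and $y=3/10$ are arbitrary test values, and the binomial coefficients are generated by the recursion $\binom{N+n+1}{n+1}=\binom{N+n}{n}\frac{N+n+1}{n+1}$ to avoid overflowing the gamma function):

```python
def check_poincare(N, y, terms=200):
    """Partial sum of sum_n chi(O_P(n)) y^n versus (1-y)^{-(N+1)},
    with chi = binom(N+n, n) built up by the ratio recursion."""
    c, s = 1.0, 0.0
    for n in range(terms):
        s += c * y**n
        c *= (N + n + 1) / (n + 1)   # binom(N+n+1, n+1) from binom(N+n, n)
    return s, (1 - y)**(-(N + 1))

s, closed = check_poincare(0.5, 0.3)
```

For $|y|<1$ the partial sums converge rapidly to the closed form.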
\medskip
Returning to the question posed at the beginning,
`is there a way to extend the interpolation of $\chi$
individually to the Chern and the Todd ingredients?', we reason as
follows. If $X$ is a smooth projective $n$-dimensional variety, and $E$ a vector
bundle on $X$, then the Euler characteristic
\[\chi(X,E):=\sum_{i=0}^n (-1)^i\dim H^i(X,E)\]
can be expressed in terms of characteristic numbers
\[\chi(X,E)=\int_X \mathop{\mathrm{ch}}(E) \cdot \mathop{\mathrm{td}}(X) . \]
Here the integral in the right hand side is usually
interpreted
as resulting from evaluating the cap product with the fundamental class $[X]$
on the cohomology algebra $H^*(X)$, and the Chern character and Todd class
are defined in terms of the Chern roots $x_i$ of $E$ and $y_i$ of $TX$:
\[\mathop{\mathrm{ch}}(E)=\sum_{i=1}^r e^{x_i}\,, \qquad \mathop{\mathrm{td}}(X)=\prod_{i=1}^n \frac{y_i}{1-e^{-y_i}} .\]
The cohomology ring of the $N$-dimensional projective space is
a truncated polynomial ring:
\[H^*(\P^N)=\mathbb{Z}[\xi]/(\xi^{N+1})\,, \qquad\xi=c_1(\mathcal{O}(1)),\]
and it is not directly clear how to make sense of this if $N$ is not an
integer. Our tactic will be to drop the relation
\[\xi^{N+1}=0\]
altogether, thinking instead of $\mathbb{Z}[\xi]$ as a Verma module over the $sl_2$
of the Lefschetz theory, and replacing the cap product
with integration.
As we will be integrating meromorphic functions in $\xi$,
the polynomial ring is too small, and we put
\[ \hat{H}(\P) :=\mathbb{Z}[[s]] \supset \mathbb{Z}[s] .\]
One has
\[ \mathop{\mathrm{ch}}(\mathcal{O}(n))=e^{n\xi}\,, \qquad \mathop{\mathrm{td}}(\P)=\left(\frac{\xi}{1-e^{-\xi}}\right)^{N+1}, \]
so Hirzebruch-Riemann-Roch reads
\[\chi(\mathcal{O}(n))=\left[e^{n\xi} \left(\frac{\xi}{1-e^{-\xi}}\right)^{N+1}\right]_N\]
where $[...]_N$ denotes the coefficient of $\xi^N$ in a power series.
This can be expressed analytically as a residue integral
along a small circle around the origin:
\begin{equation*}
\chi(\mathcal{O}(n))=\frac{1}{2\pi i}\oint e^{n \xi}\left(\frac{\xi}{1-e^{-\xi}}\right)^{N+1}\frac{d\xi}{\xi^{N+1}} .
\end{equation*}
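For integer $N$, the coefficient extraction $[\,\cdot\,]_N$ can be verified with exact rational arithmetic: invert the series $(1-e^{-\xi})/\xi$ to get $\xi/(1-e^{-\xi})$, raise it to the power $N+1$, multiply by $e^{n\xi}$, and read off the coefficient of $\xi^N$. A sketch (not part of the text):

```python
from fractions import Fraction
from math import factorial

def mul(a, b):
    # product of two truncated power series of equal length
    L = len(a)
    return [sum(a[i]*b[k-i] for i in range(k+1)) for k in range(L)]

def hrr_coeff(N, n):
    # coefficient of xi^N in e^{n xi} (xi/(1-e^{-xi}))^{N+1}, computed exactly
    L = N + 1
    d = [Fraction((-1)**k, factorial(k+1)) for k in range(L)]   # (1-e^{-xi})/xi
    q = [Fraction(0)] * L
    q[0] = Fraction(1)                                          # xi/(1-e^{-xi})
    for k in range(1, L):
        q[k] = -sum(d[i]*q[k-i] for i in range(1, k+1))
    out = [Fraction(n**k, factorial(k)) for k in range(L)]      # e^{n xi}
    for _ in range(N + 1):
        out = mul(out, q)
    return out[N]

coeff = hrr_coeff(2, 3)   # chi(O_{P^2}(3)) = binom(5, 3) = 10
```

The result agrees with $\binom{N+n}{n}$ for the integer values tried.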
As it stands, it cannot be extended to non-integer
$N$ since the factor $(1- e^{-\xi})^{-N-1}$ is not single-valued on the circle. The usual way to adapt it is to consider, for $n \ge 0$, the integral along the path
going from $- \infty - i \varepsilon$ to $ - i \varepsilon$, making a half--turn round the origin and going back, and choosing the standard branch of the logarithm. Because of the change in the argument this integral is equal to
\begin{equation*}
J(N,n) =
\frac{e^{2 \pi i (N+1)}-1}{2 \pi i}
\int_{-\infty}^0 \frac{e^{n \xi}}{(1-e^{-\xi})^{N+1}}
d \xi ,
\end{equation*}
or, after the substitution $s=e^{\xi}$,
\begin{equation*}
J(N,n) =
\frac{e^{2 \pi i (N+1)}-1}{2 \pi i}
\int_0^1 s^{n-1} (1-1/s)^{-N-1} ds
=
\frac{\sin \pi (N+1)}{ \pi }
\int_0^1 s^{n+N} (1-s)^{-N-1} ds .
\end{equation*}
Using Euler's formulas
\begin{equation}
\Gamma(x)\Gamma(1-x) =\frac{\pi}{\sin \pi x} \,,
\label{gamma-one-minus-argument}
\end{equation}
\begin{equation}
\int_0^1 s^{\alpha-1}(1-s)^{\beta-1} ds = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} \,,
\label{beta-integral}
\end{equation}
and
\begin{equation*}
\frac{\Gamma(N+n+1)}{\Gamma(n+1) \Gamma(N+1)}
=
{N+n \choose n} \,,
\end{equation*}
we arrive at a version of RRH `with integrals':
\medskip
\noindent \bf Proposition 1. \rm Let $n \in \mathbb{N}$. Assume $\mathop{\mathrm{Re}} N < 0, \, N \notin \mathbb{Z}$. Interpret the Euler characteristic of~$\P ^N$ via formula \eqref{chi-interpret}. Then
\begin{equation*} \label{little-propo}
\chi_\P(\O (n)) = \frac{e^{2 \pi i (N+1)}-1}{2 \pi i}
\int_{-\infty}^0 \frac{e^{n \xi}}{(1-e^{-\xi})^{N+1}}
d \xi .
\end{equation*}
\qed
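Proposition 1 can be sanity-checked numerically: by the reflection and beta formulas above, the right-hand side equals $\frac{\sin\pi(N+1)}{\pi}\,\frac{\Gamma(n+N+1)\Gamma(-N)}{\Gamma(n+1)}$. The sketch below (Python; the test values $N=-1/2$, $n=2$ and the crude midpoint rule are illustrative assumptions) compares the interpolated binomial coefficient with the beta-type integral:

```python
import math

def chi_interp(N, n):
    # interpolated binomial coefficient binom(N+n, n) via Gamma
    return math.gamma(N + n + 1) / (math.gamma(n + 1) * math.gamma(N + 1))

def rhs(N, n, steps=200_000):
    """After the substitution s = e^xi:
    sin(pi(N+1))/pi * int_0^1 s^(n+N) (1-s)^(-N-1) ds, by the midpoint rule
    (the singularity at s=1 is integrable for -1 < N < 0)."""
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) / steps
        total += s**(n + N) * (1 - s)**(-N - 1)
    total /= steps
    return math.sin(math.pi * (N + 1)) / math.pi * total

N, n = -0.5, 2
```

Both sides evaluate to $3/8$ for these parameters.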
\bigskip
\medskip
\subsection{The grassmannian and the Selberg integral.} For $\P^N$, we ended up with the beta function, a one-dimensional integral, as the cohomology ring
is generated by a single class~$\xi$. In the cases where the cohomology ring is generated
by $k$ elements, for example the grassmannian $G(k,N+k)$,
we would like to see a $k$-dimensional integral appear in a natural way.
For $N \in \mathbb{N}$ the cohomology ring of the grassmannian $\mathbb{G}:=G(k,N+k)$
is given by
\[H^*(G(k,N+k))=\mathbb{C}[s_1,s_2,\ldots,s_k]/(q_{N+1},q_{N+2}, \dots, q_{N+k}),\]
where the $s_i$ are the Chern classes of the universal rank $k$ sub-bundle
and $q_i=c_i(Q)$ are formally the Chern classes of the universal quotient bundle $Q$ (so that the generating series of $q$'s is inverse to that of $s$'s).
In the same vein as before, we set:
\begin{equation}
\hat{H}^*(\mathbb{G}):=\mathbb{C}[[s_1,s_2,\ldots,s_k]]=\mathbb{C}[[x_1,x_2,\ldots,x_k]]^{S_k} \label{drop-rel}
\end{equation}
by dropping the relations. A $\mathbb{C}$-basis of this ring is
given by the Schur polynomials
\[\sigma_{\lambda} :=\frac{\det(x_i^{\lambda_j+k-j})}{\det(x_i^{k-j})}\]
where $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$
is an arbitrary Young diagram with at most $k$ rows.
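The bialternant formula can be evaluated directly at sample points; a small Python sketch (an illustration only, with a naive Laplace-expansion determinant that is adequate for the small $k$ used here):

```python
def schur_bialternant(lam, xs):
    """sigma_lambda at the point xs = (x_1, ..., x_k) with distinct coordinates,
    via det(x_i^(lambda_j + k - j)) / det(x_i^(k - j))."""
    k = len(xs)

    def det(m):
        # Laplace expansion along the first row; fine for small matrices
        if len(m) == 1:
            return m[0][0]
        return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
                   for j in range(len(m)))

    # j below is 0-based, so the exponent lambda_j + k - j becomes lam[j] + k - j - 1
    num = [[x**(lam[j] + k - j - 1) for j in range(k)] for x in xs]
    den = [[x**(k - j - 1) for j in range(k)] for x in xs]
    return det(num) / det(den)
```

For example, $\sigma_{(2,1)}(x_1,x_2)=x_1^2x_2+x_1x_2^2$ evaluates to $30$ at $(2,3)$, and $\sigma_{(0,0)}=1$.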
There is a Satake--type map for the extended cohomology:
\[ \mathrm{Sat}: \hat{H}(\mathbb{G}) \to \wedge^k \hat{H}(\P) \]
obtained from the Young diagram by `wedging its rows':
\[\sigma_{\lambda} \mapsto s^{\lambda_1+k-1} \wedge s^{\lambda_2+k-2}\wedge \ldots \wedge s^{\lambda_k}. \]
We are therefore seeking an expression for the values of the Hilbert polynomial of $G(k,N+k)$ in terms of a $k$--dimensional integral of the beta type involving $k$--wedging.
Euler's beta integral
\eqref{beta-integral}
has several generalizations. Selberg introduced \cite{Selberg1944}
an integral \cite{FW2008} over the $k$-dimensional cube
\begin{equation*}
S(\alpha, \beta,\gamma, k):=\int_0^1\ldots\int_0^1 (s_1 s_2\ldots s_k)^{\alpha-1}((1-s_1)(1-s_2)\ldots(1-s_k))^{\beta-1}\Delta(s)^{2\gamma} ds_1ds_2\ldots ds_k
\end{equation*}
where
\[ \Delta(s)=\Delta(s_1,s_2,\ldots,s_k)=\prod_{i <j} (s_i-s_j) ,\]
and showed that it admits meromorphic continuation, which
we will also denote by $S$.
\medskip
\noindent {\bf Proposition 2.} For $k \in \mathbb{N},\, n \in \mathbb{Z}_+$, let $\chi(\mathcal{O}_{\mathbb{G}}(n))$
denote the result of interpolating the polynomial function $\chi(\mathcal{O}_{G (k,k+N)}(n))$ of the argument
$N \in \mathbb{N}$ to $\mathbb{C}$.
One has
\begin{equation*}
\chi(\mathcal{O}_{\mathbb{G}}(n))= \frac{(-1)^{k(k-1)/2}}{k!}\left( \frac{\sin \pi(N+1)}{\pi}\right)^k S(n+N+1,-N-k+1,1,k) .
\end{equation*} \rm
{\sc Proof.} The shortest (but not the most transparent) way to see this is to use the expressions for the LHS and the RHS in terms of the product of gamma factors found by Littlewood and Selberg respectively. By Selberg,
\begin{equation}
S(\alpha,\beta,\gamma,k)=\prod_{i=0}^{k-1} \frac{\Gamma(\alpha+i\gamma)\Gamma(\beta+i\gamma)\Gamma(1+(i+1)\gamma)}
{\Gamma(\alpha+\beta+(k+i-1)\gamma) \Gamma(1+\gamma)} \label{Selberg-formula}.
\end{equation}
By Littlewood \cite{Lit1942}, for $N \in \mathbb{Z}_{>0}$ one has
\begin{equation*}
\chi(\mathcal{O}_{G(k,k+N)}(n)) =\frac{{N+n \choose n} {N+n+1 \choose n+1} \ldots {N+n+(k-1) \choose n+(k-1)}}{
{N \choose 0} {N+1 \choose 1} \ldots {N+(k-1) \choose (k-1)}},
\end{equation*}
where there are $k$ factors at the top and the bottom.
Rearranging the terms in the RHS of \eqref{Selberg-formula} and using
\eqref{gamma-one-minus-argument},
we bring the $\Gamma$-factors that involve $\beta$ to the denominator in order to form the binomial coefficients at the expense of the sine factor.
\qed
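Both sides of Proposition 2 are finite products of gamma factors, so the identity is easy to test numerically. The sketch below (Python; the test values $k=2$, $N=-1/2$ match the example that follows, and the direct 2-dimensional midpoint integration of the Selberg integrand is a crude illustrative check at nonsingular parameters):

```python
import math

def gbinom(x, m):
    # generalized binomial coefficient via Gamma
    return math.gamma(x + 1) / (math.gamma(m + 1) * math.gamma(x - m + 1))

def chi_littlewood(k, N, n):
    # Littlewood's product of k binomial factors, interpolated in N
    out = 1.0
    for i in range(k):
        out *= gbinom(N + n + i, n + i) / gbinom(N + i, i)
    return out

def selberg_formula(a, b, g, k):
    # Selberg's closed-form evaluation
    out = 1.0
    for i in range(k):
        out *= (math.gamma(a + i*g) * math.gamma(b + i*g) * math.gamma(1 + (i+1)*g)
                / (math.gamma(a + b + (k + i - 1)*g) * math.gamma(1 + g)))
    return out

def chi_selberg(k, N, n):
    # right-hand side of Proposition 2
    pref = (-1)**(k*(k-1)//2) / math.factorial(k)
    pref *= (math.sin(math.pi*(N + 1)) / math.pi)**k
    return pref * selberg_formula(n + N + 1, -N - k + 1, 1, k)

def selberg_numeric_k2(a, b, m=400):
    # midpoint rule for S(a, b, 1, 2); integrand
    # (s1 s2)^(a-1) ((1-s1)(1-s2))^(b-1) (s1-s2)^2
    total = 0.0
    for i in range(m):
        s1 = (i + 0.5) / m
        for j in range(m):
            s2 = (j + 0.5) / m
            total += (s1*s2)**(a-1) * ((1-s1)*(1-s2))**(b-1) * (s1 - s2)**2
    return total / m**2
```

For instance $S(2,2,1,2)=1/360$ both ways, and the two sides of the proposition agree for $k=2$, $N=-1/2$ and small $n$.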
\bigskip
As an example, for $k=2$ and
$N=-1/2$, we get
the Hilbert series
\[ \sum_{n=0}^\infty \chi(\mathcal{O}_{G(2,3/2)}(n)) \, y^n = 1+6\, \frac{y}{16} +60\left(\frac{y}{16}\right)^2 + 700 \left(\frac{y}{16}
\right)^3+8820 \left(\frac{y}{16}
\right)^4 +\ldots
\]
which is no longer algebraic, but can be expressed in terms of elliptic functions.
More generally, one can consider a Selberg--type integral
with an arbitrary symmetric function rather than the discriminant in the numerator, and use separation of variables together with the Jacobi--Trudi formula to obtain similar expressions in terms of the gamma function, thereby interpolating between the Euler characteristics of more general vector bundles on grassmannians (or the dimensions of highest weight
representations of $GL_{N+k}$).
\subsection{Towards a gamma conjecture in
non--integral dimensions.} \label{gamma-phenomena} The by now standard
predictions of mirror symmetry relate the RRH formalism
on a Fano variety $F$ to the monodromy of its regularized
quantum differential equation. It is expected that this
differential equation arises from the Gauss--Manin
connection
in the middle cohomology of level hypersurfaces of a
regular function $f$ defined on some quasiprojective
variety (typically a Laurent
polynomial on $\mathbb{G}_\mathrm{m}^{\, d}$),
called in this case a Landau--Ginzburg model of
$F$. By stationary phase,
the monodromy of the Gauss--Manin connection in a pencil
translates
into the asymptotic behavior of oscillatory
integrals of the generic form
$I (z) = \int \exp (izf)\, d\mu (\mathbb{G}_\mathrm{m}^{\, d})$,
which satisfy the quantum differential equation of $F$,
this time without the word `regularized'.
The asymptotics
are given by Laplace integrals computed at the critical points,
and the critical values of $f$ are the exponents occurring
in the oscillatory integrals $I_i(z)$ that have `pure'
asymptotic behavior in sectors.
One wants to express these pure asymptotics in terms of the Frobenius
basis of solutions $\{ \Psi_i (z) \}$ around $z = 0$.
The gamma conjecture \cite{GGI2016}
predicts that such an expression
for the highest--growth asymptotic (arising from the critical
value next to infinity) will give the `gamma--half' of the Todd
genus and therefore effectively encode the Hilbert
polynomial of $F$ with respect to the anticanonical bundle.
At first sight, none of this seems capable of surviving in non--integer dimensions.
Yet, to return to the example of $G(2,N+2)$, define the numbers $c_j$ and $d_j$ by the expansions
\begin{equation*}
\Gamma_\P^{(0)} (\varepsilon) = \Gamma (1+\varepsilon)^{N+2} = \sum_{j=0}^\infty d_j \varepsilon^j ,
\end{equation*}
\begin{equation*}
\Gamma_\P^{(1)} (\varepsilon) = \Gamma (1+\varepsilon)^{N+2} e^{2 \pi i \varepsilon} = \sum_{j=0}^\infty c_j \varepsilon^j.
\end{equation*}
Put
\begin{equation*}
F(\varepsilon,z) = \sum_{l=0}^{\infty} \frac{z^{l+\varepsilon}}{\Gamma(1+l+\varepsilon)^{N+2}}
\end{equation*}
and
\begin{equation*}
\Psi (\varepsilon,z) = \Gamma_\P (\varepsilon) F (\varepsilon, z) = \sum_{k=0}^\infty \Psi_k (z) \varepsilon^k.
\end{equation*}
\medskip
\noindent \bf Claim \rm (rudimentary gamma conjecture). For fixed $N > 2$ and $i, \, j$ in a box of at least some moderate size,
one should have
\begin{equation*}
\label{claim-grass}
\lim_{z \to - \infty} \frac{\Psi_i (z) \Psi'_j (z) - \Psi_j (z) \Psi'_i (z)}{\Psi_1 (z) \Psi'_0 (z) - \Psi_0 (z) \Psi'_1 (z)}
= \frac{c_i d_j - c_j d_i}{c_1 d_0 - c_0 d_1} .
\end{equation*}
\bigskip
\bigskip
\noindent The LHS and RHS mimic, in the setup of formula \eqref{drop-rel}, the $\sigma_{[j-1,i]}$-coefficients in the expansion of the `principal asymptotic class' and the gamma class of the usual grassmannian: in the case when $N \in \mathbb{N}$ and $0 \le i,j \le N$ one would use the identification of $2$--Wronskians of a fundamental matrix of solutions to a higher Bessel equation with homology classes of $G(2,N+2)$. Preliminary considerations together with numerical evidence suggest that the claim has a good chance to be true, as well as its versions for $G(k,N+k)$ with $k > 2$.
\bigskip
\bigskip
\bigskip
The first--named author is grateful to Yuri Manin and Vasily Pestun for stimulating discussions. We thank Hartmut Monien for pointing us to \cite{FW2008}.
\bigskip
\nocite{MV2017}
\nocite{GM2014}
\nocite{Bra2013}
\nocite{BS2013}
\nocite{Etingof1999}
\nocite{Etingof2014}
\nocite{Etingof2016}
\nocite{EGNO2015}
\nocite{FW2008}
\nocite{Man2006}
\nocite{Opd1999}
\nocite{Man1985}
\nocite{Lit1942}
\nocite{Lit1943}
\nocite{BD2016}
\nocite{LM2002}
\nocite{LM2004}
\nocite{LM2006}
\nocite{LM2006a}
\nocite{GW2011}
\nocite{Del2002}

\section{Introduction}
\label{sec:intro}
Mobile ad-hoc networking has presented many challenges to the research community,
especially in designing suitable, efficient, and well performing protocols.
The practical analysis and validation of such protocols often depends on
synthetic data, generated by some mobility model. The model has the goal of
simulating real life scenarios~\cite{camp02wcmc} that can be used to tune
networking protocols and to evaluate their performance. A lot of work has been
done in designing realistic mobility models. Till a few years ago, the model of
choice in academic research was the random way point mobility model
(RWP)~\cite{rwp}, simple and very efficient to use in simulations.
Recently, with the aim of understanding human mobility~\cite{toronto, hui05,
hui06, milan07, UCAM-CL-TR-617}, many researchers have performed
real-life experiments by distributing wireless devices to people. From the data
gathered during the experiments, they have observed the typical distribution of
metrics such as inter-contact time (time interval between two successive
contacts of the same people) and contact duration. Inter-contact time, which
corresponds to how often people see each other, characterizes the opportunities
of packet forwarding between nodes. Contact duration, which limits the duration
of each meeting between people in mobile networks, limits the amount of data
that can be transferred.
In~\cite{hui05, hui06}, the authors show that the distribution of
inter-contact time is a power-law. Later, in~\cite{milan07}, it has been
observed that the distribution of inter-contact time is best described as a
power law in a first interval on the time scale (12 hours, in the experiments
under analysis), then truncated by an exponential cut-off. Conversely,
\cite{cai07mobicom} proves that RWP yields exponential inter-contact time
distribution. Therefore, it has been established clearly that models like RWP
are not good to simulate human mobility, raising the need of new, more realistic
mobility models for mobile ad-hoc networking.
In this paper we present small world in motion (SWIM), a simple mobility model that generates small worlds. The model is very simple to implement and very efficient in simulations. The mobility pattern of the nodes is based on a simple intuition on human mobility: People go more often to places not very far from their home and where they can meet a lot of other people. By implementing this simple rule, SWIM is able to raise social behavior among nodes, which we believe to be the base of human mobility in real life.
We validate our model using real traces and compare the distribution of inter-contact time, contact duration and number of contact distributions between nodes, showing that synthetic data that we generate match very well real data traces.
Furthermore, we show that SWIM can predict well the performance of forwarding protocols. We compare the performance of two forwarding protocols---epidemic forwarding~\cite{vahdat00epidemic} and (a simplified version of) delegation forwarding~\cite{dfw08}---on both real traces and synthetic traces generated with SWIM. The performance of the two protocols on the synthetic traces accurately approximates their performance on real traces, supporting the claim that SWIM is an excellent model for human mobility.
The rest of the paper is organized as follows: Section~\ref{sec:relatedwork} briefly reports on current work in the field; in Section~\ref{sec:solution} we present the details of SWIM and we prove theoretically that the distribution of inter-contact time in SWIM has an exponential tail, as recently observed in real life experiments; Section~\ref{sec:experiments} compares synthetic data traces to real traces and shows that the distribution of inter-contact time has a head that decays as a power law, again like in real experiments;
in Section~\ref{sec:forwarding} we show our experimental results on the behavior of two forwarding protocols on both synthetic and real traces; lastly, Section~\ref{sec:conclusions} presents some concluding remarks.
\section{Related work}
\label{sec:relatedwork}
The mobility model recently presented in~\cite{levy} generates movement traces
using a model which is similar to a random walk, except that the flight lengths
and the pause times in destinations are generated based on Levy Walks, so with
power law distribution. In the past, Levy Walks have been shown to approximate
well the movements of animals. The model produces inter-contact time
distributions similar to real world traces. However, since every node moves
independently, the model does not capture any social behavior between nodes.
In~\cite{musolesi07}, the authors present a mobility model based on social
network theory which takes as input a social network, and they discuss the community
patterns and group distributions in geographical terms. They validate their
synthetic data with real traces and show a good matching between them.
The work in \cite{LCA-CONF-2008-049} presents a new mobility model for clustered networks. Moreover, a closed-form expression for the stationary distribution of node position is given. The model captures the phenomenon of emerging clusters, observed in real partitioned networks, and correlation between the spatial speed distribution and the cluster formation.
In~\cite{workingDay}, the authors present a mobility model that simulates the every day life of people that go to their work-places in the morning, spend their day at work and go back to their homes at evenings. Each of these scenarios is a simulation per se. The synthetic data they generate match well the distribution of inter-contact time and contact durations of real traces.
In a very recent work, Barabasi et al.~\cite{barabasi08} study the trajectory of
a very large (100,000) number of anonymized mobile phone users whose position is
tracked for a six-months period. They observe that human trajectories show a
high degree of temporal and spatial regularity, each individual being
characterized by a time independent characteristic travel distance and a
significant probability to return to a few highly frequented locations. They
also show that the probability density function of individual travel distances
are heavy tailed and also are different for different groups of users and
similar inside each group. Furthermore, they also plot the frequency of visiting
different locations and show that it is well approximated by a power law. All
these observations are in contrast with the random trajectories predicted by
Levy flight and random walk models, and support the intuition behind SWIM.
\section{Small World in Motion}
\label{sec:solution}
We believe that a good mobility model should
\begin{enumerate}
\item be simple; and
\item predict well the performance of networking protocols on real mobile
networks.
\end{enumerate}
We can't overestimate the importance of having a \emph{simple} model. A simple
model is easier to understand, can be useful to distill the fundamental
ingredients of ``human'' mobility, can be easier to implement, easier to tune
(just one or few parameters), and can be useful to support theoretical work. We
are also looking for a model that generates traces with the same statistical
properties that real traces have. Statistical distribution of inter-contact time
and number of contacts, among others, are useful to characterize the behavior of
a mobile network. A model that generates traces with statistical properties that
are far from those of real traces is probably useless. Lastly, and most
importantly, a model should be accurate in predicting the performance of network
protocols on real networks. If a protocol performs well (or badly) in the model,
it should also perform well (or badly) in a real network. As accurately as
possible.
None of the mobility models in the literature meets all of these properties. The random way-point mobility model is simple, but its traces do not look real at all (and has a few other problems). Some of the other protocols we reviewed in the related work section can indeed produce traces that look real, at least with respect to some of the possible metrics, but are far from being simple. And, as far as we know, no model has been shown to predict real world performance of protocols accurately.
Here, we propose \emph{small world in motion} (SWIM), a very simple mobility
model that meets all of the above requirements. Our model is based on a couple
of simple rules that are enough to make the typical properties of real traces
emerge, just naturally. We will also show that this model can predict the
performance of networking protocols on real mobile networks extremely well.
\subsection{The intuition}
When deciding where to move, humans usually trade-off. The best supermarket or the most popular restaurant that are also not far from where they live, for example. It is unlikely (though not impossible) that we go to a place that is far from home, or that is not so popular, or interesting. Not only that, usually there are just a few places where a person spends a long period of time (for example home and work office or school), whereas there are lots of places where she stays less, like for example post office, bank, cafeteria, etc. These are the basic intuitions SWIM is built upon. Of course, trade-offs humans face in their everyday life are usually much more complicated, and there are plenty of unknown factors that influence mobility. However, we will see that simple rules---trading-off proximity and popularity, and distribution of waiting time---are enough to get a mobility model with a number of desirable properties and an excellent capability of predicting the performance of forwarding protocols.
\subsection{The model in details}
More in detail, each node is assigned a so-called \emph{home}, which is a
randomly and uniformly chosen point over the network area. Then, the node itself
assigns to each possible destination a \emph{weight} that grows with the
popularity of the place and decreases with the distance from home. The weight
represents the probability for the node to choose that place as its next
destination.
At the beginning, no node has been anywhere. Therefore, nodes do not know how
popular destinations are. The number of other nodes seen in each destination is
zero and this information is updated each time a node reaches a destination.
Since the domain is continuous, we divided the network area into many small
contiguous cells that represent possible destinations. Each cell is a square
whose size depends on the transmitting range of the nodes. Once a node
reaches a cell, it should be able to communicate with every other node that is
in the same cell at the same time. Hence, the size of the cell is such that its
diagonal is equal to the transmitting radius of the nodes. Based on this, each
node can easily build a \emph{map} of the network area, and can also calculate
the weight for each cell in the map. These information will be used to
determine the next destination: The node chooses its cell
destination randomly and proportionally with its weight, whereas the exact
destination point (remind that the network area is continuous) is taken
uniformly at random over the cell's area. Note that,
according to our experiments, it is not really necessary that the node has a
\emph{full} map of the domain. It can remember just the most popular cells it
has visited and assume that everywhere else there is nobody (until, by chance,
it chooses one of these places as destination and learn that they are indeed
popular). The general properties of SWIM holds as well.
Once a node has chosen its next destination, it moves towards it along a
straight line, with a speed proportional to the distance between the starting
point and the destination. To keep things simple, in the simulator the node
takes as its speed value exactly the distance between these two points. The
speed remains constant until the node reaches the destination; in particular,
this means that nodes complete each leg of their movement in constant time.
This may seem an oversimplification, but it is both useful and not far from
reality: useful because it simplifies the model; not far from reality since
we tend to move slowly (maybe walking) when the destination is nearby,
faster when it is farther, and extremely fast (maybe by car) when the
destination is far-off.
More specifically, let $A$ be one of the nodes and $h_A$ its home. Let also $C$
be one of the possible destination cells. We denote by $\textit{seen}(C)$
the number of nodes that node~$A$ encountered in $C$ the last time it reached
$C$. As already mentioned, this number is $0$ at the beginning of the
simulation and is updated each time node~$A$ reaches a destination in
cell~$C$. Since $h_A$ is a point, whereas $C$ is a cell, when computing the
distance of $C$ from its home $h_A$, node~$A$ refers to the center of the cell's
area; the cell being a square, its center is the midpoint of the diagonal.
The weight that node~$A$ assigns to cell $C$ is as follows:
\begin{equation}
\label{eq:weight}
w(C) = \alpha\cdot\textit{distance}(h_A, C) + (1-\alpha)\cdot\textit{seen}(C),
\end{equation}
where $\textit{distance}(h_A, C)$ is a function that decays as a power law as
the distance between the home of node~$A$ and cell~$C$ increases.
In the above equation, $\alpha$ is a constant in $[0,1]$. Since the weight that a
node assigns to a place represents the probability that the node chooses it as
its next destination, the value of $\alpha$ has a strong effect on the node's
decisions: the larger $\alpha$, the more the node tends to go to places
near its home; the smaller $\alpha$, the more the node tends to go to
``popular'' places. Even though it goes beyond the scope of this paper, we
believe it would be interesting to explore the consequences of using different
values of $\alpha$. We think that both small and large values of $\alpha$
give rise to a clustering effect among the nodes. In the first case, the
clustering effect is based on the neighborhood locality of the nodes, and is
more of a social type: nodes that ``live'' near each other tend to frequent the
same places, and therefore tend to be ``friends''. In the second case, instead,
the clustering effect arises as a consequence of the popularity of the
places.
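In code, the destination choice induced by Equation~(\ref{eq:weight}) amounts to a weighted random selection; the following sketch (ours, with the distance and popularity terms passed in precomputed) illustrates it:

```python
import random

def weight(alpha, distance_term, seen_term):
    """The weight of a cell: a trade-off between proximity and popularity."""
    return alpha * distance_term + (1 - alpha) * seen_term

def choose_destination(cells, alpha, rng=random):
    """Pick a destination cell index with probability proportional to its weight.

    `cells` is a list of (distance_term, seen_term) pairs, one per cell.
    """
    weights = [weight(alpha, d, s) for d, s in cells]
    x = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        if x < w:
            return i
        x -= w
    return len(weights) - 1
```

With $\alpha=1$ only proximity matters; with $\alpha=0$ only popularity does.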
Upon reaching its destination, the node decides how long to remain there. One of
the key observations is that in real life a person usually stays for a long time
in only a few places, whereas there are many places where they spend a short
period of time. Therefore, the distribution of the waiting time should follow a
power law. However, this is in contrast with the experimental evidence that
inter-contact time has an exponential cut-off, and with the intuition that, in
many practical scenarios, we won't spend more than a few hours standing at the
same place (our goal is to model daytime mobility). So, SWIM uses an
upper-bounded power law distribution for the waiting time, that is, a truncated
power law. Experimentally, this seems to be the correct choice.
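A truncated power law waiting time can be sampled by inverse-transform sampling; a sketch follows (the lower bound `t_min` is illustrative; only the slope and the 4-hour upper bound are specified later in the paper):

```python
import random

def truncated_power_law(a, t_min, t_max, rng=random):
    """Sample from p(t) proportional to t^(-a) on [t_min, t_max] (inverse transform)."""
    u = rng.random()
    b = 1.0 - a
    # Invert the normalized CDF of t^(-a) restricted to [t_min, t_max].
    return (t_min**b + u * (t_max**b - t_min**b)) ** (1.0 / b)
```

By construction every sample lies in $[t_{\min}, t_{\max}]$, giving the power-law head and the hard cut-off at the same time.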
\subsection{Power law and exponential decay dichotomy}
In a recent work~\cite{milan07}, it is observed that the distribution of
inter-contact time in real life experiments shows a so-called dichotomy: first
a power law until a certain point in time, then an exponential cut-off.
In~\cite{cai07mobicom}, the authors suggest that the exponential cut-off is due
to the bounded domain where nodes move. In SWIM, the inter-contact time
distribution shows exactly the same dichotomy. Moreover, our experiments show
that, if the model is properly tuned, the distribution is strikingly similar to
that of real life experiments.
We show here, with a mathematically rigorous proof, that the distribution of the
inter-contact time of nodes in SWIM has an exponential
tail. Later, we will see experimentally that the same distribution has indeed
a head distributed as a power law. Note that the proof has to cope with a
difficulty due to the social nature of SWIM---every decision a node takes in
SWIM depends not only on its own previous decisions, but also on
other nodes' decisions: where a node goes now strongly affects where it
will choose to go in the future, and it also affects where other
nodes will choose to go in the future. So, in SWIM there are no renewal
intervals, decisions influence future decisions of other nodes, and nodes never
``forget'' their past.
In the following, we will consider two nodes $A$ and $B$. Let $A(t)$, $t\ge0$,
be the position of node~$A$ at time~$t$. Similarly, $B(t)$ is the position of
node~$B$ at time~$t$. We assume that at time~$0$ the two nodes are leaving
visibility after meeting. That is, $||A(0)-B(0)||=r$, $||A(t)-B(t)||<r$ for
$t$ in a left neighborhood of~$0$, and $||A(t)-B(t)||>r$ for $t$ in a right
neighborhood of~$0$. Here, $||\cdot||$ is the
Euclidean distance in the square. The inter-contact time of nodes $A$ and $B$
is defined as:
\begin{equation*}
T_I=\inf_{t>0} \{t:||A(t)-B(t)||\le r\}.
\end{equation*}
\begin{assumption}
\label{ass:lower}
For all nodes~$A$ and all cells~$C$, the distance function
$\textit{distance}(h_A,C)$ is at least $\mu>0$.
\end{assumption}
\begin{theorem}
If $\alpha>0$ and under Assumption~\ref{ass:lower}, \emph{the tail} of the
inter-contact time distribution between nodes~$A$ and~$B$ in SWIM has an
exponential decay.
\end{theorem}
\begin{IEEEproof}
To prove the presence of the exponential cut-off, we will show that there exists
a constant $c>0$ such that
\begin{equation*}
\mathbb{P}\{T_I>t\}\le e^{-ct}
\end{equation*}
for all sufficiently large $t$. Let $t_i=i\lambda$, $i=1,2,\dotsc$, be
a sequence of times, where the constant $\lambda$ is large enough that each node
has to make a way point decision in the interval between $t_i$ and $t_{i+1}$,
and that each node has enough time to finish a leg. This is indeed possible,
since waiting times at way points are bounded above and nodes complete each leg
of movement in constant time. The idea is to take
snapshots of nodes $A$ and $B$ and see whether they see each other at each
snapshot. However, in the following, we also need that at least one of the two
nodes is not moving at each snapshot. So, let
\begin{equation*}
\begin{split}
\delta_i=\min\{ & \delta\ge 0 : \text{either $A$ or $B$ is}\\
& \text{at a way point at time $t_i+\delta$}\}.
\end{split}
\end{equation*}
Clearly, $t_i+\delta_i<t_{i+1}$, for all $i=1,2,\dotsc$.
We take the sequence of snapshots $\{t_i+\delta_i\}_{i>0}$. Let $\epsilon_i=\{||A(t_i+\delta_i)-B(t_i+\delta_i)||>r\}$ be the event that nodes $A$ and $B$ are not in visibility range at time $t_i+\delta_i$. We have that
\begin{equation*}
\mathbb{P}\{T_I>t\}\le \mathbb{P}\left\{\bigcap_{i=1}^{\lfloor t/\lambda\rfloor
-1}
\epsilon_i\right\}=\prod_{i=1}^{\lfloor t/\lambda\rfloor -1}
\mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}.
\end{equation*}
Consider $\mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}$. At
time~$t_i+\delta_i$, at least one of the two nodes is at a way point, by
definition of $\delta_i$; say node~$A$, without loss of generality. Assume that
node~$B$ is in cell $C$ (either moving or at a way point). During its last way
point decision, node~$A$ chose cell $C$ as its next way point with
probability at least $\alpha\mu>0$, thanks to Assumption~\ref{ass:lower}. If
this is the case, the two nodes~$A$ and~$B$ are now in visibility. Note that
this decision was made after the previous snapshot, and that it is independent
neither of the previous decisions taken by node~$A$ nor of those taken by
node~$B$ (due to the social nature of
decisions in SWIM). Nonetheless, with probability at least $\alpha\mu$ the two
nodes are now in visibility. Therefore,
\begin{equation*}
\mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}\le 1-\alpha\mu.
\end{equation*}
So,
\begin{equation*}
\begin{split}
\mathbb{P}\{T_I>t\} & \le \mathbb{P}\left\{\bigcap_{i=1}^{\lfloor
t/\lambda\rfloor -1}
\epsilon_i\right\}=\prod_{i=1}^{\lfloor t/\lambda\rfloor -1}
\mathbb{P}\{ \epsilon_i| \epsilon_{i-1}\cdots\epsilon_1\}\\
& \le (1-\alpha\mu)^{\lfloor t/\lambda\rfloor -1}\le e^{-ct},
\end{split}
\end{equation*}
for sufficiently large $t$; one can take, for instance, $c=-\ln(1-\alpha\mu)/(2\lambda)$.
\end{IEEEproof}
\section{Real traces}
\begin{table*}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Experimental data set \T \B & Cambridge~05 & Cambridge~06 & Infocom~05\\ \hline
Device \T & iMote & iMote & iMote\\
Network type & Bluetooth & Bluetooth & Bluetooth\\
Duration (days)& 5 & 11 & 3\\
Granularity (sec)& 120 & 600 & 120\\
Number of devices & 12 & 54 (36 mobile) & 41\\
Number of internal contacts & 4,229 & 10,873 & 22,459\\
Average contacts/pair/day & 6.4 & 0.345 & 4.6\\[1mm] \hline
\end{tabular}
\caption{The three experimental data sets}
\label{tab:realtraces}
\end{center}
\end{table*}
In order to show the accuracy of SWIM in simulating real life scenarios, we will
compare SWIM with three traces gathered during experiments done with real
devices carried by people. We will refer to these traces as \emph{Infocom~05},
\emph{Cambridge~05} and \emph{Cambridge~06}. Characteristics of these data sets
such as inter-contact and contact distribution have been observed in several
previous works~\cite{hui05, leguay06,hui06}.
\begin{itemize}
\item In \emph{Cambridge 05}~\cite{cambridge05} the authors used Intel iMotes to
collect the data. The iMotes were distributed to students of the University of
Cambridge and were programmed to log contacts of all visible mobile devices. The
number of devices used in this experiment is 12. This data set covers
5 days.
\item In \emph{Cambridge 06}~\cite{upmcCambridgeData} the authors repeated the
experiment using more devices. Also, a number of stationary nodes were deployed
in various locations around the city of Cambridge, UK. The data of the stationary
iMotes will not be used in this paper. The number of mobile devices used is 36
(plus 18 stationary devices). This data set covers 11 days.
\item In \emph{Infocom~05}~\cite{cambridgeInfocomData} the same devices as in
\emph{Cambridge} were distributed to students attending the Infocom 2005 student
workshop. The number of devices is 41. This experiment covers approximately 3
days.
\end{itemize}
Further details on the real traces we use in this paper are shown in
Table~\ref{tab:realtraces}.
\section{SWIM vs Real traces}
\label{sec:experiments}
\subsection{The simulation environment}
In order to evaluate SWIM, we built a discrete event simulator of the model.
The simulator takes as input
\begin{itemize}
\item $n$: the number of nodes in the network;
\item $r$: the transmitting radius of the nodes;
\item the simulation time in seconds;
\item coefficient $\alpha$ that appears in Equation~\ref{eq:weight};
\item the distribution of the waiting time at destination.
\end{itemize}
The output of the simulator is a text file containing records on each main event
occurrence. The main events of the system and the related outputs are:
\begin{itemize}
\item \emph{Meet} event: when two nodes come within range of each other. The
output line contains the ids of the two nodes involved and the time of
occurrence.
\item \emph{Depart} event: when two nodes that were in range of each other are
no longer so. The output line contains the ids of the two nodes involved and the
time of occurrence.
\item \emph{Start} event: When a node leaves its current location and starts
moving towards destination. The output line contains the id of the location, the
id of the node and the time of occurrence.
\item \emph{Finish} event: When a node reaches its destination. The output line
contains the id of the destination, the id of the node and the time of
occurrence.
\end{itemize}
The output does not include the geographical position of
the nodes when an event occurs; however, it is straightforward to extend
the format of the output file to include this information. In this form, the
output file contains enough information to correctly compute inter-contact
intervals, number of contacts, and duration of contacts, and to implement
state-of-the-art forwarding protocols.
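For instance, inter-contact intervals can be recovered from the \emph{Meet}/\emph{Depart} records alone; a sketch of such post-processing (record layout simplified to tuples, not the simulator's actual file format):

```python
def inter_contact_times(events):
    """Compute inter-contact intervals from a time-ordered list of
    ('meet' | 'depart', node_a, node_b, time) records, per node pair."""
    last_depart = {}
    intervals = []
    for kind, a, b, t in events:
        pair = (min(a, b), max(a, b))   # pair id, order-independent
        if kind == 'meet':
            if pair in last_depart:
                intervals.append(t - last_depart.pop(pair))
        elif kind == 'depart':
            last_depart[pair] = t
    return intervals
```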
During the simulation, the simulator keeps a vector $\textit{seen}(C)$ updated
for each node. Note that the nodes do not necessarily agree on the popularity of
each cell. As mentioned earlier, it is not necessary to keep the whole vector in
memory, and doing without it does not change the qualitative behavior of the
mobile system. However, the three scenarios Infocom~05, Cambridge~05, and
Cambridge~06 are not large enough to cause any real memory problem.
Vector~$\textit{seen}(C)$ is updated at each \emph{Finish} and \emph{Start}
event, and is not changed during movements.
\subsection{The experimental results}
In this section we present some experimental results showing that
SWIM is a simple and effective way to generate synthetic traces with the same
statistical properties as real life mobile scenarios.
\begin{figure}[!ht]
\centering
\subfigure[Distribution of the inter-contact time in Infocom~05 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Infocom/InterContacts}
\label{fig:ICT infocom}}
\qquad
\subfigure[Distribution of the contact duration for each pair of nodes in
Infocom~05 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Infocom/Contacts}
\label{fig:CONT infocom}}
\qquad
\subfigure[Distribution of the number of contacts for each pair of nodes in
Infocom~05 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Infocom/ContactsNumber}
\label{fig:CONT-NR infocom}}
\caption{SWIM and Infocom~05}
\label{fig:infocom}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Distribution of the inter-contact time in Cambridge~05 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Cambridge05/InterContacts}
\label{fig:ICT cambridge05}}
\qquad
\subfigure[Distribution of the contact duration for each pair of nodes in
Cambridge~05 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Cambridge05/Contacts}
\label{fig:CONT cambridge05}}
\qquad
\subfigure[Distribution of the number of contacts for each pair of nodes in
Cambridge~05 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Cambridge05/ContactsNumber}
\label{fig:CONT-NR cambridge05}}
\caption{SWIM and Cambridge~05}
\label{fig:cambridge05}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Distribution of the inter-contact time in Cambridge~06 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Cambridge06/InterContacts}
\label{fig:ICT cambridge}}
\qquad
\subfigure[Distribution of the contact duration for each pair of nodes in
Cambridge~06 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Cambridge06/Contacts}
\label{fig:CONT cambridge}}
\qquad
\subfigure[Distribution of the number of contacts for each pair of nodes in
Cambridge~06 and in SWIM]{
\centering
\includegraphics[width=.4\textwidth]{graphics/Cambridge06/ContactsNumber}
\label{fig:CONT-NR cambridge}}
\caption{SWIM and Cambridge~06}
\label{fig:cambridge06}
\end{figure}
The idea is to tune the few parameters used by SWIM so as to simulate
Infocom~05, Cambridge~05, and Cambridge~06. For each of the experiments we
consider the following metrics: the CCDF (complementary cumulative distribution
function) of the inter-contact time, the contact distribution per pair of nodes,
and the number of contacts per pair of nodes. The
inter-contact time distribution is important in mobile networking since it
characterizes the frequency with which information can be transferred between
people in real life. It has been widely studied for real traces in a large
number of previous papers~\cite{hui05, hui06, leguay06, cai07mobicom,
milan07, musolesi07, cai08mobihoc}. The contact distribution per pair
of nodes and the number of contacts per pair of nodes are also important, as
they represent a way to measure the relationship between people. As also
discussed in~\cite{hui07community, hui07socio, hui08mobihoc}, it is natural to
think that if two people spend more time together and meet each other
frequently, they are familiar with each other. Familiarity is important in
detecting communities, which may help significantly improve the design and
performance of forwarding protocols in mobile environments such as
DTNs~\cite{hui08mobihoc}. Let us now present the experimental results obtained
with SWIM when simulating each of the real data set scenarios.
Since the scenarios we consider use iMotes, we model our network nodes according
to iMote properties (outdoor range $30~\textrm{m}$). We initially distribute the
nodes over a network area of size $300\times300~\textrm{m}^2$. In the following,
we assume for the sake of simplicity that the network area is a square of side
1, and that the node transmission range is 0.1. In all three experiments we
use a power law with slope $a=1.45$, with an upper bound of 4 hours, to generate
the waiting times of nodes upon arriving at a destination. We use as the
$\textit{seen}(C)$ function the fraction of nodes seen in cell~$C$, and as
$\textit{distance}(x,C)$ the following
\begin{equation*}
\textit{distance}(x,C)=\frac{1}{\left(1+k||x-y||\right)^2},
\end{equation*}
where $x$ is the position of the home of the current node, and $y$ is the
position of the center of cell~$C$. Positions are coordinates in the square of
side 1. The constant $k$ is a scaling factor, set to $0.05$, which accounts for
the small size of the experiment area. Note that the function
$\textit{distance}(x,C)$ decays as a power law. We arrived at this choice after
a large set of experiments, and the choice is heavily influenced by scaling
factors.
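The concrete proximity term above is straightforward to implement; a sketch (coordinates in the unit square, $k=0.05$ as in the text):

```python
import math

def distance_term(home, cell_center, k=0.05):
    """Power-law decaying proximity term: 1 / (1 + k * ||x - y||)^2."""
    dx = home[0] - cell_center[0]
    dy = home[1] - cell_center[1]
    return 1.0 / (1.0 + k * math.hypot(dx, dy)) ** 2
```

The term equals 1 at the home itself and decreases monotonically with the distance to the cell center.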
We start with Infocom~05. The number of nodes $n$ and the simulation time are
the same as in the real data set: 41 nodes and 3 days, respectively. Since the
area of the real experiment was quite small (a large hotel), we deem that
$300\times300~\textrm{m}^2$ is a good approximation of the real scenario. In
Infocom~05 there were many parallel sessions; typically, in such a case one
chooses to follow the sessions one finds most interesting, so people with the
same interests are more likely to meet each other. In this experiment, the value
of $\alpha$ for which the output best fits the real traces is $\alpha=0.75$. The
results of this experiment are shown in Figure~\ref{fig:infocom}.
We continue with the Cambridge scenarios. The number of nodes and the simulation
time are the same as in the real data set: 12 nodes and 5 days, respectively. In
the Cambridge data set, the iMotes were distributed to two groups of students,
mainly undergraduate years~1 and~2, and also to some PhD and Master students.
Obviously, students of the same year are more likely to see each other
often. In this case, the value of $\alpha$ which best fits the real traces is
$\alpha=0.95$. This choice proves to be fine for both Cambridge~05 and
Cambridge~06. The results of these experiments are shown in
Figures~\ref{fig:cambridge05} and~\ref{fig:cambridge06}.
In all three experiments, SWIM proves to be an excellent way to generate
synthetic traces that approximate real traces. It is particularly interesting
that the same choice of parameters yields good results for all the metrics under
consideration at the same time.
\section{Comparative performance of forwarding protocols}
\label{sec:forwarding}
\begin{figure*}
\label{fig:forwarding}
\centering
\subfigure{
\centering
\includegraphics[width=.31\textwidth]{graphics/Infocom/PerformanceInfocom}
\label{fig:perf infocom}}
\subfigure{
\centering
\includegraphics[width=.31\textwidth]{graphics/Cambridge05/PerformanceCambridge05}
\label{fig:perf cambridge05}}
\subfigure{
\centering
\includegraphics[width=.31\textwidth]{graphics/Cambridge06/PerformanceCambridge06}
\label{fig:perf cambridge 06}}
\caption{Performance of both forwarding protocols on real traces and SWIM
traces. EFw denotes Epidemic Forwarding and DFwd denotes Delegation Forwarding.}
\label{fig:performance}
\end{figure*}
In this section we show further experimental results with SWIM, related to the
evaluation of two simple forwarding protocols for DTNs: Epidemic
Forwarding~\cite{vahdat00epidemic} and a simplified version of Delegation
Forwarding~\cite{dfw08} in which each node has a random constant as its quality.
Of course, this simplified version of Delegation Forwarding is not very
interesting and surely not particularly efficient. However, we use it just as a
worst-case benchmark against Epidemic Forwarding, with the understanding that
our goal is to validate the quality of SWIM, not the quality of the
forwarding protocol.
In the following experiments, we use the same tuning as in the previous section.
That is, the parameters input to SWIM are not ``optimized'' for each of the
forwarding protocols; they are exactly those that were used to fit the real
traces with synthetic traces.
For the evaluation of the two forwarding protocols we use the same assumptions
and the same way of generating traffic to be routed as in~\cite{dfw08}.
For each trace and forwarding protocol a set of messages is generated with
sources and destinations chosen uniformly at random, and generation times form a
Poisson process averaging one message per 4 seconds.
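Such traffic can be generated by drawing exponential inter-arrival times with mean 4 seconds; a sketch (ours, not the actual traffic generator of~\cite{dfw08}):

```python
import random

def generate_messages(duration, rate, n_nodes, rng=random):
    """Generate (time, src, dst) messages as a Poisson process of the given
    rate (messages per second), with src/dst chosen uniformly at random."""
    t, messages = 0.0, []
    while True:
        t += rng.expovariate(rate)   # exponential inter-arrival times
        if t >= duration:
            return messages
        src = rng.randrange(n_nodes)
        dst = rng.randrange(n_nodes - 1)
        if dst >= src:
            dst += 1                 # ensure dst != src
        messages.append((t, src, dst))
```

With `rate = 0.25` this averages one message every 4 seconds, as in the text.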
The nodes are assumed to have infinite buffers and carry all message replicas
they receive until the end of the simulation. The metrics we are concerned with
are: \emph{cost}, which is the number of replicas per generated message;
\emph{success rate} which is the fraction of generated messages for which at
least one replica is delivered; \emph{average delay} which is the average
duration per delivered message from its generation time to the first arrival of
one of its replicas.
As in \cite{dfw08}, we isolated 3-hour periods from each data trace (real and
synthetic) for our study; each simulation therefore runs for 3 hours. To avoid
end effects, no messages were generated in the last hour of each trace.
In the two forwarding protocols, upon contact between nodes $A$ and $B$, node
$A$ decides which messages from its message queue to forward in the following way:
\begin{trivlist}
\item
\textbf{Epidemic Forwarding:} Node $A$ forwards message~$m$ to node $B$ unless
$B$ already has a replica of $m$. This protocol achieves the best possible
performance, so it yields upper bounds on success rate and average delay.
However, it does also have a high cost.
\item
\textbf{(Simplified) Delegation Forwarding:} Each node is initially given a
quality, distributed uniformly in $(0,1]$. Each message is given a rate
which, at every instant, equals the best quality among the nodes that the
message has seen so far. When generated, a message inherits its rate from the
node that generates it (the sender of that message).
Node $A$ forwards message $m$ to node $B$ if the quality of node $B$
is greater than the rate of the copy of $m$ that $A$ holds. If $m$ is forwarded
to $B$, both nodes $A$ and $B$ update the rate of their copy of $m$ to the
quality of $B$.
\end{trivlist}
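The two per-contact forwarding decisions can be condensed into a few lines; a sketch (function names are ours):

```python
def epidemic_forward(b_has_replica):
    """Epidemic Forwarding rule: A forwards m to B unless B already has a replica."""
    return not b_has_replica

def delegation_forward(quality_b, rate_m):
    """Simplified Delegation Forwarding rule: A forwards its copy of m to B iff
    B's quality exceeds the copy's current rate; on forwarding, both copies
    adopt B's quality as their new rate.  Returns (forward?, updated rate)."""
    if quality_b > rate_m:
        return True, quality_b
    return False, rate_m
```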
Figure~\ref{fig:performance} shows how the two forwarding protocols perform on
both real and synthetic traces generated with SWIM.
As can be seen, the results are excellent---SWIM predicts the performance of
both protocols very accurately. Most importantly, this is not due to a
customized tuning optimized for these forwarding protocols; it is just the
same output that SWIM generated with the tuning of the previous section.
This can be important methodologically: to tune SWIM on a particular scenario,
one can concentrate on a few well-known and important statistical properties
such as inter-contact time, number of contacts, and duration of contacts, and
then be reasonably confident that the model is properly tuned and usable to get
a meaningful estimate of the performance of a forwarding protocol.
\section{Conclusions}
\label{sec:conclusions}
In this paper we presented SWIM, a new mobility model for ad hoc networking.
SWIM is simple, generates traces that look real, and provides an accurate
estimation of the performance of forwarding protocols in real mobile networks.
SWIM can be used to improve our understanding of human mobility, to support
theoretical work, and to evaluate the performance of networking protocols in
scenarios that scale up to very large mobile systems, for which we do not have
real traces.
\IEEEtriggeratref{7}
\bibliographystyle{ieeetr}
\section{Introduction}
\label{SecIntro}
Let $\mathcal{G}=(V,E)$ be a connected undirected graph, with $V$ at most countable and each vertex $x\in V$ of finite degree. We do not allow self-loops; the edges, however, may be multiple. Given an edge $e\in E$, we will denote by
$e_{+}$ and $e_{-}$ its end-vertices, even though $e$ is non-oriented and one can interchange $e_{+}$ and $e_{-}$.
Each edge $e\in E$ is endowed with a conductance $W_{e}>0$. There may also be a killing measure $\kappa=(\kappa_{x})_{x\in V}$ on the vertices.
We consider the \textit{Markov jump process} $(X_{t})_{t\ge0}$ on $V$ which, being at $x\in V$, jumps along an adjacent edge $e$ with rate $W_{e}$. Moreover, if
$\kappa_{x}\neq 0$, the process is killed at $x$ with rate $\kappa_{x}$ (the process is not defined after that time). We denote by $\zeta$ the time up to which $X_{t}$ is defined. If $\zeta<+\infty$, then either the process has been killed by the killing measure $\kappa$ (and $\kappa \not\equiv 0$) or it has gone off to infinity in finite time
(and $V$ is infinite). We will assume that the process $X$ is transient; if $V$ is finite, this means that
$\kappa\not\equiv 0$. $\mathbb{P}_{x}$ will denote the law of $X$ started from $x$.
Let $(G(x,y))_{x,y\in V}$ be the Green function of $X_{t}$:
\begin{displaymath}
G(x,y)=G(y,x)=\mathbb{E}_{x}\left[\int_{0}^{\zeta} 1_{\{X_{t}=y\}} dt\right].
\end{displaymath}
Let $\mathcal{E}$ be the Dirichlet form defined on functions $f$ on $V$ with finite support:
\begin{eqnarray}\label{Dirichlet-form}
\mathcal{E}(f,f)=\sum_{x\in V}\kappa_{x} f(x)^{2}+
\sum_{e\in E}W_{e}(f(e_{+})-f(e_{-}))^{2}.
\end{eqnarray}
$P_{\varphi}$ will be the law of $(\varphi_{x})_{x\in V}$ the centred
\textit{Gaussian free field} (GFF) on $V$ with covariance
$E_{\varphi}[\varphi_{x}\varphi_{y}]=G(x,y)$. In case $V$ is finite, the density of $P_{\varphi}$ is
\begin{displaymath}
\dfrac{1}{(2\pi)^{\frac{\vert V\vert}{2}}\sqrt{\det G}}
\exp\left(-\dfrac{1}{2}\mathcal{E}(f,f)\right)\prod_{x\in V} df_{x}.
\end{displaymath}
Given $U$ a finite subset of $V$, and $f$ a function on $U$, $P^{U,f}_{\varphi}$ will denote the law of
the GFF $\varphi$ conditioned to equal $f$ on $U$.
$(\ell_{x}(t))_{x\in V, t\in [0,\zeta]}$ will denote the family of local times of $X$:
\begin{displaymath}
\ell_{x}(t)=\int_{0}^{t}1_{\{X_{s}=x\}} ds.
\end{displaymath}
For all $x\in V$, $u>0$, let
\begin{displaymath}
\tau_{u}^{x}=\inf\lbrace t\geq 0; \ell_{x}(t)>u\rbrace.
\end{displaymath}
Recall the generalized second Ray-Knight theorem on discrete graphs by Eisenbaum, Kaspi, Marcus, Rosen and Shi \cite{ekmrs} (see also
\cite{MarcusRosen2006MarkovGaussianLocTime,Sznitman2012LectureIso}):
\begin{2ndRK}
For any $u>0$ and $x_{0}\in V$,
\begin{center}
$\left(\ell_{x}(\tau_{u}^{x_{0}})+\dfrac{1}{2}\varphi_{x}^{2}\right)_{x\in V}$
under $\mathbb{P}_{x_{0}}(\cdot \vert \tau_{u}^{x_{0}}<\zeta)\otimes P^{\lbrace x_{0}\rbrace,0}_{\varphi}$
\end{center}
has the same law as
\begin{center}
$\left(\dfrac{1}{2}\varphi_{x}^{2}\right)_{x\in V}$
under $P^{\lbrace x_{0}\rbrace,\sqrt{2u}}_{\varphi}$.
\end{center}
\end{2ndRK}
Sabot and Tarrès showed in \cite{SabotTarres2015RK} that the so-called ``magnetized'' reverse Vertex-Reinforced Jump Process provides an inversion of the generalized second Ray-Knight theorem, in the sense that it enables one to retrieve the law of $(\ell_x(\tau_u^{x_0}), \varphi^2_x)_{x\in V}$ conditioned on $\left(\ell_x(\tau_u^{x_0})+\frac{1}{2}\varphi^2_x\right)_{x\in V}$. The jump rates of that latter process can be interpreted as the two-point functions of the Ising model associated to the time-evolving weights.
However in \cite{SabotTarres2015RK} the link with the Ising model is only implicit, and a natural question is whether Ray-Knight inversion can be described in a simpler form if we enlarge the state space of the dynamics, and in particular include the ``hidden'' spin variables.
The answer is positive, and goes through an extension of the Ray-Knight isomorphism introduced by Lupu \cite{Lupu2014LoopsGFF}, which couples the sign of the GFF to the path of the Markov chain. The Ray-Knight inversion will turn out to take a rather simple form in Theorem \ref{thm-Poisson} of the present paper, where it will be defined not only through the spin variables but also through random currents associated to the field via an extra Poisson point process.
The paper is organised as follows.
In Section \ref{sec:srk} we recall some background on loop-soup isomorphisms and on related couplings, and we state and prove a signed version of the generalized second Ray-Knight theorem. We begin in Section \ref{sec:lejan} with a statement of Le Jan's isomorphism, which couples the square of the Gaussian free field to loop soups, and recall how the generalized second Ray-Knight theorem can be seen as its corollary; for more details see \cite{lejan4}. In Subsection \ref{sec:lupu} we state Lupu's isomorphism, which extends Le Jan's isomorphism and couples the sign of the GFF to the loop soups, using a cable-graph extension of the GFF and of the Markov chain. Lupu's isomorphism yields an interesting realisation of the well-known FK-Ising coupling, and provides as well a ``Current+Bernoulli=FK'' coupling lemma \cite{lupu-werner}, which occurs in the relationship between the discrete and cable-graph versions. We briefly recall those couplings in Sections \ref{fkising} and \ref{randomcurrent}, as they are implicit in this paper. In Section \ref{sec:glupu} we state and prove the generalized second Ray-Knight ``version'' of Lupu's isomorphism, which we aim to invert.
Section \ref{sec:inversion} is devoted to the statements of the inversions of those isomorphisms. In Section \ref{sec_Poisson} we state a signed version of the inversion of the generalized second Ray-Knight theorem through an extra Poisson point process, namely Theorem \ref{thm-Poisson}. In Section \ref{sec_dicr_time} we provide a discrete-time description of the process, whereas in Section \ref{sec_jump} we give an alternative version of that process through jump rates, which can be seen as an annealed version of the first one. We deduce a signed inversion of Le Jan's isomorphism for loop soups in Section \ref{sec:lejaninv}, and an inversion of the coupling of random currents with FK-Ising in Section \ref{sec:coupinv}.
Finally Section \ref{sec:proof} is devoted to the proof of Theorem \ref{thm-Poisson}: Section \ref{sec:pfinite} deals with the case of a finite graph without killing measure, and Section \ref{sec:pgen} deduces the proof in the general case.
\section{Le Jan's and Lupu's isomorphisms}
\label{sec:srk}
\subsection{Loop soups and Le Jan's isomorphism}
\label{sec:lejan}
The \textit{loop measure} associated to the Markov jump process
$(X_{t})_{0\leq t<\zeta}$ is defined as follows. Let $\mathbb{P}^{t}_{x,y}$ be the bridge probability measure from
$x$ to $y$ in time $t$ (conditioned on $t<\zeta$). Let $p_{t}(x,y)$ be the transition probabilities of
$(X_{t})_{0\leq t<\zeta}$.
Let $\mu_{\rm loop}$ be the measure on time-parametrised nearest-neighbour based loops (i.e. loops with a starting site)
\begin{displaymath}
\mu_{\rm loop}=\sum_{x\in V}\int_{t>0}\mathbb{P}^{t}_{x,x} p_{t}(x,x) \dfrac{dt}{t}.
\end{displaymath}
The loops will be considered here up to a rotation of parametrisation (with the corresponding pushforward measure induced by $\mu_{\rm loop}$), that is to say a loop $(\gamma(t))_{0\leq t\leq t_{\gamma}}$ will be the same as
$(\gamma(T+t))_{0\leq t\leq t_{\gamma}-T}\circ (\gamma(T+t-t_{\gamma}))_{t_{\gamma}-T\leq t\leq t_{\gamma}}$, where $\circ$ denotes the concatenation of paths.
A \textit{loop soup} of intensity $\alpha>0$, denoted
$\mathcal{L}_{\alpha}$, is a Poisson random measure of intensity
$\alpha \mu_{\rm loop}$. We see it as a random collection of loops in $\mathcal{G}$. Observe that a.s. above each vertex
$x\in V$, $\mathcal{L}_{\alpha}$ contains infinitely many trivial ``loops'' reduced to the vertex $x$. With positive probability, there are also non-trivial loops that visit several vertices.
Let $L_{.}(\mathcal{L}_{\alpha})$ be the \textit{occupation field} of $\mathcal{L}_{\alpha}$ on $V$ i.e., for all $x\in V$,
\begin{displaymath}
L_x(\mathcal{L}_{\alpha})=
\sum_{(\gamma(t))_{0\leq t\leq t_{\gamma}}\in\mathcal{L}_{\alpha}}
\int_{0}^{t_{\gamma}}1_{\{\gamma(t)=x\}} dt.
\end{displaymath}
In \cite{LeJan2011Loops} Le Jan shows that for transient Markov jump processes, $L_x(\mathcal{L}_{\alpha})<+\infty$ for all $x\in V$ a.s. For $\alpha=\frac{1}{2}$ he identifies the law of
$L_.(\mathcal{L}_{\alpha})$:
\begin{IsoLeJan}
$L_.(\mathcal{L}_{1/2})=\left(L_x(\mathcal{L}_{1/2})\right)_{x\in V}$ has the same law as
$\dfrac{1}{2}\varphi^2=\left(\dfrac{1}{2}\varphi_{x}^{2}\right)_{x\in V}$
under $P_{\varphi}$.
\end{IsoLeJan}
Let us briefly recall how Le Jan's isomorphism enables one to retrieve the generalized second Ray-Knight theorem stated in Section \ref{SecIntro}: for more details, see for instance \cite{lejan4}. We assume that $\kappa$ is supported by $x_0$: the general case can be dealt with by an argument similar to the proof of Proposition \ref{PropKillingCase}.
Let $D=V\setminus\{x_0\}$, and note that the isomorphism in particular implies that $L_.(\mathcal{L}_{1/2})$ conditionally on $L_{x_0}(\mathcal{L}_{1/2})=u$ has the same law as $\varphi^2/2$ conditionally on $\varphi_{x_0}^2/2=u$.
On the one hand, given the classical energy decomposition, we have $\varphi=\varphi^D+\varphi_{x_0}$, with $\varphi^D$ the GFF associated to the restriction of $\mathcal{E}$ to $D$, where $\varphi^D$ and $\varphi_{x_0}$ are independent. Now $\varphi^2/2$ conditionally on $\varphi_{x_0}^2/2=u$ has the law of $(\varphi^D+\eta\sqrt{2u})^2/2$, where $\eta$ is the sign of $\varphi_{x_0}$, which is independent of $\varphi^D$. But $\varphi^D$ is symmetric, so that the latter also has the law of $(\varphi^D+\sqrt{2u})^2/2$.
On the other hand, the loop soup $\mathcal{L}_{1/2}$ can be decomposed into the two independent loop soups $\mathcal{L}_{1/2}^D$ contained in $D$ and $\mathcal{L}_{1/2}^{(x_0)}$ hitting $x_0$. Now $L_.(\mathcal{L}_{1/2}^D)$ has the law of $(\varphi^D)^2/2$ and $L_.(\mathcal{L}_{1/2}^{(x_0)})$ conditionally on $L_{x_0}(\mathcal{L}_{1/2}^{(x_0)})=u$ has the law of the occupation field of the Markov chain $\ell(\tau_{u}^{x_{0}})$
under $\mathbb{P}_{x_{0}}(\cdot \vert \tau_{u}^{x_{0}}<\zeta)$, which enables us to conclude.
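Summarising the two decompositions: conditionally on $\tau_{u}^{x_{0}}<\zeta$, and with $\ell(\tau_{u}^{x_{0}})$ independent of $\varphi^{D}$, we recover the identity in law
\begin{displaymath}
\left(\ell_{x}(\tau_{u}^{x_{0}})+\frac{1}{2}\left(\varphi^{D}_{x}\right)^{2}\right)_{x\in V}
\overset{\text{(law)}}{=}
\left(\frac{1}{2}\left(\varphi^{D}_{x}+\sqrt{2u}\right)^{2}\right)_{x\in V},
\end{displaymath}
which is the form of the generalized second Ray-Knight theorem recalled in Section \ref{SecIntro}.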
\subsection{Lupu's isomorphism}
\label{sec:lupu}
As in \cite{Lupu2014LoopsGFF}, we consider the \textit{metric graph} $\tilde{\mathcal{G}}$ associated to $\mathcal{G}$. Each edge $e$ is replaced by a continuous line of length
$\frac{1}{2}W_{e}^{-1}$.
The GFF $\varphi$ on $\mathcal{G}$ with law $P_\varphi$ can be extended to a GFF $\tilde{\varphi}$ on $\tilde{\mathcal{G}}$ as follows. Given $e\in E$, one considers inside $e$ a conditionally independent Brownian bridge, actually a bridge of a $\sqrt{2} \times$
\textit{standard Brownian motion}, of length $\frac{1}{2}W_{e}^{-1}$, with end-values
$\varphi_{e_{-}}$ and $\varphi_{e_{+}}$. This provides a continuous field on the metric graph which satisfies the spatial Markov property.
Similarly one can define a standard Brownian motion $(B^{\tilde{\mathcal{G}}}_{t})_{0\le t\le \tilde{\zeta}}$ on $\tilde{\mathcal{G}}$, whose trace on $\mathcal{G}$, indexed by the local times at $V$, has the same law as the Markov process $(X_t)_{t\ge0}$ on $V$ with jump rate $W_e$ across an adjacent edge $e$, up to time $\zeta$, as explained in Section 2 of \cite{Lupu2014LoopsGFF}. One can associate a measure $\tilde{\mu}$ on time-parametrized continuous loops, and let $\tilde{\mathcal{L}}_{\frac{1}{2}}$ be the Poisson point process of loops of intensity $\tilde{\mu}/2$: the discrete loop soup $\mathcal{L}_{\frac{1}{2}}$ can be obtained from $\tilde{\mathcal{L}}_{\frac{1}{2}}$ by taking the print of the latter on $V$.
Lupu introduced in \cite{Lupu2014LoopsGFF} an isomorphism linking the GFF $\tilde{\varphi}$ and the loop soup $\tilde{\mathcal{L}}_{\frac{1}{2}}$ on $\tilde{\mathcal{G}}$.
\begin{theorem}[Lupu's Isomorphism,\cite{Lupu2014LoopsGFF}]
\label{thm:Lupu}
There is a coupling between the Poisson ensemble of loops $\tilde{\mathcal{L}}_{\frac{1}{2}}$ and $(\tilde{\varphi}_y)_{y\in\tilde{\mathcal{G}}}$ defined above, such that the two following constraints hold:
\begin{itemize}
\item For all $y\in\tilde{\mathcal{G}}$, $L_y(\tilde{{\mathcal{L}}}_{\frac{1}{2}})=\frac{1}{2}\tilde{\varphi}_y^2$
\item The clusters of loops of $\tilde{\mathcal{L}}_{\frac{1}{2}}$ are exactly the sign clusters of $(\tilde{\varphi}_y)_{y\in\tilde{\mathcal{G}}}$.
\end{itemize}
Conditionally on $(|\tilde{\varphi}_y|)_{y\in\tilde{\mathcal{G}}}$, the sign of $\tilde{\varphi}$ on each of its connected components is distributed independently and uniformly in $\{-1,+1\}$.
\end{theorem}
Lupu's isomorphism and the idea of using metric graphs were applied in \cite{Lupu2015ConvCLE} to show that on the discrete half-plane $\mathbb{Z}\times\mathbb{N}$, the scaling limits of outermost boundaries of clusters of loops in loop soups are the Conformal Loop Ensembles $\mathrm{CLE}_{4}$.
Let $\mathcal{O}(\tilde{\varphi})$ (resp. $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$) be the set of edges $e\in E$ on which $\tilde{\varphi}$ (resp. the occupation field of $\tilde{\mathcal{L}}_{\frac{1}{2}}$) does not touch $0$, in other words such that the whole edge $e$ remains in the same cluster of $\tilde{\varphi}$ (resp. of $\tilde{\mathcal{L}}_{\frac{1}{2}}$). Let $\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$ be the set of edges $e\in E$ that are crossed (i.e. visited consecutively) by the trace of the loops $\mathcal{L}_{\frac{1}{2}}$ on $V$.
In order to translate Lupu's isomorphism back onto the initial graph $\mathcal{G}$, one needs to describe on one hand the distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on the values of $\varphi$, and on the other hand the distribution of $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$ conditionally on $\mathcal{L}_{\frac{1}{2}}$ and on the set of crossed edges $\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$ on the discrete graph $\mathcal{G}$. These two distributions are described respectively in Subsections \ref{fkising} and \ref{randomcurrent}, and provide realisations of the FK-Ising coupling and the ``Current+Bernoulli=FK'' coupling lemma \cite{lupu-werner}.
\subsection{The FK-Ising distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on $|\varphi|$}
\label{fkising}
\begin{lemma}
\label{lem:fki}
Conditionally on $(\varphi_{x})_{x\in V}$, $(1_{e\in \mathcal{O}(\tilde{\varphi})})_{e\in E}$
is a family of independent random variables and
\begin{displaymath}
\mathbb{P}\left(e\not\in \mathcal{O}(\tilde{\varphi})\vert \varphi\right)=
\left\lbrace
\begin{array}{ll}
1 & \text{if}~ \varphi_{e_{-}}\varphi_{e_{+}}<0,\\
\exp\left(-2W_{e}\varphi_{e_{-}}\varphi_{e_{+}}\right)
& \text{if}~ \varphi_{e_{-}}\varphi_{e_{+}}>0.
\end{array}
\right.
\end{displaymath}
\end{lemma}
\begin{proof}
Conditionally on $(\varphi_{x})_{x\in V}$, the restrictions of $\tilde{\varphi}$ to the edges are constructed as independent Brownian bridges, so that $(1_{e\in \mathcal{O}(\tilde{\varphi})})_{e\in E}$ are independent random variables, and it follows from the reflection principle that, if $\varphi_{e_{-}}\varphi_{e_{+}}>0$, then
$$\mathbb{P}\left(e\not\in \mathcal{O}(\tilde{\varphi})\vert \varphi\right)=\dfrac{\exp\left(-\frac{1}{2}W_{e}(\varphi_{e_{-}}+\varphi_{e_{+}})^{2}\right)}
{\exp\left(-\frac{1}{2}W_{e}(\varphi_{e_{-}}-\varphi_{e_{+}})^{2}\right)}=\exp\left(-2W_{e}\varphi_{e_{-}}\varphi_{e_{+}}\right).$$
\end{proof}
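For the reader's convenience, let us recall the standard reflection-principle fact behind the ratio in the proof above: for a Brownian bridge $(\beta_{s})_{0\le s\le T}$ of a Brownian motion with variance $2$ per unit of time, with end-values $a,b>0$,
\begin{displaymath}
\mathbb{P}\left(\min_{0\le s\le T}\beta_{s}\le 0\right)=\exp\left(-\frac{2ab}{2T}\right),
\end{displaymath}
which for $T=\frac{1}{2}W_{e}^{-1}$, $a=\varphi_{e_{-}}$ and $b=\varphi_{e_{+}}$ equals $\exp\left(-2W_{e}\varphi_{e_{-}}\varphi_{e_{+}}\right)$.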
Let us now recall how the conditional probability in Lemma \ref{lem:fki} yields a realisation of the FK-Ising coupling.
Assume $V$ is finite. Let $(J_{e})_{e\in E}$ be a family of positive weights. An \textit{Ising model} on $V$ with interaction constants $(J_{e})_{e\in E}$ is a
probability measure on configurations of spins $({\sigma}_{x})_{x\in V}\in \{+1,-1\}^V$ such that
\begin{displaymath}
\mathbb{P}^{\rm Isg}_{J}((\sigma_x)_{x\in V})=
\dfrac{1}{\mathcal{Z}^{\rm Isg}_{J}}\exp\left(\sum_{e\in E}
J_{e}\sigma_{e_{+}}\sigma_{e_{-}}\right).
\end{displaymath}
An \textit{FK-Ising random cluster model} with weights
$(1-e^{-2J_{e}})_{e\in E}$ is a random configuration of open (value $1$) and closed
edges (value $0$) such that
\begin{displaymath}
\mathbb{P}^{\rm FK-Isg}_{J}((\omega_{e})_{e\in E})=
\dfrac{1}{\mathcal{Z}^{\rm FK-Isg}_{J}}
2^{\sharp~\text{clusters}}
\prod_{e\in E}(1-e^{-2J_{e}})^{\omega_{e}}(e^{-2J_{e}})^{1-\omega_{e}},
\end{displaymath}
where ``$\sharp~\text{clusters}$'' denotes the number of clusters created by open edges.
The well-known FK-Ising and Ising coupling reads as follows.
\begin{proposition}[FK-Ising and Ising coupling]
\label{FK-Ising}
Given an FK-Ising model, sample on each cluster an independent uniformly distributed spin. The spins are then distributed according to the Ising model. Conversely, given a spin configuration
$\hat{\sigma}$ following the Ising distribution, declare each edge $e$ such that
$\hat{\sigma}_{e_{-}}\hat{\sigma}_{e_{+}}<0$ closed, and each edge $e$ such that
$\hat{\sigma}_{e_{-}}\hat{\sigma}_{e_{+}}>0$ open with probability
$1-e^{-2J_{e}}$, independently across edges. Then the open edges are distributed according to the FK-Ising model.
The two couplings between FK-Ising and Ising are the same.
\end{proposition}
Consider the GFF $\varphi$ on $\mathcal{G}$ distributed according to $P_{\varphi}$. Let
$J_{e}(\vert\varphi\vert)$ be the random interaction constants
\begin{displaymath}
J_{e}(\vert\varphi\vert)=W_{e}\vert\varphi_{e_{-}}\varphi_{e_{+}}\vert.
\end{displaymath}
Conditionally on $\vert\varphi\vert$,
$(\operatorname{sign}(\varphi_{x}))_{x\in V}$ follows an Ising distribution with interaction constants $(J_{e}(\vert\varphi\vert))_{e\in E}$:
indeed, the Dirichlet form (\ref{Dirichlet-form}) can be written as
\begin{displaymath}
\mathcal{E}(\varphi,\varphi)=\sum_{x\in V}\kappa_{x} \varphi(x)^{2}+
\sum_{x\in V}(\varphi(x))^2(\sum_{y\sim x} W_{x,y})-
2\sum_{e\in E}J_e(\vert\varphi\vert) \operatorname{sign}(\varphi(e_{+}))\operatorname{sign}(\varphi(e_{-})).
\end{displaymath}
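Spelling out the argument: since $P_{\varphi}$ has density proportional to $e^{-\frac{1}{2}\mathcal{E}(\varphi,\varphi)}$ and the first two sums above depend on $\vert\varphi\vert$ only, conditionally on $\vert\varphi\vert$ we get
\begin{displaymath}
\mathbb{P}\left(\operatorname{sign}(\varphi)=\sigma \:\middle\vert\: \vert\varphi\vert\right)
\propto\exp\left(\sum_{e\in E}J_{e}(\vert\varphi\vert)\,\sigma_{e_{+}}\sigma_{e_{-}}\right),
\qquad \sigma\in\{+1,-1\}^{V},
\end{displaymath}
which is precisely the Ising distribution $\mathbb{P}^{\rm Isg}_{J(\vert\varphi\vert)}$.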
Similarly, when $\varphi\sim P_{\varphi}^{\{x_0\},\sqrt{2u}}$ has boundary condition $\sqrt{2u}\ge 0$ on $x_0$, then $(\operatorname{sign}(\varphi_{x}))_{x\in V}$ has an Ising distribution
with interaction $(J_{e}(\vert\varphi\vert))_{e\in E}$ and conditioned on $\sigma_{x_0}=+1$.
Now, conditionally on $\varphi$, $\mathcal{O}(\tilde{\varphi})$ has FK-Ising distribution with weights
$(1-e^{-2J_{e}(\vert\varphi\vert)})_{e\in E}$. Indeed, the probability for $e\in\mathcal{O}(\tilde{\varphi})$ conditionally on $\varphi$ is $1-e^{-2J_{e}(\vert\varphi\vert)}$, by Lemma \ref{lem:fki}, as in Proposition \ref{FK-Ising}.
Note that, given that $\mathcal{O}(\tilde{\varphi})$ has FK-Ising distribution, the fact that the sign of $\varphi$ on each of its connected components is distributed independently and uniformly in $\{-1,1\}$ can be seen either as a consequence of Proposition \ref{FK-Ising}, or as following from Theorem \ref{thm:Lupu}.
Given $\varphi=(\varphi_x)_{x\in V}$ on the discrete graph $\mathcal{G}$, we introduce in Definition \ref{def_FK-Ising} the random set of edges $\mathcal{O}(\varphi)$ which has the distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on $\varphi=(\varphi_x)_{x\in V}$.
\begin{definition}\label{def_FK-Ising}
We let $\mathcal{O}(\varphi)$ be a random set of edges which has the distribution of $\mathcal{O}(\tilde{\varphi})$ conditionally on $\varphi=(\varphi_x)_{x\in V}$ given by Lemma \ref{lem:fki}.
\end{definition}
\subsection{Distribution of $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$ conditionally on $\mathcal{L}_{\frac{1}{2}}$ }
\label{randomcurrent}
The distribution of $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$ conditionally on $\mathcal{L}_{\frac{1}{2}}$ can be retrieved from Corollary 3.6 in \cite{Lupu2014LoopsGFF}, which reads as follows.
\begin{lemma}[Corollary 3.6 in \cite{Lupu2014LoopsGFF}]
\label{36} Conditionally on $\mathcal{L}_{\frac{1}{2}}$, the events $e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$, for $e\in E\setminus\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$, are independent and have probability
\begin{equation}
\label{cp}
\exp\left(-2 W_{e} \sqrt{L_{e_{+}}(\mathcal{L}_{\frac{1}{2}})L_{e_{-}}(\mathcal{L}_{\frac{1}{2}})}\right).
\end{equation}
\end{lemma}
This result gives rise, together with Theorem \ref{thm:Lupu}, to the following discrete version of Lupu's isomorphism, which is stated without any recourse to the cable graph induced by $\mathcal{G}$.
\begin{definition}
\label{def:out}
Let $(\omega_{e})_{e\in E}\in\lbrace 0,1\rbrace^{E}$ be a percolation defined as follows: conditionally on $\mathcal{L}_{\frac{1}{2}}$, the random variables
$(\omega_{e})_{e\in E}$ are independent, and $\omega_{e}$ equals $0$ with conditional probability given by \eqref{cp}.
Let $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ be the set of edges:
\begin{displaymath}
\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})=\mathcal{O}(\mathcal{L}_{\frac{1}{2}})
\cup \lbrace e\in E\vert \omega_{e}=1\rbrace.
\end{displaymath}
\end{definition}
\begin{proposition}[Discrete version of Lupu's isomorphism, Theorem 1 bis in \cite{Lupu2014LoopsGFF}]
\label{PropIsoLupuLoops}
Given a loop soup $\mathcal{L}_{\frac{1}{2}}$, let $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ be as in Definition \ref{def:out}.
Let $(\sigma_{x})_{x\in V}\in\lbrace -1,+1\rbrace^{V}$ be random spins taking constant values on
clusters induced by $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$
($\sigma_{e_{-}}=\sigma_{e_{+}}$ if $e\in \mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$) and such that the values on each cluster, conditional on $\mathcal{L}_{\frac{1}{2}}$ and $\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$, are independent and uniformly distributed. Then
\begin{displaymath}
\left(\sigma_{x}\sqrt{2 L_{x}(\mathcal{L}_{\frac{1}{2}})}\right)_{x\in V}
\end{displaymath}
is a Gaussian free field distributed according to $P_{\varphi}$.
\end{proposition}
Proposition \ref{PropIsoLupuLoops} induces the following coupling between FK-Ising and random currents.
If $V$ is finite, a \textit{random current model} on $\mathcal{G}$ with weights
$(J_{e})_{e\in E}$ is a random assignment to each edge $e$ of a non-negative integer
$\hat{n}_{e}$ such that for all $x\in V$,
\begin{displaymath}
\sum_{e~\text{adjacent to}~x}\hat{n}_{e}
\end{displaymath}
is even, which is called the \textit{parity condition}. The probability of a configuration
$(n_{e})_{e\in E}$ satisfying the parity condition is
\begin{displaymath}
\mathbb{P}^{\rm RC}_{J}(\forall e\in E, \hat{n}_{e}=n_{e})=
\dfrac{1}{\mathcal{Z}^{\rm RC}_{J}}\prod_{e\in E}\dfrac{(J_{e})^{n_{e}}}{n_{e}!},
\end{displaymath}
where actually $\mathcal{Z}^{\rm RC}_{J}=\mathcal{Z}^{\rm Isg}_{J}$. Let
\begin{displaymath}
\mathcal{O}(\hat{n})=\lbrace e\in E\vert \hat{n}_{e}>0\rbrace.
\end{displaymath}
The open edges in $\mathcal{O}(\hat{n})$ induce clusters on the graph $\mathcal{G}$.
Given a loop soup $\mathcal{L}_{\alpha}$, we denote by $N_{e}(\mathcal{L}_{\alpha})$ the number of times the loops in $\mathcal{L}_{\alpha}$ cross the nonoriented edge $e\in E$. The transience of the Markov jump process $X$ implies that
$N_{e}(\mathcal{L}_{\alpha})$ is a.s. finite for all $e\in E$. If $\alpha=\frac{1}{2}$, we have the following identity (see for instance \cite{Werner2015}):
\begin{LoopsRC}
Assume $V$ is finite and consider the loop soup $\mathcal{L}_{\frac{1}{2}}$. Conditionally on the occupation field $(L_{x}(\mathcal{L}_{\frac{1}{2}}))_{x\in V}$,
$(N_{e}(\mathcal{L}_{\frac{1}{2}}))_{e\in E}$ is distributed as a random current model with weights
$\left(2W_{e}\sqrt{L_{e_{-}}(\mathcal{L}_{\frac{1}{2}})L_{e_{+}}
(\mathcal{L}_{\frac{1}{2}})}\right)_{e\in E}$. If $\varphi$ is the GFF on $\mathcal{G}$ given by Le Jan's or Lupu's isomorphism, then these weights are
$(J_{e}(\vert\varphi\vert))_{e\in E}$.
\end{LoopsRC}
Conditionally on the occupation field
$(L_{x}(\mathcal{L}_{\frac{1}{2}}))_{x\in V}$,
$\mathcal{O}(\mathcal{L}_{\frac{1}{2}})$ is thus the set of edges occupied by a random current, and
$\mathcal{O}_{+}(\mathcal{L}_{\frac{1}{2}})$ the set of edges occupied by an FK-Ising model.
Lemma \ref{lem:fki} and Proposition \ref{PropIsoLupuLoops} imply the following coupling, as noted by Lupu and Werner in
\cite{lupu-werner}.
\begin{proposition}[Random current and FK-Ising coupling,
\cite{lupu-werner}]
\label{RCFKIsing}
Assume $V$ is finite. Let $\hat{n}$ be a random current on $\mathcal{G}$ with weights
$(J_{e})_{e\in E}$. Let $(\omega_{e})_{e\in E}\in\lbrace 0,1\rbrace^{E}$ be an independent percolation, each edge being opened (value $1$) independently with probability
$1-e^{-J_{e}}$. Then
\begin{displaymath}
\mathcal{O}(\hat{n})\cup\lbrace e\in E\vert \omega_{e}=1\rbrace
\end{displaymath}
is distributed like the open edges in an FK-Ising with weights
$(1-e^{-2 J_{e}})_{e\in E}$.
\end{proposition}
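As a consistency check of the weights: under Le Jan's isomorphism $2L_{x}(\mathcal{L}_{\frac{1}{2}})=\varphi_{x}^{2}$, so that the conditional probability \eqref{cp} becomes
\begin{displaymath}
\exp\left(-2W_{e}\sqrt{L_{e_{+}}(\mathcal{L}_{\frac{1}{2}})L_{e_{-}}(\mathcal{L}_{\frac{1}{2}})}\right)
=\exp\left(-W_{e}\vert\varphi_{e_{-}}\varphi_{e_{+}}\vert\right)=e^{-J_{e}(\vert\varphi\vert)},
\end{displaymath}
i.e. the percolation $(\omega_{e})_{e\in E}$ of Definition \ref{def:out} opens each edge with probability $1-e^{-J_{e}(\vert\varphi\vert)}$, matching the Bernoulli probability appearing in Proposition \ref{RCFKIsing}.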
\subsection{Generalized second Ray-Knight ``version'' of Lupu's isomorphism}\label{sec:glupu}
We are now in a position to state the coupled version of the second Ray-Knight theorem.
\begin{theorem}
\label{Lupu}
Let $x_{0}\in V$. Let $(\varphi_{x}^{(0)})_{x\in V}$ be distributed according to $P_{\varphi}^{\lbrace x_{0}\rbrace,0}$, and define $\mathcal{O}(\varphi^{(0)})$ as in Definition \ref{def_FK-Ising}.
Let $X$ be an independent Markov jump process started from $x_{0}$.
Fix $u>0$. If $\tau_{u}^{x_{0}}<\zeta$, we
let $\mathcal{O}_{u}$ be the random subset of $E$ which contains $\mathcal{O}(\varphi^{(0)})$, the edges used by the path $(X_{t})_{0\leq t\leq \tau_{u}^{x_{0}}}$, and additional edges $e$ opened conditionally independently with probability
\begin{displaymath}
1-e^{W_{e}\vert\varphi_{e_{-}}^{(0)}\varphi_{e_{+}}^{(0)}\vert -
W_{e}\sqrt{(\varphi_{e_{-}}^{(0)2}+2\ell_{e_{-}}(\tau_{u}^{x_{0}}))
(\varphi_{e_{+}}^{(0)2}+2\ell_{e_{+}}(\tau_{u}^{x_{0}}))}}.
\end{displaymath}
We let $\sigma\in\lbrace -1,+1\rbrace^{V}$ be random spins sampled uniformly independently on each cluster induced by
$\mathcal{O}_{u}$, pinned at $x_0$, i.e. $\sigma_{x_0}=1$, and define
\begin{displaymath}
\varphi_{x}^{(u)}:=\sigma_{x}\sqrt{\varphi_{x}^{(0)2}+2\ell_{x}(\tau_{u}^{x_{0}})}.
\end{displaymath}
Then, conditionally on $\tau_{u}^{x_{0}}<\zeta$, $\varphi^{(u)}$ has distribution $P_{\varphi}^{\lbrace x_{0}\rbrace,\sqrt{2u}}$, and
$\mathcal{O}_{u}$ has the distribution of $\mathcal{O}(\varphi^{(u)})$ conditionally on $\varphi^{(u)}$.
\end{theorem}
\begin{remark}
One consequence of that coupling is that the path $(X_{s})_{s\le \tau_{u}^{x_{0}}}$ stays in the
positive connected component of ${x_0}$ for $\varphi^{(u)}$. This yields a coupling between the range
of the Markov chain and the sign component of $x_{0}$ inside a GFF with distribution $P_{\varphi}^{\lbrace x_{0}\rbrace,\sqrt{2u}}$.
\end{remark}
\noindent{\it Proof of Theorem \ref{Lupu}:~}
The proof is based on
\cite{Lupu2014LoopsGFF}. Let $D=V\setminus\{x_0\}$, and let $\tilde{\mathcal{L}}_{\frac{1}{2}}$ be the loop soup of intensity $1/2$ on the cable graph $\tilde{\mathcal{G}}$, which we decompose into $\tilde{\mathcal{L}}_{\frac{1}{2}}^{(x_0)}$ (resp. $\tilde{\mathcal{L}}_{\frac{1}{2}}^{D}$) the loop soup hitting (resp. not hitting) $x_0$, which are independent. We let $\mathcal{L}_{\frac{1}{2}}$ and $\mathcal{L}_{\frac{1}{2}}^{(x_0)}$ (resp. $\mathcal{L}_{\frac{1}{2}}^{D}$) be the prints of these loop soups on $V$ (resp. on $D=V\setminus\{x_0\}$). We condition on $L_{x_0}(\mathcal{L}_{\frac{1}{2}})=u$.
Theorem \ref{thm:Lupu} implies (recall also Definition \ref{def_FK-Ising}) that we can couple $\tilde{\mathcal{L}}_{\frac{1}{2}}^{D}$ with $\varphi^{(0)}$ so that
$L_x(\mathcal{L}_{\frac{1}{2}}^{D})=\varphi_x^{(0)2}/2$ for all $x\in V$, and
$\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^{D})=\mathcal{O}(\varphi^{(0)})$.
Define
$\varphi^{(u)}=(\varphi^{(u)}_x)_{x\in V}$ from
$\tilde{\mathcal{L}}_{\frac{1}{2}}$ by, for all $x\in V$,
\begin{equation*}
\label{abs}
|\varphi_x^{(u)}|=\sqrt{2L_x(\mathcal{L}_{\frac{1}{2}})}
\end{equation*}
and $\varphi_x^{(u)}=\sigma_x|\varphi_x^{(u)}|$, where $\sigma\in\{-1,+1\}^V$ are random spins sampled uniformly independently on each cluster induced by $\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})$, pinned at $x_0$, i.e. $\sigma_{x_0}=1$. Then, by Theorem \ref{thm:Lupu}, $\varphi^{(u)}$ has distribution $P_{\varphi}^{\lbrace x_{0}\rbrace,\sqrt{2u}}$.
For all $x\in V$, we have
$$L_x(\tilde{\mathcal{L}}_{\frac{1}{2}})=\frac{\varphi_x^{(0)2}}{2}+L_x(\mathcal{L}_{\frac{1}{2}}^{(x_0)}).$$
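Together with the definition of $\vert\varphi^{(u)}\vert$ (the occupation fields of $\tilde{\mathcal{L}}_{\frac{1}{2}}$ and $\mathcal{L}_{\frac{1}{2}}$ coincide on $V$), this yields
\begin{displaymath}
\vert\varphi_{x}^{(u)}\vert=\sqrt{2L_{x}(\tilde{\mathcal{L}}_{\frac{1}{2}})}
=\sqrt{\varphi_{x}^{(0)2}+2L_{x}(\mathcal{L}_{\frac{1}{2}}^{(x_{0})})},\qquad x\in V,
\end{displaymath}
which is the expression of $\varphi^{(u)}$ in the statement, once $L_{.}(\mathcal{L}_{\frac{1}{2}}^{(x_{0})})$ is identified with $\ell_{.}(\tau_{u}^{x_{0}})$ at the end of the proof.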
On the other hand, conditionally on $L_.(\mathcal{L}_{\frac{1}{2}})$,
\begin{align*}
&\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\,|\,
e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\cup\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))
=\frac{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}))}{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\cup\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}=
\frac{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\,|\,
e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\,|\,
e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}\\
&=\frac{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}})\,|\,
e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}))}{\mathbb{P}(e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)\,|\,
e\not\in\mathcal{O}(\mathcal{L}_{\frac{1}{2}}^D))}
=\exp\left(-2W_e\sqrt{L_{e_-}(\mathcal{L}_{\frac{1}{2}})L_{e_+}(\mathcal{L}_{\frac{1}{2}})}
+2W_e\sqrt{L_{e_-}(\mathcal{L}_{\frac{1}{2}}^D)L_{e_+}(\mathcal{L}_{\frac{1}{2}}^D)}\right),
\end{align*}
where we use in the third equality that the event $e\not\in\mathcal{O}(\tilde{\mathcal{L}}_{\frac{1}{2}}^D)$ is measurable with respect to the $\sigma$-field generated by $\tilde{\mathcal{L}}_{\frac{1}{2}}^D$, which is independent of $\tilde{\mathcal{L}}_{\frac{1}{2}}^{(x_0)}$, and where we use Lemma \ref{36} in the fourth equality, for $\tilde{\mathcal{L}}_{\frac{1}{2}}$ and for $\tilde{\mathcal{L}}_{\frac{1}{2}}^D$.
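To compare this expression with the opening probability in the statement, note that $2L_{x}(\mathcal{L}_{\frac{1}{2}}^{D})=\varphi_{x}^{(0)2}$ and $2L_{x}(\mathcal{L}_{\frac{1}{2}})=\varphi_{x}^{(0)2}+2L_{x}(\mathcal{L}_{\frac{1}{2}}^{(x_{0})})$, so that
\begin{displaymath}
2W_{e}\sqrt{L_{e_{-}}(\mathcal{L}_{\frac{1}{2}}^{D})L_{e_{+}}(\mathcal{L}_{\frac{1}{2}}^{D})}
=W_{e}\vert\varphi_{e_{-}}^{(0)}\varphi_{e_{+}}^{(0)}\vert,
\qquad
2W_{e}\sqrt{L_{e_{-}}(\mathcal{L}_{\frac{1}{2}})L_{e_{+}}(\mathcal{L}_{\frac{1}{2}})}
=W_{e}\sqrt{\left(\varphi_{e_{-}}^{(0)2}+2L_{e_{-}}(\mathcal{L}_{\frac{1}{2}}^{(x_{0})})\right)
\left(\varphi_{e_{+}}^{(0)2}+2L_{e_{+}}(\mathcal{L}_{\frac{1}{2}}^{(x_{0})})\right)},
\end{displaymath}
which identifies the conditional closure probability with the complement of the opening probability in the statement of the theorem.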
We conclude the proof by observing that $\mathcal{L}_{\frac{1}{2}}^{(x_0)}$ conditionally on $L_{x_0}(\mathcal{L}_{\frac{1}{2}}^{(x_0)})=u$ has the law of the occupation field of the Markov chain $\ell(\tau_{u}^{x_{0}})$
under $\mathbb{P}_{x_{0}}(\cdot \vert \tau_{u}^{x_{0}}<\zeta)$.
{\hfill $\Box$}
\section{Inversion of the signed isomorphism}
\label{sec:inversion}
In \cite{SabotTarres2015RK}, Sabot and Tarrès give a new proof of the generalized second Ray-Knight theorem together with a construction that inverts the coupling between the square of a GFF conditioned by its value at a vertex $x_{0}$ and the excursions of the jump process $X$ from and to $x_{0}$.
In this paper we are interested in inverting the coupling of Theorem \ref{Lupu} with the signed GFF: more precisely, we want to describe the law of
$(X_t)_{0\le t\le \tau_u^{x_0}}$ conditionally on $\varphi^{(u)}$.
We present in Section \ref{sec_Poisson}
an inversion involving an extra Poisson process. We provide in Section \ref{sec_dicr_time} a discrete-time description of the process and in Section \ref{sec_jump} an alternative description via jump rates. Sections \ref{sec:lejaninv} and \ref{sec:coupinv} are respectively dedicated to a signed inversion of Le Jan's isomorphism for loop soups, and to an inversion of the coupling of random current with FK-Ising.
\subsection{A description via an extra Poisson point process}\label{sec_Poisson}
Let $(\check \varphi_x)_{x\in V}$ be a real function on $V$ such that
$\check\varphi_{x_0}=+\sqrt{2u}$ for some $u>0$. Set
$$
\check \Phi_x=\vert\check\varphi_x\vert, \;\;\sigma_x=\operatorname{sign}(\check\varphi_x).
$$
We define a self-interacting process $(\check X_t, (\check n_e(t))_{e\in E})$ living on $V\times {\mathbb{N}}^E$ as follows.
The process $\check X$ starts at $\check X(0)=x_0$.
For $t\ge 0$, we set
$$
\check\Phi_x(t)=\sqrt{(\check\Phi_x)^2-2\check\ell_x(t)},\;\;\forall x\in V,\;\;\;\hbox{ and }\;
J_e(\check\Phi(t))=W_e \check\Phi_{e_-}(t)\check\Phi_{e_+}(t), \;\; \forall e\in E,
$$
where $\check\ell_x(t)=\int_0^t{{\mathbbm 1}}_{\{\check X_s=x\}}ds$ is the local time of the process $\check X$ up to time $t$.
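For later use, observe that as long as $\check X_t=x$ and $t<\check T$, we have $\frac{d}{dt}\check\ell_{x}(t)=1$, hence
\begin{displaymath}
\frac{d}{dt}\check\Phi_{x}(t)=-\frac{1}{\check\Phi_{x}(t)}
\qquad\text{and}\qquad
\frac{d}{dt}J_{e}(\check\Phi(t))=-W_{e}\,\frac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}
\quad\text{for } e=\{x,y\},
\end{displaymath}
while $\check\Phi_{y}(t)$, $y\neq x$, and the parameters $J_{e'}(\check\Phi(t))$ of the edges $e'$ not adjacent to $x$ remain constant. This elementary computation underlies the jump rates of Section \ref{sec_jump}.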
For each edge $e\in E$, let $(N_e(u))_{u\ge 0}$ be an independent Poisson point process on ${\mathbb R}_+$ with intensity 1.
We set
$$
\check n_e(t)=
\begin{cases}
N_e\big(2J_e(\check\Phi(t))\big), &\hbox{ if } \sigma_{e_-}\sigma_{e_+}=+1,
\\
0, &\hbox{ if } \sigma_{e_-}\sigma_{e_+}=-1,
\end{cases}
$$
where we write $J_e(t):=J_e(\check\Phi(t))$ for short.
We also denote by $\check {\mathcal C}(t)\subset E$ the configuration of open edges, i.e. of edges $e$ such that $\check n_e(t)>0$.
As time increases, the interaction parameters $J_{e}(\check\Phi(t))$ decrease for the edges neighbouring $\check X_t$, and at some random times $\check n_e(t)$ may drop
by 1.
The process $(\check X_t)_{t\ge 0}$ is defined as the process that jumps only at the times when one of the $\check n_e(t)$ drops by 1, as follows:
\begin{itemize}
\item
if $\check n_e(t)$ decreases by 1 at time $t$, but this does not create a new cluster in $\check {\mathcal C}(t)$, then $\check X_t$ crosses the edge
$e$ with probability ${1/2}$ or does not move with probability ${1/2}$,
\item
if $\check n_e(t)$ decreases by 1 at time $t$, and this does create a new cluster in $\check {\mathcal C}(t)$,
then $\check X_t$ moves to (or stays at) the unique extremity
of $e$ which is in the cluster of the origin $x_0$ in the new configuration.
\end{itemize}
We set
$$
\check T:=\inf\{t\ge 0:\;\; \exists x\in V \hbox{ s.t. } \check\Phi_x(t)=0\}.
$$
Clearly, the process is well-defined up to time $\check T$.
\begin{proposition}
For all $0\le t\le \check T$, $\check X_t$ is in the connected component of $x_0$ in the configuration $\check {\mathcal C}(t)$. If $V$ is finite,
the process ends at $x_0$, i.e. $\check X_{\check T}=x_0$.
\end{proposition}
\begin{theorem}
\label{thm-Poisson}
Assume that $V$ is finite.
With the notation of Theorem \ref{Lupu}, conditioned on $\varphi^{(u)}=\check\varphi$, $(X_{t})_{t\le \tau_{u}^{x_{0}}}$ has the law
of $(\check X_{\check T-t})_{0\le t\le \check T}$.
Moreover, conditioned on $\varphi^{(u)}=\check\varphi$, $(\varphi^{(0)},\mathcal{O}(\varphi^{(0)}))$ has the law of
$(\sigma'_x\check\Phi_x(\check T), \check{\mathcal C}(\check T))$ where $(\sigma'_x)_{x\in V}\in \lbrace -1,+1\rbrace^{V}$ are random spins sampled uniformly independently on
each cluster induced by $\check{\mathcal C}(\check T)$,
with the condition that $\sigma'_{x_0}=+1$.
If $V$ is infinite, then $P_{\varphi}^{\lbrace x_{0}\rbrace, \sqrt{2u}}$-a.s.,
the process $\check X$ (with the initial condition $\check\varphi=\varphi^{(u)}$)
ends at $x_0$, i.e. $\check T<+\infty$ and $\check X_{\check T}=x_0$.
All the previous conclusions for the finite case still hold.
\end{theorem}
\subsection{Discrete time description of the process}
\label{sec_dicr_time}
We give a discrete time description of the process
$(\check X_t, (\check n_e(t))_{e\in E})$
that appears in the previous section.
Let $t_{0}=0$ and let $0<t_{1}<\dots<t_{j}$ be the stopping times at which one of the
stacks $\check n_e(t)$ decreases by $1$, where $t_{j}$ is the time at which one of the stacks is completely depleted. It is elementary to check the following:
\begin{proposition}
\label{PropDiscrTime}
The discrete time process
$(\check X_{t_{i}}, (\check n_e(t_{i}))_{e\in E})_{0\leq i\leq j}$ is a stopped Markov process. The transition from time $i-1$ to $i$ is the following:
\begin{itemize}
\item first choose an edge $e$ adjacent to the vertex $\check X_{t_{i-1}}$,
according to a probability proportional to $\check n_e(t_{i-1})$,
\item decrease the stack $\check n_e(t_{i-1})$ by 1,
\item
if decreasing $\check n_e(t_{i-1})$ by 1 does not create a new cluster in
$\check {\mathcal C}(t_{i-1})$, then $\check X_{t_{i-1}}$ crosses the edge
$e$ with probability ${1/2}$ or does not move with probability ${1/2}$,
\item
if decreasing $\check n_e(t_{i-1})$ by 1 does create a new cluster in
$\check {\mathcal C}(t_{i-1})$,
then $\check X_{t_{i-1}}$ moves to (or stays at) the unique extremity of $e$ which is in the cluster of the origin $x_0$ in the new configuration.
\end{itemize}
\end{proposition}
\subsection{An alternative description via jump rates}\label{sec_jump}
We provide an alternative description of the process $(\check X_t, \check {\mathcal C}(t))$ that appears in Section \ref{sec_Poisson}.
\begin{proposition}\label{prop-jump}
The process $(\check X_t, \check{\mathcal C}(t))$ defined in Section \ref{sec_Poisson} can alternatively be described by its jump rates:
conditionally on its past at time $t$, if $\check X_t=x$, $y\sim x$ and $\lbrace x,y\rbrace\in \check{\mathcal{C}}(t)$, then
\begin{itemize}
\item[(1)] $\check X$ jumps to $y$ without modification of $\check{\mathcal C}(t)$ at rate
\begin{displaymath}
W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}
\end{displaymath}
\item[(2)] the edge $\lbrace x,y\rbrace$ is closed in $\check{\mathcal C}(t)$ at rate
\begin{displaymath}
2W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}
\left(e^{2W_{x,y}\check\Phi_{x}(t)\check\Phi_{y}(t)}-1\right)^{-1}
\end{displaymath}
and, conditionally on that last event:
- if $y$ is connected to
$x$ in the configuration $\check {\mathcal C}(t)\setminus\{x,y\}$, then $\check X$ simultaneously jumps to $y$ with probability $1/2$ and stays at $x$ with probability $1/2$;
- otherwise $\check X$ moves to (or stays at) the unique extremity
of $\{x,y\}$ which is in the cluster of the origin $x_0$ in the new configuration.
\end{itemize}
\end{proposition}
\begin{remark}
It is clear from this description that the joint process $(\check X_t, \check {\mathcal C}(t), \check \Phi(t))$ is a Markov process, well defined up to the time
$$
\check T:=\inf\{t\ge 0:\;\; \exists x\in V, \hbox{ s.t. } \check\Phi_x(t)=0\}.
$$
\end{remark}
\begin{remark}
One can also retrieve the process in Section \ref{sec_Poisson} from the representation in Proposition \ref{prop-jump} as follows.
Consider the representation of Proposition \ref{prop-jump} on the graph where each edge $e$ is replaced by a large number $N$ of
parallel edges with conductance $W_e/N$. Consider now $\check n^{(N)}_{x,y}(t)$ the number of parallel edges that are open in the configuration
$\check {\mathcal C}(t)$ between $x$ and $y$. Then, when $N\to\infty$, $(\check n^{(N)}(t))_{t\ge0}$ converges in law to
$(\check n(t))_{t\ge0}$, defined in Section \ref{sec_Poisson}.
\end{remark}
\noindent {\it Proof of Proposition \ref{prop-jump}:~}
Assume $\check X_t=x$, fix $y\sim x$ and let $e=\{x,y\}$. Recall that $\{x,y\}\in\check{\mathcal C}(t)$ iff $\check n_e(t)\ge1$.
Let us first prove (1):
\begin{align*}
&\mathbb{P}\left(\check X\text{ jumps to $y$ on time interval $[t,t+\Delta t]$ without modification of }\check{\mathcal C}(t)\,|\,\{x,y\}\in\check{\mathcal C}(t)\right)\\
&=\frac{1}{2}\mathbb{P}\left(\check n_e(t)-\check n_e(t+\Delta t)=1,\,\check n_e(t+\Delta t)\ge1\,|\,\check n_e(t)\ge1\right)\\
&=\frac{1}{2}(2J_e(t)-2J_e(t+\Delta t))+o(\Delta t)=W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}\Delta t+o(\Delta t).
\end{align*}
Similarly, (2) follows from the following computation:
\begin{align*}
&\mathbb{P}\left(\{x,y\}\text{ closed in }\check{\mathcal C}(t+\Delta t)\,|\,\{x,y\}\in\check{\mathcal C}(t)\right)
=\mathbb{P}\left(\check n_e(t+\Delta t)=0\,|\,\check n_e(t)\ge1\right)\\
&=\frac{\mathbb{P}\left(\check n_e(t)=1,\,\check n_e(t+\Delta t)=0\right)}{\mathbb{P}\left(\check n_e(t)\ge1\right)}
=\frac{e^{-2J_e(t)}}{1-e^{-2J_e(t)}}\left(2J_e(t)-2J_e(t+\Delta t)\right)+o(\Delta t)\\
&=2W_{x,y}\dfrac{\check\Phi_{y}(t)}{\check\Phi_{x}(t)}
\left(e^{2J_e(t)}-1\right)^{-1}\Delta t+o(\Delta t).
\end{align*}
{\hfill $\Box$}
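As a numerical sanity check of the closure rate in item (2), one can compare the exact one-edge closure probability with the claimed rate. The sketch below is ours, not part of the text: the parameter values are hypothetical, the walker is assumed to sit at $x$ throughout $[0,\Delta t]$ (jumps of $\check X$ are ignored, as in the computation above), and $n_e(t)=N_e(2J_e(t))$ with $N_e$ a unit Poisson point process.

```python
import math

# Hypothetical one-edge data (our choice, purely for illustration):
W = 0.7                  # conductance W_{x,y}
phi_x, phi_y = 1.3, 0.9  # current values of check-Phi at x and y

def J(t):
    # While the walker sits at x, only Phi_x decreases:
    # Phi_x(t) = sqrt(Phi_x(0)^2 - 2 t), Phi_y stays constant.
    return W * phi_y * math.sqrt(phi_x ** 2 - 2 * t)

def p_closed(dt):
    # Exact P(n_e(dt) = 0 | n_e(0) >= 1) for n_e(t) = N_e(2 J(t)),
    # with N_e a unit Poisson point process (walker jumps ignored).
    a, b = 2 * J(dt), 2 * J(0.0)
    return math.exp(-a) * (-math.expm1(-(b - a))) / (-math.expm1(-b))

# Rate of item (2): 2 W (Phi_y / Phi_x) / (e^{2 J} - 1)
rate = 2 * W * (phi_y / phi_x) / math.expm1(2 * J(0.0))

for dt in (1e-2, 1e-4, 1e-6):
    print(dt, p_closed(dt) / dt, rate)  # p/dt approaches the rate
```

As $\Delta t\to 0$, $p_{\text{closed}}(\Delta t)/\Delta t$ converges to the rate, in line with the computation in the proof.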
We easily deduce from Proposition \ref{prop-jump} and Theorem \ref{thm-Poisson2} the following alternative inversion of the coupling in Theorem \ref{Lupu}.
\begin{theorem}\label{thm-jump-rates}
With the notation of Theorem \ref{Lupu}, conditionally on $(\varphi^{(u)},\mathcal{O}_{u})$, $(X_{t})_{t\le \tau_{u}^{x_{0}}}$ has the law
of the self-interacting process $(\check X_{\check T-t})_{0\le t\le \check T}$ defined by the jump rates of Proposition \ref{prop-jump}
starting with
$$
\check \Phi_x=\sqrt{(\varphi_{x}^{(0)})^2+2\ell_{x}(\tau_{u}^{x_{0}})} \hbox{ and } \check{\mathcal C}(0)=\mathcal{O}_{u}.
$$
Moreover $(\varphi^{(0)},\mathcal{O}(\varphi^{(0)}))$ has the same law as
$(\sigma'\check\Phi(\check T), \check{\mathcal C}(\check T))$ where $(\sigma'_x)_{x\in V}$ is a configuration of signs obtained by picking a sign uniformly at random independently on
each connected component of $\check{\mathcal C}(\check T)$, with the condition that the component of $x_0$ has a $+$ sign.
\end{theorem}
\subsection{A signed version of Le Jan's isomorphism for loop soup}
\label{sec:lejaninv}
Let us first recall how the loops in $\mathcal{L}_{\alpha}$ are connected to the excursions of the jump process $X$.
\begin{proposition}[From excursions to loops]
\label{PropPD}
Let $\alpha>0$ and $x_{0}\in V$.
$L_{x_{0}}(\mathcal{L}_{\alpha})$ is distributed according to a Gamma
$\Gamma(\alpha, G(x_{0},x_{0}))$ law, where $G$ is the Green's function. Let $u>0$, and consider the path $(X_{t})_{0\leq t\leq \tau_{u}^{x_{0}}}$ conditioned on $\tau_{u}^{x_{0}}<\zeta$. Let $(Y_{j})_{j\geq 1}$ be an independent Poisson-Dirichlet partition $PD(0,\alpha)$ of $[0,1]$. Let $S_{0}=0$ and
\begin{displaymath}
S_{j}=\sum_{i=1}^{j}Y_{i}.
\end{displaymath}
Let
\begin{displaymath}
\tau_{j}= \tau_{u S_{j}}^{x_{0}}.
\end{displaymath}
Consider the family of paths
\begin{displaymath}
\left((X_{\tau_{j-1}+t})_{0\leq t\leq \tau_{j}-\tau_{j-1}}\right)_{j\geq 1}.
\end{displaymath}
It is a countable family of loops rooted in $x_{0}$. It has the same law as the family of all the loops in $\mathcal{L}_{\alpha}$ that visit $x_{0}$, conditioned on $L_{x_0}(\mathcal{L}_{\alpha})=u$.
\end{proposition}
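For readers who wish to experiment with the partition in Proposition \ref{PropPD}, here is a minimal sketch (ours, not part of the text) that samples the lengths $(Y_j)_{j\ge1}$ in size-biased order, assuming the standard convention that the size-biased order of a $PD(0,\alpha)$ partition is $GEM(\alpha)$, i.e. stick-breaking with $\operatorname{Beta}(1,\alpha)$ proportions.

```python
import random

def sample_gem(alpha, tol=1e-12, rng=random):
    """Stick-breaking sample of GEM(alpha), the size-biased order of a
    PD(0, alpha) partition of [0, 1].  The infinite sequence is truncated
    once the remaining stick is below `tol` (an approximation)."""
    sticks, rest = [], 1.0
    while rest > tol:
        b = rng.betavariate(1.0, alpha)  # B_i ~ Beta(1, alpha)
        sticks.append(rest * b)          # Y_i = B_i * prod_{j<i} (1 - B_j)
        rest *= 1.0 - b
    return sticks

random.seed(0)
ys = sample_gem(alpha=0.5)
# Partial sums S_0 = 0, S_j = Y_1 + ... + Y_j, as in the proposition
ss = [sum(ys[:j]) for j in range(len(ys) + 1)]
```

The times $\tau_j=\tau_{uS_j}^{x_0}$ then cut the conditioned path into the loops of the proposition.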
Next we describe how to invert the discrete version of Lupu's isomorphism (Proposition \ref{PropIsoLupuLoops}) for the loop-soup, in the same way as in Theorem \ref{thm-Poisson}.
Let $(\check \varphi_x)_{x\in V}$ be a real function on $V$ such that
$\check\varphi_{x_0}=+\sqrt{2u}$ for some $u>0$. Set
$$
\check \Phi_x=\vert\check\varphi_x\vert, \;\;\sigma_x=\operatorname{sign}(\check\varphi_x).
$$
Let $(x_{i})_{1\leq i\leq\vert V\vert}$ be an enumeration of $V$ (which may be infinite).
We define by induction the self-interacting processes
$((\check X_{i,t})_{1\leq i\leq\vert V\vert},
(\check n_e(t))_{e\in E})$.
$\check{T}_{i}$ will denote the end-time for $\check X_{i,t}$, and
$\check{T}^{+}_{i}=\sum_{1\leq j\leq i}\check{T}_{j}$.
By definition, $\check{T}^{+}_{0}=0$.
$L_x(t)$ will denote
\begin{displaymath}
L_{x}(t):=\sum_{1\leq i\leq\vert V\vert}
\check{\ell}_{x}(i,0\vee(t-\check{T}^{+}_{i-1})),
\end{displaymath}
where $\check{\ell}_{x}(i,t)$ are the occupation times for
$\check X_{i,t}$.
For $t\ge 0$, we set
$$
\check\Phi_x(t)=\sqrt{(\check\Phi_x)^2-2L_x(t)},\;\;\forall x\in V,\;\;\;\hbox{ and }\;
J_e(\check\Phi(t))=W_e \check\Phi_{e-}(t)\check\Phi_{e+}(t), \;\; \forall e\in E.
$$
The end-times $\check{T}_{i}$ are defined by induction as
\begin{displaymath}
\check{T}_{i}=\inf\lbrace t\geq 0\vert
\check{\Phi}_{\check{X}_{i,t}}(t+\check{T}^{+}_{i-1})=0\rbrace.
\end{displaymath}
Let $(N_e(u))_{u\ge 0}$ be independent Poisson Point Processes on ${\mathbb R}_+$ with intensity 1, for each edge $e\in E$.
We set
$$
\check n_e(t)=
\begin{cases}
N_e(2J_e(\check\Phi(t))), &\hbox{ if } \sigma_{e-}\sigma_{e+}=+1,
\\
0, &\hbox{ if } \sigma_{e-}\sigma_{e+}=-1.
\end{cases}
$$
We also denote by $\check {\mathcal C}(t)\subset E$ the configuration of edges such that $\check n_e(t)>0$.
$\check X_{i,t}$ starts at $x_{i}$.
For $t\in[\check{T}^{+}_{i-1},\check{T}^{+}_{i}]$,
\begin{itemize}
\item
if $\check n_e(t)$ decreases by 1 at time $t$, but does not create a new cluster in $\check {\mathcal C}(t)$, then $\check X_{i,t-\check{T}^{+}_{i-1}}$ crosses the edge
$e$ with probability ${1/2}$ or does not move with probability ${1/2}$,
\item
if $\check n_e(t)$ decreases by 1 at time $t$, and does create a new cluster in $\check {\mathcal C}(t)$,
then $\check X_{i,t-\check{T}^{+}_{i-1}}$ moves or stays, with probability 1, on the unique extremity
of $e$ which is in the cluster of the origin $x_i$ in the new configuration.
\end{itemize}
By induction, using Theorem \ref{thm-Poisson}, we deduce the following:
\begin{theorem}
\label{ThmPoissonLoopSoup}
Let $\varphi$ be a GFF on $\mathcal{G}$ with the law $P_{\varphi}$.
If one sets $\check{\varphi}=\varphi$ in the preceding construction, then
for all $i\in \lbrace 1,\dots,\vert V\vert\rbrace$,
$\check{T}_{i}<+\infty$,
$\check{X}_{i,\check{T}_{i}} = x_{i}$ and the
path $(\check{X}_{i,t})_{t\leq\check{T}_{i}}$ has the same law as a concatenation in $x_{i}$ of all the loops in a loop-soup
$\mathcal{L}_{1/2}$ that visit $x_{i}$, but none of the
$x_{1},\dots,x_{i-1}$. To retrieve the loops out of each path
$(\check{X}_{i,t})_{t\leq\check{T}_{i}}$, one has to partition it according to
a Poisson-Dirichlet partition as in Proposition \ref{PropPD}.
The coupling between the GFF $\varphi$ and the loop-soup obtained from
$((\check X_{i,t})_{1\leq i\leq\vert V\vert},
(\check n_e(t))_{e\in E})$ is the same as in Proposition
\ref{PropIsoLupuLoops}.
\end{theorem}
\subsection{Inverting the coupling of random current with FK-Ising}
\label{sec:coupinv}
By combining Theorem \ref{ThmPoissonLoopSoup} and the discrete time
description of Section \ref{sec_dicr_time}, and by conditioning on the occupation field of the loop-soup, one deduces an inversion of the coupling
of Proposition \ref{RCFKIsing} between the random current and FK-Ising.
We consider the graph $\mathcal{G}=(V,E)$, whose edges are endowed with weights $(J_{e})_{e\in E}$. Let
$(x_{i})_{1\le i\le \vert V\vert}$ be an enumeration of $V$.
Let $\check{\mathcal{C}}(0)\subset E$ be an initial configuration of open edges.
Let $(\check{n}_{e}(0))_{e\in E}$ be a family of
random integers such that
$\check{n}_{e}(0)=0$ if $e\not\in\check{\mathcal{C}}(0)$, and
$(\check{n}_{e}(0)-1)_{e\in\check{\mathcal{C}}(0)}$
are independent Poisson random variables, where
$\mathbb{E}[\check{n}_{e}(0)-1]=2J_{e}$.
We will consider a family of discrete time self-interacting processes
$((\check X_{i,j})_{1\leq i\leq \vert V\vert},
$(\check{n}_{e}(j))_{e\in E})$. The walk $\check X_{i,j}$ starts at $x_{i}$ at time $j=0$
and is defined up to an integer time $\check{T}_{i}$.
Let $\check{T}_{i}^{+}=\sum_{1\leq k\leq i}\check{T}_{k}$, with
$\check{T}_{0}^{+}=0$. The end-times $\check{T}_{i}$ are defined by induction as
\begin{displaymath}
\check{T}_{i}=
\inf\Big\lbrace j\geq 0\Big\vert
\sum_{e~\text{edge adjacent to}~\check X_{i,j}}
\check{n}_{e}(j+\check{T}_{i-1}^{+})=0\Big\rbrace.
\end{displaymath}
For $j\geq 1$, $\check{\mathcal{C}}(j)$ will denote
\begin{displaymath}
\check{\mathcal{C}}(j)=\lbrace e\in E\vert \check{n}_{e}(j)\geq 1\rbrace,
\end{displaymath}
which is consistent with the notation $\check{\mathcal{C}}(0)$.
The evolution is the following. For
$j\in \lbrace \check{T}_{i-1}^{+}+1,\dots, \check{T}_{i}^{+}\rbrace$, the transition from time $j-1$ to time $j$ is given by:
\begin{itemize}
\item first choose an edge $e$ adjacent to the vertex
$\check{X}_{i,j-1-\check{T}_{i-1}^{+}}$ with probability proportional to
$\check{n}_{e}(j-1)$,
\item decrease the stack $\check{n}_{e}(j-1)$ by 1,
\item if decreasing $\check{n}_{e}(j-1)$ by 1 does not create a new cluster in $\check{\mathcal{C}}(j-1)$, then
$\check{X}_{i,\cdot}$ crosses $e$ with probability $1/2$ and
does not move with probability $1/2$.
\item if decreasing $\check{n}_{e}(j-1)$ by 1 does create a new cluster in $\check{\mathcal{C}}(j-1)$, then $\check{X}_{i,\cdot}$
moves or stays, with probability 1, on the unique extremity of $e$ which is in the cluster of the origin $x_{i}$ in the new configuration.
\end{itemize}
Denote by $\hat{n}_{e}$ the number of times the edge $e$ has been crossed, in both directions, by all the walks
$((\check{X}_{i,j})_{0\le j\le \check{T}_{i}})_{1\le i\le\vert V\vert}$.
\begin{proposition}
A.s., for all $i\in\lbrace 1,\dots,\vert V\vert\rbrace$,
$\check{T}_{i}<+\infty$ and $\check{X}_{i,\check{T}_{i}}=x_{i}$. If the initial configuration of open edges
$\check{\mathcal{C}}(0)$ is random and follows an FK-Ising distribution
with weights $(1-e^{-2 J_{e}})_{e\in E}$, then the family of integers
$(\hat{n}_{e})_{e\in E}$ is distributed like a random current with weights
$(J_{e})_{e\in E}$. Moreover, the coupling between the random current and the FK-Ising obtained this way is the same as the one given by
Proposition \ref{RCFKIsing}.
\end{proposition}
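The discrete-time dynamics above is straightforward to simulate. The sketch below is ours, not from the text: the triangle graph and the initial stack values are hypothetical, and connectivity is tested by brute-force search rather than anything efficient.

```python
import random
from collections import defaultdict

def same_cluster(x, y, n):
    """Brute-force check: are x and y connected by edges e with n[e] >= 1?"""
    seen, stack = {x}, [x]
    while stack:
        v = stack.pop()
        if v == y:
            return True
        for (a, b), k in n.items():
            if k >= 1 and v in (a, b):
                w = b if v == a else a
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return False

def run_walk(x0, n, rng):
    """One walk: pick an adjacent edge with probability proportional to its
    stack, decrement it, then move according to the cluster rule, until all
    stacks adjacent to the current vertex are exhausted."""
    x, crossings = x0, defaultdict(int)
    while True:
        adj = [e for e in n if x in e and n[e] >= 1]
        if not adj:
            return x, crossings                  # end-time reached
        e = rng.choices(adj, weights=[n[a] for a in adj])[0]
        n[e] -= 1
        y = e[1] if x == e[0] else e[0]
        if same_cluster(x, y, n):                # no new cluster is created
            if rng.random() < 0.5:               # cross with probability 1/2
                x, crossings[e] = y, crossings[e] + 1
        elif same_cluster(y, x0, n):             # new cluster: stay with x0
            x, crossings[e] = y, crossings[e] + 1

# Hypothetical example: a triangle with two units on each edge
rng = random.Random(1)
n = {(0, 1): 2, (1, 2): 2, (0, 2): 2}
end, crossings = run_walk(0, n, rng)
```

Here `end` is the terminal vertex of one walk and `crossings[e]` is its contribution to $\hat n_e$; summing the crossings of the walks started from every vertex yields the family $(\hat n_e)_{e\in E}$ of the proposition.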
\section{Proof of Theorem \ref{thm-Poisson}}
\label{sec:proof}
\subsection{Case of finite graph without killing measure}
\label{sec:pfinite}
Here we will assume that $V$ is finite and that the killing measure
$\kappa\equiv 0$.
In order to prove Theorem \ref{thm-Poisson}, we first enlarge the state space of the process $(X_t)_{t\ge 0}$. We define a process
$(X_t,(n_e(t)))_{t\ge 0}$ living on the space $V\times {\mathbb N}^E$ as follows. Let
$\varphi^{(0)}\sim P_{\varphi}^{\{x_0\},0}$ be a GFF pinned at $x_0$.
Let $\sigma_x=\hbox{sign}(\varphi^{(0)}_x)$ be the signs of the GFF with the convention that $\sigma_{x_0}=+1$.
The process $(X_t)_{t\ge 0}$ is as usual the Markov Jump process starting at $x_0$ with jump rates $(W_e)$. We set
\begin{equation}
\label{Phi-J}
\Phi_x=\vert\varphi^{(0)}_x\vert, \;\; \Phi_x(t)=\sqrt{\Phi_x^2+2\ell_x(t)}, \;\;\;\forall x\in V, \;\;\; J_e(\Phi(t))=W_e \Phi_{e-}(t)\Phi_{e+}(t), \;\;\; \forall e\in E.
\end{equation}
The initial values $(n_e(0))$ are chosen independently on each edge with distribution
$$
n_e(0)\sim
\begin{cases}
0,& \hbox{ if $\sigma_{e-}\sigma_{e+}=-1$}
\\
\mathcal{P}(2J_e(\Phi)),& \hbox{ if $\sigma_{e-}\sigma_{e+}=+1$}
\end{cases}
$$
where ${\mathcal{P}}(2J_e(\Phi))$ is a Poisson random variable with parameter $2J_e(\Phi)$. Let $((N_e(u))_{u\ge 0})_{e\in E}$ be independent Poisson point processes
on ${\mathbb R}_+$ with intensity 1. We define the process $(n_e(t))$ by
$$
n_e(t)=n_e(0)+N_e(J_e(\Phi(t)))-N_e(J_e(\Phi))+K_e(t),
$$
where $K_e(t)$ is the number of crossings of the edge $e$ by the Markov jump process $X$ before time $t$.
\begin{remark}
Note that compared to the process defined in Section \ref{sec_Poisson}, the speed of the Poisson process is related to $J_e(\Phi(t))$ and not $2J_e(\Phi(t))$.
\end{remark}
We will use the following notation
$$
{\mathcal C}(t)=\{e\in E, \;\; n_e(t)>0\}.
$$
Recall that $\tau_u^{x_0}=\inf\{t\ge 0, \; \ell_{x_0}(t)=u\}$ for $u>0$. To simplify notation, we will write $\tau_u$ for $\tau_u^{x_0}$ in the sequel.
We define $\varphi^{(u)}$ by
$$
\varphi^{(u)}_x=\sigma_x\Phi_x(\tau_u), \;\;\; \forall x\in V,
$$
where $(\sigma_x)_{x\in V}\in \lbrace -1,+1\rbrace^{V}$ are random signs sampled uniformly and independently on
each cluster induced by ${\mathcal C}(\tau_u^{x_0})$, with the condition that $\sigma_{x_0}=+1$.
\begin{lemma}
\label{end-distrib}
The random vector $(\varphi^{(0)}, {\mathcal C}(0), \varphi^{(u)}, {\mathcal C}(\tau_u^{x_0}))$ thus defined has the same distribution
as $(\varphi^{(0)}, {\mathcal{O}}(\varphi^{(0)}), \varphi^{(u)}, {\mathcal{O}}_u)$ defined in Theorem \ref{Lupu}.
\end{lemma}
\begin{proof}
It is clear from the construction that ${\mathcal C}(0)$ has the same law as ${\mathcal{O}}(\varphi^{(0)})$ (cf Definition \ref{def_FK-Ising}), the FK-Ising configuration coupled with the signs of
$\varphi^{(0)}$ as in Proposition \ref{FK-Ising}. Indeed, for each edge $e\in E$ such that $\varphi^{(0)}_{e-}\varphi^{(0)}_{e+}>0$, the probability that
$n_e(0)>0$ is $1-e^{-2J_e(\Phi)}$.
Moreover, conditionally on ${\mathcal C}(0)={\mathcal{O}}(\varphi^{(0)})$, ${\mathcal C}(\tau_u^{x_0})$ has the same law as ${\mathcal{O}}_u$ defined in Theorem \ref{Lupu}. Indeed, ${\mathcal C}(\tau_u^{x_0})$
is the union of the set ${\mathcal C}(0)$, the set of edges crossed by the process $(X_s)_{s\le \tau_u^{x_0}}$, and the additional edges such that $N_e(J_e(\Phi(\tau_u^{x_0})))-N_e(J_e(\Phi))>0$.
Clearly, $N_e(J_e(\Phi(\tau_u^{x_0})))-N_e(J_e(\Phi))>0$ independently with probability $1-e^{-(J_e(\Phi(\tau_u^{x_0}))-J_e(\Phi))}$, which coincides with the probability given in
Theorem \ref{Lupu}.
\end{proof}
We will prove the following theorem which, together with Lemma \ref{end-distrib}, contains the statements of both Theorems \ref{Lupu} and \ref{thm-Poisson}.
\begin{theorem}\label{thm-Poisson2}
The random vector $\varphi^{(u)}$ is a GFF distributed according to $P_{\varphi}^{\{x_0\},\sqrt{2u}}$.
Moreover, conditionally on $\varphi^{(u)}=\check \varphi$, the process
$$(X_{t},(n_{e}(t))_{e\in E})_{t\le \tau_u^{x_0}}$$
has the law of the process $(\check X_{\check T-t },(\check n_e(\check T -t))_{e\in E})_{t\le \check T}$
described in section \ref{sec_Poisson}.
\end{theorem}
\begin{proof}
{\bf Step 1 :}
We start with a simple lemma.
\begin{lemma}\label{distrib-phi-n}
The distribution of $(\Phi:=\vert \varphi^{(0)}\vert, n_e(0))$ is given by the following formula for any bounded measurable test function $h$
\begin{multline*}
{\mathbb{E}}\left(h(\Phi, n(0))\right)= \\\sum_{(n_e)\in {\mathbb N}^E} \int_{{\mathbb R}_+^{V\setminus\{x_0\}}} d\Phi h(\Phi, n)
e^{-{1\over 2} \sum_{x\in V} W_x(\Phi_x)^2-\sum_{e\in E} J_e(\Phi)}
\left(\prod_{e\in E}{\frac{(2J_e(\Phi))^{n_e}}{n_e!}}\right)
2^{\#{\mathcal C}(n)-1},
\end{multline*}
where the integral is on the set $\{(\Phi_x)_{x\in V}, \;\; \Phi_x>0\; \forall x\neq x_0,\; \Phi_{x_0}=0\}$ and
$d\Phi={\frac{\prod_{x\in V\setminus\{x_0\}} d\Phi_x}{\sqrt{2\pi}^{\vert V\vert -1}}}$
and $\#{\mathcal C}(n)$ is the number of clusters
induced by the edges such that $n_e>0$.
\end{lemma}
\begin{proof}
Indeed, by construction, summing on possible signs of $\varphi^{(0)}$, we have
\begin{eqnarray}
\nonumber
&&{\mathbb{E}}\left(h(\Phi, n(0))\right)
\\
\label{int-eee}&=&\sum_{\sigma_x}
\sum_{n\ll \sigma_x}
\int_{{\mathbb R}_+^{V\setminus\{x_0\}}} d\Phi h(\Phi, n) e^{-{1\over 2} {\mathcal E}(\sigma\Phi)}\left(\prod_{e\in E, \; \sigma_{e+}\sigma_{e-}=+1} {e^{-2J_e(\Phi)} (2J_e(\Phi))^{n_e}\over n_e!}\right).
\end{eqnarray}
where the first sum is on the set $\{\sigma_x\in \{+1,-1\}^V, \; \sigma_{x_0}=+1\}$ and the second sum is on the set of
$\{(n_e)\in {\mathbb N}^E, \; n_e=0\hbox{ if $\sigma_{e-}\sigma_{e+}=-1$}\}$ (we write $n\ll \sigma$ to mean that $n_e$ vanishes on the edges
such that $\sigma_{e-}\sigma_{e+}=-1$). Since
\begin{eqnarray*}
{1\over 2}{\mathcal E}(\sigma \Phi)&=& {1\over 2}\sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi)\sigma_{e-}\sigma_{e+}
\\
&=&
{1\over 2}\sum_{x\in V} W_x (\Phi_x)^2+\sum_{e\in E} J_e(\Phi)
-\sum_{\substack{e\in E\\\sigma_{e-}\sigma_{e+}=+1}} 2J_e(\Phi),
\end{eqnarray*}
we deduce that the integrand in (\ref{int-eee}) is equal to
\begin{eqnarray*}
&& h(\Phi,n) e^{-{1\over 2} {\mathcal E}(\sigma\Phi)}\left(\prod_{e\in E, \; \sigma_{e+}\sigma_{e-}=+1} {e^{-2J_e(\Phi)} (2J_e(\Phi))^{n_e}\over n_e!}\right)
\\
&=&
h(\Phi,n) e^{-{1\over 2} {\mathcal E}(\sigma\Phi)}e^{-\sum_{e\in E, \; \sigma_{e+}\sigma_{e-}=+1} 2J_e(\Phi)}\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right)
\\
&=&
h(\Phi,n) e^{-{1\over 2}\sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi)}\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right)
\end{eqnarray*}
where we used in the first equality that $n_e=0$ on the edges such that
$\sigma_{e+}\sigma_{e-}=-1$.
Thus,
\begin{eqnarray*}
&&{\mathbb{E}}\left(h(\Phi, n(0))\right)
\\
&=&
\sum_{\sigma_x}\sum_{n_e\ll \sigma_x}
\int_{{\mathbb R}_+^{V\setminus\{x_0\}}} d\Phi h(\Phi, n) e^{-{1\over 2}\sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi)}\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right).
\end{eqnarray*}
Exchanging the sums over $\sigma$ and $n$, and counting the number of possible signs which are constant on the clusters induced by the configuration of edges
$\{e\in E, \; n_e>0\}$ (namely $2^{\#{\mathcal C}(n)-1}$, because of the constraint $\sigma_{x_0}=+1$),
we deduce Lemma \ref{distrib-phi-n}.
\end{proof}
\noindent{\bf Step 2 :} We denote by $Z_t=(X_t, \Phi(t), n_e(t))$ the process defined previously and by
$E_{x_0, \Phi, n_0}$ its law with initial condition $(x_0, \Phi, n_0)$.
We now introduce a process $\tilde Z_t$, which is a ``time reversal'' of the process $Z_t$. This process will be related to the
process defined in section \ref{sec_Poisson} in Step 4, Lemma \ref{RN}.
For $(\tilde n_e)\in {\mathbb N}^E$ and $(\tilde \Phi_x)_{x\in V}$ such that
$$
\tilde \Phi_{x_0}=\sqrt{2u}, \;\; \tilde \Phi_x>0, \;\; \forall x\neq x_0,
$$
we define the process $\tilde Z_t=(\tilde X_t, \tilde\Phi(t), \tilde n_e(t))$ with values in $V\times {\mathbb R}_+^V\times {\mathbb Z}^E$ as follows.
The process $(\tilde X_t)$ is a Markov jump process with jump rates $(W_e)$ (so that $\tilde X\stackrel{\text{law}}{=} X$), and
$\tilde\Phi(t)$, $\tilde n_e(t)$ are defined by
\begin{eqnarray}\label{tildePhi}
\tilde \Phi_x(t)=\sqrt{\tilde \Phi_x^2-2\tilde \ell_x(t)},\;\;\;\forall x\in V,
\end{eqnarray}
where $(\tilde\ell_x(t))$ is the local time of the process $\tilde X$ up to time $t$,
\begin{eqnarray}\label{tilden}
\tilde n_e(t)= \tilde n_e-\left(N_e(J_e(\tilde\Phi))-N_e(J_e(\tilde\Phi(t)))\right)-\tilde K_e(t)
\end{eqnarray}
where $((N_e(u))_{u\ge 0})_{e\in E}$ are independent Poisson point process on ${\mathbb R}_+$ with intensity 1 for each edge $e$, and
$\tilde K_e(t)$ is the number of crossings of the edge $e$ by the process $\tilde X$ before time $t$.
We set
\begin{eqnarray}\label{tildeZ}
\tilde Z_t=(\tilde X_t, (\tilde \Phi_x(t)), (\tilde n_e(t))).
\end{eqnarray}
This process is well-defined up to time
$$
\tilde T=\inf\left\{t\ge 0, \;\; \exists x\in V\; \tilde \Phi_x(t)=0\right\}.
$$
We denote by $\tilde E_{x_0, \tilde\Phi, \tilde n_0}$ its law. Clearly $\tilde Z_t=(\tilde X_t, \tilde\Phi(t), \tilde n_e(t))$ is a Markov process; we will make its generator explicit later on.
We have the following change of variable lemma.
\begin{lemma}\label{change-var}
For all bounded measurable test functions $F,G,H$
\begin{multline*}
\sum_{(n_e)\in {\mathbb N}^E} \int d\Phi F(\Phi, n)E_{x_0,\Phi,n}
\left( G((Z_{\tau_u^{x_0}-t})_{0\le t\le\tau_u^{x_0}})
H(\Phi(\tau_u^{x_0}), n(\tau_u^{x_0}))\right)=
\\
\sum_{(\tilde n_e)\in {\mathbb N}^E} \int d\tilde\Phi H(\tilde\Phi, \tilde n)
\tilde E_{x_0,\tilde \Phi,\tilde n}
\Big({{\mathbbm 1}}_{\{\tilde X_{\tilde T}=x_0,\tilde n_e(\tilde T)\ge 0\; \forall e\in E\}}
G((\tilde Z_{t})_{t\le\tilde T})
F(\tilde\Phi(\tilde T), \tilde n(\tilde T))\prod_{x\in V\setminus\{x_0\}} {\tilde \Phi_x\over \tilde\Phi_x(\tilde T) }\Big)
\end{multline*}
where the integral on the l.h.s. is on the set $\{(\Phi_x)\in {\mathbb R}_+^V, \;\; \Phi_{x_0}=0\}$ with $d\Phi= {\prod_{x\in V\setminus\{x_0\}} d\Phi_x\over \sqrt{2\pi}^{\vert V\vert -1}}$
and the integral on the r.h.s. is on the set $\{(\tilde\Phi_x)\in {\mathbb R}_+^V, \;\; \tilde\Phi_{x_0}=u\}$ with
$d\tilde\Phi= {\prod_{x\in V\setminus\{x_0\}} d\tilde\Phi_x\over \sqrt{2\pi}^{\vert V\vert -1}}$.
\end{lemma}
\begin{proof}
We start from the left-hand side, i.e. the process $(X_t, n_e(t))_{0\le t\le \tau_u^{x_0}}$.
We define
$$
\tilde X_{t}=X_{\tau_u-t},\;\;\; \tilde n_e(t)=n_e(\tau_u-t),
$$
and
$$
\tilde \Phi_x=\Phi_x(\tau_u),\;\;\; \tilde\Phi_x(t)=\Phi_x({\tau_u-t}),
$$
(The law of the processes thus defined will later be identified with the law of the processes $(\tilde X_t, \tilde \Phi(t),\tilde n(t))$ defined at the beginning of Step 2, cf (\ref{tildePhi}) and (\ref{tilden}).)
We also set
$$
\tilde K_e(t)= K_e(\tau_u)-K_e(t),
$$
which is also the number of crossings of the edge $e$ by the process $\tilde X$, between time 0 and $t$. With these notations we clearly have
$$
\tilde \Phi_x(t)=\sqrt{\tilde \Phi_x^2-2\tilde \ell_x(t)},
$$
where $\tilde \ell_x(t)=\int_{0}^t{{\mathbbm 1}}_{\{\tilde X_u=x\}} du$ is the local time of $\tilde X$ at time $t$, and
$$
\tilde n_e(t)= \tilde n_e(0)+(N_e(J_e(\tilde \Phi(t)))-N_e(J_e(\tilde\Phi(0))))-\tilde K_e(t).
$$
By time reversal, the law of $(\tilde X_t)_{0\le t\le \tilde \tau_u}$ is the same as the law of the Markov jump process $(X_t)_{0\le t\le \tau_u}$, where
$\tilde \tau_u=\inf\{t\ge 0, \; \tilde\ell_{x_0}(t)=u\}$. Hence, we see that up to the time $\tilde T=\inf\{t\ge 0, \; \exists x\; \tilde\Phi_x(t)=0\}$, the process
$(\tilde X_t, (\tilde \Phi_x(t))_{x\in V}, (\tilde n_e(t))_{e\in E})_{t\le \tilde T}$ has the same law as the process defined at the beginning of Step 2.
Then, following \cite{SabotTarres2015RK}, we make the following change of variables conditionally on the processes $(X_t, (N_e(t)))$
\begin{eqnarray*}
({\mathbb R}_+^*)^V\times {\mathbb N}^E&\to& ({\mathbb R}_+^*)^V\times {\mathbb N}^E\\
((\Phi_x), (n_e)_{e\in E})&\mapsto&
((\tilde \Phi_x), (\tilde n_e)_{e\in E})
\end{eqnarray*}
which is bijective onto the set
\begin{multline*}
\{(\tilde\Phi_x), \;\; \tilde\Phi_{x_0}=\sqrt{2u}, \; \tilde\Phi_x>\sqrt{2\ell_x(\tau_u^{x_0})}\;\;\forall x\neq x_0\}
\\\times \{(\tilde n_e),\;\; \tilde n_e\ge K_e(\tau_u)+(N_e(J_e(\tilde \Phi))-N_e(J_e(\Phi)))\}
\end{multline*}
(Note that we always have $\tilde \Phi_{x_0}=\sqrt{2u}$.) The last conditions on $\tilde \Phi$ and $\tilde n_e$ are equivalent to
the conditions $\tilde X_{\tilde T}=x_0$ and $\tilde n_e(\tilde T)\ge 0$.
The Jacobian of the change of variables is given by
$$
\prod_{x\in V\setminus\{x_0\}} d\Phi_x=\left({\prod_{x\in V\setminus\{x_0\}} {\tilde\Phi_x\over \Phi_x} }\right)\prod_{x\in V\setminus\{x_0\}} d\tilde\Phi_x,
$$
since $\tilde\Phi_x^2=\Phi_x^2+2\ell_x(\tau_u)$, so that, conditionally on the local times, $\Phi_x\,d\Phi_x=\tilde\Phi_x\,d\tilde\Phi_x$ for every $x\neq x_0$.
\end{proof}
\noindent
{\bf Step 3:}
With the notations of Theorem \ref{thm-Poisson2}, we consider the following expectation for $g$ and $h$ bounded measurable test functions
\begin{eqnarray}\label{test-functions}
{\mathbb{E}}\left( g\left(\left(X_{\tau_u-t}, n_e(\tau_u-t)\right)_{0\le t\le \tau_u}\right)h(\varphi^{(u)})\right)
\end{eqnarray}
By definition, we have
$$
\varphi^{(u)}=\sigma \Phi(\tau_u),
$$
where $(\sigma_x)_{x\in V}\in \{\pm 1\}^V$ are random signs sampled uniformly independently on clusters induced by
$\{e\in E, \; n_e(\tau_u)>0\}$ and conditioned on the fact that $\sigma_{x_0}=+1$.
Hence, we define for $(\Phi_x)\in {\mathbb R}_+^V$ and $(n_e)\in {\mathbb N}^E$
\begin{eqnarray}\label{h}
H(\Phi, n)=2^{-\#{\mathcal C}(n)+1} \sum_{\sigma\ll n} h(\sigma \Phi),
\end{eqnarray}
where $\sigma\ll n$ means that the signs $(\sigma_x)$ are constant on clusters of $\{ e\in E, \; n_e>0\}$ and such that $\sigma_{x_0}=+1$.
Hence, setting
$$
F(\Phi, n)=e^{-{1\over 2} \sum_{x\in V} W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi) }\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e!}\right)
2^{\#{\mathcal C}(n)-1},
$$
$$
G\left((Z_{\tau_u-t})_{t\le\tau_u}\right)= g\left(\left(X_{\tau_u-t}, n_e(\tau_u-t)\right)_{t\le \tau_u}\right),
$$
using Lemma \ref{distrib-phi-n} in the first equality and Lemma \ref{change-var} in the second equality, we deduce that
(\ref{test-functions}) is equal to
\begin{multline}
\label{eq-3.3}
{\mathbb{E}}\left( G\left((Z_{\tau_u-t})_{0\le t\le \tau_u}\right)H\left(\Phi(\tau_u), n(\tau_u)\right)\right)=
\\
\sum_{(n_e)\in {\mathbb N}^E} \int
d\Phi
F(\Phi, n) E_{x_0, \Phi,n}\left(G\left((Z_{\tau_u-t})_{t\le\tau_u}\right)H\left(\Phi(\tau_u), n(\tau_u)\right)\right)
=
\\
\sum_{(\tilde n_e)\in {\mathbb N}^E} \int d\tilde\Phi
H\left(\tilde \Phi,\tilde n\right)
\tilde E_{x_0, \tilde \Phi, \tilde n}\Big({{\mathbbm 1}}_{\{\tilde X_{\tilde T}=x_0,\,\tilde n_e(\tilde T)\ge 0\; \forall e\in E\}}
F\left(\tilde \Phi(\tilde T) , \tilde n(\tilde T)\right) G\left((\tilde Z_{t})_{t\le\tilde T}\right) \prod_{x\in V\setminus\{x_0\}} {\tilde \Phi_x\over \tilde\Phi_x(\tilde T) }
\Big)
\end{multline}
with notations of Lemma \ref{change-var}.
Let $\tilde{\mathcal F}_t=\sigma\{\tilde X_s, \; s\le t\}$ be the filtration generated by $\tilde X$. We consider the $\tilde {\mathcal F}$-adapted process
$\tilde M_t$, defined up to time $\tilde T$, by
\begin{multline}
\label{Mart}
\tilde M_t
= {F(\tilde \Phi(t), \tilde n(t))\over \prod_{x\in V\setminus\{\tilde X_t\}} \tilde\Phi_x(t) }{{\mathbbm 1}}_{\{\tilde X_t\in {\mathcal C}(x_0,\tilde n(t))\}}{{\mathbbm 1}}_{\{\tilde n_e(t)\ge 0\; \forall e\in E\}}=
\\
e^{-{1\over 2} \sum_{x\in V} W_x(\tilde \Phi_x(t))^2-\sum_{e\in E} J_e(\tilde\Phi(t)) }
\Big(\prod_{e\in E} {(2J_e(\tilde \Phi(t)))^{\tilde n_e(t)}\over \tilde n_e(t) !}\Big)
{2^{\#{\mathcal C}(\tilde n_e(t))-1}
\over \prod_{x\in V\setminus\{\tilde X_t\}} \tilde\Phi_x(t) }{{\mathbbm 1}}_{\{\tilde X_t\in {\mathcal C}(x_0,\tilde n(t)),\tilde n_e(t)\ge 0\; \forall e\in E\}}
\end{multline}
where ${\mathcal C}(x_0,\tilde n(t))$ denotes the cluster of the origin $x_0$ induced by the configuration ${\mathcal C}(\tilde n(t))$.
Note that at time $t=\tilde T$, we also have
\begin{eqnarray}\label{M-T}
\tilde M_{\tilde T}= {F(\tilde \Phi(\tilde T), \tilde n(\tilde T))\over \prod_{x\in V\setminus\{x_0\}} \tilde\Phi_x(\tilde T) }{{\mathbbm 1}}_{\{\tilde X_{\tilde T}=x_0\}}{{\mathbbm 1}}_{\{\tilde n_e(\tilde T)\ge 0\; \forall e\in E\}}
\end{eqnarray}
since $\tilde M_{\tilde T}$ vanishes on the event $\{\tilde X_{\tilde T}=x\}$ for $x\neq x_0$. Indeed, if $\tilde X_{\tilde T}=x\neq x_0$, then
$\tilde\Phi_x(\tilde T)=0$ and $J_e(\tilde\Phi(\tilde T))=0$ for every $e\in E$ such that $x\in e$. This means that $\tilde M_{\tilde T}$ is equal to
0 if $\tilde n_{e}(\tilde T)>0$ for some edge $e$ neighboring $x$. Thus, $\tilde M_{\tilde T}$ vanishes unless $\{x\}$ is a cluster in ${\mathcal C}(\tilde n(\tilde T))$.
Hence, $\tilde M_{\tilde T}=0$ if $x\neq x_0$, since $\tilde M_{\tilde T}$ contains the indicator of the event that $\tilde X_{\tilde T}$ and $x_0$ are in the same cluster.
Hence, using identities (\ref{eq-3.3}) and (\ref{M-T})
we deduce that (\ref{test-functions}) is equal to
\begin{eqnarray}
\label{equ-M}
(\ref{test-functions})&=&
\sum_{(\tilde n_e)\in {\mathbb N}^E} \int d\tilde\Phi
H\left(\tilde \Phi,\tilde n\right) F\left(\tilde \Phi,\tilde n\right)
\tilde E_{x_0, \tilde \Phi, \tilde n}\left(
{\tilde M_{\tilde T}\over \tilde M_0}
G\left((\tilde Z_{t})_{t\le\tilde T}\right)
\right)
\end{eqnarray}
\noindent {\bf Step 4 :}
We denote by $\check Z_t=(\check X_t, \check \Phi_t, \check n(t))$ the process defined in section \ref{sec_Poisson}, which is well defined up to
the stopping time $\check T$, and $\check Z^T_t=\check Z_{t\wedge \check T}$. We denote by $\check E_{x_0, \check \Phi, \check n}$
the law of the process $\check Z$ conditionally on the initial value $\check n(0)$, i.e. conditionally on $(N_e(2J_e(\check\Phi)))_{e\in E}=(\check n_e)_{e\in E}$.
The last step of the proof goes through the following lemma.
\begin{lemma}\label{RN}
i) Under $\check E_{x_0,\check\Phi,\check n}$, $\check X$ ends at $\check X_{\check T}=x_0$ a.s. and
$\check n_e(\check T)\ge 0$ for all $e\in E$.
ii) Let $\tilde P^{\le t}_{x_0,\tilde\Phi,\tilde n}$ and $\check P^{\le t}_{x_0,\tilde\Phi,\tilde n}$ be the laws
of the processes $(\tilde Z^T_s)_{s\le t}$ and $(\check Z^T_s)_{s\le t}$ respectively. Then
$$
{d\check P^{\le t}_{x_0,\tilde \Phi,\tilde n}\over d\tilde P^{\le t}_{x_0,\tilde \Phi,\tilde n}}={\tilde M_{t\wedge \tilde T}\over \tilde M_0}.
$$
\end{lemma}
Using this lemma we obtain that in the right-hand side of (\ref{equ-M})
$$
\tilde E_{x_0, \tilde \Phi , \tilde n}\left(
{\tilde M_{\tilde T}\over \tilde M_0}
G\left((\tilde Z_{t})_{t\le\tilde T}\right)\right)=
\check E_{x_0, \tilde \Phi , \tilde n}
\left(
G\left((\check Z_{t})_{t\le\check T}\right)\right).
$$
Hence, we deduce, using formula (\ref{h}) and proceeding as in Lemma \ref{distrib-phi-n}, that (\ref{test-functions}) is equal to
\begin{multline*}
\label{final}
\int_{{\mathbb R}^{V\setminus\{x_0\} }} d\tilde\varphi
e^{-{1\over 2} {\mathcal E}(\tilde\varphi)} h(\tilde \varphi)
\sum_{(\tilde n_e)\ll (\tilde \varphi_x)} \left(\prod_{e\in E, \; \tilde\varphi_{e-}\tilde\varphi_{e+}\ge 0}
{e^{-2J_e(\vert \tilde \varphi\vert)}(2J_e(\vert \tilde \varphi\vert ))^{\tilde n_e}\over \tilde n_e !}\right)
\\\tilde E_{x_0, \vert \tilde \varphi\vert , \tilde n}\left({\tilde M_{\tilde T}\over \tilde M_0}
G\left((\tilde Z_{t})_{t\le\tilde T}\right)\right),
\end{multline*}
where the last integral is on the set $\{(\tilde\varphi_x)\in {\mathbb R}^V, \;\; \tilde\varphi_{x_0}=\sqrt{2u}\}$,
$d\tilde\varphi={\prod_{x\in V\setminus\{x_0\}} d\tilde\varphi_x\over \sqrt{2\pi}^{\vert V\vert -1}}$, and where $(\tilde n_e)\ll (\tilde\varphi_x)$ means that
$(\tilde n_e)\in {\mathbb N}^E$ and $\tilde n_e=0$ if $\tilde\varphi_{e-}\tilde\varphi_{e+}\le 0$.
Finally, we conclude that
\begin{eqnarray*}
{\mathbb{E}}\left[ g\left(\left(X_{\tau_u^{x_0}-t}, n_e(\tau_u^{x_0}-t)\right)_{0\le t\le \tau_u^{x_0}}\right)h(\varphi^{(u)})\right]=
{\mathbb{E}}\left[ g\left(\left(\check X_{t}, \check n_e(t)\right)_{0\le t\le \check T}\right)h(\check \varphi)\right]
\end{eqnarray*}
where in the right-hand side
$\check \varphi\sim P_{\varphi}^{\{x_0\}, \sqrt{2u}} $
is a GFF and $(\check X_t, \check n(t))$
is the process defined in section \ref{sec_Poisson} from the
GFF $\check \varphi$.
This exactly means that
$\varphi^{(u)} \sim P_{\varphi}^{\{x_0\}, \sqrt{2u}}$
and that
$$
{\mathcal L}\left(\left(X_{\tau_u^{x_0}-t}, n_e(\tau_u^{x_0}-t)\right)_{0\le t\le \tau_u^{x_0}}\;
\Big| \; \varphi^{(u)}=\check\varphi\right)= {\mathcal L}\left(\left(\check X_t, \check n(t)\right)_{t\le \check T}\right).
$$
This concludes the proof of Theorem \ref{thm-Poisson2}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{RN}]
The generator of the process $\tilde Z_t$ defined in (\ref{tildeZ}) is given, for any test function $f$ that is bounded and $\mathcal{C}^{1}$ in the second component, by
\begin{equation}
\label{tildeL2}
\begin{split}
&(\tilde L f)(x,\tilde\Phi,\tilde n)=
-{1\over \tilde \Phi_x} ({\partial\over \partial \tilde\Phi_x}f)
(x,\tilde\Phi, \tilde n) +\\
&
\sum_{y, \; y\sim x} \left(W_{x,y} \left(f(y,\tilde\Phi,\tilde n-\delta_{\{x,y\}})-f(x,\tilde\Phi,\tilde n)\right)+
W_{x,y} {\tilde \Phi_{y}\over \tilde \Phi_x} \left(f(x,\tilde\Phi, \tilde n-\delta_{\{x,y\}})-f(x,\tilde\Phi,\tilde n)\right)\right)
\end{split}
\end{equation}
where $\tilde n-\delta_{\{x,y\}}$ is the value obtained by removing 1 from $\tilde n$ at the edge $\{x,y\}$.
Indeed, since $\tilde \Phi_x(t)=
\sqrt{\tilde\Phi_{x}(0)^{2} -2\tilde \ell_x(t)}$, we have
\begin{eqnarray}
\label{deriv-Phi}
{\partial\over\partial t} \tilde \Phi_x(t)=
-{{\mathbbm 1}}_{\{\tilde X_t=x\}}{1\over \tilde \Phi_x(t)},
\end{eqnarray}
which explains the first term in the expression. The second term is obvious from the definition of $\tilde Z_t$, and corresponds to the jumps
of the Markov process $\tilde X_t$. The last term corresponds to the decrease of $\tilde n$ due to the increase of the process
$N_e(J_e(\tilde \Phi))-N_e(J_e(\tilde \Phi(t)))$. Indeed, on the interval $[t,t+dt]$, the probability that
$N_{e}(J_e(\tilde \Phi(t)))-N_{e}(J_e(\tilde \Phi(t+dt)))$
is equal to 1 is of order
$$-{\partial\over \partial t} J_e(\tilde \Phi(t))\,dt={{\mathbbm 1}}_{\{\tilde X_t\in e\}}
{W_e \tilde \Phi_{e-}(t)\tilde\Phi_{e+}(t)\over \tilde\Phi_{\tilde X_t}(t)^2}\,dt
$$
using identity (\ref{deriv-Phi}).
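The chain-rule computation behind this rate can be made explicit. Assuming $J_e(\Phi)=W_e\Phi_{\underline e}\Phi_{\overline e}$ (a form consistent with the identities derived below, though the precise definition of $J_e$ is given earlier), identity (\ref{deriv-Phi}) yields, on the event $\{\tilde X_t=\underline e\}$,
$$
-{\partial\over\partial t} J_e(\tilde\Phi(t))
= -W_e \left({\partial\over\partial t}\tilde\Phi_{\underline e}(t)\right)\tilde\Phi_{\overline e}(t)
= {W_e \tilde\Phi_{\underline e}(t)\tilde\Phi_{\overline e}(t)\over \tilde\Phi_{\underline e}(t)^{2}},
$$
and symmetrically on $\{\tilde X_t=\overline e\}$.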
Let $\check L$ be the generator of the Markov jump process $\check Z_t=(\check X_t, (\check \Phi_x(t)), (\check n_e(t)))$.
For any smooth test function $f$, it is given by
\begin{eqnarray*}
&&(\check L f)(x,\Phi, n)=
-{1\over \Phi_x} ({\partial\over \partial \Phi_x}f)(x,\Phi, n) +\\
&&{1\over 2} \sum_{y, \; y\sim x}{ n_{x,y} \over \Phi_x^2}
{{{\mathbbm 1}}_{\mathcal{A}_1(x,y)}} \left(f(y,\Phi,n-\delta_{\{x,y\}})+f(x,\Phi,n-\delta_{\{x,y\}})- 2f(x,\Phi,n)\right)
\\
&&+ \sum_{y, \; y\sim x}{ n_{x,y} \over \Phi_x^2} {{\mathbbm 1}}_{\mathcal{A}_2(x,y)} \left( f(y,\Phi,n-\delta_{\{x,y\}})- f(x,\Phi,n) \right)
\\
&&+\sum_{y, \; y\sim x}{n_{x,y} \over \Phi_x^2}
{{\mathbbm 1}}_{\mathcal{A}_3(x,y)} \left(f(x,\Phi,n-\delta_{\{x,y\}}) - f(x,\Phi,n) \right)
\end{eqnarray*}
where the $\mathcal{A}_{i}(x,y)$ are the following disjoint events:
\begin{itemize}
\item
$\mathcal{A}_1(x,y)$: the number of connected clusters induced by $n-\delta_{\{x,y\}}$ is the same as that induced by $n$;
\item
$\mathcal{A}_2(x,y)$: a new cluster is created in $ n-\delta_{\{x,y\}}$ compared with $n$, and $y$ is in the connected component
of $x_0$ in the clusters induced by $ n-\delta_{\{x,y\}}$;
\item
$\mathcal{A}_3(x,y)$: a new cluster is created in $ n-\delta_{\{x,y\}}$ compared with $n$, and $x$ is in the connected component
of $x_0$ in the clusters induced by $ n-\delta_{\{x,y\}}$.
\end{itemize}
Indeed, conditionally on the value of $\check n_e(t)=N_e(2J_e(\check\Phi(t)))$ at time $t$, the point process $N_e$ on the interval $[0, 2J_e(\check\Phi(t))]$ has the law of
$\check n_e(t)$ independent points with uniform distribution on $[0, 2J_e(\check\Phi(t))]$. Hence, the probability that a point lies in the interval
$[2J_e(\check\Phi(t+dt)), 2J_e(\check\Phi(t))]$ is of order
$$
-\check n_e(t) {1\over J_e(\check\Phi(t))}{\partial\over \partial t} J_e(\check\Phi(t)) dt= {{\mathbbm 1}}_{\{\check X_t\in e\}}\;\check n_e(t){1\over \check\Phi_{\check X_t}(t)^2}dt.
$$
We define the function
\begin{multline}
\nonumber\Theta(x,(\Phi_x),(n_e))=\\
e^{-{1\over 2} \sum_{x\in V}W_x (\Phi_x)^2-\sum_{e\in E} J_e(\Phi) }
\left(\prod_{e\in E} {(2J_e(\Phi))^{n_e}\over n_e !}\right)
{2^{\#{\mathcal C}(n_e)-1}
\over \prod_{y\in V\setminus\{x\}} \Phi_y }{{\mathbbm 1}}_{\{x\in {\mathcal C}(x_0,n),
n_e\ge 0\; \forall e\in E\}},
\end{multline}
so that
$$
\tilde M_{t\wedge \tilde T}= \Theta(\tilde Z_{t\wedge\tilde T}).
$$
To prove the lemma, it is sufficient to show (see \cite{ChungWalsh05MP}, Chapter 11) that, for any bounded smooth test function $f$,
\begin{eqnarray}\label{LcheckL}
{1\over \Theta}\tilde L\left(\Theta f\right)= \check L\left(f\right).
\end{eqnarray}
Let us first consider the first term in (\ref{tildeL2}).
Direct computation gives
$$
\left({1\over \Theta}{1\over \Phi_x}\left({\partial\over\partial \Phi_x} \Theta\right)\right) (x,\Phi,n)= -W_x+\sum_{y\sim x} \left(- W_{x,y}{\Phi_y\over\Phi_x}+n_{x,y}{1\over \Phi_x^2}\right).
$$
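This computation can be checked directly: assuming $J_{x,y}(\Phi)=W_{x,y}\Phi_{x}\Phi_{y}$ (a form consistent with the identities below), the only factors of $\Theta(x,\Phi,n)$ depending on $\Phi_x$ are the exponential and the powers $(2J_e(\Phi))^{n_e}$ for the edges $e$ containing $x$, so that
\begin{eqnarray*}
{\partial\over\partial \Phi_x}\log \Theta(x,\Phi,n)&=&
{\partial\over\partial \Phi_x}\left(-{1\over 2} W_x \Phi_x^2
-\sum_{y\sim x} W_{x,y}\Phi_x\Phi_y
+\sum_{y\sim x} n_{x,y}\log\left(2W_{x,y}\Phi_x\Phi_y\right)\right)\\
&=& -W_x\Phi_x-\sum_{y\sim x} W_{x,y}\Phi_y
+\sum_{y\sim x}{n_{x,y}\over \Phi_x},
\end{eqnarray*}
and dividing by $\Phi_x$ gives the displayed identity.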
For the second part, remark that the indicators ${{\mathbbm 1}}_{\{x\in {\mathcal C}(x_0,n)\}}$ and ${{\mathbbm 1}}_{\{n_e\ge 0\; \forall e\in E\}}$ imply that
$
\Theta(y,\Phi, n-\delta_{\{x,y\}})
$
vanishes if $n_{x,y}=0$ or if $y\not\in {\mathcal C}(x_0,n-\delta_{\{x,y\}})$.
By inspection of the expression of $\Theta$, we obtain for $x\sim y$,
\begin{eqnarray*}
\Theta (y,\Phi, n-\delta_{\{x,y\}})&=& \left({{\mathbbm 1}}_{\{n_{x,y}>0\}} ({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_2}) {n_{x,y}\over 2J_{x,y}(\Phi)}{\Phi_y\over \Phi_x}\right)\Theta(x,\Phi, n)
\\
&=&\left(({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_2}) {n_{x,y}\over 2W_{x,y}}{1\over \Phi_x^2}\right)\Theta(x,\Phi, n).
\end{eqnarray*}
Similarly, for $x\sim y$,
\begin{eqnarray*}
\Theta(x,\Phi, n-\delta_{\{x,y\}})&=& \left({{\mathbbm 1}}_{\{n_{x,y}>0\}}({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_3}){n_{x,y}\over 2J_{x,y}(\Phi)}\right)\Theta(x,\Phi, n)\\
&=&
\left(({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_3}) {n_{x,y}\over 2W_{x,y}\Phi_x\Phi_y}\right)\Theta(x,\Phi, n).
\end{eqnarray*}
Combining these three identities with the expression (\ref{tildeL2}) we deduce
\begin{eqnarray*}
&&{1\over \Theta}\tilde L\left(\Theta f\right)(x,\Phi,n)=\\
&&
-{1\over \Phi_x} {\partial\over\partial \Phi_x}f(x,\Phi,n)-\sum_{y\sim x} \left(n_{x,y}{1\over \Phi_x^2}\right)f(x,\Phi,n)
\\
&& +\sum_{y\sim x} ({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_2}) n_{x,y}{1\over 2\Phi_x^2} f(y,\Phi, n-\delta_{\{x,y\}})+
\sum_{y\sim x}({{\mathbbm 1}}_{\mathcal{A}_1}+2{{\mathbbm 1}}_{\mathcal{A}_3})n_{x,y}{1\over 2 \Phi_x^2} f(x,\Phi, n-\delta_{\{x,y\}}).
\end{eqnarray*}
This exactly coincides with the expression for $\check L$, since $1={{\mathbbm 1}}_{\mathcal{A}_1}+{{\mathbbm 1}}_{\mathcal{A}_2}+{{\mathbbm 1}}_{\mathcal{A}_3}$.
\end{proof}
\subsection{General case}
\label{sec:pgen}
\begin{proposition}
\label{PropKillingCase}
The conclusion of
Theorem \ref{thm-Poisson} still holds
if the graph $\mathcal{G}=(V,E)$ is finite and the killing measure is non-zero ($\kappa\not\equiv 0$).
\end{proposition}
\begin{proof}
Let $h$ be the function on $V$ defined as
\begin{displaymath}
h(x)=\mathbb{P}_{x}(X~\text{hits}~x_{0}~\text{before}~\zeta).
\end{displaymath}
By definition $h(x_{0})=1$. Moreover, for all
$x\in V\setminus\lbrace x_{0}\rbrace$,
\begin{displaymath}
-\kappa_{x} h(x)+\sum_{y\sim x}W_{x,y}(h(y)-h(x))=0.
\end{displaymath}
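These are exactly the harmonicity equations for $h$ away from $x_{0}$. As a concrete illustration (not part of the proof; the solver and the example graph are ours), the following sketch solves this linear system over the rationals:

```python
from fractions import Fraction

def hitting_probability(vertices, edges, kappa, x0):
    """Solve -kappa[x]*h(x) + sum_y W_{x,y}*(h(y)-h(x)) = 0 for x != x0,
    with h(x0) = 1, by Gaussian elimination over the rationals."""
    unknowns = [v for v in vertices if v != x0]
    idx = {v: i for i, v in enumerate(unknowns)}
    n = len(unknowns)
    A = [[Fraction(0)] * n for _ in range(n)]
    b = [Fraction(0)] * n
    # Symmetric conductance table.
    W = {}
    for (x, y), w in edges.items():
        W.setdefault(x, {})[y] = Fraction(w)
        W.setdefault(y, {})[x] = Fraction(w)
    for x in unknowns:
        i = idx[x]
        total = Fraction(kappa.get(x, 0))
        for y, w in W.get(x, {}).items():
            total += w
            if y == x0:
                b[i] += w          # h(x0) = 1 moves to the right-hand side
            else:
                A[i][idx[y]] -= w
        A[i][i] += total
    # Gauss-Jordan elimination (small systems, exact arithmetic).
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(n):
            if r != i and A[r][i] != 0:
                f = A[r][i] / A[i][i]
                for c in range(n):
                    A[r][c] -= f * A[i][c]
                b[r] -= f * b[i]
    h = {x0: Fraction(1)}
    for x in unknowns:
        h[x] = b[idx[x]] / A[idx[x]][idx[x]]
    return h

# Path graph 0 - 1 - 2, unit conductances, killing rate 1 at vertex 2.
h = hitting_probability([0, 1, 2], {(0, 1): 1, (1, 2): 1}, {2: 1}, x0=0)
print(h[1], h[2])   # 2/3 1/3
```

On this path graph one can check by hand that $h(1)=2/3$ and $h(2)=1/3$ solve the two harmonicity equations.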
Define the conductances
$W^{h}_{x,y}:=W_{x,y}h(x)h(y)$, the corresponding jump process $X^{h}$, and the GFFs $\varphi_{h}^{(0)}$ and $\varphi_{h}^{(u)}$ with condition
$0$, respectively $\sqrt{2u}$, at $x_{0}$. Theorem \ref{thm-Poisson}
holds for the graph $\mathcal{G}$ with conductances
$(W^{h}_{e})_{e\in E}$ and with zero killing measure. But the process
$(X^{h}_{t})_{t\leq \tau_{u}^{x_{0}}}$ has the same law as the process
$(X_{s})_{s\leq \tau_{u}^{x_{0}}}$, conditioned on
$\tau_{u}^{x_{0}}<\zeta$, after the change of time
\begin{displaymath}
dt = h(X_{s})^{-2}ds.
\end{displaymath}
This means in particular that for the occupation times,
\begin{equation}
\label{EqTimeChange}
d\ell_{x}(t)=h(X_{s})^{-2}\,d\ell_{x}(s).
\end{equation}
Moreover, we have the equalities in law
\begin{displaymath}
\varphi_{h}^{(0)}\stackrel{\text{law}}{=}h^{-1}\varphi^{(0)},\qquad
\varphi_{h}^{(u)}\stackrel{\text{law}}{=}h^{-1}\varphi^{(u)}.
\end{displaymath}
Indeed, at the level of energy functions, we have:
\begin{equation*}
\begin{split}
&\mathcal{E}(hf,hf)=
\sum_{x\in V}\kappa_{x} h(x)^{2}f(x)^{2}+
\sum_{e}W_{e}(h(e_{+})f(e_{+})-h(e_{-})f(e_{-}))^{2}\\&=
\sum_{x\in V}[\kappa_{x}h(x)^{2}f(x)^{2}+
\sum_{y\sim x}W_{x,y}h(y)f(y)(h(y)f(y)-h(x)f(x))]\\
&=
\sum_{x\in V}[\kappa_{x}h(x)^{2}f(x)^{2}-
\sum_{y\sim x}W_{x,y}(h(y)-h(x))h(x)f(x)^{2}]
-\sum_{\substack{x\in V\\y\sim x}}W_{x,y}h(x)h(y)(f(y)-f(x))f(x)
\\&=[\kappa_{x_{0}}-
\sum_{y\sim x_{0}}W_{x_{0},y}(h(y)-1)]f(x_{0})^{2}
+\sum_{e}W_{e}^{h}(f(e_{+})-f(e_{-}))^{2}
\\&= \text{Cste}(f(x_{0}))+\mathcal{E}^{h}(f,f),
\end{split}
\end{equation*}
where $\text{Cste}(f(x_{0}))$ denotes a term that does not depend on $f$ once the value of the function at $x_{0}$ is fixed.
Let $\check{X}^{h}_{t}$ be the inverse process for the conductances
$(W_{e}^{h})_{e\in E}$ and the initial field
$\varphi_{h}^{(u)}$, given by Theorem \ref{thm-Poisson}.
By applying the time change (\ref{EqTimeChange}) to the process $\check{X}^{h}_{t}$, we obtain an inverse process for the conductances $(W_{e})_{e\in E}$ and the field $\varphi^{(u)}$.
\end{proof}
\begin{proposition}
\label{PropInfiniteCase}
Assume that the graph $\mathcal{G}=(V,E)$ is infinite. The killing measure $\kappa$ may be non-zero. Then the conclusion of
Theorem \ref{thm-Poisson} holds.
\end{proposition}
\begin{proof}
Consider an increasing sequence of connected sub-graphs
$\mathcal{G}_{i}=(V_{i},E_{i})$ of $\mathcal{G}$ which converges to the whole graph. We assume that $V_{0}$ contains $x_{0}$.
Let $\mathcal{G}_{i}^{\ast}=(V_{i}^{\ast},E_{i}^{\ast})$ be the graph obtained by adding to $\mathcal{G}_{i}$ an abstract vertex
$x_{\ast}$, and for every edge $\lbrace x,y\rbrace$, where $x\in V_{i}$ and
$y\in V\setminus V_{i}$, adding an edge $\lbrace x,x_{\ast}\rbrace$,
with the equality of conductances
$W_{x,x_{\ast}}=W_{x,y}$.
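As an illustration of this wiring construction (the helper below is ours, not from the paper; it merges parallel edges to $x_{\ast}$ by summing conductances, which yields the same jump rates as keeping them separate), one can build the edge set of $\mathcal{G}_{i}^{\ast}$ from a finite conductance network:

```python
def wire_boundary(edges, V_i, star="*"):
    """Collapse all vertices outside V_i into a single abstract vertex `star`.

    `edges` maps frozenset({x, y}) -> conductance W_{x,y}.  Every edge
    {x, y} with x in V_i and y outside is redirected to {x, star};
    parallel edges created this way are merged by summing conductances.
    Edges with both endpoints outside V_i do not belong to G_i^*.
    """
    wired = {}
    for e, w in edges.items():
        x, y = tuple(e)
        inside = [v for v in (x, y) if v in V_i]
        if len(inside) == 2:
            wired[e] = wired.get(e, 0) + w      # interior edge, kept as is
        elif len(inside) == 1:
            key = frozenset({inside[0], star})  # redirected to x_*
            wired[key] = wired.get(key, 0) + w
        # both endpoints outside V_i: edge dropped
    return wired

# Cycle 0 - 1 - 2 - 3 - 0 with unit conductances; keep V_i = {0, 1}:
# edges {1,2} and {3,0} get wired to the abstract vertex *.
edges = {frozenset(e): 1 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
wired = wire_boundary(edges, {0, 1})
print(sorted(len(e) for e in wired))   # three edges remain
```

Here the edge $\{2,3\}$ disappears, while $\{1,2\}$ and $\{3,0\}$ become the boundary edges $\{1,x_{\ast}\}$ and $\{0,x_{\ast}\}$.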
Let $(X_{i,t})_{t\geq 0}$ denote the Markov jump process on
$\mathcal{G}_{i}^{\ast}$, started from $x_{0}$.
Let $\zeta_{i}$ be the first hitting time of $x_{\ast}$ or the first
killing time by the measure $\kappa{{\mathbbm 1}}_{V_{i}}$. Let
$\varphi^{(0)}_{i}$ and
$\varphi^{(u)}_{i}$ denote the GFFs on $\mathcal{G}_{i}^{\ast}$ with condition $0$, respectively $\sqrt{2u}$, at $x_{0}$, with condition $0$ at
$x_{\ast}$, and taking into account the possible killing measure
$\kappa{{\mathbbm 1}}_{V_{i}}$.
The limits in law of $\varphi^{(0)}_{i}$,
respectively $\varphi^{(u)}_{i}$, are
$\varphi^{(0)}$,
respectively $\varphi^{(u)}$.
Let
$(\check{X}_{i,t},(\check{n}_{i,e}(t))_{e\in E_{i}^{\ast}})
_{0\leq t\leq\check{T}_{i}}$ be the inverse process on
$\mathcal{G}_{i}^{\ast}$, with initial field $\varphi^{(u)}_{i}$.
The process $(X_{i,t})_{t\leq \tau_{i,u}^{x_{0}}}$, conditional on
$\tau_{i,u}^{x_{0}}<\zeta_{i}$, has the same law as
$(\check{X}_{i,\check{T}_{i}-t})_{t\leq \check{T}_{i}}$.
Taking the limit in law as $i$ tends to infinity, we conclude that
$(X_{t})_{t\leq \tau_{u}^{x_{0}}}$, conditional on
$\tau_{u}^{x_{0}}<+\infty$, has the same law as
$(\check{X}_{\check{T}-t})_{t\leq \check{T}}$ on the infinite graph
$\mathcal{G}$. The same holds for the clusters.
In particular,
\begin{multline*}
\mathbb{P}(\check{T}\leq t, \check{X}_{[0,\check{T}]}~\text{stays in}~V_{j})=
\lim_{i\to +\infty}
\mathbb{P}(\check{T}_{i}\leq t, \check{X}_{i,[0,\check{T}_{i}]}~\text{stays in}~V_{j})
\\=
\lim_{i\to +\infty}
\mathbb{P}(\tau_{i,u}^{x_{0}}\leq t, X_{i,[0,\tau_{i,u}^{x_{0}}]}~\text{stays in}~V_{j}\vert \tau_{i,u}^{x_{0}}<\zeta_{i})=
\mathbb{P}(\tau_{u}^{x_{0}}\leq t, X_{[0,\tau_{u}^{x_{0}}]}
~\text{stays in}~V_{j}\vert \tau_{u}^{x_{0}} < \zeta),
\end{multline*}
where in the first two probabilities we also average over the values of the free fields.
Hence
\begin{displaymath}
\mathbb{P}(\check{T}=+\infty~\text{or}~\check{X}_{\check{T}}\neq x_{0})=
1-\lim_{\substack{t\to +\infty\\ j\to +\infty}}
\mathbb{P}(\tau_{u}^{x_{0}}\leq t, X_{[0,\tau_{u}^{x_{0}}]}
~\text{stays in}~V_{j}\vert \tau_{u}^{x_{0}} < \zeta) = 0.
\end{displaymath}
\end{proof}
\section*{Acknowledgements}
TL acknowledges the support of Dr. Max Rössler, the Walter Haefner
Foundation and the ETH Zurich Foundation.
\bibliographystyle{plain}