\end{eqnarray}
Using $Im(M(\tau))=0$ we see that
$\hat{\sigma}(-\omega)=\hat{\sigma}^{*}(\omega)$, which implies
that $\sigma_{1}(-\omega)=\sigma_{1}(\omega)$ and
$\sigma_{2}(-\omega)=-\sigma_{2}(\omega)$. These relations can be
used to rewrite equations (\ref{KK1}) and (\ref{KK2}),
\begin{eqnarray}
\sigma_{1}(\omega)&=&\frac{2}{\pi}\mathcal{P}\int_{0}^{\infty}\frac{\omega'\sigma_{2}(\omega')}{\omega'^{2}-\omega^{2}}d\omega', \\
\sigma_{2}(\omega)&=&-\frac{2\omega}{\pi}\mathcal{P}\int_{0}^{\infty}\frac{\sigma_{1}(\omega')}{\omega'^{2}-\omega^{2}}d\omega'.
\end{eqnarray}
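These one-sided transforms can be verified numerically. Below is a minimal sketch (Python with NumPy; the Drude line shape and all parameter values are illustrative assumptions in dimensionless units, not taken from the text), which approximates the principal value by evaluating the transform at frequencies halfway between grid points:

```python
import numpy as np

# Numerical Kramers-Kronig check (illustrative, dimensionless units): for a
# Drude conductivity sigma_1 = s0/(1 + (w*tau)^2) the exact KK partner is
# sigma_2 = s0*w*tau/(1 + (w*tau)^2).
s0, tau = 1.0, 1.0
wgrid = np.linspace(0.0, 200.0, 400001)[1:]   # omega' grid, omega' > 0
dw = wgrid[1] - wgrid[0]
sigma1 = s0 / (1.0 + (wgrid * tau) ** 2)

def kk_sigma2(w):
    # principal value approximated by choosing w halfway between grid points,
    # so the singular contributions on either side of omega' = w nearly cancel
    return -(2.0 * w / np.pi) * np.sum(sigma1 / (wgrid ** 2 - w ** 2)) * dw

w = wgrid[2000] + 0.5 * dw                    # off-grid frequency, w ~ 1
exact = s0 * w * tau / (1.0 + (w * tau) ** 2)
print(kk_sigma2(w), exact)                    # the two values nearly coincide
```

The same trick works in the other direction, from $\sigma_{2}$ back to $\sigma_{1}$; in practice the finite upper cutoff of the integration grid is the main source of error.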
The relations (\ref{KK1}) and (\ref{KK2}) between the real and
imaginary parts of the optical conductivity are examples of the
general relations between real and imaginary parts of causal
response functions and they are referred to as Kramers-Kronig (KK)
relations.

\subsection{Polaritons}\label{polariton}
In this section we discuss the properties of electromagnetic waves
propagating through solids. Such a wave is called a polariton: a
photon dressed up with the excitations that exist inside the
solid. For example, one can have phonon-polaritons, which are
photons dressed up with phonons. Although the solutions of the
Maxwell equations, i.e. the fields $\textbf{E}$ and $\textbf{B}$,
have the same form as before (Eqs. (\ref{E}) and (\ref{B})), they
obey different dispersion relations, as we will now see. As before
we assume that $\nabla\varepsilon'=\nabla\mu'=0$ and that
$\rho_{ext}=J_{ext}=0$. Taking the curl of Eq. (\ref{linresmax2})
we obtain for the left-hand side,
\begin{equation}
\nabla\times\nabla\times\mathbf{E}=-\nabla^{2}\mathbf{E}^{T},
\end{equation}
where the superscript T indicates that we are left with a purely
transverse field. We then use Eq. (\ref{linresmax4}) to work out the
right-hand side of Eq. (\ref{linresmax2}) and we obtain the wave
equation,
\begin{equation}
\nabla^{2}\mathbf{E}^{T}=\frac{\varepsilon'\mu}{c^{2}}\frac{\partial^{2}\mathbf{E}^{T}}{\partial t^{2}}+\frac{4\pi\sigma\mu}{c^{2}}\frac{\partial\mathbf{E}^{T}}{\partial t}.
\end{equation}
From this wave equation we easily obtain the dispersion relation
for polaritons travelling through a solid by substituting Eq. (\ref{E}),
\begin{equation}\label{poldisp}
\mu(\mathbf{q},\omega)\left\{\varepsilon'(\mathbf{q},\omega)+i\frac{4\pi\sigma(\mathbf{q},\omega)}{\omega}\right\}\omega^{2}=\mu\varepsilon^{T}\omega^{2}=\mathbf{q}^{2}c^{2}.
\end{equation}
The dispersion relation for longitudinal waves can be found by
observing that for longitudinal waves $\nabla\times\textbf{E}=0$
and hence the dispersion relation is simply,
\begin{equation}
\mu(\mathbf{q},\omega)\left\{\varepsilon'(\mathbf{q},\omega)+i\frac{4\pi\sigma(\mathbf{q},\omega)}{\omega}\right\}=0.
\end{equation}
The polariton solutions to Eq. (\ref{poldisp}) are of the form
\begin{equation}
\mathbf{E}(\mathbf{r},t)=\mathbf{E}_{0}e^{i(\mathbf{q}\cdot\mathbf{r}-\omega t)},
\end{equation}
with
\begin{equation}
|q|=\frac{\sqrt{\mu\varepsilon}\omega}{c}.
\end{equation}
We now define the refractive index,
\begin{equation}
\hat{n}(\mathbf{q},\omega)=n+ik\equiv\sqrt{\mu\varepsilon}.
\end{equation}
In all cases considered here $n>0$ and $k>0$. We also note that
$Im(\varepsilon)\geq 0$ but it is possible to have
$Re(\varepsilon)<0$. If $k>0$ the wave travelling through the
solid gets attenuated,
\begin{equation}
\mathbf{E}(\mathbf{r},t)=\mathbf{E}_{0}e^{i\omega(nr/c-t)-r/\delta}.
\end{equation}
The extinction of the wave occurs over a characteristic length
scale $\delta$ called the skin depth,
\begin{equation}\label{skin}
\delta=\frac{c}{\omega k}=\frac{c}{\omega\, Im\sqrt{\mu\varepsilon_{1}+i4\pi\mu\sigma_{1}/\omega}}.
\end{equation}
\begin{figure}[hbt]
\begin{minipage}{8cm}
\includegraphics[width=8cm]{Drudecond.png}
\caption{\label{drudecond} Real part of the optical conductivity
for parameter values indicated in the graph. The curve is
calculated using equation (\ref{drude}).}
\end{minipage}\hspace{2pc}%
\begin{minipage}{8cm}
\includegraphics[width=8cm]{Drudediel.png}
\caption{\label{dielfunc} Dielectric function corresponding to
equation (\ref{drudediel}) with the same parameters as in figure
\ref{drudecond}.}
\end{minipage}
\end{figure}
Note that we can have $k>0$ if $Im(\varepsilon)=0$ and
$Re(\varepsilon)<0$ so that the wave gets attenuated even though
there is no absorption. In table \ref{skindepth} we indicate some
limits of the skin depth.

\begin{table}[tbh]
\begin{tabular}{lrlrl}
\hline\hline
Insulator & & $\frac{4\pi\sigma_{1}}{\omega}\ll\varepsilon_{1}$ & &
$\delta\approx\frac{c}{2\pi\sigma_{1}}\sqrt{\frac{\varepsilon_{1}}{\mu}}$\\
Metal & & $\frac{4\pi\sigma_{1}}{\omega}\gg\varepsilon_{1}$ & &
$\delta\approx\frac{c}{\sqrt{2\pi\mu\sigma_{1}\omega}}$\\
Superconductor & &
$\frac{4\pi\sigma_{1}}{\omega}\ll\varepsilon_{1}=-\frac{c^{2}}{\lambda^{2}\omega^{2}}$
& & $\delta\approx\frac{\lambda}{\sqrt{\mu}}$\\
\hline\hline
\end{tabular}
\caption{Some limiting cases of the general expression Eq. (\ref{skin}). $\lambda$ in the last line is the London penetration
depth.}\label{skindepth}
\end{table}
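The metallic limit in the table can be checked directly against the general expression, Eq. (\ref{skin}). A small numerical sketch (Python with NumPy, Gaussian units, $\mu=1$; the conductivity and frequency are illustrative values, not those of any specific material):

```python
import numpy as np

# Skin depth: general expression delta = c/(omega * Im sqrt(eps1 + 4i*pi*sigma1/omega))
# compared with the metallic limit delta ~ c/sqrt(2*pi*sigma1*omega), mu = 1.
c = 3.0e10                   # speed of light, cm/s
sigma1 = 1.0e17              # illustrative metallic dc conductivity, s^-1
eps1 = 1.0
omega = 1.0e12               # deep in the regime 4*pi*sigma1/omega >> eps1
delta_general = c / (omega * np.imag(np.sqrt(eps1 + 4j * np.pi * sigma1 / omega)))
delta_metal = c / np.sqrt(2.0 * np.pi * sigma1 * omega)
print(delta_general, delta_metal)   # both of order 4e-5 cm, i.e. sub-micron
```

The insulator and superconductor rows follow from the same expression by taking the corresponding limits of the argument of the square root.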
To illustrate some of the previous results we now have a look at
the simplest model of a metal: the Drude model. The optical
conductivity in the Drude model is,
\begin{equation}\label{drude}
\hat{\sigma}=\frac{ne^{2}}{m}\frac{1}{\tau^{-1}-i\omega}.
\end{equation}
Often the inverse of $\tau$, the time in between scattering
events, is written as a scattering rate $\gamma=1/\tau$. The
plasma frequency is defined as $\omega^{2}_{p}\equiv 4\pi
ne^{2}/m$. The dielectric function can now be written as,
\begin{equation}\label{drudediel}
\varepsilon(\omega)=1+4\pi\chi_{bound}-\frac{4\pi ne^{2}}{m}\frac{1}{\omega(\gamma-i\omega)}=\varepsilon_{\infty}-\frac{4\pi ne^{2}}{m}\frac{1}{\omega(\gamma-i\omega)},
\end{equation}
where for completeness we have included the contribution due to
the bound charges, represented by a high energy contribution
$\varepsilon_{\infty}$. Figure \ref{drudecond} shows the optical
conductivity given by equation (\ref{drude}) for parameter values
typical of a metal. Using the same parameters we can calculate the
dielectric function given by equation (\ref{drudediel}). The
results are shown in figure \ref{dielfunc}. We note that the real
part of the dielectric function is negative for
$\omega<\omega_{p}/\sqrt{\varepsilon_{\infty}}$ and positive for
$\omega>\omega_{p}/\sqrt{\varepsilon_{\infty}}$. The point where
it crosses zero is called the screened plasma frequency
$\omega^{*}_{p}$ (screened by interband transitions). We can also easily calculate the optical constants,
\begin{equation}\label{Druden}
\hat n = \sqrt {\varepsilon _\infty - \frac{{\omega _p^2 }}
{{\omega \left( {\omega + i\tau ^{ - 1 } } \right)}}}. \end{equation}
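The internal consistency of Eqs. (\ref{drude}), (\ref{drudediel}) and (\ref{Druden}) is easy to check numerically. A sketch in Python with NumPy (Gaussian units; the values of $\omega_{p}$, $\gamma$ and $\varepsilon_{\infty}$ are illustrative assumptions, not the parameters used in the figures):

```python
import numpy as np

# Drude model: sigma(w) = (n e^2/m)/(gamma - i w) with wp^2 = 4*pi*n*e^2/m,
# eps(w) = eps_inf - wp^2/(w*(w + i*gamma)), and n_hat = sqrt(eps).
wp, gamma, eps_inf = 1.0e15, 1.0e13, 4.0       # s^-1, s^-1, dimensionless
w = np.linspace(1.0e12, 3.0e15, 300000)
sigma = (wp ** 2 / (4.0 * np.pi)) / (gamma - 1j * w)
eps = eps_inf - wp ** 2 / (w * (w + 1j * gamma))
nhat = np.sqrt(eps)                            # optical constants n + i*k
# the two forms describe the same physics: eps = eps_inf + 4*pi*i*sigma/w
check = eps_inf + 4j * np.pi * sigma / w
# zero crossing of Re(eps): the screened plasma frequency ~ wp/sqrt(eps_inf)
wstar = w[np.argmin(np.abs(eps.real))]
print(wstar / (wp / np.sqrt(eps_inf)))         # close to 1
```

For $\gamma\ll\omega_{p}$ the zero crossing indeed sits at $\omega^{*}_{p}\approx\omega_{p}/\sqrt{\varepsilon_{\infty}}$, as discussed above.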
The real and imaginary parts are displayed in figure
\ref{druden}. We see that at the screened plasma frequency both
$n$ and $k$ show a discontinuity.

\begin{figure}[hbt]
\begin{minipage}{8cm}
\includegraphics[width=8cm]{Druden.png}
\caption{\label{druden} Optical constants corresponding to
equation (\ref{drudediel}) with the same parameters as in figure
\ref{drudecond}.}
\end{minipage}\hspace{2pc}%
\begin{minipage}{8cm}
\includegraphics[width=8cm]{poldisp.png}
\caption{\label{figpoldisp} Polariton dispersion calculated with
the same parameters as in figure \ref{drudecond}.}
\end{minipage}
\end{figure}
The polariton dispersion follows from equation (\ref{poldisp}). Here we assume that $\mu=1$, independent of frequency, and use
Eq. (\ref{drudediel}) to solve (\ref{poldisp}) for $\omega(q)$. The polariton dispersion consists of two branches: the lowest
one for $0\le\omega\le 1/\tau$ and the other for
$\omega\ge\omega_{p}/\sqrt{\varepsilon_{\infty}}$. Finally we show the skin depth in figure \ref{figskindepth}.

\begin{figure}[tbh]
\includegraphics[width=8.5cm]{skindepth.png}
\caption{\label{figskindepth} Skin depth calculated with the same
parameters as in figure \ref{drudecond}. }
\end{figure}
We see that for frequencies smaller than the scattering rate,
$\gamma$, light waves penetrate into the material only over a
distance of the order of the skin depth. This is called the
classical skin effect. For frequencies larger than the screened
plasma frequency the material becomes transparent again.

\section{Experimental Techniques}
The goal of optical spectroscopy is to determine the complex
dielectric function or equivalently the complex optical
conductivity. Since electromagnetic waves have small momenta
compared to the typical momenta of a solid, i.e. $\textbf{q}\ll
1/a_{0}$, we usually only probe the $q\to 0$ limit of the optical
constants. In this limit,
\begin{eqnarray}
\lim_{q\to 0}(\varepsilon^{T}(\mathbf{q},\omega)-\varepsilon^{L}(\mathbf{q},\omega))=0, \\
\varepsilon(\mathbf{q}\to 0,\omega)=\varepsilon_{1}(\omega)+i\frac{4\pi}{\omega}\sigma_{1}(\omega).
\end{eqnarray}
In some cases we can directly obtain information on both real and
imaginary components separately, but more often we obtain
information where the contributions are mixed. We then make use of
some form of the KK relations to disentangle the two.

\subsection{Reflection and Transmission at an interface}
When we shine light on an interface between vacuum and a material,
part of the light is reflected and another part is transmitted as
in figure \ref{reflection}.

\begin{figure}[tbh]
\includegraphics[width=8.5cm]{reflection.png}
\caption{\label{reflection}Electromagnetic waves reflecting from a
material. The reflected wave has a smaller amplitude and is phase
shifted with respect to the incoming wave. The transmitted wave is
continuously attenuated inside the material. }
\end{figure}
At the boundary the electromagnetic waves have to obey the
following boundary conditions,
\begin{eqnarray}
\mathbf{E}_{i}+\mathbf{E}_{r}=\mathbf{E}_{t}, \label{boundcond1}\\
\mathbf{E}\times\mathbf{H}\;\parallel\;\mathbf{k}.
\end{eqnarray}
From these two equations it follows that the reflected magnetic
field suffers a phase shift at the boundary,
\begin{equation}\label{boundcond2}
\mathbf{H}_{i}-\mathbf{H}_{r}=\mathbf{H}_{t}.
\end{equation}
Using equation (\ref{E}) in equation (\ref{linresmax2}) we obtain,
\begin{equation}
iqc\mathbf{E}^{T}=i\omega\mu\mathbf{H},
\end{equation}
so that, using the dispersion relation (\ref{poldisp}),
\begin{equation}
\frac{\mathbf{H}}{\mathbf{E}^{T}}=\sqrt{\frac{\varepsilon}{\mu}}.
\end{equation}
From now on we set $\mu=1$ unless otherwise indicated. In that
case $\mathbf{H}/\mathbf{E}^{T}=\hat{n}$. Combining this result
with Eq. (\ref{boundcond2}) we get,
\begin{equation}
\mathbf{E}_{i}-\mathbf{E}_{r}=\hat{n}\mathbf{E}_{t}.
\end{equation}
Together with Eq. (\ref{boundcond1}) we can now solve for
$\textbf{E}_{r}/\textbf{E}_{i}$ and
$\textbf{E}_{t}/\textbf{E}_{i}$,
\begin{eqnarray}
\hat{r}\equiv\mathbf{E}_{r}/\mathbf{E}_{i}=\frac{1-\hat{n}}{1+\hat{n}}, \label{rhat}\\
\hat{t}\equiv\mathbf{E}_{t}/\mathbf{E}_{i}=\frac{2}{1+\hat{n}}. \label{that}
\end{eqnarray}
The two quantities $\hat{r}$ and $\hat{t}$ are the complex
reflectance and transmittance.

\subsection{Reflectivity experiments}
The real reflection coefficient $R(\omega)$ which is measured in a
reflection experiment is related to $\hat{r}$ via
\begin{equation}
R=|\hat{r}|^{2}=\frac{(n-1)^{2}+k^{2}}{(n+1)^{2}+k^{2}}.
\end{equation}
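As a quick sanity check, the two forms of $R$ above agree identically. A short Python sketch (the values of $n$ and $k$ are arbitrary illustrations):

```python
# Normal-incidence reflectivity: R = |r|^2 with r = (1 - n_hat)/(1 + n_hat),
# which reduces to R = ((n-1)^2 + k^2)/((n+1)^2 + k^2).
n, k = 2.0, 0.5                     # illustrative optical constants
nhat = n + 1j * k
R = abs((1.0 - nhat) / (1.0 + nhat)) ** 2
R_explicit = ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)
print(R, R_explicit)                # identical, 1.25/9.25 ~ 0.135
```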
Note that in this experiment we obtain no information on the phase
of $\hat{r}$. In these experiments the angle of incidence is as
close to normal incidence as possible. To measure $R(\omega)$ one
first measures the reflected intensity $I_{s}$ from the sample
under study. To normalize this intensity one then has to take a
reference measurement. This can be done by replacing the sample
with a mirror (i.e. a piece of aluminum or gold) and again
measuring the reflected intensity, $I_{ref}$. The reflection
coefficient is then $R(\omega)\equiv I_{s}(\omega)/I_{ref}(\omega)$. A better way
is to evaporate a layer of gold or aluminum \textit{in situ} and
measure the reflected intensity as a reference. This way one
automatically corrects for surface imperfections and, if done
properly, there are no errors due to differences in size and shape
of the reflecting surface. To obtain the optical constants from such
an experiment we have to make use of the KK relations. If we define
$\hat{r}(\omega)\equiv\sqrt{R(\omega)}e^{i\theta}$, then the
logarithm of $\hat{r}(\omega)$ is
\begin{equation}
\ln\hat{r}(\omega)=\ln\sqrt{R(\omega)}+i\theta.
\end{equation}
The phase $\theta$ in this expression is the unknown we want to
determine.
\sqrt{\omega}. \end{equation}
For frequencies above the interband transitions one often uses an
extrapolation that is proportional to $\omega^{-4}$.

\begin{figure}[tbh]
\includegraphics[width=8.5cm]{Druderefl.png}
\caption{\label{druderefl} Reflectivity calculated using
parameters typical for a metal. The inset shows the low energy
reflectivity on an enhanced scale. }
\end{figure}
As an example of a possible experimental result we show in figure
\ref{druderefl} the reflectivity calculated from the Drude model
for the same parameters as in the section on polaritons. The reflectivity is close to one until just below the plasma
frequency. At the zero crossing of $\varepsilon_{1}$ the
reflectivity has a minimum. The inset shows a blow-up of the
"flat" region below 50 meV. Here one can clearly see the
Hagen-Rubens behavior mentioned above. If the sample under
investigation is anisotropic one has to use polarized light along
one of the principal crystal axes to perform the experiment.

\subsection{Grazing Incidence Experiments}
A closely related technique is to measure reflectance under a
grazing angle of incidence. Here one has to distinguish between
experiments performed with different incoming polarizations as
shown in figure \ref{grazincid}.

\begin{figure}[tbh]
\includegraphics[width=8.5cm]{grazincidence.png}
\caption{\label{grazincid} Grazing incidence experiment. The
result of the experiment is extremely sensitive to the precise
orientation of the crystal axes with respect to the incoming
light. }
\end{figure}
We distinguish between p-polarized light and s-polarized light. For p-polarization the electric field is parallel to the plane of
incidence, whereas for s-polarization it is perpendicular to it (s
stands for senkrecht, German for perpendicular). Since in
principle the optical constants along the three crystal axes can
be different, we use the labels $a$, $b$ and $c$ for the optical
constants as indicated in figure \ref{grazincid}. For p-polarized
light the complex reflectance is,
\begin{equation}\label{rp}
\hat{r}_{\rm p} = \frac{\hat{n}_c\hat{n}_b\cos\theta - \sqrt{\hat{n}_c^{2} - \sin^{2}\theta}}{\hat{n}_c\hat{n}_b\cos\theta + \sqrt{\hat{n}_c^{2} - \sin^{2}\theta}}.
\end{equation}
The angle $\theta$ in this equation is the angle relative to the
surface normal under which the experiment is performed. For
s-polarized light the complex reflectance is,
\begin{equation}\label{rs}
\hat{r}_{\rm s} = \frac{\cos\theta - \sqrt{\hat{n}_a^{2} - \sin^{2}\theta}}{\cos\theta + \sqrt{\hat{n}_a^{2} - \sin^{2}\theta}}.
\end{equation}
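Evaluating Eqs. (\ref{rp}) and (\ref{rs}) numerically is straightforward. A Python/NumPy sketch (the angle and the complex indices along the three axes are illustrative assumptions, not values for any real crystal):

```python
import numpy as np

# Grazing-incidence Fresnel coefficients for an anisotropic sample,
# with axes labelled a, b, c as in the figure; parameters illustrative.
theta = np.deg2rad(80.0)                      # angle from the surface normal
n_a, n_b, n_c = 1.8 + 0.1j, 1.8 + 0.1j, 2.4 + 0.05j
root_c = np.sqrt(n_c ** 2 - np.sin(theta) ** 2)
root_a = np.sqrt(n_a ** 2 - np.sin(theta) ** 2)
r_p = (n_c * n_b * np.cos(theta) - root_c) / (n_c * n_b * np.cos(theta) + root_c)
r_s = (np.cos(theta) - root_a) / (np.cos(theta) + root_a)
print(abs(r_p) ** 2, abs(r_s) ** 2)           # p- and s-polarized reflectivities
```

In the vacuum limit (all indices equal to one) both coefficients vanish, as they must.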
An example of such an experiment is shown in figure
\ref{grazincidbi2212}.

\begin{figure}[tbh]
\includegraphics[width=6cm]{grazincidbi2212.png}
\caption{\label{grazincidbi2212} Grazing incidence reflectivity of
Bi-2201, Bi-2212, Tl-2201 and Tl-2212. The inset in panel (b)
indicates the measurement geometry. The figure is adapted from
ref. \citep{tsvetkov}.}
\end{figure}
In this example the samples are from the bismuth-based family of
cuprates \cite{tsvetkov}. They have a layered structure consisting
of conducting copper-oxygen sheets interspersed with insulating
bismuth-oxygen layers. Since the bonding between the layers is not
very strong, it is very difficult to obtain samples that are
sufficiently thick along the insulating c-direction. The grazing
incidence technique is used here to probe the optical constants
along the c-axis without the need for a large ac-face surface area. A
disadvantage in this particular experiment is that it is not
possible to determine accurately the absolute value of the optical
constants. It is possible, however, to determine the so-called loss
function $Im(-1/\hat{\varepsilon}_{c})$. The experiment is
performed on the ab-plane of the sample using p-polarized light
and we can simplify the expression for $\hat{r}_{p}$ by using the
fact that the $a$ and $b$ directions are almost isotropic. The
resulting expression for $\hat{r}_{p}$ is,
\begin{equation}
\hat{r}_{\rm p} = \frac{\sqrt{\hat{\varepsilon}_b}\cos\theta - \sqrt{1 - \sin^{2}\theta/\hat{\varepsilon}_c}}{\sqrt{\hat{\varepsilon}_b}\cos\theta + \sqrt{1 - \sin^{2}\theta/\hat{\varepsilon}_c}}.
\end{equation}
From this equation we can derive the following relation between
the grazing incidence reflectivity and a pseudo loss-function
$L(\omega)$ \cite{dvdm},
\begin{equation}
L(\omega)\equiv\frac{1 - R_p}{1 + R_p} \approx Im\,\frac{2e^{i\phi_p}}{\left|n_b\right|\cos\theta}\sqrt{1 - \frac{\sin^{2}\theta}{\hat{\varepsilon}_c}}.
\end{equation}
The function
$\sqrt{1-\frac{\sin^{2}\theta}{\hat{\varepsilon}_{c}}}$ has maxima
at the same positions as the true loss function. In this way
information was gained on the phonon structure of the c-axis of
this material.

\subsection{Spectroscopic Ellipsometry}
The third technique we introduce here is spectroscopic
ellipsometry. This relatively new technique has two major
advantages over the previous techniques. Firstly, the technique is
self-normalizing, meaning that no reference measurement has to be
done, and secondly, it directly provides both the real and
imaginary parts of the dielectric function. As with the grazing
incidence technique we have to distinguish between s- and
p-polarized light and label the crystal axes. Instead of measuring
$R_{p}$ or $R_{s}$ independently, we now measure directly the
amplitude and phase of the ratio
$\hat{r}_{p}/\hat{r}_{s}=|\hat{r}_{p}/\hat{r}_{s}|e^{i(\eta_{p}-\eta_{s})}$. To see how this can be done we first describe the
experimental setup. There are a number of different setups one can
use and here we describe the simplest. This setup consists of a
source followed by a polarizer. With this polarizer we can change
the orientation of the polarization impinging on the sample.

\begin{figure}[tbh]
\includegraphics[width=8.5cm]{ellipsometry.png}
\caption{\label{ellipsometry} Result of an ellipsometric
measurement. The phase shift $A_{0}$ and amplitude $2\gamma$ are
the two quantities that we are interested in.}
\end{figure}
The light reflected from the sample passes through another
polarizer (called analyzer) and then hits the detector. Depending
on the orientation of the first polarizer we can change the
electric field strength of s- and p-polarized light according to,
\begin{eqnarray}
E_p = \left|E_i\right|\cos(P), \\
E_s = \left|E_i\right|\sin(P).
\end{eqnarray}
From the expressions for $\hat{r}_{p}$ and $\hat{r}_{s}$,
(\ref{rp}) and (\ref{rs}), in the previous section it follows
that,
\begin{equation}\label{rp/rs}
\hat{\rho} \equiv \frac{\hat{r}_{\rm p}}{\hat{r}_{\rm s}} =
\frac{\sqrt{\hat{n}_c^{2} - \sin^{2}\theta} - \hat{n}_c\hat{n}_b\cos\theta}{\sqrt{\hat{n}_c^{2} - \sin^{2}\theta} + \hat{n}_c\hat{n}_b\cos\theta} \cdot \frac{\sqrt{\hat{n}_a^{2} - \sin^{2}\theta} + \cos\theta}{\sqrt{\hat{n}_a^{2} - \sin^{2}\theta} - \cos\theta}.
\end{equation}
Our task is now to invert this equation and express the optical
constants in terms of measured quantities and instrument
parameters. For an isotropic sample this can be done quite easily. We define the pseudodielectric function $\tilde{\varepsilon}$ such
that:
\begin{equation}
\hat{\rho} \equiv \frac{\sin\theta\tan\theta - \sqrt{\tilde{\varepsilon} - \sin^{2}\theta}}{\sin\theta\tan\theta + \sqrt{\tilde{\varepsilon} - \sin^{2}\theta}},
\end{equation}
where we note that
$\tilde{\varepsilon}=\varepsilon_{a}=\varepsilon_{b}=\varepsilon_{c}$
in an optically isotropic medium. This equation can be inverted to
obtain $\tilde{\varepsilon}$,
\begin{equation}\label{ellipseps}
\tilde{\varepsilon}(\omega) = \sin^{2}\theta\left[1 + \tan^{2}\theta\left(\frac{1-\hat{\rho}}{1+\hat{\rho}}\right)^{2}\right].
\end{equation}
So all that is left to do is to express $\hat{\rho}$ in terms of
experimental parameters. The experiment is done in the following
way: we fix the polarizer at some angle $0^{\circ}<P<90^{\circ}$
and then we record the intensity while rotating the analyzer over
360 degrees. The result is shown in figure \ref{ellipsometry}. We then measure the amplitude of the resulting sine wave,
$\gamma$, and the phase offset with respect to zero, $A_{0}$ (we
assume here that for $P=0$ the polarizer and analyzer are aligned
parallel to each other). With some trigonometry and figure
\ref{ellipsometry} we can show that,
\begin{equation}
\tan A_0 = \frac{2\tan(P)}{\left|\rho\right|^{2} + \tan^{2}(P)}\rho_1,
\end{equation}
and
\begin{equation}
\sqrt{1-\gamma^{2}} = \frac{2\tan(P)}{\left|\rho\right|^{2} + \tan^{2}(P)}\rho_2,
\end{equation}
where $\rho_{1}$ and $\rho_{2}$ denote the real and imaginary
parts of $\hat{\rho}$. Combining these two equations leads to,
\begin{equation}\label{ellipsrho}
\hat{\rho} = \frac{1\pm\sqrt{\gamma^{2} - \tan^{2}A_0}}{\tan A_0 - i\sqrt{1-\gamma^{2}}}\tan(P).
\end{equation}
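That the inversion of Eq. (\ref{ellipseps}) exactly undoes the isotropic forward model for $\hat{\rho}$ can be checked in a few lines (Python with NumPy; the angle of incidence and the dielectric value are illustrative assumptions):

```python
import numpy as np

# Isotropic ellipsometry round trip: forward model rho(eps), then the
# inversion eps = sin^2(theta)*[1 + tan^2(theta)*((1 - rho)/(1 + rho))^2].
theta = np.deg2rad(70.0)
eps_true = -3.0 + 2.5j                   # e.g. a metal below the plasma frequency
s, t = np.sin(theta), np.tan(theta)
root = np.sqrt(eps_true - s ** 2)
rho = (s * t - root) / (s * t + root)    # forward model
eps_back = s ** 2 * (1.0 + t ** 2 * ((1.0 - rho) / (1.0 + rho)) ** 2)
print(eps_back)                          # recovers eps_true
```

The algebra behind this is short: $(1-\rho)/(1+\rho)=\sqrt{\tilde{\varepsilon}-\sin^{2}\theta}/(\sin\theta\tan\theta)$, so squaring and multiplying by $\sin^{2}\theta\tan^{2}\theta$ returns $\tilde{\varepsilon}-\sin^{2}\theta$.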
\begin{figure}[bth]
\includegraphics[width=8.5cm]{ellipshg1201.png}
\caption{\label{ellipshg1201} Dielectric function measured
ellipsometrically on a HgBa$_{2}$CuO$_{4}$ sample. The true
dielectric function is shown as solid lines. The pseudo-dielectric
function (i.e. the one actually measured) is shown as a dashed
line. Data taken from ref. \citep{heumen}.}
\end{figure}
The combination of Eq. (\ref{ellipsrho}) with Eq. (\ref{ellipseps}) is all we need to describe an ellipsometric
experiment on an isotropic sample. For an anisotropic sample
the problem is slightly more difficult. However, there exists a
theorem due to Aspnes which states that the inversion of Eq. (\ref{rp/rs}) results in Eq. (\ref{ellipseps}), but now the
dielectric function on the left-hand side is a so-called
pseudo-dielectric function. This pseudo-dielectric function is
mainly determined by the component parallel to the intersection
of the sample surface and the plane of incidence (the component
along $b$ in figure \ref{grazincid}), but it still contains a
small contribution of the two other components. If we perform
three measurements, each along a different crystal axis, we can
correct the pseudo-dielectric functions and obtain the true
dielectric functions. If the sample is isotropic along two
directions, as is the case in high temperature superconductors
for example, only two measurements are required. Figure
\ref{ellipshg1201} shows as dashed lines the pseudo-dielectric
function of HgBa$_{2}$CuO$_{4}$. In this case the $a$ and $b$
axes have the same optical constants. The c-axis dielectric
function was determined from reflectivity measurements and
subsequently used to correct the pseudo-dielectric function
measured by ellipsometry on the ab-plane. The true dielectric
function after this correction is shown as the solid line.

\subsection{Transmission Experiments}
A technique complementary to the reflection techniques is
transmission spectroscopy. This technique is, obviously, most
suitable for transparent samples. In principle the technique can
also be applied to metallic samples but this requires very thin
samples or films. The reflection experiments discussed above are
usually good methods to obtain accurate estimates of the real part
of the optical conductivity. In contrast, the transmission
experiments discussed below are more sensitive to weak
absorptions, or, in other words, to the imaginary part of the
optical conductivity. Note that the simultaneous knowledge of
reflection and transmission spectra allows one to directly determine the full
complex dielectric function without any further approximations. Examples of weak absorptions which are better probed in a
transmission experiment are multi-phonon or magnon absorptions. The equations for transmission experiments are slightly more
difficult than those for the reflection experiments. These
equations simplify if we do the experiment on a wedged sample as
shown in figure \ref{wedgedsample}.

\begin{figure}[tbh]
\includegraphics[width=8.5cm]{wedgedsample.png}
\caption{\label{wedgedsample}Transmission experiment on a wedged
sample. After the initial ray is partially reflected back from the
front surface all following rays are no longer parallel to the
first transmitted ray. }
\end{figure}
At the boundary between vacuum and the sample, part of the light
is reflected and part transmitted. The part that is transmitted is
given by,
\begin{equation}\label{tcoef1}
\hat{t}_{v,s} = \frac{2}{1+\hat{n}}.
\end{equation}
Inside the sample the wave propagates according to $e^{i\psi}$, where
\begin{equation}
\psi\equiv\hat{n}d\omega/c.
\end{equation}
At the next boundary again part of the beam is reflected back into
the sample and part is transmitted. Now we can see the advantage
of the wedged sample: the part of the light that is reflected
propagates away at an angle, so that after another reflection the
second transmitted ray is no longer parallel to the first
transmitted ray and is spatially separated from it. This means
that we only have to care about the first transmitted ray. The
transmission coefficient at the boundary from sample to vacuum is
given by,
\begin{equation}\label{tcoef2}
\hat{t}_{s,v} = \frac{2\hat{n}}{\hat{n}+1},
\end{equation}
so that the total transmission coefficient is,
\begin{equation}\label{tcoef3}
\hat{t}_{v,s}e^{i\psi}\hat{t}_{s,v}.
\end{equation}
Putting Eq.
\hat n}}
\right|}}{{\left| {1 + { \hat n}} \right|^2 }}} \right)}
\right\}\frac{{c\sqrt {\varepsilon _1 } }}{{4 \pi d}}. \end{equation}
\begin{figure}[tbh]
\includegraphics[width=6cm]{grueninger2.png}
\caption{\label{reftrans} Comparison of reflectivity and
transmission measured on the same sample. Note the strong
absorptive features present in the transmission spectrum that are
completely invisible in the reflectance spectra. Figure adapted
from \citep{Grueninger}.}
\end{figure}
As an example of this technique we show in figure \ref{reftrans} a
comparison between the reflectivity and transmission spectra of
undoped YBa$_{2}$Cu$_{3}$O$_{6}$ \cite{Grueninger}. This material
is a (Mott) insulator, which is clearly visible from the
reflectivity spectrum. The large structure at low energies is an
optical phonon. At higher energy the reflectivity spectrum appears
to be rather featureless. Focusing our attention on the
transmission spectrum we see that it is almost zero in the phonon
range, but above the phonon range a whole series of sharp dips
shows up. The optical conductivity consists of a set of smaller
peaks at energies between 100 meV and 300 meV which are due to
multi-phonon absorptions, whereas the larger peak just above 300
meV is due to a two-magnon-plus-one-phonon absorption (see also
the section on spin interactions below).

\begin{figure}[tbh]
\includegraphics[width=6cm]{transmission.png}
\caption{\label{transmission} Pictorial of a transmission
experiment on a plan-parallel sample.}
\end{figure}
We can also do the experiment on a sample with two plan-parallel
sides as depicted in figure \ref{transmission}. We can
immediately see that for a given thickness of the sample there
will be interference effects between different transmitted rays
for certain frequencies. These cause oscillations in the
transmission spectra which are called Fabry-Perot resonances. We now analyze the transmission coefficients for this experiment. The coefficient for the first ray is of course the same as in
Eq. (\ref{tcoef3}). The coefficients for the higher order rays
are formed by multiplying $\hat{t}_{v,s}e^{i\psi}$ on the right
with a factor $f$,
\begin{equation}
f\equiv\hat{r}_{s,v}e^{i2\psi}\hat{r}_{s,v},
\end{equation}
followed by a factor $\hat{t}_{s,v}$. So the total transmission
coefficient for the second transmitted ray is given by,
\begin{equation}
\hat{t}_{v,s}e^{i\psi}\hat{r}_{s,v}e^{i\psi}\hat{r}_{s,v}e^{i\psi}\hat{t}_{s,v}=\hat{t}_{v,s}e^{i\psi}\hat{t}_{s,v}f.
\end{equation}
The coefficients $\hat{t}_{v,s}$ and $\hat{t}_{s,v}$ are given
by Eq. (\ref{tcoef1}) and (\ref{tcoef2}). The coefficient for
reflection on a boundary from sample to vacuum is given by,
\begin{equation}
\hat{r}_{s,v} = \frac{\hat{n}-1}{\hat{n}+1}.
\end{equation}
It is easy to see that if we sum over all transmitted rays the
total transmission coefficient is given by,
\begin{equation}\label{tpl}
\hat{t} = \hat{t}_{v,s}e^{i\psi}\hat{t}_{s,v}\left(1 + f + f^{2} + \cdots\right) = \frac{2\hat{n}}{2\hat{n}\cos\psi - i(1+\hat{n}^{2})\sin\psi}.
\end{equation}
For thin films the phase factor $\psi\ll1$ and we can simplify
this equation to,
\begin{equation}
\hat{t}\approx\frac{1}{1 + \frac{2\pi d}{c}\sigma_1 - i\frac{\omega d}{2c}(1+\varepsilon')},
\end{equation}
and so,
\begin{equation}
T(\omega)\approx\frac{1}{1 + 4\pi dc^{-1}\sigma_1(\omega)}.
\end{equation}
\begin{figure}[htb]
\begin{minipage}{8.5cm}
\includegraphics[width=8.5cm]{STOtrans.png}
\end{minipage}\hspace{2pc}%
\begin{minipage}{8.5cm}
\includegraphics[width=6cm]{STOdisp.png}
\end{minipage}
\caption{\label{STO} Left: Far infrared transmission spectrum of
SrTiO$_{3}$. The positions of the peaks determine the polariton
dispersion. The dashed line at low frequency is an extrapolation
to zero frequency. Right: Dispersion relation of polaritons in STO
as derived from the transmission spectrum in the left panel.}
\end{figure}
More generally from Eq. (\ref{tpl}) we obtain,
\begin{equation}
T_{LR} = \frac{4\left|\varepsilon\right|}{\left|4\varepsilon\cos^{2}\psi\right| + \left|\left(1+\varepsilon\right)^{2}\sin^{2}\psi\right| + 2\,{\mathop{\rm Im}\nolimits}\left\{(1+\varepsilon)\sin 2\psi\right\}}.
\end{equation}
In the case that the sample under investigation has only weak
absorptions, i.e. $Im(\hat{n})\approx0$, this equation simplifies
to,
\begin{equation}\label{TLR}
T_{LR}\approx\frac{4n^{2}}{4n^{2} + \left(1-n^{2}\right)^{2}\sin^{2}\left(nd\omega/c\right)}.
\end{equation}
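The maxima and minima of this expression can be sketched numerically (Python with NumPy; the index and thickness are illustrative assumptions): at $\omega=m\pi c/(nd)$ the sine vanishes and $T=1$, while halfway between maxima $T$ drops to $4n^{2}/(1+n^{2})^{2}$.

```python
import numpy as np

# Fabry-Perot transmission of a weakly absorbing plan-parallel slab,
# T = 4n^2 / (4n^2 + (1 - n^2)^2 sin^2(n*d*omega/c)); parameters illustrative.
c = 3.0e10                         # speed of light, cm/s
n, d = 3.0, 0.05                   # real index and thickness (cm)

def T_slab(omega):
    psi = n * d * omega / c
    return 4 * n ** 2 / (4 * n ** 2 + (1 - n ** 2) ** 2 * np.sin(psi) ** 2)

w_max = 2.0 * np.pi * c / (n * d)          # the m = 2 transmission maximum
w_mid = 2.5 * np.pi * c / (n * d)          # halfway to the next maximum
print(T_slab(w_max), T_slab(w_mid))        # ~1 and 4n^2/(1+n^2)^2 = 0.36
```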
This equation gives us some insight into the occurrence of
Fabry-Perot resonances: if $\omega=cm\pi/nd$ with $m=0,1,2,...$
the sine is equal to zero and the transmission $T=1$. In between
these maxima the transmission has minima with
$T\approx4n^{2}/(1+n^{2})^{2}$. In reality the transmission never
reaches 1 due to the fact that $Im(\hat{n})\neq0$, in which case
our approximations are no longer valid. As an example we display
in the left panel of figure \ref{STO} the transmission spectrum
of SrTiO$_{3}$ \cite{mechelen}. This material is very close to
being ferroelectric and as a result it has a very large
dielectric constant. The non-sinusoidal shape of the peaks is due
to this large dielectric constant. One can use the Fabry-Perot
resonances to measure the polariton dispersion, as we now show. Note that at each maximum in the transmission spectrum we know
precisely the value of the argument of the sine function in
Eq. (\ref{TLR}).

\begin{figure}[htb]
\begin{minipage}{8.5 cm}
\includegraphics[width = 8.5 cm]{LSCO.png}
\end{minipage}\hspace{2 pc}%
\begin{minipage}{8.5 cm}
\includegraphics[width = 8.5 cm]{LSCOdisp.png}
\end{minipage}
\caption{\label{transLSCO}Left: Transmission spectrum of LSCO at a
temperature just above $T_{c}$ and one far below. Note the shift
in peak positions. Figure adapted from ref. \citep{kuzmenko}. Right: Dispersion relation of polaritons in LSCO as derived from
left panel. The squares are derived from the spectrum in the
superconducting state whereas the circles are determined at a
temperature slightly above $T_{c}$. }
\end{figure}
We can read off the value of $\omega$ from the graph and using Eq. (\ref{poldisp}) we can replace the argument in the sine function
by $n\omega d/c=qd$ so the momentum at a given maximum is,
\begin{equation}
q_{m}=\frac{m\pi}{d}\quad m=0,1,2,... \end{equation}
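Numerically the extraction of the dispersion amounts to a linear fit. A sketch with synthetic peak positions, generated here with an STO-like index $n=20.5$ purely for illustration:

```python
import numpy as np

c = 2.998e10    # speed of light in cm/s
n_true = 20.5   # index used to generate the synthetic "measured" peaks
d = 0.05        # sample thickness in cm (illustrative)

m = np.arange(1, 11)
q = m * np.pi / d                 # momenta assigned to the Fabry-Perot maxima
omega = c * q / n_true            # synthetic peak frequencies, omega = c q / n

slope = np.polyfit(q, omega, 1)[0]   # linear dispersion: omega = (c/n) q
n_fit = c / slope
print(n_fit)
```

For a dispersionless index the fitted slope directly returns $c/n$, exactly the procedure used for the STO data in the right panel of figure \ref{STO}.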
So given the thickness of the sample we can make a plot of
$\omega(q)$. The result for STO is shown in the right panel of
figure \ref{STO}. We see that the dispersion is linear, indicating
that $n$ is dispersion-less in this range. The slope of the curve
directly gives us $n\approx20.5 $. Another interesting application
of this is to superconductors. In figure \ref{transLSCO} we show
the transmission spectrum of LSCO at a temperature slightly above
T$_{c}$ and far below \cite{kuzmenko}. One can see that the
position of the maxima has changed and this shows up in the
polariton dispersion in an interesting way (see right panel figure
\ref{transLSCO}). As in STO we see that in the normal state the
dispersion is linear and extrapolates to zero. In the
superconducting state the dispersion has acquired a $q^{2 }$
dependence and no longer extrapolates to zero for $q\to0 $. This is
the result of the opening of the superconducting gap and it
implies that the polaritons in the superconducting state have
acquired a mass. This is an example of the Anderson-Higgs
mechanism\cite{anderson}, the same mechanism via which the
Higgs-field gives a mass to the W and Z bosons in elementary
particle physics. In the superconductor the order parameter plays
the role of the Higgs-field and the spontaneously broken symmetry
is that of the U(1 ) gauge symmetry. \subsection{TeraHertz time-domain spectroscopy}
This relatively new technique is the last we will discuss here. \begin{figure}[htb]
\includegraphics[width=8.5 cm]{THz.png}
\caption{\label{THz}Left: recorded signal vs. delay distance
without sample. Right: recorded signal vs. delay distance with
sample. Note the extra peaks in the signal on the right due to
multiple reflections in the sample. }
\end{figure}
This technique uses a powerful laser pulse and records the
detector output as a function of time, more often expressed as an
optical delay distance. The result for an experiment in vacuum is
shown in figure \ref{THz} on the left. If we now insert a sample
that is transparent to terahertz radiation in the path of the beam
we expect that due to the different optical path length in the
sample the pulse will arrive at a later time. In fact, if we use a
sample with two plane parallel surfaces we expect a series of peaks
due to multiple reflections in the sample (see right panel of
figure \ref{THz}). These peaks are just a different manifestation
of the Fabry-Perot oscillations observed in transmission
spectroscopy. By Fourier transforming this signal to the frequency
domain and doing the same for the signal without sample we can
again obtain the transmission spectrum. The frequency domain
spectrum corresponding to the time domain spectra of figure
\ref{THz} is shown in figure \ref{STO}. \section{Quantum theory}
We now move to the quantum theoretical description of the
interaction of light and matter using the Kubo-formalism. So far
we have been using a ``geometrical'' or macroscopic view of this
interaction, but in this section we will consider the effects of
the absorption of photons by electrons. Consider for simplicity a
metal. The electronic states of the system are described by a set
of bands, some of which are fully occupied, some partially and the
rest empty, figure \ref{bandstruct}. \begin{figure}[tbh]
\includegraphics[width=8.5 cm]{bandstruct.png}
\caption{\label{bandstruct}The indicated transition is an
interband transition. All states below the dashed line indicated by
E$_{F}$ are occupied; all states above are empty. }
\end{figure}
When photons interact with these band electrons they can be
absorbed and in this process the electron is excited to a higher
lying band leaving behind a hole. In this way we create
electron-hole pairs and this (dipole) transition from a state
$|\Psi_{\nu}^{N}\rangle$ to a state $|\Psi_{\mu}^{N}\rangle$ is
characterized by an optical matrix element,
\begin{equation}
M_{\mu \nu } (\vec q) = \left\langle {\Psi _\mu ^N } \right. \left|
{{\bf \hat v}_q } \right|\left. {\Psi _\nu ^N } \right\rangle. \end{equation}
If the transition is from one band to another band we call the
transition an \textit{interband} transition and if the
transition is within a band we call it an \textit{intraband}
transition. In figure \ref{KCL} we show the optical
conductivity of KCl. In this compound a strong onset is seen in
the optical conductivity around 8.7 eV. This onset is
due to the excitation of electrons from the occupied p-band
related to the Cl$^{-}$ ions to the unoccupied s-band of the
K$^{+}$ ions. Since this particular transition involves moving
charge from the chlorine atoms to the potassium atoms this type
of excitation is called a charge transfer (CT) excitation
\cite{zaanen}. \begin{figure}[bth]
\includegraphics[width=6 cm]{KCL.png}
\caption{\label{KCL}Optical conductivity of KCl. The series of
strong peaks are due to excitons. The onset in absorption
around 9 eV is the onset of charge transfer excitations. }
\end{figure}
Another important feature in figure \ref{KCL} is the series of
strong peaks seen around 7.5 eV. Many theories neglect so-called
vertex corrections because these corrections cancel if the
interactions between electrons are isotropic. However, in real
materials interactions are more often than not anisotropic and
this means that these corrections have to be taken into
account. The peaks seen in figure \ref{KCL} are due to
transitions from bound states of electron-hole pairs, called
excitons, which arise due to the vertex corrections. Before we
start our presentation of the Kubo formalism we first introduce some
notation. We introduce the field operators,
\begin{equation}
\psi^{\dagger}_{\sigma}(\mathbf{r})=\sum_{k}e^{-i\mathbf{k}\cdot\mathbf{r}}\hat{c}^{\dagger}_{k, \sigma}. \end{equation}
The density operator is given by,
\begin{equation}
\hat{n}_{\sigma}(\mathbf{r})=\psi^{\dagger}_{\sigma}(\mathbf{r})\psi_{\sigma}(\mathbf{r}). \end{equation}
In terms of its Fourier components the total density reads,
\begin{equation}
\sum_{\sigma}\hat{n}_{\sigma}(\mathbf{r})=\frac{1 }{V}\sum_{q}e^{-i\mathbf{q}\cdot\mathbf{r}}\rho_{q},
\end{equation}
with
\begin{equation}
\rho_{q}=\sum_{k, \sigma}\hat{c}^{\dagger}_{k-q/2, \sigma}\hat{c}_{k+q/2, \sigma}. \end{equation}
The velocity operator is defined as,
\begin{equation}
\mathbf{\hat{v}}_{q}=\frac{\hbar}{m}\sum_{k, \sigma}\mathbf{k}\hat{c}^{\dagger}_{k-q/2, \sigma}\hat{c}_{k+q/2, \sigma}. \end{equation}
Finally, we note that the operators $\rho_{q}$ and
$\mathbf{\hat{v}}_{q}$ satisfy the continuity equation,
\begin{equation}
\frac{i}{\hbar}\left[\hat{H}, \rho_{q}\right]+i\mathbf{q}\cdot\mathbf{\hat{v}}_{q}=0.
\end{equation}
The induced current density can be written as
$\textbf{J}(\textbf{r}, t)=\textbf{J}^{(1 )}(\textbf{r}, t)+\textbf{J}^{(2 )}(\textbf{r}, t)$. It consists of two terms, the first of which is called the
diamagnetic term,
\begin{equation}\label{J1 }
{\bf J}^{(1 )} (\mathbf{r}, t) = - \frac{{ne^2 }}{{mc}}{\bf
A}(r, t)=\frac{{ine^2 }}{{m\omega}}{\bf E}^{T}(r, t),
\end{equation}
where in the last equality we have used Eq. (\ref{Eop}). The
second term is more difficult. It is given by,
\begin{equation}
{\bf J}^{(2 )} (\mathbf{r}, t) = \frac{{e^2 }}{V}\int\limits_{ -
\infty }^{\rm t} {\left\langle {{\rm e}^{{\rm i}H'\tau } {\rm
e}^{{\rm - i}H\tau } {\bf \hat v}(r, t){\rm e}^{{\rm i}H\tau }
{\rm e}^{{\rm - i}H'\tau } } \right\rangle {\rm e}^{{\rm i}\omega
\left( {{\rm t - }\tau } \right)} {\rm d}\tau } {\rm }. \end{equation}
We make here the approximation of linear response theory: we
expand the exponentials $e^{iH'\tau}$ to first order in
$\textbf{A}(\textbf{r}, t)$ and truncate the series there. After some algebra we arrive at,
\begin{equation}\label{J2 }
\frac{J^{(2 )}}{\mathbf{E}(\mathbf{r}, t)}=\frac{ie^2 }{\omega
V}\sum_{n}\mathbf{v}_{-q}^{nm}\mathbf{v}_{q}^{mn}\left[\frac{1 }{\omega-E_{n}+E_{m}+i0 ^{+}}-\frac{1 }{\omega+E_{n}-E_{m}+i0 ^{+}}\right],
\end{equation}
where we have defined,
\begin{equation}
\mathbf{v}_{q}^{mn}\equiv\langle\Psi_{m}|\mathbf{\hat{v}}_{q}|\Psi_{n}\rangle. \end{equation}
The result we have obtained is for zero temperature but is easily
generalized to finite $T$ if we use the grand canonical ensemble. Combining Eq. (\ref{J1 }) and Eq. (\ref{J2 }) we find for the
optical conductivity,
\begin{equation}\label{conduc}
\sigma_{\alpha, \alpha}(\mathbf{q}, \omega)=\frac{{iNe^2
}}{{mV\omega}}+\frac{{ie^2 }}{{V\omega}}\sum_{n, m\neq
n}e^{\beta(\Omega-E_{n})}\left[\frac{v_{\alpha, q}^{nm}
v_{\alpha, -q}^{mn}}{\omega-\omega_{mn}+i\eta}-\frac{v_{\alpha, -q}^{nm}
v_{\alpha, q}^{mn}}{\omega+\omega_{mn}+i\eta}\right],
\end{equation}
where we have defined $\omega_{mn}\equiv E_{m}-E_{n}$. The optical
conductivity consists now of three contributions: the diamagnetic
term followed by a contribution to positive frequencies and a
contribution to negative frequencies. We note that in general
$\sigma_{\alpha, \alpha}(\mathbf{q}, \omega)$ is a tensor as indicated
by the $\alpha$ subscripts. We further note that the diamagnetic
term does not give a real contribution to the conductivity. This
term gives a $\delta$-function contribution at zero frequency and
this is exactly canceled by a delta function in the second part. This can be seen by using the fact that for every $n$ we have the
following relationship,
\begin{equation}\label{velsum}
\sum_{n, m\neq n}\frac{v_{\alpha, q}^{nm}
v_{\alpha, -q}^{mn}}{\omega_{mn}}=\frac{N}{2 m}. \end{equation}
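This relationship is the velocity form of the Thomas-Reiche-Kuhn sum rule. For a single particle it can be checked in a truncated basis; the harmonic oscillator below is only a convenient test case, not part of the derivation ($\hbar=1$, illustrative parameters):

```python
import numpy as np

m_e, omega0, N = 1.0, 1.0, 80    # particle mass, oscillator frequency, basis size

# Harmonic-oscillator H and x in the number basis (hbar = 1)
n = np.arange(N)
H = np.diag(omega0 * (n + 0.5))
x = np.zeros((N, N))
off = np.sqrt((n[:-1] + 1) / (2 * m_e * omega0))   # <i|x|i+1>
x[n[:-1], n[:-1] + 1] = off
x[n[:-1] + 1, n[:-1]] = off

v = 1j * (H @ x - x @ H)         # v = (i/hbar)[H, x]

E = np.diag(H)
# sum_m |v_{0m}|^2 / (E_m - E_0) should equal N_particles/(2m) = 1/(2 m_e)
s = sum(abs(v[0, mm])**2 / (E[mm] - E[0]) for mm in range(1, N))
print(s, 1 / (2 * m_e))
```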
So we can rewrite Eq. (\ref{conduc}) as,
\begin{equation}\label{cond}
\sigma_{\alpha, \alpha}(\mathbf{q}, \omega)=\frac{{ie^2 }}{{V}}\sum_{n, m\neq
n}\frac{e^{\beta(\Omega-E_{n})}}{\omega_{mn}}\left[\frac{v_{\alpha, q}^{nm}
v_{\alpha, -q}^{mn}}{\omega-\omega_{mn}+i\eta}+\frac{v_{\alpha, -q}^{nm}
v_{\alpha, q}^{mn}}{\omega+\omega_{mn}+i\eta}\right],
\end{equation}
From here on we take the limit $q\to0 $ and define a generalized
oscillator strength $\Omega_{mn}$ as,
\begin{equation}\label{Omeganm}
\Omega_{mn}^{2 }\equiv\frac{8 \pi
e^{2 }e^{\beta(\Omega-E_{n})}|v^{nm}_{\alpha}|^{2 }}{\omega_{mn}V}. \end{equation}
With this definition we are led to the Drude-Lorentz expansion of
the optical conductivity,
\begin{equation}\label{DL}
\sigma_{\alpha, \alpha}(\omega)=\frac{i\omega}{4 \pi}\sum_{n, m\neq
n}\frac{\Omega_{mn}^{2 }}{\omega(\omega+i\gamma_{mn})-\omega^{2 }_{mn}}. \end{equation}
\subsection{Sum Rules}
Sum rules play an important role in optics. Using the equations of
the previous section we derive the Thomas-Reiche-Kuhn sum rule, also
known as the f-sum rule. The f-sum rule states that, apart from
some constants, the area under $\sigma_{1 }$ is proportional to the
number of electrons and inversely proportional to their mass. This
can be shown as follows: integrating Eq. (\ref{DL}) we have,
\begin{equation}
{\mathop{\rm Re}\nolimits} \int\limits_{ - \infty }^\infty {\sigma
\left( \omega \right)d\omega = \frac{1 }{4 }} \sum\limits_{n, m \ne
n} {\Omega _{_{mn} }^2 }. \end{equation}
Using the expression for $\Omega_{mn}$, Eq. (\ref{Omeganm}), and
expression (\ref{velsum}) we can rewrite the sum on the right hand
side as,
\begin{equation}
\sum\limits_{n, m \ne n} {\Omega _{_{mn} }^2 } = \frac{{4 \pi e^2
N}}{{mV}}\sum\limits_n {e^{\beta \left( {\Omega - E_n } \right)}
= } \frac{{4 \pi e^2 N}}{{mV}}. \end{equation}
So the f-sum rule states that,
\begin{equation}
\int_{-\infty}^{\infty}\sigma_{1 }(\omega)d\omega=\frac{{\pi e^2
N}}{{mV}},
\end{equation}
\begin{figure}[tbh]
\includegraphics[width=8.5 cm]{Alsumrule.png}
\caption{\label{ALsumrule}Effective number of carriers
$n_{eff}(\Omega_{c})$ as a function of cutoff frequency
$\Omega_{c}$ for Al. Figure adapted from \citep{smith}. }
\end{figure}
as promised. This is the full universal sum rule. It is often
rewritten as an integral over positive frequencies only and
using the definition of the plasma frequency $\omega_{p}$,
\begin{equation}
\omega_{p}^{2 }\equiv\frac{{4 \pi e^2 N}}{{mV}},
\end{equation}
as,
\begin{equation}
\int_{0 }^{\infty}\sigma_{1 }(\omega)d\omega=\frac{\omega_{p}^{2 }}{8 }. \end{equation}
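This version of the sum rule is easily verified numerically for a simple Drude conductivity $\sigma_{1}(\omega)=(\omega_{p}^{2}/4\pi)\,\tau/(1+\omega^{2}\tau^{2})$ (illustrative parameters, arbitrary units):

```python
import numpy as np

omega_p, tau = 2.0, 5.0                    # illustrative parameters

omega = np.linspace(0.0, 2000.0, 500001)   # cutoff chosen >> 1/tau
sigma1 = (omega_p**2 / (4 * np.pi)) * tau / (1 + (omega * tau)**2)

# trapezoid rule on a uniform grid
d_omega = omega[1] - omega[0]
integral = (0.5 * (sigma1[0] + sigma1[-1]) + sigma1[1:-1].sum()) * d_omega

print(integral, omega_p**2 / 8)   # the two numbers should agree
```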
We can also define \textit{partial} sum rules, i.e. sum rules
where we integrate up to a certain frequency cutoff $\Omega_{c}$. In such a case the sum rule is not universal (this means for
instance that the value of this sum rule can depend on
temperature) and we can now define a plasma frequency that depends
on the chosen cutoff frequency,
\begin{equation}
\omega_{p, valence}^{2 }\equiv\frac{4 \pi
e^{2 }}{m}n_{eff}(\Omega_{c}). \end{equation}
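A sketch of such a partial sum rule analysis for a toy model with one Drude (intraband) and one Lorentz (interband) contribution; in units where $4\pi e^{2}/m=1$, $n_{eff}(\Omega_{c})$ counts the carriers of each term as the cutoff sweeps through it (all parameters illustrative):

```python
import numpy as np

# Toy model: Drude term with n1 carriers plus a Lorentz (interband)
# oscillator with n2 carriers at omega0. Units with 4*pi*e^2/m = 1,
# so the full f-sum rule reads 8 * integral(sigma1) = n1 + n2.
n1, n2 = 1.0, 2.0
tau, omega0, gamma = 50.0, 10.0, 0.5

omega = np.linspace(0.0, 500.0, 1_000_001)
drude = (n1 / (4 * np.pi)) * tau / (1 + (omega * tau)**2)
lorentz = (n2 / (4 * np.pi)) * gamma * omega**2 / (
    (omega0**2 - omega**2)**2 + (gamma * omega)**2)
sigma1 = drude + lorentz

# Partial sum rule: n_eff(Omega_c) = 8 * integral_0^{Omega_c} sigma1
d_omega = omega[1] - omega[0]
n_eff = 8.0 * np.cumsum(sigma1) * d_omega

cut = np.searchsorted(omega, 5.0)   # cutoff between the two features
print(n_eff[cut], n_eff[-1])        # ~ n1, then ~ n1 + n2
```

The cutoff dependence mimics, in caricature, the stepwise growth of $n_{eff}$ seen for aluminum in figure \ref{ALsumrule}.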
A nice example of the application of the partial sum rule is shown
in figure \ref{ALsumrule}, where it is applied to
the optical conductivity of aluminum \cite{smith}: the
effective number of carriers contributing to the sum rule is
plotted as a function of $\Omega_{c}$. $n_{eff}(\Omega_{c})$
slowly increases to a value of roughly three around 50 eV. This
means that as we increase the cutoff from zero to 50 eV we are
slowly integrating over the intraband transitions and when we
reach a value of 50 eV we have integrated over all transitions
involving the three valence electrons. For higher energies the
interband transitions start to contribute with a sharp onset near
80 eV. Finally at 10 $^{4 }$ eV the sum rule saturates at 13
electrons, the total number of electrons of aluminum. \begin{figure}[tbh]
\includegraphics[width=8.5 cm]{sigmaSC.png}
\caption{\label{sigmaSC}Optical conductivity of Bi-2212 at $T_{c}$
and below. The difference in area between the two curves is an
estimate of the superfluid density. }
\end{figure}
Another application of sum rules can be found in superconductors. In a superconductor the electrons form a superfluid condensate. This condensate shows up in the optical conductivity as a delta
function at zero frequency (it contributes a diamagnetic term as
in Eq. (\ref{conduc})). At the same time a gap opens up in the low
frequency part of the spectrum where the optical conductivity is
(close to) zero, see figure \ref{sigmaSC}. In the normal state the
system is usually metallic and characterized by a Drude peak. In
optical experiments we cannot measure the zero frequency response
and so we cannot directly measure the spectral weight
$\omega_{p, s}^{2 }$ of the condensate. However, using sum rules we
can estimate its spectral weight because the total spectral weight
has to remain constant. This is summarized in the
Ferrell-Glover-Tinkham (FGT) sum rule \cite{ferrel}, which states
that the difference in spectral weight between the optical
conductivity in the superconducting and normal state is precisely
the spectral weight of the condensate,
\begin{equation}
\omega _{p, s} (T)^2 = 8 \int\limits_{0 ^ + }^\infty {\left\{
{\sigma (\omega , T_c ) - \sigma (\omega , T)} \right\}d\omega }. \end{equation}
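A minimal numerical caricature of the FGT sum rule: take a Drude normal state and let a fraction $f$ of its weight condense into the $\omega=0$ delta function, i.e. $\sigma_{s}(\omega)=(1-f)\,\sigma_{n}(\omega)$ for $\omega>0$. This is a crude stand-in for a real gap, but the missing area returns the condensate weight exactly (illustrative parameters):

```python
import numpy as np

omega_p, tau, f = 3.0, 10.0, 0.4   # illustrative parameters

omega = np.linspace(0.0, 2000.0, 500001)
sigma_n = (omega_p**2 / (4 * np.pi)) * tau / (1 + (omega * tau)**2)
sigma_s = (1 - f) * sigma_n        # condensed fraction f removed from omega > 0

# FGT: omega_ps^2 = 8 * integral_{0+}^inf (sigma_n - sigma_s)
d_omega = omega[1] - omega[0]
diff = sigma_n - sigma_s
omega_ps2 = 8 * (0.5 * (diff[0] + diff[-1]) + diff[1:-1].sum()) * d_omega

print(omega_ps2, f * omega_p**2)   # missing area = condensate weight
```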
Note that we integrate here from $0^{+}$. There also exist sum rules for mixtures of different types of
particles,
\begin{equation}\label{phonsum}
\int\limits_0 ^\infty {\sigma _1 \left( {\omega '} \right)}
d\omega ' = \sum\limits_j {\frac{{\pi n_j q_j ^2 }}{{2 m_j }}},
\end{equation}
\begin{figure}[tbh]
\includegraphics[width=8.5 cm]{MgO.png}
\caption{\label{MgO}Optical conductivity due to phonon mode in
MgO. The area under the peak is proportional to the effective
charge of the mode. The inset shows the effective charge
calculated using (\ref{phonsum2 }). Data from \citep{damascelli}. }
\end{figure}
where the index $j$ labels the different species. This sum rule can
be applied to measure the charge of ions involved in vibrational
modes. If we can separate the contribution to the optical
conductivity due to the optical modes we can invert Eq. (\ref{phonsum}) to calculate the effective charge related to the
mode. For example, in MgO (figure \ref{MgO}) both ions contribute
an equal charge $q_{Mg}=-q_{O}$. We define the reduced mass
$\mu$ as $\mu^{-1 }=m_{Mg}^{-1 }+m_{O}^{-1 }$ and assume that the
density of the two is equal. In that case we can rewrite Eq. (\ref{phonsum}) as,
\begin{equation}\label{phonsum2 }
Z(\omega )^2 \equiv \left( {\frac{{q_T^* (\omega )}}{e}}
\right)^2 \equiv \frac{2 \mu}{{\pi ne^2
}}\int\limits_{\omega_{min}}^{\omega_{max}} {\sigma _{ph} \left(
{\omega '} \right)d\omega '},
\end{equation}
where the integral has to be taken in a frequency range such that
it includes the spectral weight of the optical phonon mode but
nothing else. We will now derive expressions for the conductivity sum rule from
a more microscopic point of view. To do that we return to the Kubo
expression for the optical conductivity,
\begin{equation}
\sigma _1 \left( \omega \right) = \frac{{\pi e^2
}}{V}Tr\left\langle {\Psi _n } \right|{\bf \hat v}\left\{
{\frac{{\delta \left( {\omega - \hat H + E_n } \right)}}{{\hat H
- E_n }} + \frac{{\delta \left( {\omega + \hat H - E_n }
\right)}}{{\hat H - E_n }}} \right\}{\bf \hat v}\left| {\Psi _n }
\right\rangle. \end{equation}
The Hamiltonian in this expression is that of the system of
interacting electrons without the interaction with light. It
represents the optical conductivity for the system in an arbitrary
(ground or excited) many-body state $|\Psi_{n}\rangle$. A peculiar
feature of this expression is that although the velocity operators
create only a single electron-hole pair, the Hamiltonian in the
denominator still contains the interactions between all particles
in the system, so the optical conductivity represents the response
of the full collective system of electrons. If we integrate this
expression over frequency we get,
\begin{equation}\label{microsum}
\int\limits_{ - \infty }^\infty {\sigma _1 \left( \omega
\right)d\omega } = \frac{{2 \pi e^2 }}{V}Tr\left\langle {\Psi _n }
\right|{\bf \hat v}\frac{1 }{{\hat H - E_n }}{\bf \hat v}\left|
{\Psi _n } \right\rangle. \end{equation}
We now take a closer look at the right-hand side of this
expression. Remember that,
\begin{equation}\label{commutator}
{\bf \hat v} = \frac{i}{\hbar }\left[ {\hat H, {\bf \hat x}}
\right]. \end{equation}
Using the commutator we can rewrite,
\begin{equation}
- 2 i\hbar {\bf \hat v}\frac{1 }{{\hat H - E_n }}{\bf \hat v} = \left( {\hat H{\bf \hat x} - {\bf \hat x}\hat H} \right)\frac{1 }{{\hat H - E_n }}{\bf \hat v} + {\bf \hat v}\frac{1 }{{\hat H - E_n }}\left( {\hat H{\bf \hat x} - {\bf \hat x}\hat H}
\right). \end{equation}
Inserting this back into Eq. (\ref{microsum}) we find after some
rearranging
\begin{equation}\label{sumrule}
\int\limits_{ - \infty }^\infty {\sigma _1 \left( \omega
\right)d\omega } = \frac{{i\pi e^2 }}{{\hbar V}}\left\langle
{[{\bf \hat v}, {\bf \hat x}]} \right\rangle,
\end{equation}
where $\langle... \rangle$ stands for the trace over all many-body
states.
In table \ref{modelexpr} we give the result for this commutator for
three different models. \begin{table}[tbh]
\begin{tabular}{lcl}
\hline\hline
Free electrons & \quad\quad & $[{\bf \hat v}, {\bf \hat x}] =
\frac{\hbar }{{im}}\sum\limits_{k\sigma } {\hat n_{k\sigma } }$\\
Band electrons & \quad\quad & $[{\bf \hat v}, {\bf \hat x}] =
\frac{1 }{{i\hbar }}\sum\limits_{k\sigma
} {\frac{{\partial ^2 \varepsilon _{k\sigma } }}{{\partial k^2
}}\hat n_{k\sigma } }$\\
N. N. & \quad\quad & $[{\bf \hat v}, {\bf \hat x}] = -
\frac{{a^2 }}{{i\hbar }}\sum\limits_{k\sigma } {\varepsilon
_{k\sigma } \hat n_{k\sigma } }$\\
\hline\hline
\end{tabular}
\caption{Expressions for the commutator in Eq. (\ref{sumrule}) for
three different cases. N. N. stands for the nearest-neighbor
tight-binding model. } \label{modelexpr}
\end{table}
The sum rule for band electrons is in practice the most useful. Suppose that we have a system with only a single reasonably well
isolated band around the Fermi level that can be approximated by a
tight binding dispersion $\varepsilon _k = - t\cos \left( {ka}
\right)$. In that case we find an interesting relation,
\begin{equation}
\int\limits_0 ^{\Omega _c } {\sigma _1 (\omega , T)} d\omega = -
\frac{{\pi e^2 a^2 }}{{2 \hbar ^2 V}}\sum\limits_{k, \sigma }
{\left\langle {\hat n_{k\sigma } \varepsilon _k } \right\rangle _T
} = - \frac{{\pi e^2 a^2 }}{{2 \hbar ^2 V}}E_{kin} (T). \end{equation}
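In units where $e=\hbar=a=V=1$ this relation is easy to explore for a cosine band; the sketch below also illustrates the filling and temperature dependence discussed in the text (parameters illustrative, spectral weight quoted without the $\pi/2$ prefactor):

```python
import numpy as np

t = 1.0
k = np.linspace(-np.pi, np.pi, 20001)[:-1]   # Brillouin zone, lattice constant a = 1
eps = -t * np.cos(k)

def spectral_weight(mu, T):
    """W ~ -E_kin = -<sum_k eps_k f(eps_k)> for the cosine band.

    The Fermi function is written via tanh for numerical stability."""
    f = 0.5 * (1.0 - np.tanh((eps - mu) / (2 * T)))
    return -np.mean(eps * f)

W_half = spectral_weight(mu=0.0, T=0.01)            # half filling
W_quarter = spectral_weight(mu=-t / np.sqrt(2), T=0.01)
W_full = spectral_weight(mu=10 * t, T=0.01)         # band completely filled

print(W_half, W_quarter, W_full)   # maximal at half filling, zero when full
print(spectral_weight(0.0, 0.5))   # warming the half-filled band reduces W
```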
This sum rule states that by measuring the optical spectral weight
we are in fact measuring the kinetic energy of the charge carriers
contributing to the optical conductivity. In real systems this
relation only holds approximately: usually there are other bands
lying nearby and the integral on the left contains contributions
from these as well. Often the bands are described by more
complicated dispersion relations in which case the relation
$\partial^{2 }\varepsilon_{k}/\partial k^{2 }=-a^{2 }\varepsilon_{k}$ does
not hold. We can make some other observations from the sum rule
for band electrons. Suppose again we have a single empty
cosine-like band (it is only necessary that the band is symmetric but it
simplifies the discussion) at $T=0 $. Since the band is empty, the
spectral weight is equal to zero. If we start adding electrons the
spectral weight starts to increase until we reach half-filling. If
we add more electrons the spectral weight will start to decrease
again because the second derivative becomes negative for
$k>\pi/2 a$. If we completely fill the band the contributions from
$k>\pi/2 a$ will precisely cancel the contributions from $k<\pi/2 a$
and the spectral weight is again zero. Now consider what happens
if we have a half-filled band and start to increase the
temperature. Due to the smearing of the Fermi-Dirac distribution
higher energy states will get occupied leaving behind lower energy
empty states. The result of this is that the spectral weight
starts to decrease. One can show using the Sommerfeld expansion
that the spectral weight follows a $T^{2 }$ temperature dependence. In the extreme limit of $T\to\infty$ something remarkable happens:
the Fermi-Dirac distribution is 1 /2 everywhere and the electrons
are equally spread out over the band. The metal has become an
insulator. \subsection{Applications of sum rules to superconductors}
Before we have a look at some applications of sum rules to
superconductors we first summarize some results from BCS theory. We want to apply our ideas to cuprate superconductors so we use a
modified version of the original theory to include the
possibility of d-wave superconductivity. In other words we suppose
that there is some attractive interaction between the electrons
that has a momentum dependence. The energy difference between the
normal and superconducting state due to interactions can be
written as \cite{marel},
\begin{equation}
\langle\hat{H}_{s}^{int}\rangle-\langle\hat{H}_{n}^{int}\rangle=\int
d^{3 }r g(r)V(r)=\sum_{k}g_{k}V_{k},
\end{equation}
where $g(r)$ and $g_{k}$ are the pair correlation function and its
Fourier transform respectively. \begin{figure}[tbh]
\includegraphics[width=6 cm]{correlation. png}
\caption{\label{correlation} Real and momentum space picture of
the correlation functions $g(r)$ and $g_{k}$. Figure adapted from
\citep{marel}. }
\end{figure}
We can find an expression for $g_{k}$,
\begin{equation}
g_{k}=\sum_{q}\frac{\Delta_{q+k}\Delta^{*}_{q}}{4 E_{q+k}E_{q}}. \end{equation}
As usual,
\begin{equation}
E_{k}=\sqrt{(\varepsilon_{k}-\mu)^{2 }+\Delta_{k}^{2 }},
\end{equation}
and the temperature dependence of $\Delta_{k}$ is given by,
\begin{equation}
\Delta_{k}=\sum_{q}\frac{V_{q}\Delta_{q}}{2 E_{q}}\tanh\left(\frac{E_{q}}{2 k_{b}T}\right). \end{equation}
We now use a set of parameters extracted from ARPES measurements
to do some numerical simulations. First of all we calculate
$g_{k}$ and Fourier transform it to obtain $g(r)$. The results are
shown in figure \ref{correlation}. Although $g_{k}$ is not particularly illuminating, $g(r)$ is. This function
is zero at the origin and strongly peaked at the nearest neighbor
sites. This is a manifestation of the d-wave symmetry. We also
note that the correlation function drops off very fast for sites
further removed from the origin. \begin{figure}[tbh]
\includegraphics[width=8.5 cm]{correnergy. png}
\caption{\label{correnergy} Correlation energy and kinetic energy
as a function of temperature for a d-wave BCS superconductor. Figure adapted from \citep{marel}. }
\end{figure}
In figure \ref{correnergy} we show the results for a calculation
of the correlation and kinetic energy using the parameters
extracted from ARPES measurements on Bi-2212. The kinetic energy
is calculated from,
\begin{equation}
\langle\hat{H}_{kin}\rangle=\sum_{k}\varepsilon_{k}\{1 -\frac{\varepsilon_{k}-\mu}{E_{k}}\tanh\left(\frac{E_{k}}{2 k_{b}T}\right)\}. \end{equation}
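The sign of the effect can be checked in a few lines with an s-wave, constant-$\Delta$ caricature of this expression (a symmetric band at half filling, $k_{B}=1$, all numbers illustrative; the full d-wave calculation of ref. \cite{marel} gives the same qualitative answer):

```python
import numpy as np

# Symmetric band at half filling (mu = 0); the grid avoids eps = 0 exactly
eps = np.linspace(-1.0, 1.0, 200000)
T = 0.02                               # temperature, k_B = 1

def E_kin(delta):
    E = np.sqrt(eps**2 + delta**2)
    # <H_kin> = sum_k eps_k {1 - (eps_k/E_k) tanh(E_k/2T)}  (mu = 0)
    return np.mean(eps * (1.0 - (eps / E) * np.tanh(E / (2 * T))))

print(E_kin(0.0), E_kin(0.1))
# The superconducting state has the HIGHER kinetic energy:
print(E_kin(0.1) > E_kin(0.0))
```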
We see that the kinetic energy \textit{increases} in the
superconducting state. This can be easily understood by looking at
what happens to the particle distribution function below $T_{c}$,
as indicated in the left panel of figure \ref{distribution}: when
the system enters the superconducting state the area below the
Fermi energy decreases and the area above the Fermi energy
increases thereby increasing the total kinetic energy of the
system. \begin{figure}[tbh]
\includegraphics[width=8.5 cm]{distribution2. png}
\caption{\label{distribution} Left: Distribution function for the
normal (Fermi liquid like) state and the superconducting state. Right: Distribution function for a non-Fermi Liquid like state and
the superconducting state. }
\end{figure}
Nevertheless the total internal energy, which is the sum of the
interaction energy and the kinetic energy, decreases and this is
of course why the system becomes superconducting. Now let us take
a look at what happens in the cuprates. In figure \ref{WBi2223 } we
display the optical spectral weight $W(\Omega_{c}, T)$ as a
function of $T^{2 }$ for Bi-2223. \begin{figure}[tbh]
\includegraphics[width=6 cm]{Wbi2223.png}
\caption{\label{WBi2223 } Temperature dependent spectral weight of
Bi-2223. Data taken from ref. \citep{carbone}. }
\end{figure}
To compare this to the BCS kinetic energy we have plotted here
$-W(\Omega_{c}, T)$. This result is contrary to the result from our
calculation: the kinetic energy decreases in the superconducting
state. This experimental result, observed first by Molegraaf
\textit{et al. } \cite{molegraaf}, has sparked a lot of interest
both experimentally \cite{heumen, basov, syro, kuz2, carbone} and
theoretically
\cite{hirsch, anderson2, eckl, wrobel, haule, toschi, marsiglio, mayer, norman}. We note that DMFT calculations with the Hubbard model as starting
point have shown the same effect as observed here \cite{mayer}. Roughly speaking the effect is believed to be due to the
"strangeness" of the normal state (right panel figure
\ref{distribution}). It is well known that the normal state of the
cuprates shows non-Fermi-liquid behavior. So if the distribution
function in the normal state does not show the characteristic step
of the Fermi liquid at the Fermi energy but is rather a broadened
function of momentum, it is quite possible that the argument we
made for the increase of the kinetic energy (see above) is
reversed. \subsection{Applications of sum rules: the Heitler-London model}
Another interesting application of sum rules is that we can use
them in some cases to extract the hopping parameters of a system. In order to see how this works we express the optical conductivity
at zero temperature,
\begin{equation}
\sigma _1 \left( \omega \right) = \frac{{\pi e^2
}}{V}\left\langle {\Psi _g } \right|{\bf \hat v}\frac{{\delta
\left( {\omega - \hat H + E_g } \right)}}{{\hat H - E_g }}{\bf
\hat v}\left| {\Psi _g } \right\rangle,
\end{equation}
in terms of the dipole operator. Here $|\Psi_{g}\rangle$ is the
groundstate of the system. To do this we make use of the
commutator Eq. (\ref{commutator}) and the insertion of a complete
set of states. After integrating over frequency we get
\begin{equation}\label{dipsumrule}
\int\limits_0 ^\infty {\sigma _1 \left( \omega \right)d\omega } =
\frac{{\pi e^2 }}{{\hbar ^2 V}}\sum\limits_n {\left( {E_n - E_g }
\right)\left| {\left\langle n \right|{\bf \hat x}\left| g
\right\rangle } \right|^2 }. \end{equation}
We note that this can be done only for finite system sizes. Now
consider the special case of a diatomic molecule with two energy
levels, one on each atom, a hopping parameter $t$, and a distance
$d$ between the two atoms. We also assume that there is a
splitting $\Delta$ between the two levels. The Hamiltonian for
such a system is,
\begin{equation}
H = t\sum\limits_\sigma {\left( {\psi _{L, \sigma }^\dagger \psi
_{R, \sigma } + \psi _{R, \sigma }^\dagger \psi _{L, \sigma } } \right)} +
\frac{\Delta }{2 }\left( {\hat n_R - \hat n_L } \right) + U\left(
{\hat n_{L \uparrow } \hat n_{L \downarrow } + \hat n_{R \uparrow
} \hat n_{R \downarrow } } \right). \end{equation}
The indices $L$ and $R$ indicate the left and right atom
respectively. If we now put one electron in this system we have a
two-level problem that is easily diagonalized. As usual we make
bonding and anti-bonding states,
\begin{eqnarray}
\left| {\psi _{g, \sigma } } \right\rangle = u\left| {\psi _{L, \sigma } } \right\rangle + v\left| {\psi _{R, \sigma } } \right\rangle, \\
\left| {\psi _{e, \sigma } } \right\rangle = v\left| {\psi
_{L, \sigma } } \right\rangle - u\left| {\psi _{R, \sigma } }
\right\rangle. \end{eqnarray}
The coefficients $u$ and $v$ are given by,
\begin{equation}
u = \frac{1 }{{\sqrt 2 }}\sqrt {1 + \frac{\Delta }{{E_{CT}
}}};\quad\quad v = \frac{1 }{{\sqrt 2 }}\sqrt {1 - \frac{\Delta
}{{E_{CT} }}}. \end{equation}
The bonding and anti-bonding states are split by an energy
$E_{CT}$,
\begin{equation}
E_{CT} = \sqrt {\Delta ^2 + 4 t^2 }. \end{equation}
We are now in position to calculate the transition matrix element
appearing in Eq. (\ref{dipsumrule}). The position operator can be
represented by,
\begin{equation}
{\bf \hat x} = \frac{d}{2 }\left( {\hat n_R - \hat n_L } \right). \end{equation}
So the matrix element is,
\begin{eqnarray}
\left\langle {\psi _{g, \sigma } } \right|{\bf \hat x}\left| {\psi
_{e, \sigma } } \right\rangle =
\left(u\langle\psi_{L}|+v\langle\psi_{R}|\right)\frac{d}{2 }\left( {\hat n_R - \hat n_L }
\right)\left(v|\psi_{L}\rangle-u|\psi_{R}\rangle\right) \nonumber\\
=-uv\, d=- \frac{t}{{E_{CT} }}d.
\end{eqnarray}
Using this in the sum rule Eq. (\ref{dipsumrule}) finally gives us
the spectral weight of this model,
\begin{equation}\label{tsum}
\int\limits_0 ^\infty {\sigma _1 \left( \omega \right)d\omega }
=\frac{{e^2 \pi d^2 }}{{\hbar ^2 V}}\frac{{t^2 }}{{\sqrt {\Delta
^2 + 4 t^2 } }}. \end{equation}
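The two-level algebra above is easily verified numerically; in the single-electron sector $U$ drops out (parameters illustrative):

```python
import numpy as np

t, Delta, d = 0.3, 0.9, 1.0    # hopping, level splitting, bond length

# Single-particle Hamiltonian in the (L, R) basis
H = np.array([[-Delta / 2, t],
              [t, Delta / 2]])
E, psi = np.linalg.eigh(H)      # columns of psi are |g> and |e>

E_CT = E[1] - E[0]              # bonding/anti-bonding splitting
print(E_CT, np.sqrt(Delta**2 + 4 * t**2))

x = (d / 2) * np.diag([-1.0, 1.0])          # dipole operator d/2 (n_R - n_L)
matrix_element = psi[:, 0] @ x @ psi[:, 1]
print(abs(matrix_element), t * d / E_CT)    # |<g|x|e>| = t d / E_CT
```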
\begin{figure}[tbh]
\includegraphics[width=8.5 cm]{NaVO.png}
\caption{\label{NaVO} Optical conductivity of
$\alpha$-NaV$_{2 }$O$_{5 }$ for two polarizations: one with the
field parallel to the chains and one perpendicular. Data taken
from ref. \citep{damascelli2 }. }
\end{figure}
We see that there is a very simple relation between the spectral
weight of this model and the hopping parameter. This sum rule has
been applied to $\alpha$-NaV$_{2 }$O$_{5 }$ \cite{damascelli2 }. This
compound is a so-called ladder compound. It consists of double
chains of vanadium atoms forming ladders which are weakly coupled
to each other. Each unit cell contains 4 V atoms and 2 valence
electrons. The vanadium atoms on the rungs of the ladder are more
strongly coupled than those along the legs, i.e. $t_{\perp}\gg
t_{\parallel}$. The Heitler-London model we have discussed above
can be applied to this system since each rung forms precisely a
two level system with different levels. The only difference that
we have to take into account is that this is a crystal consisting
of $N$ independent two level systems. Figure \ref{NaVO} shows the
optical conductivity of $\alpha$-NaV$_{2 }$O$_{5 }$. There are two measurements shown: one with light polarized
parallel to the chains and one with light polarized perpendicular
to the chains. We can immediately read off $E_{CT}\approx 1$ eV. Integrating the contribution to the optical conductivity of the
peaks we find that the spectral weight perpendicular to the chains
is roughly 4 times larger than the spectral weight parallel to the
chains, so $t_{\perp}\approx 4 t_{\parallel}$. Inverting Eq. (\ref{tsum}) we can calculate $t_{\perp}$ and we find
$t_{\perp}\approx0.3 $ eV.
For scattering from impurities,
this scattering rate is just
a constant; otherwise it can depend on
frequency. However, if we define the scattering rate in Eq. (\ref{drude}) to be frequency dependent,
$1 /\tau\equiv1 /\tau(\omega)$, the KK-relations force us to
introduce a frequency dependent effective mass as well. This is
what is done in the generalized Drude formalism \cite{Allen}. The
optical conductivity is written as,
\begin{equation}
\sigma \left( \omega \right) = \frac{ne^2/m}{\tau^{-1}(\omega) - i\omega\, m^{*}(\omega)/m}. \end{equation}
Having measured a conductivity spectrum we can invert these
equations to calculate $1/\tau(\omega)$ or $m^{*}(\omega)/m$ via,
\begin{equation}\label{1 /tau}
\tau ^{ - 1 } (\omega ) \equiv {\mathop{\rm Re}\nolimits}
\frac{{ne^2 /m}}{{\sigma \left( \omega \right)}} = \Sigma
''(\omega ),
\end{equation}
and
\begin{equation}\label{mstar/m}
\frac{{m^{*}(\omega )}}{m} \equiv {\mathop{\rm Im}\nolimits}
\frac{{ - ne^2 /m}}{{\omega \sigma \left( \omega \right)}} = 1 +
\frac{{\Sigma '(\omega )}}{\omega }. \end{equation}
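These inversion formulas are easy to check numerically: build a model $\sigma(\omega)$ from a chosen scattering rate and mass, then recover them. The sketch below works in units where $ne^{2}/m=1$; the input values are arbitrary:

```python
# Quick check of the extended-Drude inversion: construct sigma(w) from a
# chosen 1/tau and m*/m via the generalized Drude form, then invert.
# Units with n e^2 / m = 1 are assumed (a sketch, not real data).
def sigma_gen_drude(w, inv_tau, mass):
    return 1.0 / (inv_tau - 1j * w * mass)    # generalized Drude form

def invert(w, sigma):
    inv_tau = (1.0 / sigma).real              # Re[(ne^2/m)/sigma]
    mass = (-1.0 / (w * sigma)).imag          # Im[-(ne^2/m)/(w sigma)]
    return inv_tau, mass

w = 0.4
s = sigma_gen_drude(w, 0.25, 1.7)             # hypothetical input values
recovered = invert(w, s)                      # recovers (0.25, 1.7)
```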
In the last equality of these equations we have defined an optical
self-energy. Note that this quantity is \textit{not} equivalent to
the self-energy used in the context of Green's functions. We can
rewrite the optical conductivity in terms of $\Sigma(\omega)$ as,
\begin{equation}\label{optcondSigma}
\sigma \left( \omega \right) = \frac{{ne^2 }}{m}\frac{i}{{\omega
+ \Sigma \left( \omega \right)}}. \end{equation}
In the case of impurity scattering $\Sigma(\omega)$ is simply
given by
\begin{equation}
\Sigma \left( \omega \right) = i/\tau _0,
\end{equation}
\begin{figure}[tbh]
\includegraphics[width=8.5 cm]{sigmaCe.png}
\caption{\label{sigmaCe} Optical conductivity of Ce in the
$\alpha$ and $\gamma$ phases. Data taken from ref. \citep{vandereb}. }
\end{figure}
so that $1/\tau(\omega)=1/\tau_{0}$ and $m^{*}(\omega)/m=1$. We can
also capture the effect of the interaction of the electrons with
the static lattice potential in a self-energy,
\begin{equation}
\Sigma \left( \omega \right) = \lambda \omega,
\end{equation}
which gives $\tau^{-1 }(\omega )=0 $ and
$m^{*}(\omega)/m=1 +\lambda$. This is also called static mass
renormalization. Finally we consider dynamical mass
renormalization where the electrons couple to a spectrum of
bosons,
\begin{equation}\label{SEboson}
\Sigma \left( \omega \right) = \frac{{\lambda \omega }}{{1 -
i\omega /T^* }}.
\end{equation}
Here $\lambda$ is a coupling constant and $T^{*}$ is a
characteristic temperature scale related to the bosons. In this
case we find,
\begin{equation}
\tau ^{ - 1 } (\omega ) = \lambda T^* \frac{{\omega ^2 }}{{T^{*2 } +
\omega ^2 }},
\end{equation}
and
\begin{equation}
\frac{{m^{*}(\omega )}}{m} = 1 + \lambda \frac{{T^{*2 } }}{{T^{*2 }
+ \omega ^2 }}. \end{equation}
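Both closed forms follow from splitting Eq. (\ref{SEboson}) into real and imaginary parts; a quick numerical cross-check with arbitrary $\lambda$ and $T^{*}$:

```python
# Cross-check: 1/tau(w) = Sigma''(w) and m*(w)/m = 1 + Sigma'(w)/w for
# the boson-coupling self-energy, against the closed forms above.
lam, T_star = 0.8, 1.5                         # arbitrary test values

def sigma_se(w):
    return lam * w / (1.0 - 1j * w / T_star)   # Sigma(w) of Eq. (SEboson)

for w in (0.2, 0.6, 2.5):
    S = sigma_se(w)
    inv_tau = S.imag                           # scattering rate
    mass = 1.0 + S.real / w                    # effective mass
    assert abs(inv_tau - lam * T_star * w**2 / (T_star**2 + w**2)) < 1e-12
    assert abs(mass - (1.0 + lam * T_star**2 / (T_star**2 + w**2))) < 1e-12
```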
As an example we will discuss the $\alpha$-phase to $\gamma$-phase
transition in pure Cerium. When Cerium is grown at elevated
temperatures it forms in the so called $\gamma$-phase. At low
temperatures a volume collapse occurs and the resulting phase is
called the $\alpha$-phase. This iso-structural transition is first
order. The reduction in volume can be as much as 20 to 30 $\%$. Ce
has 4 valence electrons and these can be distributed between
\begin{figure}[tbh]
\includegraphics[width=6 cm]{tauCe.png}
\caption{\label{tauCe} Scattering rate and effective mass of Ce in
the $\alpha$ and $\gamma$ phases. Data taken from ref. \citep{vandereb}. }
\end{figure}
localized $4 f$ states and the $5 d$ states that form the conduction
band. If occupied, the 4 f states will act as paramagnetic
impurities. In the $\gamma$-phase the Kondo temperature
$T_{K}\approx$ 100 K whereas in the $\alpha$-phase $T_{K}\approx$
2000 K. This difference can be understood to be simply due to the
larger lattice spacing in the $\gamma$-phase: the hopping integral
$t$ is smaller and hence $T_{K}$ is smaller. Figure \ref{sigmaCe}
shows the optical conductivity of $\alpha$- and $\gamma$-Cerium. These measurements were done by depositing Ce films on a substrate
at high and low temperature to form either the $\alpha$ or
$\gamma$ phase. We see that $\gamma$-Cerium is less metallic than
$\alpha$-Cerium. In the $\gamma$-phase there is only a weak Kondo
screening of the impurity magnetic moments and this gives rise to
spin flip scattering, which is the main source of scattering. In
the $\alpha$-phase the moments are screened and form renormalized
band electrons in a very narrow band. This suppresses the
scattering. The difference in scattering rates shows up in the
optical conductivity as a narrower Drude peak for the
$\alpha$-phase (see figure \ref{sigmaCe}). Figure \ref{tauCe}
displays the scattering rate and effective mass extracted from the
optical conductivity in figure \ref{sigmaCe} using Eq. (\ref{1 /tau}) and (\ref{mstar/m}). In the $\gamma$-phase
$1 /\tau(\omega)$ extrapolates to a finite value due to local
moment or spin-flip scattering. We can rewrite the real part of
the optical conductivity in Eq. (\ref{optcondSigma}) with the
self-energy of Eq. (\ref{SEboson}) as follows,
\begin{equation}
\frac{{4 \pi }} {{\omega _{_p }^{*2 } }}\sigma \left( \omega \right)
= \frac{i} {\omega }\frac{1 } {{1 + \frac{\lambda } {{1 - i\omega
/T^* }}}}. \end{equation}
Note that we have defined a renormalized plasma frequency,
$\omega_{p}^{*}$, since the spectral weight is not conserved when
adding $\Sigma(\omega)$ to $\sigma(\omega)$. With some simple
algebra this can be rewritten as,
\begin{equation}
\frac{{4 \pi }} {{\omega _{_p }^{*2 } }}\sigma \left( \omega \right)
= \frac{i} {\omega } + \lambda \frac{{T^* }} {{\omega ^2 +
i\omega (1 + \lambda )T^* }}. \end{equation}
It follows that the real part of this expression is then,
\begin{equation}
\frac{{4 \pi }} {{\omega _{_p }^{*2 } }}Re \sigma \left( \omega
\right) = \frac{\pi } {2 }\delta (\omega ) + \lambda \frac{{T^* }}
{{\omega ^2 + (1 + \lambda )^2 T^{*2 } }}. \end{equation}
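Away from $\omega=0$ the $\delta$-function does not contribute, and both the partial-fraction rewriting and the resulting real part can be verified numerically; a sketch with arbitrary parameters:

```python
# Check the partial-fraction rewriting of 4*pi*sigma/omega_p*^2 and the
# regular (w != 0) part of its real part, for arbitrary lambda and T*.
lam, T_star = 0.8, 1.5

def lhs(w):   # i/w * 1/(1 + lambda/(1 - i w/T*))
    return (1j / w) / (1.0 + lam / (1.0 - 1j * w / T_star))

def rhs(w):   # i/w + lambda T* / (w^2 + i w (1 + lambda) T*)
    return 1j / w + lam * T_star / (w**2 + 1j * w * (1.0 + lam) * T_star)

for w in (0.1, 0.7, 3.0):
    assert abs(lhs(w) - rhs(w)) < 1e-12
    incoherent = lam * T_star / (w**2 + (1.0 + lam)**2 * T_star**2)
    assert abs(rhs(w).real - incoherent) < 1e-12
```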
We see that the optical conductivity is split into two
contributions: a $\delta$-function which represents the
coherent part of the charge response and an incoherent
contribution. The $\delta$-function is usually broadened due to
other scattering channels present in the system. In this case
the $\delta$-function represents the contribution due to the
Kondo-peak whereas the incoherent contribution is due to the
side-bands. This splitting of the conductivity in a coherent
and incoherent contribution is nicely observed in the
$\alpha$-phase of Cerium as indicated in figure \ref{sigmaCe}. \begin{figure}[thb]
\begin{minipage}{6 cm}
\includegraphics[width=8 cm]{ZrB12sig.png}
\end{minipage}\hspace{2 pc}%
\begin{minipage}{6 cm}
\includegraphics[width=6 cm]{ZrB12extdrude.png}
\end{minipage}
\caption{\label{ZrB12 sig}Left: Optical conductivity of ZrB$_{12 }$
at selected temperatures. Right: $1 /\tau$ and $m^{*}/m_{b}$ for
several temperatures. Data taken from ref. \citep{teyssier}. }
\end{figure}
This splitting of the optical conductivity
in coherent and incoherent contributions is much more general
however and is frequently observed in correlated electron
systems. \section{Electron-phonon coupling}
Electron-phonon coupling is most easily described in the framework
of Migdal-Eliashberg theory. The application of the theory to
optics can be found in the papers by Allen \cite{Allen}. In the
so-called Allen approximation the self-energy in Eq. (\ref{optcondSigma}) is calculated using,
\begin{equation}\label{SEallenaprox}
\Sigma \left( \omega \right) = - 2 i\int\limits_0 ^\infty {d\Omega
\alpha _{tr} ^2 F} (\Omega )K(\frac{\omega }{{2 \pi
T}}, \frac{\Omega }{{2 \pi T}}). \end{equation}
Here the kernel $K(\frac{\omega }{{2 \pi T}}, \frac{\Omega }{{2 \pi
T}})$ is given by,
\begin{equation}
K(x,y) = \frac{i}{y} + \frac{y - x}{x}\left[ {\Psi (1 - ix + iy) - \Psi (1 + iy)} \right] -
\frac{y + x}{x}\left[ {\Psi (1 - ix - iy) - \Psi (1 - iy)}
\right],
\end{equation}
where the $\Psi(x)$ are digamma functions. The function
$\alpha^{2 }_{tr}F(\Omega)$ appearing in Eq. (\ref{SEallenaprox})
is the phonon spectral function. The label ``tr'' stands for
transport, indicating that the spectral function is related to a
transport property. This function is different by a multiplicative
factor from the true $\alpha^{2 }F(\Omega)$ as measured by for
instance tunnelling. The electron-phonon coupling strength is
easily calculated from $\alpha^{2 }_{tr}F(\Omega)$ by integration,
\begin{equation}\label{lambdatr}
\lambda_{tr}=2 \int_{0 }^{\infty}\frac{\alpha^{2 }_{tr}F(\Omega)}{\Omega}d\Omega. \end{equation}
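For a flat model spectrum $\alpha^{2}_{tr}F(\Omega)=A$ on an interval $[\Omega_{1},\Omega_{2}]$ (a hypothetical choice for illustration) the integral gives $\lambda_{tr}=2A\ln(\Omega_{2}/\Omega_{1})$, which a direct quadrature reproduces:

```python
import math

# lambda_tr = 2 * int alpha^2_tr F(W) / W dW, evaluated by a simple
# trapezoidal rule; the flat spectrum of height A is a hypothetical model.
def lambda_tr(a2F, w_min, w_max, n=100000):
    h = (w_max - w_min) / n
    s = 0.5 * (a2F(w_min) / w_min + a2F(w_max) / w_max)
    for i in range(1, n):
        w = w_min + i * h
        s += a2F(w) / w
    return 2.0 * h * s

A = 0.5
val = lambda_tr(lambda w: A, 1.0, math.e)      # analytic: 2*A*ln(e) = 1
```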
This approach was first applied by Timusk and Farnworth in a
comparison of tunnelling and optical measurements on the
superconducting properties of Pb \cite{Timusk}. As an example
we discuss the application of this formalism to the optical
properties of ZrB$_{12 }$ \cite{teyssier}. Figure \ref{ZrB12 sig}
shows the optical conductivity of ZrB$_{12 }$. The spectrum
consists of what appears to be a Drude peak and some interband
contributions. Also shown are the calculated $1 /\tau(\omega)$
and $m^{*}(\omega)/m_{b}$. The temperature dependence of
$1 /\tau(\omega)$ is what is usually observed for a narrowing of
the Drude peak with decreasing temperature whereas the strong
frequency dependence is suggestive of electron-phonon
interaction. Using the McMillan formula (\ref{lambdatr}) the
coupling strength was estimated to be $\lambda_{tr}\approx0.7 $. In figure \ref{ZrB12 ref} the reflectivity of ZrB$_{12 }$
together with calculations based on Eq. (\ref{optcondSigma})
and (\ref{SEallenaprox}) is shown. It is clear that a simple
Drude form is not capable of describing the observed
reflectivity. The first fit (fit 1 ) is a fit where the
$\alpha^{2 }_{tr}F(\Omega)$ that was used as input was derived
from specific heat measurements \cite{Lortz}. Although it gives
an improvement over the standard Drude fit there is still some
discrepancy between the data and the fit. To make further
improvements $\alpha^{2 }_{tr}F(\Omega)$ was modelled using a
sum of $\delta$-functions. The results of this modelling are
indicated as fit 2 and fit 3. Using Eq. (\ref{lambdatr}) we
find coupling strengths $\lambda_{tr}\approx$ 1 - 1.3. Another
method to roughly estimate $\alpha^{2 }_{tr}F(\Omega)$ is due to
Marsiglio \cite{Marsiglio1, Marsiglio2}. It states that a rough
estimate of the shape of $\alpha^{2 }_{tr}F(\Omega)$ can be
found by simply differentiating the optical data,
\begin{figure}[tbh]
\includegraphics[width=8.5 cm]{ZrB12ref.png}
\caption{\label{ZrB12 ref}Reflectivity of ZrB$_{12 }$ around 20 K
together with calculations as explained in the text. Figure
adapted from ref. \citep{teyssier}. }
\end{figure}
\begin{equation}
\alpha^{2 }_{tr}F(\Omega)=\frac{1 }{2 \pi}\frac{\Omega_{p}^{2 }}{4 \pi}\frac{d^{2 }}{d\omega^{2 }}Re(\frac{1 }{\sigma(\omega)}),
\end{equation}
where $\Omega_{p}$ is the plasma frequency. The obvious problem
with this method is that it requires the double derivative of the
data. Because of the inevitable noise in the data usually some
form of smoothing is required. Applied to ZrB$_{12 }$ the extracted
$\alpha^{2 }_{tr}F(\Omega)$ shows peaks at the same positions as
the ones extracted before and a coupling strength
$\lambda_{tr}\approx1.1 $. These results indicate a medium to
strong electron-phonon coupling for ZrB$_{12 }$. \section{Polarons}
There exist many definitions of what a polaron is. Electrons
coupled to a phonon have been called polarons, as have free
electrons moving around in an insulator. Here we will consider the
Landau-Pekar approximation for a polaron \cite{landau, landau2}. The idea is that when an electron moves about the crystal it
polarizes the surrounding lattice and this in turn leads to an
attractive potential for the electron. If the interaction between
electron and lattice is sufficiently strong this potential is
capable of trapping the electron and it becomes more or less
localized. The new object, electron plus polarization cloud, is
called a polaron. This self-trapping of electrons can occur in a
number of different situations and different names are used. For
instance, one talks about small polarons in models where only
short range interactions are considered, because this typically
leads to polaron formation with polarons occupying a single
lattice site. From the Landau-Pekar formalism we can get some
feeling of when polarons form and what their
\begin{figure}[thb]
\includegraphics[width=6 cm]{polaroncond.png}
\caption{\label{polcond} Schematic of the optical conductivity of
electrons interacting with a single Einstein mode. }
\end{figure}
properties will be. First of all, the coupling constant $\alpha$
is given by,
\begin{equation}
\alpha^{2}=\frac{Ry}{\hbar\omega_{0}\tilde{\varepsilon}^{2}}\frac{m_{b}}{m_{e}},
\end{equation}
where $Ry$ stands for the unit Rydberg (1 Rydberg =
$m_{e}e^{4}/2\hbar^{2}$ = 13.6 eV), $\omega_{0}$ is the oscillator
frequency of the (Einstein) phonon mode involved, $m_{b}$ and
$m_{e}$ are the band and free electron mass respectively, and
$\tilde{\varepsilon}$ is given by,
\begin{equation}
\frac{1 }{\tilde{\varepsilon}}=\frac{1 }{\varepsilon_{\infty, IR}}-\frac{1 }{\varepsilon(0 )}.
\end{equation}
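A short numerical sketch of the coupling constant; the oscillator energy and dielectric constants below are hypothetical illustration values, not data from the text:

```python
import math

RY_EV = 13.6  # 1 Rydberg in eV

def polaron_alpha(hbar_w0_ev, eps_inf, eps_static, mb_over_me=1.0):
    """Landau-Pekar coupling constant from the expression above;
    all inputs are hypothetical illustration values."""
    inv_eps_tilde = 1.0 / eps_inf - 1.0 / eps_static   # 1/eps_tilde
    alpha_sq = (RY_EV / hbar_w0_ev) * inv_eps_tilde**2 * mb_over_me
    return math.sqrt(alpha_sq)

# e.g. a soft oscillator at 50 meV in a fairly polarizable lattice
alpha = polaron_alpha(0.05, 4.0, 20.0)   # ~3.3, i.e. strong coupling
```

A more polarizable high-frequency response (larger $\varepsilon_{\infty}$) reduces $1/\tilde{\varepsilon}$ and hence the coupling.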
(n = 0 ) with a spectral weight
1 /(1 +0.02 $\alpha^{4 }$) followed by a series of peaks that
describe the incoherent movement of polarons assisted by
$n=1,2,3,\ldots$ phonons. In real solids the peaks are smeared out due
to the fact that phonons form bands. The real part of the
optical conductivity can thus be described as,
\begin{equation}
{\mathop{\rm Re}\nolimits} \frac{{4 \pi }}{{\omega _{_p }^{*2 }
}}\sigma \left( \omega \right) = \frac{1 }{{1 + 0.02 \alpha ^4
}}\frac{{\pi \delta (\omega )}}{2 } + \frac{{0.02 \alpha ^4 }}{{1 +
0.02 \alpha ^4 }}E_{pol}^{ - 1 } \exp \left\{ { - \left(
{\frac{{\omega ^2 - E_{pol}^2 }}{{cE_{pol}^2 }}} \right)^2 }
\right\}. \end{equation}
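The division of spectral weight between the two terms of this expression depends only on the coupling constant $\alpha$; a minimal sketch:

```python
# Fraction of the spectral weight left in the coherent (delta-function)
# term, 1/(1 + 0.02 alpha^4), versus the Holstein side-band.
def coherent_fraction(alpha):
    return 1.0 / (1.0 + 0.02 * alpha**4)

fractions = {a: round(coherent_fraction(a), 3) for a in (0, 1, 2, 4, 6)}
# weak coupling keeps the weight in the Drude peak; by alpha ~ 6 almost
# all of it has moved to the phonon side-bands
```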
The first term in this expression describes the coherent part
of the spectrum, which in real solids will also be smeared out
to finite frequency by other forms of scattering; the second,
incoherent term is called the
Holstein band. The shape of the side-band can be qualitatively
understood by imagining how a polaron has to move through the
lattice. In order to move from one site to another the lattice
deformation around the original site has to relax and be
adjusted on the new site. This relaxation process results in
the multi-phonon side-bands of the Drude peak. \begin{figure}[thb]
\begin{minipage}{6 cm}
\includegraphics[width=4 cm]{LaTiOres.png}
\end{minipage}\hspace{2 pc}%
\begin{minipage}{6 cm}
\includegraphics[width=8 cm]{LaTiOsig.png}
\end{minipage}
\caption{\label{LaTiO} Left panel: temperature dependent
resistivity of LaTiO$_{3.41 }$. Right panel: Optical conductivity
for selected temperatures. Figure adapted from ref. \citep{kuntscher}. }
\end{figure}
The observation of the Holstein side-band is somewhat
complicated because it is not possible to distinguish between
normal interband transitions and the effects due to polaron
formation. There have been some claims that a band observed in
the mid infrared region ($\approx$ 100 - 500 meV) of the
spectrum of high-T$_{c}$ superconductors is due to polaron
formation but many other interpretations exist. Figure
\ref{LSCOdopdep} shows the doping dependence of
La$_{2 -x}$Sr$_{x}$CuO$_{4 }$. The peak that occurs around 0.5 eV
for the 0.02 doped sample has been interpreted as the Holstein
side-band. Another example where polarons could play a role is
in LaTiO$_{3.41 }$ \cite{kuntscher}. In this material the
resistivity (figure \ref{LaTiO}) shows a quasi one dimensional
behavior with an upturn of the resistivity at lower
temperatures. This could be due to polaron formation but it has
also been interpreted as due to a charge density wave. The
optical conductivity at low temperatures shows that a large
part of the spectral weight is contained in a side-band around
300 meV (see figure \ref{LaTiO}). If this peak would be due to
polarons we expect that when we warm up the system to higher
temperatures its spectral weight should be diminished. This is
because the increased temperature unbinds electrons from their
self-trapping potential and therefore shifts spectral weight
from the Holstein band to the Drude peak. This is also what is
observed and at the same time explains the decrease of
resistivity with increasing temperature. \begin{figure}[bth]
\includegraphics[width=6 cm]{NaV6O15struct.png}
\caption{\label{NaVOstruct}Crystal structure of
NaV$_{6 }$O$_{15 }$. }
\end{figure}
The last example we will discuss is NaV$_{6 }$O$_{15 }$. The
structure of this compound is built up out of octahedra and
tetrahedra of vanadium and oxygen atoms where the tetrahedra
form quasi 1 -dimensional zig-zag chains (see figure
\ref{NaVOstruct}). There are 3 different types of vanadium
sites in this structure: 2 of them are ionic with a charge 5 +
on the vanadium which has then a $3 d^{0 }$ configuration. The
third site has half an electron more leading to a charge of
4.5 + on the vanadium atom in a $3 d^{1 /2 }$ configuration. Because of this we expect a quarter filled band and metallic
behavior. Figure \ref{NaVOsigma} shows the optical conductivity
of $\beta$- NaV$_{6 }$O$_{15 }$. The chains are along the
direction labelled \textit{b}. At energies around 3000
cm$^{-1 }$ we observe a broad peak for light polarized along the
$b$-direction which could be due to polarons although these
transitions also correspond well with the energies predicted by
the Hubbard model for d-d transitions. If we compare the
conductivity with polarization parallel and perpendicular to
the b-axis, we see that the conductivity perpendicular to the
b-axis is insulating whereas the one along the b-axis is
conducting. This conducting behavior is due to the quarter
filled bands. \begin{figure}[thb]
\includegraphics[width=6 cm]{NaV6O15sigma.png}
\caption{\label{NaVOsigma}Optical conductivity of
$\beta$-NaV$_{6 }$O$_{15 }$ for light polarized along and
perpendicular to the b-axis. Figure adapted from \cite{presura}}
\end{figure}
Do polarons play an important role in the above examples? It
is nearly impossible to answer this question experimentally due to
the above mentioned difficulty in separating polaronic behavior
from normal interband transitions. Moreover, in most cases where
polarons are invoked, other theories are also able to reproduce
the experimental results. To close this section we briefly discuss
what happens if the density of polarons becomes larger. Imagine
what happens if we increase the density of polarons such that we
are getting close to a system with one polaron on each site. In
that case the original lattice will almost be completely deformed
and one can wonder whether the electrons are still capable of
self-trapping. It seems reasonable that in this limit the polaron
picture no longer applies. Another possibility is the formation of
bipolarons. Since the deformation energy of the lattice is
proportional to the square of the electron charge, $E_{pol}\propto -\frac{1}{2}Cq^{2}$,
the binding energy of two separate polarons is $\propto -Cq^{2}$. The
binding energy of a bipolaron (two electrons trapped by the same
polarization cloud) is twice as large however $E_{bipol}\propto
-1 /2 C(2 q)^{2 }$. This binding energy is usually not enough to
overcome the Coulomb repulsion between the electrons. \section{Spin interactions}\label{spininteraction}
As mentioned in the previous section the signatures for the
presence of polarons can often be interpreted with different
ideas. Most often these models are based on coupling to magnetic
interactions. Consider for example the spectrum of the parent
(undoped) compound YBa$_{2 }$Cu$_{3 }$O$_{6 }$ which is a Mott
insulator (see bottom panel of figure \ref{reftrans}). Below 100
meV we see a series of peaks which are due to phonons. But what
about the structure between 100 meV and 1 eV? One of the
difficulties in explaining this structure is that light does not
directly couple to spin degrees of freedom. It is however possible
to indirectly make spin flips with photons (see figure
\ref{spinflip}). \begin{figure}[thb]
\includegraphics[width=10 cm]{spinflip.png}
\caption{\label{spinflip}Interaction diagram for the indirect
interaction of light with spin degrees of freedom. }
\end{figure}
For this process to occur we have to include phonon-magnon
interaction. When a photon enters the material it gets dressed
with phonons forming a polariton which is then coupled to the spin
degrees by the phonon-magnon interaction. This leads to the
possibility of so-called phonon assisted absorption of spin-flip
excitations \cite{Lorenzana}. We see from figure \ref{spinflip}
that the polariton creates a bi-magnon. This is because the
intermediate state has to have spin S = 0. The dashed square
represents all magnon-magnon interactions. The coupling constant
for this process was first calculated by Lorenzana and Sawatzky
and is
\begin{equation}
J_{ph-mag}=\frac{1}{2J}\left\langle\frac{d^{2}J}{du^{2}}\right\rangle\left\langle u^{2}\right\rangle,
\end{equation}
\begin{figure}[thb]
\includegraphics[width=8.5 cm]{spinsigma.png}
\caption{\label{spinsigma}Optical conductivity of (a):
La$_{2 }$CuO$_{4 }$, (b): La$_{2 }$NiO$_{4 }$ and (c):
Sr$_{2}$CuO$_{3}$. Dashed lines are fits using the
Lorenzana-Sawatzky model. }
\end{figure}
where $J$ is the superexchange constant and $u$ is the atomic
displacement vector. In the process momentum and energy have to be
conserved and this leads to
\begin{equation}
k_{\text{magnon 1}} + k_{\text{magnon 2}} + k_{\text{phonon}} =
k_{\text{photon}} \approx 0,
\end{equation}
and
\begin{equation} \omega _{\text{magnon 1}} + \omega _{\text{magnon 2}} +
\omega _{\text{phonon}} = \omega _{\text{photon}},
\end{equation}
for the process in figure \ref{spinflip}. This gives constraints
on the possible absorptions. In figure \ref{spinsigma} some
examples are shown of materials in which we believe this process
to play a role. One of the compounds where the predicted optical
conductivity fits the spectrum very well is in the case of
Sr$_{2 }$CuO$_{3 }$. To make the fit the magnon dispersion as
measured with neutron scattering was used. The reason that this
theory works so well for Sr$_{2 }$CuO$_{3 }$ is that the conduction
is nearly one dimensional. This gives a good starting point
because the magnon spectrum is completely understood. On the
contrary, the theory is not completely capable of predicting the
spectrum of La$_{2 }$CuO$_{4 }$. Most likely the peaks around 0.6
and 0.75 eV are due to 4 and 6 magnon absorption. In the case of
YBa$_{2 }$Cu$_{3 }$O$_{6 }$ the situation gets even more complicated
due to the presence of two layers per unit cell. Because of the
doubling of the unit cell, there are now acoustic and optical
magnon branches just as what would happen in the case of phonons. The effect of this on the optical conductivity was first discussed
by Grueninger \textit{et al.} \cite{Grueninger, Grueninger2}. Another example of probing of spin excitations occurs in
NaV$_{2 }$O$_{5 }$. As already discussed in the previous section
this compound has quasi one dimensional chains as shown in figure
\ref{NaVOstruct}. These chains can be seen to form a so-called
ladder structure, with the ladders parallel to the $b$ direction. \begin{figure}[thb]
\includegraphics[width=3 cm]{NaVOlad.png}
\caption{\label{NaVOlad}Schematic of the ladder structure of
$\alpha$- NaV$_{2}$O$_{5}$. Arrows indicate the positions of the
electrons and their spin orientation. }
\end{figure}
Each adjacent ladder is shifted with respect to the previous
such that the rungs of one ladder fall in between those of the
next (figure \ref{NaVOlad}). The vanadium atoms that form the
ladders have an average charge of +4.5. It has been claimed
\cite{carpy} that the charge distribution is inhomogeneous with
most of the charge on one side of the ladder as indicated in
figure \ref{NaVOlad}. \begin{figure}[thb]
\begin{minipage}{7 cm}
\includegraphics[width=7 cm]{NaVOmag.png}
\caption{\label{NaVOmag}Magnetization of $\alpha$-
NaV$_{2 }$O$_{5 }$. Figure adapted from ref. \citep{Isobe}. }
\end{minipage}\hspace{2 pc}%
\begin{minipage}{7 cm}
\includegraphics[width=5 cm]{NaVOphase.png}
\caption{\label{NaVOphase}Schematic representation of the low
and high temperature phase of $\alpha$- NaV$_{2 }$O$_{5 }$. }
\end{minipage}
\end{figure}
The temperature dependence of the magnetic susceptibility can
be modelled pretty well using a Bonner-Fisher model for a
spin-1/2 Heisenberg chain \cite{Isobe} for temperatures higher
than 34 K (see figure \ref{NaVOmag}). Below 34 K, X-ray
analysis shows a doubling of the a- and b- axes and a
quadrupling of the c-axis. It indicates that the new unit cell
consists of 64 vanadium atoms and 32 valence electrons. At the
same temperature the susceptibility shows an abrupt drop. An
explanation for this transition is in terms of a spin-Peierls
transition. In the high temperature phase (T $>$ T$_{SP}$) the
left side of the ladder has a uniform spin distribution, as
indicated in the left panel of figure \ref{NaVOphase}, which is
reasonably well described with an anti-ferromagnetic (AF) S =
1 /2 Heisenberg spin chain with uniform exchange coupling $J$. For T $<$ T$_{SP}$ the system dimerises due to a deformation of
the lattice, leading to an alternation of exchange couplings
(see right panel figure \ref{NaVOphase}). Here we focus on the
high temperature phase. If the charge inhomogeneity is present
it would lead to a breaking of the inversion symmetry which in
turn leads to a non-zero optical matrix element for two magnon
absorption \cite{damascelli3}. The idea is similar to the
Lorenzana-Sawatzky model discussed above. In the latter case
the phonon effectively lowers the symmetry making the process
optically allowed. The optical conductivity of $\alpha$-
NaV$_{2 }$O$_{5 }$ is shown in figure \ref{alphaNaVO}. We can
model $\alpha$- NaV$_{2 }$O$_{5 }$ with independent ladders where
the hopping probability along a rung ($t_{\perp}$) is much
larger than that along the ladder ($t_{\parallel}$). Furthermore we assume a large on-site repulsion U. We can then
model a ladder by independent rungs. Assuming a quarter filled
ladder (one electron per rung) we have a simple two level
problem leading to bonding and anti-bonding levels (see also
the discussion in the section on applications of sum rules). If
we also include a potential energy difference $\Delta$ between
the sites the wavefunctions become asymmetric with higher
probability on the low potential site and one can show that
this again leads to bonding and anti-bonding solutions which
are split by an energy \cite{damascelli3},
\begin{figure}[bht]
\includegraphics[width=8.5 cm]{alphaNaVO.png}
\caption{\label{alphaNaVO}Optical conductivity of $\alpha$-
NaV$_{2 }$O$_{5 }$. Inset (a) shows the low energy continuum
attributed to charged bi-magnon excitations and inset (b) shows
the temperature dependence of the spectral weight of this
continuum. Figure adapted from ref. \citep{damascelli3}. }
\end{figure}
\begin{equation}\label{Ect}
E_{CT}=\sqrt{\Delta^{2 }+4 t^{2 }_{\perp}}. \end{equation}
Transitions from the bonding to anti-bonding band are optically
active and involve charge transfer (CT) from the left side of the
ladder to the right. The large peak seen in figure \ref{alphaNaVO}
around 1 eV is due to these transitions. The energy position of
the peak indirectly gives evidence for the charge inhomogeneity:
band structure calculations and exact diagonalization of finite
clusters give $t_{\perp}\approx$ 0.35 eV, which would put the
charge transfer peak around 0.7 eV. The observed value of 1 eV
thus indicates $\Delta\neq$ 0.
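This last inference is easy to make quantitative by inverting Eq. (\ref{Ect}) with the numbers quoted above:

```python
import math

def delta_from_ect(e_ct, t_perp):
    """On-rung potential difference from E_CT = sqrt(Delta^2 + 4 t^2)."""
    return math.sqrt(e_ct**2 - 4.0 * t_perp**2)

# E_CT ~ 1 eV (optics), t_perp ~ 0.35 eV (band structure / clusters)
delta = delta_from_ect(1.0, 0.35)   # ~0.71 eV, clearly nonzero
```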
\section*{}
\vspace{-1 cm}
\footnotetext{\textit{$^{a}$~Sorbonne Université, CNRS, Physicochimie des Électrolytes et Nanosystèmes Interfaciaux, F-75005 Paris,
France}}
\footnotetext{\textit{$^{b}$~Réseau sur le Stockage Electrochimique de l’Energie (RS2 E), FR CNRS 3459, 80039 Amiens Cedex,
France }}
\footnotetext{\textit{$^{c}$~Courant Institute of Mathematical Sciences, New York University,
NY, 10012, U. S. A. }}
\footnotetext{$\ast$~E-mail: sophie@marbach.fr}
Microscopy techniques, from dynamic light scattering~\cite{berne2000dynamic} to fluorescence correlation spectroscopy~\cite{elson1974fluorescence}, generally rely on observing a small part of a much bigger underlying system. Understanding macroscopic properties from information at these scales is a major experimental challenge, due in part to large fluctuations. For this reason, there is a long history in theory and simulations, especially in molecular simulations, of tracking fluctuations in a small volume of an underlying larger simulation domain~\cite{schnell2011calculating, kruger2013kirkwood, chandler2005interfaces, kavokine2019ionic} (see also Fig.~\ref{fig:fig1 }). The larger simulation domain serves as a reservoir for particles, allowing one to probe behaviour within the grand canonical ensemble~\cite{kruger2013kirkwood, kavokine2019ionic, frenkel2001understanding}, without resorting to complex insertion/deletion rules~\cite{schnell2011calculating, frenkel2001understanding, robin2023ion, maginn2009molecular, belloni2019non}. \textit{In the static limit}, fluctuations in finite observation volumes give access to various thermodynamic properties. Charge fluctuations in Coulombic systems can quantify screening properties~\cite{van1979thermodynamics, martin1980charge, lebowitz1983charge, levesque2000charge, bekiranov1998fluctuations, kim2005screening, kim2008charge, jancovici2003charge, kavokine2019ionic}. Fluctuations of water molecules in observation volumes near interfaces can probe surface hydrophobicity and solvation free energies~\cite{chandler2005interfaces, patel2010fluctuations, rotenberg2011molecular}.
More generally, density fluctuations integrated over increasing volumes correspond, in the limit of infinite volumes, to Kirkwood-Buff integrals~\cite{kirkwood1951statistical} from which it is possible to extract various thermodynamic properties of the fluid such as partial molar volumes, compressibility, and chemical potentials~\cite{kirkwood1951statistical, kusalik1987thermodynamic, schnell2011calculating, kruger2013kirkwood, dawass2019kirkwood, cheng_computing_2022}. In contrast, resolving fluctuating \textit{dynamics} in finite observation volumes has received less attention. Yet, Green-Kubo integrals -- the kinetic counterpart of Kirkwood-Buff integrals -- give access to various dynamic properties. Integration of fluctuating fluxes over an entire domain makes it possible to probe conductivity, permeance, friction on surfaces, and more~\cite{bocquet2010nanofluidics, bocquet1994hydrodynamic, van2014statistical, guerin2015kubo, oga2019green, espanol2019solution, marbach2019osmosis, minh2022frequency, pireddu2022frequency, zorkot2018current, zorkot2016power, zorkot2016current, detcheverry2013thermal, sega_calculation_2013, cox_finite_2019, caillol_dielectric_1987, caillol_theoretical_1986}, including far from equilibrium~\cite{dal2019linear, lesnicki2020field, lesnicki2021molecular, chun2021nonequilibrium}. The relevance of Green-Kubo integrals over sub-volumes has only recently been raised, particularly to coarse-grain molecular dynamics near interfaces~\cite{duque2019discrete}.
Numerous coarse-graining techniques, from Mori-Zwanzig approaches to (fluctuating) Lattice-Boltzmann or Fluctuating Hydrodynamics, which are crucial to access mesoscale dynamics~\cite{gubbiotti2022electroosmosis}, rely on a detailed understanding of fluctuations in finite volumes~\cite{dunweg_statistical_2007, dunweg_progress_2009, asta2017transient, parsa2017lattice, parsa2020large, tischler2022thermalized, schilling2022coarse, espanol2019solution, donev2010accuracy, donev_fluctuating_hydro_2019, peraud_fluctuating_2017}. \begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figure1.pdf}
\caption{\textbf{Ionic fluctuations in finite observation volumes.} (a) Brownian dynamics simulation, here for a $C_0 = 52~\mathrm{mM}$ salt solution in a cubic, triply periodic simulation domain. Yellow (resp. blue) particles represent cations (resp. anions). The dark gray cube is a finite observation volume, here one of the 64 boxes of size $L_{\rm obs} = L_{\rm sim}/4$. (b) Within the observation box, the number of particles $N = n_+ + n_-$ and the charge $Q = q(n_+ - n_-)$ fluctuate, where $n_+$ (resp. $n_-$) is the number of positive (resp. negative) charges.}
\label{fig:fig1}
\end{figure}
The study of \textit{ionic} fluctuations in finite volumes is especially intriguing. Without electrostatic interactions, particle number fluctuations at a steady state should scale like the average number of particles in the observation volume, $\sim L_{\rm obs}^3$ if $L_{\rm obs}$ is the corresponding size. However, fluctuations of the charge $Q$ are dramatically screened by electrostatics and scale with the \textit{area} of the observation volume for sufficiently large volumes, $\langle Q^2 \rangle \sim L_{\rm obs}^2$~\cite{martin1980charge, lebowitz1983charge, levesque2000charge, bekiranov1998fluctuations, kim2005screening, kim2008charge, jancovici2003charge} -- a generic phenomenon termed hyperuniformity~\cite{torquato2003local, torquato2018hyperuniform, ghosh2017fluctuations, leble2021two}. How does this screening behaviour persist in time? Furthermore, dynamic features of electrolytes, such as conductivity, are not characterized by a universal timescale. An ion's self-diffusion coefficient $D$ -- related to its mobility -- and the Debye screening length $\lambda_D$ -- which quantifies the length scale of electrostatic interactions -- define the characteristic timescale $\tau_{\rm Debye} = \lambda_D^2/D$ for the relaxation of fluctuations in a bulk electrolyte. However, other length scales, such as, here, the size of the observation volume, $L_{\rm obs}$, introduce other timescales, such as $\tau_{\rm Diff} = L_{\rm obs}^2/D$, the time to diffuse across the finite volume. \textit{De facto}, various mixed timescales, combinations $\lambda_D^{\nu} L_{\rm obs}^{2-\nu}/D$ with $\nu$ a real number, will also play a role, as was seen in the charging dynamics of two parallel plates separated by a distance $L$ playing the role of $L_{\rm obs}$ here~\cite{bazant2004diffuse, palaia2023charging, minh2022frequency}. Which timescale dominates fluctuations in finite volumes?
Here, we use a combination of Brownian dynamics simulations and analytical calculations to rationalize ionic fluctuations in finite volumes (Sec.~\ref{sec:secmethodo}). We probe (see Fig.~\ref{fig:fig1}) the particle number $N$ and charge $Q$ in cubic observation volumes with side $L_{\rm obs}$, smaller than the overall system size, for an electrolyte at equilibrium at various concentrations. The correlations of the particle number fluctuations $N$ decay algebraically in time and are not affected by electrostatics (Sec.~\ref{sec:2}). In contrast, charge fluctuations $Q$ strongly depend on electrostatic interactions (Sec.~\ref{sec:3}). In the static limit, we recover hyperuniformity when the observation volume is much larger than the Debye length, $L_{\rm obs} \gg \lambda_D$. The dynamic response of charge correlations encompasses a rich phenomenology that depends on the separation of length scales. To lift this ambiguity, we introduce a global timescale, defined as a weighted integral over the structure factor (Eq.~\eqref{eq:T}), which quantifies the impact of either length scale on the relaxation time. The noise spectrum of both $Q$ and $N$ features a characteristic decay as $1/f^{3/2}$, where $f$ is the frequency, a signature of fractional noise~\cite{marbach2021intrinsic}, showing that such noise is a universal property of diffusing particles observed in finite volumes. The present framework can describe fluctuations in finite volumes for particles with different pairwise interactions, which allows us to discuss our results in the broader context of coarse-graining techniques, hyperuniformity, and electrochemical noise in confined geometries (Sec.~\ref{sec:4}). \section{Methodological overview}
\label{sec:secmethodo}
\subsection{Numerical methods}
We perform Brownian dynamics (BD) simulations of model electrolyte solutions (see Fig.~\ref{fig:fig1}-a). Specifically, we solve overdamped Langevin equations describing the stochastic ion motion in an implicit solvent:
\begin{equation}
\frac{d\bm{x}_i}{dt} = -\frac{D_i}{k_B T} \sum_{j\neq i} \bm{\nabla} V^{\rm Coul}_{ij} (||\bm{x}_i - \bm{x}_j||) + \sqrt{2 D_i} \bm{\eta}_i(t)
\label{eq:Langevin}
\end{equation}
where $\bm{x}_i$ is the 3D position of particle $i$, $D_i$ its diffusion coefficient, $k_B T$ the thermal energy, and $\bm{\eta}_i$ a 3D Gaussian white noise representing the action of the implicit solvent on the ions (such that $\langle \eta_{x, i}\rangle = 0$ and $\langle \eta_{x, i}(t)\eta_{x', j}(t') \rangle = \delta(t-t')\delta_{xx'} \delta_{ij}$, where $x$ indicates the $x^{\text{th}}$ component of the vector $\bm{\eta}_i$). $V_{ij}^{\mathrm{Coul}}$ corresponds to Coulomb interactions
\begin{equation}
V_{ij}^{\mathrm{Coul}}(r=||\bm{x}_i - \bm{x}_j||) = \frac{q_i q_j}{4 \pi \epsilon_0 \epsilon_w r}
\label{eq:COULOMB_POT}
\end{equation}
where $q_i$ is the charge of particle $i$, $\epsilon_0$ is the vacuum permittivity, and $\epsilon_w$ the relative permittivity of the fluid. To avoid ionic collapse, we also add pairwise short-range repulsive interactions (not specified in Eq.~\eqref{eq:Langevin}, see details in Appendix A). We take parameters describing a typical symmetric salt solution, here KCl in water, broadly used in experiments~\cite{bocquet2010nanofluidics, marbach2019osmosis, secchi2016scaling, powell2009nonequilibrium, knowles2019noise, knowles2021current, smeets2008noise}: $q_{+} = e = -q_{-} \equiv q$ where $e$ is the elementary charge, $D_+ = D_- = D = 1.5 \times 10^{-9}~\mathrm{m^2/s}$ and $\epsilon_\mathrm{w} = 78.5$. We conduct simulations with $N_{0} = N_+ = N_-$ ion pairs enclosed in a cubic simulation domain of side $L_{\rm sim} = 16~\mathrm{nm}$ with periodic boundary conditions. The salt concentration is, therefore, $C_0 = N_0/L_{\rm sim}^3$. Additional simulation details may be found in Appendix A. Here, we have chosen minimal interactions between ions; in particular, we have neglected hydrodynamic interactions~\cite{te_vrugt_classical_2020} to isolate the effect of electrostatic interactions. In a companion paper~\cite{FDspectra}, we found that the same BD simulations of a $\sim 1$~M aqueous electrolyte solution, compared with molecular dynamics simulations, capture well the main features of the dynamic structure factor of charges. However, deviations are observed at intermediate wavenumbers, which can be partially reduced by improving the description of static correlations. Electrostatic and hydrodynamic interactions can be jointly addressed with either lengthy molecular dynamics or faster Brownian dynamics of ions in a fluctuating implicit solvent~\cite{ladiges1, ladiges2}.
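The overdamped dynamics of Eq.~\eqref{eq:Langevin} can be integrated with a simple Euler--Maruyama scheme. The following is a minimal illustrative sketch, not the code used for the simulations reported here: it works in reduced units with an assumed Bjerrum length of $0.7$~nm for water at room temperature, uses the minimum-image convention instead of the Ewald summation a production code requires for periodic Coulomb interactions, omits the short-range repulsion, and all names are ours.

```python
import numpy as np

KBT = 1.0          # energies measured in units of k_B T
L_BJERRUM = 0.7    # nm; assumed Bjerrum length of water at room temperature

def coulomb_forces(x, z, box):
    """Pairwise Coulomb forces (units of k_B T / nm), minimum-image convention.
    A production code would use Ewald summation for the periodic Coulomb sum,
    and would add the short-range repulsion that prevents ionic collapse."""
    f = np.zeros_like(x)
    for i in range(len(x)):
        d = x[i] - x                      # displacements from all j to particle i
        d -= box * np.round(d / box)      # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        r2[i] = np.inf                    # exclude self-interaction
        # F_i = k_B T * l_B * z_i * sum_j z_j (x_i - x_j) / r^3
        f[i] = KBT * L_BJERRUM * z[i] * np.sum((z / r2**1.5)[:, None] * d, axis=0)
    return f

def bd_step(x, z, box, D, dt, rng):
    """One Euler-Maruyama step of the overdamped Langevin equation."""
    drift = (D / KBT) * coulomb_forces(x, z, box) * dt
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
    return (x + drift + noise) % box
```

With two opposite unit charges placed a nanometre apart, the drift term pulls them together, as expected from the attractive Coulomb interaction in Eq.~\eqref{eq:Langevin}.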
The study of hydrodynamic interactions (albeit in the absence of electrostatics) will be the focus of a further study~\cite{AliceExclusion} using fluctuating hydrodynamics~\cite{sprinkle2017large, hashemi2022computing}. Our goal in this work is to understand the defining features of the fluctuating number of particles $N=n_+(L_{\rm obs})+n_-(L_{\rm obs})$ and the charge $Q=q[n_+(L_{\rm obs})-n_-(L_{\rm obs})]$ in cubic observation volumes of side $L_{\rm obs}$ within the simulation domain~(see Fig.~\ref{fig:fig1}-a), where $n_{\pm}(L_{\rm obs})$ refer to the number of positively (resp. negatively) charged particles in the observation volume. Recording particle positions, we find that the two quantities fluctuate in time (see Fig.~\ref{fig:fig1}-b), taking discrete values, either around the average box occupation $\langle N \rangle = 2 C_0 L_{\rm obs}^3$ or around the average zero charge $\langle Q \rangle = 0$. To analyze their statistical properties, we examine their static, $C_N(0), C_Q(0)$, and dynamic correlations, $C_N(t) = \langle N(t) N(0)\rangle - \langle N \rangle^2$ and $C_Q(t) = \langle Q(t) Q(0) \rangle$. Here, we have checked that both $N$ and $Q$ follow Gaussian distributions around their average values, which means relevant insight can be obtained without considering higher-order correlations. Yet, in different geometries, such as near interfaces, we might expect deviations from Gaussian distributions, which can be exploited to calculate other thermodynamic quantities (see \emph{e.g.} Refs.~\citenum{chandler2005interfaces, patel2010fluctuations} for the link between water density fluctuations and the hydrophobic/hydrophilic character of surfaces). \subsection{Stochastic Density Functional Theory}
To quantify particle statistics in boxes, we also rely on analytical calculations that introduce, at the mean-field level, the same physical ingredients as in the BD simulations. Among the analytical frameworks available to compute the correlations of $N$ and $Q$~\cite{schnell2011calculating, marbach2021intrinsic, te_vrugt_classical_2020}, stochastic Density Functional Theory (sDFT)~\cite{dean1996langevin, kawasaki1994stochastic} stands out here for its simplicity. sDFT directly describes the fluctuations of the continuous fields $C(\bm{x}, t)$ and $\rho(\bm{x}, t)$ due to individual particle diffusion and has been successfully applied to electrolytes to recover Onsager relations~\cite{demery2016conductivity}. It is also especially suited to extract kinetic properties~\cite{jardat2022diffusion, mahdisoltani2021transient}. Starting from Poisson-Nernst-Planck equations, using sDFT to introduce fluctuations on individual particle fields, and then assuming fluctuations are small compared to the background density ($|c| \sim |\rho/q| \ll C_0$), the fields satisfy~\cite{mahdisoltani2021transient}
\begin{equation}
\begin{cases}
\partial_t c &= D \nabla^2 c + \sqrt{4 D C_0 }\bm{\nabla} \cdot \bm{\eta}_c \\
\partial_t \rho &= D \nabla^2 \rho - D \frac{1 }{\lambda_D^2 } \rho + \sqrt{4 D C_0 }\bm{\nabla}\cdot \bm{\eta}_{\rho}
\end{cases}
\label{eq:L-sPNP}
\end{equation}
where $\lambda_D = \sqrt{ \frac{k_B T \epsilon_0 \epsilon_w}{q^2 (2 C_0)} }$ is the Debye screening length, and the $\bm{\eta}_X$ (for $X\in\{c, \rho\}$) are 3D Gaussian white noises with uncorrelated components (see Appendix B for a short derivation). Within this framework, we obtain the structure factors for the density ($X = c$) and the charge ($X = \rho$). They are best expressed in Fourier space. If $\tilde{X}(\bm{k}, \omega)$ is the Fourier transform of $X(\bm{x}, t)$\footnote{With the convention that $\tilde{X}(\bm{k}, \omega) = \iint e^{-i\omega t} e^{-i \bm{k}\cdot \bm{x}} X(\bm{x}, t) \mathrm{d} \bm{x} \mathrm{d} t$.} then
\begin{equation}
\langle \tilde{X}(\bm{k}, \omega) \tilde{X}(\bm{k}', \omega') \rangle = 2 C_0 q_X^2 (2 \pi)^4 S_{XX}(\bm{k}, \omega) \delta(\omega+\omega') \delta^3 (\bm{k}+ \bm{k}')
\label{eq:S_fourier}
\end{equation}
where $q_{\rho}^2 = q^2 = (Ze)^2 $, $q_{c}^2 = 1 $, and $S_{XX}(\bm{k}, \omega) $ is the structure factor. For both fields, we find
\begin{equation}
\begin{split}
&S_{XX}(\bm{k}, \omega) = \frac{2 D k^2 }{\omega^2 + (Dk^2 /S^{\rm static}_{XX}(k))^2 } \\
&\, \, \, \text{with static structure factors}\, \, \,
S^{\rm static}_{cc}(k) = 1, \, \, \, \, S^{\rm static}_{\rho\rho}(k) = \frac{k^2 }{k^2 + \kappa_D^2 }
\end{split}
\label{eq:Sgeneral}
\end{equation}
where $k = |\bm{k}|$ and $\kappa_D = 1/\lambda_D$. Eq.~\eqref{eq:Sgeneral} for the static charge structure factor $S_{\rho\rho}^{\mathrm{static}}(k)$ corresponds to the classical Debye-H\"uckel result~\cite{hansen2013theory, FDspectra}, which is expected since linearized sDFT yields the lowest-order coupling between diffusion and electrostatics, \textit{i.e.}, dynamics close to equilibrium. More generally, the dynamic structure factor expression in Eq.~\eqref{eq:Sgeneral} holds for various fields, as long as they derive from Markovian (no memory) and Gaussian processes (forces are conservative and derive from an energy that is quadratic in the field)~\cite{FDspectra, AliceExclusion, marbach2018transport}. The present formalism can thus easily be extended to study different pairwise interactions, such as steric interactions. Still, it should be modified to account, \textit{e.g.}, for hydrodynamic interactions~\cite{te_vrugt_classical_2020, AliceExclusion, zorkot2016current, sprinkle2017large, hashemi2022computing}. The correlations of $N$ and $Q$ are then simply given by
\begin{equation}
\begin{cases}
&C_N(t) = \displaystyle \iint_{\mathcal{V}_{\rm obs}} \langle c(\bm{x}, t) c(\bm{x}',0 ) \rangle \mathrm{d} \bm{x} \mathrm{d} \bm{x}' = L_{\rm obs}^3 \iint \frac{\mathrm{d} \bm{k}\mathrm{d} \omega}{(2 \pi)^4 } e^{i\omega t} f_{\mathcal{V}}(\bm{k}) S_{cc}(k, \omega), \\
&C_Q(t) = \displaystyle \iint_{\mathcal{V}_{\rm obs}} \langle \rho(\bm{x}, t) \rho(\bm{x}',0 ) \rangle \mathrm{d} \bm{x} \mathrm{d} \bm{x}' =L_{\rm obs}^3 \iint \frac{\mathrm{d} \bm{k}\mathrm{d} \omega}{(2 \pi)^4 } e^{i\omega t} f_{\mathcal{V}}(\bm{k}) S_{\rho\rho}(k, \omega),
\end{cases}
\label{eq:deltaNS}
\end{equation}
where
\begin{equation}
f_\mathcal{V}(\bm{k}) = \frac{1}{L_{\rm obs}^3} \iint_{\mathcal{V}_{\rm obs}} \mathrm{d} \bm{x}\, \mathrm{d} \bm{x}'\, e^{i \bm{k}\cdot (\bm{x}- \bm{x}')}
\label{eq:fv}
\end{equation}
is a geometrical volume factor that accounts for the shape of the observation box in Fourier space. Eq.~\eqref{eq:deltaNS} extends Kirkwood-Buff-type integrals beyond the static regime~\cite{van1979thermodynamics, jancovici2003charge, kim2008charge, schnell2011calculating, kruger2013kirkwood, kusalik1987thermodynamic}. In addition, compared to previous calculations for the static case, which were conducted in real space~\cite{van1979thermodynamics, jancovici2003charge, kim2008charge, schnell2011calculating, kruger2013kirkwood, kusalik1987thermodynamic}, calculations in Fourier space are more straightforward. Here, we focus on cubic observation boxes, for which statistical analysis can be sped up. However, we expect our results, especially the scaling laws we unravel, to persist for other geometries. We demonstrate the generality of these scalings in Appendix D, where we show analytically that they also apply to spherical observation volumes. \section{Particle number correlations decay algebraically with time}
\label{sec:2 }
We start by analyzing particle number correlations, $C_N(t) = \langle N(t) N(0)\rangle - \langle N \rangle^2$, and present BD simulation results for various observation box sizes in Fig.~\ref{fig:fig2}-a (triangles). We find that correlations decay with time due to particle exchanges between the observation box and the rest of the simulation domain. For larger observation boxes, correlations increase in magnitude since more particles participate in the fluctuations. To rationalize this behaviour, we use the sDFT framework. Inserting the expression of the spectrum, Eq.~\eqref{eq:Sgeneral}, in Eq.~\eqref{eq:deltaNS}, we find after integration (see Appendix C),
\begin{equation}
\begin{split}
C_N(t) &= \langle N \rangle \left[ f_N\left(\frac{4 Dt}{L_{\rm obs}^2 } \right) \right]^3, \\
& \text{where}\, \, f_N\left(\frac{t}{\tau_{\rm Diff}} \equiv \frac{4 D t}{L_{\rm obs}^2 } \right) = \sqrt{\frac{t}{\tau_{\rm Diff} \pi}} \left( e^{-\tau_{\rm Diff}/t} - 1 \right) + \mathrm{erf} \left( \sqrt{\frac{\tau_{\rm Diff}}{t}} \right). \label{eq:deltaN}
\end{split}
\end{equation}
We compare BD results with Eq.~\eqref{eq:deltaN} in Fig.~\ref{fig:fig2}-a (symbols and lines, respectively). The excellent agreement shows that sDFT is indeed well suited to predict particle number fluctuations. \begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figure2.pdf}
\caption{\textbf{Algebraic decay of particle number fluctuations.} (a) Particle number correlations with time, ranging from light blue to dark blue for increasing observation box size. Brownian dynamics results are shown as triangles, with shaded areas indicating one standard deviation around the mean, while lines are predictions from Eq.~\eqref{eq:deltaN}. (b) Rescaled version of (a) showing the algebraic decay at long times as $1/t^{3/2}$. (c) Associated frequency spectrum with the $1/f^{3/2}$ signature of fractional noise~\cite{marbach2021intrinsic}. Here, $C_0 = 104~\mathrm{mM}$; colored legends are shared across (a-c).}
\label{fig:fig2 }
\end{figure}
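Eq.~\eqref{eq:deltaN} is elementary to evaluate numerically. A minimal sketch (function names are ours), useful for checking its limiting behaviour, in particular the long-time algebraic tail $C_N(t)/\langle N \rangle \simeq (\tau_{\rm Diff}/\pi t)^{3/2}$:

```python
from math import erf, exp, sqrt, pi

def f_n(x):
    """f_N of Eq. (deltaN), with x = t/tau_Diff = 4 D t / L_obs^2."""
    if x <= 0.0:
        return 1.0
    return sqrt(x / pi) * (exp(-1.0 / x) - 1.0) + erf(1.0 / sqrt(x))

def c_n(x):
    """C_N(t)/<N> for a cubic observation volume, Eq. (deltaN)."""
    return f_n(x) ** 3
```

At $x \gg 1$, $c_n(x)$ approaches $(1/\pi x)^{3/2}$, the algebraic decay discussed in the text, while $c_n(x) \to 1$ as $x \to 0$.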
Eq.~\eqref{eq:deltaN} shows that number fluctuations scale with the average number of particles in the observation box, $\langle N \rangle$, and are governed by a single timescale, $\tau_{\rm Diff} = L_{\rm obs}^2/4D$, corresponding to particle diffusion across the observation box. This is confirmed in Fig.~\ref{fig:fig2}-b, which shows that all BD results collapse on a master curve, well described by Eq.~\eqref{eq:deltaN}, when correlations are rescaled by $\langle N \rangle$ and time by $\tau_{\rm Diff}$. Furthermore, we find that the correlations decay algebraically as $t^{-3/2}$ at long times. Expanding Eq.~\eqref{eq:deltaN}, we find $C_N(t)/\langle N \rangle \simeq (\tau_{\rm Diff}/\pi t)^{3/2}$, which confirms the exponent of the algebraic decay. This \textit{slow} relaxation of the correlations indicates that particle rearrangements are slow in time due to their diffusive, Brownian nature. Finally, the noise spectrum $S_N(f)$ associated with $N(t)$, reported in Fig.~\ref{fig:fig2}-c, decays at high frequencies as $1/f^{3/2}$, a signature of fractional noise. This $3/2$ exponent is not related to the long-time algebraic decay of the correlations; rather, it corresponds to the early-time behaviour $C_N(t)/\langle N \rangle \simeq 1 - \sqrt{t/\pi\tau_{\rm Diff}}$. Overall, the statistical properties of $N(t)$ are thus characteristic of a so-called fractional Brownian walk, with ``diffusion coefficient'' $\sqrt{1/\pi\tau_{\rm Diff}}$ and Hurst index $H = 1/4$~\cite{mandelbrot1968fractional, burov2011single}. The physical origin of this peculiar mathematical property lies in boundary crossings, here, those of the observation volume. Similar fractional noise signatures were predicted in 1D for non-interacting Brownian particles~\cite{marbach2021intrinsic}.
Remarkably, this fractional feature persists here in 3D, with particle interactions, showing that fractional noise is a universal property of Brownian motion, which arises as soon as a quantity involves particles crossing boundaries. Surprisingly, our results for particle number fluctuations do not depend on electrostatic properties. While this is somewhat expected at low enough salt concentrations $C_0$, steric effects should modify number fluctuations at high concentrations. Steric effects result in oscillations of the static structure factor~\cite{hansen2013theory, thorneywork2018structure}, which are only weakly captured by BD~\cite{FDspectra} and not, at this stage, by sDFT in Eq.~\eqref{eq:Sgeneral}. Steric effects can, however, be captured by improving the expression of the static structure factor analytically~\cite{thorneywork2018structure, hansen1982rescaled, dufreche2002ionic} or by fitting numerically obtained structure factors~\cite{FDspectra}, and will be the object of future work~\cite{AliceExclusion}. \section{Exotic signatures in charge fluctuations}
\label{sec:3 }
We now turn to charge fluctuations within finite observation volumes. \subsection{Hyperuniformity in the static regime}
We first revisit static charge fluctuations in an observation volume, $C_Q(0) = \langle Q^2 \rangle$, to lay the ground for the time dependence. For sufficiently small $L_{\rm obs}$, BD results indicate that charge fluctuations scale with the volume, $C_Q(0) \sim L_{\rm obs}^3$, hence like the average particle number in that region (Fig.~\ref{fig:fig3}-a, circles). In contrast, for large $L_{\rm obs}$, and especially at high concentrations, fluctuations scale only with the area of the probe volume, $C_Q(0) \sim L_{\rm obs}^2$. This peculiar behaviour was predicted theoretically by \citet{martin1980charge} and then verified with Monte Carlo simulations~\cite{lebowitz1983charge, jancovici2003charge, kim2005screening, kim2008charge}. The property that fluctuations over an observation volume scale with its area is nowadays termed \textit{hyperuniformity}, and is a generic feature of particle systems with long-range $1/r$ interactions, where $r$ is the interparticle distance~\cite{torquato2018hyperuniform, ghosh2017fluctuations, leble2021two}. \begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figure3.pdf}
\caption{\textbf{Static charge fluctuations are hyperuniform.} (a) Static charge fluctuations with observation box size $L_{\rm obs}$ for increasing salt concentrations, going from purple to yellow. Dots: results from BD simulations; lines: Eq.~\eqref{eq:Q20}. Error bars are one standard deviation about the mean and are smaller than dot sizes. (b) Rescaled version of (a) showing data collapse, highlighting the ``entropic'' and ``enthalpic'' regimes. (c) Sketch of the origin of fluctuations in both regimes, in 2D for simplicity (see text for details). Fluctuations growing as the area of the observation volume in the enthalpic regime are, by definition, hyperuniform.}
\label{fig:fig3 }
\end{figure}
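The two static regimes can be checked numerically from Eq.~\eqref{eq:deltaNS} taken at $t=0$ together with the static structure factor of Eq.~\eqref{eq:Sgeneral}. A convenient route, sketched below under our own conventions, is to evaluate the geometry integral in real space by Monte Carlo, using the fact that $1/(k^2+\kappa_D^2)$ Fourier-transforms to the Yukawa kernel $e^{-\kappa_D r}/4\pi r$:

```python
import numpy as np

def static_charge_ratio(kappa_L, n_pairs=1_000_000, seed=0):
    """Monte Carlo estimate of C_Q(0) / (q^2 <N>) for a cubic box of side
    L_obs, with kappa_L = L_obs / lambda_D.  Using the Yukawa representation
    1/(k^2 + kappa^2) <-> exp(-kappa r)/(4 pi r), the k-space integral of
    S_static * f_V reduces to
        1 - (kappa L)^2 * E[ exp(-kappa L r) / (4 pi r) ],
    where r = |u - u'| for u, u' drawn uniformly in the unit cube."""
    rng = np.random.default_rng(seed)
    r = np.linalg.norm(rng.random((n_pairs, 3)) - rng.random((n_pairs, 3)), axis=1)
    return 1.0 - kappa_L**2 * np.mean(np.exp(-kappa_L * r) / (4.0 * np.pi * r))
```

The ratio tends to $1$ in the entropic regime $L_{\rm obs} \ll \lambda_D$ and decays as $\sim \lambda_D/L_{\rm obs}$ in the enthalpic, hyperuniform regime $L_{\rm obs} \gg \lambda_D$.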
We may rationalize this behaviour with sDFT: from Eq.~\eqref{eq:deltaNS} taken at $t=0$, we obtain
\begin{equation}
C_Q(0) = q^2 \langle N \rangle \int \frac{d\bm{k}}{(2\pi)^3} \frac{k^2}{k^2 + \kappa_D^2} f_\mathcal{V}(\bm{k}),
\label{eq:Q20}
\end{equation}
where $\langle N \rangle = 2 C_0 L_{\rm obs}^3$ is the average particle number in the box, $\kappa_D = 1/\lambda_D$ and $f_\mathcal{V}(\bm{k})$ is a geometric factor, see Eq.~\eqref{eq:fv}. $C_Q(0)$ depends only on the ratio $L_{\rm obs}/\lambda_D$. Formally, if $L_{\rm obs} \ll \lambda_D$, the dominant part of the spectrum lies at values $k \gg \kappa_D$, and expanding Eq.~\eqref{eq:Q20} one obtains $C_Q(0) \sim q^2 \langle N \rangle = 2 C_0 q^2 L_{\rm obs}^3$ (see Appendix C). In contrast, when $L_{\rm obs} \gg \lambda_D$, further expansions yield $C_Q(0) \sim 2 c_Q C_0 q^2 L_{\rm obs}^2 \lambda_D$, where $c_Q = (16\pi)^{1/3} \simeq 3.7$ ($c_Q$ generally depends on box geometry, see~\citet{kim2008charge} and also Appendix D). Fig.~\ref{fig:fig3}-b shows that rescaling $C_Q(0)$ by $L_{\rm obs}^2 \lambda_D$ and $L_{\rm obs}$ by $\lambda_D$ indeed collapses all results on a master curve. The present framework thus shows that hyperuniformity is not just a property of the system itself but also a property of the observation scale ($L_{\rm obs}$) relative to the scale of the interactions ($\lambda_D$). This behaviour can further be interpreted with an energetic approach, put forward by several authors~\cite{van1979thermodynamics, kim2005screening, lebowitz1983charge}. When $L_{\rm obs} \ll \lambda_D$, electrostatics do not govern the ionic structure at the observation scale. Hence, one can simply place particles in the observation volume without concern for their respective charge, among the diversity of particle arrangements (see Fig.~\ref{fig:fig3}-c). Fluctuations are dominated by entropy and, as in any such statistical physics framework, scale like the average number of particles in the observation volume. In contrast, when $L_{\rm obs} \gg \lambda_D$, an inner, neutral region of the observation volume exists where charges are balanced.
The remaining degree of freedom is at the interface, within a thin shell of thickness $\lambda_D$ around the neutral region, and fluctuations are dominated by the energetic cost of charging the interface. Hence, fluctuations scale as $L_{\rm obs}^2 \lambda_D$ and can be viewed as dominated by enthalpy. This entropic/enthalpic interpretation is similar to solute particle fluctuations in a liquid, where the free energy cost to create a spherical cavity scales with the volume for small radii and with the area for large radii, with a cross-over around 1~nm in water~\cite{chandler2005interfaces}. Since this free energy cost has proved useful to characterize the hydrophilic/hydrophobic behaviour of interfaces from local water density fluctuations~\cite{patel2010fluctuations, rotenberg2011molecular, rego_understanding_2022}, it might be relevant to explore this analogy further in the case of charged systems. \subsection{Charge fluctuations with time: timescales and hyperuniformity}
We now turn to the relaxation of charge correlations by considering $C_Q(t) = \langle Q(t) Q(0) \rangle$. Fig.~\ref{fig:fig4}-a displays the BD results for a fixed salt concentration and various observation volumes, rescaled by $q^2 \langle N \rangle$. At early times, correlations collapse for small $L_{\rm obs}$ (light orange) but not for large ones (dark red), a natural consequence of the static ($t=0$) hyperuniformity discussed above. Surprisingly, at long times, we observe the opposite behaviour: correlations collapse for large $L_{\rm obs}$ but not for small ones. Furthermore, the decay of charge correlations for large $L_{\rm obs}$ is not algebraic, in contrast with number fluctuations, but exponential (see Fig.~\ref{fig:fig4}-b). \begin{figure}[h]
\centering
\includegraphics[width = 0.8\textwidth]{Figure4.pdf}
\caption{\textbf{Charge fluctuations decay exponentially for large observation volumes.} (a) Charge fluctuations rescaled by $q^2\langle N\rangle$, with time, for increasing $L_{\rm obs}$ from yellow to dark red. Dots: results from BD simulations, with shaded areas indicating one standard deviation around the mean; lines: Eq.~\eqref{eq:dQt}. (b) Same as (a) in lin-log scale to highlight the exponential decay for large $L_{\rm obs}$. Here, $C_0 = 104~$mM and $\lambda_D = 0.95~\mathrm{nm}$.}
\label{fig:fig4 }
\end{figure}
We can get direct insight into the exponential decay using sDFT. Integrating Eq.~\eqref{eq:deltaNS} over $\omega$ yields
\begin{equation}
\begin{split}
C_Q(t) &= q^2 \langle N \rangle \int \frac{d\bm{k}}{(2\pi)^3} S_{\rho \rho}^{\rm static}(k) e^{-D k^2 t / S_{\rho \rho}^{\rm static}(k)} f_\mathcal{V}(\bm{k}) \\
&= q^2 \langle N \rangle \, e^{-t/\tau_{\rm Debye}} \int \frac{d\bm{k}}{(2\pi)^3} \frac{k^2}{k^2 + \kappa_D^2} e^{-D k^2 t} f_\mathcal{V}(\bm{k})
\label{eq:dQt}
\end{split}
\end{equation}
where the Debye time $\tau_{\rm Debye} = \lambda_D^2/D$ corresponds to the time to diffuse across the Debye length. Eq.~\eqref{eq:dQt} reproduces the BD results remarkably well (see Fig.~\ref{fig:fig4}, lines). For sufficiently large $L_{\rm obs}$, the correlations decay exponentially with the characteristic timescale $\tau_{\rm Debye}$. Indeed, the relaxation of charge fluctuations is primarily driven by electrostatics: the transient local breakdown of electroneutrality induces an internal electric field driving the ions (with a mobility $q D/k_BT$) to restore electroneutrality. There are clearly other timescales involved in the relaxation, especially for small $L_{\rm obs}$. As mentioned in the introduction, the interplay between $\tau_{\rm Debye}$ and $\tau_{\rm Diff}$ can produce a variety of timescales that could each explain part of the behaviour~\cite{bazant2004diffuse, palaia2023charging, minh2022frequency}. To understand the relaxation behaviour more systematically, we explore in Fig.~\ref{fig:fig5} the relaxation of $C_Q(t) e^{t/\tau_{\rm Debye}}$. Since BD results are well captured by sDFT over a broad range of parameters, we use analytical expansions of Eq.~\eqref{eq:dQt} to quantify the dependence of the results on $\lambda_D$ and $L_{\rm obs}$. Fig.~\ref{fig:fig5}-a first reports the case of observation volumes small compared to the Debye length, $L_{\rm obs} \ll \lambda_D$. Beyond the initial static regime where $C_Q \sim L_{\rm obs}^3$, when $t \gtrsim \tau_{\rm Diff}$, we find, expanding Eq.~\eqref{eq:dQt}, that the correlations decay as $C_Q \sim L_{\rm obs}^3 (\tau_{\rm Diff}/t)^{3/2}$ (see Appendix C). This decay exactly follows that of the particle number in Fig.~\ref{fig:fig2}-b. At this observation length scale \textit{and} timescale, electrostatics do not play any role, and the only relevant timescale appears to be $\tau_{\rm Diff}$.
Eventually, at longer times, $t\gtrsim \tau_{\rm Debye}$, correlations decay faster, as $C_Q \sim L_{\rm obs} \lambda_D^2 e^{-t/\tau_{\rm Debye}} (\tau_{\rm Diff}/t)^{5/2}$, and the Debye timescale $\tau_{\rm Debye}$ appears to govern the relaxation of charge fluctuations. As explained above, this timescale emerges due to restoring electrostatic forces that damp fluctuations arising from diffusion. How do these effects survive when the length scales, $L_{\rm obs} \gg \lambda_D$, and hence the timescales, $\tau_{\rm Debye} \ll \tau_{\rm Diff}$, are reversed? In Fig.~\ref{fig:fig5}-b, we show BD results with parameters $L_{\rm obs} \simeq 2\lambda_D$, which is already hard to achieve with reasonable simulation times. Beyond the static hyperuniform regime where $C_Q \sim L_{\rm obs}^2 \lambda_D$, for $t\gtrsim \tau_{\rm Debye}$ we find $C_Q \sim L_{\rm obs}^2 \lambda_D e^{-t/\tau_{\rm Debye}} (\tau_{\rm Debye}/t)^{1/2}$. The decay of the correlations is apparently entirely due to electrostatic effects, with $\tau_{\rm Debye}$ the relevant timescale, and is faster than exponential. Finally, for $t \gtrsim \tau_{\rm Diff}$, when particles have had time to diffuse across the observation volume, $\tau_{\rm Diff}$ appears in the dynamics, as $C_Q \sim L_{\rm obs} \lambda_D^2 e^{-t/\tau_{\rm Debye}} (\tau_{\rm Diff}/t)^{5/2}$. Curiously, at long times, correlations decay as $C_Q \sim L_{\rm obs} \lambda_D^2 e^{-t/\tau_{\rm Debye}} (\tau_{\rm Diff}/t)^{5/2}$ in both the $L_{\rm obs} \ll \lambda_D$ and $L_{\rm obs} \gg \lambda_D$ regimes. At such long timescales, particles have diffused over distances long enough that $\lambda_D$ and $L_{\rm obs}$ appear comparably small. Remarkably, the amplitude of the fluctuations now scales with the \textit{perimeter} $L_{\rm obs}$ of the observation domain, which we verify numerically in Fig.~\ref{fig:fig5}-c.
Note that the collapse of the data onto the scaling law is not perfect, since simulations are limited in time and the times investigated are not always much larger than $\tau_{\rm Debye}$ and $\tau_{\rm Diff}$ for all parameters ($C_0$, $L_{\rm obs}$). This extreme long-time scaling appears to be a case of hyperuniformity where the dimensional degree of hyperuniformity is increased because fluctuations have relaxed. It is tempting to interpret this result in the following way: at long times, only boundary crossings in volume elements surrounding the cube edges, with area $\lambda_D^2$, matter. This open interpretation could be formally addressed, for example, by investigating the spatial relaxation of fluctuations. Finally, with this curious scaling, we might expect that for quasi-2D electrostatics, such as in extremely confined systems~\cite{mouterde2019molecular, robin2023long}, fluctuations should have the same amplitude at long times. This resonates, more generally, with the peculiar behaviour of fluctuations confined to 2D, from thermal Casimir forces to memory effects~\cite{dean2016nonequilibrium, mahdisoltani2021long, robin2023long}. \begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figure5.pdf}
\caption{\textbf{A variety of timescales emerges in the relaxation of charge correlations.} (a) Charge correlations rescaled by $e^{t/\tau_{\rm Debye}}$ for $L_{\rm obs} \ll \lambda_D$. Vertical dotted orange and blue lines indicate scaling law intersections, at $t = \tau_{\rm Diff}/\pi$ and $t = 3\tau_{\rm Debye}/2$, respectively. (b) Similar to (a) but for $L_{\rm obs} \gg \lambda_D$. The vertical dotted orange and blue lines indicate $t = \sqrt{3}\tau_{\rm Diff}/(2^{7/6} \pi^{5/12})$ and $t = 4\tau_{\rm Debye}/\pi^2$, respectively. (c) Rescaled charge correlations at long times, with a scaling law $C_Q \sim L_{\rm obs}$. In all panels, dots: results from BD simulations, with shaded areas (or error bars) indicating one standard deviation around the mean; lines: Eq.~\eqref{eq:dQt}. The slight discrepancy around $t = 1000~\mathrm{ps}$ in (b) between simulations and theory can be attributed to steric effects.}
\label{fig:fig5 }
\end{figure}
Notably, in deriving such scaling laws, there are various ways to non-dimensionalize time; hence, the relevant timescale is ambiguous. For example, at long times, the scaling law for charge correlations can be written in several ways:
\begin{equation}
C_Q(t) \stackrel[t \gg \tau_{\rm Diff}, \tau_{\rm Debye}]{}{\sim} L_{\rm obs} \lambda_D^2 \left(\frac{\tau_{\rm Diff}}{t}\right)^{5 /2 } e^{-\frac{t}{\tau_{\rm Debye}}} = L_{\rm obs}^2 \lambda_D \left(\frac{\tau_{\rm Diff}^{4 /5 }\tau_{\rm Debye}^{1 /5 }}{t}\right)^{5 /2 } e^{-\frac{t}{\tau_{\rm Debye}}} = ... \label{eq:ambiguity}
\end{equation}
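The equality of the different forms in Eq.~\eqref{eq:ambiguity} can be checked symbolically. The sketch below assumes, for simplicity, the conventions $\tau_{\rm Diff} = L_{\rm obs}^2/D$ and $\tau_{\rm Debye} = \lambda_D^2/D$ (with other prefactor conventions the two forms agree up to an $O(1)$ factor), and omits the common factor $e^{-t/\tau_{\rm Debye}}$:

```python
import sympy as sp

L, lam, D, t = sp.symbols("L lambda_D D t", positive=True)
tau_diff = L**2 / D        # assumed convention: tau_Diff = L_obs^2 / D
tau_debye = lam**2 / D     # tau_Debye = lambda_D^2 / D

form1 = L * lam**2 * (tau_diff / t) ** sp.Rational(5, 2)
form2 = L**2 * lam * (tau_diff ** sp.Rational(4, 5)
                      * tau_debye ** sp.Rational(1, 5) / t) ** sp.Rational(5, 2)

# Both prefactor combinations are identical, illustrating the ambiguity
# in choosing the timescale that non-dimensionalizes t.
assert sp.simplify(form1 / form2) == 1
```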
To interpret these scalings, we generally adopt a rule of aesthetic simplicity; at long times, this corresponds to $C_Q \sim L_{\rm obs} \lambda_D^2 (\tau_{\rm Diff}/t)^{5/2} e^{-t/\tau_{\rm Debye}}$. However, this ambiguity highlights the diversity of timescales at play. Together with the variety of scaling laws uncovered in Fig.~\ref{fig:fig5 }, this naturally raises the question of which timescale dominates the relaxation of charge fluctuations. \subsection{Universal timescale to characterize relaxation of fluctuations}
We therefore define a global, unambiguous relaxation timescale for charge fluctuations as
\begin{equation}
T_Q = \int_0 ^{\infty} \frac{C_Q(t)}{C_Q(0 )}\mathrm{d} t = \frac{L_Q^2 }{D}, \, \, \, \,
\text{where} \, \, \, \, L_Q^2 = \frac{\displaystyle \int \frac{d\bm{k}}{(2 \pi)^3 } \frac{S^{\rm static}_{\rho\rho}(k)^2 }{k^2 } f_\mathcal{V}(\bm{k})}{\displaystyle \int \frac{d\bm{k}}{(2 \pi)^3 } S^{\rm static}_{\rho\rho}(k) f_\mathcal{V}(\bm{k})}
\label{eq:T}
\end{equation}
where the last equality comes from Eqs.~\eqref{eq:Q20 } and~\eqref{eq:dQt}. When calculating $T_N$, \textit{i.e.}\ using Eq.~\eqref{eq:T} but with the structure factor $S_{cc}^{\rm static}(k) = 1$ appropriate for particle number fluctuations, we find $T_N \simeq \tau_{\rm Diff}$, which means that Eq.~\eqref{eq:T} is indeed suited, \textit{a priori}, to uncover a relevant relaxation timescale of the system.
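As a sanity check of the definition of $T_Q$ in Eq.~\eqref{eq:T}, the short numerical sketch below (using a hypothetical exponential model correlation rather than the full expression of Eq.~\eqref{eq:dQt}) integrates $C_Q(t)/C_Q(0)$ and recovers the model's relaxation time:

```python
import numpy as np

tau = 2.0                                  # model relaxation time (arbitrary units)
t = np.linspace(0.0, 50.0 * tau, 200_000)
C = np.exp(-t / tau)                       # model C_Q(t)/C_Q(0): exponential decay

# T_Q = int_0^infty C_Q(t)/C_Q(0) dt, evaluated by trapezoidal quadrature
T_Q = float(np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(t)))
assert abs(T_Q - tau) < 1e-3 * tau         # for an exponential, T_Q equals tau
```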
Fig.~\ref{fig:fig6 }-a shows the ratio $\tau_{\rm Debye}/T_Q$ as a function of the separation of length scales $\lambda_D/L_{\rm obs}$. Unsurprisingly, for $\lambda_D \ll L_{\rm obs}$, we find $T_Q \sim \tau_{\rm Debye}$, while for $\lambda_D \gg L_{\rm obs}$, $T_Q \sim \tau_{\rm Diff}$, showing that the relevant timescale for the correlations is always the smallest one (see also Appendix C). However, there is a broad intermediate region where $T_Q$ spans a combination of both timescales -- resonating with other studies which also find numerous timescales to characterize the charge of electrodes~\cite{bazant2004diffuse, palaia2023charging, minh2022frequency}.
\begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figure6. pdf}
\caption{A global timescale $T_Q$, defined by Eq. ~\eqref{eq:T}, accounts for relaxation of charge fluctuations. (a) $\tau_{\rm Debye}/T_Q$ as a function of $\lambda_D/L_{\rm obs}$ for increasing salt concentrations, from purple to yellow. The scaling behaviour of $T_Q$ is highlighted in both limit regimes. Line: Eq. ~\eqref{eq:T}. (b) Charge correlations, from Fig. ~\ref{fig:fig4 }, rescaled by static charge correlations, with time rescaled by $T_Q$. Increasing observation box sizes, from yellow to dark red, for $C_0 = 104 $~mM. (c) Power spectrum of (b), as a function of frequency. In all plots, dots: BD data; error bars are one standard deviation around the mean. }
\label{fig:fig6 }
\end{figure}
The global timescale $T_Q$ accounts remarkably for the relaxation of charge correlations. Fig. ~\ref{fig:fig6 }-b shows that rescaling the time by $T_Q$ collapses the simulation results for normalized charge fluctuations, except for times approaching $T_Q$, where, as we have seen in Eq. ~\eqref{eq:ambiguity}, the correlation function
can only be described with scaling laws involving multiple timescales. The relevance of $T_Q$ is also apparent in the power spectrum of charge fluctuations $S_Q(f)$ rescaled by $T_Q$ (Fig.~\ref{fig:fig6 }-c). We find that fluctuations plateau at low frequencies, as $S_Q(f=0 ) = C_Q(0 )/T_Q$, containing the information of both static hyperuniformity and the relaxation time. The plateau thus corresponds to the equilibration of particles inside and outside the observation box at long enough times $t\gtrsim T_Q$. At large frequencies, fluctuations decay as $1/f^{3/2}$, similarly to number fluctuations. This decay occurs for frequencies typically larger than $1/T_Q$, showing that $T_Q$ determines how long correlations persist in the observation box, or equivalently the length scale $L_Q = \sqrt{D T_Q}$ over which particles must diffuse before correlations are lost. Again, the $1/f^{3/2}$ decay is a signature of fractional noise and shows that this universal behaviour can be seen regardless of the details of pairwise interactions.
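To illustrate the structure of such a spectrum, a minimal numerical sketch (assuming a simple exponential model correlation with relaxation time $\tau$, and a two-sided Wiener--Khinchin convention; the numerical prefactor of the plateau depends on this convention) shows a low-frequency plateau controlled by $C(0)$ and $\tau$, and a decay beyond $f \sim 1/\tau$:

```python
import numpy as np

tau = 1.0                                  # model relaxation time
t = np.linspace(0.0, 60.0 * tau, 400_000)
C = np.exp(-t / tau)                       # model correlation, C(0) = 1

def spectrum(f):
    """S(f) = 2 * int_0^inf C(t) cos(2 pi f t) dt (two-sided convention)."""
    g = C * np.cos(2.0 * np.pi * f * t)
    return 2.0 * float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))

# Low-frequency plateau set by C(0) and the relaxation time tau
assert abs(spectrum(0.0) - 2.0 * tau) < 1e-2
# Beyond f ~ 1/tau the spectrum decays (Lorentzian for this model)
assert spectrum(5.0 / tau) < 0.01 * spectrum(0.0)
```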
The collapse of timescales onto a single master curve spanning all intermediate combinations of $\tau_{\rm Debye}$ and $\tau_{\rm Diff}$ is strikingly similar to the result of a recent study, involving some of the present authors, on the response to an oscillating electric field of an electrolyte \emph{confined} between two plates separated by a distance $L$~\cite{minh2022frequency}. There, the critical timescale defining a conducting or insulating behaviour, typically the time to ``charge'' the plates by transporting the ions, is either close to $\tau_{\rm Debye}$ or $\tau_{\rm Diff} = L^2/D$ according to the separation of length scales $\lambda_D/L$, and spans all intermediate regimes. Remarkably, here in an equilibrium and \textit{bulk} context, this behaviour remains, showing the universality of such response in electrolytes.
\section{Conclusion and Discussion}
\label{sec:4 }
In this work, we have investigated ionic fluctuations in finite observation volumes, in the dilute regime and at equilibrium. With Brownian dynamics simulations and analytical calculations, we have probed the relaxation of correlations in the particle number $N$ and charge $Q$ in the observation volume. For charges, correlations decay with a timescale depending on the separation of length scales between the size of the observation volume $L_{\rm obs}$ and the Debye screening length $\lambda_D$: ranging from the time to diffuse across the box, $\tau_{\rm Diff} = L^2/4D$, to the time to diffuse across the Debye length, $\tau_{\rm Debye} = \lambda_D^2/D$, and spanning combinations in between. The decay of charge correlations at long times is exponential. In contrast, for particle number, correlations decay algebraically with a single timescale $\tau_{\rm Diff}$, independently of electrostatics. We find that charge correlations are hyperuniform when the size of the observation volume is much larger than the Debye length ($L_{\rm obs} \gg \lambda_D$).
Hyperuniformity persists in time and is even exacerbated at long times, including for small boxes. Finally, both $N$ and $Q$ feature a $1/f^{3/2}$ decay in their power spectrum, a signature of fractional noise, showing the universality of such traces when observing particles diffusing in finite volumes.
\paragraph*{Beyond Debye-H\"uckel: generality of the approach}
Stochastic density functional theory reproduces the simulation results remarkably well and is applicable to more complex systems. In fact, the present analytic theory depends only on the static structure factor of the quantity of interest $X$, $S^{\rm static}_{XX}(k)$. As long as the dynamic structure factor in Eq.~\eqref{eq:Sgeneral} describes the dynamics of $X$ well, which is typical for Markovian and Gaussian systems near equilibrium, we can use Eq.~\eqref{eq:dQt} to compute the correlations of $X$ and Eq.~\eqref{eq:T} to characterize their relaxation time. This is especially interesting since the static structure factor $S^{\rm static}_{XX}(k)$ is sometimes hard to calculate analytically but is fairly accessible experimentally and numerically~\cite{FDspectra, thorneywork2018structure}, allowing one to estimate dynamical quantities from static properties. For example, one could explore, in this way, the effect of steric repulsion, which will be the purpose of a further study~\cite{AliceExclusion}. However, significant extensions of sDFT would be required, \textit{e.g.}\ to model ions with different self-diffusion coefficients, with concentration-dependent diffusion~\cite{dufreche2002ionic}, and with hydrodynamic interactions~\cite{ladiges1, ladiges2}.
\paragraph*{Extracting kinetic properties from fluctuations in finite volumes. }
Beyond relaxation dynamics, other ionic-specific kinetic properties may be extracted from dynamical fluctuations in finite volumes. For example, conductivity~\cite{zorkot2016 power} and the dielectric permittivity and susceptibility~\cite{FDspectra} also derive from integrals of structure factors, and can be addressed within the same theoretical framework as proposed here.
Since even a quantity as simple as the number density has non-trivial fluctuations, with fractional noise signature, it is clear that more complex variables might exhibit rich behaviour in finite volumes. This resonates with coarse-graining issues, for example, with Lattice methods, where fluctuations do not diminish with coarse-graining size in non-ideal systems, such as with steric repulsion~\cite{parsa2020 large, parsa2017 lattice}.
\paragraph*{Hyperuniformity in time. }
Here we have highlighted that the hyperuniform behaviour persists in time, reaching peculiar scalings, especially at long times. Yet, electrolytes are but a special case of particles with long-range interactions (decaying as $1 /r$ where $r$ is the distance between particles), which include also one-component plasmas, active particles, and many others~\cite{torquato2018 hyperuniform, leble2021 two, ghosh2017 fluctuations}. A recent investigation showed remarkable results where long-range correlations were observed both in driven electrolytes and active particle systems~\cite{mahdisoltani2022 nonequilibrium, mahdisoltani2021 long}, for the same underlying mathematical reason. This raises the question of whether the time-dependent behaviour uncovered in the present work extends to this broad class of systems and whether other universal signatures may be unraveled.
\paragraph*{Fractional noise and noise in confined systems. } The omnipresence of the spectrum scaling as $1 /f^{3 /2 }$ when observing fluctuations in finite volumes suggests that such fractional noise could be seen in various contexts. Especially in nanopores, one might wonder if fractional noise is linked with the pink noise $\sim 1 /f$ scalings measured on current correlations~\cite{secchi2016 scaling, bezrukov2000 examining, powell2009 nonequilibrium, siwy2002 origin, knowles2019 noise, dekker2007 solid, smeets2008 noise, knowles2021 current}. Beyond apparent discrepancies (in nanopores such correlations are measured out-of-equilibrium), both contexts involve tracking fluctuations in finite sub-volumes of a larger domain~\cite{kavokine2019 ionic, gravelle2019 adsorption, marbach2021 intrinsic, bezrukov2000 particle, zorkot2018 current, zorkot2016 power, zorkot2016 current}. The $\sim 1 /f$ pink noise arises in more varied electrochemical contexts than nanopores, especially near interfaces, for example in redox monolayers~\cite{grall2022 electrochemical}.
More generally, the advent of microscopy techniques resolving electrochemical fluctuations at the single particle level~\cite{zevenbergen_electrochemical_2009, mathwig_electrical_2012, sun_electrochemistry_2008, grall_attoampere_2021, knowles2021 current} means that there are increasingly more opportunities to compare experiments and theory at the microscopic level, and more contexts to understand the kinetic response of fluctuations in confined or finite volumes.
\section*{Author Contributions}
\textbf{Th\^e Hoang Ngoc Minh:} Conceptualization (equal); Formal Analysis (equal); Investigation (equal); Validation (equal); Writing/Review \& Editing (equal);
\textbf{Benjamin Rotenberg:} Conceptualization (equal); Formal Analysis (supporting); Funding Acquisition (lead); Investigation (supporting); Supervision (equal); Validation (equal); Writing/Review \& Editing (equal).
\textbf{Sophie Marbach:} Conceptualization (lead); Formal Analysis (equal); Funding Acquisition (supporting); Investigation (equal); Supervision (equal); Validation (equal); Writing/Review \& Editing (lead).
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
We wish to acknowledge fruitful discussions with David S. Dean, Pierre Illien, Thomas Lebl\'{e}, Brennan Sprinkle and Alice Thorneywork. S. M. received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement 839225, MolecularControl. This project received funding from the European Research Council under the European Union’s Horizon 2020 research and innovation program (project SENSES, grant Agreement No. 863473 ).
\section*{Introduction}
Let $(R, \mathfrak{m})$ be a commutative Noetherian local ring of dimension $d$ containing a field $K$ of positive characteristic $p$. For an ideal $I$ and a prime power $q=p^e$ we define the ideal $I^{[q]}=\langle a^q \mid a\in I\rangle$, the ideal generated by the $q$th powers of the elements of $I$. Let $I$ be an $\mathfrak{m}$-primary ideal of $R$ and $M$ a finite $R$-module. Then the $R$-modules $M/I^{[q]}M$ have finite length. The \emph{Hilbert-Kunz function} of $M$ with respect to $I$ is
\[ HKF(I, M)(q)=\operatorname{length}(M/I^{[q]}M). \]
If $M=R, I=\mathfrak{m}$ then we have the classical Hilbert-Kunz function $HKF(\mathfrak{m}, R)(q)=HK_R(q)$, introduced by Kunz \cite{Kunz}. He showed that $R$ is regular if and only if $HK_R(q)=q^d$ for all $q$. In \cite{Monsky}, P. Monsky proved that there is a real constant $c(M)$ such that
\[\operatorname{length}(M/I^{[q]}M)=c(M)q^d+O(q^{d-1 }). \]
The \emph{Hilbert-Kunz multiplicity} $e_{HK}(I, M)$ of $M$ with respect to $I$ is
\[e_{HK}(I, M):=\lim_{q\rightarrow\infty}\dfrac{\operatorname{length}(M/I^{[q]}M)}{q^d}. \]
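As a minimal illustration (a toy Python computation, not part of the original text), one can evaluate the Hilbert-Kunz function of the regular ring $R = K[x, y]$ at $\mathfrak{m} = (x, y)$ by counting the monomial basis of $R/\mathfrak{m}^{[q]}$, recovering Kunz's characterization $HK_R(q) = q^d$:

```python
# Hilbert-Kunz function of the regular ring K[x, y] at m = (x, y):
# a K-basis of R/m^[q] = R/(x^q, y^q) is {x^a y^b : 0 <= a, b < q}.
def hk_regular_2d(q):
    return sum(1 for a in range(q) for b in range(q))

for q in (1, 3, 9, 27):                # q = 3^e
    assert hk_regular_2d(q) == q**2    # Kunz: R regular iff HK_R(q) = q^d
```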
There are many questions related to the Hilbert-Kunz function and multiplicity. \begin{inproblem}
Is the Hilbert-Kunz multiplicity always a rational number? \end{inproblem}
\begin{inproblem}
Is there any interpretation in characteristic $0$? \end{inproblem}
For the following problems the ring comes from a finitely generated $\mathbb Z$-algebra by reduction modulo $p$. \begin{inproblem}
How does the Hilbert-Kunz multiplicity depend on the characteristic $p$? \end{inproblem}
\begin{inproblem}
Does the limit
\[\lim_{p\rightarrow\infty} e_{HK}(I_p, R_p) \] exist? \end{inproblem}
\begin{inproblem}[C. Miller]
Does the limit
\[ \lim_{p\rightarrow \infty} \dfrac{\operatorname{length}(R_p/I_p^{[p]})}{p^d}\]
exist? \end{inproblem}
In most known cases the Hilbert-Kunz multiplicity is a rational number, for example for toric rings (\cite{Watanabe}, \cite{Bruns}), monoid rings (\cite{Eto}),
monomial ideals and binomial hypersurfaces (\cite{Conca}), rings of finite Cohen-Macaulay type (\cite{Seibert}), for invariant rings for the action of a finite group on a polynomial ring (follows from \cite[Theorem 2.7 ]{Watanabeyoshida}), for
two-dimensional graded rings (\cite{Bre06 }, \cite{Trivedi2 }). In \cite{Brenner} it is shown that there exist also irrational Hilbert-Kunz multiplicities. There are many situations where the Hilbert-Kunz multiplicity is independent of the characteristic $p$. For example, for toric rings (\cite{Watanabe}) or invariant rings as above the Hilbert-Kunz multiplicity is independent of the characteristic of the base field at
least for almost all prime characteristics. But there are also examples where the Hilbert-Kunz multiplicity depends on the characteristic. We can ask when the limit of the Hilbert-Kunz multiplicity exists for $p\rightarrow\infty$. If so then this limit is a
candidate for the Hilbert-Kunz multiplicity in characteristic zero. This leads us to the question of whether a characteristic zero Hilbert-Kunz multiplicity could be defined directly. In all known cases this limit exists. H. Brenner, J. Li and C. Miller (\cite{BreLiMil}) have observed that in all known cases where
\[\lim_{p\rightarrow\infty}e_{HK}(R_p)\]
exists, this double limit can be replaced by the single limit of Problem 5. If the rings are of a more combinatorial nature, like for example monoid rings (\cite{Eto}, \cite{Watanabe}, \cite{Bruns}), Stanley-Reisner rings and binomial hypersurfaces (\cite{Conca}), we have positive answers to all these problems. Also, the proofs in these cases are easier compared to the methods of P. Monsky, C. Han, P. Teixeira (\cite{MonHan}, \cite{MonTei04 }, \cite{MonTei06 }) or the geometric methods of H. Brenner and V. Trivedi (\cite{Bre06 }, \cite{Bre07 }, \cite{TriFak03 }, \cite{Trivedi1 }, \cite{Trivedi2 }, \cite{Trivedi07 }). We want to generalize these results to a broad and unified concept of a combinatorial ring. For that we work with a new combinatorial structure, namely binoids (pointed monoids), which were introduced in the thesis of S. Boettger \cite{Simone}. A binoid $(N, +,0, \infty)$ is a monoid with an absorbing element $\infty$, which means that for every $a\in N$ we have $a+\infty=\infty+a=\infty$. This concept recovers among others monoid rings and Stanley-Reisner rings. In the first four sections we will describe some basic properties of binoids and related objects,
namely $N$-sets, smash products, exact sequences. Also we will give the definition and some properties on the dimension of binoids and their binoid algebras. In Section 5 we will define the Hilbert-Kunz function and multiplicity of binoids. This function is given by counting the elements in certain residue class binoids,
and not the vector space dimension (or length) of residue class rings. We define the Hilbert-Kunz function and Hilbert-Kunz multiplicity not only for binoids but also for $N_+$-primary ideals of $N$ and a finitely generated $N$-set in the following way:
Let $N$ be a finitely generated, semipositive binoid, $T$ a finitely generated $N$-set and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. Then we call the number
\[\hkf^N(\mathfrak{n}, T, q)=\hkf(\mathfrak{n}, T, q):=\# T/([q]\mathfrak{n}+T)\]
(where for a finite binoid we do not count $\infty$) the Hilbert-Kunz function of $\mathfrak{n}$ on the $N$-set $T$ at $q$. In particular, for $T=N$ and $\mathfrak{n}=N_+$ we have
\[ \hkf(N, q):=\hkf(N_+, N, q)=\#N/[q]N_+. \]
Note that this function is defined for all $q\in\mathbb N$, not only for powers of a fixed prime number. Also note that this residue construction is possible in the category of binoids, not in the category of monoids. The combinatorial Hilbert-Kunz multiplicity is the limit of this function divided by $q^{\dim N}$, provided this limit exists and provided that there is a reasonable notion of dimension. It turns out that the Hilbert-Kunz function of a binoid for $q=p^e$ is the same as the Hilbert-Kunz function of its binoid algebra over a field of characteristic $p$. Hence from here we have a chance to study the above-mentioned five problems, by just studying the binoid case. Since the Hilbert-Kunz function of binoids is given by just counting elements, not vector space dimensions,
this reveals the combinatorial nature of the problem more clearly. For example we have
\[\dim_K K[N]/K[N_+]^{[q]}=\dim_K K[N/[q]N_+]=\# N/[q]N_+. \]
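As a concrete instance of this counting identity, the following brute-force sketch (a hypothetical toy computation) evaluates $\# N/[q]N_+$ for the binoid $N$ of the Stanley-Reisner ring $K[x, y, z]/(xz)$, where the surviving monomials are those not divisible by $xz$:

```python
# #N/[q]N_+ for the binoid of K[x, y, z]/(xz): count monomials x^a y^b z^c
# with 0 <= a, b, c < q, excluding those with a >= 1 and c >= 1
# (which are sent to infinity by the relation x + z = infinity).
def hk_stanley_reisner(q):
    return sum(1 for a in range(q) for b in range(q) for c in range(q)
               if not (a >= 1 and c >= 1))

for q in range(1, 25):
    assert hk_stanley_reisner(q) == 2 * q**2 - q   # hence e_HK = 2, dim N = 2
```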
So the computation of the Hilbert-Kunz function and multiplicity of a binoid is in some sense easier than that of the standard Hilbert-Kunz function and multiplicity. The following is our main combinatorial theorem (Theorem \ref{f. g, s. p, c, r binoid}). \begin{intheorem}
Let $N$ be a finitely generated, semipositive, cancellative, reduced binoid and $\mathfrak{n}$ be an $N_+$-primary ideal of $N$. Then $e_{HK}(\mathfrak{n}, N)$ exists and is a rational number. \end{intheorem}
Note that this is a characteristic-free statement and that the existence does not follow from Monsky's theorem. From this result we can deduce positive answers to our five problems, see Theorem \ref{Miller conjecture} and Theorem \ref{existsrational} in the current setting. The strategy to prove these theorems is to reduce them step by step to the corresponding results for lower dimensional binoids fulfilling further properties. The case of primary ideals in a normal toric setting was given by Eto, Bruns, Watanabe and is true without normality. The components of a reduced torsion-free cancellative binoid are toric, and the Hilbert-Kunz multiplicity depends only on the components of maximal dimension. In the proof of this we also encounter the non-reduced situation, and for this the (strongly) exact sequences of $N$-sets are extremely useful. The reduction from the case
with torsion to the torsion-free case requires a deeper understanding of the torsion-freefication and its relation to the smash product of the torsion-free binoid and a
finite group. In this setting we need to replace positive (local algebras) by semipositive (corresponding to semilocal algebras), so we also have to generalize the Hilbert-Kunz theory on the ring side. \section{Binoids and their properties}
In this section we introduce (commutative) binoids and describe basic properties of binoids and binoid sets. For a general introduction to binoids we refer to \cite{Simone}, where most of the basic concepts were developed. We focus on material which is relevant for Hilbert-Kunz theory. A \emph{binoid} $(N, +,0, \infty)$ is a commutative monoid with an absorbing element $\infty$, given by the property $x +\infty= \infty$. We write $N^\bullet$ for the set $N\setminus \{\infty\}$. If $N^\bullet$ is a monoid, then $N$ is called an \emph{integral} binoid. The set of all units forms the \emph{unit group} of $N$, denoted by $N^\times$. The set of all non-units $N\setminus N^\times$ will be denoted by $N_+$. A binoid $N \neq \{ \infty\}$ is called \emph{semipositive} if $|N^\times|$ is finite
and \emph{positive} if $N^\times=\{0 \}$. From the corresponding concepts in monoid theory it is clear what a homomorphism of binoids is (it sends $\infty$ to $\infty$), what its kernel is, and what an ideal, a radical and a prime ideal are. Specific for an ideal $I \subseteq N$ in a binoid is that there is a homomorphism $N \rightarrow N/I$
to the \emph{residue class binoid}, where the ideal is sent to $\infty$ and which is injective elsewhere. For every ideal $I$ and $q \in \mathbb N_+$ we will denote by
\[ [q]I :=\langle q a\mid a\in I\rangle \]
the ideal generated by the set $\{qa\mid a\in I\}$, which we call the $q$th \emph{Frobenius sum} of the ideal. For a binoid homomorphism $\varphi: N\rightarrow M$ and an ideal $I\subseteq N$ we denote by $I+M$ the ideal generated by $\varphi(I)$
and call it the \emph{extended ideal}. Since $[q]:N\rightarrow N, x\mapsto qx$, is a binoid homomorphism, the ideal $[q]I$ can be considered as
the extended ideal under this homomorphism. The extension of ideals commutes with Frobenius sums. An ideal $\mathfrak{n}$ is called an $N_+$-\emph{primary ideal} if $\operatorname{rad}(\mathfrak{n})=N_+$. For an
$N_+$-primary ideal $\mathfrak{n}$ also its Frobenius sums $[q]\mathfrak{n}$ are $N_+$-primary. For a finitely generated semipositive binoid and an $N_+$-primary ideal $\mathfrak{n}$
the residue class binoid $N/\mathfrak{n}$ is a finite set. An element $a\in N$ is called \emph{nilpotent} if $na=a+\cdots+a=\infty$ for some $n\in\mathbb N$. The set of all nilpotent elements will be denoted by $\operatorname{nil}(N)$. We say that $N$ is \emph{reduced} if $\operatorname{nil}(N)=\{\infty\}$. We call the quotient binoid $N_{\operatorname{red}}:=N/\operatorname{nil}(N)$ the \emph{reduction} of $N$. An element $a \in N$ is a \emph{torsion} element in case $a=\infty$ or $na = nb$ for some $b\in N, b\neq a, n \geqslant 2 $. We say $N$ is \emph{torsion-free} if there are no other torsion elements in $N$ besides $\infty$, i.e. $na = nb$ implies $a = b$ for every $a, b \in N$ and $n\geqslant 1 $. A binoid is called \emph{torsion-free up to nilpotence} if $na = nb \neq \infty$ implies $a = b$ for
every $a, b \in N$ and $n \geqslant 1 $. For an integral binoid $N$ we denote by $\operatorname{diff} N$ the \emph{difference} binoid of $N$, which is a group binoid. If $N$ is integral and cancellative then there is an injection
$N \subseteq \operatorname{diff} N$. If $N \subseteq M \subseteq \operatorname{diff} N$ we say that $M$ is \emph{birational} over $N$. The \emph{spectrum} of $N$, denoted by $\operatorname{Spec} N$, is the set of all prime ideals of $N$. It can be made into a topological space (finite if $N$ is a finitely generated binoid). The \emph{combinatorial dimension} of a binoid $N$, denoted by $\dim N$, is the supremum of the lengths of strictly increasing chains of prime ideals of $N$. Without any further
condition the Krull dimension of a binoid algebra $K[N]$ over a field $K$ and $\dim N$ need not be the same. A binoid algebra is the monoid algebra where one additionally sets $T^\infty =0 $. \begin{lemma}
\label{dimension properties}
Let $N$ be a finitely generated binoid. If $N$ is integral and $I\neq \{\infty\}$ is an ideal of $N$, then $\dim N/I< \dim N$. If $\mathfrak{p}$ and $\mathfrak{q}$ are
different minimal prime ideals of $N$, then $\dim N/(\mathfrak{p}\cup \mathfrak{q})<\min\{\dim N/\mathfrak{p}, \dim N/\mathfrak{q}\}$. \end{lemma}
\begin{proof}
See \cite[Proposition 1.8.3 ]{Batsukhthesis} and \cite[Proposition 1.8.4 ]{Batsukhthesis}. \end{proof}
\section{$N$-sets}
A \emph{pointed set} $(S, p)$ is a set $S$ with a distinguished element $p\in S$. Let $N$ be a binoid.
\begin{definition}
Let $(S, p)$ be a pointed set together with an operation
\[ N\times S\longrightarrow S, \quad (n, s)\longmapsto n+s, \]
such that the following conditions are fulfilled:
\begin{enumerate}
\item For all $n, m\in N$ and $s\in S$: $(n+m)+s=n+(m+s)$.
\item For all $s\in S$: $0+s=s$.
\item For all $s\in S$: $\infty+s=p$.
\item For all $n\in N$: $n+p=p$.
\end{enumerate}
Then $S$ is called an $N$-\emph{set}. \end{definition}
Given a fixed binoid homomorphism $N \rightarrow M$ then $M$ is an $N$-set in a natural way. This applies in particular for $N$ itself and for residue class binoids. For an $N$-set $(S, p)$ we say that $a\in N$ is an \emph{annihilator} if $a+s=p$ for every $s\in S$. We denote the set of all annihilators by $\operatorname{Ann} S$, which is an ideal of $N$. The $N$-set $(S, p)$ is also an $(N/\mathfrak{a})$-set for every ideal $\mathfrak{a}\subseteq \operatorname{Ann} S$. If there exist finitely many elements
$s_1, \dots, s_k\in S$ such that every $s\in S$ can be written as $s=n+s_j$ for some $n\in N$ and some $j$,
then we say that $S$ is a \emph{finitely generated $N$-set}. We call the elements $s_j$ $N$-\emph{generators}. For a finite $N$-set $S$ we set
\[\# S=|S|-1 =|S\setminus\{p\}| , \]
so we do not count the distinguished point. For a family of $N$-sets $(S_i, p_i)$, $i\in I$, we define the \emph{pointed union} of $S_i, i\in I, $ by
\[\bigcupdot_{i\in I} S_i=(\biguplus_{i\in I}S_i)/\sim, \]
where $\biguplus$ is the disjoint union and $a\sim b$ if and only if $a=b$ or $a=p_j, ~b=p_k$ for some $j, k$. So the pointed union just contracts the points $p_j$ to one point. We write $S^{\cupdot r}=\bigcupdot_{i=1 }^r S$ and in particular
$N^{\cupdot r}$ for the $r$-folded pointed union of $N$ with itself. For a homomorphism $f:S\rightarrow T$ of $N$-sets we set
\[ \operatorname{im}(f)=\{t\in T \mid t = f(s) \text{ for some } s \in S\} \text{ and } \ker(f)=\{s\in S \mid f(s)=p_T\}. \]
For an $N$-subset $S\subseteq T$ we define the \emph{quotient} of $T$ by $S$ to be the $N$-set $(T \setminus S) \cup \{p\}$ and denote it by $T/S$,
so $S$ is contracted to a point. The following statements are analogous to statements about modules, for the easy proofs we refer to \cite{Batsukhthesis}. \begin{lemma}
\label{hom}
Let $N$ be a binoid, $(S, p_S), (T, p_T)$ be $N$-sets and $S'\subseteq S$ an $N$-subset. If we have an $N$-set homomorphism $\phi:S\rightarrow T$ with $\phi(S')=p_T$ then there exists a unique homomorphism $\tilde{\phi}:S/S'\rightarrow T$ such that the following diagram commutes. \[\begin{tikzpicture}[node distance=1.8 cm, auto]
\node (S) {$S$};
\node (T) [right of=S] {$T$};
\node (A) [below of=S] {$S/S'$};
\draw[->] (S) to node {$\phi$} (T);
\draw[->, dashed] (A) to node [swap]{$\tilde{\phi}$} (T);
\draw[->] (S) to node [swap] {$\varphi$} (A);
\end{tikzpicture}
\]
If $\phi$ is surjective, then $\tilde{\phi}$ is surjective. \end{lemma}
\begin{proof}
See \cite[Lemma 1.5.7 ]{Batsukhthesis}. \end{proof}
\begin{proposition}
\label{quotient to union}
Let $N$ be a finitely generated binoid, $J$ be an ideal of $N$ and $S\subseteq T$ be $N$-sets. Then $(T/S)/(J+(T/S))= T/(S\cup (J+T))$. \end{proposition}
\begin{proof}
See \cite[Proposition 1.5.8 ]{Batsukhthesis}. \end{proof}
\begin{lemma}
\label{canonical isomorphism}
Let $N$ be a binoid, $I\subseteq N$ an ideal of $N$ and $(T, p)$ be an $N$-set. If $r$ is some positive integer then we have a canonical $N$-set isomorphism
\[T^{\cupdot r}/(I+T^{\cupdot r})\cong (T/(I+T))^{\cupdot r}. \]
\end{lemma}
\begin{proof}
See \cite[Lemma 1.5.9 ]{Batsukhthesis}. \end{proof}
\begin{lemma}
\label{surj hom}
Let $N$ be a binoid, $(S, p_S), (T, p_T)$ $N$-sets and $I\subseteq N$ be an ideal of $N$. If we have a surjective $N$-set homomorphism $\phi:S\rightarrow T$ then there exists a canonical surjective $N$-set homomorphism
$\tilde{\phi}:S/(I+S)\longrightarrow T/(I+T)$. \end{lemma}
\begin{proof}
See \cite[Lemma 1.5.10 ]{Batsukhthesis}. \end{proof}
\begin{lemma}
\label{surj map for f. g. set}
Let $N$ be a binoid and $T$ be an $N$-set. Then
\begin{enumerate}
\item For $t_1, \dots, t_r\in T$ we can define an $N$-set homomorphism $\phi:N^{\cupdot r}\rightarrow T$. \item $t_1, \dots, t_r\in T$ is a generating system of $T$ over $N$ if and only if $\phi$ is a surjective homomorphism. \item $T$ is finitely generated over $N$ if and only if there exists a surjective $N$-set homomorphism $N^{\cupdot r}\rightarrow T$. \end{enumerate}
\end{lemma}
\begin{proof}
See \cite[Lemma 1.5.11 ]{Batsukhthesis}. \end{proof}
The smash product of binoids and $N$-sets corresponds to the tensor product of algebras and modules. \begin{definition}
Let $(S_i, p_i)_{i\in I}$ be a finite family of pointed sets and $\sim_\wedge$ the relation on $\prod_{i\in I} S_i$ given by
\[ (s_i)_{i\in I}\sim_\wedge (t_i)_{i\in I}:\Leftrightarrow s_i=t_i, \text{ for all } i\in I, \text{ or } s_j=p_j, t_k=p_k \text{ for some } j, k\in I. \]
Then the pointed set
\[\bigwedge_{i\in I}S_i:=(\prod_{i\in I} S_i)/\sim_\wedge \]
with distinguished point $[p_\wedge:=(p_i)_{i\in I}]$ is called the \emph{smash product} of the family $S_i, i\in I$. \end{definition}
\begin{definition}
Let $N$ be a binoid, $(S_i, p_i)_{i\in I}$ be a finite family of pointed $N$-sets and $\sim_{\wedge_N}$ the equivalence relation on
$\bigwedge_{i\in I} S_i$ generated by
\[ \cdots\wedge (n+s_i)\wedge \cdots \wedge s_j \wedge\cdots \sim_{\wedge_N}\cdots\wedge s_i\wedge\cdots\wedge (n+s_j) \wedge\cdots, \] for all $i, j\in I$ and $n\in N$. Then \[\bigwedge^{N}_{i\in I} S_i:=(\bigwedge_{i\in I} S_i)/\sim_{\wedge_N} \]
is called the
\textbf{smash product} of the family $(S_i)_{i\in I}$ over $N$. \end{definition}
\begin{proposition}
\label{pointed union and smash}
Let $N$ be a binoid and $S$, $T_i,1 \leqslant i\leqslant k$ be $N$-sets. Then
\[S\wedge_N (\bigcupdot_{i=1 }^k T_i)=\bigcupdot_{i=1 }^k (S\wedge_N T_i). \]
\end{proposition}
\begin{proof}
See \cite[Proposition 1.6.7 ]{Batsukhthesis}. \end{proof}
\begin{proposition}
\label{quotient}
Let $N$ be a binoid and $(S, p)$ an $N$-set. If $I$ is an ideal of $N$ then
\[(N/I)\wedge_N S\cong S/(I+S). \]
\end{proposition}
\begin{proof}
See \cite[Proposition 1.6.8 ]{Batsukhthesis}. \end{proof}
\begin{proposition}
\label{surj map to smash}
Let $N$ be a binoid and $J$ an ideal of $N$. If $S$ is a finitely generated $N$-set then there is a surjective homomorphism
$(N/J)^{\cupdot r}\rightarrow S\wedge_N N/J$, where $r$ is the number of generators of $S$. \end{proposition}
\begin{proof}
See \cite[Proposition 1.6.9 ]{Batsukhthesis}. \end{proof}
\begin{corollary}
\label{quotient over primary ideal and smash}
Let $N$ be a finitely generated, semipositive binoid and $J$ an $N_+$-primary ideal of $N$. If $S$ is a finitely generated $N$-set then $S\wedge_N N/J$ is finite. \end{corollary}
\begin{proof}
See \cite[Corollary 1.6.10 ]{Batsukhthesis}. \end{proof}
\begin{lemma}
\label{smash of quotients}
Let $M, N$ be binoids and $I\subseteq M, J\subseteq N$ be ideals. Then $(I\wedge N)\cup (M\wedge J)$ is an ideal of $M\wedge N$ and
\[(M\wedge N)/((I\wedge N)\cup (M\wedge J))\cong M/I\wedge N/J. \]
\end{lemma}
\begin{proof}
See \cite[Lemma 1.6.13 ]{Batsukhthesis}. \end{proof}
\begin{proposition}
\label{number of elements of smash}
Let $N, M$ be finite binoids, then
\[\#(N\wedge M)=\#N\cdot\#M. \]
\end{proposition}
\begin{proof}
See \cite[Proposition 1.6.2 ]{Batsukhthesis}. \end{proof}
\begin{theorem}
\label{dimension of smash product}
Let $M, N$ be nonzero binoids of finite dimension. Then
\[\dim M \wedge N=\dim M+ \dim N. \]
\end{theorem}
\begin{proof}
See \cite[Theorem 1.8.5 ]{Batsukhthesis}. \end{proof}
\section{Exact sequences for $N$-sets}
The concept of (strongly) exact sequences for $N$-sets is crucial to reduce statements on the Hilbert-Kunz function to lower dimensional binoids. \begin{definition}
Let $N$ be a binoid. A sequence
\[S_0 \xrightarrow{\;\phi_1 } S_1 \xrightarrow{\;\phi_2 } S_2 \xrightarrow{\;\phi_3 }\cdots\xrightarrow{\;\phi_n} S_n \]
of $N$-sets and $N$-set homomorphisms is called \emph{exact} if the image of each homomorphism is equal to the kernel of the next:
\[ \operatorname{im}\phi_k = \ker\phi_{k+1 }, \qquad 1 \leqslant k\leqslant n-1. \]
\end{definition}
\begin{definition}
Let $N$ be a binoid. An exact sequence
\[S_0 \xrightarrow{\;\phi_1 } S_1 \xrightarrow{\;\phi_2 } S_2 \xrightarrow{\;\phi_3 }\cdots\xrightarrow{\;\phi_n} S_n\]
is called \emph{strongly exact} if $\phi_k$ is injective on $S_{k-1 }\setminus \ker \phi_k$ for every $k$. \end{definition}
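For finite examples both conditions can be checked mechanically. Below is an illustrative Python sketch (the function name and the sample sequence are hypothetical, not from the text); pointed maps are given as dictionaries, and the checker verifies $\operatorname{im}\phi_k=\ker\phi_{k+1}$ together with injectivity of each $\phi_k$ outside its kernel:

```python
def is_strongly_exact(sets, maps, points):
    """sets: S_0, ..., S_n; maps: phi_1, ..., phi_n as dicts, with
    maps[k]: S_k -> S_{k+1}; points: the base point of each S_i.
    Checks im phi_k = ker phi_{k+1} and injectivity outside kernels."""
    n = len(maps)
    for k in range(n - 1):
        im = {maps[k][s] for s in sets[k]}
        ker = {s for s in sets[k + 1] if maps[k + 1][s] == points[k + 2]}
        if im != ker:          # exactness fails
            return False
    for k in range(n):
        ker = {s for s in sets[k] if maps[k][s] == points[k + 1]}
        outside = [maps[k][s] for s in sets[k] - ker]
        if len(set(outside)) != len(outside):   # not injective off the kernel
            return False
    return True

# a small strongly exact sequence  ∞ → A → B → C → ∞
S0, S1, S2, S3, S4 = {'p0'}, {'a', 'pA'}, {'a', 'b', 'pB'}, {'b', 'pC'}, {'p4'}
phi1 = {'p0': 'pA'}
phi2 = {'a': 'a', 'pA': 'pB'}
phi3 = {'a': 'pC', 'b': 'b', 'pB': 'pC'}
phi4 = {'b': 'p4', 'pC': 'p4'}
assert is_strongly_exact([S0, S1, S2, S3, S4],
                         [phi1, phi2, phi3, phi4],
                         ['p0', 'pA', 'pB', 'pC', 'p4'])
```

Note that in this example $-\#S_1+\#S_2-\#S_3=-1+2-1=0$, in accordance with Proposition \ref{general equation of exact seq} below.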
\begin{proposition}
\label{exact sequence}
Let $N$ be a binoid, $S\subseteq T$ and $U$ be $N$-sets. Then we have an exact sequence of $N$-sets
\[\infty\xrightarrow{\;\phi_1 } \{s\wedge u\mid s\wedge u=\infty \text{ in } T\wedge_N U\} \xrightarrow{\;\phi_2 } S\wedge_N U\xrightarrow{\;\phi_3 } T\wedge_N U\xrightarrow{\;\phi_4 } (T/S)\wedge_N U\xrightarrow{\;\phi_5 }\infty. \]
\end{proposition}
\begin{proof}
We have an exact sequence
$\infty \rightarrow S\hookrightarrow T\rightarrow T/S\rightarrow \infty$ and we can smash this sequence with $U$. Then we obtain a sequence
\[ \infty\xrightarrow{\;\phi_1 } \{s\wedge u\mid s\wedge u=\infty \text{ in } T\wedge_N U\} \xrightarrow{\;\phi_2 } S\wedge_N U\xrightarrow{\;\phi_3 }
T\wedge_N U\xrightarrow{\;\phi_4 } (T/S)\wedge_N U\xrightarrow{\;\phi_5 }\infty, \]
where $\phi_2 $ is the inclusion, $\phi_3 (s\wedge u)=s\wedge u\in T\wedge_N U$, $\phi_4 (t\wedge u)=[t]\wedge u$ and $\phi_5 ([t]\wedge u)=\infty$. We know by definition that
$\operatorname{im}\phi_1 =\{\infty\}= \ker\phi_{2 }$, $\operatorname{im}\phi_2 =\{s\wedge u\mid s\wedge u=\infty \text{ in } T\wedge_N U\}= \ker\phi_{3 }$, $\operatorname{im}\phi_3 =S\wedge_N U= \ker\phi_{4 }$, $\operatorname{im}\phi_4 =(T/S)\wedge_N U= \ker\phi_{5 }$. Hence the sequence is exact. \end{proof}
\begin{example}
\label{example of not strongly exact}
Let $N=\mathbb N^\infty$ and $S$ be an $N$-set with an operation given by $n+s=f^n(s)$, where $f:S \rightarrow S$ is a pointed map. Then we have an exact sequence
\[\infty\longrightarrow N_+\longrightarrow N\longrightarrow N/N_+\longrightarrow\infty \]
and we can smash this sequence with $S$ over $N$.
Under the identification $N_+\wedge_N S\cong S$ we get $\{n\wedge s\mid n\geqslant 1, n\wedge s=\infty \text{ in } N\wedge_N S\}\cong\{t\in S\mid f(t)=p\}=\ker f$ and
$N/N_+\wedge_N S\cong S/\operatorname{im} f$. So we have an exact sequence
\[\infty\longrightarrow \ker f\longrightarrow S\xrightarrow{\;f\;} S \longrightarrow S/\operatorname{im} f\longrightarrow\infty. \]
If $S=\{a, b, p\}$, $f(a)=f(b)=b$ and $f(p)=p$ then this sequence is not strongly exact. \end{example}
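This failure can be seen directly: $\ker f$ is trivial, yet $f$ identifies $a$ and $b$. A quick check (illustrative Python, mirroring the example):

```python
S = {'a', 'b', 'p'}
f = {'a': 'b', 'b': 'b', 'p': 'p'}

ker = {s for s in S if f[s] == 'p'}        # = {'p'}: the kernel is trivial
outside = sorted(S - ker)                  # = ['a', 'b']
images = [f[s] for s in outside]           # = ['b', 'b']

# f is not injective on S \ ker f, so the sequence is not strongly exact
assert len(set(images)) < len(images)
```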
\begin{proposition}
\label{strongly exact sequence}
Let $N$ be a binoid, $J\subseteq N$ an ideal and $S\subseteq T$ be $N$-sets. Then we have a strongly exact sequence of $N$-sets
\[\infty \xrightarrow{\phi_1 } \{s\wedge [a]\mid s\wedge [a] = \infty \text{ in } T\wedge_N N/J\}
\xrightarrow{\phi_2 } S\wedge_N N/J\xrightarrow{\phi_3 } T\wedge_N N/J \xrightarrow{\phi_4 } (T/S)\wedge_N N/J
\xrightarrow{\phi_5 } \infty \]
which is the same as
\[ \infty \xrightarrow{\phi_1 }(S\cap(J+T))/(J+S) \xrightarrow{\phi_2 }
S/(J+S)\xrightarrow{\phi_3 }T/(J+T)\xrightarrow{\phi_4 }(T/S)/(J+T/S)\xrightarrow{\phi_5 } \infty. \]
If $S=I$ is an ideal of $N$ and $T=N$, then we have the strongly exact sequence
\[\infty\longrightarrow (I\cap J)/(I+J) \longrightarrow I/(I+J) \longrightarrow N/J\longrightarrow (N/I)/(J+N/I)\longrightarrow\infty. \]
\end{proposition}
\begin{proof}
From Proposition \ref{exact sequence}, when $U=N/J$, we have an exact sequence
\[ \infty \xrightarrow{\phi_1 } \{s\wedge [a]\mid s\wedge [a]=\infty \text{ in } T\wedge_N N/J\} \xrightarrow{\phi_2 }
S\wedge_N N/J \xrightarrow{\phi_3 } T\wedge_N N/J \xrightarrow{\phi_4 } (T/S)\wedge_N N/J \xrightarrow{\phi_5 } \infty. \]
By Proposition \ref{quotient} we know that $S\wedge_N N/J\cong S/(J+S)$ and $T\wedge_N N/J\cong T/(J+T)$. Let
\[s\wedge [a]\in\{s\wedge [a]\mid s\wedge [a]=\infty \text{ in } T\wedge_N N/J\}. \]
Then the class of $a+s$ is $\infty$ in $T/(J+T)$. So we have $a+s\in J+T$, which means that $a+s\in S\cap (J+T)$. Hence we have
\[\{s\wedge [a]\mid s\wedge [a]=\infty \text{ in } T\wedge_N N/J\}\cong(S\cap(J+T))/(J+S). \]
Also, by Proposition \ref{quotient} and Proposition \ref{quotient to union}, we get
\[ T/S\wedge_N N/J\cong (T/S)/(J+T/S)=T/(S\cup(J+T)). \]
So we can rewrite the previous exact sequence as
\[\infty\xrightarrow{\phi_1 }(S\cap(J+T))/(J+S) \xrightarrow{ \phi_2 }S/(J+S)\xrightarrow{\phi_3 }T/(J+T)\xrightarrow{\phi_4 }(T/S)/(J+T/S)\xrightarrow{\phi_5 }\infty \]
or, equivalently,
\[ \infty\xrightarrow{\phi_1 }(S\cap(J+T))/(J+S) \xrightarrow{
\phi_2 }S/(J+S)\xrightarrow{\phi_3 }T/(J+T)\xrightarrow{\phi_4 }T/(S\cup(J+T))\xrightarrow{\phi_5 }\infty. \]
Here $\phi_1 (\infty)=\infty$, $\phi_2 $ is the inclusion (hence injective), $\phi_3 $ is injective on $S\setminus (J+T)$, $\phi_4 $ is surjective and bijective outside of its kernel, and $\phi_5 ([s])=\infty$. Hence the sequence is strongly exact. If $S=I$ and $T=N$ then we get
\[ \infty\xrightarrow{\;\phi_1 }(I\cap J)/(J+I) \xrightarrow{\;\phi_2 }I/(J+I)\xrightarrow{\;\phi_3 }N/J\xrightarrow{\;\phi_4 }(N/I)/(J+N/I)\xrightarrow{\;\phi_5 }\infty. \qedhere \]
\end{proof}
\begin{proposition}
\label{general equation of exact seq}
Let $N$ be a binoid and $\infty \rightarrow S_1 \rightarrow S_2 \rightarrow\cdots\rightarrow S_n\rightarrow \infty$ a strongly exact sequence of finite $N$-sets. Then
\[ \sum_{i=1 }^n (-1 )^i\# S_i=0. \]
\end{proposition}
\begin{proof}
By strong exactness we can write $S_i=K_i\uplus R_i\uplus \{p_i\}$, where $K_i=\ker\phi_{i+1 }\setminus\{p_i\}$, with maps
\[ \begin{aligned}
\phi_i:S_{i-1 }&\longrightarrow S_i, \\
R_{i-1 }&\xrightarrow{\text{bij}} K_i, \\
K_{i-1 }&\longrightarrow p_i, \\
p_{i-1 }&\longrightarrow p_i,
\end{aligned}
\]
where $1 \leqslant i \leqslant n+1, $
\[ S_0 =S_{n+1 }=\{\infty\}, \;p_0 =p_{n+1 }=\infty, \]
and
\[ R_0 =K_0 =K_1 =K_{n+1 }=R_{n+1 }=\varnothing. \]
Then we can conclude that
\[\sum_{i=1 }^n (-1 )^i\# S_i = \sum_{i=1 }^n (-1 )^i(|K_i|+|R_i|) = \sum_{i=1 }^n (-1 )^i(|K_i|+|K_{i+1 }|) = -| K_1 |+(-1 )^n |K_{n+1 }| = 0. \qedhere \]
\end{proof}
\begin{corollary}
\label{equality of e. s. corollary}
Let $N$ be a finitely generated, semipositive binoid and let $I$ be an ideal of $N$. If $J$ is an $N_+$-primary ideal of $N$ then
\[\# N/J+\# I\cap J/(I+J)=\# I/(I+J)+\# (N/I)/(J+N/I). \]
\end{corollary}
\begin{proof}
We want to apply Proposition \ref{general equation of exact seq} to the strongly exact sequence
\[\infty\longrightarrow I\cap J/(I+J) \longrightarrow I/(I+J) \longrightarrow N/J\longrightarrow (N/I)/(J+N/I)\longrightarrow\infty \]
from Proposition \ref{strongly exact sequence}. To do this we have to show that the involved $N$-sets are finite. We know that $N/J$ is a finite set. Also we know that $I$ is a finitely generated $N$-set, so by Proposition \ref{surj map to smash} we have a surjective homomorphism
$(N/J)^{\cupdot r}\rightarrow I\wedge_N N/J$. Hence $|I\wedge_N N/J |=|I/(J+I)|\leqslant r\cdot |N/J|$,
which is finite. So we can apply Proposition \ref{general equation of exact seq} and get the result. \end{proof}
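For $N=\mathbb N^\infty$ the equality of the corollary can be verified by direct counting. In the sketch below (illustrative Python; ideals of $\mathbb N^\infty$ are represented by a truncated range together with a symbol for $\infty$, and the helper names are hypothetical) we take $I=2+\mathbb N$ and $J=3+\mathbb N$, so that $I+J=5+\mathbb N$:

```python
INF = 'inf'
BOUND = 20  # truncation bound; safe since all quotients stabilize below it

def ideal(a):                     # the ideal a + ℕ in N = ℕ^∞
    return set(range(a, BOUND)) | {INF}

def quotient(S, T):               # S / T: collapse T ∩ S to the base point
    return (S - T) | {INF}

def count(S):                     # #S = |S| - 1 (base point not counted)
    return len(S) - 1

N = set(range(BOUND)) | {INF}
I, J = ideal(2), ideal(3)
IplusJ = ideal(2 + 3)             # I + J = 5 + ℕ

# #N/J + #(I∩J)/(I+J)  =  #I/(I+J) + #(N/I)/(J+N/I)
lhs = count(quotient(N, J)) + count(quotient(I & J, IplusJ))
rhs = count(quotient(I, IplusJ)) + count(quotient(quotient(N, I), J))
assert lhs == rhs
```

Here both sides equal $5$: the four quotients have $\#$-counts $3$, $2$, $3$ and $2$, respectively.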
\section{Algebras and Modules}
In this section we will assume that $K$ is a commutative ring (or just a field). We associate to a binoid a binoid algebra over $K$
essentially in the same way that monoids yield monoid algebras. \begin{definition}
The \emph{binoid algebra} of a binoid $N$ is the quotient algebra \[ KN/\langle X^\infty\rangle=:K[N], \]
where $KN$ is the monoid algebra of $N$ and $\langle X^\infty\rangle$ is the ideal in $KN$ generated by the element $X^\infty$. \end{definition}
So we can consider $K[N]$ as the set of all formal sums $\sum_{m\in M}r_m X^m, $ where $M\subseteq N^\bullet$ is finite, $r_m\in K$ and the multiplication is given by
\[ r_nX^n\cdot r_m X^m=\begin{cases}
(r_nr_m)X^{n+m}, &\text{ if } n+m\in N^\bullet, \\
0, &\text{ if } n+m=\infty. \end{cases} \]
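For instance, for $N=\mathbb N^\infty$ with the relation $n=\infty$ for $n\geqslant 3$ one has $K[N]\cong K[X]/(X^3)$, and the rule above says that monomials landing on $X^\infty$ contribute $0$. A minimal Python sketch (hypothetical representation: elements as exponent-to-coefficient dictionaries):

```python
T = 3  # truncation: n + m = ∞ once n + m ≥ T, so K[N] ≅ K[X]/(X^T)

def mul(f, g):
    """Multiply two elements of K[N], given as {exponent: coefficient}.
    Products with n + m = ∞ contribute 0 and are simply dropped."""
    h = {}
    for n, a in f.items():
        for m, b in g.items():
            if n + m < T:            # n + m ∈ N•
                h[n + m] = h.get(n + m, 0) + a * b
    return {n: c for n, c in h.items() if c != 0}

# (1 + X)(1 + X + X²) = 1 + 2X + 2X² + X³ ≡ 1 + 2X + 2X² mod X³
assert mul({0: 1, 1: 1}, {0: 1, 1: 1, 2: 1}) == {0: 1, 1: 2, 2: 2}
```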
For an $N$-set $(S, p)$ we define the $K[N]$-module $K[S]$ as the set of all formal sums $\sum_{s\in U}r_s X^s, $ where $U\subseteq S^\bullet=S \setminus \{p\}$ is finite, $r_s\in K$
and the module multiplication is given by
\[\Big(\sum_{m\in M}r_m X^m\Big)\cdot \Big(\sum_{s\in U}r_s X^s\Big)=\sum_{m\in M,\, s\in U}r_mr_s X^{m+s}, \]
where $X^{m+s}:=0$ whenever $m+s=p$.
Here $M$ is a finite subset of $N$ and $U$ is a finite subset of $S$. For an ideal $I \subseteq N$ we get an ideal
\[ K[I]:=\{\sum_{a\in J}r_a X^a\mid J\subseteq I \text{ finite subset }\} \] of $K[N]$. In this case we have the natural identification
$K[N/I]\cong K[N]/K[I]$. In the same way we have $K[S/T] \cong K[S]/K[T]$ for a sub-$N$-set $T\subseteq S$. For a finite $N$-set $S$ we have $\# S=\dim_K K[S]$. \begin{proposition}
\label{exact sequence of K algebra}
Let $N$ be a binoid,
\[\infty \longrightarrow S_1 \longrightarrow S_2 \longrightarrow\cdots\longrightarrow S_n\longrightarrow \infty \]
a strongly exact sequence of finite $N$-sets and $K$ a commutative ring. Then we have an exact sequence of $K[N]$-modules
\[ 0 \longrightarrow K[S_1 ] \longrightarrow K[S_2 ] \longrightarrow\cdots\longrightarrow K[S_n]\longrightarrow 0. \]
\end{proposition}
\begin{proof}
See \cite[Proposition 1.9.6 ]{Batsukhthesis}. \end{proof}
\begin{example}
Let $S=\{a, b, p\}$ be as in Example \ref{example of not strongly exact}. Then we have an exact sequence of $N$-sets
\[\infty\longrightarrow\ker f=\infty\xrightarrow{\;i\;\, } S\xrightarrow{\;f\;} S\longrightarrow S/\operatorname{im} f\longrightarrow\infty. \]
We have
\[ K[f](X^a-X^b)=X^{f(a)}-X^{f(b)}=X^b-X^b=0, \]
but $X^a-X^b\notin \operatorname{im} K[i]=0 $. So strong exactness is a necessary assumption in Proposition \ref{exact sequence of K algebra}. \end{example}
\begin{proposition}
\label{smash to tensor}
Let $N$ be a binoid and $S$, $T$ be $N$-sets. Then we have
\[K[S\wedge_N T]\cong K[S]\otimes_{K[N]} K[T]\]
and
\[K[S\cupdot T]\cong K[S]\oplus K[T]. \]
\end{proposition}
\begin{proof}
See Corollary 3.5.2 in \cite{Simone} and \cite[Proposition 1.9.8 ]{Batsukhthesis}. \end{proof}
\begin{proposition}
\label{K algebra of quotient}
Let $N$ be a binoid, $S$ be an $N$-set and $I$ be an ideal of $N$. Then we have \[K[S/(I+S)]\cong K[S]/(K[I]K[S]). \]
\end{proposition}
\begin{proof}
By Proposition \ref{quotient} we have $S/(I+S)\cong (N/I)\wedge_N S$, so $K[S/(I+S)]\cong K[(N/I)\wedge_N S]$. Hence from the compatibility of the $K$-functor
with the residue class construction, Proposition \ref{smash to tensor} and the general isomorphism $R/I \otimes_R V \cong V/IV$ we get
\[
\begin{aligned}
K[S/(I+S)]&\cong K[(N/I)\wedge_N S] \\
&\cong K[N/I]\otimes_{K[N]} K[S]\\
&\cong (K[N]/K[I])\otimes_{K[N]} K[S] \\
&\cong K[S]/(K[I]K[S]). \end{aligned} \qedhere \]
\end{proof}
\begin{lemma}
\label{semipositive binoid algebra}
Let $N$ be a semipositive binoid and $K$ a field whose characteristic $p$ does not divide $|N^\times|$. Then $K[N_+]$ is the intersection of finitely many maximal ideals. In particular, if $N$ is a positive binoid then $K[N_+]$ is a maximal ideal of $K[N]$. \end{lemma}
\begin{proof}
Because of the fundamental theorem for finite abelian groups we can write
\[ N^\times=\mathbb Z/(\alpha_1 )\times\cdots\times \mathbb Z/(\alpha_r). \]
By assumption $p$ does not divide $\alpha_1, \dots, \alpha_r$.
Hence
\[ \begin{aligned}
\overline{K}N^\times&\cong\overline{K}[X_1 ]/\big((X_1 -\xi_{11 })\cdots(X_1 -\xi_{1 \alpha_1 })\big)\otimes\cdots
\otimes\overline{K}[X_r]/\big((X_r-\xi_{r1 })\cdots(X_r-\xi_{r\alpha_r})\big)\\
&\cong\overline{K}^{|N^\times|},
\end{aligned} \]
where $\overline{K}$ is the algebraic closure of $K$ and $\xi_{ij}$ are the $\alpha_i$-th roots of unity. Hence the maximal ideals of $\overline{K}N^\times$ have the form
\[\mathfrak{m}_{i_1, \dots, i_r}=(X_1 -\xi_{1 i_1 }, \dots, X_r-\xi_{ri_r}). \]
So we have finitely many maximal ideals in $\overline{K} N^\times$ with this form. We also know that the intersection of all maximal ideals of $\overline{K} N^\times$ is equal to $\operatorname{nil} (\overline{K} N^\times)$ and this is $0 $. Under the homomorphism $KN^\times\hookrightarrow\overline{K}N^\times$ the preimage of a maximal ideal is maximal and therefore the intersection of the maximal ideals of
$KN^\times$ is 0 as well. Let $K[\pi]:K[N]\rightarrow K[N/N_+]$ be the homomorphism induced by $\pi:N\rightarrow N/N_+\cong (N^\times)^\infty$. Then $K[\pi]^{-1 }(\mathfrak{m}_i)$ is a maximal ideal
of $K[N]$, where $\mathfrak{m}_i$ is a maximal ideal of $K[N^\times]$. So
\[K[N_+]=K[\pi]^{-1 }(0 )=\bigcap_i K[\pi]^{-1 }(\mathfrak{m}_i). \qedhere \]
\end{proof}
\section{The Hilbert-Kunz function of a binoid}
In this section we introduce the Hilbert-Kunz function of a binoid. It is defined for a natural number $q$, an $N_+$-primary ideal $\mathfrak{n}$ and a finitely generated $N$-set $T$,
where $N$ fulfills some natural properties. The first lemma ensures that this function exists. \begin{lemma}
Let $N$ be a finitely generated, semipositive binoid, $T$ a finitely generated $N$-set and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. Then for every positive integer $q$ we have
\[\# T/([q]\mathfrak{n}+T)\leqslant r|N^\times|+D q^s, \] where $r$ is the number of generators of $T$, $s$ is the number of generators of $N_+$ and $D$ is some constant. \end{lemma}
\begin{proof}
Let $t_1, \dots, t_r$ be generators of $T$ and $n_1, \dots, n_s$ be generators of $N_+$. If $t\in T$ then either \[ t=u+t_i, \] where $u\in N^\times$, or \[t=a_1 n_1 +\cdots+a_sn_s+t_i, \] where $a_j\in \mathbb N,1 \leqslant i \leqslant r$. There are at most $r|N^\times|$ elements of the first type. In the second case we know by the primary property that there exist $d_i\in\mathbb N$ such that
$d_in_i\in \mathfrak{n}$ for $1 \leqslant i \leqslant s$. So if $a_j\geqslant qd_j$, for some $j$, then $t\in [q]\mathfrak{n}+T$, which means
\[ \# T/([q]\mathfrak{n}+T)\leqslant r|N^\times|+r\cdot q^s\cdot\prod_{i=1 }^s d_i. \qedhere \]
\end{proof}
\begin{definition}
Let $N$ be a finitely generated, semipositive binoid, $T$ a finitely generated $N$-set and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. Then we call the number \[\hkf^N(\mathfrak{n}, T, q)=\hkf(\mathfrak{n}, T, q):=\# T/([q]\mathfrak{n}+T)=|T/([q]\mathfrak{n}+T)|-1 \] the \emph{Hilbert-Kunz function} of $\mathfrak{n}$ on the $N$-set $T$ at $q$. \end{definition}
In particular, for $T=N$ and $\mathfrak{n}=N_+$ we have \[\hkf(N, q):=\hkf(N_+, N, q)=\#N/[q]N_+. \]
\begin{example}
\label{N^n}
Let $N=(\mathbb N^n)^\infty$. Then $\hkf(N, q)=\# N/[q]N_+=q^n$: since $N_+=(\mathbb N^n)^\infty\setminus \{0 \}$ we have $[q]N_+=\bigcup_{i=1 }^n (qe_i+N)$, where $e_i$ is the $i$-th standard vector of $\mathbb N^n$. Hence \[N/[q]N_+=\{(a_1, \dots, a_n)\in \mathbb N^n\mid 0 \leqslant a_i\leqslant q-1 \}\cup\{\infty\}, \] which means $\# N/[q]N_+=q^n$. \end{example}
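The count in this example can be reproduced by brute force: a tuple of $\mathbb N^n$ lies in $[q]N_+=\bigcup_i (qe_i+N)$ exactly when some coordinate is at least $q$. An illustrative Python sketch (the counter name is hypothetical):

```python
from itertools import product

def hkf_free(n, q, bound=None):
    """#N/[q]N_+ for N = (ℕ^n)^∞, counted directly: a tuple survives the
    quotient iff it lies in no qe_i + ℕ^n, i.e. iff every coordinate is < q."""
    bound = bound if bound is not None else 2 * q
    return sum(1 for a in product(range(bound), repeat=n)
               if all(ai < q for ai in a))

assert hkf_free(2, 3) == 3 ** 2
assert hkf_free(3, 2) == 2 ** 3
```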
\begin{lemma}
\label{HKF of annihilator}
Let $N$ be a finitely generated and semipositive binoid, $(T, p)$ a finitely generated $N$-set and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. Let $\mathfrak{a}\subseteq \operatorname{Ann} T$ be an ideal of $N$, where $\operatorname{Ann} T$ is the annihilator of $T$. Then $T$ is also an $N/\mathfrak{a}$-set and we have
\[\hkf^N(\mathfrak{n}, T, q)=\hkf^{N/\mathfrak{a}}((\mathfrak{n}\cup\mathfrak{a})/\mathfrak{a}, T, q). \]
\end{lemma}
\begin{proof}
Let us first show that $(\mathfrak{n}\cup\mathfrak{a})/\mathfrak{a}$ is an $(N/\mathfrak{a})_+$-primary ideal of $N/\mathfrak{a}$. If there is an element $[m]\in (N/\mathfrak{a})_+$,
where $m\in N$ then there exists $l\in\mathbb N$ such that $lm\in\mathfrak{n}$ and so $l[m]=[lm]\in(\mathfrak{n}\cup\mathfrak{a})/\mathfrak{a}$. Let us define the action $(N/\mathfrak{a})\times T\longrightarrow T$ by $(\overline{n}, t)\longmapsto \overline{n}+t:=n+t$. Then it is easy to see that this is well defined, that
\[ \begin{tikzpicture}[node distance=2 cm, auto]
\node (N) {$N\times T$};
\node (T) [right of=N] {$T$};
\node (N1 ) [below of=N] {$(N/\mathfrak{a})\times T$};
\node (T1 ) [below of=T] {$T$};
\draw[->] (N) to node {}(T);
\draw[->] (N) to node {}(N1 );
\draw[->] (N1 ) to node {}(T1 );
\draw[->] (T) to node {}(T1 );
\end{tikzpicture}
\]
commutes and so $T$ is an $N/\mathfrak{a}$-set. For every $q\in \mathbb N$ we have $[q]\mathfrak{n}+T=[q](\mathfrak{n}\cup\mathfrak{a})/\mathfrak{a}+T$,
as this is the image of $[q]\mathfrak{n}\times T$. Therefore
\[ \#T/([q]\mathfrak{n}+T)=\#T/([q](\mathfrak{n}\cup\mathfrak{a})/\mathfrak{a}+T). \qedhere \]
\end{proof}
\begin{lemma}
\label{HKF ineq}
Let $N$ be a finitely generated, semipositive binoid, $S$ and $T$ finitely generated $N$-sets and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. Suppose that we have a surjective $N$-set homomorphism $\phi:S\rightarrow T$. Then for all $q$
\[\hkf(\mathfrak{n}, S, q)\geqslant \hkf(\mathfrak{n}, T, q). \]
\end{lemma}
\begin{proof}
By definition $[q]\mathfrak{n}$ is an ideal of $N$. So by Lemma \ref{surj hom} we have a surjective homomorphism
$S/([q]\mathfrak{n}+S)\longrightarrow T/([q]\mathfrak{n}+T)$. Hence
\[\hkf(\mathfrak{n}, S, q)= |S/([q]\mathfrak{n}+S) |-1 \geqslant | T/([q]\mathfrak{n}+T) |-1 =\hkf(\mathfrak{n}, T, q). \qedhere \]
\end{proof}
\begin{definition}
Let $N$ be a finitely generated, semipositive binoid, $T$ a finitely generated $N$-set and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. Then the \emph{Hilbert-Kunz multiplicity} of $\mathfrak{n}$ on the $N$-set $T$ is defined by
\[ e_{HK}(\mathfrak{n}, T):=\lim_{q\rightarrow \infty} \frac{\hkf^N(\mathfrak{n}, T, q)}{q^{\dim N}}, \]
if this limit exists. In particular, for $T=N$ and $\mathfrak{n}=N_+$ we set $e_{HK}(N):=e_{HK}(N_+, N)$ and call it the \emph{Hilbert-Kunz multiplicity} of $N$. \end{definition}
Note that here we work with the combinatorial dimension of $N$. \begin{theorem}
\label{bound of HKF}
Let $N$ be a finitely generated, semipositive binoid and $\mathfrak{n}$ an $N_+$-primary ideal of $N$. If $T$ is a finitely generated $N$-set and $\hkf(\mathfrak{n}, N, q)\leqslant Cq^{\dim N}$ for every $q\in \mathbb N$ and some constant $C$ (in particular, if $e_{HK}(\mathfrak{n}, N)$ exists) then there exists $\alpha$ such that $\hkf(\mathfrak{n}, T, q)\leqslant \alpha q^{\dim N}$. \end{theorem}
\begin{proof}
By Lemma \ref{canonical isomorphism}, we have a canonical isomorphism $N^{\cupdot r}/([q]\mathfrak{n}+N^{\cupdot r})\longrightarrow (N/[q]\mathfrak{n})^{\cupdot r}$, which means that $\# ( N^{\cupdot r}/([q]\mathfrak{n}+N^{\cupdot r})) = \# ( (N/[q]\mathfrak{n})^{\cupdot r} ) = r\cdot \#(N/[q]\mathfrak{n})\leqslant r\cdot Cq^{\dim N}$. From Lemma \ref{surj map for f. g. set}, we have a surjective map $\phi:N^{\cupdot r}\longrightarrow T$, where $r$ is the number of generators of $T$. Also by Lemma \ref{surj hom}, we know that there exists a surjective homomorphism
\[\tilde{\phi}:N^{\cupdot r}/([q]\mathfrak{n}+N^{\cupdot r}) \longrightarrow T/([q]\mathfrak{n}+T). \]
So by Lemma \ref{HKF ineq} we have $\hkf(\mathfrak{n}, T, q)\leqslant \alpha q^{\dim N}$, where $\alpha=r\cdot C$. \end{proof}
\begin{lemma}
\label{multiplicity of smash product}
Let $M$ and $N$ be finitely generated, semipositive binoids. Then we have
\[\hkf(M\wedge N, q)=\hkf(M, q)\cdot \hkf(N, q). \]
\end{lemma}
\begin{proof}
We have
\[(M\wedge N)_+=(M\wedge N)\setminus (M^\times\wedge N^\times)=(M_+\wedge N)\cup (M\wedge N_+). \]
Take an element $q(m\wedge n)+m_1 \wedge n_1 \in [q](M\wedge N)_+$, where $m\wedge n\in (M\wedge N)_+$. Then $m\wedge n\in M_+\wedge N$ or $m\wedge n\in M\wedge N_+$, so $qm+m_1 \in [q]M_+$ or $qn+n_1 \in [q]N_+$, which means
\[[q](M\wedge N)_+\subseteq ([q]M_+\wedge N)\cup (M\wedge [q]N_+). \]
We can also easily check the other inclusion, so we have
\[[q](M\wedge N)_+= ([q]M_+\wedge N)\cup (M\wedge [q]N_+). \]
Hence, from this result and Lemma \ref{smash of quotients}, we get
\[(M\wedge N)/[q](M\wedge N)_+\cong (M/[q]M_+)\wedge (N/[q]N_+). \]
By assumption $M, N$
are semipositive, so we know $M/[q]M_+$ and $N/[q]N_+$ are finite binoids. Hence, by Proposition \ref{number of elements of smash}, we can conclude that
\[ \# \big((M\wedge N)/[q](M\wedge N)_+\big)=\# (M/[q]M_+) \cdot \# (N/[q]N_+). \qedhere \]
\end{proof}
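Since $(\mathbb N^m)^\infty\wedge(\mathbb N^n)^\infty\cong(\mathbb N^{m+n})^\infty$, the lemma can be checked directly for free binoids, where the Hilbert-Kunz function is a box count (illustrative Python; the helper is hypothetical):

```python
from itertools import product

def hkf_free(n, q):
    """hkf((ℕ^n)^∞, q) = q^n: count the tuples in ℕ^n with all coordinates < q."""
    return sum(1 for a in product(range(q), repeat=n))

m, n, q = 2, 3, 5
# (ℕ^m)^∞ ∧ (ℕ^n)^∞ ≅ (ℕ^{m+n})^∞, so hkf is multiplicative here
assert hkf_free(m + n, q) == hkf_free(m, q) * hkf_free(n, q)
```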
\begin{theorem}
\label{multiplicity of smash}
Let $M$ and $N$ be binoids such that $e_{HK}(M)$ and $e_{HK}(N)$ exist. Then $e_{HK}(M\wedge N)$ exists and
\[e_{HK}(M\wedge N)=e_{HK}(M)\cdot e_{HK}(N). \]
\end{theorem}
\begin{proof}
By definition
\begin{align*}
e_{HK}(M)\cdot e_{HK}(N)&=\lim_{q\rightarrow \infty} \frac{\hkf(M, q)}{q^{\dim M}}\cdot \lim_{q\rightarrow \infty} \frac{\hkf(N, q)}{q^{\dim N}}\\
&=\lim_{q\rightarrow \infty} \frac{\hkf(M, q)\cdot \hkf(N, q)}{q^{\dim M+\dim N}}\\
&\overset{\text{Lemma } \ref{multiplicity of smash product}}{=}\lim_{q\rightarrow \infty} \frac{\hkf(M\wedge N, q)}{q^{\dim M+\dim N}}\\
&\overset{\text{Theorem } \ref{dimension of smash product}}{=}\lim_{q\rightarrow \infty} \frac{\hkf(M\wedge N, q)}{q^{\dim M\wedge N}}\\
&=e_{HK}(M\wedge N). \end{align*}
\end{proof}
\section{Reduction to the integral and reduced case}
We want to show that the Hilbert-Kunz multiplicity of a finitely generated, semipositive, cancellative, reduced binoid exists by reducing it to the toric (not necessarily normal) case, i.e. to the case of an integral, cancellative, torsion-free binoid, which was proven by Eto (\cite{Eto}). If $N$ is such a binoid and $N \subseteq \hat{N} \subseteq \operatorname{diff} N$ its normalization,
and $ {\mathfrak n} $ a primary ideal with generators $f_1, \ldots , f_n$,
then the Hilbert-Kunz multiplicity $e_{HK} ( {\mathfrak n}, N)$ equals $ e_{HK} ( {\mathfrak n} \hat{N}, \hat{N} )$, where $ {\mathfrak n} \hat{N}$ denotes
the extended ideal (see Lemma \ref{finite birational extension} below).
\begin{lemma}
\label{reduction of torsion free}
Suppose that for all finitely generated, semipositive, cancellative, torsion-free and integral binoids of dimension less or equal to $d$ the Hilbert-Kunz multiplicity exists. Then
\begin{enumerate}
\item Let $N$ be a finitely generated, semipositive, cancellative, torsion-free, reduced binoid of dimension $d$ and let $\{\mathfrak{p}_1, \dots, \mathfrak{p}_s\}$ be all minimal prime ideals of dimension $d$. Then the Hilbert-Kunz multiplicity of $N$ exists, and
\[e_{HK}(N)=\sum_{i=1 }^s e_{HK}(N/\mathfrak{p}_i). \]
\item Let $N$ be a finitely generated, semipositive, cancellative binoid of dimension $d$ and torsion-free except for nilpotent elements. Then we have
\[\hkf(N, q)\leqslant Cq^{\dim N}, \]
for all $q$, where $C$ is some constant. \end{enumerate}
\end{lemma}
\begin{proof}
We have finitely many minimal prime ideals $\{\mathfrak{p}_1, \dots, \mathfrak{p}_n\}$ and $n\geqslant s$. From reducedness we have $\bigcap_{i=1 }^n \mathfrak{p}_i=\operatorname{nil}(N)=\{\infty\}$ by \cite[Corollary 2.3.10 ]{Simone}. If $f\notin [q]N_+$ and $f\in [q]N_+\cup \mathfrak{p}_i$ for all $i$,
then $f\in \bigcap_{i=1 }^n \mathfrak{p}_i=\{\infty\}$, which implies $N\setminus[q]N_+\subseteq \bigcup_{i=1 }^n N\setminus([q]N_+\cup\mathfrak{p}_i)$. Also by Proposition \ref{quotient to union} and the set identifications
\[ N\setminus[q]N_+\cong (N/[q]N_+)\setminus \{\infty\}, \]
\[ N\setminus([q]N_+\cup\mathfrak{p}_i)\cong (N/([q]N_+\cup\mathfrak{p}_i))\setminus \{\infty\}\]
we can conclude that
\[N\setminus[q]N_+\subseteq \bigcup_{i=1 }^n (N/\mathfrak{p}_i)\setminus[q](N/\mathfrak{p}_i)_+. \]
Since $N/\mathfrak{p}_i$ is finitely generated, semipositive, cancellative, torsion-free and integral by assumption we know that
\[\hkf(N/\mathfrak{p}_i, q)=e_{HK}(N/\mathfrak{p}_i)q^{d_i}+O(q^{d_i-1 }), \]
where $d_i:=\dim N/\mathfrak{p}_i$. Hence only the minimal prime ideals
with dimension $d_i=d$ are important and we can write
\begin{equation}
\# N/[q]N_+\leqslant \sum_{i=1 }^s e_{HK}(N/\mathfrak{p}_i)q^d+O(q^{d-1 }). \end{equation}
Let us denote $H:=N/[q]N_+$. Then
\[N/([q]N_+\cup\mathfrak{p}_i)=(H\setminus \mathfrak{p}_i)\cup\{\infty\}\]
and
\[ N/([q]N_+\cup\mathfrak{p}_i\cup\mathfrak{p}_j)=\big((H\setminus \mathfrak{p}_i)\cap (H\setminus \mathfrak{p}_j)\big)\cup\{\infty\}. \]
Hence from set theory we know that
\[ |H\setminus \{\infty\}|\geqslant |\bigcup_{i=1 }^n H\setminus\mathfrak{p}_i|\geqslant \sum_{i=1 }^n |H\setminus\mathfrak{p}_i|-\sum_{1 \leqslant i<j\leqslant n} |(H\setminus \mathfrak{p}_i)\cap (H\setminus \mathfrak{p}_j)|. \]
So from here we have
\[\# N/[q]N_+\geqslant \sum_{i=1 }^n \# N/([q]N_+\cup\mathfrak{p}_i)-\sum_{1 \leqslant i<j\leqslant n}\# N/([q]N_+\cup\mathfrak{p}_i\cup\mathfrak{p}_j). \]
But we know that $N/(\mathfrak{p}_i\cup\mathfrak{p}_j)$ is finitely generated, semipositive, cancellative, torsion-free and integral so by Proposition \ref{quotient to union} and the assumption we get
\begin{align*}
\sum_{1 \leqslant i<j\leqslant n}\# N/([q]N_+\cup\mathfrak{p}_i\cup\mathfrak{p}_j)&=\sum_{1 \leqslant i<j\leqslant n}\hkf(N/(\mathfrak{p}_i\cup\mathfrak{p}_j), q)\\
&=\sum_{1 \leqslant i<j\leqslant n}\Big(e_{HK}(N/(\mathfrak{p}_i\cup\mathfrak{p}_j))q^{d_{ij}}+O(q^{d_{ij}-1 })\Big),
\end{align*}
where $d_{ij}$ is the dimension of $N/(\mathfrak{p}_i\cup\mathfrak{p}_j)$. By Lemma \ref{dimension properties} we also know that $d_{ij}<\min\{d_i, d_j\}$. So we have
\begin{equation}
\# N/[q]N_+\geqslant \sum_{i=1 }^s e_{HK}(N/\mathfrak{p}_i)q^d+O(q^{d-1 }). \end{equation}
Hence from (1 ) and (2 ) we get
\[e_{HK}(N)=\lim_{q\rightarrow \infty} \dfrac{\hkf(N, q)}{q^d}=\sum_{i=1 }^s e_{HK}(N/\mathfrak{p}_i). \]
To prove the second statement, note that $\operatorname{nil}(N)$ is an ideal of $N$ and $N_{\operatorname{red}}:=N/\operatorname{nil}(N)$ is reduced. Let $a_1, \dots, a_s$ be generators of $\operatorname{nil}(N)$ and let $k_i\in \mathbb N$ be such that $k_ia_i=\infty$ but $(k_i-1 )a_i\neq \infty$, $1 \leqslant i\leqslant s$. We have a finite decreasing sequence
\[N=M_0 \longrightarrow M_1 \longrightarrow \cdots \longrightarrow M_{t-1 }\longrightarrow M_t=N_{\operatorname{red}}, \]
where $M_{k_0 +\cdots+k_i+j}=M_{k_0 +\cdots+k_i+j-1 }/((k_{i+1 }-j)a_{i+1 }),1 \leqslant j\leqslant k_{i+1 }-1,0 \leqslant i\leqslant s-1 $ and $k_0 =0 $. Here we have $2 (k_{i+1 }-j)a_{i+1 }=\infty$ in $M_{k_0 +\cdots+k_i+j}$. So in particular there exists a sequence such that
\[M_{i+1 }=M_i/(f_i),2 f_i=\infty \text{ in }M_i,0 \leqslant i\leqslant t-1 \]
and $M_0 =N, M_t=N_{\operatorname{red}}$. Hence there exists also such a sequence with this property of minimal length $l$. We will use induction on $l$ to prove the statement. For $l=0 $ we are in the reduced situation and the statement follows from the first part. So suppose that $l$ is arbitrary and that the statement is already proven for smaller $l$. Suppose
that a sequence as described is given. This means in particular that we have a strongly exact sequence
\[\infty \longrightarrow (f)+N \ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} N \longtwoheadrightarrow N/(f) \longrightarrow \infty, \]
where $f=f_0 $. Hence $M:=N/(f)$ has a decreasing sequence with the described property of length smaller than $l$,
so by the induction hypothesis we know that $\# M/[q]M_+\leqslant Dq^d$ for some constant $D$. By Corollary \ref{equality of e. s. corollary} we also have
\[\# N/[q]N_+ +\# \big((f)\cap [q]N_+\big)/((f)+[q]N_+)=\# ((f)+N)/((f)+[q]N_+)+\# M/([q]N_++M). \]
Since $2 f=\infty$, we know that $f$ annihilates the $N$-set $(f)+N$. So $(f)+N$ is a finitely generated $M$-set ($f$ is the only $M$-set generator of $(f)+N$).
Hence by Lemma \ref{surj map for f. g. set} we have a surjective homomorphism $M\longrightarrow (f)+N$, which implies
\[\# ((f)+N)/((f)+[q]N_+)\leqslant\# M/[q]M_+. \]
But we also know that $[q]M_+\subseteq [q]N_++M$, so we have that
\[\# M/([q]N_++M)\leqslant\# M/[q]M_+. \]
From here we can conclude
that
\[\# N/[q]N_+\leqslant\# M/[q]M_++\# M/([q]N_++M)\leqslant 2 Dq^d. \qedhere \]
\end{proof}
The same reduction steps hold when we allow torsion. \begin{lemma}
\label{reduction}
Suppose that for all finitely generated, semipositive, cancellative and integral binoids of dimension less or equal to $d$ the Hilbert-Kunz multiplicity exists. Then
\begin{enumerate}
\item Let $N$ be a finitely generated, semipositive, cancellative and reduced binoid of dimension $d$ and let $\{\mathfrak{p}_1, \dots, \mathfrak{p}_s\}$ be all minimal prime ideals of dimension $d$. Then the Hilbert-Kunz multiplicity of $N$ exists, and
\[e_{HK}(N)=\sum_{i=1 }^s e_{HK}(N/\mathfrak{p}_i). \]
\item Let $N$ be a finitely generated, cancellative and semipositive binoid of dimension $d$. Then
\[\hkf(N, q)\leqslant Cq^d, \]
for all $q$, where $C$ is some constant. \end{enumerate}
\end{lemma}
\begin{proof}
The proof is similar to the one of Lemma \ref{reduction of torsion free}. \end{proof}
The following lemma reduces in particular the case of a non-normal toric binoid to the normal toric case. A finite $N$-binoid $M$ is an $N$-binoid which is finitely generated as an $N$-set. \begin{lemma}
\label{finite birational extension}
Suppose that for all finitely generated, cancellative, semipositive, integral binoids of dimension less than $d$, the Hilbert-Kunz function is bounded by $Cq^{d-1 }$ for some constant $C$. Let $N$ be a finitely generated, semipositive, cancellative, integral binoid of dimension $d$ and let $\mathfrak{n}$ be an $N_+$-primary ideal of $N$. Let $M$ be a finite $N$-binoid which is birational over $N$. Suppose that $e_{HK}(\mathfrak{n}+M, M)$ exists. Then $e_{HK}(\mathfrak{n}, N)$ exists and it is equal to $e_{HK}(\mathfrak{n}+M, M)$. \end{lemma}
\begin{proof}
First note that $\mathfrak{n}+M$ is an $M_+$-primary ideal. By Proposition \ref{strongly exact sequence} applied to the $N$-sets $S=N, T=M$ and the ideal $J=[q]\mathfrak{n}$ we have the strongly exact sequence
\[\infty \longrightarrow (N\cap([q]\mathfrak{n}+M))/[q]\mathfrak{n} \longrightarrow N/[q]\mathfrak{n} \longrightarrow M/([q]\mathfrak{n}+M) \longrightarrow (M/N)/([q]\mathfrak{n}+M/N) \longrightarrow\infty. \]
By Proposition \ref{general equation of exact seq},
we know that
\[ \# (N\cap([q]\mathfrak{n}+M))/[q]\mathfrak{n} + \# M/([q]\mathfrak{n}+M) = \# N/[q]\mathfrak{n} +\#(M/N)/([q]\mathfrak{n}+M/N) \]
and we have
\[ \# M/([q]\mathfrak{n}+M) \leqslant \# N/[q]\mathfrak{n} +\#(M/N)/([q]\mathfrak{n}+M/N). \]
It is easy to check that there exists a common denominator $b\in N$ such that $b+M\subseteq N$, and that $I:=b+M\neq\{\infty\}$ is an ideal of $N$ which is isomorphic to $M$ as an $N$-set, because $N$ is an integral binoid. We also know that $I$ annihilates $M/N$,
so $M/N$ is an $N/I$-set and $N/I$ is a finitely generated, semipositive, cancellative binoid
and torsion-free up to nilpotence. So by Lemma \ref{HKF of annihilator} we have that
\[\# (M/N)/([q]\mathfrak{n}+M/N)=\hkf(\mathfrak{n}, M/N, q)=\hkf^{N/I}((I\cup\mathfrak{n})/I, M/N, q)\]
and by Lemma \ref{dimension properties} we know that $\dim N/I<d$, so from the assumptions and by Lemma \ref{reduction of torsion free} we get
\[\hkf^{N/I}((I\cup\mathfrak{n})/I, M/N, q)\leqslant C q^{d-1 }. \]
Hence we have
\begin{equation}
\# M/([q]\mathfrak{n}+M)-Cq^{d-1 } \leqslant \# N/[q]\mathfrak{n}. \end{equation}
By Corollary \ref{equality of e. s. corollary}, we deduce
\[\# N/[q]\mathfrak{n}\leqslant\# I/(I+[q]\mathfrak{n})+\# (N/I)/([q]\mathfrak{n}+N/I). \]
But we know that $\# (N/I)/([q]\mathfrak{n}+N/I)=\hkf(\mathfrak{n}, N/I, q)$ and $I$ annihilates $N/I$. Similarly we can show that
\[\hkf^N(\mathfrak{n}, N/I, q)=\hkf^{N/I}((I\cup\mathfrak{n})/I, N/I, q)\leqslant C'q^{d-1 }, \]
where $C'$ is some constant. Since $I\cong M$, we have that $\# I/(I+[q]\mathfrak{n})=\# M/([q]\mathfrak{n}+M)$, so we can conclude that
\[\# N/[q]\mathfrak{n}\leqslant\# M/([q]\mathfrak{n}+M)+C'q^{d-1 }. \]
From the last result and $(3 )$ we have that
\[e_{HK}(\mathfrak{n}, N)= \lim_{q \rightarrow \infty} \frac{ \# M/([q]\mathfrak{n}+M)}{q^d}. \]
Since extended ideals commute with Frobenius sums, we have $\# M/([q]\mathfrak{n}+M)=\# M/[q](\mathfrak{n}+M)$, which means that
\[e_{HK}(\mathfrak{n}, N)=e_{HK}(\mathfrak{n}+M, M). \]
\end{proof}
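To illustrate the lemma with a small example of ours (not taken from the text): $N=\langle 2,3 \rangle^\infty\subseteq M=\mathbb N^\infty$ is a finite birational extension of binoids. For $\mathfrak{n}=N_+$ the extended ideal is $\mathfrak{n}+M=\{2,3,4, \dots\}$, hence $[q]\mathfrak{n}+M=\{2q,2q+1,2q+2, \dots\}$ and
\[e_{HK}(\mathfrak{n}+M, M)=\lim_{q\rightarrow\infty}\frac{\# M/([q]\mathfrak{n}+M)}{q}=\lim_{q\rightarrow\infty}\frac{2q}{q}=2=e_{HK}(\mathfrak{n}, N), \]
as predicted by the lemma.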
\begin{example}
\label{simplicial}
Let $\operatorname{\triangle}$ be a simplicial complex on the vertex set $V$. The binoid associated to $\operatorname{\triangle}$ is given by $F(V)/I_{\operatorname{\triangle}}=:M_{\operatorname{\triangle}}$, where $I_{\operatorname{\triangle}}$ is
the ideal $\{ f\in F(V)\mid \operatorname{supp}(f)\notin \operatorname{\triangle}\}$ of the free binoid $F(V)\cong (\mathbb N^{|V|})^\infty$. In this case we have
$e_{HK}(M_{\operatorname{\triangle}})=k $,
where $k$ is the number of facets of maximal dimension. This rests on Theorem \ref{not integral, reduced} and the facts that simplicial binoids are positive, cancellative, torsion-free and reduced,
that faces correspond to prime ideals and facets to minimal prime ideals and that $M_{\operatorname{\triangle}}/ \mathfrak{p} \cong {\mathbb N}^{\operatorname{dim} M_{\operatorname{\triangle}} }$
for the minimal prime ideals, see \cite[Chapter 6 ]{Simone}. \end{example}
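For instance (a computation of ours, not from the text), if $\operatorname{\triangle}$ is the boundary of a triangle on the vertex set $V=\{x, y, z\}$, with facets $\{x, y\}, \{y, z\}, \{x, z\}$, then all three facets have maximal dimension, so $k=3 $ and
\[e_{HK}(M_{\operatorname{\triangle}})=3. \]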
\begin{lemma}
\label{not integral and torsion free up to nilpotence}
Let $N$ be a finitely generated, positive, cancellative binoid which is torsion-free up to nilpotence. Then $\hkf(N, q)\leqslant Cq^d$, where $C$ is some constant and $d=\dim N$. \end{lemma}
\begin{proof}
This follows from the toric case and Lemma \ref{reduction of torsion free} (2 ). \end{proof}
\begin{corollary}
\label{bound for N/I}
Let $N$ be a finitely generated, integral, positive, cancellative and torsion-free binoid and let $I\neq \{\infty\}$ be an ideal of $N$. Then $\hkf(N/I, q)\leqslant Cq^d$, where $C$ is some constant and $d=\dim N/I$. \end{corollary}
\begin{proof}
We know by \cite[Lemma 2.1.20 ]{Simone}, that $N/I$ is torsion-free up to nilpotence and finitely generated, positive, cancellative. So by Lemma \ref{not integral and torsion free up to nilpotence} we have the result. \end{proof}
\begin{lemma}
\label{ideal case}
Let $N$ be a finitely generated, positive, cancellative, integral and torsion-free binoid and let $I\neq \{\infty\}$
be an ideal of $N$. Then $e_{HK}(I)$ exists and is equal to $e_{HK}(N)$. \end{lemma}
\begin{proof}
By Corollary \ref{equality of e. s. corollary} we have
\[ \# N/[q]N_+ +\# I\cap [q]N_+/(I+[q]N_+)=\# I/(I+[q]N_+)+\# (N/I)/[q](N/I)_+ . \]
By Corollary \ref{bound for N/I} we know that $\hkf(N/I, q)\leqslant Dq^{d'}$, where $d'=\dim N/I<d$. Hence from $(2.11 )$ we get
\begin{equation}
\# I/(I+[q]N_+)\geqslant \hkf(N, q)-\# (N/I)/[q](N/I)_+\geqslant \hkf(N, q)-Dq^{d'}. \end{equation}
Let $\infty\neq f\in I$. Then we have a strongly exact sequence $\infty\rightarrow (f)\rightarrow I\rightarrow I/(f)\rightarrow\infty$. So we have
\[\# I/(I+[q]N_+)\leqslant \# (f)/((f)+[q]N_+)+\# (I/(f))/[q](I/(f))_+. \]
By Lemma \ref{HKF of annihilator}, the $N$-set $I/(f)$ is an $N/(f)$-set. By Corollary \ref{bound for N/I} we know that $\hkf(N/(f), q)\leqslant Dq^{d''}$, where $d''=\dim N/(f)<d$ and by Theorem \ref{bound of HKF} we have $\hkf(I/(f), q)\leqslant \alpha q^{d''}$. Hence we can conclude that
\begin{equation}
\# I/(I+[q]N_+)\leqslant \# (f)/((f)+[q]N_+)+\alpha q^{d''}. \end{equation}
But we know that $(f)=f+N$ is isomorphic to $N$ as an $N$-set so
\[\# (f)/((f)+[q]N_+)=\hkf(N, q). \]
Now by $(4 )$ and $(5 )$ we have
\[\begin{aligned}e_{HK}(N)&=\lim_{q\rightarrow\infty} \frac{\hkf(N, q)-Dq^{d'}}{q^d}\\
&\leqslant \lim_{q\rightarrow\infty} \frac{\# I/(I+[q]N_+)}{q^d}=e_{HK}(I)\\
&\leqslant\lim_{q\rightarrow\infty} \frac{\hkf(N, q)+\alpha q^{d''}}{q^d}=e_{HK}(N).\end{aligned}\]
\end{proof}
\section{Integral and cancellative binoids with torsion}
Let $N$ be a finitely generated, semipositive, cancellative and integral binoid. We know that
\[N\subseteq \operatorname{diff} N\cong(\mathbb Z^m\times T)^\infty=(\mathbb Z^m)^\infty\wedge T^\infty, \]
where smashing is over the trivial binoid $\mathbb T=\{0, \infty\}$. Here $T$ is the torsion part of the difference group, which is a finite commutative group, hence $T=\mathbb Z/k_1 \times\cdots \times \mathbb Z/k_l$. We will write elements $x\in N^\bullet$ as $x=f\wedge t$ with $f\in\mathbb Z^m$ and $t\in T$. This representation is unique. The relation $\sim_{\operatorname{tf}}$ on $N$ given by $a\sim_{\operatorname{tf}} b$ if $na=nb$ for some $n\geqslant 1 $ is a congruence and $N_{\operatorname{tf}}:=N/\sim_{\operatorname{tf}}$ is a torsion-free binoid which we call the \emph{torsion-freefication} of $N$. If $F=\{f\in (\mathbb Z^m)^\infty\mid f\wedge t\in N \text{ for some } t\in T\}$ then $N_{\operatorname{tf}}\cong F\subseteq (\mathbb Z^m)^\infty$. Hence we may assume $N\subseteq F\wedge T^\infty$, and $f\wedge t_1 \sim_{\operatorname{tf}} g\wedge t_2 $ if and only if there exists $n\in\mathbb N$ such that $n(f\wedge t_1 )=n (g\wedge t_2 )$, which means $f=g\in F$. We define the subsets $F\wedge t:=\{g\wedge t \mid g\in F\}$, $F_t:=\{f\in F \mid f\wedge t\in N\}$ and $N_t:=F_t\wedge t$.
\begin{proposition}
Let $F$ be a finitely generated, positive, cancellative, integral and torsion-free binoid. If $T$ is a finite group then
$e_{HK}(F\wedge T^\infty)=e_{HK}(F)\cdot |T|$. \end{proposition}
\begin{proof}
This follows from the toric case, Theorem \ref{multiplicity of smash} and the fact that for a finite group binoid the Hilbert-Kunz multiplicity is just the order of the group. \end{proof}
In the following we write $N\subseteq F\wedge T^\infty$, where $F\cong N_{\operatorname{tf}}$, $T$ is a finite group and $\operatorname{diff} N=\operatorname{diff} F\wedge T^\infty$. The strategy is to relate the Hilbert-Kunz multiplicity of $N$ with that of $F\wedge T^\infty$. \begin{lemma}
\label{correspondence of ideals}
Let $F$ be a binoid and $T^\infty$ be a group binoid. Then we have a bijection between ideals of $F$ and ideals of $F\wedge T^\infty$. The $F_+$-primary ideals correspond to $(F\wedge T^\infty)_+$-primary ideals. \end{lemma}
\begin{proof}
We have an inclusion
\[F \xrightarrow{\;i\;} F\wedge T^\infty, ~f \longmapsto f\wedge 0. \]
So we can consider for an ideal in $F$ its extended ideal in $F\wedge T^\infty$, in other words we use the map $\mathfrak{a}\mapsto \mathfrak{a}+F\wedge T^\infty$. For ideal generators $f_j\wedge t_j, j\in J$, in $F\wedge T^\infty$ we have
\[\langle f_j\wedge t_j\mid j\in J\rangle=\langle f_j\wedge 0 \mid j\in J\rangle\subseteq F\wedge T^\infty, \]
because $0 \wedge t_j$ are units. Hence the map $\mathfrak{b}\mapsto i^{-1 }(\mathfrak{b})$, where $\mathfrak{b}$ is an ideal of $F\wedge T^\infty$, is inverse to the extension map. Let $\mathfrak{p}\subseteq F$ be an $F_+$-primary ideal and $f\wedge t\in (F\wedge T^\infty)_+$. Then there exists $l\in\mathbb N$ such that $lf \in \mathfrak{p}$, so $l(f\wedge t)=lf\wedge lt=(lf\wedge 0 )+(0 \wedge lt) \in \mathfrak{p}+F\wedge T^\infty$, because $0 \wedge lt$ is a unit. Now let $\mathfrak{q}$ be an $(F\wedge T^\infty)_+$-primary ideal and $f\in F_+$. Then $f\wedge 0 \in F\wedge T^\infty$ is not a unit, so there exists $m\in \mathbb N$ such that $m(f\wedge 0 )=mf\wedge 0 \in \mathfrak{q}$. Hence $mf \in i^{-1 }(\mathfrak{q})$. \end{proof}
\begin{lemma}
\label{finite birational ext}
Let $N$ be a finitely generated, semipositive, cancellative, integral binoid. If $F=N_{\operatorname{tf}}$ and $T$ is the torsion subgroup of $\operatorname{diff} N$ then $F\wedge T^\infty$ is finite and birational over $N$. \end{lemma}
\begin{proof}
We can assume that $\operatorname{diff} N=(\mathbb Z^d)^\infty\wedge T^\infty$, where $d=\dim N$, $T=\{t_1, \dots, t_m\}$ is a finite abelian group. We know $F\wedge T^\infty \subseteq \operatorname{diff} N$, so it is clear that $F\wedge T^\infty$ is birational over $N$. If $f\wedge t\in (F\wedge T^\infty)^\bullet$ then there exists $t'\in T$ such that $f\wedge t'\in N$ and
\begin{equation}
m(f\wedge t)=mf\wedge mt=mf\wedge 0 =mf\wedge mt'=m(f\wedge t')\in N,
\end{equation}
which means that these elements satisfy a pure integral equation over $N$. Let $f_i\wedge t_i$, $1 \leqslant i \leqslant k$, be the generators of $N$; then $\{f_i\wedge t_j \mid 1 \leqslant i \leqslant k,\, 1 \leqslant j \leqslant m\}$ is an $N$-generating system of $F\wedge T^\infty$. This means that $F\wedge T^\infty$ is finite over $N$. \end{proof}
Note that the notion of birationality makes sense even though the corresponding binoid algebras are not integral in general. \begin{lemma}
\label{F smash T}
Let $F$ be a finitely generated, positive, cancellative, torsion-free, integral binoid, $T^\infty$ be a torsion group binoid and $\mathfrak{p}$ be an $F_+$-primary ideal of $F$. Then
\[\hkf^{F\wedge T^\infty}(\mathfrak{p}+F\wedge T^\infty, F\wedge T^\infty, q)=|T|\cdot \hkf^F(\mathfrak{p}, F, q). \]
In particular, the Hilbert-Kunz multiplicity exists and
\[e_{HK}^{F\wedge T^\infty}(\mathfrak{p}+F\wedge T^\infty, F\wedge T^\infty)=|T|\cdot e_{HK}^F(\mathfrak{p}, F). \]
\end{lemma}
\begin{proof}
For every $q\in\mathbb N$ we have by Lemma \ref{smash of quotients}
\[F\wedge T^\infty/([q]\mathfrak{p}\wedge T^\infty\cup F\wedge (\infty))=F\wedge T^\infty/[q]\mathfrak{p}\wedge T^\infty\cong F/[q]\mathfrak{p}\wedge T^\infty, \]
and $[q]\mathfrak{p}\wedge T^\infty=[q]\mathfrak{p}+F\wedge T^\infty$. Hence
\[ F\wedge T^\infty/([q]\mathfrak{p}+F\wedge T^\infty)\cong F/[q]\mathfrak{p}\wedge T^\infty, \]
which means
\[ \hkf^{F\wedge T^\infty}(\mathfrak{p}+F\wedge T^\infty, F\wedge T^\infty, q)=|T|\cdot \hkf^F(\mathfrak{p}, F, q). \]
The existence of the Hilbert-Kunz multiplicity follows from the toric case. \end{proof}
Let $N$ be a binoid with $N\subseteq F\wedge T^\infty$, where $F\cong N_{\operatorname{tf}}$, $T$ a finite group. Then we have the following diagram. \[\begin{tikzpicture}[node distance=2 cm, auto]
\node (F) {F};
\node (FT) [below of=F] {$F\wedge T^\infty$};
\node (N) [left of=FT] {$N$};
\draw[->] (N) to node {} (FT);
\draw[->] (F) to node [swap] {$i$} (FT);
\end{tikzpicture}
\]
where $i(f)=f\wedge 0 $. \begin{theorem}
\label{HKM with torsion binoid}
Let $N$ be a finitely generated, semipositive, cancellative, integral binoid and $\mathfrak{n}$ be an $N_+$-primary ideal of $N$. Then
\[e_{HK}^N(\mathfrak{n}, N)=|T|\cdot e_{HK}^F(\mathfrak{m}, F), \]
where $F=N_{\operatorname{tf}}$, $T$ is the torsion subgroup of $\operatorname{diff} N$ and $\mathfrak{m}=i^{-1 }(\mathfrak{n}+F\wedge T^\infty)$ is an ideal of $F$. \end{theorem}
\begin{proof}
We know that $\mathfrak{n}+F\wedge T^\infty$ is a primary ideal and by Lemma \ref{correspondence of ideals} that $\mathfrak{m}$ is an $F_+$-primary ideal of $F$. So by Lemma \ref{F smash T} we know that $e_{HK}(\mathfrak{m}+F\wedge T^\infty, F\wedge T^\infty)$ exists and is equal to $|T|\cdot e_{HK}^F(\mathfrak{m}, F)$.
Hence, by Lemma \ref{finite birational extension}, $e_{HK}(\mathfrak{n}, N)$ exists and is equal to
\[ e_{HK}^{F\wedge T^\infty}(\mathfrak{n}+F\wedge T^\infty, F\wedge T^\infty)=|T|\cdot e_{HK}^F(\mathfrak{m}, F). \]
\end{proof}
\begin{theorem}
\label{f. g, s. p, c, r binoid}
Let $N$ be a finitely generated, semipositive, cancellative, reduced binoid and $\mathfrak{n}$ be an $N_+$-primary ideal of $N$. Then $e_{HK}(\mathfrak{n}, N)$ exists and is a rational number.
\end{theorem}
\begin{proof}
This follows directly from Lemma \ref{reduction} and Theorem \ref{HKM with torsion binoid}.
\end{proof}
\begin{example}
The binoid $\langle x, y\rangle/ax=ay$ (for $a\in \mathbb N_+$) can be realized as $\langle (1 \wedge 0 ), (1 \wedge 1 )\rangle\subseteq \mathbb N\wedge (\mathbb Z/a)^\infty$.
In this case, the torsion-freefication is $\mathbb N$ and the Hilbert-Kunz multiplicity is $a$ by Theorem \ref{HKM with torsion binoid},
since $e_{HK}(\mathbb N^\infty)=1 $ by Example \ref{N^n} and $\hkf((\mathbb Z/a)^\infty, q)=|\mathbb Z/a|=a$.
\end{example}
\begin{example}
Let $a=(2,1 ), b=(3,0 )\in (\mathbb N \times \mathbb Z/2 )^\infty$ be the generators of a binoid $N$. This binoid is not torsion-free, since $(6,1 )=3 a$ and $(6,0 )=2 b$ are distinct elements of $N$ with $2 (6,1 ) = 2 (6,0 )$.
The binoid is positive, its normalization $\hat{N} = (\mathbb N \times \mathbb Z/2 )^\infty$ is only semipositive.
It is not difficult to see that $N_{\operatorname{tf}}\cong\mathbb N^\infty\setminus \{1 \}$ and the torsion group is $T=\mathbb Z/2 $. So by Theorem \ref{HKM with torsion binoid},
we have
\[e_{HK}(N)=|T|\cdot e_{HK}(N_{\operatorname{tf}})=2 \cdot 2 =4. \]
\end{example}
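The value $e_{HK}(N_{\operatorname{tf}})=2 $ used above can also be checked by hand (a sketch of ours): $N_{\operatorname{tf}}\cong\langle 2,3 \rangle^\infty$ and, for $q\geqslant 2 $, $[q]\langle 2,3 \rangle_+=\{2q\}\cup\{2q+2,2q+3, \dots\}$, so
\[\hkf(N_{\operatorname{tf}}, q)=\#\{0,2,3, \dots,2q-1,2q+1 \}=2q
\quad\text{and}\quad
e_{HK}(N_{\operatorname{tf}})=\lim_{q\rightarrow\infty}\frac{2q}{q}=2. \]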
\begin{example}
Let $N=\langle X, Y, Z\rangle/4 X+12 Y=16 Z$ be a binoid. From \cite[Lemma 2.2.9 ]{Batsukhthesis} we have an injective binoid homomorphism $\phi:N\rightarrow \big(\mathbb Z^2 \times \mathbb Z/16 \mathbb Z\big)^\infty$.
So $\phi(N)$ has generators $(16,0,0 ), (0,16,0 ), (4,12,1 )$. If we choose the new generators $u=(4, -4,1 ), ~v=(0,16,0 ), ~w=(0,0,4 )$, then $\phi(X)=4 u+v-w$, $\phi(Y)=v$, $\phi(Z)=u+v$.
Hence the difference group of our binoid is isomorphic to $\mathbb Z^2 \times \mathbb Z/4 $.
The torsion-freefication of $N$ is $F=\langle X', Y', Z'\rangle/X'+3 Y'=4 Z'$ with $e_{HK}(F)=\frac{13 }{4 }$ by the toric case,
so $e_{HK}(N)=13 $ by Theorem \ref{HKM with torsion binoid}.
\end{example}
\section{Hilbert-Kunz function of binoid rings}
We finally want to relate the Hilbert-Kunz function of a binoid $N$ to the Hilbert-Kunz function of its binoid algebra $K[N]$. However, $K[N]$ is not a local ring. If $N$ is positive, then $K[N]$ contains the unique combinatorial maximal ideal $K[N_+]$ and we work with the localization $K[N]_{K[N_+]}$.
In this setting, by a dimension count and by Proposition \ref{K algebra of quotient}, we immediately get
\[ HKF^N\! (\mathfrak{n}, S, q) \! =\! \# \big(S/(S+[q]\mathfrak{n})\big) \! =\! \dim_K\! K[S]/(K[S]\cdot K[\mathfrak{n}]^{[q]}) \! =\! HKF^{K[N]}(K[\mathfrak{n}], K[S], q). \]
So the rationality results of the previous sections translate directly to results on the Hilbert-Kunz multiplicity of the localization of a binoid algebra.
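As a simple sanity check of this identity (our example): for $N=\mathbb N^\infty$, $S=N$ and $\mathfrak{n}=N_+$ we have $K[N]=K[t]$ and $K[\mathfrak{n}]^{[q]}=(t^q)$, so both sides equal
\[\hkf^N(N_+, N, q)=\#\{0,1, \dots, q-1 \}=q=\dim_K K[t]/(t^q). \]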
This translation is more involved for semipositive binoids. We have seen in Lemma \ref{semipositive binoid algebra}
that $K[N_+]$ is the intersection of finitely many maximal ideals $\mathfrak{m}_1, \dots, \mathfrak{m}_r$.
Then $T=(K[N]\setminus\mathfrak{m}_1 )\cap\cdots\cap (K[N]\setminus\mathfrak{m}_r)$
is a multiplicatively closed subset of $K[N]$ and the localization $K[N]_T$ is a semilocal ring with maximal ideals
$\mathfrak{m}_i \cdot K[N]_T$ and $K[N_+]\cdot K[N]_T=\bigcap_{i=1 }^r \mathfrak{m}_i\cdot K[N]_T$.
Now, for a semilocal Noetherian ring $R$ containing a field of positive characteristic, with Jacobson radical $\mathfrak{m}=\bigcap_{i=1 }^r \mathfrak{m}_i$, we can define the Hilbert-Kunz function as before.
For a finite $R$-module $M$ and an $\mathfrak{m}$-primary ideal $\mathfrak{n}$ we set
\[ \hkf^R(\mathfrak{n}, M, q)=\operatorname{length}(M/\mathfrak{n}^{[q]}M). \]
If $J$ is an ideal in a Noetherian ring $R$ with $V(J)=\{\mathfrak{m}_1, \dots, \mathfrak{m}_r\}$, then
\[ \operatorname{length}^R(M/JM)=\operatorname{length}^{R_T}(M_T/JM_T) \]
for $T=\bigcap_{i=1 }^r (R\setminus \mathfrak{m}_i)$. In this way we consider $K[N_+]$-primary ideals in $K[N]$ for a semipositive binoid $N$,
and we write $\hkf^{K[N]}(K[\mathfrak{n}], K[S], q)$ instead of $\hkf^{K[N]_T}(K[\mathfrak{n}]\cdot K[N]_T, K[S]_T, q)$.
\begin{theorem}
\label{HK binoid ring}
Let $K$ be a field of characteristic $p$, $N$ a finitely generated, semipositive binoid, $S$ an $N$-set, $\mathfrak{n}$ an $N_+$-primary ideal and $q=p^e$. Then we have
\[\hkf^N(\mathfrak{n}, S, q)=\hkf^{K[N]}(K[\mathfrak{n}], K[S], q). \]
\end{theorem}
\begin{proof}
We know, from Proposition \ref{K algebra of quotient}, that
\[K[S/(S+[q]\mathfrak{n})]\cong K[S]/(K[S]\cdot K[\mathfrak{n}]^{[q]}), \] and by a dimension count we can conclude that
\[\# (S/(S+[q]\mathfrak{n}))=\dim_K K[S/(S+[q]\mathfrak{n})]=\dim_K K[S]/(K[S]\cdot K[\mathfrak{n}]^{[q]}). \]
\end{proof}
\begin{theorem}
\label{HKM binoid ring and binoid}
Let $K$ be a field of characteristic $p$, $N$ a finitely generated, semipositive binoid and $\mathfrak{n}$ an $N_+$-primary ideal. Suppose that $\dim N=\dim K[N]$ and that $e_{HK}(\mathfrak{n}, N)$ exists. Then
\[e_{HK}^{K[N]}(K[\mathfrak{n}], K[N])=e_{HK}(\mathfrak{n}, N)\]
and it is independent of the (positive) characteristic of $K$.
\end{theorem}
\begin{proof}
If we take $S=N$ in Theorem \ref{HK binoid ring} then we have
\[\hkf^{K[N]}(K[\mathfrak{n}], K[N], q)=\hkf^N(\mathfrak{n}, N, q). \]
By assumption (with $q=p^e$ and $d=\dim N$)
\[e_{HK}(\mathfrak{n}, N)=\lim_{q\rightarrow \infty} \! \dfrac{\hkf^N(\mathfrak{n}, N, q)}{q^d}=\lim_{e\rightarrow \infty} \! \dfrac{\hkf^{K[N]}(K[\mathfrak{n}], K[N], q)}{q^d}=e_{HK}^{K[N]}(K[\mathfrak{n}], K[N]). \]
\end{proof}
\begin{theorem}[Miller conjecture for cancellative binoid rings]
\label{Miller conjecture}
Let $K$ be a field of characteristic $p$, $N$ a cancellative binoid, $S$ an $N$-set and $\mathfrak{n}$ an $N_+$-primary ideal.
Suppose that $\dim N=\dim K[N]$ and that $e_{HK}(\mathfrak{n}, N)$ exists. Then
\[\lim_{p\rightarrow\infty} \frac{\hkf^{(\mathbb Z/p)[N]}((\mathbb Z/p)[\mathfrak{n}], (\mathbb Z/p)[S], p)}{p^{\dim N}}\] exists and equals
\[\lim_{p\rightarrow\infty} e_{HK}^{(\mathbb Z/p)[N]}((\mathbb Z/p)[\mathfrak{n}], (\mathbb Z/p)[S]). \]
\end{theorem}
\begin{proof}
This follows from the identity $\hkf^{(\mathbb Z/p)[N]}((\mathbb Z/p)[\mathfrak{n}], (\mathbb Z/p)[S], p)=\hkf^N(\mathfrak{n}, S, p)$ from Theorem \ref{HK binoid ring} and the fact that the limit defining the Hilbert-Kunz multiplicity exists along all $q\in\mathbb N$, not only along the powers of a fixed prime.
\end{proof}
\begin{theorem}
\label{existsrational}
Let $K$ be a field of characteristic $p$, $N$ be a finitely generated, semipositive, cancellative, reduced binoid and $\mathfrak{n}$ be an $N_+$-primary ideal of $N$. Then
\[e_{HK}^{K[N]}(K[\mathfrak{n}], K[N])\]
exists, is independent of the characteristic of $K$ and is a rational number.
\end{theorem}
\begin{proof}
By Theorem \ref{f. g, s. p, c, r binoid} we know that $e_{HK}(\mathfrak{n}, N)$ exists and that it is a rational number. So by Theorem \ref{HKM binoid ring and binoid} we have the result.
\end{proof}
\section{Related Work}
\label{sec:background}
There has been a growing interest in flaky tests in recent years, especially after the publication of Martin Fowler's article on the potential issues with non-deterministic tests \cite{fowler2011eradicating}, and Luo et al.'s \cite{luo2014empirical} seminal study. To the best of our knowledge, there have been three reviews of studies on test flakiness: two systematic literature reviews, one by Zolfaghari et al. and the other by Zheng et al. \cite{zolfaghari2020root, zheng2021research}, and a survey by Parry et al. \cite{parry2021survey}. There have also been some developer surveys that aimed to understand how developers perceive and deal with flaky tests in practice. A developer survey conducted by Eck et al. \cite{eck2019understanding} with 21 Mozilla developers studied the nature and the origin of 200 flaky tests that had been fixed by those same developers. The survey looked into how those tests were introduced and fixed, and found that there are 11 main causes for those 200 flaky tests (including concurrency, async wait and test order dependency). It was also pointed out that flaky tests can be the result of issues in the production code (the code under test) rather than in the test. The authors also surveyed another 121 developers about their experience with flaky tests. It was found that flakiness is
perceived as a significant issue by the vast majority of developers they surveyed. The study reported that developers found flaky tests to have a wider impact on the reliability of the test suites. As part of their survey with developers, the authors also conducted a mini-multivocal review study to collect evidence from the literature on the challenges of dealing with flaky tests. However, this was a small, targeted review that addressed only those challenges, covering only a few (19) articles. A recent developers' survey \cite{habchi2022qualitative} echoed the results found in Eck et al., noting that flakiness can result from interactions between the system components, the testing infrastructure, and other external factors. Ahmad et al. \cite{ahmad2021empirical} conducted a similar survey with developers aiming to understand developers' perception of test flakiness (e.g., how developers define flaky tests, and what factors are known to impact the presence of flaky tests). The study identified several key factors that are believed to be impacted by the presence of test flakiness, such as software product quality and the quality of the test suite. The systematic review by Zolfaghari et al. \cite{zolfaghari2020root} identified what has been done so far on test flakiness in general, and presented points for future research directions. The authors identified the main methods behind approaches for detecting flaky tests, methods for fixing flaky tests, empirical studies on test flakiness and root causes of flakiness, and listed tools for detecting flaky tests. The study suggested investigating the building of a taxonomy of flaky tests that covers all dimensions (causes, impact, detection), formal modelling of flaky tests, setting standards for flakiness-free testing, the application of AI-based approaches to the problem, and automated flaky test repair. Zheng et al.
\cite{zheng2021research} also discussed current trends and research progress in flaky tests. The study analysed questions similar to those in the research reported in this paper, on causes and detection techniques of flaky tests, in 31 primary studies on flaky tests. Hence, this review was limited, and it did not discuss in detail the mechanisms of current detection approaches or the wider impact of flaky tests on other techniques. There was a short scoping grey literature review by Barboni et al. \cite{barboni2021we} that focused on investigating the definition of flaky tests in grey literature by analysing flaky-test related blogs posted on \emph{Medium}. The study is limited to understanding the definition of flaky tests (highlighting the problem of inconsistent terminology used in the surveyed articles), and covered a small subset of the posts (analysing only 12 articles in total). Parry et al. \cite{parry2021survey} conducted a more recent comprehensive survey of academic literature on the topic of test flakiness. The study addressed research questions similar to those in our review, and to those in the previous reviews, by studying causes, detection and mitigation of flaky tests. The study reviewed 76 articles that focused on flaky tests. The review presented in this paper covers a longer period of time than those previous reviews \cite{parry2021survey, zolfaghari2020root, zheng2021research}, including work dating back further (on ``non-deterministic tests''), before the term ``flaky tests'' became popular. This review contains a discussion of publications through the end of April 2022, whereas the most recent review, by Parry et al. \cite{parry2021survey}, covers publications through April 2021. We found a significant number of academic articles published in the period between the two reviews (229 articles published between 2021 and 2022).
In general, our study complements previous work in that: 1) we gather more detailed evidence about causes of flaky tests, and investigate the relationships between different causes; 2) we investigate both the impact of and responses to flaky tests in both research and practice; 3) we list the \textit{indirect} impact of flaky tests on other analysis methods and techniques (e.g., software debugging and maintenance). All previous reviews focused on academic literature. The review by Zolfaghari et al. \cite{zolfaghari2020root} covered a total of 43 articles, Parry et al. \cite{parry2021survey} covered 76 articles, and Zheng et al. \cite{zheng2021research} covered 31 articles. Our study complements these reviews by providing a much wider coverage and an in-depth perspective on the topic of flaky tests. We include a total of 602 academic articles, as well as reviewing 91 grey literature entries (details in Section \ref{sec:data-extraction}). We cover not only studies that directly report on flaky tests, but also those that reference or discuss the issue of test flakiness when it is not the focus of the study. We also discuss a wide range of flaky-test related tools used in research and practice (including industrial and open-source tools), and discuss the impact of flaky tests from different perspectives. A comparative summary of this review with the previous three reviews is shown in Table \ref{tab:reviews-summary}. \begin{table*}[]
\caption{Summary of prior reviews on test flakiness compared with our review}
\label{tab:reviews-summary}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lllll@{}}
\toprule
Paper & Period covered & \begin{tabular}[c]{@{}l@{}}\# of reviewed\\ articles\end{tabular} & \begin{tabular}[c]{@{}l@{}}Grey \\ literature\end{tabular} & Focus \\ \midrule
\cite{zolfaghari2020root} Zolfaghari et al. & 2013 - 2020 & 43 & -- & causes and detection techniques \\
\cite{zheng2021research} Zheng et al. & 2014 - 2020 & 31 & -- & causes, impact, detection and fixing approaches \\
\cite{parry2021survey} Parry et al. & 2009 - 4/2021 & 76 & -- &
\begin{tabular}[c]{@{}l@{}}causes, costs and consequences, detection and approaches for \\ mitigation and repair \end{tabular} \\
This review & 1994 - 5/2022 & 560 & 91 & \begin{tabular}[c]{@{}l@{}}taxonomy of causes, detection and response techniques, and \\impact on developers, process and product in research and practice\end{tabular} \\ \bottomrule
\end{tabular}}
\end{table*}
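Several of the surveyed studies name test-order dependency among the main causes of flakiness. The following sketch (ours, not taken from any surveyed study; the test names and the shared \texttt{cache} are hypothetical) shows in Python how the same suite can pass or fail depending only on execution order:

```python
# Minimal illustration of a test-order-dependent flaky test:
# test_read passes only if test_write ran first, so the suite's
# verdict depends on the order in which the tests are executed.
cache = {}

def test_write():
    cache["user"] = "alice"
    assert cache["user"] == "alice"

def test_read():
    # Hidden dependency on state left behind by test_write.
    assert cache.get("user") == "alice"

def run_suite(order):
    """Run the tests in the given order; return True iff all pass."""
    cache.clear()  # simulate a fresh process for each run
    for test in order:
        try:
            test()
        except AssertionError:
            return False
    return True

print(run_suite([test_write, test_read]))  # True
print(run_suite([test_read, test_write]))  # False
```

Reorder-based detectors exploit exactly this effect: executing the suite in shuffled orders and comparing verdicts exposes the hidden dependency.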
\section{Study Design}
\label{sec:design}
We designed this review following the Systematic Literature Review (SLR) guidelines by Kitchenham and Charters \cite{kitchenham2007guidelines}, and the guidelines of Garousi et al. \cite{Garousi2019Guidelines} on multivocal review studies in software engineering. The review process is summarized in Fig. \ref{fig:protocol}. \begin{figure*}[h]
\centering
\includegraphics[width=0.70\linewidth]{media/review_protocol.png}
\caption{An overview of our review process}
\label{fig:protocol}
\end{figure*}
\subsection{Research Questions}
\label{sec:questions}
This review addresses a set of research questions that we categorized along four main dimensions: \textit{causes}, \textit{detection}, \textit{impact} and \textit{responses}. We list our research questions below, with the rationale for each. \subsubsection{Common Causes of Flaky Tests:}
\noindent\textbf{RQ1. What are the common causes of flaky tests?}
The rationale behind this question is to list the common causes of flaky tests and then group similar categories of causes together. We also investigate the cause-effect relationships between different flakiness causes as reported in the reviewed studies, as we believe that some causes are interrelated. For example, flakiness related to the User Interface (UI) could be attributed to the underlying environment (e.g., the Operating System (OS) used). Understanding the causes and their relationships is key to dealing with flaky tests (i.e., detection, quarantine or elimination). \subsubsection{Detection of Flaky Tests}
\noindent\textbf{RQ2. How are flaky tests being detected?}\\
\noindent To better understand flaky tests detection, we divide this research question into the following two sub-questions. \\
\noindent\textbf{RQ2.1. What \textit{methods} have been used to detect flaky tests?}\\
\noindent\textbf{RQ2.2. What \textit{tools} have been developed to detect flaky tests?} \\
In RQs 2.1 and 2.2, we gather evidence regarding methods/tools that have been proposed/used to detect flaky tests. We seek to understand how these methods work. We later discuss the potential advantages and limitations of current approaches. \subsubsection{Impact of Flaky Tests}
\noindent\textbf{RQ3. What is the impact of flaky tests?}
As reported in previous studies, flaky tests are generally perceived to have a negative impact on software product and process \cite{fowler2011eradicating, googleFlaky2016, harman2018start, eck2019understanding}. However, it is important to understand the extent of this impact, and what exactly is affected (e.g., process, product, personnel). \subsubsection{Responses to Flaky Tests}
\noindent\textbf{RQ4. How do developers/organizations respond to flaky tests when detected?}
Here we are looking at the responses and mitigation strategies employed by developers, development teams and organisations. We note that there are both technical responses (i.e., how to fix the test or the underlying code that causes the flakiness) and managerial ones (i.e., the processes followed to manage flaky test suites). \subsection{Review Process}
\label{sec:review-process}
Since this is a multivocal review where we search for academic and grey literature in different forums, the search process for each of the two parts (academic and grey literature) is different and requires different steps. The systematic literature review search targets peer-reviewed publications that have been published in relevant international journals, conference proceedings and theses in the areas of software engineering, software testing and software maintenance. The search also covers preprint and postprint articles available in open access repositories such as \textit{arXiv}. For the grey literature review, we searched for top-ranked online posts on flaky tests. This includes blog posts, technical reports, white papers, and official documentation for tools. We used \textit{Google Scholar} to search for academic literature and the \textit{Google} search engine to search for grey literature. Both \textit{Google Scholar} and \textit{Google search} have been used in similar multivocal studies in software engineering \cite{garousi2018smell, myrbakken2017devsecops, garousi2016and} and other areas of computer science \cite{Islam2019Security, pereira2021security}. Google Scholar indexes most major publication databases relevant to computer science and software engineering \cite{neuhaus2006depth}, in particular the ACM digital library, IEEE Xplore, ScienceDirect and SpringerLink, thus providing a much wider coverage compared to those individual libraries. A recent study on the use of Google Scholar in software engineering reviews found it to be very effective, as it was able to retrieve $\sim$96 \% of primary studies in other review studies \cite{yasin2020using}. Similarly, it has been suggested that a regular \textit{Google Search} is sufficient to search for grey literature material online \cite{mahood2014searching, adams2016searching}. \subsubsection{Searching Academic Publications}
We closely followed Kitchenham and Charters's guidelines \cite{kitchenham2007guidelines} to conduct a full systematic literature review. The goal here is to identify and analyse primary studies relating to test flakiness. We defined a search strategy and a search string that covers the terminology associated with flaky tests. The search string was tested and refined multiple times during our pilot runs to ensure coverage. We then defined a set of inclusion and exclusion criteria. We included a quality assessment of the selection process to ensure that we include all relevant primary studies. We explain those steps in detail below. We define the following criteria for our search:
\begin{enumerate}
\item [] \textbf{Search engine:} Google Scholar. \item []\textbf{Search String:} ``flaky test'' OR ``test flakiness'' OR ``flaky tests'' OR ``nondeterministic tests'' OR ``non deterministic tests'' OR ``nondeterministic test'' OR ``non deterministic test''
\item [] \textbf{Search scope:} all academic articles published until 30 April 2022. \end{enumerate}
Where an article appeared in multiple venues (e.g., a conference paper that was also published on arXiv under a different title, or material from a thesis that was subsequently published in a journal or conference proceedings), we included only the published version, to ensure that we include as much peer-reviewed material as possible. We conducted this search over two iterations: the first covers the period until 31 December 2020, while the second covers the period between 1 January 2021 and 30 April 2022. Results from the two searches were then combined. \noindent Our review of academic articles follows these steps:
\begin{enumerate}
\item Search and retrieve relevant articles using the defined search terms in Google Scholar. \item Read the title, abstract and the full text (if needed) to determine relevance (done by one of the co-authors). \item Cross-validate a randomly selected set of articles by another co-author. \item Apply the inclusion and exclusion criteria. \item Classify all included articles based on the classification we have for each question (details of these classifications are provided for each research question in Section \ref{sec:results}). \end{enumerate}
\subsubsection{Searching for Grey Literature}
Here we followed the recommendations made in previous multivocal review studies \cite{garousi2018smell, Garousi2019Guidelines} and report only the results obtained from the first 10 pages (10 items each) of the Google search; it has been reported that relevant results usually appear only in the first few pages \cite{Garousi2019Guidelines}. Indeed, we observed that the results in the last five pages were less relevant than those in the first five. For the grey literature search, we define the following criteria:
\begin{enumerate}
\item [] \textbf{Search engine:} Google Search.
\item [] \textbf{Search scope:} the first 100 results (10 pages); in the first iteration we searched for material published until 31 December 2020, and in the second iteration we searched for material published up to 30 April 2022 (we removed duplication found between the two searches).
\end{enumerate}
\noindent The grey literature review consists of the following steps:
\begin{enumerate}
\item Search and retrieve relevant results using the search terms in Google Search. \item Read the title and full article (if needed) to determine relevance. \item Cross-validate a randomly selected set of articles by another co-author. \item Check external links and other external resources that have been referred to in the articles/posts. Add new results to the dataset. \item Classify all posts based on the classification scheme. \end{enumerate}
\subsubsection{Selection Criteria}
\noindent We selected articles based on the three following inclusion criteria:
\begin{itemize}
\item Studies that discussed test flakiness as the main topic of the article. \item Studies that discussed test flakiness as an impact of using a specific technique or tool, or in an experiment. \item Studies that discussed test flakiness as a limitation of a technique, tool or experiment. \end{itemize}
\noindent We apply the following exclusion criteria:
\begin{enumerate}
\item [--] Articles only mentioning flakiness, but without substantial discussion of the subject. \item [--] Articles that are not accessible electronically, or whose full text is not available for download\footnote{That is, the article is available neither through a known digital library such as the ACM Digital Library, IEEE Xplore, ScienceDirect or SpringerLink, nor through an open-access repository such as arXiv or ResearchGate.}. \item [--] Studies on nondeterminism in hardware and embedded systems. \item [--] Studies on nondeterminism in algorithm testing (e.g., when nondeterminism is intentionally introduced). \item [--] Duplicate studies (e.g., reports of the same study published in different places or on different dates, or studies that appeared in both the academic and grey literature searches). \item [--] Secondary studies on test flakiness (previous review articles). \item [--] Editorials, prefaces, books, news, tutorials and summaries of workshops and symposia. \item [--] Multimedia material (videos, podcasts, etc.) and patents. \item [--] Studies written in a language other than English. \end{enumerate}
\noindent For the grey literature study, we also exclude the following:
\begin{enumerate}
\item [--] Academic articles, as those are covered by our academic literature search using Google Scholar. \item [--] Tool description pages (such as GitHub pages) with little or no discussion of the causes of flakiness or its impact. \item [--] Web pages that mention flaky tests without substantial discussion (e.g., only providing a definition of flakiness). \end{enumerate}
\subsubsection{Pilot Run}
\label{sec:pilot}
Before we finalized our search keywords and search strings, and defined our inclusion and exclusion criteria, we conducted a pilot run using a simplified search string to validate the study selection criteria, refine/confirm the search strategy and refine the classification scheme before conducting the full-scale review. The pilot run used a short, intentionally inclusive string (\textit{``flaky test'' OR ``test flakiness'' OR ``non-deterministic test''}) on both Google and Google Scholar. These keywords were drawn from two influential articles and blog posts known to the authors (either used in the title or as keywords): the highly cited work on the topic by Luo et al. \cite{luo2014empirical} and the well-known blog post by Martin Fowler \cite{fowler2011eradicating}. This was done for the period until 31 December 2020. In the first iteration, we retrieved 10 academic articles and 10 grey literature results; in the second iteration we obtained another 10 academic articles (the next 10 results) and 10 grey literature results. We validated the results of this pilot run based on the articles' relevance as well as our familiarity with the field, and classified all retrieved results using the defined classification scheme in order to answer the four research questions. We used this pilot run to improve and update our research questions and classification scheme. We classified a total of 20 articles in each group (academic and grey literature). With respect to the former, 14 of the 20 articles found by the search were already familiar to us, lending a degree of confidence that our search would at minimum find papers relevant to our research questions. \subsection{Data Extraction and Classification}
\label{sec:data-extraction}
We extracted relevant data from all reviewed articles in order to answer the different research questions. Our search results (following the different steps explained above) are shown in Fig. \ref{fig:results_stats}. As explained in Section \ref{sec:review-process}, we conducted our search over two iterations, covering two periods. The first period covers articles published up until 31 December 2020, while the second covers the period from 1 January 2021 until 30 April 2022. In the first iteration we retrieved a total of 1092 results, with 992 articles obtained from the Google Scholar search and 100 grey literature articles obtained from Google Search (i.e., the first 10 pages). After filtering the relevant articles and applying the inclusion and exclusion criteria,
we ended up with a total of 408 articles for analysis (354 academic articles and 54 grey literature articles). In the second iteration (from January 2021 until April 2022), we retrieved 330 academic articles from Google Scholar and 100 articles from Google Search (results from the first 10 pages). We removed duplicates (e.g., results that appeared twice, such as the same publication appearing in multiple venues, or a grey literature article that appeared in the top 10 pages in both iterations). For this iteration, after filtering the relevant articles, we ended up with 243 results (206 academic articles and 37 grey literature posts). Collectively, we identified a total of 560 academic and 91 grey literature articles for our analysis. \begin{figure*}[h]
\centering
\resizebox{0.6\linewidth}{!}{
\includegraphics[width=\linewidth]{media/review-results.png}}
\caption{Results of the review process}
\label{fig:results_stats}
\end{figure*}
The analysis of articles was done by three coders (co-authors). We split the articles between the coders, and each coder read their articles and extracted data to answer our research questions. For each article, we first tried to understand the context of flakiness. We then looked for the following: 1) the discussed causes of flakiness (RQ1), 2) how flakiness is detected (methods, tools, etc.) (RQ2), 3) the noted impact of flakiness (RQ3), and 4) the approach used to respond to or deal with flaky tests (RQ4). \subsection{Reliability Assessment}
We conducted a reliability check of our filtration and classification. We cross-validated a randomly selected sample of 88/992 academic articles and 49/100 grey literature articles from the first iteration (confidence level = 95\%, confidence interval = 10). Two of the authors cross-validated those articles, each classifying half of them (44 of the academic and 25 of the grey literature articles). In addition, a third co-author (who was not involved in the initial classification) cross-validated another 25 randomly selected academic/grey articles. In these cross-validations, we reached an agreement level of $\geq$ 90\%. \\
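The sample sizes above (88 of 992 academic and 49 of 100 grey literature articles) are consistent with the standard Cochran sample-size formula with finite-population correction, at a 95\% confidence level and a confidence interval of 10. A minimal sketch (the function name is ours) that reproduces these numbers:

```python
def sample_size(population, z=1.96, margin=0.10, p=0.5):
    """Cochran's sample-size formula with finite-population correction.

    z      -- z-score for the confidence level (1.96 for 95%)
    margin -- confidence interval, expressed as a proportion
    p      -- assumed population proportion (0.5 is the most conservative)
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))   # finite-population correction

print(sample_size(992))  # academic articles, first iteration -> 88
print(sample_size(100))  # grey literature articles           -> 49
```

Using $p = 0.5$ maximizes the required sample size, so the resulting samples are conservative regardless of the true proportion being estimated.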
We provide the full list of articles that we reviewed (both academic and grey) online: \url{https://docs.google.com/spreadsheets/d/1eQhEAUSMXzeiMatw-Sc8dqvftzLp8crC3-B9v5qHEuE}. \section{Results}
\label{sec:results}
\subsection{Overview of the publications}
We first provide an overview of the timeline of publications on flaky tests in order to understand how the topic has been viewed and has developed over the years. The timeline of publications is shown in Fig. \ref{fig:timeline}. Based on our search for academic articles, we found articles discussing the issue of \emph{nondeterminism} in test outcomes dating back to 1994, with 34 articles found between 1994 and 2014. However, the number of articles has increased significantly since 2014: there has been exponential growth in publications addressing flaky tests over the past six years (between 2016 and 2022). We attribute this increase to the rising attention to flaky tests in the research community since the publication of the first empirical study on the causes of flaky tests in Apache projects by Luo et al. in 2014 \citeS{S1}, which was the first study to directly address the issue of flaky tests in detail (in terms of common causes and fixes). Over 93\% of the articles were published after this study, with 41\% of those articles (229) published between January 2021 and April 2022 alone, indicating growing popularity over the years. \\
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]
{media/timeline.png}
\caption*{\scriptsize{* The 2021--2022 numbers include articles published between Jan 2021 and April 2022.}}
\caption{Timeline of articles published on flaky tests, including the focused articles}
\label{fig:timeline}
\end{figure}
In terms of publication types and venues, more than 40\% of these publications appeared in reputable and highly rated software engineering venues. Top venues include the premier software engineering conferences: the International Conference on Software Engineering (ICSE), the Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) and the International Conference on Automated Software Engineering (ASE). Publications on flaky tests have also appeared in the main software testing conferences, the International Symposium on Software Testing and Analysis (ISSTA) and the International Conference on Software Testing, Verification and Validation (ICST), as well as in the software maintenance conferences, the International Conference on Software Maintenance and Evolution (ICSME) and the International Conference on Software Analysis, Evolution and Reengineering (SANER). A few articles ($\sim$10\%) were published in premier software engineering journals, including Empirical Software Engineering (EMSE), the Journal of Systems and Software (JSS), IEEE Transactions on Software Engineering (TSE) and the Software Testing, Verification and Reliability (STVR) journal. The distribution of publications across key software engineering venues is shown in Fig. \ref{fig:venues}. Regarding inclusion and exclusion, we did not perform a quality assessment of articles based on venues or citation statistics; we focused on primary studies (excluding the three reviews and secondary studies). Furthermore, looking deeper into the focused studies provided insights into their quality. \begin{figure}[h]
\centering
\includegraphics[width=\linewidth]
{media/venue-distr.png}
\caption{Distribution of publications based on publication venues}
\label{fig:venues}
\end{figure}
In terms of programming languages, the vast majority of the studies focused on Java (49\% of those studies), with only a few studies discussing flakiness in other languages such as Python, JavaScript and .NET, and 49 studies (14\%) using multiple languages (results are listed in Table \ref{tab:languages}). \begin{table}[h]
\centering
\caption{Flaky tests in terms of languages studied}
\begin{tabular}{lc}
\toprule
\textbf{Language} & \textbf{\# Articles} \\ \midrule
Java & 212 \\
Python & 25 \\
JavaScript & 11 \\
.NET languages & 6 \\
Other languages & 27 \\
Multiple languages & 58 \\
Not specified/unknown & 221 \\
\bottomrule
\end{tabular}
\label{tab:languages}
\end{table}
We classified all articles into three main categories: 1) studies focusing on test flakiness, where flakiness is the focal point (e.g., an empirical investigation into test flakiness due to a particular cause such as concurrency or order dependency), 2) studies that explain how test flakiness impacts tools/techniques (e.g., the impact of test flakiness on program repair or fault localization), and 3) studies that merely mention or reference test flakiness (but where it is not a focus of the study). A breakdown of the focus of the articles and the nature of the discussion on test flakiness in the academic literature is shown in Table \ref{tab:focus}. We observed that only 109 articles ($\sim$20\%) of all the collected studies focused on test flakiness as the main subject of the study. The majority of the reviewed articles (297, representing $\sim$53\%) merely mentioned test flakiness as a related issue or as a threat to validity. The remaining 154 articles ($\sim$27\%) discussed flakiness in terms of its impact on a proposed technique or tool that is the subject of the study. \begin{table}[h]
\caption{Focus of academic articles}
\resizebox{\linewidth}{!}{
\begin{tabular}{llc}
\toprule
\textbf{Type} & \textbf{Description} & \multicolumn{1 }{l}{\textbf{\# of articles}} \\ \toprule
Focused & \begin{tabular}[c]{@{}l@{}}Studies focusing on test flakiness\end{tabular} & 109 \\
Impact & \begin{tabular}[c]{@{}l@{}}Studies that explain how test flakiness impacts tools/techniques \end{tabular} & 154 \\
Referenced & \begin{tabular}[c]{@{}l@{}}Studies that just mention or reference test flakiness\end{tabular} & 297 \\ \bottomrule
\end{tabular}}
\label{tab:focus}
\end{table}
As for the grey literature results, all articles we found were published after Martin Fowler's influential blog post on test nondeterminism in early 2011 \cite{fowler2011eradicating}. Mirroring the recent increase in academic attention, almost 50\% of the grey literature articles (26) were published between 2019 and 2020, indicating a growing interest in flaky tests and shedding light on the importance of the topic in practice. In the following subsections, we present the results of the study by answering each of our four research questions in detail. \input{Sections/rq1causes}
\input{Sections/rq2detection}
\input{Sections/datasets}
\input{Sections/rq3impact}
\input{Sections/rq4responses}
\section{Discussion}
\label{sec:discussion}
In this section, we discuss the results of the review and present possible challenges in detecting and managing flaky tests. We also provide our own perspective on current limitations of existing approaches, and discuss potential future research directions. \subsection{Flaky Tests in Research and Practice}
The problem of flaky tests has been the subject of wide discussion among researchers and practitioners. Dealing with flaky tests is a real issue impacting developers and test engineers on a daily basis; it can undermine the validity of test suites and render them almost useless \citeG{G8, G74}. The topic of flaky tests has been a research focus, with a noticeable increase in the number of publications over the last four years (between 2017 and 2021).
Most studies have focused on specific causes of flakiness (either in terms of detection or fixes), namely those related to order dependency in test execution or to concurrency. There is generally a lack of studies investigating the impact of other causes of test flakiness, such as those related to variation in the environment or in the network. This is an area that could be addressed by future tools designed specifically to detect test flakiness due to those factors. Our recent work targets this by designing a tool, \textit{saflate}, aimed at reducing test flakiness by sanitising failures induced by network connectivity problems \cite{dietrich2022flaky}. The discussion in the grey literature, on the other hand, focuses more on the general strategies followed in practice to deal with flaky tests once they are detected in the CI pipeline. These are usually detected by checking whether the test outcome has changed between runs (e.g., from \texttt{PASS} to \texttt{FAIL}). Several strategies followed by software development teams are discussed in the grey literature, especially around what to do with flaky tests once they have been identified. A notable approach is quarantining flaky tests in an isolated `staging' area until they are fixed \citeG{G4, S106}. The gap between academic research and practice in the way flaky tests are viewed has also been discussed in some of the most recent articles. An experience report published by managers and engineers at Facebook \citeG{G127} explained how real-world applications can \textit{always} be flaky (e.g., due to the nondeterminism of algorithms), and that we should focus not on when or whether tests are flaky, but rather on how flaky they are. This supports Harman and O'Hearn's view \cite{harman2018start} that all tests should, by default, be considered flaky, which provides a defensive mechanism that can help in managing flaky tests in general.
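To make the order-dependency cause discussed above concrete, the following toy example (entirely our own construction, not taken from any cited study) shows a test whose outcome depends on whether another test has run first and polluted shared state:

```python
# Shared mutable state, e.g. a module-level cache, is a classic
# source of order-dependent flakiness.
shared_cache = {}

def test_writer():
    shared_cache["user"] = "alice"   # side effect: pollutes shared state
    return "PASS"

def test_reader():
    # Implicitly assumes test_writer has already run.
    return "PASS" if shared_cache.get("user") == "alice" else "FAIL"

def run_suite(tests):
    shared_cache.clear()             # fresh state for each suite run
    return [test() for test in tests]

print(run_suite([test_writer, test_reader]))  # ['PASS', 'PASS']
print(run_suite([test_reader, test_writer]))  # ['FAIL', 'PASS']
```

Rerunning suites in shuffled orders and comparing outcomes, as done here, is the basic idea behind order-dependency detectors.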
\subsection{Identifying and Detecting Flaky Tests}
In RQ1, we surveyed the common causes of flaky tests, whether in the CUT or in the tests themselves. We observe that there is a variety of causes for flakiness, from the use of specific programming language features to the reliance on external resources. It is clear that common factors are responsible for flaky test behaviour regardless of the programming language used or the application domain. Factors like test order dependency, concurrency, randomness, the network and reliance on external resources are common across almost all domains, and are responsible for a high proportion of flaky tests. Beyond the list of causes noted in the first empirical study on this topic by Luo et al. \citeS{luo2014empirical}, we found evidence of a number of additional causes, namely flakiness due to algorithmic nondeterminism (related to randomness), variations in hardware and environment, and causes related to the use of ML applications (which are nondeterministic in nature). Some causes identified in the academic literature overlap, and causes can also be interconnected. For example, UI flakiness can in turn be due to a platform dependency (e.g., dependency on a specific browser) or to event races. Given this large number of causes, further in-depth investigation is needed to understand how flaky tests are introduced into the code base and to better understand their root causes in general. This also includes studies of test flakiness in a variety of programming languages (as opposed to Java or Python, which most flakiness studies have covered). Another point worth mentioning is how \emph{flakiness} is defined in different studies: in general, a test is considered flaky if it has a different outcome on different runs with the same input data. Academic literature refers to tests having binary outcomes, i.e., \emph{PASS} or \emph{FAIL}.
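The working definition just given (same code, same inputs, different outcomes across runs) can be stated in a few lines; the function name is ours:

```python
def is_flaky(outcomes):
    """Flag a test as flaky if it produced more than one distinct
    outcome across reruns of the same code version with the same inputs."""
    return len(set(outcomes)) > 1

print(is_flaky(["PASS", "PASS", "PASS"]))  # False: stable
print(is_flaky(["PASS", "FAIL", "PASS"]))  # True: flaky
```

Note that this check generalizes directly to richer outcome sets such as error, skip or timeout, since it only compares distinct outcomes.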
In practice, however, tests can have multiple outcomes on execution (pass, fail, error or skip). For instance, tests may be skipped/ignored (potentially non-deterministically) or may not terminate (or timeout, depending on the configuration of tests or test runners). A more concise and consistent definition of the different variants of flakiness is needed. \subsection{The Impact of and Response to Flaky Tests}
It is clear that flaky tests have a negative impact on the validity of tests and on the quality of the software as a whole. A few impact points have been discussed in both the academic and grey literature. Notable areas impacted by flaky tests are test-dependent techniques, such as fault localization, program repair and test selection. An important impact area that has not been widely acknowledged is how flaky tests affect developers. Although the impact on developers was mentioned in developers'
surveys \cite{S8, S1019}, and in many grey literature articles (e.g., \citeG{G2, G8, G134}), it has not been explicitly studied in more detail -- an area that should be explored further in the future. In terms of responses to flaky tests, the most common approach seems to be to quarantine flaky tests once they are detected. The recommendation is to keep tests with flaky outcomes in a quarantine area separate from other ``healthy'' tests. This way, flaky tests are not forgotten, and the cause of flakiness can be investigated later so that a suitable fix can be applied; meanwhile, the non-flaky tests can still run, avoiding delays in development pipelines. However, there remain some open questions about how to deal with quarantined tests: how long should tests stay in the designated quarantine area, and how many tests can be quarantined at once? A strategy (that could be implemented in tools) for processing quarantined flaky tests and removing them from the quarantine area (i.e., de-quarantining) also needs further investigation. One interesting area for future research is to study the long-term impact of flaky tests. For example, what is the impact of flaky tests on the validity of test suites if they are left unfixed? Do a few untreated flaky tests left in the test suite have a wider impact on the presence of bugs as development progresses? It would also be interesting to see, when flaky tests are flagged and quarantined, how long it takes developers to fix them. This can be viewed as a technical debt that needs to be paid back; a study on whether this debt is actually paid back, and how long it takes, would be valuable. \\
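To illustrate the quarantine/de-quarantine workflow discussed above, here is a minimal sketch; the class, the method names and the pass-streak release policy are our own invention, not a description of any existing tool:

```python
class Quarantine:
    """Toy quarantine area: a flagged test is released only after a
    configurable streak of consecutive passing reruns."""

    def __init__(self, release_streak=50):
        self.release_streak = release_streak
        self._passes = {}                 # quarantined test -> consecutive passes

    def quarantine(self, test_name):
        self._passes[test_name] = 0

    def is_quarantined(self, test_name):
        return test_name in self._passes

    def record_run(self, test_name, outcome):
        if test_name not in self._passes:
            return
        if outcome == "PASS":
            self._passes[test_name] += 1
            if self._passes[test_name] >= self.release_streak:
                del self._passes[test_name]      # de-quarantine
        else:
            self._passes[test_name] = 0          # any failure resets the streak

q = Quarantine(release_streak=2)
q.quarantine("test_login")
q.record_run("test_login", "PASS")
q.record_run("test_login", "FAIL")     # streak resets
q.record_run("test_login", "PASS")
q.record_run("test_login", "PASS")     # streak reached -> released
print(q.is_quarantined("test_login"))  # False
```

The open questions raised above map directly onto the parameters of such a sketch: the appropriate value of the release streak, and whether a cap on the total number of quarantined tests is needed, are both empirical questions.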
\subsection{Implications on Research and Practice}
This study yields some actionable insights and opportunities for future research. We discuss those implications in the following:
\begin{enumerate}
\item The review clearly demonstrates that academic research on test flakiness focuses mostly on Java, with limited studies in other popular languages\footnote{Based on Stack Overflow language popularity statistics \url{https://insights.stackoverflow.com/survey/2021\#technology-most-popular-technologies}}, i.e., JavaScript and Python. The likely reasons are a combination of existing expertise, the availability of open-source datasets, and the availability of high-quality and low-cost (often free) program analysis tools. However, our grey literature review shows that the focus among practitioners is more on the ``big picture'', and flakiness has been discussed in the context of a variety of programming languages. \item Different programming languages have different features, and it is not obvious how results observed in Java programs carry over to other languages. For instance, flakiness caused by test order dependencies and shared (memory) state is not possible in a pure functional language (like Haskell), and is at least less likely in a language that manages memory more actively to restrict aliasing (like Rust, using ownership\footnote{\url{https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html}}). In languages with different concurrency models, such as single-threaded languages (e.g., JavaScript), some flakiness caused by concurrency is less likely to occur; for instance, deadlocks are more common in multithreaded applications \cite{wang2017comprehensive}. This does not mean that flakiness cannot occur due to concurrency, but it is likely to happen to a lesser extent than in multithreaded languages such as Java. Similarly, languages (like Java) that use a virtual machine to decouple the runtime from the operating system and hardware are less likely to produce flakiness due to variability in those platforms than low-level languages lacking such a feature, like C.
Languages with strongly integrated dynamic/meta-programming features facilitate testing techniques like mock testing, which, when used, may help avoid certain kinds of flakiness, for instance flakiness caused by network dependencies. \item There seems to be an imbalance between the responses to flaky tests discussed in academic and in industry articles. Industry responses have focused on processes to deal with flaky tests (such as quarantining strategies), while academic research has focused more on cause detection (note that there are some recent studies on automatically repairing flakiness in ML projects and in order-dependent tests). This is not unexpected; however, it may also indicate opportunities for future academic research to provide tools that help automate quarantining (and de-quarantining). Furthermore, it appears that some industrial practices, such as rerunning failed tests until they pass, may require a deeper theoretical foundation. For instance, does a test that only passes after several reruns provide the same level of assurance as a test that always passes? The same question can be asked of entire test suites: what is the quality of a test suite that never passes in its entirety, but each of whose individual tests is observed to pass in some configuration? \item Another question arises from this: what is the number of reruns required to assure (with a high level of confidence) that a test is not flaky? In the studies that used a rerun approach to manifest flakiness, the number of reruns differs from one study to another (with some studies using 2 \citeS{S16}, 10 \citeS{S99} or even 100 \citeS{S6} reruns as baselines). A recent study on Python projects reported that $\sim$170 reruns are required to ensure a test is not flaky due to non-order-dependent reasons \cite{gruber2021empirical}. We believe that the number of reruns required will depend largely on the cause of flakiness.
Some rerun-based tools, such as rspec-retry\footnote{\url{https://github.com/NoRedInk/rspec-retry}} for Ruby or the \texttt{@RepeatedTest}\footnote{\url{https://junit.org/junit5/docs/5.0.1/api/org/junit/jupiter/api/RepeatedTest.html}} annotation in JUnit, provide an option to rerun tests a specified number of times \textit{n} (set by the developer). However, it is unknown what a suitable threshold is for the number of reruns required for different types of flakiness. Further empirical investigation to quantify the minimum number of reruns required to manifest flakiness (for the different causes and in different contexts) is needed. The work of Alshammari et al. \citeS{S1008} is a step in this direction: they reran tests in 24 Java projects 10,000 times to find out how many flaky tests can be found with different numbers of reruns. \end{enumerate}
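The rerun-count question above admits a simple (and admittedly idealized) back-of-the-envelope model: if a test fails independently with probability $p$ on each run, the smallest number of reruns $n$ that exposes at least one failure with confidence $c$ satisfies $1-(1-p)^n \geq c$. The function below is our own sketch; real flakiness (e.g., order-dependent flakiness) violates the independence assumption, which is one reason empirically observed rerun counts vary so widely.

```python
import math

def min_reruns(failure_rate, confidence=0.95):
    """Smallest n with 1 - (1 - failure_rate)**n >= confidence,
    assuming independent, identically distributed runs."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - failure_rate))

print(min_reruns(0.05))  # 59: a test that flakes 5% of the time
print(min_reruns(0.01))  # 299: rarer flakes need far more reruns
```

The model makes the trade-off explicit: the rarer the flaky failure, the more reruns a detection tool must budget for, which motivates the empirical calibration work cited above.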
\section{Validity Threats}
\label{sec:threats}
We discuss a number of potential threats to the validity of the study below, and explain the steps taken to mitigate them. \\
\noindent\textbf{Incomplete or inappropriate selection of articles:} As with any systematic review study relying on automatic search, it is possible that we missed some articles that were either not covered by our search string or not captured by our search tool. We mitigated this threat by first running and refining our search string multiple times. We piloted the search string on Google Scholar to check what it would return, and cross-validated this by checking that the search string returned well-known, highly cited articles on test flakiness (e.g., \cite{luo2014empirical, memon2013automated, eck2019understanding}). We believe this iterative approach improved our search string and reduced the risk of missing key articles. There is also a chance that some related articles used terms other than those in our search string; if terms other than ``flaky'', ``flakiness'' or ``non-deterministic'' were used, the possibility of missing those studies increases. To mitigate this limitation, we repeatedly refined our search string and performed sequential testing in order to recognize and include as many relevant studies as possible. \\
\noindent\textbf{Manual analysis of articles:}
We read through each of the academic and grey literature articles in order to answer our research questions. This was done manually, with at least one of the authors reading through the articles; the overall results were then verified by another co-author. This manual analysis could introduce bias due to differing interpretations and/or oversight. We are aware that human interpretation introduces bias, and we attempted to account for it via cross-validation involving multiple evaluators, with at least two coders cross-checking the results of the classification stage. \\
\noindent\textbf{Classification and reliability:} We performed a number of classifications based on findings from the academic and grey literature articles to answer our four research questions. We extracted information such as causes of flakiness (RQ1), detection methods and tools (RQ2), impact of flakiness (RQ3) and responses (RQ4). This information was obtained by reading through the articles, extracting the relevant information, and then classifying the articles; this was done by one of the authors. Another author then cross-validated the overall classification. We made sure that at least two of the co-authors checked each result and discussed any differences until 100\% agreement was reached. \\
\section{Conclusion}
\label{sec:conclusion}
In this paper, we systematically studied how test flakiness has been addressed in academic and grey literature. We provide a comprehensive view of flaky tests, their common causes, their impact on other techniques/artefacts, and the response strategies followed in research and practice. We also studied the methods and tools that have been used to detect and locate flaky tests. \\
This review covers 560 academic and 91 grey literature articles. The results show that academic studies of test flakiness have focused more on Java than on other programming languages. In terms of common causes, we observed that flakiness due to test order dependency and concurrency has been studied more widely than other noted sources of flakiness, although this depends mainly on the focus of the studies that reported those causes. For example, studies that used Android subject systems focused mostly on UI flakiness (where concurrency issues are the attributed root cause). Correspondingly, methods to detect flaky tests have focused on specific types of flaky tests, with dynamic rerun-based approaches being the main proposed method for flaky test detection. The intention is to provide approaches (either static or dynamic) that are less expensive to run by accelerating the manifestation of flakiness while running fewer tests. \\
This paper outlines some limitations in test flakiness research that should be addressed by researchers in the future.
\section{Introduction}
\label{sec:introduction}
Software testing is a standard method used to uncover defects. Developers use tests early during development to uncover software defects when corrective actions are relatively inexpensive. A test can only provide useful feedback if it has the same outcome (either pass or fail) for every execution against the same version of the code. Tests with non-deterministic outcomes, known as \textit{flaky tests}, may pass in some runs and fail in others. Such flaky behaviour is problematic as it leads to uncertainty in choosing corrective measures \cite{harman2018start}. Flaky tests also incur heavy costs in developers' time and other resources, particularly when test suites are large and development follows an agile methodology, requiring frequent regression testing on code changes to safeguard releases. Test flakiness has been attracting more attention in recent years. In particular, there are several studies on the causes and impact of flaky tests in both open-source and proprietary software. In a study of open source projects, it was observed that 13\% of failed builds are due to flaky tests \cite{Labuschagne2017}. At Google, it was reported that around 16\% of their tests were flaky, and 1 in 7 of the tests written by their engineers occasionally fail in a way that is not caused by changes to the code or tests \cite{googleFlaky2016}.
Studies have shown that flaky tests present a real problem in practice, with a wider impact on product quality and delivery \cite{fowler2011eradicating, SandhuTesting2015, palmer2019}. Studies of test flakiness have also covered several programming languages, including Java \cite{luo2014empirical}, Python \cite{gruber2021empirical} and, more recently, JavaScript \cite{Hashemi2022flakyJS}. Awareness that more research on test flakiness is needed has increased in recent years \cite{harman2018start}. Currently, studies on test flakiness and its causes largely focus on specific sources of flakiness, such as order dependency \cite{gambi2018practical}, concurrency \cite{dong2021flaky} or UI-specific flakiness \cite{memon2013automated, romano2021empirical}. Given that test flakiness is an issue in both research and practice, we deem it important to integrate knowledge about flaky tests from both academic (formal) literature and grey literature in order to provide insights into the state of the practice. In order to address this,
we performed a multivocal literature review on flaky tests. A multivocal review is a form of \textit{systematic literature review} \cite{kitchenham2007guidelines} that includes sources from both academic (formal) and grey literature \cite{Garousi2019Guidelines}. Such reviews have become popular in computer science and software engineering over the past few years \cite{Tom2013TD, garousi2018smell, Islam2019Security, Butijn2020Blockchains}, as it is acknowledged that the majority of developers and practitioners do not publish their work or thoughts through peer-reviewed academic channels \cite{Garousi2016Multivocalreviews, Glass2006Creativity}, but rather in blogs, discussion boards and Q\&A sites \cite{Williams2019Grey}. This research summarizes existing work and current thinking on test flakiness from both academic and grey literature. We hope that this can help a reader develop an in-depth understanding of the common causes of test flakiness, the methods used to detect flaky tests, the strategies used to avoid and eliminate them, and the impact flaky tests have. We identify current challenges and suggest possible future directions for research in this area. The remainder of the paper is structured as follows: Section \ref{sec:background} presents recent reviews and surveys on the topic. Our review methodology is explained in Section \ref{sec:design}. We present our results answering all four research questions in Section \ref{sec:results}, followed by a discussion of the results in Section \ref{sec:discussion}. Threats to validity are presented in Section \ref{sec:threats}, and finally we present our conclusion in Section \ref{sec:conclusion}.
\subsection{Causes of Test Flakiness (RQ1)}
\label{sec:causes}
We analysed the causes of flaky tests, as noted in both academic and grey literature articles. We looked for the quoted reasons why a test is flaky and, in most cases, found multiple (sometimes connected) causes behind the flakiness. We then grouped those causes into categories based on their overall nature. \\
The most widely discussed causes in the literature are those identified in the empirical study of flaky tests by Luo et al. \citeS{S1}. The study provides a classification of causes resulting from an analysis of 201 commits that fix flaky tests across Apache projects, which are diverse in language and maturity. Their methodology is centred around examining commits that \textit{fix} flaky tests. In addition to classifying root causes of test flakiness into categories, they present approaches to manifest flakiness and strategies used to fix flaky tests. The classification consists of 10 categories of \textit{root causes} of flaky tests in the commits: \emph{async-wait}, \emph{concurrency}, \emph{test order dependency}, \emph{resource leak}, \emph{network}, \emph{time}, \emph{IO}, \emph{randomness}, \emph{floating point operations} and \emph{unordered collections}. Thorve et al. \citeS{S7} listed additional causes identified from a study of Android commits fixing flaky tests: \emph{dependency}, \emph{program logic}, and \emph{UI}. Dutta et al. \citeS{S14} noted subcategories of \emph{randomness}, and Moritz et al. \citeS{S8} identified three additional causes from a survey of developers: \emph{timeout}, \emph{platform dependency} and \emph{too restrictive range}. We mapped all causes found in the surveyed publications and categorized them, based on their nature, into the following major categories: \emph{concurrency}, \emph{test order dependency}, \emph{network}, \emph{randomness}, \emph{platform dependency}, \emph{external state/behaviour dependency}, \emph{hardware}, \emph{time} and \emph{other}. A summary of the causes we classified is provided in Table \ref{tab:causes} and discussed below. \noindent\textbf{Concurrency. } This category covers flakiness resulting from concurrency-related bugs, such as race conditions, data races, atomicity violations or deadlocks.
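Concurrency-related flakiness of this kind can be illustrated with a minimal Python sketch (the function and test names are invented for illustration and are not drawn from any surveyed study): a check that reads shared state without waiting for a background thread to finish races against that thread, whereas joining first makes the outcome deterministic.

```python
import threading
import time

result = {"ready": False}

def background_work():
    # Stand-in for an asynchronous task that takes a while to complete.
    time.sleep(0.3)
    result["ready"] = True

def flaky_style_check():
    # Flaky pattern: start the async task and read the shared state
    # immediately, without waiting for the task to finish.
    result["ready"] = False
    worker = threading.Thread(target=background_work)
    worker.start()
    outcome = result["ready"]  # raced: the worker has not yet finished
    worker.join()
    return outcome

def synchronized_check():
    # Fixed pattern: join (i.e. properly wait) before checking.
    result["ready"] = False
    worker = threading.Thread(target=background_work)
    worker.start()
    worker.join()
    return result["ready"]

print(flaky_style_check())   # False: the check raced ahead of the worker
print(synchronized_check())  # True: synchronization makes it deterministic
```

The same structure underlies the async-wait fixes observed in the surveyed fix commits: adding a proper wait (join, callback, or polling with a condition) rather than reading shared state immediately.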
\textbf{\emph{Async-wait}} is investigated as one of the major causes of flakiness under concurrency. This occurs when an application or test makes an asynchronous call and does not correctly wait for the result before proceeding. This category accounts for nearly half of the studied flaky test fixing commits \citeS{S1}. Thorve et al. \citeS{S7} and Luo et al. \citeS{S1} classified async-wait related flakiness under concurrency. Lam et al. \citeS{S10, S6} reported async-wait as the main cause of flakiness in Microsoft projects. Other articles cited async-wait in relation to root-cause identification \citeS{S6}, detection \citeS{S15} and analysis \citeS{S72}. Luo et al. \citeS{S1} identified an additional subcategory, \textit{``bug in condition''}, for concurrency-related flakiness, where the guard that determines which thread can execute the protected code is either too restrictive or too permissive. Concurrency is also identified as a cause in articles on detection \citeS{S6}, \citeS{S15}, \citeS{S14} and \citeS{S7}. Another subcategory, identified from browser applications, is event races~\citeS{S1008}. \noindent\textbf{Test order dependency. } The test independence assumption implies that tests can be executed in any order and produce expected outcomes. This is not the case in practice \citeS{S40}, as tests may exhibit different behaviour when executed in different orders. This is due to shared state, which can be either in memory or external (e.g. file system, database). Tests can expect a particular state before they exhibit the expected outcome, which can differ if the state is not set up correctly or reset. There can be multiple sources of test order dependency: instances of shared state can take the form of explicit or implicit data dependencies in tests, or even bugs such as resource leaks or failure to clean up resources between tests. Luo et al. \citeS{S1} listed these as separate root causes: resource leaks and I/O.
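The shared-state mechanism behind order-dependent tests can be sketched as follows (a Python illustration with hypothetical test names, not code from any surveyed study): the same two tests pass in one order and fail in the other, because one of them pollutes module-level state that the other implicitly assumes is clean.

```python
# Module-level state shared by the two tests below.
cache = {}

def test_default_is_empty():
    # Implicitly assumes a clean state: passes only if no earlier
    # test has populated the shared cache.
    return len(cache) == 0

def test_populates_cache():
    cache["user"] = "alice"
    return cache["user"] == "alice"

def run_suite(tests):
    cache.clear()  # fresh shared state for each suite run
    return [test() for test in tests]

# Reader before writer: both tests pass.
print(run_suite([test_default_is_empty, test_populates_cache]))  # [True, True]
# Writer before reader: leaked state makes the reader fail.
print(run_suite([test_populates_cache, test_default_is_empty]))  # [True, False]
```

Rerunning the suite under varied orderings, as the detection tools discussed later do, manifests exactly this kind of dependency.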
\noindent\textbf{\emph{Resource leak}} can be a source of test order dependency when the code under test (CUT) or test code does not properly manage shared resources (e.g. obtaining a resource and not releasing it). Empirical studies discussing resource-leak related flakiness include \citeS{S1} and \citeS{S10}, as well as other studies on root cause analysis, such as \citeS{S6} and \citeS{S210}, that find instances of flakiness in test code due to improper management of resources. Luo et al. \citeS{S1} identified I/O as a potential cause of flakiness. An example is code that opens and reads from a file without closing it, leaving it to the garbage collector to manage. If a test attempts to open the same file, it would only succeed if the garbage collector had processed the previous instance. In Luo et al. \citeS{S1}, 12\% of test flakiness is due to order dependency. Articles that cite order dependency include those proposing detection methods \citeS{S147, S4} and an experimental study of flakiness in generated test suites \citeS{S110}. A shared state can also arise from \emph{incorrect/flaky API usage} in tests: tests may intermittently fail if programmers use such APIs without accounting for their behaviour. Dutta et al. \citeS{S14} discussed this in a study of machine learning applications and cite an example where the underlying cause is state shared between two tests that use the same API, with one of the tests not resetting the fixture before the second executes. \noindent\textbf{Network. } Another common cause of flaky tests relates to network issues (connections, availability, and bandwidth). This has two subcategories: local and remote issues. Local issues pertain to managing resources such as sockets (e.g. contention with other programs for ports that are hard-coded in tests), while remote issues concern failures in connecting to remote resources. Mor{\'a}n et al.
\citeS{S99} studied network bandwidth in localizing flakiness causes. In a study of Android projects \citeS{S7}, network issues were identified as the cause of flakiness in 8\% of the studied flaky tests. \noindent\textbf{Randomness. } Tests or the code under test may depend on randomness, which can result in flakiness if the test does not consider all possible random values that can be generated. This is listed as a main cause by Luo et al. \citeS{S1}. Dutta et al. \citeS{S14} identified subcategories of randomness in their investigation of flaky tests in probabilistic and machine learning applications. Such applications rely on machine learning frameworks that provide operations for inference and training, which are largely nondeterministic in nature, making tests challenging to write. The applications studied are written in Python and use the main ML frameworks for the language. Dutta et al. analysed 75 bugs/commits linked to flaky tests and obtained three cause subcategories: 1) \textit{algorithmic nondeterminism}, 2) \textit{incorrect/flaky API usage} and 3) \textit{hardware}, which they regard as subcategories of the \textit{randomness} category in \citeS{S1}. The most common cause identified was algorithmic nondeterminism. They also present a technique to detect flaky tests due to such assertions; evaluating it on 20 projects, they found 11 previously unknown flaky tests. Within ML applications, the subcategories identified are \emph{algorithmic non-determinism} and \emph{unsynchronized seeds}. In these applications, developers use small datasets and models as test input, expecting the results to converge to values within an expected range, and add assertions to check that the inferred values are close to the expected ones. As there is a chance that the computed value falls outside the expected range, this may result in flaky outcomes.
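The unsynchronized-seed and too-restrictive-range causes can be illustrated with a small Python sketch (the computation below is an invented stand-in for a stochastic ML routine, not code from the surveyed studies):

```python
import random
import statistics

def stochastic_computation(seed=None):
    # Stand-in for a stochastic training/inference routine:
    # the mean of 100 uniform random samples.
    rng = random.Random(seed)
    return statistics.mean(rng.random() for _ in range(100))

# Flaky pattern: an unseeded run combined with a too-restrictive
# assertion range; the value drifts between runs and occasionally
# falls outside a tight bound such as (0.49, 0.51).
unseeded = stochastic_computation()

# Fix 1: pin the seed so every run computes exactly the same value.
assert stochastic_computation(seed=42) == stochastic_computation(seed=42)

# Fix 2: use a statistically safe assertion range instead of a tight one
# (the mean of 100 uniform samples is overwhelmingly likely to land here).
assert 0.3 < unseeded < 0.7
```

The two fixes mirror the response strategies reported in the literature: synchronizing seeds across modules, or widening assertion bounds to statistically safe ranges.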
Tests in ML applications may also use multiple libraries that need sources of randomness, and flakiness can arise if different random number seeds are used across these modules or if the seeds are not set. We include a related category here, \emph{too restrictive ranges}, identified in \citeS{S8}. This is due to output values falling outside assertion ranges or values determined at design time.
\begin{landscape}
\begin{table}
\caption{Causes of flaky tests}
\label{tab:causes}
\resizebox{0.70\linewidth}{!}{
\begin{tabular}{@{}llp{10 cm}l@{}}
\toprule
\textbf{Main category} & \textbf{Sub-category} & \textbf{Description} & \textbf{Example Articles} \\ \midrule
Concurrency & Synchronization & Asynchronous call in test (or CUT) without proper synchronization before proceeding & \citeS{S1 }, \citeS{S7 }, \citeS{S10 }, \citeS{S6 } \\
& & & \citeS{S15 }, \citeS{S72 } \\
& Event races & Event racing due to a single UI thread and async events triggering UI changes & \citeS{S16, S93 } \\
& Bugs & Other concurrency bugs (deadlocks, atomicity violations, different threads interacting in a non-desirable manner) & \citeS{S1 } \\
& Bug in condition & A condition that inaccurately guards what thread can execute the guarded code. & \citeS{S1 } \\
\midrule\\
Test order dependency & Shared state & Tests having the same data dependencies can affect test outcome. & \citeS{S147 }, \citeS{S4 }, \citeS{S110 }\\
& I/O & Local files & \citeS{S1, S357 } \\
& Resource leaks & When an application does not properly manage the resources it acquires & \citeS{S1 }, \citeS{S10 }, \citeS{S6 }, \citeS{S210 } \\
\midrule\\
Network & Remote & Connection failure to remote host (latency, unavailability) & \citeS{S1, S357 }\\
& Local & Bandwidth, local resource management issues (e. g. port collisions)& \\
\midrule\\
Randomness & Data & Input data or output from the CUT & \citeS{S6, S545 } \\
& Randomness seed & If the seed is not fixed in either the CUT or test it may cause flakiness. & \citeS{S14 } \\
& Stochastic algorithms & Probabilistic algorithms where the result is not always the same. & \citeS{S217 }\\
& Too restrictive range & Valid output from the CUT are outside the assertion range. & \citeS{S8 } \\
\midrule\\
Platform dependency & Hardware & Environment that the test executes in (Development/Test/CI or Production. ) & \citeS{S1, S14, S548, S210 } \\
& OS & Varying operating system & \citeS{S42, S99 } \\
& Compiler & Difference in compiled code & \citeS{S1019 } \\
& Runtime & e.g., languages with virtual runtimes (Java, C\#, etc.) & \citeS{S1 } \\
& CI infra flakiness & Build failures due to infrastructure flakiness. & \citeS{S1007 }\\
& Browser & A browser may render objects differently affecting tests. & \citeS{S42 } \\
\midrule\\
External state/behaviour & Reliance on production service & Tests rely on production data that can change. & \\
dependency & & & \\
& Reliance on external resources & Databases, web, shared memory, etc. & \citeS{S23, S39 } \\
& API changes & Evolving REST APIs due to changing requirements & \\
& External resources & Relying on data from external resources (e. g., REST APIs, databases) & \citeS{S23, S713 } \\
\midrule\\
Environmental dependencies & & Memory and performance & \citeS{S42 } \\
\midrule\\
Hardware & Screen resolution & UI elements may render differently on different screen resolutions causing problems for UI tests & \\
& Hardware faults & & \citeS{S210 } \\
\midrule\\
Time & Timeouts & Test case/test suite timeouts. & \citeS{S8 } \\
& System date/time & Relying on system time can result in non-deterministic failures (e.g., midnight UTC changes, daylight saving time). & \\
\midrule\\
Other & Tests with manual steps & & \citeS{S1025 } \\
& Code transformations & Random amplification/instrumentation can cause flaky tests & \citeS{S258 } \\
\bottomrule
\end{tabular}}
\end{table}
\end{landscape}
\noindent\textbf{Platform dependency. } This causes flakiness when a test is designed to pass on a specific platform but unexpectedly fails when executed on another. A platform could be the hardware and OS, as well as any component of the software stack that test execution/compilation depends on. Tests that rely on a platform may fail on alternative platforms due to missing preconditions or performance differences across them. The cause was initially described by Luo et al. \citeS{S1}, though it was not in the list of 10 main causes as it was a small category; it is discussed in more detail in \citeS{S8}. Thorve et al. \citeS{S7} reported that dependency flakiness in Android projects is due to hardware, OS version or third-party libraries; their study consisted of 29 Android projects containing 77 flakiness-related commits. We also include \emph{implementation dependency}, differences in compilation \citeS{S1019} and infrastructure flakiness \citeS{S1007} under this category. Infrastructure flakiness can also stem from issues in setting up the required infrastructure for test execution, such as provisioning Virtual Machines (VMs)/containers and downloading dependencies, which can result in flakiness. Environmental dependency flakiness due to dynamic aspects (performance or resources) is also included in this category. \noindent\textbf{Dependencies on external state/behaviour. } We include in this category flakiness due to changes in external dependencies, either state (e.g. reliance on external data from databases or obtained via REST APIs) or behaviour (changes or assumptions about the behaviour of third-party libraries). Thorve et al.~\citeS{S7} included this under platform dependency. \noindent\textbf{Hardware. } Some ML applications/libraries may use specialized hardware, as discussed in \citeS{S14}. If the hardware produces nondeterministic results, this can cause flakiness.
An example is an accelerator that performs floating-point computations in parallel: the ordering of the computations can produce nondeterministic values, leading to flakiness in tests. Note that this is distinct from platform dependency, which can also be at the hardware level, for instance different processors or Android hardware. \noindent\textbf{Time. } Variations in time are also a cause of test flakiness (e.g. midnight changes in the UTC time zone, daylight saving time), as are differences in time precision across platforms. Time is listed as a cause in the root cause identification by Lam et al. \citeS{S6}. A new subcategory, \textit{timeouts}, is listed by developers in the survey in \citeS{S8}: test cases may time out nondeterministically, e.g. by failing to obtain prerequisites or not completing execution within the specified time due to flaky performance. A similar cause is test suite timeouts, where no specific test case is responsible. Both of these causes were identified in the developer survey reported in \citeS{S8}. Time precision across OSs, platforms and different time zones is also listed under this category \citeS{S39}. \noindent\textbf{Other causes. } We include here causes listed in articles that may already relate to the major causal categories above. Thorve et al. \citeS{S7} listed \emph{program logic} as one of them: cases where programmers have made incorrect assumptions about the code's behaviour, which can result in flaky tests. The authors cite an example where the Code Under Test (CUT) may nondeterministically raise an I/O exception and the exception handling throws a runtime exception, causing the test to crash in that scenario. \emph{UI flakiness} can be caused by developers either not understanding UI behaviour or incorrectly coding UI interactions \citeS{S7}.
UI flakiness can also be caused by concurrency (e.g., event races or async-wait) or platform dependency (e.g., dependence on the availability of a display, or on a particular browser \citeS{S99}). \emph{Floating-point operations. } Floating-point operations can be a cause of flakiness as they can be non-deterministic due to non-associative addition, overflows and underflows, as described in \citeS{S1}. This is also discussed in the context of machine learning applications \citeS{S14}; concurrency, hardware and platform dependency can all be sources of nondeterminism in floating-point operations. Luo et al. \citeS{S1} identified \emph{unordered collections}, where outcomes vary due to a test's incorrect assumptions about an API. An example is sets, whose specifications can be underdetermined: code may assume behaviour such as a collection's iteration order observed in a certain execution/implementation, which is not deterministic. \subsubsection{Ontology of causes of flaky tests}
\label{sec:ontology}
Fig.~\ref{fig:ontologycause} illustrates the different causes of flakiness. The figure uses Web Ontology Language (OWL) \cite{mcguinness2004owl} terminology such as classes, subclasses and relations. We identify classes for causes of flakiness and flaky tests. Subclass relationships between classes are named `kindOf', and `causes' is the relation denoting causal relationships. Note that not all identified causes are shown in the diagram; for instance, causes listed under the other category may be due to sources already shown, such as UI flakiness being due to platform dependency or environmental dependency. An example that demonstrates the complex causal nature of flakiness is in \citeS{S14}, where the cause of flakiness is a hardware accelerator for deep learning that performs fast parallel floating point computations. As different orderings of floating point operations can produce different outputs, this leads to test flakiness; in this case, the causes are a combination of \textit{hardware}, \textit{concurrency}, and \textit{floating point operations}. Network uncertainty can be attributed to multiple reasons, for instance connection failure and bandwidth variance. Stochastic algorithms exhibit randomness, and concurrency-related flakiness can be due to concurrency bugs such as races and deadlocks. Finally, order dependency is due to improper management of resources (e.g. leaks or not cleaning up after I/O operations) or hidden state sharing that may manifest in flakiness. A number of varying factors underlie those causes. For instance, \textit{random seed variability} can cause randomness-related flakiness, and scheduling variability causes concurrency-related flakiness. \textit{Test execution order variability}, which causes order-dependent test flakiness, and types of \textit{platform variability} (e.g.
hardware and browser that can, for instance, manifest in UI flakiness) are additional dimensions of variability. \begin{qoutebox}{white}{}
\textbf{RQ1 summary. }
Numerous causes of flakiness have been identified in the literature, with factors related to concurrency, test order dependency, network availability and randomness being the most common causes of flaky test behaviour. Other factors specific to certain types of systems, such as \textit{algorithmic nondeterminism} and \textit{unsynchronised seeds}, impact testing in ML applications. There are also causal relationships between some of these factors (i.e., they impact each other; for example, UI flakiness is mostly due to concurrency issues). \end{qoutebox}
\begin{figure*}[h]
\centering
\includegraphics[width=0.9 \linewidth]{media/causes. png}
\caption{Relationships between the different causes of flaky tests}
\label{fig:ontologycause}
\end{figure*}
\subsection{Flaky Tests Detection (RQ2 )}
\label{sec:results:detection}
One of the dimensions we studied is how flaky tests are identified and/or detected. In this section, we present methods used to detect flaky tests and to identify or locate the causes of flakiness, making a distinction between these goals in our listing of techniques found in the reviewed literature. We look at methods identified in both academic and grey literature. RQ2 is divided into two sub-questions; we answer each separately below:
\subsubsection{Methods Used to Detect Flaky Tests (RQ2.1 )}
There are two distinct approaches to the detection of flaky tests: dynamic techniques that involve the execution of tests, and static techniques that rely only on analysing the test code without executing it. Figure~\ref{fig:taxonomydetection} depicts a broad overview of these strategies. Dynamic methods are based mostly on multiple test runs, often combined with techniques that perturb specific variability factors (e.g. environment, test execution order, event schedules or random seeds) to manifest flakiness quickly. There is one study on using program repair \citeS{S28} to induce test flakiness and two studies on using differential coverage to detect flakiness without resorting to reruns \citeS{S2}. Under static approaches, studies have employed machine learning (3 studies), model checking for implementation-dependent tests, and similarity-pattern techniques for identifying flaky tests. Only two studies leverage hybrid approaches (one for order-dependent tests and another for async-wait). \\
\noindent\textbf{Static methods}: Static approaches that do not execute tests are mostly classification-based, using machine learning techniques \citeS{S607, S118, S31}. Other static methods use pattern matching \citeS{S357} and association rule learning \citeS{SB1}. Model checking using Java PathFinder \cite{visser2003model} has also been used to detect flakiness due to implementation-dependent tests \citeS{SB28}. \\
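As an illustration of the pattern-matching style of static detection, the following is a simplified Python sketch (the patterns and test code are invented for illustration; this is not the actual technique of any surveyed tool). It flags test lines referencing constructs commonly associated with flakiness, such as fixed sleeps, wall-clock time, randomness or network access:

```python
import re

# Hypothetical patterns associated with flakiness-prone test code.
SUSPICIOUS_PATTERNS = [
    r"\btime\.sleep\b",    # async-wait via fixed delays
    r"\bdatetime\.now\b",  # dependence on system time
    r"\brandom\.",         # unseeded randomness
    r"\brequests\.",       # remote network access
]

def flag_suspicious_lines(test_source):
    """Return (line_number, line) pairs matching any flakiness pattern."""
    hits = []
    for number, line in enumerate(test_source.splitlines(), start=1):
        if any(re.search(pattern, line) for pattern in SUSPICIOUS_PATTERNS):
            hits.append((number, line.strip()))
    return hits

example_tests = """\
def test_eventually_ready():
    start_job()
    time.sleep(2)  # fixed delay: an async-wait smell
    assert job_done()

def test_pure_logic():
    assert add(2, 2) == 4
"""
print(flag_suspicious_lines(example_tests))  # flags only the sleep line
```

Such lexical heuristics trade precision for speed, which is why the literature pairs them with dynamic confirmation or machine-learned classifiers.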
Ahmad et al. \citeS{S607} evaluated a number of machine learning methods for predicting flaky tests. They used projects from the iDFlakies dataset \citeS{S4}. There is also a suggestion that the evaluation covered another language (Python) besides the data from the original dataset (which is in Java), though this is not made clear and the set of Python programs or tests is not listed. The study built on the work of Pinto et al. \citeS{S11}, an evaluation of five machine learning classifiers (Naive Bayes, Random Forest, Decision Tree, Support Vector Machine and Nearest Neighbour) for predicting flaky tests. In comparison to \citeS{S11}, the study of Ahmad et al. \citeS{S607} answers two additional research questions: how the classifiers perform on another programming language, and what the predictive power of the classifiers is. Another static technique, based on patterns in code, has also been used to predict flakiness \citeS{S357}. \\
\noindent\textbf{Dynamic methods:} Dynamic techniques to detect flakiness are built on executions of tests (single or multiple runs). These techniques centre on making reruns less expensive by accelerating ways to manifest flakiness, i.e., using fewer reruns or rerunning fewer tests. Methods to manifest flakiness include varying causal factors such as random number seeds \citeS{S14}, event order \citeS{S93}, environment (e.g. browser, display) \citeS{S99}, and test ordering \citeS{S147, S4}. Test code has also been varied using program repair \citeS{S28} to induce flakiness. Fewer tests are run by selecting them based on differential code coverage or on state dependencies. \\
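The core of rerun-based dynamic detection can be sketched in a few lines of Python (illustrative only; real tools add order, seed and environment perturbation plus far more bookkeeping). The nondeterminism here is simulated with a call counter so that the sketch itself behaves deterministically:

```python
def detect_flaky(test_fn, reruns=10):
    # Rerun-based detection: on a fixed code version, a test whose
    # outcome varies across reruns is flagged as flaky.
    outcomes = {test_fn() for _ in range(reruns)}
    return len(outcomes) > 1

# Simulated nondeterminism: this test passes only on even invocations.
calls = {"count": 0}
def intermittent_test():
    calls["count"] += 1
    return calls["count"] % 2 == 0

def stable_test():
    return True

print(detect_flaky(intermittent_test))  # True: outcomes differ across reruns
print(detect_flaky(stable_test))        # False: outcome is stable
```

The cost of this naive scheme grows with the rerun budget, which is precisely what the accelerated approaches above aim to reduce.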
\noindent\textbf{Hybrid methods:} Dynamic and static techniques are known to make different trade-offs between desirable attributes such as recall, precision and scalability \cite{ernst2003static}. As in other applications of program analysis, hybrid techniques have been proposed to combine the strengths of different techniques while avoiding their weaknesses. One of the tools, FLAST \citeS{S118}, proposes a hybrid approach: the tool uses a static technique but suggests that dynamic analysis can be used to detect cases it misses. Malm et al. \citeS{S72} proposed a hybrid analysis approach to detect delays used in tests that cause flakiness. Zhang et al. \citeS{S591} proposed a tool for dependent-test detection, using static analysis to determine side-effect-free methods whose field accesses are ignored when determining inter-test dependence in the dynamic analysis. Some tools stated earlier under static methods (e.g., \citeS{S607}) may need access to historic execution data for analysis or training. \begin{figure*}[htp]
\centering
\includegraphics[width=\linewidth]{media/detection. png}
\caption{Taxonomy of detection methods}
\label{fig:taxonomydetection}
\end{figure*}
\subsubsection{Tools to Detect Flaky Tests (RQ2.2 )}
Table~\ref{tab:detection} lists the tools that detect test flakiness, which are described in the literature. Most of the tools detect flakiness manifested in test outcomes. The majority of the tools found in academic articles work on Java programs, with only three for Python and a single tool for JavaScript. These tools can be grouped by the source of flakiness they target: UI, test order, concurrency and platform dependency (implementation dependency to a particular runtime). Some of these tools identify the cause of flakiness as well (which may already be a part of the tool's output if the source of flakiness they detect is closely associated with a cause: e. g., test execution order dependency arising from a shared state can be detected by executing tests under different orders). \\
\begin{landscape}
\begin{table*}[!b]
\caption{Flaky tests detection tools as reported in academic studies}
\centering
\label{tab:detection}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{@{}lllllll@{}}
\toprule
\textbf{Detects} & \textbf{Target} & \textbf{Language} & \textbf{Approach} & \textbf{Method} & \textbf{Tool} & \textbf{Ref.} \\
\midrule
Outcomes & Order & Java & Dynamic & Rerun (Vary orders) & - & \citeS{S1014 } \\
Outcomes & Android & Java & Dynamic & Rerun (Vary event schedules) & FlakeScanner & \citeS{S1008 } \\
Outcomes & General & Java & Dynamic & Rerun (twice) & - & \citeS{S1011 } \\
Cause & Web & Java & Dynamic & Rerun (Vary environment) & FlakyLoc & \citeS{S99 } \\
Location & General & - & Dynamic & Log analysis & RootFinder & \citeS{S6 } \\
Outcomes & UI & Java & Dynamic & Rerun (Vary event schedules) & FlakeShovel & \citeS{S16 } \\
Outcomes & General & Java & Hybrid & Machine learning & FlakeFlagger & \citeS{S1027 } \\
Outcome & General & Mixed & Dynamic & Rerun (Environment) & - & \citeS{S27 } \\
Outcomes & General & Java & Static & Machine learning & - & \citeS{S607 } \\
Outcomes & ML & Python & Dynamic & Rerun (Vary random number seeds) & FLASH & \citeS{S14 } \\
Outcomes & Concurrency & JavaScript & Dynamic & Rerun (Vary event order) & NodeRacer & \citeS{S93 } \\
Outcomes & General & Java & Static & Machine learning (test code similarity) & FLAST & \citeS{S118 } \\
Outcomes & General & Python & Dynamic & Rerun (Vary test code) & FITTER & \citeS{S28 } \\
Outcomes & Concurrency & Java & Dynamic & Rerun (Add noise to environment) & Shaker & \citeS{S24 } \\
Outcomes & General & Python & Dynamic & Test execution history & GreedyFlake & \citeS{S78 } \\
Outcomes & General & Java & Dynamic & Rerun & iDFlakies & \citeS{S4 } \\
Location & Assumptions & Java & Dynamic & Rerun (Vary API implementation) & NonDex & \citeS{S152 } \\
Cause & Order & Java & Dynamic & Rerun and delta debugging & iFixFlakies & \citeS{S135 } \\
Outcomes & General & Java & Dynamic & Differential coverage & DeFlaker & \citeS{S2 } \\
& & & & and test execution history & & \\
Cause, location & General & C++ / Java & Dynamic & Rerun & Flakiness & \citeS{S25 } \\
& & & & & Debugger & \\
& UI & JavaScript & Dynamic & Machine learning (Bayesian network) & - & \citeS{S31 }\\
Cause & Order & Java & Dynamic & - & PolDet & \citeS{S460 }\\
Outcome & General & - & Static & Machine learning & Flakify & \citeS{S1022 }\\
Cause, Outcome & IO/Concurrency/Network & - & Dynamic & Rerun in varied containers & - & \citeS{S19 }\\
Cause, Outcome & - & - & Dynamic & Rerun in varied containers & TEDD & \citeS{S274 }\\
Cause, Outcome & - & C & Static & Dependency analysis & - & \citeS{S780 }\\
Outcomes & Order & Java & Dynamic & Rerun (Dynamic dataflow analysis) & PRADET & \citeS{S147 } \\
Outcomes & Order & Java & Dynamic & Rerun (Vary order) & DTDetector & \citeS{S591 } \\
Outcomes & Order & Java & Dynamic & Rerun (Dynamic dataflow analysis) & ElectricTest & \citeS{S391 } \\
Outcome & Order and Async/Wait & Java & Static & Pattern matching & - & \citeS{S357 }\\
Outcome & Order & Python & Dynamic & Rerun (varying orders) & iPFlakies & \citeS{S1031 }\\
Outcome & - & Multilanguage & Dynamic & Machine learning & Fair & \citeS{S1050 }\\
Outcome & Order & Java & Static & Model checking & PolDet (JPF) & \citeS{S1051 }\\
Outcome & Nondeterminism & Java & Static & Type checking & Determinism Checker & \citeS{S1054 }\\
\bottomrule
\end{tabular}}
\end{table*}
\end{landscape}
FlakyLoc \citeS{S99} does not detect flaky tests; rather, it identifies the causes of a given flaky test. The tool executes the known flaky test under different environment configurations, composed of environment factors (i.e., memory sizes, CPU cores, browsers and screen resolutions) that are varied in each execution. The results are analysed using a spectrum-based localization technique \cite{wong2016survey}, which ranks the factors that cause flakiness and assigns each a suspiciousness value to determine the most likely ones. The tool was evaluated on a single flaky test from a Java web application (which has several end-to-end flaky tests); the results for this particular test indicate that the technique successfully ranks the cause of flakiness (e.g., low screen resolution). \\
RootFinder \citeS{S6} identifies causes as well as the locations in code that cause flakiness. It can identify a number of causes (network, time, I/O, randomness, floating-point operations, test order dependency, unordered collections and concurrency). The tool adds instrumentation at API calls during test execution, which can log interesting values (time, context, return value) as well as add additional behaviour (e.g., a delay, to expose causes involving concurrency and async wait). After execution, the logs are analysed by evaluating predicates (e.g., whether the return value was the same at this point compared to previous runs) at each logged point. Predicates that evaluate consistently within passing runs and within failing runs, but differently between the two, are likely to be useful in identifying causes, as they can explain what was different between passing and failing runs. \\
DeFlaker \citeS{S2} detects flaky tests using differential coverage, avoiding reruns (which can be expensive). If a test has a different outcome compared to a previous run and the code covered by the test has not changed, then it can be determined to be flaky.
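A minimal Python sketch of the differential-coverage idea behind DeFlaker follows; the file names, coverage data and function are hypothetical illustrations of the rule, not DeFlaker's implementation.

```python
def is_flaky(prev_outcome, new_outcome, covered_files, changed_files):
    """DeFlaker-style check (simplified): a test whose outcome flips without
    executing any changed code is deemed flaky."""
    outcome_changed = prev_outcome != new_outcome
    touches_change = bool(covered_files & changed_files)
    return outcome_changed and not touches_change

changed = {"src/parser.py"}                               # files edited in this commit
coverage = {
    "test_parser": {"src/parser.py", "src/lexer.py"},     # per-test coverage maps
    "test_cache":  {"src/cache.py"},
}

# test_parser's new failure covers changed code -> a genuine regression signal.
print(is_flaky("pass", "fail", coverage["test_parser"], changed))  # False
# test_cache flips without touching the change -> flagged as flaky.
print(is_flaky("pass", "fail", coverage["test_cache"], changed))   # True
```

The appeal of the rule is that it needs only one run plus coverage data, rather than repeated reruns.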
The same study also examines whether a particular rerun strategy has an impact on flakiness detection. With Java projects, there can be many such strategies (e.g., five reruns in the same JVM, forking each rerun in a new JVM, or rebooting the machine and cleaning build-generated files between runs). NodeRacer \citeS{S93}, Shaker \citeS{S24} and FlakeShovel \citeS{S16} specifically detect concurrency-related flakiness. NodeRacer analyses JavaScript programs and accelerates the manifestation of event races that can cause test flakiness. It uses instrumentation to build a model of happens-after relations between callbacks. During its guided execution phase, this relation is used to postpone events such that callback interleavings remain realistic with respect to actual executions. Shaker is reported to expose flakiness faster than plain rerun by adding noise to the environment, in the form of tasks that stress the CPU and memory while the test suite is executed. FlakeShovel targets the same type of cause as NodeRacer by similarly exploring different yet feasible event execution orders, but for GUI tests in Android applications. \\
A number of detection tools are built to detect order-dependent tests. iDFlakies \citeS{S4} reruns tests while randomizing their execution order, and classifies flaky tests into two types: order-dependent and non-order-dependent. In this category there are four more studies: DTDetector \citeS{S591}, ElectricTest \citeS{S391}, PolDet \citeS{S460} and PRADET \citeS{S147}. DTDetector presents four algorithms to check for dependent tests whose dependence is manifested in test outcomes: reversal of the test execution order, random test execution orders, an exhaustive bounded algorithm (which executes bounded subsequences of the test suite instead of trying all permutations), and a dependence-aware bounded algorithm that only tests subsequences that have data dependencies. ElectricTest uses a more sophisticated check for data dependencies between tests: while DTDetector checks for writes/reads to/from static fields, ElectricTest checks for changes to any memory reachable from static fields. PRADET uses a similar technique to check for data dependencies, but it also refines the output by checking for manifest dependencies, i.e., data dependences that also influence flakiness in test outcomes. Wei et al. \citeS{S1014} used a systematic and probabilistic approach to explore the most effective orders for manifesting order-dependent flaky tests. Whereas tools such as PRADET and DTDetector explore randomized test orders, Wei et al. analyse the probability of randomized orders detecting flaky tests, and they propose an algorithm that explores consecutive tests to find all order-dependent tests that depend on one test. Wei et al. \citeS{S1011} also discussed a class of flakiness due to shared state, non-idempotent-outcome (NOP) tests, which are detected by executing the same test twice in the same VM. NonDex \citeS{S152} is the only tool we found that detects flakiness caused by implementation dependency.
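The order-dependence checks performed by tools such as DTDetector can be illustrated with a toy Python sketch. The polluter/victim tests and the shared `state` dictionary are hypothetical; real tools instrument JVM static fields rather than plain functions, and only explore bounded subsequences rather than all permutations.

```python
import itertools

state = {"cache": None}          # stands in for shared static state

def test_writes():               # polluter: leaves shared state behind
    state["cache"] = "dirty"
    return True

def test_reads():                # victim: fails if it runs after the polluter
    return state["cache"] is None

SUITE = [("test_writes", test_writes), ("test_reads", test_reads)]

def run_in_order(order):
    state["cache"] = None        # fresh process/VM for each suite run
    return {name: fn() for name, fn in order}

# DTDetector-style exhaustive order exploration (feasible only for tiny suites).
results = {}
for perm in itertools.permutations(SUITE):
    for name, outcome in run_in_order(perm).items():
        results.setdefault(name, set()).add(outcome)

order_dependent = sorted(n for n, outs in results.items() if len(outs) > 1)
print(order_dependent)           # ['test_reads']
```

A test whose outcome varies across orders is flagged; note that it is the victim (`test_reads`), not the polluter, that surfaces as order-dependent.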
The class of such dependencies it detects is limited to dependencies due to assumptions developers make about underdetermined APIs in the Java standard libraries, for instance the iteration order of data structures that use hashing in their internal representation, such as Java's \texttt{HashMap}. A number of studies discussed machine learning approaches for flakiness prediction. Pontillo et al. \citeS{S1004} studied test and production code factors that can be used to predict test flakiness with classifiers; their evaluation uses a logistic regression model. Haben et al. \citeS{S1005} reproduced a Java study \citeS{S11} with a set of Python programs to confirm the effectiveness of code vocabularies for predicting test flakiness. Camara et al. \citeS{S1012} presented another replication of the same study, extending it with additional classifiers and datasets. Parry et al. \citeS{S1034} presented an evaluation of static and dynamic features that are more effective as predictors of flakiness than previous feature sets. Camara et al. \citeS{S1017} evaluated the use of test smells to predict flakiness. \\
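The vocabulary-based prediction idea can be sketched in a few lines of Python. The tiny labeled corpus, the token weights and the threshold rule are illustrative stand-ins for the real classifiers (e.g., logistic regression over token features) used in the studies above.

```python
from collections import Counter

# Tiny hypothetical corpus of tokenised test bodies, labeled flaky/stable.
FLAKY = [["sleep", "assert", "thread", "start"],
         ["random", "seed", "assert", "timeout"]]
STABLE = [["assert", "equals", "parse"],
          ["assert", "len", "list"]]

def token_weights(flaky, stable):
    """Score each token by how much more often it appears in flaky tests."""
    f = Counter(t for doc in flaky for t in doc)
    s = Counter(t for doc in stable for t in doc)
    vocab = set(f) | set(s)
    # add-one smoothing so tokens unseen in one class do not dominate
    return {t: (f[t] + 1) / (s[t] + 1) for t in vocab}

def predict_flaky(tokens, weights, threshold=1.0):
    """Average the per-token weights; above-threshold tests are predicted flaky."""
    score = sum(weights.get(t, 1.0) for t in tokens) / len(tokens)
    return score > threshold

w = token_weights(FLAKY, STABLE)
print(predict_flaky(["sleep", "timeout", "assert"], w))   # True
print(predict_flaky(["assert", "equals", "parse"], w))    # False
```

The attraction of such predictors is that they need no execution at all, at the cost of precision compared to rerun-based confirmation.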
\begin{qoutebox}{white}{}
\textbf{RQ2 summary. }
A number of methods have been proposed to detect flaky tests, including static, dynamic and hybrid methods. Most static approaches use machine learning. Rerun (in different forms) is the most common dynamic approach for detecting flaky tests. Approaches that use rerun focus on making flaky test detection less expensive, either by accelerating the manifestation of flakiness or by running fewer tests. \end{qoutebox}
\begin{table*}[!b]
\centering
\caption{Detection tools as reported in grey literature}
\label{tab:detection-industry}
\resizebox{\linewidth}{!}{
\begin{tabular}{p{3 cm}p{12 cm}l}
\toprule
\textbf{Tool} & \textbf{Features} & \textbf{Articles}\\
\midrule
Flakybot & Determines test(s) are flaky before merging commits. Flakybot can be invoked on a pull request, and tests will be exercised quickly and results reported & \citeG{G2 } \\
Azure DevOps Services & Feature that enables the detection of flaky tests (based on changes and through reruns) & \citeG{G6 } \\
Scope& Helps identify flaky tests, requiring a single execution based on the commit diff & \citeG{G8 } \\
Cypress & Automatically rerun (retries) a failed test prior to marking it as fail & \citeG{G9 } \\
Gradle Enterprise & Considers a test flaky if it fails and then succeeds within the same Gradle task & \citeG{G22 } \\
pytest-flakefinder \& pytest-rerunfailures & Rerun failing tests multiple times without having to restart pytest (in Python) & \citeG{G31 } \\
pytest-random-order \& pytest-randomly & Randomise test order so that it can detect flakiness due to order dependency and expose tests with state problems & \citeG{G31 } \\
BuildPulse & Detects and categorises flaky tests in the build by checking changes in test outcomes between builds (cross-language) & \citeG{G92 } \\
rspec-retry & Ruby scripts that rerun flaky \texttt{RSpec} tests and obtain a success rate metric & \citeG{G35 } \\
Quarantine & A tool that provides a run-time solution to diagnosing and disabling flaky tests and automates the workflow around test suite maintenance & \citeG{G36 } \\
protractor-flake & Rerun failed tests to detect changes in test outcomes & \citeG{G50 } \\
Shield34 & Designed to address the Selenium flaky tests issues & \citeG{G57 } \\
Bazel & Build and automated testing tool; provides an option to mark tests as flaky, which will skip those marked tests & \citeG{G58, G71 } \\
Flaky (pytest plugin) & Automatically rerunning failing tests & \citeG{G59, G67 } \\
Capybara & Contains an option to prevent against race conditions & \citeG{G68 } \\
Xunit.SkippableFact & Tests can be marked as \texttt{SkippableFact}, allowing control over test execution & \citeG{G70 } \\
timecop & Ruby library for testing time-dependent code & \citeG{G81, G96 } \\
Athena & Identifies commits that make a test nondeterministically fail and notifies the author. Automatically quarantines flaky tests & \citeG{G108 } \\
Datadog & Flaky test management through a visualisation of test outcomes, it shows which tests are flaky & \citeG{G116 } \\
CircleCI dashboard & The ``Test Insights'' dashboard provides information about all flaky tests, with an option to automate reruns of failed tests & \citeG{G122 } \\
Flaky-test-extractor-maven-plugin & Maven plugin that filters out flaky tests from existing surefire reports. It generates additional XML files just for the flaky tests & \citeG{G140 } \\
TargetedAutoRetry & A tool to retry just the steps most likely to cause flakiness issues (such as app launch and race-condition candidates) & \citeG{G213 } \\
JUnit Surefire plugin & An option to rerun failing tests in the Surefire plugin (\texttt{rerunFailingTestsCount}) & \citeG{G192 } \\
Test Failure Analytics & Gradle plugin that helps to identify flaky tests across different builds & \citeG{G142 } \\
Test Analyzer Service & An internal tool at Uber used to manage the state of unit tests and to disable flaky tests & \citeG{G149 } \\
TestRecall & Test analysis tool that provides insights about test suites, including tracking flaky tests & \citeG{G202 } \\
Katalon Studio & An option to retry all tests (or only failed tests) when the test suite finishes & \citeG{G203 } \\
\bottomrule
\end{tabular}}
\end{table*}
\subsection{Impact of Flaky Tests (RQ3 )}
Next, we explore the wider view of the impact flaky tests have on different aspects of software engineering. In addressing this research question, we look at the impact of flaky tests as discussed in the articles we reviewed, and then combine the evidence noted in academic and grey literature. We discuss this in detail in the following two subsections. \subsubsection{Impact Noted in Academic Research}
For each article we included in our review, we look at the context of flaky tests in the study. We classify the impact of flaky tests as reported in academic literature into the following three categories:
\begin{enumerate}
\item \textbf{Testing (including testing techniques):} the impact on the software testing process in general (i.e., impact on test coverage). \item \textbf{Product quality:} the impact on the software product itself and its quality.
\item \textbf{Debugging and maintenance:} the impact on techniques used in debugging and maintenance (e.g., program repair and fault localization).
\end{enumerate}
A summary of the impact noted in academic literature is shown in Table \ref{tab:impact_academic}. \\
\noindent \textbf{Impact on testing:} Many aspects of testing are affected by test flakiness. This includes automatic test generation \citeS{S330}, test quality characteristics \citeS{S57}, and techniques or tasks involved in test debugging and maintenance \citeS{S430}. A number of testing techniques are based on the assumption that tests have deterministic outcomes; when this assumption does not hold, they may not be reliable for their intended purposes. Test optimization techniques such as test suite reduction, test prioritization, test selection and test parallelization rely on this assumption. For instance, flakiness can manifest in order-dependent tests when test optimization is applied to test suites containing such tests. Lam et al. \citeS{S40} studied the necessity of dependent-test-aware techniques to reduce flaky test failures, first investigating the impact of flaky tests on three regression testing techniques: test prioritization, test selection and test parallelization. Other testing techniques impacted are test amplification \citeS{S1156}, simulation testing \citeS{S1041} and manual testing \citeS{S1081}. \\
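The way a dependent-test-unaware optimization can surface flakiness can be illustrated with a small hypothetical Python sketch (test names, runtimes and the shared `state` are invented): prioritizing tests by runtime reorders a hidden setup dependency and makes a previously passing test fail.

```python
state = {"db": None}             # shared state the tests implicitly communicate through

def test_setup_db():             # unknowingly provides state for test_query
    state["db"] = {"users": 3}
    return True

def test_query():                # order-dependent: assumes test_setup_db ran first
    return state["db"] is not None and state["db"]["users"] == 3

# (name, test function, runtime in seconds) -- runtimes are made up for illustration
SUITE = [("test_setup_db", test_setup_db, 2.0), ("test_query", test_query, 0.5)]

def run(order):
    state["db"] = None           # fresh run
    return [(name, fn()) for name, fn, _ in order]

original = run(SUITE)                                 # written order: all pass
prioritized = run(sorted(SUITE, key=lambda t: t[2]))  # fastest-first prioritization
print(original)      # [('test_setup_db', True), ('test_query', True)]
print(prioritized)   # [('test_query', False), ('test_setup_db', True)]
```

Nothing in the code changed between the two runs; only the order did, which is why such failures are reported as flaky rather than as regressions.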
\noindent \textbf{Impact on product quality:} Several articles cite how test flakiness breaks builds \citeS{S336, S256}. Testing drives automated builds, and flakiness can break them, delaying CI workflows. Zdun et al. \citeS{S429} highlighted how flaky tests can introduce noise into CI builds that affects service deployment and operation (microservices and APIs in particular). B{\"o}hme \citeS{S57} discussed flakiness as one of the challenges for test assurance, i.e., executing tests as a means to increase confidence in the software. Product quality can be affected by a lack of test stability, which is cited as an issue by Hirsch et al. \citeS{S197} in the context of a single Android application with many fragile UI tests. Several articles mention the cost of detecting flaky tests: Pinto et al.~\citeS{S11} pointed out that it can be costly to run detectors after each change, and hence organizations run them only on new or changed tests, which might not be the best approach as it affects recall. Vassallo et al. \citeS{S556} identified retrying failures to deal with flakiness as a CI smell, as it has a negative impact on the development experience by slowing down progress and hiding bugs. Mascheroni et al. \citeS{S1043} proposed a model to improve continuous testing, presenting test reliability as a level in the improvement model and flaky tests as a main cause of test reliability issues, and they suggest good practices to achieve this. \\
Multiple articles also discuss how test flakiness can affect developers, leading to a negative impact on product quality. This includes developers' perception of tests and the effort required to respond to events arising from test flakiness (build failures in CI, localizing causes, fixing faulty tests). Koivuniemi \citeS{S750} mentioned uncertainty and frustration caused by developers attributing flaky failures to errors in code where there are none. Eck et al.'s \citeS{S8} survey of developers' perceptions of flaky tests noted that flaky tests can have an impact on software projects, in particular on resource allocation and scheduling. \\
\noindent \textbf{Impact on debugging and maintenance:} Several techniques used in maintenance and debugging are known to be impacted by the presence of flaky tests. This includes all techniques that rely on tests, such as test-based program repair, crash reproduction, test amplification and fault localization, which can all be negatively impacted by flakiness. Martinez et al. \citeS{S252} reported a flaky test in Defects4J, a commonly used bug dataset, and how it can affect a repair system's effectiveness (if the flaky test fails after a repair, the system would conclude that the repair introduced a regression). Chen et al. \citeS{S962} explained that subpar-quality tests can affect their use for detecting performance regressions; flaky tests in particular may introduce noise and require multiple executions. Dorward et al. \citeS{S1049} proposed a more efficient approach to finding culprit commits in the presence of flaky tests, since bisection fails in this situation. \subsubsection{Impact Noted in Grey Literature}
We also analysed the impact of flaky tests as reported in grey literature articles. We checked whether there was any discussion of the impact of flaky tests on certain techniques, tools, products or processes. We classify the noted impact of flaky tests into the following three categories:
\begin{enumerate}
\item \textbf{Code-base and product:} the impact of flaky tests on the quality or the performance of the production code and the CUT. \item \textbf{Process:} the impact on the development pipeline and the delivery of the final product. \item \textbf{Developers:} the `social' impact of flaky tests on developers/testers. \end{enumerate}
\begin{table*}[]
\caption{Summary of the impact of flaky tests noted in academic literature}
\label{tab:impact_academic}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Impact Type} & \textbf{Impact} & \textbf{Reference} \\ \midrule
\textbf{Product quality} & Breaking builds & \citeS{S336, S256 } \\
& Service deployment and operation & \citeS{S429 } \\
& Test reliability & \citeS{S1043 } \\
& Test assurance & \citeS{S57 } \\
& Product quality & \citeS{S197 }\\
& Costly to detect & \citeS{S15, S11, S143 } \\
& Delays CI workflow & \citeS{S556, S136 }\\
& Maintenance effort & \citeS{S430 } \\
& Uncertainty and frustration & \citeS{S750 } \\
& Trust in tools and perception & \citeS{S395, S159 }\\ \midrule
\textbf{Testing} & Regression testing techniques & \citeS{S40 } \\
& Simulation testing & \citeS{S1041 } \\
& Test amplification & \citeS{S1156 } \\
& Test suite/case reduction & \citeS{S662, S444 } \\
& Mutation testing & \citeS{S174, S234 } \\
& Manual testing & \citeS{S1081 } \\
& Test minimization & \citeS{S526 } \\
& Test coverage (ignored tests) & \citeS{S211 } \\
& Test selection & \citeS{S207, S584 } \\
& Patch quality & \citeS{S269 } \\
& Test performance & \citeS{S704 } \\
& Test suite efficiency & \citeS{S565 } \\
& Test prioritization & \citeS{S73, S207 } \\
& Regressions & \citeS{S110 } \\
& Test suite diversity & \citeS{S73 } \\
& Test generation & \citeS{S330 } \\
& Differential testing & \citeS{S348 } \\
& Test assurance & \citeS{S57 } \\ \midrule
\textbf{Debugging and maintenance} & Program repair & \citeS{S520, S252, S1070 }\\
& Determining culprit commits & \citeS{S1049 } \\
& Performance analysis & \citeS{S962 } \\
& Bug reproduction & \citeS{S229 } \\
& Crash reproduction & \citeS{S762 } \\
& Fault localization & \citeS{S17, S692 } \\
\bottomrule
\end{tabular}}
\end{table*}
\begin{table*}[]
\caption{Summary of the impact of flaky tests as noted in grey literature}
\label{tab:impact_grey}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Impact Type} & \textbf{Impact} & \textbf{Reference} \\ \midrule
\textbf{Product} & Hard to debug & \citeG{G11, G52, G95 } \\
& Hard to reproduce & \citeG{G11 } \\
& Reduces test reliability & \citeG{G27, G103 } \\
& Expensive to repair & \citeG{G114 } \\
& \begin{tabular}[c]{@{}l@{}}Increase cost of testing as flaky \\ behaviour can spread to other tests\end{tabular} & \citeG{G8, G210 } \\ \midrule
\textbf{Developers}& Losing trust in builds & \citeG{G74, G81, G114, G127, G203 } \\
& Loss of productivity & \citeG{G8, G152, G165, G210 } \\
& Time-consuming / wastes time & \citeG{G22, G95, G107, G134, G142, G144, G147, G149 }\\
& Resource consuming & \citeG{G26, G30, G127 } \\
& Demotivate/mislead developers & \citeG{G22, G134 } \\
\midrule
\textbf{Delivery} & Affects the quality of shipped code & \citeG{G6, G129, G202 } \\
& Slows down deployment pipeline & \citeG{G22, G95, G114, G142, G154 } \\
& Slows down the development & \citeG{G45, G95, G98, G22, G110 } \\
& Loses faith in tests catching bugs & \citeG{G30, G89 }
\\
& Causes unstable deployment pipelines & \citeG{G35 } \\
& Slows down development and testing processes & \citeG{G45, G110 } \\
& Delays project release & \citeG{G107, G108, G213 } \\
\bottomrule
\end{tabular}}
\end{table*}
A summary of the impact noted in the grey literature is shown in Table \ref{tab:impact_grey}. We discuss each of these three categories below. \\
\noindent \textbf{Impact on the code-base and product:}
Several grey literature articles have discussed the wider impact of flaky tests on the production code and on the final software product. Among several issues reported by different developers, testers and managers, it was noted that the presence of flaky tests can significantly increase the cost of testing \citeG{G8, G36} and make the CUT hard to debug and reproduce \citeG{G11, G52, G95}. In general, flaky tests can be very expensive to repair and often require time and resources to debug \citeG{G95, G114}. They can also render end-to-end testing useless \citeG{G74} and reduce test reliability \citeG{G27, G103}. One notable area that flaky tests compromise is coverage: if a test is flaky enough that it can fail even when retried, then its coverage is already considered lost \citeG{G129}. Flaky tests can also spread and accumulate, as unfixed flaky tests can lead to more flaky tests in the test suite \citeG{G7, G8}. Fowler describes them as a \textit{``virulent infection that can completely ruin your entire test suite"} \citeG{G4}. \\
Flaky tests can have serious implications in terms of the time and resources required to identify and fix potential bugs in the CUT, and can directly impact production reliability \citeG{G36}. However, detecting and fixing flaky tests can help uncover underlying flaws and issues in the tested application that are otherwise much harder to detect \citeG{G95}. \\
\noindent \textbf{Impact on developers:}
We observed that several of the blog posts we analysed are written by developers and discuss the impact of flaky tests on their productivity and confidence. Developers noted that flaky tests can cause them to lose confidence in the `usefulness' of the test suite in general \citeG{G2} and to lose trust in their builds \citeG{G74}. Flaky tests may also lead to ``collateral damage'' for developers: if left uncontrolled or unresolved, they can have a bigger impact and may ruin the value of an entire test suite \citeG{G8}. They are also reported to be disruptive and counter-productive, wasting developers' time as they try to debug and fix flaky tests \citeG{G95, G107, G26, G30}. \\
\begin{quote}
\textit{``The real cost of test flakiness is a lack of confidence in your tests..... If you don’t have confidence in your tests, then you are in no better position than a team that has zero tests. Flaky tests will significantly impact your ability to confidently continuously deliver. ''} (Spotify Engineering, \citeG{G2 }). \\
\end{quote}
Another experience report from Microsoft explained the practices followed and tools used to manage flaky tests at Microsoft in order to boost developers' productivity:
\begin{quote}
\textit{``Flaky tests.... negatively impact developers’ productivity by providing misleading signals about their recent changes ... developers may end up spending time investigating those failures, only to discover that the failures have nothing to do with their changes and may simply go away by rerunning the tests. '' (Engineering@Microsoft, \citeG{G134 })
}\end{quote}
\noindent \textbf{Impact on delivery:}
Developers and managers also presented evidence of how flaky tests can delay development and have a wider impact on delivery (e.g., \citeG{G36}), mostly by slowing down development \citeG{G45, G95, G98} and delaying product releases \citeG{G107, G108}. They can also reduce the value of an automated regression suite \citeG{G4} and lead organizations and testing teams to lose faith that their tests will actually find bugs \citeG{G30, G89}. Some developers also noted that if flaky tests are left unchecked, or untreated, they can lead to a completely useless test suite, as has been the case in some organisations:
\begin{quote}
\emph{`` We've talked to some organizations that reached 50 \%+ flaky tests in their codebase, and now developers hardly ever write any tests and don’t bother looking at the results. Testing is no longer a useful tool to improve code quality within that organization. '' (Product Manager at Datadog, \citeG{G8 })}\end{quote}
Another impact of flaky tests is that they can slow down the deployment pipeline, which can decrease confidence in the correctness of changes to the software \citeG{G22, G114}. They can even block deployment until spotted and resolved \citeG{G5}. \\
\begin{qoutebox}{white}{}
\textbf{RQ3 summary. }
The impact of flaky tests has been the subject of discussion in both academic and grey literature. Flaky tests are reported to have an impact on the products under development, on the quality of the CUT and the tests themselves, and on delivery pipelines. Techniques that rely on tests, such as test-based program repair, crash reproduction, and fault detection and localization, can be negatively impacted by the presence of flaky tests. \end{qoutebox}
\subsection{Responses to Flaky Tests (RQ4 )}
The way that developers and teams respond to flaky tests has been discussed in detail in both academic and grey literature. However, the type of applied or recommended response differs from one study to another, as it depends on the context and causes of the flaky tests, as well as on the methods used to detect them. Below we discuss the responses noted in academic and grey literature separately:
\subsubsection{Response Noted in Academic Literature}
We classify responses to flaky tests as follows:
\begin{itemize}
\item Modifying the test.
\item Modifying the program (CUT).
\item Process responses (e.g., rerunning, quarantining, ignoring or removing the test).
\end{itemize}
\begin{table*}[!b]
\centering
\caption{Summary of responses to flaky tests as reported in academic studies}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Response Type} & \textbf{Response} & \textbf{Articles} \\
\midrule
\textbf{Modify test} & \textbf{Change assumptions} & \\
& Fix assumptions about library API's & \citeS{S152 } \\
& automatically repair implementation-dependent tests & \citeS{S1013 } \\
& Replace test & \citeS{S8 } \\
& Merge dependent tests & \citeS{S1 } \\
& \textbf{Change assertions} & \\
& Modify assertion bounds (e. g., to accommodate wider ranges of outputs) & \citeS{S1, S7, S8, S14, S1009, S1044 } \\
& \textbf{Change fixture} & \\
& Removing shared dependency between tests & \citeS{S1 } \\
& Global time as system variable & \citeS{S950 } \\
& Setup/clean shared state between tests & \citeS{S1 }\\
& Modify test parameters & \citeS{S14 } \\
& Modify test fixture & \citeS{S9 } \\
& Fix defective tests & \citeS{S446, S963, S1 } \\
& Make behaviour deterministic & \citeS{S39 } \\
& Change delay for async-wait & \citeS{S72, S1045 } \\
& Concurrency-related fixes & \citeS{S8 } \\
& \textbf{Change test-program interaction} & \\
& Mock use of environment/concurrency & \citeS{S179 } \\ \midrule
\textbf{Modify program}
& Concurrency-related fixes & \citeS{S1 } \\
& Replace dependencies & \citeS{S7, S14 } \\
& Remove nondeterminism & \citeS{S8 } \\ \midrule
\textbf{Process response} & Rerun tests & \citeS{S7, S269, S234 } \\
& Ignore/Disable & \citeS{S8, S359, S143 } \\
& Quarantine & \citeS{S106 } \\
& Add annotation to mark test as flaky & \citeS{S14 } \\
& Increase test time-outs & \citeS{S8 } \\
& Reconfigure test environment (e. g., containerize or virtualize unit tests) & \citeS{SB29 } \\
& Remove & \citeS{S164, S145, S39, S165, S71 } \\
& Improve bots to detect flakiness & \citeS{S100 } \\
& Responsibility of CI to deal with it & \citeS{S67 } \\
& Prioritize tests & \citeS{S137, S1021 } \\
\bottomrule
\end{tabular}}
\label{tab:responses_academic}
\end{table*}
A summary of the responses found in academic articles is presented in Table \ref{tab:responses_academic}. The three major strategies are to fix tests, to modify the CUT, or to put in place a mechanism to deal with flaky tests (e.g., retrying or quarantining tests). Berglund and Vateman \citeS{S39} listed some strategies for avoiding non-deterministic behaviour in tests: minimising variations in the testing environment, avoiding asynchronous implementations, testing in isolation, aiming for deterministic assertions and limiting the use of third-party dependencies. Other measures include mocking to reduce flakiness; for instance, EvoSuite \cite{fraser2011evosuite} uses mocking for this purpose. Zhu et al. \citeS{S179} proposed a tool for identifying and proposing mocks for unit tests. A wider list of specific fixes for the different types of flaky tests is provided in \citeS{S1}. Shi et al. \citeS{S9} presented a tool, iFixFlakies, to fix order-dependent tests. Fixes to the CUT are not discussed as much in academic articles; the closest mention is in \citeS{S7}, which finds instances in flaky-test fix commits where the CUT is improved and dependencies are changed to fix flakiness. Another strategy, removing flaky tests, was also identified in \citeS{S7}: the study found that developers commented out flaky tests in 10/77 of the examined commits. Removing flaky tests is also cited in papers that discuss testing-related techniques \citeS{S164, S165}. Quarantining, ignoring or disabling flaky tests are also discussed as responses. Memon et al. \citeS{S137} detailed the approach used at Google for dealing with flaky tests: they use multiple factors (e.g., more frequently modified files are more likely to cause faults) to prioritize the tests to rerun, rather than a simple test selection heuristic such as rerunning recently failed tests, which is sensitive to flakiness. A number of tools have been proposed recently for automatically repairing flaky tests.
They can fix flakiness due to causes such as randomness in ML projects, order dependence and implementation dependence. Dutta et al. \citeS{S1009, S1044} conducted an empirical analysis of seeds in machine learning projects and proposed approaches to repair flaky tests due to randomness by tuning hyperparameters, fixing seeds and modifying assertion bounds. Zhang et al. \citeS{S1013} proposed a tool for fixing flaky tests caused by implementation dependencies of the type explored by NonDex \citeS{S152}. Wang et al. \citeS{S1031} proposed iPFlakies, a tool for Python that fixes order-dependent tests failing due to state polluted by other tests; it is related to their earlier work, iFixFlakies, for repairing order-dependent tests in Java programs. The Python tool discovers existing tests or helper methods that clean the state before successfully rerunning the order-dependent test. ODRepair, from Li et al. \citeS{S1018}, instead uses automatic test generation to clean the state rather than relying on existing code. Mondal et al. \citeS{S1058} proposed an approach to fixing flakiness caused by parallelizing dependent tests, by adding a test from the same class to correct the dependency failure. \subsubsection{Response Noted in Grey Literature}
Here we look at the methods and strategies followed to deal with flaky tests as noted in the grey literature. We classified those strategies into the following categories:
\begin{enumerate}
\item \textbf{Quarantine:} keep flaky tests in a separate test suite from other `healthy' tests, in a quarantined area, in order to diagnose and then fix those tests. \item \textbf{Fix immediately:} fix any flaky test that has been found immediately; developers will first need to reproduce the flaky behaviour. \item \textbf{Skip and ignore:} provide an option for developers to exclude flaky tests from the build and suppress the test failures, usually in the form of an annotation. In some cases, especially when developers are fully aware of the flaky behaviour of the tests and its implications have been considered, they may decide to ignore those flaky tests and continue with the test run as planned. \item \textbf{Remove:} remove any test that is flaky from the test suite once detected. \end{enumerate}
\begin{table*}[!h]
\centering
\caption{Summary of the response strategies followed by some organisations to deal with flaky tests, as discussed in grey literature. }
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lll@{}}
\toprule
\textbf{Strategy} & \textbf{Description} & \textbf{Example} \\ \toprule
Quarantine & \begin{tabular}[c]{@{}l@{}}Keep flaky tests in a different test suite to \\ other healthy tests in a quarantined area. \end{tabular} & \begin{tabular}[c]{@{}l@{}}\citeG{G1, G4, G5, G36, G104, G106, G108 }\\ \citeG{G35, G38, G67, G70, G79, G89, G111, G114 } \\ \citeG{G149, G154, G164, G213, G134, G202 } \end{tabular} \\ \midrule
\begin{tabular}[c]{@{}l@{}}Fix and replace immediately, \\ or remove if not fixed \end{tabular}& \begin{tabular}[c]{@{}l@{}} Tests with flaky behaviour are given priority \\ and fixed/removed once detected. \end{tabular} & \citeG{G20, G101, G102, G37, G147, G189, G111} \\ \midrule
Label flaky tests & Leave it to developers to decide & \citeG{G22, G67, G129, G134, G202 } \\ \midrule
Ignore/Skip & \begin{tabular}[c]{@{}l@{}}Provide an option to developers to ignore \\ flaky tests from the build (e.g., through the \\ use of annotations) and suppress the test failures. \end{tabular} & \citeG{G6, G8, G70, G129} \\
\bottomrule
\end{tabular} }
\label{tab:responses_grey}
\end{table*}
A summary of the responses found in grey literature is shown in Table \ref{tab:responses_grey}. The most commonly discussed strategy is to quarantine and then fix flaky tests. As explained by Fowler \citeG{G4}, this strategy indicates that developers should follow a number of steps once a flaky test has been identified: \textit{Quarantine} $\rightarrow$ \textit{Determine the cause} $\rightarrow$ \textit{Report/Document}
$\rightarrow$ \textit{Isolate and run locally} $\rightarrow$ \textit{Reproduce} $\rightarrow$ \textit{Decide (fix/ignore)}. This is the same strategy that Google (and many other organisations) has been employing to deal with any flaky tests detected in the pipelines \citeG{G1}. A report from Google describes a tool that monitors all potentially flaky tests and automatically quarantines a test if its flakiness is found to be high. The quarantining works by removing ``\emph{the test from the critical path and files a bug for developers to reduce the flakiness. This prevents it from becoming a problem for developers, but could easily mask a real race condition or some other bug in the code being tested}'' \citeG{G1}. Other organizations follow the same strategy, e.g., Flexport \citeG{G36} and Dropbox \citeG{G108}. Flexport \citeG{G36} have even included a mechanism to automate the process of quarantining and skipping flaky tests: their Ruby gem, Quarantine\footnote{\url{https://github.com/flexport/quarantine}}, maintains a list of flaky tests and automatically ``detects flaky tests and disables them until they are proven reliable''. It has been suggested by some developers and managers that all identified flaky tests should be labelled by their severity, which can be determined by the specific component they impact, the frequency of a flaky test, or the flakiness rate of a given test. Rather than quarantining and treating all flaky tests equally, one suggested approach is to quantify the level of flakiness of each flaky test so that tests can be prioritized for fixing. A report from Facebook engineers proposed a statistical metric called the Probabilistic Flakiness Score (PFS), which aims to quantify flakiness by measuring how reliable each test is \citeG{G127}.
Using this metric, developers can \textit{``test the tests to measure and monitor their reliability, and thus be able to react quickly to any regressions in the quality of our test suite. PFS ... quantify the degree of flakiness for each individual test at Facebook and to monitor changes in its reliability over time. If we detect specific tests that became unreliable soon after they were created, we can direct engineers’ attention to repairing them. ''} \citeG{G127}. GitHub reported a similar metrics-based approach to determining the level of flakiness of each flaky test: an impact score is given to each flaky test based on how many times it changed its outcome, as well as how many branches, developers, and deployments were affected by it. The higher the impact score, the more important the flaky test, and thus the higher its priority for fixing \citeG{G147}. At Spotify \citeG{G2}, engineers use Odeneye, a system that visualises an entire test suite running in the CI and can point developers to tests whose outcomes vary across runs. Another tool used at Spotify is Flakybot\footnote{\url{https://www.flakybot.com}}, which is designed to help developers determine whether their tests are flaky before merging their code into the master/main branch. The tool can be self-invoked by a developer in a pull request; it will exercise all tests and provide a report of their success/failure and possible flakiness. There are, however, a number of issues to consider when quarantining flaky tests, such as how many tests should be quarantined (having too many tests in quarantine can be counterproductive) and how long a test should stay in quarantine. Fowler \citeG{G4} suggested keeping no more than 8 tests in quarantine at any one time, and not keeping tests there for long periods.
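As an illustration of such metrics, a naive flakiness score can be computed from a test's run history as the fraction of consecutive runs whose outcome flipped. This is only a toy stand-in for the PFS and impact-score metrics discussed above; the function names, threshold, and sample histories are assumptions for the sketch:

```python
def flip_rate(outcomes):
    """Fraction of consecutive runs where a test changed outcome.

    `outcomes` is a chronological list of booleans (True = pass).
    A stable test scores 0.0; a test alternating pass/fail scores 1.0.
    """
    if len(outcomes) < 2:
        return 0.0
    flips = sum(1 for a, b in zip(outcomes, outcomes[1:]) if a != b)
    return flips / (len(outcomes) - 1)

def triage(history, quarantine_at=0.2):
    """Order tests by flip rate and flag quarantine candidates."""
    ranked = sorted(history.items(), key=lambda kv: flip_rate(kv[1]), reverse=True)
    return [(name, flip_rate(runs), flip_rate(runs) >= quarantine_at)
            for name, runs in ranked]

history = {
    "test_stable": [True] * 10,
    "test_flaky": [True, False, True, True, False, True, False, True, True, True],
}
for name, score, quarantine in triage(history):
    print(name, round(score, 2), "quarantine" if quarantine else "keep")
```

Production metrics additionally weigh how many branches, developers, and deployments a test affected, which a per-test flip rate cannot capture.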
It has also been suggested to maintain a dashboard that tracks the progress of all flaky tests so that they are not forgotten \citeG{G8}, and to have an automated approach that not only quarantines flaky tests, but also de-quarantines them once they are fixed or deliberately ignored \citeG{G202}. Regarding the different causes of flaky tests, different strategies are recommended for the specific sources of test flakiness. For example, to deal with flakiness due to state-dependent scenarios such as ``inconsistent assertion timing'' (i.e., state is not consistent between test runs, which can cause tests to fail randomly), one solution is to ``construct tests so that you wait for the application to be in a consistent state before asserting'' \citeG{G36}. If a test depends on a specific test order (i.e., global state shared between tests, as one test may depend on the completion of another), an obvious solution is to ``reset the state between each test run and reduce the need for global state'' \citeG{G36}. Table \ref{tab:fixing-grey} provides a brief summary of strategies for fixing flaky tests due to the most common causes, as noted in grey literature articles. \\
\begin{table*}[]
\caption{Fixing strategies for some common causes of flaky tests, as noted in grey literature}
\label{tab:fixing-grey}
\resizebox{\linewidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Cause of flakiness} & \textbf{Suggested fix} & \textbf{Example} \\ \toprule
Asynchronous wait & \begin{tabular}[c]{@{}l@{}}Wait for a specified period of time before checking whether the action has \\ been successful (with callbacks and polling). \end{tabular} & \citeG{G7, G74} \\ \midrule
Inconsistent assertion timing & \begin{tabular}[c]{@{}l@{}}Construct tests so that you wait for the application to be in a consistent \\ state before asserting. \end{tabular} & \citeG{G7 } \\ \midrule
Concurrency & \begin{tabular}[c]{@{}l@{}}Make tests more robust, so that they accept all valid results. \\ Avoid running tests in parallel. \end{tabular} & \begin{tabular}[c]{@{}l@{}}\citeG{G7}\\ \citeG{G74}\end{tabular} \\ \midrule
Order dependency & \begin{tabular}[c]{@{}l@{}}Run a test in a database transaction that’s rolled back once the test has \\ finished executing. \\ Clean up the environment (i.e., reset state) and prepare it before every \\ test (and reduce the need for global state in general). \\ Run tests in isolation. \\ Run tests in random order to find out if they are still flaky. \end{tabular} & \\ \midrule
Randomization & Avoid the use of random seeds. & \citeG{G38 } \\ \midrule
Environmental & \begin{tabular}[c]{@{}l@{}}Limit dependency on environments in the test. \\ Limit calls to external resources and build a mocking server for \\ tests. \end{tabular} & \citeG{G27, G81, G98} \\ \midrule
Leak global state & Run test in random order. & \citeG{G95 } \\ \bottomrule
\end{tabular}}
\end{table*}
\begin{qoutebox}{white}{}
\textbf{RQ4 summary. }
Quarantining flaky tests (for later investigation and fixing) is a common strategy that is widely used in practice. It is now supported by many tools that integrate with modern CI tooling and are able to automatically detect changes in test outcomes to identify flaky tests. Understanding the main cause of the flaky behaviour is key to reproducing flakiness and identifying an appropriate fix, and this remains a challenge. \end{qoutebox}
\subsection{Flaky Tests Datasets (RQ1 and RQ2 )}
\label{sec:datasets}
Datasets used in flakiness-related studies can be divided into those used in empirical studies and those used in detection/cause-analysis studies. All of them are obtained from academic literature studies. Table~\ref{tab:flakdatasets} lists these datasets with the type of flakiness in the programs, the programming language, the number of flaky tests identified and the total number of projects, along with the name of the associated tool or an indication that the dataset comes from an empirical study.
As can be seen in Table \ref{tab:flakdatasets}, the dominant programming language is Java. There are a few studies in Python \citeS{S14}. Some of these datasets are used in multiple studies; for instance, \citeS{S15} obtains its subjects from \citeS{S4}, and \citeS{S12} from \citeS{S2}. \footnote{A dataset on the relationship between test smells and flaky tests was widely used in multiple studies but was recently retracted \url{https://ieeexplore.ieee.org/document/8094404}. }
\begin{table*}[! b]
\centering
\caption{Datasets to Study Test Flakiness}
\label{tab:flakdatasets}
\resizebox{\linewidth}{! }{
\begin{tabular}{llllll}
\toprule
\textbf{Study} & \textbf{Flakiness type} & \textbf{Language} & \textbf{\# flaky tests} & \textbf{\# projects} & \textbf{Article} \\
\midrule
iDFlakies & Order dep/Other & Java & 422 & 694 & \citeS{S4 } \\
DeFlaker & General & Java & 87 & 96 & \citeS{S2 } \\
NonDex & Wrong assumptions & Java & 21 & 8 & \citeS{S152 } \\
iFixFlakies & Order dependent & Java & 184 & 10 & \citeS{S135 } \\
FLASH & Machine learning & Python & 11 & 20 & \citeS{S14 } \\
Shaker & Concurrency & Java/Kotlin (Android) & 75 & 11 & \citeS{S24 } \\
FlakeShovel & Concurrency & Java (Android) & 19 & 28 & \citeS{S16 } \\
NodeRacer & Concurrency & JavaScript & 2 & 8 & \citeS{S93 } \\
GreedyFlake & Flaky coverage & Python & -- & 3 & \citeS{S78 } \\
Travis-Listener & Flaky builds & Mixed & -- & 22,345 & \citeS{S136 } \\
RootFinder & General & .NET & 44 & 22 & \citeS{S6} \\
\bottomrule
\end{tabular}}
\end{table*}
\section{Acknowledgements}
This work is funded by Science for Technological Innovation National Science Challenge of New Zealand, grant number MAUX2004.
\bibliographystyleS{IEEEtran}
\bibliographyS{literature}
\Urlmuskip=0mu plus 1mu\relax
\bibliographystyleG{IEEEtran}
\bibliographyG{greyliterature}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:int}
Continual learning from streaming data is a well-established, yet still rapidly developing field of modern machine learning \cite{Parisi:2019}. Contemporary data sources generate information characterized by both volume and velocity, continuously flooding learning systems. This makes traditional classification methods too slow and unable to handle the ever-changing properties of arriving instances \cite{Sahoo:2018}. Therefore, new algorithms are being developed with their efficacy and adaptive capabilities in mind \cite{Ditzler:2015}. Learning methods for streaming data must be capable of working under strictly limited memory and time consumption, while offering the ability to continually incorporate new instances and swiftly adapt to evolving data stream characteristics \cite{Krawczyk:2017}. This phenomenon is known as concept drift and may influence the properties of a stream in a multitude of ways, from changing class distributions \cite{Chandra:2018} to new features or classes emerging \cite{Wang:2019}. Timely detection of concept drift, and using this information to adapt the classifier, is of crucial importance to any continual learning system. One of the biggest challenges in learning from data streams is non-stationary class imbalance \cite{Krawczyk:2016}. Skewed data distributions are a very challenging topic in standard machine learning, having been studied there for over 25 years. In the continual learning domain, class imbalance becomes even more difficult, as we not only need to deal with the disproportion among classes, but also with their evolving nature \cite{Wang:2018}. Class roles and imbalance ratios are subject to change over time, and we need to monitor them closely to understand which class poses the biggest problem for the classifier at a given moment, and why \cite{Fernandez:2018}. When we extend this to a multi-class imbalance scenario, we get a complex and perplexing setting that occurs in many real-life applications.
Concept drift detection in such problems becomes extremely demanding, as we need to factor in both evolving nature of multiple classes, as well as their non-stationary skewed distributions. \smallskip
\noindent \textbf{Research goal. } To propose a fully trainable drift detector that is capable of handling multi-class imbalanced data streams with special focus on evolving minority classes, and with changes appearing at both global (all classes affected) and local (some classes affected) levels. \smallskip
\noindent \textbf{Motivation. } While a plethora of drift detectors has been proposed in the literature, most of them share two limitations: (i) they assume roughly balanced data distributions and thus are likely to miss concept drift happening in minority classes; and (ii) they monitor global data stream characteristics, thus detecting concept drifts that affect the entire stream, not particular classes or decision regions. This makes state-of-the-art drift detectors unsuitable for mining imbalanced data streams, especially when multiple classes are involved. There is a need for a new drift detector that is skew-insensitive, can monitor multiple classes at once, and can rapidly adapt to changing imbalance ratios and classes switching roles, as none of the existing methods is capable of this. \smallskip
\noindent \textbf{Summary. } In this paper we propose RBM-IM, a trainable concept drift detector for continual learning from multi-class imbalanced data streams. It is realized as a Restricted Boltzmann Machine neural network with skew-insensitive modifications of the training procedure. We use it to track reconstruction error for each class independently and signal if any of them has been subject to a significant change over the most recent mini-batch of data. Our drift detector re-trains itself in an online fashion, allowing it to handle dynamically changing imbalance ratio, as well as evolving class roles (minority classes becoming majority and vice versa). RBM-IM is capable of detecting drifts occurring at both global and local levels, allowing for complex monitoring of multi-class imbalanced data streams and understanding the nature of each change that takes place. \smallskip
\noindent \textbf{Main contributions. } We offer the following novel contributions to the field of continual learning from data streams. \begin{itemize}
\item \textbf{RBM-IM:} a novel and trainable concept drift detector realized as a recurrent neural network with skew-insensitive loss function that is capable of monitoring multi-class imbalanced data streams with dynamic imbalance ratio. \item \textbf{Robustness to class imbalance:} RBM-IM provides robustness to multi-class skewed distributions, offering excellent detection rates of drifts appearing in minority classes without being biased towards majority concepts. \item \textbf{Detecting local and global changes:} RBM-IM is capable of detecting concept drifts affecting only a subset of minority classes (even when only a single class is affected), offering a better understanding of the nature of changes than any state-of-the-art drift detector. \item \textbf{Taxonomy of multi-class imbalanced data streams:} we propose a systematic view on possible challenges that can be encountered in continual learning from multi-class imbalanced data streams and formulate three scenarios that allow us to model such changes. \item \textbf{Extensive experimental study:} we evaluate the efficacy of RBM-IM on a thoroughly designed experimental test bed using both real-world and artificial benchmarks. We introduce a novel approach towards evaluating concept drift detectors on imbalanced data streams, by measuring their reactivity to drifts occurring only in a subset of minority classes, as well as by checking their robustness to increasing imbalance ratio among multiple classes. \end{itemize}
\section{Data stream mining}
\label{sec:dsm}
A data stream is defined as a sequence ${<S_1, S_2, ..., S_n,... >}$, where each element $S_j$ is a new instance. In this paper, we assume a (partially) supervised classification scenario and thus define each instance as $S_j \sim p_j(x^1, \cdots, x^d, y) = p_j(\mathbf{x}, y)$, where $p_j(\mathbf{x}, y)$ is the joint distribution of the $j$-th instance, defined over a $d$-dimensional feature space with assigned class $y$. Each instance is independent and drawn randomly from a probability distribution $\Psi_j (\mathbf{x}, y)$. \smallskip
\noindent \textbf{Concept drift. } When all instances come from the same distribution, we deal with a stationary data stream. In real-world applications, data very rarely falls under stationarity assumptions \cite{Masegosa:2020}. It is more likely to evolve over time and form temporary concepts, being subject to concept drift \cite{Lu:2019}. This phenomenon affects various aspects of a data stream and thus can be analyzed from multiple perspectives. One cannot simply claim that a stream is subject to drift; it needs to be analyzed and understood in order to handle it adequately with respect to the specific changes that occur \cite{Goldenberg:2020}. Let us now discuss the major aspects of concept drift and its characteristics. \smallskip
\noindent \textbf{Influence on decision boundaries}. Firstly, we need to take into account how concept drift impacts the learned decision boundaries, distinguishing between real and virtual concept drifts \cite{Oliveira:2019 }. The former influences previously learned decision rules or classification boundaries, decreasing their relevance for newly incoming instances. Real drift affects posterior probabilities $p_j(y|\mathbf{x})$ and additionally may impact unconditional probability density functions. It must be tackled as soon as it appears, since it negatively impacts the underlying classifier. Virtual concept drift affects only the distribution of features $\mathbf{x}$ over time:
\begin{equation}
\widehat{p}_j(\mathbf{x}) = \sum_{y \in Y} p_j(\mathbf{x}, y),
\label{eq:cd2 }
\end{equation}
\noindent where $Y$ is a set of possible values taken by $S_j$. While it seems less dangerous than real concept drift, it cannot be ignored. Despite the fact that only the values of features change, it may trigger false alarms and thus force unnecessary and costly adaptations. \smallskip
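A toy numeric check of this marginalization, computing $\widehat{p}_j(\mathbf{x}) = \sum_{y \in Y} p_j(\mathbf{x}, y)$ for an assumed discrete joint distribution (the feature values and probabilities are invented for the sketch):

```python
# Marginalizing the class label out of a toy joint distribution,
# as in the virtual-drift definition above: p_hat(x) = sum_y p(x, y).
joint = {  # p_j(x, y) on a tiny discrete feature space
    ("x1", "pos"): 0.25, ("x1", "neg"): 0.25,
    ("x2", "pos"): 0.125, ("x2", "neg"): 0.375,
}

def marginal(joint_dist):
    """Sum the joint probability over all class labels for each x."""
    p_x = {}
    for (x, _y), p in joint_dist.items():
        p_x[x] = p_x.get(x, 0.0) + p
    return p_x

print(marginal(joint))  # {'x1': 0.5, 'x2': 0.5}
```

Virtual drift then means this marginal changes between $j$ and $j'$ while the conditional $p(y|\mathbf{x})$, and hence the decision boundary, stays the same.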
\noindent \textbf{Locality of changes}. It is important to distinguish between global and local concept drifts~\cite{Gama:2006 }. The former affects the entire stream, while the latter affects only certain parts of it (e. g., individual clusters of instances, or subsets of classes). \smallskip
\noindent \textbf{Speed of changes}. Here we distinguish between sudden, gradual, and incremental concept drifts~\cite{Lu:2019 }. \begin{itemize}
\item \textbf{Sudden concept drift} is a case when instance distribution abruptly changes with $t$-th example arriving from the stream:
\begin{equation}
p_j(\mathbf{x}, y) =
\begin{cases}
D_0 (\mathbf{x}, y), & \quad \text{if } j < t\\
D_1 (\mathbf{x}, y), & \quad \text{if } j \geq t. \end{cases}
\label{eq:cd3 }
\end{equation}
\smallskip
\item \textbf{Incremental concept drift} is a case when we have a continuous progression from one concept to another (thus consisting of multiple intermediate concepts in between), such that the distance from the old concept is increasing, while the distance to the new concept is decreasing:
\begin{equation}
p_j(\mathbf{x}, y) =
\begin{cases}
D_0 (\mathbf{x}, y), &\text{if } j < t_1 \\
(1 - \alpha_j) D_0 (\mathbf{x}, y) + \alpha_j D_1 (\mathbf{x}, y), &\text{if } t_1 \leq j < t_2 \\
D_1 (\mathbf{x}, y), &\text{if } t_2 \leq j
\end{cases}
\label{eq:cd4 }
\end{equation}
\noindent where
\begin{equation}
\alpha_j = \frac{j - t_1 }{t_2 - t_1 }. \label{eq:cd5 }
\end{equation}
\smallskip
\item \textbf{Gradual concept drift} is a case where instances arriving from the stream oscillate between two distributions during the duration of the drift, with the old concept appearing with decreasing frequency:
\begin{equation}
p_j(\mathbf{x}, y) =
\begin{cases}
D_0 (\mathbf{x}, y), & \text{if } j < t_1 \\
D_0 (\mathbf{x}, y), & \text{if } t_1 \leq j < t_2 \wedge \delta > \alpha_j\\
D_1 (\mathbf{x}, y), & \text{if } t_1 \leq j < t_2 \wedge \delta \leq \alpha_j\\
D_1 (\mathbf{x}, y), & \text{if } t_2 \leq j,
\end{cases}
\label{eq:cd6}
\end{equation}
\noindent where $\delta \in [0,1 ]$ is a random variable. \end{itemize}
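The three definitions can be condensed into a small simulator that decides which concept generates the $j$-th instance; $D_0$ and $D_1$ are abstracted as labels, and all parameter values ($t$, $t_1$, $t_2$, the seed) are arbitrary choices for the sketch:

```python
import random

def active_concept(j, drift="sudden", t=500, t1=300, t2=700, rng=None):
    """Which concept generates instance j under the drift types above.

    Returns 'D0'/'D1', except for incremental drift, where it returns
    alpha_j, the weight of D1 in the mixture (1 - alpha_j)D0 + alpha_j D1.
    """
    rng = rng or random.Random(0)
    if drift == "sudden":
        return "D0" if j < t else "D1"
    alpha = min(max((j - t1) / (t2 - t1), 0.0), 1.0)  # alpha_j, clamped to [0, 1]
    if drift == "incremental":
        return alpha
    if drift == "gradual":
        if j < t1:
            return "D0"
        # During the transition, D0 appears with decreasing frequency 1 - alpha_j.
        return "D1" if rng.random() <= alpha else "D0"
    raise ValueError(f"unknown drift type: {drift}")

print(active_concept(499), active_concept(500))   # D0 D1
print(active_concept(500, drift="incremental"))   # 0.5
```

The contrast is visible in the return types: sudden drift is a hard switch at $t$, incremental drift is a deterministic mixture weight, and gradual drift is a random oscillation between the two concepts.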
\section{Related works}
\noindent \textbf{Drift detectors. } In order to be able to adapt to evolving data streams, classifiers must either have explicit information on when to update their model or use continuous learning to follow the progression of a stream. Concept drift detectors are external tools that can be paired with any classifier and used to monitor the state of the stream \cite{Barros:2018}. Usually, this is based on tracking the error of the classifier or measuring the statistical properties of data. One of the first and most popular drift detectors is the Drift Detection Method (DDM) \cite{Gama:2004}, which analyzes the standard deviation of errors coming from the underlying classifier. DDM assumes that an increase in error rates directly corresponds to changes in the incoming data stream and thus can be used to signal the presence of drift. This concept was extended by the Early Drift Detection Method (EDDM) \cite{Garcia:2006}, which replaces the standard error deviation with the distance between two consecutive errors. This makes EDDM more reactive to slower, gradual changes in the stream, at the cost of losing sensitivity to sudden drifts. The Reactive Drift Detection Method (RDDM) \cite{Barros:2017} is an improvement upon DDM that allows detecting sudden and local changes with access to a reduced number of instances. RDDM offers better sensitivity than DDM by implementing a pruning mechanism for discarding outdated instances. Adaptive Windowing (ADWIN) \cite{Bifet:2007} is based on a dynamic sliding window that adjusts its size according to the size of the stable concepts in the stream. ADWIN stores two sub-windows for the old and new concepts, detecting a drift when the mean values in these sub-windows differ by more than a given threshold. The Drift Detection Method based on Hoeffding's bounds (HDDM) \cite{Blanco:2015} uses the same bound as the SEED detector, but drops the idea of sub-windows and focuses on measuring both false positive and false negative rates.
The Fast Hoeffding Drift Detection Method (FHDDM) \cite{Pesaranghader:2016} is yet another drift detector utilizing the popular Hoeffding's inequality, but its novelty lies in measuring the probability of correct decisions returned by the underlying classifier. The Wilcoxon Rank Sum Test Drift Detector (WSTD) \cite{Barros:2018w} uses the Wilcoxon rank-sum statistical test for comparing distributions in sub-windows. \smallskip
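As a concrete reference point, the DDM scheme described at the start of this paragraph can be sketched in a few lines. The $p_{min} + 2 s_{min}$ warning and $p_{min} + 3 s_{min}$ drift thresholds follow the usual DDM description; the warm-up length, class structure, and test stream below are illustrative choices:

```python
import math

class SimpleDDM:
    """Minimal sketch of the Drift Detection Method (DDM).

    Tracks the classifier's error rate p and its standard deviation
    s = sqrt(p(1 - p) / n); a drift is signalled when p + s exceeds
    p_min + 3 * s_min (p_min + 2 * s_min gives a warning).
    """

    def __init__(self, min_samples=30):
        self.min_samples = min_samples
        self._reset()

    def _reset(self):
        self.n = 0
        self.p = 1.0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """Feed one prediction outcome (error=1 for a misclassification)."""
        self.n += 1
        self.p += (error - self.p) / self.n          # running mean error rate
        s = math.sqrt(self.p * (1.0 - self.p) / self.n)
        if self.n < self.min_samples:
            return "stable"
        if self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s       # remember the best state
        if self.p + s > self.p_min + 3.0 * self.s_min:
            self._reset()                            # forget the old concept
            return "drift"
        if self.p + s > self.p_min + 2.0 * self.s_min:
            return "warning"
        return "stable"

ddm = SimpleDDM()
# 10% error rate at first, then the classifier starts failing: a sudden drift.
stream = ([0] * 9 + [1]) * 10 + [1] * 40
states = [ddm.update(e) for e in stream]
print("drift detected:", "drift" in states)
```

Note how this sketch also exposes DDM's limitation discussed later: it aggregates errors over the whole stream, so a drift confined to a rare minority class barely moves $p$ and may never cross the threshold.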
\noindent \textbf{Drift detectors for imbalanced data streams. } There are almost no concept drift detectors dedicated to imbalanced data streams, especially multi-class ones. Most of the works in this domain focus mainly on making the underlying classifier skew-insensitive, while assuming it is going to adapt on its own \cite{Cano:2020}, or erroneously use standard drift detection methods. The two main dedicated drift detectors for skewed streams are PerfSim \cite{Antwi:2012}, which monitors changes in the entire confusion matrix, and the Drift Detection Method for Online Class Imbalance (DDM-OCI) \cite{Wang:2020}, which monitors the recall of every class. \smallskip
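A minimal per-class monitor in the spirit of DDM-OCI, tracking a decayed recall for every class and flagging a class whose recall drops well below its best observed value, might look as follows. The decay factor, tolerance, warm-up, and simulated stream are illustrative assumptions, not the values from the cited paper:

```python
class RecallMonitor:
    """Per-class time-decayed recall, in the spirit of DDM-OCI.

    Each class keeps an exponentially decayed recall estimate; a drop of
    more than `tol` below the best recall seen so far for that class
    flags a (class-local) change.
    """

    def __init__(self, decay=0.95, tol=0.25, warmup=5):
        self.decay, self.tol, self.warmup = decay, tol, warmup
        self.recall, self.best, self.seen = {}, {}, {}

    def update(self, true_class, predicted_class):
        hit = 1.0 if predicted_class == true_class else 0.0
        r = self.decay * self.recall.get(true_class, hit) + (1.0 - self.decay) * hit
        self.recall[true_class] = r
        self.seen[true_class] = self.seen.get(true_class, 0) + 1
        if self.seen[true_class] < self.warmup:
            return None
        self.best[true_class] = max(self.best.get(true_class, r), r)
        if self.best[true_class] - r > self.tol:
            self.best[true_class] = r      # re-baseline after signalling
            return true_class              # drift suspected in this class
        return None

mon = RecallMonitor()
alerts = []
# Class 1 (the minority) is predicted correctly at first, then always missed.
for i in range(200):
    true = 1 if i % 10 == 0 else 0
    pred = true if i < 100 else 0          # after i=100, class 1 is always wrong
    alert = mon.update(true, pred)
    if alert is not None:
        alerts.append((i, alert))
print("minority-class drift flagged:", any(cls == 1 for _, cls in alerts))
```

Because recall is tracked per class, the collapse of the rare class is flagged even though overall accuracy stays at 90%, which is precisely what a global error monitor would miss.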
\noindent \textbf{Limitations of existing methods. } Existing drift detectors suffer from two major limitations: (i) lack of self-adaptation mechanisms; and (ii) lack of robustness mechanisms. The first problem is rooted in state-of-the-art drift detectors being based on monitoring the selected properties of a stream while neglecting the fact that used monitoring criteria should also be adapted over time. The second problem is rooted in a lack of research on effective drift detectors when a stream is suffering from various data-level difficulties, such as class imbalance or noise presence. In this work, we address those two limitations with our RBM-IM, a fully trainable drift detector that autonomously adapts its detection mechanisms, while offering robustness to imbalanced class distributions. \section{Challenges in learning from multi-class imbalanced data streams}
\label{sec:imb}
In static scenarios there is a plethora of works devoted to two-class imbalanced problems, but much less attention has been paid to the much more challenging multi-class imbalanced setup \cite{Krawczyk:2016}. The same carries over to continual learning from data streams, where most works have focused on binary streams \cite{Krawczyk:2017}. This is highly limiting for many modern real-world applications, and thus there is a need to develop skew-insensitive techniques that can handle multiple classes \cite{Saadallah:2019}. There is no single universal approach to viewing and analyzing multi-class imbalanced data streams. Therefore, we propose a taxonomy of the most crucial problems that can be encountered in this setting, creating three distinctive scenarios. They cover various learning difficulties that affect one or more classes and thus pose significant challenges for both drift detectors and classifiers. \begin{figure}[h]
\centering
\begin{subfigure}{0.33 \linewidth}{
\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/standard/0.pdf}
\subcaption{Before drift}}
\end{subfigure}\hspace*{1 pt}%
\begin{subfigure}{0.33 \linewidth}
{\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/standard/1.pdf}
\subcaption{I drift}}
\end{subfigure}\hspace*{1 pt}%
\begin{subfigure}{0.33 \linewidth}
{\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/standard/2.pdf}
\subcaption{II drift}}
\end{subfigure}\vspace*{3 pt}\\
\caption{Scenario 1 -- global concept drift and dynamic imbalance ratio. }
\label{fig:vis1 }
\end{figure}
\smallskip
\noindent \textbf{Scenario 1 : Global concept drift and dynamic imbalance ratio. } Here we assume that all classes are subject to a real concept drift that influences the decision boundaries. Additionally, the imbalance ratio among the classes changes together with the drift occurrences. However, class roles remain static: classes denoted as minority stay minority during the entire stream processing. This scenario challenges drift detectors through the varying degree of change in each class and the way these changes impact the decision boundaries. Changes in minority classes may be overlooked due to detector bias towards the majority ones, as detectors usually gather statistics over the entire data stream. This is depicted in Fig.~\ref{fig:vis1}. \begin{figure}[h]
\centering
\begin{subfigure}{0.33 \linewidth}{
\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/dynamic-rel/0.pdf}
\subcaption{Before drift}}
\end{subfigure}\hspace*{1 pt}%
\begin{subfigure}{0.33 \linewidth}
{\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/dynamic-rel/1.pdf}
\subcaption{I drift}}
\end{subfigure}\hspace*{1 pt}%
\begin{subfigure}{0.33 \linewidth}
{\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/dynamic-rel/2.pdf}
\subcaption{II drift}}
\end{subfigure}\vspace*{3 pt}\\
\caption{Scenario 2 -- global concept drift, dynamic imbalance ratio, and changing class roles. }
\label{fig:vis2 }
\end{figure}
\smallskip
\noindent \textbf{Scenario 2 : Global concept drift, dynamic imbalance ratio, and changing class roles. } Here we extend Scenario 1 by adding the third learning difficulty -- changing class roles. Now the imbalance ratio is subject to more significant changes and as a result classes may switch roles -- minority may become majority and vice versa. This is especially challenging to track in a multi-class case, where relationships among classes are more complex. Drift detectors have difficulties with keeping any reliable statistics coming from classes that rapidly change their roles. This may lead to frequently switching bias towards whichever class is currently the most frequent one. This is depicted in Fig. ~\ref{fig:vis2 }. \begin{figure}[h]
\centering
\begin{subfigure}{0.33 \linewidth}{
\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/local/0.pdf}
\subcaption{Before drift}}
\end{subfigure}\hspace*{1 pt}%
\begin{subfigure}{0.33 \linewidth}
{\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/local/1.pdf}
\subcaption{I drift}}
\end{subfigure}\hspace*{1 pt}%
\begin{subfigure}{0.33 \linewidth}
{\includegraphics[width=\linewidth, trim=2 cm 2 cm 2 cm 2 cm, clip]{img/local/2.pdf}
\subcaption{II drift}}
\end{subfigure}\vspace*{3 pt}%
\caption{Scenario 3 -- local concept drift, dynamic imbalance ratio, and changing class roles. }
\label{fig:vis3 }
\end{figure}
\smallskip
\noindent \textbf{Scenario 3 : Local concept drift, dynamic imbalance ratio, and changing class roles. } This is the most challenging scenario: it retains the dynamic imbalance ratio and changing class roles of Scenario 2, but moves from global concept drift to a local one. That means that at a given moment only a subset of classes (or even a single one) may be affected by a real concept drift, while the remaining ones are subject to no changes or to a virtual concept drift that does not impact decision boundaries (see Sec. 2 ). In such a setting we should be able to tell not only whether drift takes place, but also which classes are affected. This is a big step towards understanding the dynamics of concept drift and adapting the classifier only in specific regions of the decision space (leading to savings in time and computational resources). This is the most challenging scenario for concept drift detectors, as changes happening in minority classes will remain unnoticed when a detector is biased towards the majority class. This is depicted in Fig.~\ref{fig:vis3}. \smallskip
\noindent \textbf{Real-world problems affected by multi-class imbalance and concept drift.} The three defined scenarios are not only interesting from a theoretical point of view but also transfer directly to a plethora of real-world applications. In cybersecurity, we deal with multiple types of attacks that appear with varying frequencies (multi-class, extremely imbalanced problems). Some of those attacks will dynamically change over time to bypass new security settings, while legal transactions will not be affected by such concept drift. In computer vision, target detection focuses on finding a few specific targets and differentiating them from a much larger background. Targets may change their nature over time, being subject to variations or even camouflage. In natural language processing, we must deal with constantly evolving wording and slang used by various minority groups, where changes in those groups happen independently. \section{Restricted Boltzmann Machine for imbalanced drift detection}
\label{sec:rrb}
\smallskip
\noindent \textbf{Overview of the proposed method.} We introduce a novel concept drift detector for multi-class imbalanced data streams, implemented as a Restricted Boltzmann Machine (RBM-IM) whose robustness to skewed distributions is achieved via a dedicated loss function. It is a fully trainable drift detector, capable of autonomous adaptation to the current state of a stream, imbalance ratios, and class roles, without relying on user-defined thresholds. \subsection{Skew-insensitive Restricted Boltzmann Machine}
\label{sec:rbm}
\noindent \textbf{RBM-IM neural network architecture. } Restricted Boltzmann Machines (RBMs) are generative two-layered neural networks \cite{Ramasamy:2020 } constructed using the $\mathbf{v}$ layer of $V$ visible neurons and the $\mathbf{h}$ layer of $H$ hidden neurons:
\begin{equation}
\begin{split}
\mathbf{v} = [v_1, \cdots, v_V] \in \{0,1 \}^V, \\
\mathbf{h} = [h_1, \cdots, h_H] \in \{0,1 \}^H
\end{split}
\label{eq:rbm1 }
\end{equation}
We deal with supervised continual learning from data streams (as defined in Sec.~2), thus we need to extend this two-layer RBM architecture with a third layer $\mathbf{z}$ for class representation. It is implemented as a continuous encoding, meaning that each neuron in $\mathbf{z}$ returns a real-valued support for one of the analyzed classes (thus being responsible for the classification process). By $\mathbf{m}_z$ we denote the vector of RBM outputs with the support returned by the $z$-th neuron for the $m$-th class. This allows us to define $\mathbf{z}$, known also as the class layer or the softmax layer:
\begin{equation}
\mathbf{z} = [z_1, \cdots, z_Z] \in \{\mathbf{m}_1, \cdots, \mathbf{m}_Z\}. \end{equation}
\noindent This class layer uses the softmax function to estimate the probabilities of activation of each neuron in $\mathbf{z}$. RBMs do not have connections between units in the same layer, which holds for $\mathbf{v}$, $\mathbf{h}$, and $\mathbf{z}$. Neurons in the visible layer $\mathbf{v}$ are connected with neurons in the hidden layer $\mathbf{h}$, and neurons in $\mathbf{h}$ are connected with those in the class layer $\mathbf{z}$. The weight assigned to a connection between the $i$-th visible neuron $v_i$ and the $j$-th hidden neuron $h_j$ is denoted as $w_{ij}$, while the weight assigned to a connection between the $j$-th hidden neuron $h_j$ and the $k$-th class neuron $z_k$ is denoted as $u_{jk}$. This is used to define the RBM energy function:
\begin{equation}
\begin{split}
E(\mathbf{v}, \mathbf{h}, \mathbf{z}) = - \sum_{i=1 }^V v_i a_i - \sum_{j=1 }^H h_j b_j - \sum_{k=1 }^Z z_k c_k \\
- \sum_{i=1 }^V \sum_{j=1 }^H v_i h_j w_{ij} - \sum_{j=1 }^H \sum_{k=1 }^Z h_j z_k u_{jk},
\end{split}
\label{eq:rbm2 }
\end{equation}
\noindent where $a_i$, $b_j$, and $c_k$ are biases introduced to $\mathbf{v}$, $\mathbf{h}$, and $\mathbf{z}$, respectively. The energy $E(\cdot)$ of a state $[\mathbf{v}, \mathbf{h}, \mathbf{z}]$ is used to calculate the probability of the RBM being in that state (i.e., assuming certain neuron activations), using the Boltzmann distribution:
\begin{equation}
P(\mathbf{v}, \mathbf{h}, \mathbf{z}) = \frac{\exp \left( -E(\mathbf{v}, \mathbf{h}, \mathbf{z})\right)}{F},
\label{eq:rbm3 }
\end{equation}
\noindent where $F$ is a partition function that normalizes the probability $P(\mathbf{v}, \mathbf{h}, \mathbf{z})$ to sum to 1. Hidden neurons in $\mathbf{h}$ are conditionally independent given the features in the visible layer $\mathbf{v}$. The activation probability of the $j$-th hidden neuron $h_j$ can be calculated as follows:
\begin{equation}
\begin{split}
P(h_j |\mathbf{v}, \mathbf{z}) = \frac{1 }{1 + \exp\left(-b_j - \sum_{i=1 }^V v_i w_{ij} - \sum_{k=1 }^Z z_k u_{jk}\right)} \\
= \sigma \left( b_j + \sum_{i=1 }^V v_i w_{ij} + \sum_{k=1 }^Z z_k u_{jk}\right),
\end{split}
\label{eq:rbm4 }
\end{equation}
\noindent where $\sigma(\cdot) = 1/(1+\exp(-\cdot))$ stands for the sigmoid function. The same reasoning applies to neurons in the visible layer $\mathbf{v}$ when the values of neurons in the hidden layer $\mathbf{h}$ are known. This allows us to calculate the activation probability of the $i$-th visible neuron as:
\begin{equation}
\begin{split}
P(v_i |\mathbf{h}) = \frac{1 }{1 + \exp\left(-a_i - \sum_{j=1 }^H h_j w_{ij}\right)} \\
= \sigma \left( a_i + \sum_{j=1 }^H h_j w_{ij} \right),
\end{split}
\label{eq:rbm5 }
\end{equation}
\noindent where one must note that, given $\mathbf{h}$, the activation probability of neurons in $\mathbf{v}$ does not depend on $\mathbf{z}$. The activation probability of the class layer (i.e., the decision on which class the object should be assigned to) is calculated using the softmax function:
\begin{equation}
P(\mathbf{z} = \mathbf{1 }_k |\mathbf{h}) = \frac{\exp \left( - c_k - \sum_{j=1 }^H h_j u_{jk}\right)}{\sum_{l=1 }^Z \exp\left( - c_l - \sum_{j=1 }^H h_j u_{jl}\right)},
\label{eq:rbm6 }
\end{equation}
\noindent where $k \in \{1, \cdots, Z\}$. \smallskip
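The conditional activation probabilities above translate directly into a few lines of NumPy. The matrix shapes ($W$ of size $V \times H$, $U$ of size $H \times Z$) and the function names below are our own illustrative choices, not part of the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h, z, a, b, c, W, U):
    # E(v, h, z): bias terms plus v-h and h-z interaction terms
    return -(v @ a + h @ b + z @ c + v @ W @ h + h @ U @ z)

def p_hidden(v, z, b, W, U):
    # P(h_j = 1 | v, z): sigmoid over visible and class inputs
    return sigmoid(b + v @ W + z @ U.T)

def p_visible(h, a, W):
    # P(v_i = 1 | h): depends only on the hidden layer
    return sigmoid(a + W @ h)

def p_class(h, c, U):
    # softmax over class neurons, with the sign convention of the text
    logits = -(c + h @ U)
    e = np.exp(logits - logits.max())  # max-shift for numerical stability
    return e / e.sum()
```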
\noindent \textbf{RBM training procedure.} As RBM-IM is a neural network model, we may train it by minimizing a loss function $L(\cdot)$ with any gradient descent method. A standard RBM most commonly uses the negative log-likelihood of both external layers $\mathbf{v}$ and $\mathbf{z}$. However, our RBM-IM architecture must be designed to handle multiple imbalanced classes. Therefore, we need to modify this loss function to make RBM-IM skew-insensitive. We achieve this by using the effective number of samples approach \cite{Cui:2019 } that measures the contributions of instances in each class. This allows us to formulate a class-balanced negative log-likelihood loss for RBM-IM:
\begin{equation}
L(\mathbf{v}, \mathbf{z}) = - \frac{1 - \beta}{1 - \beta^{x}_{m}}\log\left( P(\mathbf{v}, \mathbf{z}) \right),
\label{eq:rbm7 }
\end{equation}
\noindent where $\beta^{x}_{m}$ stands for the contribution of the $x$-th instance to the $m$-th class. For each individual weight $w_{ij}$, we may now calculate the gradient of the loss function:
\begin{equation}
\begin{split}
\nabla L(w_{ij}) = \frac{\delta L(\mathbf{v}, \mathbf{z})}{\delta w_{ij}} = \sum_{\mathbf{v}, \mathbf{h}, \mathbf{z}} P(\mathbf{v}, \mathbf{h}, \mathbf{z}) v_i h_j \\
- \sum_{\mathbf{h}} P(\mathbf{h}|\mathbf{v}, \mathbf{z}) v_i h_j. \end{split}
\label{eq:rbm8 }
\end{equation}
\noindent This equation allows us to calculate the loss function gradient for a single instance. However, as we use RBM-IM as a drift detector, we must be able to capture the evolving properties of a data stream. If we based our change detection on variations induced by a single new instance, we would be highly sensitive to even the smallest amount of noise. Therefore, our RBM-based drift detector must work with a batch of the most recent instances in order to capture the current stream characteristics. We propose to define the RBM-IM model for learning on mini-batches of instances. This offers a significant speed-up when compared to traditional batch learning used in data streams. For a mini-batch of $n$ instances arriving at time $t$, $\mathbf{M}_t = \{x_1^t, \cdots, x_n^t\}$, we can rewrite the gradient from Eq.~\ref{eq:rbm8 } using expected values with the loss function:
\begin{equation}
\frac{\delta L(\mathbf{M}_t)}{\delta w_{ij}} = E_{\text{model}}[v_i h_j] - E_{\text{data}}[v_i h_j],
\label{eq:rbm9 }
\end{equation}
\noindent where $E_{\text{data}}$ is the expected value over the current mini-batch of instances and $E_{\text{model}}$ is the expected value under the current state of RBM-IM. We cannot compute $E_{\text{model}}$ directly (as this would require oracle access to the true data distribution), therefore we approximate it using Contrastive Divergence with $k$ Gibbs sampling steps that reconstruct the input data (CD-$k$):
\begin{equation}
\frac{\delta L(\mathbf{M}_t)}{\delta w_{ij}} \approx E_{\text{recon}}[v_i h_j] - E_{\text{data}}[v_i h_j].
\label{eq:rbm10 }
\end{equation}
\noindent This yields the following update rule for the weight $w_{ij}$ at time $t$, with learning rate $\eta$:
\begin{equation}
w_{ij}^{t+1} = w_{ij}^{t} - \eta \left( E_{\text{recon}}[v_i h_j] - E_{\text{data}}[v_i h_j]\right).
\label{eq:rbm11 }
\end{equation}
\noindent The update rule for the $a_i$, $b_j$, and $c_k$ biases, as well as for the weights $u_{jk}$, is analogous to Eq.~\ref{eq:rbm11 } and can be expressed as:
\begin{equation}
a_{i}^{t+1 } = a_{i}^{t} - \eta \left( E_{\text{recon}}[v_i] - E_{\text{data}}[v_i]\right),
\label{eq:rbm12 }
\end{equation}
\begin{equation}
b_{j}^{t+1 } = b_{j}^{t} - \eta \left( E_{\text{recon}}[h_j] - E_{\text{data}}[h_j]\right),
\label{eq:rbm13 }
\end{equation}
\begin{equation}
c_{k}^{t+1 } = c_{k}^{t} - \eta \left( E_{\text{recon}}[z_k] - E_{\text{data}}[z_k]\right),
\label{eq:rbm14 }
\end{equation}
\begin{equation}
u_{jk}^{t+1 } = u_{jk}^{t} - \eta \left( E_{\text{recon}}[h_j z_k] - E_{\text{data}}[h_j z_k]\right). \label{eq:rbm15 }
\end{equation}
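One mini-batch CD-1 update with the class-balanced weighting can be sketched in NumPy as follows. The helper names (`cd1_update`, `cb_weight`), the in-place update style, and the exact placement of the per-instance weight are our own simplifications rather than the authors' implementation:

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def _softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cb_weight(n_m, beta=0.999):
    # class-balanced factor (1 - beta) / (1 - beta^n_m); assumes n_m >= 1
    return (1.0 - beta) / (1.0 - beta ** n_m)

def cd1_update(V_batch, Z_batch, a, b, c, W, U, eta=0.05,
               n_per_class=None, beta=0.999, rng=None):
    """One skew-weighted CD-1 step over a mini-batch (rows = instances)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # positive (data) phase: hidden probabilities driven by v and z
    ph0 = _sigmoid(b + V_batch @ W + Z_batch @ U.T)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative (reconstruction) phase after one Gibbs step
    v1 = _sigmoid(a + h0 @ W.T)
    z1 = _softmax(-(c + h0 @ U))        # sign convention of the class layer
    ph1 = _sigmoid(b + v1 @ W + z1 @ U.T)
    # per-instance class-balanced weights via the effective number of samples
    if n_per_class is not None:
        w = cb_weight(n_per_class[Z_batch.argmax(axis=1)], beta)[:, None]
    else:
        w = np.ones((V_batch.shape[0], 1))
    n = V_batch.shape[0]
    # descend along E_recon[.] - E_data[.] for every parameter group
    W -= eta * ((w * v1).T @ ph1 - (w * V_batch).T @ ph0) / n
    U -= eta * ((w * ph1).T @ z1 - (w * ph0).T @ Z_batch) / n
    a -= eta * np.mean(w * (v1 - V_batch), axis=0)
    b -= eta * np.mean(w * (ph1 - ph0), axis=0)
    c -= eta * np.mean(w * (z1 - Z_batch), axis=0)
    return a, b, c, W, U
```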
\subsection{Drift detection with RBM-IM}
\label{sec:drd}
Although RBM-IM is a skew-insensitive generative neural network model, we can use it as an explicit drift detector. The RBM-IM model stores compressed characteristics of the distribution of the data it was trained on. By using any similarity measure between the stored data prototypes and the properties of newly arrived instances, one may evaluate whether there are any changes in the distribution. This allows us to use RBM-IM as a drift detector. Our model uses an embedded similarity measure for monitoring the state of a stream and the degree to which the newly arrived instances differ from the previously observed concepts. RBM-IM tracks the similarity measure for every single class independently, using the continuous outputs of the class layer. RBM-IM is a fully trainable and self-adaptive drift detector, capable not only of capturing the trends of changes in each class independently (versus state-of-the-art drift detectors that monitor changes in all classes with an aggregated measure), but also of learning and adapting to the current state of a stream, class imbalance ratios, and class roles. This makes it a highly attractive approach for handling multi-class imbalanced streams with the various learning difficulties discussed in Sec.~4. \smallskip
\noindent \textbf{Measuring data similarity.} In order to evaluate the similarity of newly arrived instances to the old concepts stored in RBM-IM, we use the reconstruction error metric. We can calculate it online for each new instance by feeding a newly arrived $d$-dimensional instance $S_n = [x_1^n, \cdots, x_d^n, y^n]$ into the $\mathbf{v}$ layer of the RBM. The values of neurons in $\mathbf{v}$ are then recomputed to reconstruct the feature values. Finally, the class layer $\mathbf{z}$ is activated and used to reconstruct the class label. This allows us to keep track of the reconstruction error for each class independently, offering per-class drift detection capabilities. We can denote the reconstructed vector for the $m$-th class as:
\begin{equation}
\tilde{S}_n^m = [\tilde{x}_1 ^n, \cdots, \tilde{x}_d^n, \tilde{y}_1 ^n, \cdots, \tilde{y}_Z^n],
\label{eq:rbm16 }
\end{equation}
\noindent where the reconstructed vector features and labels are taken from probabilities calculated using the hidden layer:
\begin{equation}
\tilde{x}_i^n = P (v_i|h),
\label{eq:rbm17 }
\end{equation}
\begin{equation}
\tilde{y}_k^n = P (z_k|h). \label{eq:rbm18 }
\end{equation}
\noindent The $\mathbf{h}$ layer is sampled from the conditional probability in which the $\mathbf{v}$ layer is clamped to the input instance:
\begin{equation}
\mathbf{h} \sim P(\mathbf{h}|\mathbf{v} = x^n, \mathbf{z} = \mathbf{1 }_{y_n}). \label{eq:rbm19 }
\end{equation}
\noindent This allows us to write the reconstruction error as the root of the summed squared differences between the true and the reconstructed instance for the $m$-th class:
\begin{equation}
R(S_n^m) = \sqrt{\sum_{i=1 }^d (x_i^n - \tilde{x}_i^n)^2 + \sum_{k=1 }^Z(\mathbf{1 }^{y_n}_k - \tilde{y}_k^n)^2 }. \label{eq:rbm19 }
\end{equation}
For the purpose of obtaining a stable concept drift detector, we do not look for a change in distribution over a single instance, but for the change over the newly arriving mini-batch of instances. Therefore, we need to calculate the average reconstruction error over the recent mini-batch of data for the $m$-th class:
\begin{equation}
R(\mathbf{M}_t^m) = \frac{1 }{n} \sum_{i=1 }^n R(S_i^m). \label{eq:rbm20 }
\end{equation}
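The per-instance and per-batch reconstruction errors above can be sketched as follows (the variable names are illustrative):

```python
import numpy as np

def reconstruction_error(x, y_onehot, x_rec, y_rec):
    # root of summed squared differences over features and class supports
    return float(np.sqrt(np.sum((x - x_rec) ** 2)
                         + np.sum((y_onehot - y_rec) ** 2)))

def batch_reconstruction_error(X, Y, X_rec, Y_rec):
    # average per-instance error over a mini-batch (rows = instances)
    return float(np.mean([reconstruction_error(x, y, xr, yr)
                          for x, y, xr, yr in zip(X, Y, X_rec, Y_rec)]))
```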
\smallskip
\noindent \textbf{Adapting reconstruction error to drift detection.} In order to make the reconstruction error a practical measure for detecting the presence of concept drift, we propose to monitor the evolution of this measure (i.e., its trend) over the arriving mini-batches of instances. The analysis of trends is done for each class independently, allowing us to effectively detect local concept drifts. We achieve this by using the well-known sliding window technique moving over the arriving mini-batches. Let us denote the trend of the reconstruction error for the $m$-th class over time as $Q_r(t)^m$ and calculate it using the following equation:
\begin{equation}
Q_r(t)^m = \frac{\bar{n}_t^m \bar{TR}_t - \bar{T}_t \bar{R}_t}{\bar{n}_t^m \bar{T^2 }_t - (\bar{T}_t)^2 }. \label{eq:rbm21 }
\end{equation}
\noindent The trend over time can be computed using a simple linear regression, with the terms in Eq. ~\ref{eq:rbm21 } being simply sums over time as follows:
\begin{equation}
\bar{TR}_t = \bar{TR}_{t-1 } + t R(\mathbf{M}_t^m),
\label{eq:rbm22 }
\end{equation}
\begin{equation}
\bar{T}_t = \bar{T}_{t-1 } + t,
\label{eq:rbm23 }
\end{equation}
\begin{equation}
\bar{R}_t = \bar{R}_{t-1 } + R(\mathbf{M}_t^m),
\label{eq:rbm24 }
\end{equation}
\begin{equation}
\bar{T^2 }_t = \bar{T^2 }_{t-1 } + t^2,
\label{eq:rbm25 }
\end{equation}
\noindent where $\bar{TR}_0 = 0 $, $\bar{T}_0 = 0 $, $\bar{R}_0 = 0 $, and $\bar{T^2 }_0 = 0 $. We capture those statistics for each class using a sliding window of size $W$. Instead of a manually set size, which is inefficient for drifting data streams, we use a self-adaptive window size \cite{Bifet:2007 }, eliminating the need for manual tuning of the window used for drift detection. To allow flexible learning from various sizes of mini-batches, we must consider the case where $t > W$. Here, the terms for the trend regression are computed using the following equations:
\begin{equation}
\bar{TR}_t = \bar{TR}_{t-1 } + t R(\mathbf{M}_t^m) - (t - W)R(\mathbf{M}_{t-W}^m),
\label{eq:rbm26 }
\end{equation}
\begin{equation}
\bar{T}_t = \bar{T}_{t-1 } + t - (t - W),
\label{eq:rbm27 }
\end{equation}
\begin{equation}
\bar{R}_t = \bar{R}_{t-1 } + R(\mathbf{M}_t^m) - R(\mathbf{M}_{t-W}^m),
\label{eq:rbm28 }
\end{equation}
\begin{equation}
\bar{T^2 }_t = \bar{T^2 }_{t-1 } + t^2 - (t - W)^2. \label{eq:rbm29 }
\end{equation}
\noindent The number of instances $\bar{n}_t^m$ required to compute the trend $Q_r(t)^m$ for the $m$-th class at time $t$ is given as follows:
\begin{equation}
\bar{n}_t^m =
\begin{cases}
t & \quad \text{if } t \leq W\\
W & \quad \text{if } t > W. \end{cases}
\label{eq:rbm30 }
\end{equation}
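The incremental trend computation can be sketched with an explicit window of $(t, R)$ pairs; note that RBM-IM itself uses a self-adaptive window, so the fixed window size below is a simplification, and the class name is our own:

```python
from collections import deque

class TrendTracker:
    """Sliding-window least-squares slope of the per-class reconstruction
    error over time, maintained from incremental sums."""

    def __init__(self, W=50):
        self.W = W
        self.window = deque()   # (t, R) pairs currently inside the window
        self.sTR = self.sT = self.sR = self.sT2 = 0.0

    def update(self, t, R):
        # add the newest term to every running sum
        self.window.append((t, R))
        self.sTR += t * R
        self.sT += t
        self.sR += R
        self.sT2 += t * t
        if len(self.window) > self.W:   # slide: drop the oldest term
            t_old, R_old = self.window.popleft()
            self.sTR -= t_old * R_old
            self.sT -= t_old
            self.sR -= R_old
            self.sT2 -= t_old * t_old
        # regression slope: (n*sum(tR) - sum(t)sum(R)) / (n*sum(t^2) - sum(t)^2)
        n = len(self.window)
        denom = n * self.sT2 - self.sT ** 2
        return 0.0 if denom == 0 else (n * self.sTR - self.sT * self.sR) / denom
```

Feeding it a linearly growing reconstruction error returns the growth rate as the slope, while a flat error yields a slope near zero.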
\noindent \textbf{Drift detection.} Eq.~\ref{eq:rbm21 } allows us to compute the trends for every analyzed mini-batch of data. In order to detect the presence of drift, we need the capability of checking, for each analyzed class, whether the new mini-batch differs significantly from the previous one. RBM-IM uses the Granger causality test \cite{Sun:2008 } on trends from subsequent mini-batches of data for each class, $Q_r(\mathbf{M}_t^m)$ and $Q_r(\mathbf{M}_{t+1 }^m)$. This is a statistical test that determines whether one trend is useful in forecasting another. As we deal with non-stationary processes, we perform the variation of the Granger causality test based on first differences \cite{Mahjoub:2020 }. If the hypothesis is accepted, a Granger causality relationship is assumed to exist between $Q_r(\mathbf{M}_t^m)$ and $Q_r(\mathbf{M}_{t+1 }^m)$, which means there is no concept drift in the $m$-th class. If the hypothesis is rejected, RBM-IM signals the presence of concept drift in the $m$-th class. \section{Experimental study}
\label{sec:exp}
In this section we present the experimental study used to evaluate the quality of RBM-IM. It was carefully designed to offer an in-depth analysis of the proposed method and to gain insights into its behavior in various multi-class imbalanced data stream scenarios. We tailored this study to answer the following research questions. \begin{itemize}
\item \textbf{RQ1:} Does RBM-IM offer better concept drift detection than state-of-the-art drift detectors designed for standard data streams? \item \textbf{RQ2:} Does RBM-IM offer better concept drift detection than state-of-the-art skew-insensitive drift detectors designed for imbalanced data streams? \item \textbf{RQ3:} What is the capability of RBM-IM to detect local drifts that affect a subset of minority classes? \item \textbf{RQ4:} What robustness to increasing imbalance ratio is offered by RBM-IM? \end{itemize}
\noindent All methods and experiments were implemented in the MOA environment \cite{Bifet:2010moa} and run on an Intel Core i7-8365U with 64 GB DDR4 RAM. \subsection{Data stream benchmarks}
For the purpose of this experimental study, we selected 24 benchmark data streams: 12 come from real-world domains and 12 were generated artificially using the MOA environment \cite{Bifet:2010moa}. Such a diverse mix allowed us to evaluate the effectiveness of RBM-IM over a plethora of scenarios. Using artificial data streams allows us to control the specific nature of drift and class imbalance, as well as to inject local concept drift into selected minority classes. Artificial data streams use a dynamic imbalance ratio that both increases and decreases over time. Real-world streams offer challenging problems that are characterized by a mix of different learning difficulties. Properties of the data stream benchmarks are given in Tab.~\ref{tab:data}. We report the highest imbalance ratio among all the classes, i.e., the ratio between the biggest and the smallest class. \begin{table}[ht]
\centering
\caption{Properties of real-world (top) and artificial (bottom) imbalanced data stream benchmarks. }
\label{tab:data}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{lrrrrl}
\toprule
Stream & Instances & Features & Classes & IR & Drift\\
\midrule
4 & unknown\\
Crimes & 878 049 & 3 & 39 & 106.72 & unknown\\
DJ30 & 138 166 & 8 & 30 & 204.66 & yes\\
EEG & 14 980 & 14 & 2 & 29.88 & yes\\
Electricity & 45 312 & 8 & 2 & 17.54 & yes\\
Gas & 13 910 & 128 & 6 & 138.03 & yes\\
Olympic & 271 116 & 7 & 4 & 66.82 & unknown\\
Poker & 829 201 & 10 & 10 & 144.00 & yes\\
IntelSensors & 2 219 804 & 5 & 57 & 348.26 & yes\\
Tags &164 860 & 4 & 11 & 194.28 & unknown\\
\midrule
Aggrawal5 & 1 000 000 & 20 & 5 & 50.00 & incremental\\
Aggrawal10 & 1 000 000 & 40 & 10 & 80.00 & incremental\\
Aggrawal20 & 2 000 000 & 80 & 20 & 100.00 & incremental\\
Hyperplane5 & 1 000 000 & 20 & 5 & 100.00 & gradual\\
Hyperplane10 & 1 000 000 & 40 & 10 & 200.00 & gradual\\
Hyperplane20 & 2 000 000 & 80 & 20 & 300.00 & gradual\\
RBF5 & 1 000 000 & 20 & 5 & 100.00 & sudden\\
RBF10 & 1 000 000 & 40 & 10 & 200.00 & sudden\\
RBF20 & 2 000 000 & 80 & 20 & 300.00 & sudden\\
RandomTree5 &1 000 000 & 20 & 5 & 100.00 & sudden\\
RandomTree10 & 1 000 000 & 40 & 10 & 200.00 & sudden\\
RandomTree20 & 2 000 000 & 80 & 20 & 300.00 & sudden\\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Setup}
\noindent \textbf{Reference concept drift detectors.} As reference methods for the proposed RBM-IM, we selected three state-of-the-art concept drift detectors for standard data: WSTD \cite{Barros:2018w}, RDDM \cite{Barros:2017 }, and FHDDM \cite{Pesaranghader:2016 }; as well as two state-of-the-art drift detectors for imbalanced data streams: PerfSim and DDM-OCI. Parameters of all six drift detectors are given in Tab.~\ref{tab:ddp}. \begin{table}[ht]
\centering
\caption{Examined drift detectors and their parameters. }
\scalebox{0.65 }{
\begin{tabular}{lll}
\toprule
Abbr. & Name & Parameters \\
\midrule
WSTD \cite{Barros:2018w} & Wilcoxon Rank Sum Test & sliding window size $\omega \in \{25,50,75,100 \}$\\
&Drift Detection & warning significance $\alpha_w \in \{0.01,0.03,0.05,0.07 \}$\\
& & drift significance $\alpha_d \in \{0.001,0.003,0.005,0.007 \}$\\
& & max. no of old instances $\min \in \{1000,2000,3000,4000 \}$\\
RDDM \cite{Barros:2017 } & Reactive Drift Detection & warning threshold $\alpha_w \in \{0.90,0.92,0.95,0.98 \}$\\
& & drift threshold $\alpha_d \in \{0.80,0.85,0.90,0.95 \}$\\
& & min. no. of errors $e \in \{10,30,50,70 \}$\\
& & min. no. of instances $\min \in \{3000,5000,7000,9000 \}$\\
& & max. no. of instances $\max \in \{10000,20000,30000,40000 \}$\\
& & warning limit $wL \in \{800,1000,1200,1400 \}$\\
FHDDM \cite{Pesaranghader:2016 }& Fast Hoeffding Drift Detection & sliding window size $\omega \in \{25,50,75,100 \}$\\
& & allowed error $\delta \in \{0.000001,0.00001,0.0001,0.001 \}$ \\
PerfSim \cite{Antwi:2012 } & Performance Similarity & differentiation weights $\lambda \in \{0.1,0.2,0.3,0.4 \}$\\
& & min. no. of errors $n = \{10,30,50,70 \}$\\
DDM--OCI \cite{Wang:2020 } & Drift Detection Method & warning threshold $\alpha_w \in \{0.90,0.92,0.95,0.98 \}$\\
& for online class imbalance & drift threshold $\alpha_d \in \{0.80,0.85,0.90,0.95 \}$\\
& & min. no. of errors $e \in \{10,30,50,70 \}$\\
\midrule
RBM-IM & RBM Drift Detection& mini--batch size $\mathbf{M} \in \{25,50,75,100 \}$\\
& for imbalanced data streams & visible neurons $\mathbf{V} = $ no. of features\\
& & hidden neurons $\mathbf{H} \in \{0.25 \mathbf{V},0.5 \mathbf{V},0.75 \mathbf{V}, \mathbf{V}\}$ \\
& & class neurons $\mathbf{Z} = $ no. of classes\\
& & learning rate $\eta \in \{0.01,0.03,0.05,0.07 \}$\\
& & Gibbs sampling steps $k \in \{1,2,3,4 \}$\\
\bottomrule
\label{tab:ddp}
\end{tabular}
}
\end{table}
\noindent \textbf{Parameter tuning.} In order to offer a fair and thorough comparison, we performed parameter tuning for every drift detector and for every data stream benchmark. As we deal with a streaming scenario, we used self hyper-parameter tuning \cite{Veloso:2018 } based on the online Nelder-Mead optimization. \smallskip

\noindent \textbf{Base classifier.} In order to ensure fairness when comparing the examined drift detectors, they all use Adaptive Cost-Sensitive Perceptron Trees \cite{Krawczyk:2017ecml} as a base classifier. This is a skew-insensitive and efficient classifier capable of handling both binary and multi-class imbalanced data streams, but it is strongly dependent on an attached concept drift detection component. Therefore, it offers an excellent backbone for our experiments, allowing us to directly measure how a given drift detector impacts the classification quality. \smallskip

\noindent \textbf{RBM-IM training.} Our drift detector uses the first instance batch to train itself at the beginning of the stream processing. It then continuously updates itself in an online fashion together with the base classifier. \smallskip

\noindent \textbf{Evaluation metrics.} As we deal with multi-class imbalanced and drifting data streams, we evaluated the examined algorithms using prequential multi-class AUC \cite{Wang:2020 } and prequential multi-class G-mean \cite{Korycki:2020 }. \smallskip

\noindent \textbf{Windows.} We used a window size $W = 1000 $ for calculating the prequential metrics. The ADWIN self-adapting window was used for both RBM-IM and the reference drift detectors to alleviate the need for manual window size tuning \cite{Korycki:2020 }. \smallskip

\noindent \textbf{Statistical analysis.} We used the Friedman ranking test with the Bonferroni-Dunn post-hoc test and the Bayesian signed test \cite{Benavoli:2017 } for statistical significance over multiple comparisons, with significance level $\alpha = 0.05 $. \smallskip

\noindent \textbf{Drift injection.} For experiment 2, we inject local concept drift starting with the smallest minority class and then add classes according to their increasing size. This allows us to consider the most difficult scenarios, where the smallest classes are affected by the local concept drift and thus are most likely to be neglected. \begin{table*}[ht]
\centering
\caption{Results according to pmAUC, pmGM and update times per batch for the examined concept drift detectors. }
\label{tab:res}
\resizebox{0.98\textwidth}{!}{
\begin{tabular}{l*{13}{c}}
\toprule
& \multicolumn{6}{c}{pmAUC} & & \multicolumn{6}{c}{pmGM}\\
\cmidrule(l){2-7}\cmidrule(l){9-14}
Stream & WSTD & RDDM & FHDDM & PerfSim & DDM--OCI & RBM-IM & & WSTD & RDDM & FHDDM & PerfSim & DDM--OCI & RBM-IM \\
\cmidrule(l){1 -7 }\cmidrule(l){8 -14 }
Activity-Raw & 45.43 & 46.23 & 48.45 & 72.81 & 74.29 &\textbf{79.92 } & &51.06 & 54.10 &55.82 &76.11 & 78.59 & \textbf{82.04 } \\
Connect4 & 54.19 &53.48 &55.27 &64.19 &69.10 & \textbf{75.04 }& & 55.03 & 55.39 & 56.29 & 66.08 &70.21 &\textbf{77.92 } \\
Covertype & 33.19 & 34.12 & 35.72 & 41.24 & 40.58 & \textbf{53.98 } & &32.45 & 33.10 & 35.98 & 40.19 & 41.02 & \textbf{54.02 } \\
Crimes &19.93 & 20.04 & 22.11 & 28.56 & 30.02 & \textbf{64.59 }& & 21.88 & 23.92 & 26.01 & 30.99 & 32.07 & \textbf{69.58 } \\
DJ30 &26.94 & 25.98 & 26.02 &34.11 &33.98 &\textbf{59.04 } & & 27.45 &27.11 &28.73 & 36.71 & 35.48 & \textbf{61.29 } \\
EEG & 58.14 & 59.98 & 62.29 & 70.08 & \textbf{74.22 }& 72.03 & &59.85 & 60.98 & 64.67 & 72.93 & \textbf{77.29 }& 74.13 \\
Electricity & 68.94 & 72.10 & 73.45 & 80.04 & \textbf{83.20 }& 79.39 & &70.45 &75.90 &77.28 & 83.92 & \textbf{85.44 }& 81.99 \\
Gas &48.83 & 47.23 & 46.92 & 63.59 & \textbf{67.54 }& 64.20 & & 50.05 & 49.54 & 49.17 & 65.98 & \textbf{70.02 }& 66.13 \\
Olympic &72.98 & 70.34 & 74.53 & 80.08 & 83.19 & \textbf{87.01 }& & 73.95 & 71.91 & 76.02 & 83.19 & 86.88 &\textbf{89.24 } \\
Poker &72.11 & 69.65 & 72.98 & 84.65 & 87.91 & \textbf{91.03 }& & 74.46 & 70.97 & 74.52 & 87.11 & 89.34 & \textbf{93.06 } \\
IntelSensors &9.45 & 11.45 & 13.99 & 36.23 & 37.08 & \textbf{58.10 }& & 10.02 & 13.01 & 14.38 & 37.82 & 38.03 & \textbf{60.39 } \\
Tags & 30.45 &28.67 & 29.45 & \textbf{42.68 }& 40.18 & 39.04 & & 33.10 & 30.08 & 31.14 & \textbf{45.28 } & 43.21 & 41.02 \\
\cmidrule(l){1 -7 }\cmidrule(l){8 -14 }
Aggrawal5 & 78.34 & 77.45 & 80.41 & 84.92 & 88.34 & \textbf{90.38 } & & 77.19 & 79.02 & 80.93 & 85.99 & 90.02 & \textbf{93.01 } \\
Aggrawal10 & 70.12 & 68.34 & 70.23 & 74.99 & 78.32 & \textbf{88.02 } & &71.04 & 70.16 & 71.88 & 75.38 & 79.14 & \textbf{90.49 } \\
Aggrawal20 &55.62 & 56.23 & 58.93 & 65.76 & 66.98 & \textbf{83.87 }& &56.45 &57.22 &59.39 & 66.28 & 67.57 &\textbf{85.09 } \\
Hyperplane5 & 62.05 & 63.66 & 62.07 &70.45 &73.98 & \textbf{75.06 }& &65.39 & 67.20 & 66.14 & 74.82 & 78.05 & \textbf{81.80 } \\
Hyperplane10 & 53.56 & 54.37 & 54.02 & 63.74 & 66.59 &\textbf{72.30 } & & 56.93 & 59.14 & 57.92 & 66.72 & 70.56 & \textbf{78.03 } \\
Hyperplane20 & 40.04 & 38.45 & 42.19 & 50.10 & 57.67 & \textbf{66.48 }& & 42.06 & 41.99 & 40.86 & 52.19 & 59.37 & \textbf{68.27 } \\
RBF5 &80.18 &78.56 &82.40 & 90.48 & 92.36 & \textbf{92.78 }& & 83.47 & 81.59 & 84.99 & 92.12 & 94.82 &\textbf{94.97 } \\
RBF10 & 69.45 & 67.84 & 73.29 &82.19 & 84.48 &\textbf{88.82 } & & 72.19 & 70.48 & 76.44 & 85.11 & 87.81 & \textbf{90.26 } \\
RBF20 & 53.18 & 52.88 & 54.01 & 70.24 & 71.93 & \textbf{83.08 } & & 55.98 & 54.90 & 57.73 & 73.89 & 74.84 & \textbf{85.30 } \\
RandomTree5 & 45.29 & 47.21 & 47.93 & 58.90 & 64.32 & \textbf{67.98 } & & 46.12 & 48.52 & 49.11 & 60.05 & 66.30 & \textbf{69.93 } \\
RandomTree10 & 31.63 & 33.19 & 35.02 & 50.02 & 53.87 & \textbf{63.01 }& & 32.79 & 33.90 & 36.14 & 51.58 & 55.20 & \textbf{64.97 } \\
RandomTree20 &19.83 & 20.04 & 21.38 & 36.29 & 43.22 & \textbf{59.42 }& & 20.02 & 20.88 &22.94 & 38.01 & 44.87 & \textbf{60.33 } \\
\cmidrule(l){1 -7 }\cmidrule(l){8 -14 }
ranks & 5.46 & 4.78 & 3.84 & 2.97 & 2.56 & 1.39 & & 5.80 & 5.05 & 4.15 & 2.45 & 2.29 & 1.26 \\
\cmidrule(l){1 -7 }\cmidrule(l){8 -14 }
Avg. test time [s] & 17.26 $\pm$3.11 & 18.11 $\pm$4.72 & 16.54 $\pm$2.98 & 8.92 $\pm$3.07 & 9.78 $\pm$4.14 & 6.28 $\pm$1.08 & & & & & & & \\
Avg. update time [s] & 0.02 $\pm$0.01 & 0.08 $\pm$0.02 & 0.11 $\pm$0.05 & 19.83 $\pm$6.98 & 18.54 $\pm$7.82 & 12.22 $\pm$0.92 & & & & & & & \\
\bottomrule
\end{tabular}
}
\end{table*}
\subsection{Experiment 1 : Drift detectors comparison}
The first experiment was designed to analyze the behavior of the six examined drift detectors under two different metrics, measured on all 24 benchmark data streams. This allows us to evaluate how competitive RBM-IM is compared with the state-of-the-art reference methods. Results according to pmAUC and pmGM are given in Tab.~\ref{tab:res}, while Fig.~\ref{fig:bon1 } and~\ref{fig:bon2 } depict the outcomes of the post-hoc statistical tests of significance. Fig.~\ref{fig:bt1 } and~\ref{fig:bt2 } present visualizations of the Bayesian signed test for pairwise comparisons with the two best-performing reference detectors. \begin{figure}[h]
\centering
\tiny
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\draw (1,0 ) -- (6.5,0 );
\foreach \x in {1,2,3,4,5,6 } {
\draw (\x, 0 ) -- ++(0,.1 ) node [below=0.15 cm, scale=0.75 ] {\tiny \x};
\ifthenelse{\x < 7 }{\draw (\x+.5, 0 ) -- ++(0,.03 );}{}
}
\coordinate (c0 ) at (1.39,0 );
\coordinate (c1 ) at (2.56,0 );
\coordinate (c2 ) at (2.97,0 );
\coordinate (c3 ) at (3.84,0 );
\coordinate (c4 ) at (4.78,0 );
\coordinate (c5 ) at (5.46,0 );
\node (l0 ) at (c0 ) [above right=.15 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny RBM-IM};
\node (l1 ) at (c1 ) [above right=.4 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny DDM-OCI};
\node (l2 ) at (c2 ) [above right=.15 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny PerfSim};
\node (l3 ) at (c3 ) [above right=.15 cm and 0.08 cm, align=center, scale=0.7 ] {\tiny FHDDM};
\node (l4 ) at (c4 ) [above right=.4 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny RDDM};
\node (l5 ) at (c5 ) [above right=.15 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny WSTD};
\fill[fill=gray, fill opacity=0.5 ] (1.43, -0.08 ) rectangle (1.45 +1.04,0.08 );
\foreach \x in {0,...,5 } {
\draw (l\x) -| (c\x);
};
\end{tikzpicture}
}
\caption{The Bonferroni-Dunn test (pmAUC). }
\label{fig:bon1 }
\end{figure}
\begin{figure}[h]
\centering
\tiny
\resizebox{\columnwidth}{!}{
\begin{tikzpicture}
\draw (1,0) -- (6.5,0);
\foreach \x in {1,2,3,4,5,6} {
\draw (\x, 0) -- ++(0,.1) node [below=0.15cm, scale=0.75] {\tiny \x};
\ifthenelse{\x < 7}{\draw (\x+.5, 0) -- ++(0,.03);}{}
}
\coordinate (c0) at (1.26,0);
\coordinate (c1) at (2.29,0);
\coordinate (c2) at (2.45,0);
\coordinate (c3) at (4.15,0);
\coordinate (c4) at (5.05,0);
\coordinate (c5) at (5.80,0);
\node (l0) at (c0) [above right=.15cm and 0.1cm, align=center, scale=0.7] {\tiny RBM-IM};
\node (l1) at (c1) [above right=.4cm and 0.1cm, align=center, scale=0.7] {\tiny DDM-OCI};
\node (l2 ) at (c2 ) [above right=.15 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny PerfSim};
\node (l3 ) at (c3 ) [above right=.15 cm and 0.08 cm, align=center, scale=0.7 ] {\tiny FHDDM};
\node (l4 ) at (c4 ) [above right=.4 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny RDDM};
\node (l5 ) at (c5 ) [above right=.15 cm and 0.1 cm, align=center, scale=0.7 ] {\tiny WSTD};
\fill[fill=gray, fill opacity=0.5 ] (1.3, -0.08 ) rectangle (1.3 +0.77,0.08 );
\foreach \x in {0,...,5 } {
\draw (l\x) -| (c\x);
};
\end{tikzpicture}
}
\caption{The Bonferroni-Dunn test (pmGM). }
\label{fig:bon2 }
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.49 \linewidth]{bt3. eps}
\includegraphics[width=0.49 \linewidth]{bt4. eps}
\caption{Visualizations of the Bayesian signed test for comparison between PerfSim and RBM-IM for pmAUC (left) and pmGM (right). }
\label{fig:bt1 }
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.49 \linewidth]{bt1. eps}
\includegraphics[width=0.49 \linewidth]{bt2. eps}
\caption{Visualizations of the Bayesian signed test for comparison between DDM-OCI and RBM-IM for pmAUC (left) and pmGM (right). }
\label{fig:bt2 }
\end{figure}
\smallskip
\noindent \textbf{Comparison with standard drift detectors. } The standard drift detectors return unsatisfactory performance for all of the examined multi-class imbalanced data streams. This shows that the metrics collected by them are unsuitable for monitoring skewed data streams. It also indicates that drift detectors, despite not being trainable models themselves, are still prone to class imbalance. Even though the underlying classifier was designed for imbalanced data streams, it could not offer accurate predictions when being fed incorrect information from the drift detectors. Especially in the case of data sets with a high number of classes (such as Crimes, DJ20, IntelSensor, or the artificial ones), standard drift detectors returned performance only slightly above a random guess. Those detectors were not capable of capturing changes affecting multiple class distributions and imbalance ratios at the same time. RBM-IM alleviated those limitations, while displaying comparable computational complexity. \smallskip
\noindent \textbf{Answer to RQ1 :} Yes, RBM-IM offers significant improvements over standard drift detectors when applied to monitoring multi-class imbalanced data streams. Standard detectors cannot handle both a high number of classes and simultaneous changes in distributions and imbalance ratios. This shows that we need to have dedicated drift detectors for such difficult scenarios. \smallskip
\noindent \textbf{Comparison with skew-insensitive drift detectors. } Skew-insensitive detectors performed significantly better than their standard counterparts. However, for most of the real-world benchmarks and for all the artificial ones they still could not compete with RBM-IM. The only four data sets on which they returned a slightly better performance were EEG, Electricity, Gas, and Tags. All of them are relatively small and have a low number of classes. Especially the former factor might have had a strong impact on RBM-IM. As a trainable drift detector, it probably suffered from underfitting when learning from small data streams. This could potentially be alleviated by combining RBM-IM with transfer learning or instance exploitation techniques, which we will investigate in our future work. For all of the remaining 20 benchmark data streams, RBM-IM outperformed both PerfSim and DDM-OCI in a statistically significant manner. This can be attributed to the compressed information about the current concept for each class stored within the RBM-IM structure, which allowed for a significantly more informative analysis of the changing properties of incoming instances. \smallskip
\noindent \textbf{Answer to RQ2:} Yes, RBM-IM is capable of outperforming state-of-the-art skew-insensitive drift detectors, while additionally offering faster detection and update times. This is especially visible on data sets with a high number of classes, where monitoring simple performance measures is not enough to detect occurrences of drifts accurately and in a timely manner. Additionally, by being a trainable detector, RBM-IM can better adapt to changes in data streams, allowing a fine-tuned encapsulation of what is currently considered a temporal concept.
\subsection{Experiment 2: Detection of local concept drifts}
This experiment was designed to understand if and how the examined drift detectors can handle the appearance of local concept drifts on top of changing imbalance ratios and class roles (see Sec. 3 -- Scenario 3 for more details). We carried out this experiment only on the artificial benchmarks, as they allowed us to directly inject concept drift into a selected number of classes. We evaluated how the performance of the drift detectors changes as the number of classes affected by the concept drift decreases. For each of the 12 benchmark data streams, we created scenarios where from 1 to $M$ classes are affected by the drift, the $M$ case standing for every single class in the stream being subject to the concept drift. Fig.~\ref{fig:cla} depicts the behavior of all six drift detectors under various levels of local concept drift for the pmAUC metric. We do not show plots for pmGM as they have very similar characteristics and would not provide any additional insights. Note that the smaller the number of classes subject to concept drift, the more difficult its detection becomes.
\begin{figure}[h]
\centering
\includegraphics[width=0.32\linewidth]{agg5.eps}
\includegraphics[width=0.32\linewidth]{agg10.eps}
\includegraphics[width=0.32\linewidth]{agg20.eps}
\includegraphics[width=0.32\linewidth]{hyp5.eps}
\includegraphics[width=0.32\linewidth]{hyp10.eps}
\includegraphics[width=0.32\linewidth]{hyp20.eps}
\includegraphics[width=0.32\linewidth]{rbf5.eps}
\includegraphics[width=0.32\linewidth]{rbf10.eps}
\includegraphics[width=0.32\linewidth]{rbf20.eps}
\includegraphics[width=0.32\linewidth]{rtr5.eps}
\includegraphics[width=0.32\linewidth]{rtr10.eps}
\includegraphics[width=0.32\linewidth]{rtr20.eps}
\caption{Relationship between pmAUC and the number of classes affected by the local drift for the artificial benchmarks. The lower the number of classes subject to concept drift, the more difficult its detection. }
\label{fig:cla}
\end{figure}
\smallskip
\noindent \textbf{Comparison with standard drift detectors. } Unsurprisingly, the standard detectors completely failed when facing the task of local drift detection. When the number of classes subject to concept drift dropped below 80 \%, we observed significant drops in their pmAUC. When the number of affected classes dropped below 50 \%, all three detectors started to completely ignore the presence of any drift. This crucially impacted the underlying classifier, which lost all adaptation capabilities, as the drift detectors never signaled that any change was present. Such results clearly support our earlier statement that standard drift detectors cannot handle local changes, as the statistics they monitor relate to the entire stream, not to specific classes. Furthermore, in the case of imbalanced multi-class drifting streams, the underlying bias toward the majority class had a strong impact on those statistics. This damaged the reactivity of those detectors to an even greater degree, as changes happening in minority classes were obscured by the static properties of the majority class. \smallskip
\noindent \textbf{Comparison with skew-insensitive drift detectors. } This experiment exposed the weak side of the skew-insensitive drift detectors published so far. While they display some robustness to changing class ratios and global concept drift, they did not perform significantly better than the standard detectors when facing local drifts. When more than 90 \% of classes were affected by drift, both PerfSim and DDM-OCI returned satisfactory performance. Their quality started degrading when fewer than 70 \% of classes were affected, reaching the lowest plateau below 30 \%. This shows that despite monitoring some performance metric for each class (e.g., DDM-OCI monitors recall), they do not extract strong enough properties of those classes to properly detect local drifts. Only when the majority of classes become subject to concept drift can those detectors pick up local changes. \smallskip
\noindent \textbf{RBM-IM sensitivity to local drifts. } RBM-IM displayed an excellent sensitivity to local drifts, even when they affected only a single class. This observation holds for any data set, any imbalance ratio, and any total number of classes. It can be attributed to the effectiveness of the reconstruction error, used as a change detection metric, combined with storing compressed information about each class independently and being able to compare the reconstruction error for each class individually. This allows RBM-IM to detect local drifts that at a given moment affect any number of classes. \smallskip
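RBM-IM itself relies on the reconstruction error of a trained RBM, which is not reproduced here. As a minimal sketch of the per-class monitoring idea described above (all names are ours, and a simple centroid distance stands in for the RBM reconstruction error), the following toy detector keeps one error baseline per class and flags any single class whose error leaves its own baseline, so a drift confined to one minority class remains visible:

```python
import numpy as np

# Toy illustration (not the actual RBM-IM): monitor a per-class
# "reconstruction" error (distance to the class centroid) and flag a
# local drift for any class whose batch error exceeds its baseline
# mean + n_sigma * std.
class PerClassErrorMonitor:
    def __init__(self, n_sigma=3.0):
        self.n_sigma = n_sigma
        self.baseline = {}  # class label -> (mean, std, centroid)

    def fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            centroid = Xc.mean(axis=0)
            errs = np.linalg.norm(Xc - centroid, axis=1)
            self.baseline[c] = (errs.mean(), errs.std() + 1e-12, centroid)

    def drifted_classes(self, X, y):
        flagged = []
        for c, (mu, sd, centroid) in self.baseline.items():
            Xc = X[y == c]
            if len(Xc) == 0:
                continue
            err = np.linalg.norm(Xc - centroid, axis=1).mean()
            if err > mu + self.n_sigma * sd:
                flagged.append(c)
        return flagged
```

Because each class is compared only against its own statistics, a change touching a single class is flagged even though stream-global statistics barely move.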
\noindent \textbf{Answer to RQ3:} RBM-IM is the only drift detector among the examined ones that can correctly detect local concept drifts, even when they affect only a single minority class. This allows us to gain a better understanding of the exact nature of the changes affecting the data stream and of which classes should be analyzed more carefully to discover useful knowledge. RBM-IM's capability of offering global and local concept drift detection at the same time is a crucial step towards explainable drift detection and deeper insights into the dynamics behind data streams, especially imbalanced ones.
\subsection{Experiment 3: Robustness to changing imbalance ratio}
The third experiment was designed to evaluate the robustness of the examined drift detectors to a changing imbalance ratio, especially in extremely imbalanced cases (IR $>$ 400 ). This allows us to test how flexible and reliable the skew-insensitive mechanisms used in the detectors are. For each of the 12 benchmark data streams, we created scenarios with varying imbalance ratios from 50 to 500. Fig.~\ref{fig:ir} depicts the behavior of the six drift detectors under various levels of class imbalance for the pmAUC metric. We do not show plots for pmGM as, analogously to the previous experiment, they have very similar characteristics.
\begin{figure}[h]
\centering
\includegraphics[width=0.32\linewidth]{iagg5.eps}
\includegraphics[width=0.32\linewidth]{iagg10.eps}
\includegraphics[width=0.32\linewidth]{iagg20.eps}
\includegraphics[width=0.32\linewidth]{ihyp5.eps}
\includegraphics[width=0.32\linewidth]{ihyp10.eps}
\includegraphics[width=0.32\linewidth]{ihyp20.eps}
\includegraphics[width=0.32\linewidth]{irbf5.eps}
\includegraphics[width=0.32\linewidth]{irbf10.eps}
\includegraphics[width=0.32\linewidth]{irbf20.eps}
\includegraphics[width=0.32\linewidth]{irtr5.eps}
\includegraphics[width=0.32\linewidth]{irtr10.eps}
\includegraphics[width=0.32\linewidth]{irtr20.eps}
\caption{Relationship between pmAUC and changing imbalance ratio for the artificial benchmarks. The higher the imbalance ratio, the higher the disproportions among multiple classes. }
\label{fig:ir}
\end{figure}
\smallskip
\noindent \textbf{Analyzing robustness to changing imbalance ratios. } As expected, the standard drift detectors cannot handle any level of class imbalance and fail to detect drifts, which translates into unacceptable results. This can be seen in the extremely poor performance of the underlying classifier, which stopped being updated and could not handle new incoming concepts. The two reference skew-insensitive detectors maintain acceptable robustness to small and medium imbalance ratios (IR $<$ 200 ), but start to critically fail as IR increases further. At extreme levels of IR their performance becomes similar to that of the standard detectors. This shows that none of the existing detectors can handle high imbalance ratios in multi-class data streams. RBM-IM offers excellent and stable robustness, filling this gap and providing a sought-after robust drift detection approach. We can attribute this to a combination of the loss function used and the ability of RBM-IM to continually learn from the stream. This is a massive advantage, as all other drift detectors use preset rules for deciding whether a drift is present. RBM-IM can learn the current distribution in a skew-insensitive manner, making its drift detection much more accurate and unaffected by the imbalance ratio. \smallskip
\noindent \textbf{Answer to RQ4:} RBM-IM offers excellent robustness to various levels of dynamic imbalance ratio in multi-class scenarios. Due to its trainable nature, RBM-IM is capable of quickly adapting to the current state of any stream and re-aligning its own structure with respect to class ratios and class roles. It is the only drift detector displaying robustness to extremely high levels of class imbalance (IR $>$ 400 ).
\section{Lessons learned}
\label{sec:les}
Let us now present a short summary of insights and conclusions that were drawn from both the theoretical and experimental parts of this paper. \smallskip
\noindent \textbf{Unified view on challenges in imbalanced multi-class data streams. } Continual learning from non-stationary and skewed multiple distributions is a challenging topic that requires more attention from the research community. It offers an excellent field for developing and evaluating novel learning algorithms, while calling for enhancing our models with various valuable robust characteristics. We identified three mutually complementary scenarios, each dealing with different learning difficulties embedded in the nature of the data. \smallskip
\noindent \textbf{Advantages of a trainable drift detector. } To the best of our knowledge, the existing state-of-the-art drift detectors are realized as external modules that track some properties of the stream and use them to decide whether a drift should be signaled. However, those models use static rules for determining the degree of change that constitutes drift presence. This significantly limits them in capturing the unique properties of each concept and thus may negatively impact their reactivity to changes. We propose to use a trainable drift detector that can extract and store the most important characteristics of the current state of the stream and use them to make an informed and guided decision on whether the underlying classifier should be retrained. \smallskip
\noindent \textbf{Handling global and local drifts. } Most of the works in drift detection focus on detecting global drifts that affect the entire stream. Detectors gather information from every single instance and use those statistics to make a decision. However, this makes them less sensitive to local drifts that affect only certain classes. The situation becomes even more challenging when combined with multi-class imbalanced distributions. Here, local drifts affecting the minority classes would go unnoticed, as gathered statistics will be biased towards the majority classes. This shows the importance of monitoring each individual class for local changes. \smallskip
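The asymmetry described above is easy to see numerically. In this toy sketch (the numbers are illustrative and ours, not taken from our experiments), flipping the predictions on a 2\% minority class moves a stream-global error statistic by only 0.02, while the minority recall collapses to zero:

```python
import numpy as np

# Illustrative only: a stream of 1000 instances with a 2% minority class.
y = np.array([0] * 980 + [1] * 20)
pred = y.copy()                      # a classifier that is perfect pre-drift
pred_after = pred.copy()
pred_after[y == 1] = 0               # local drift: the minority class is now always missed

global_error = np.mean(pred_after != y)             # moves from 0.0 to only 0.02
minority_recall = np.mean(pred_after[y == 1] == 1)  # collapses from 1.0 to 0.0
```

A detector thresholding the global error will likely stay silent, while a per-class statistic reveals the change immediately.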
\noindent \textbf{Impact of class imbalance on drift detection. } Not enough attention has been given to the interplay between concept drift and class imbalance. We observed that imbalanced distributions directly affect every drift detector in two possible ways: (i) magnifying small changes in the majority classes; and (ii) diminishing the importance of changes in the minority classes. The former problem is caused by the statistics gathered from the more abundant classes dominating the detector, which may cause false alarms, as even small changes are magnified by the sheer disproportion among classes. The latter problem is caused by the minority classes not contributing enough to the drift detector statistics to trigger an alarm.
\section{Conclusions and future works}
\label{sec:con}
In this work, we have discussed an important area of learning from multi-class imbalanced data streams under concept drift. We proposed a unifying taxonomy of challenges that may be encountered when learning from such data, and identified three realistic scenarios representing various types of learning difficulties. This was the first complete attempt to understand and organize challenges arising in this area of machine learning.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $\mathfrak g$ be a restricted Lie algebra over an algebraically closed field $k$ of positive characteristic $p$. Suslin, Friedlander, and Bendel \cite{bfs1paramCoh} have shown that the maximal spectrum of the cohomology of $\mathfrak g$ is isomorphic to the variety of $p$-nilpotent elements in $\mathfrak g$, i.e., the so-called restricted nullcone $\mathcal N_p(\mathfrak g)$. This variety has become an important invariant in representation theory; for example, it can be used to give a simple definition of the local Jordan type of a $\mathfrak g$-module $M$ and consequently of the class of modules of constant Jordan type, a class first studied by Carlson, Friedlander, and Pevtsova \cite{cfpConstJType} in the case of a finite group scheme. Friedlander and Pevtsova \cite{friedpevConstructions} have initiated what is, in the case of a Lie algebra $\mathfrak g$, the study of certain sheaves over the projectivization, $\PG$, of $\mathcal N_p(\mathfrak g)$. These sheaves are constructed from $\mathfrak g$-modules $M$ so that representation theoretic information, such as whether $M$ is projective, is encoded in their geometric properties. Explicit computations of these sheaves can be challenging due not only to their geometric nature but also to the inherent difficulty of describing representations of a general Lie algebra. The purpose of this paper is to explicitly compute examples of these sheaves for the case $\mathfrak g = \slt$. The Lie algebra $\slt$ has tame representation type and its indecomposable modules were described explicitly by Alexander Premet \cite{premetSl2} in 1991. This means there are infinitely many isomorphism classes of such modules, so the category is rich enough to be interesting, but these occur in finitely many parameterized families which allow for direct computations. We also note that the variety $\PG[\slt]$ over which we wish to compute these sheaves is isomorphic to $\mathbb P^1$.
By a theorem of Grothendieck we therefore know that locally free sheaves admit a strikingly simple description: they are all sums of twists of the structure sheaf. This makes $\slt$ uniquely suited for such computations. We begin in \Cref{secRev} with the case of a general restricted Lie algebra $\mathfrak g$. We review the definition of $\mathcal N_p(\mathfrak g)$ and its projectivization $\PG$, and use this to define the local Jordan type of a module $M$. We define the global operator $\Theta_M$ associated to a $\mathfrak g$-module $M$ and use it to construct the sheaves we are interested in computing. We also review theorems which not only indicate the usefulness of these sheaves but are needed for their computation. In \Cref{secSl2} we begin the discussion of the category of $\slt$-modules. Our computations are fundamentally based on having, for each indecomposable $\slt$-module, an explicit basis and formulas for the $\slt$ action. To this end we review Premet's description. There are four families and for each family we specify not only the explicit basis and $\slt$-action, but also a graphical representation of the module and its local Jordan type. For the Weyl modules $V(\lambda)$, dual Weyl modules $V(\lambda)^\ast$, and projective modules $Q(\lambda)$ this information was previously known (see for example Benkart and Osborn \cite{benkartSl2reps}) but for the so-called non-constant modules $\Phi_\xi(\lambda)$ we do not know whether such an explicit description has previously been given. Thus we give a proof that this description follows from Premet's definition of the modules $\Phi_\xi(\lambda)$. We also compute the Heller shifts $\Omega V(\lambda)$ of the Weyl modules for use in \Cref{secLieEx}. In \Cref{secMatThms} we digress from discussing Lie algebras and compute the kernels of four particular matrices.
These matrices, with entries in the polynomial ring $k[s, t]$, will represent sheaf homomorphisms over $\mathbb P^1 = \proj k[s, t]$ but in this section we do not work geometrically and instead consider these matrices to be maps of free $k[s, t]$-modules. This section contains the bulk of the computational effort of this paper. In \Cref{secLieEx} we are finally ready to carry out the computations promised. Friedlander and Pevtsova have computed $\gker{V(\lambda)}$ for the case $0 \leq \lambda \leq 2 p - 2 $ \cite{friedpevConstructions}. We compute the sheaves $\gker{M}$ for every indecomposable $\slt$-module $M$. This computation is essentially the bulk of the work in the previous section; the four matrices in that section describe the global operators of the four families of $\slt$-modules. We also compute $\mathscr F_i(V(\lambda))$ for $i \neq p$ and $V(\lambda)$ indecomposable using an inductive argument. The base case is that of a simple Weyl module $(\lambda < p)$ and is done by noting that $\mathscr F_i(V(\lambda))$ is zero when $i \neq \lambda + 1 $ and that $\mathscr F_{\lambda + 1 }(V(\lambda))$ can be identified with the kernel sheaf $\gker{V(\lambda)}$. For the inductive step we use the Heller shift computation from \Cref{secSl2 } together with a theorem of Benson and Pevtsova \cite{benPevtVectorBundles}. \section{Jordan type and global operators for Lie algebras} \label{secRev}
In this section we review the definition of the restricted nullcone of a Lie algebra $\mathfrak g$ and of the local Jordan type of a $\mathfrak g$-module $M$. We also define the global operator associated to a $\mathfrak g$-module $M$ and the sheaves associated to such an operator. Global operators and local Jordan type can be defined for any infinitesimal group scheme of finite height. Here we give the definitions only for a restricted Lie algebra $\mathfrak g$ and take $\slt$ as our only example. For details on the general case as well as additional examples we refer the reader to Friedlander and Pevtsova \cite{friedpevConstructions} or Stark \cite{starkHo1 }. Let $\mathfrak g$ be a restricted Lie algebra over an algebraically closed field $k$ of positive characteristic $p$. Recall that this means $\mathfrak g$ is a Lie algebra equipped with an additional \emph{$p$-operation} $(-)^{[p]}\colon\mathfrak{g \to g}$ satisfying certain axioms (see Strade and Farnsteiner \cite{stradeFarnModularLie} for details). Here we merely note that for the classical subalgebras of $\mathfrak{gl}_n$ the $p$-operation is given by raising a matrix to the $p^\text{th}$ power. \begin{Def}
The restricted nullcone of $\mathfrak g$ is the set
\[\mathcal N_p(\mathfrak g) = \set{x \ \middle| \ x^{[p]} = 0 }\]
of $p$-nilpotent elements. This is a conical irreducible subvariety of the affine space $\mathfrak g$. We denote by $\PG$ the projective variety whose points are lines through the origin in $\mathcal N_p(\mathfrak g)$. \end{Def}
\begin{Ex} \label{exNslt}
Let $\mathfrak g = \slt$ and take the usual basis
\[e = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad f = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad \text{and} \quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. \]
Let $\set{x, y, z}$ be the dual basis so that $\slt$, as an affine space, can be identified with $\mathbb A^3 $ and has coordinate ring $k[x, y, z]$. A $2 \times 2 $ matrix over a field is nilpotent if and only if its square
\[\begin{bmatrix} z & x \\ y & -z \end{bmatrix}^2 = (xy + z^2 )\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\]
is zero; therefore, independently of $p$, we get that $\mathcal N_p(\slt)$ is the zero locus of $xy + z^2$. By definition $\PG[\slt]$ is the projective variety defined by the homogeneous polynomial $xy + z^2$. Let $\mathbb P^1$ have coordinate ring $k[s, t]$ and define a map $\iota\colon\mathbb P^1 \to \PG[\slt]$ via $[s : t] \mapsto [s^2 : -t^2 : st]$. One then checks that the maps $[1 : y : z] \mapsto [1 : z]$ and $[x : 1 : z] \mapsto [-z : 1]$, defined on the open sets $x \neq 0$ and $y \neq 0$ respectively, glue to give an inverse to $\iota$. Thus $\PG[\slt] \simeq \mathbb P^1$. \end{Ex}
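Both claims in the example can be checked mechanically. The following sympy sketch (ours, not part of the paper) verifies that $\iota$ lands in the quadric $xy + z^2 = 0$ and that, on the chart $x \neq 0$, the map $[1 : y : z] \mapsto [1 : z]$ inverts it:

```python
import sympy as sp

s, t = sp.symbols('s t')

# iota sends [s : t] to [x : y : z] = [s^2 : -t^2 : s*t].
x, y, z = s**2, -t**2, s*t

# The image satisfies the defining equation of the projectivized nullcone:
assert sp.simplify(x*y + z**2) == 0

# On the chart x != 0 we rescale to [1 : y/x : z/x] and apply
# [1 : y : z] -> [1 : z]; the result [1 : z/x] = [1 : t/s] is [s : t] again.
assert sp.simplify(z/x - t/s) == 0
```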
To define the local Jordan type of a $\mathfrak g$-module $M$, recall that a combinatorial partition $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a weakly decreasing sequence of finitely many positive integers. We say that $\lambda$ is a partition of the integer $\sum_i\lambda_i$, for example $\lambda = (4, 4, 2, 1 )$ is a partition of $11 $. We call a partition $p$-restricted if no integer in the sequence is greater than $p$ and we let $\mathscr P_p$ denote the set of all $p$-restricted partitions. We will often write partitions using either Young diagrams or exponential notation. A Young diagram is a left justified two dimensional array of boxes whose row lengths are weakly decreasing from top to bottom. These correspond to the partitions obtained by reading off said row lengths. In exponential notation the unique integers in the partition are written surrounded by brackets with exponents outside the brackets denoting repetition. \begin{Ex}
The partition $(4, 4, 2, 1 )$ can be represented as the Young diagram
\[\ydiagram{4,4,2,1 }\]
or written in exponential notation as $[4 ]^2 [2 ][1 ]$. \end{Ex}
If $A \in \mathbb M_n(k)$ is a $p$-nilpotent ($A^p = 0 $) $n \times n$ matrix then the Jordan normal form of $A$ is a block diagonal matrix such that each block is of the form
\[\begin{bmatrix} 0 & 1 \\ & 0 & 1 \\ && \ddots & \ddots \\ &&& 0 & 1 \\ &&&& 0 \end{bmatrix} \qquad (\text{an} \ i \times i \ \text{matrix})\]
for some $i \leq p$. Listing these block sizes in weakly decreasing order yields a $p$-restricted partition of $n$ called the \emph{Jordan type}, $\jtype(A)$, of the matrix $A$. Note that conjugation does not change the Jordan type of a matrix so if $T\colon V \to V$ is a $p$-nilpotent operator on a vector space $V$ then we define $\jtype(T) = \jtype(A)$, where $A$ is the matrix of $T$ with respect to some basis. Finally, note that scaling a nilpotent operator does not change the eigenvalues or generalized eigenspaces so it is easy to see that $\jtype(cT) = \jtype(T)$ for any non-zero scalar $c \in k$. \begin{Def}
Let $M$ be a $\mathfrak g$-module and $v \in \PG$. Set $\jtype(v, M) = \jtype(x)$ where $x \in \mathcal N_p(\mathfrak g)$ is any non-zero point on the line $v$ and its Jordan type is that of a $p$-nilpotent operator on the vector space $M$. The \emph{local Jordan type} of $M$ is the function
\[\jtype(-, M)\colon\PG \to \mathscr P_p\]
so defined. \end{Def}
When computing the local Jordan type of a module the following lemma is useful. Recall that the conjugate of a partition is the partition obtained by transposing the Young diagram. \begin{Lem} \label{lemConj}
Let $A \in \mathbb M_n(k)$ be $p$-nilpotent. The conjugate of the partition
\[(n - \rank A, \rank A - \rank A^2, \ldots, \rank A^{p-2} - \rank A^{p-1}, \rank A^{p-1})\]
is $\jtype(A)$. \end{Lem}
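In code form, the lemma reads the ranks of powers off a nilpotent matrix and conjugates the resulting partition. The following helper (the name \texttt{jordan\_type\_from\_ranks} is ours) assumes the rank list ends with the first zero rank:

```python
def jordan_type_from_ranks(ranks, n):
    """Jordan type of a p-nilpotent n x n matrix A, where
    ranks = [rank A, rank A^2, ...] ends with the first zero rank."""
    # The partition from the lemma: (n - r1, r1 - r2, ..., r_{k-1} - r_k).
    parts = [n - ranks[0]] + [ranks[i] - ranks[i + 1] for i in range(len(ranks) - 1)]
    parts = [p for p in parts if p > 0]
    # Conjugation: the j-th entry of the conjugate counts parts of size > j.
    return [sum(1 for p in parts if p > j) for j in range(max(parts))]
```

For a $3 \times 3$ matrix with $\rank A = 2$ and $\rank A^2 = 1$ this returns $[3]$, matching the Weyl module example below.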
\begin{Ex} \label{exPart}
The conjugate of $[4 ]^2 [2 ][1 ]$ is $[4 ][3 ][2 ]^2 $. \begin{center}
\begin{picture}(100, 70 )(45, 0 )
\put(0, 35 ){\ydiagram{4,4,2,1 }}
\put(150, 35 ){\ydiagram{4,3,2,2 }}
\put(75, 37 ){\vector(1, 0 ){55 }}
\put(-5, 66 ){\line(1, -1 ){3 }}
\put(0, 61 ){\line(1, -1 ){3 }}
\put(5, 56 ){\line(1, -1 ){3 }}
\put(10, 51 ){\line(1, -1 ){3 }}
\put(15, 46 ){\line(1, -1 ){3 }}
\put(20, 41 ){\line(1, -1 ){3 }}
\put(25, 36 ){\line(1, -1 ){3 }}
\put(30, 31 ){\line(1, -1 ){3 }}
\put(35, 26 ){\line(1, -1 ){3 }}
\put(40, 21 ){\line(1, -1 ){3 }}
\put(45, 16 ){\line(1, -1 ){3 }}
\put(50, 11 ){\line(1, -1 ){3 }}
\put(37, 10 ){\vector(1, 1 ){15 }}
\put(52, 25 ){\vector(-1, -1 ){15 }}
\end{picture}
\end{center}
\end{Ex}
\begin{Ex} \label{exV2 JT}
Assume $p > 2$ and consider the Weyl module $V(2)$ for $\slt$. This is a $3$-dimensional module where $e$, $f$, and $h$ act via
\[\begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -2 \end{bmatrix}\]
respectively. The matrix
\[A = \begin{bmatrix} 2 z & 2 x & 0 \\ y & 0 & x \\ 0 & 2 y & -2 z \end{bmatrix}\]
describes the action of $xe + yf + zh \in \mathfrak g$ on $V(2 )$. For the purposes of computing the local Jordan type we consider $[x : y : z]$ to be an element in the projective space $\PG$. If $y = 0 $ then $xy + z^2 = 0 $ implies $z = 0 $ and we can scale to $x = 1 $. This immediately gives Jordan type $[3 ]$. If $y \neq 0 $ then we can scale to $y = 1 $ and therefore $x = -z^2 $.
With these substitutions
\[A = \begin{bmatrix} 2z & -2z^2 & 0 \\ 1 & 0 & -z^2 \\ 0 & 2 & -2z \end{bmatrix} \qquad \text{and} \qquad A^2 = \begin{bmatrix} 2z^2 & -4z^3 & 2z^4 \\ 2z & -4z^2 & 2z^3 \\ 2 & -4z & 2z^2 \end{bmatrix}\]
therefore $\rank A = 2 $ and $\rank A^2 = 1 $. Using \cref{lemConj} we conclude that the Jordan type is the conjugate of $(3 - 2, 2 - 1, 1 ) = [1 ]^3 $, which is $[3 ]$. Thus the local Jordan type of $V(2 )$ is the constant function $v \mapsto [3 ]$. \end{Ex}
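The rank computation above can be verified symbolically. Working over $\mathbb Q(z)$, which for these small matrices agrees with the generic behavior in characteristic $p > 2$, a short sympy check (ours) confirms the ranks and the nilpotency:

```python
import sympy as sp

z = sp.symbols('z')

# Action of x*e + y*f + z*h on V(2) at a nullcone point with y = 1, x = -z^2:
A = sp.Matrix([
    [2*z, -2*z**2, 0],
    [1,    0,     -z**2],
    [0,    2,     -2*z],
])

assert A.rank() == 2           # first entry of the rank partition is 3 - 2 = 1
assert (A**2).rank() == 1      # second entry is 2 - 1 = 1
assert A**3 == sp.zeros(3, 3)  # p-nilpotent, as it must be on the nullcone
# Conjugating (1, 1, 1) gives Jordan type [3].
```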
\begin{Def}
A $\mathfrak g$-module $M$ has \emph{constant Jordan type} if its local Jordan type is a constant function. \end{Def}
Modules of constant Jordan type will be significant for us for two reasons. The first is because of the following useful projectivity criterion. \begin{Thm}[{\cite[7.6 ]{bfsSupportVarieties}}] \label{thmProjCJT}
A $\mathfrak g$-module $M$ is projective if and only if its local Jordan type is a constant function of the form $v \mapsto [p]^n$. \end{Thm}
For the second note that when $\mathfrak g$ is the Lie algebra of an algebraic group $G$, the adjoint action of $G$ on $\mathfrak g$ induces an action of $G$ on the restricted nullcone and hence on $\PG$. One can show that the local Jordan type of a $G$-module (when considered as a $\mathfrak g$-module) is constant on the $G$-orbits of this action. The adjoint action of $\SL_2 $ on $\PG[\slt]$ is transitive so we get the following. \begin{Thm}[{\cite[2.5 ]{cfpConstJType}}] \label{thmRatCJT}
Every rational $\slt$-module has constant Jordan type. \end{Thm}
Next we define the global operator associated to a $\mathfrak g$-module $M$ and the sheaves associated to such an operator. Let $\set{g_1, \ldots, g_n}$ be a basis for $\mathfrak g$ with corresponding dual basis $\set{x_1, \ldots, x_n}$. We define $\Theta_{\mathfrak g}$ to be the operator
\[\Theta_{\mathfrak g} = x_1 \otimes g_1 + \cdots + x_n \otimes g_n. \]
As an element of $\mathfrak g^\ast \otimes_k \mathfrak g \simeq \hom_k(\mathfrak g, \mathfrak g)$ this is just the identity map and is therefore independent of the choice of basis. Now $\Theta_{\mathfrak g}$ acts on $k[\mathcal N_p(\mathfrak g)] \otimes_k M \simeq k[\mathcal N_p(\mathfrak g)]^{\dim M}$ as a degree $1 $ endomorphism of graded $k[\mathcal N_p(\mathfrak g)]$-modules (where $\deg x_i = 1 $). The map of sheaves corresponding to this homomorphism is the global operator. \begin{Def}
Given a $\mathfrak g$-module $M$ we define $\widetilde M = \OPG \otimes_k M$. The \emph{global operator} corresponding to $M$ is the sheaf map
\[\Theta_M\colon \widetilde M \to \widetilde M(1 )\]
induced by the action of $\Theta_{\mathfrak g}$. \end{Def}
\begin{Ex} \label{exV2 glob}
We have $\Theta_{\slt} = x \otimes e + y \otimes f + z \otimes h$. Consider the Weyl module $V(2 )$ from \cref{exV2 JT}. The global operator corresponding to $V(2 )$ is the sheaf map
\[\begin{bmatrix} 2 z & 2 x & 0 \\ y & 0 & x \\ 0 & 2 y & -2 z \end{bmatrix}\colon\OPG[\slt]^3 \to \OPG[\slt](1 )^3. \]
Taking the pullback through the map $\iota\colon\mathbb P^1 \to \PG[\slt]$ from \cref{exNslt} we get that $\Theta_{V(2 )}$ is the sheaf map
\[\begin{bmatrix} 2 st & 2 s^2 & 0 \\ -t^2 & 0 & s^2 \\ 0 & -2 t^2 & -2 st \end{bmatrix}\colon\mathcal O_{\mathbb P^1 }^3 \to \mathcal O_{\mathbb P^1 }(2 )^3. \]
\end{Ex}
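As a sanity check (ours), one can confirm over $k[s, t]$ that the pulled-back matrix represents the nilpotent element $s^2 e - t^2 f + st\,h$ and has generic rank $2$, so its kernel is a rank-one subsheaf of $\mathcal O_{\mathbb P^1}^3$:

```python
import sympy as sp

s, t = sp.symbols('s t')

# Pullback of Theta_{V(2)} along iota from the example above:
B = sp.Matrix([
    [2*s*t,  2*s**2,  0],
    [-t**2,  0,       s**2],
    [0,     -2*t**2, -2*s*t],
])

assert B**3 == sp.zeros(3, 3)   # nilpotent: it represents s^2 e - t^2 f + s t h
assert B.rank() == 2            # generic rank 2 over the fraction field
assert len(B.nullspace()) == 1  # so ker Theta_{V(2)} has rank 1
```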
The global operator $\Theta_M$ is not an endomorphism but we may still compose it with itself if we shift the degree of successive copies. Given $j \in \mathbb N$ we define
\begin{align*}
\gker[j]{M} &= \ker\left[\Theta_M(j-1 )\circ\cdots\circ\Theta_M(1 )\circ\Theta_M\right], \\
\gim[j]{M} &= \im\left[\Theta_M(-1 )\circ\cdots\circ\Theta_M(1 -j)\circ\Theta_M(-j)\right], \\
\gcoker[j]{M} &= \coker\left[\Theta_M(-1 )\circ\cdots\circ\Theta_M(1 -j)\circ\Theta_M(-j)\right],
\end{align*}
so that $\gker[j]{M}$ and $\gim[j]{M}$ are subsheaves of $\widetilde M$, and $\gcoker[j]{M}$ is a quotient of $\widetilde M$. To see how these sheaves encode information about the Jordan type of $M$, recall that the $j$-rank of a partition $\lambda$ is the number of boxes in the Young diagram of $\lambda$ that are not contained in the first $j$ columns. For example, the $2$-rank of $[4]^2[2][1]$ (from \cref{exPart}) is $4$. If one knows the $j$-rank of a partition $\lambda$ for all $j$, then one knows the size of each column in the Young diagram of $\lambda$ and can therefore recover $\lambda$. Thus if one knows the local $j$-rank of a module $M$ for all $j$ then one knows its local Jordan type.
\begin{Def}
Let $M$ be a $\mathfrak g$-module and let $v \in \PG$. Set $\rank^j(v, M)$ equal to the $j$-rank of the partition $\jtype(v, M)$. The \emph{local $j$-rank} of $M$ is the function
\[\rank^j(-, M)\colon\PG \to \mathbb N_0 \]
so defined. \end{Def}
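The passage from $j$-ranks back to a partition is elementary, and the following Python sketch (included only as an illustration, not taken from the paper) makes it concrete: successive differences of the $j$-ranks give the column heights of the Young diagram, whose conjugate is the partition. It reproduces the $2 $-rank computation for $[4 ]^2 [2 ][1 ]$ above.

```python
def j_rank(partition, j):
    # Boxes of the Young diagram outside the first j columns.
    return sum(max(part - j, 0) for part in partition)

def recover(j_ranks):
    # Column j+1 has j_ranks[j] - j_ranks[j+1] boxes; the partition is
    # the conjugate of the list of column heights.
    cols = [j_ranks[j] - j_ranks[j + 1] for j in range(len(j_ranks) - 1)]
    return [sum(1 for c in cols if c >= i) for i in range(1, max(cols) + 1)]

lam = [4, 4, 2, 1]                          # the partition [4]^2[2][1]
assert j_rank(lam, 2) == 4                  # matches the example in the text
ranks = [j_rank(lam, j) for j in range(6)]  # 11, 7, 4, 2, 0, 0
assert recover(ranks) == lam
```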
\begin{Thm}[{\cite[3.2 ]{starkHo1 }}]
Let $M$ be a $\mathfrak g$-module and $U \subseteq \PG$ an open set. The local $j$-rank is constant on $U$ if and only if the restriction $\gcoker[j]{M}|_U$ is a locally free sheaf. When this is the case $\gker[j]{M}|_U$ and $\gim[j]{M}|_U$ are also locally free and $\rank^j(v, M) = \rank\gim[j]{M}$ for all $v \in U$. \end{Thm}
We will also be interested in the sheaves $\mathscr F_i(M)$ for $1 \leq i \leq p$. These were first defined by Benson and Pevtsova \cite{benPevtVectorBundles} for $kE$-modules where $E$ is an elementary abelian $p$-group. \begin{Def}
Let $M$ be a $\mathfrak g$-module and $1 \leq i \leq p$ an integer. Then
\[\mathscr F_i(M) = \frac{\gker{M} \cap \gim[i-1 ]{M}}{\gker{M} \cap \gim[i]{M}}. \]
\end{Def}
We end the section by stating two theorems which not only illustrate the utility of these sheaves but will be used in an essential way in \Cref{secLieEx} when calculating $\mathscr F_i(M)$ where $M$ is a Weyl module for $\slt$. Both theorems were originally published by Benson and Pevtsova \cite{benPevtVectorBundles} but with minor errors. These errors have been corrected in the given reference. \begin{Thm}[{\cite[3.7 ]{starkHo1 }}] \label{thmOm}
Let $M$ be a $\mathfrak g$-module and $1 \leq i < p$ an integer. Then
\[\mathscr F_i(M) \simeq \mathscr F_{p-i}(\Omega M)(p-i). \]
\end{Thm}
\begin{Thm}[{\cite[3.8 ]{starkHo1 }}] \label{thmFi}
Let $U \subseteq \PG$ be open. The local Jordan type of a $\mathfrak g$-module $M$ is constant on $U$ if and only if the restrictions $\mathscr F_i(M)|_U$ are locally free for all $1 \leq i \leq p$. When this is the case and $a_i = \rank\mathscr F_i(M)$ we have $\jtype(v, M) = [p]^{a_p}[p-1 ]^{a_{p-1 }}\cdots[1 ]^{a_1 }$ for all $v \in U$. \end{Thm}
\section{The category of $\slt$-modules} \label{secSl2 }
The calculations in \Cref{secLieEx} will be based on detailed information about the category of $\slt$-modules, which we develop in this section. The indecomposable $\slt$-modules have been classified; each is one of four types: a Weyl module $V(\lambda)$, the dual of a Weyl module $V(\lambda)^\ast$, an indecomposable projective module $Q(\lambda)$, or a non-constant module $\Phi_\xi(\lambda)$. Explicit bases for the first three types are known; we will remind the reader of these formulas and develop similar formulas for the $\Phi_\xi(\lambda)$. We will also calculate the local Jordan type $\jtype(-, M)\colon\mathbb P^1 \to \mathscr P_p$ for each indecomposable $M$. Finally we will calculate the Heller shifts $\Omega(V(\lambda))$. We begin by stating the results for each of the four types and the classification. Recall that the standard basis for $\slt$ is $\set{e, f, h}$, where
\[e = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad f = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad \text{and} \quad h = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. \]
Let $\lambda$ be a non-negative integer and write $\lambda = rp + a$ where $0 \leq a < p$ is the remainder of $\lambda$ modulo $p$. Each type is parametrized by the choice of $\lambda$, with the parametrization of $\Phi_\xi(\lambda)$ requiring also a choice of $\xi \in \mathbb P^1 $. The four types are as follows:
\begin{itemize}
\item The {\bf Weyl modules} $V(\lambda)$. \begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2 }{l}{$\set{v_0, v_1, \ldots, v_\lambda}$} \hspace{150 pt} \\
Action: & $ev_i$ & \hspace{-7 pt}$= (\lambda - i + 1 )v_{i - 1 }$ \\
& $fv_i$ & \hspace{-7 pt}$= (i + 1 )v_{i + 1 }$ \\
& $hv_i$ & \hspace{-7 pt}$= (\lambda - 2 i)v_i$ \\
Graph: & \multicolumn{2 }{l}{\Cref{figV}} \\
Local Jordan type: & \multicolumn{2 }{l}{Constant Jordan type $[p]^r[a + 1 ]$}
\end{tabular}
\end{center}
\begin{sidewaysfigure}[p]
\centering
\vspace*{350 pt}
\begin{tikzpicture} [description/. style={fill=white, inner sep=2 pt}]
\useasboundingbox (-7, -5.5 ) rectangle (7, 4.2 );
\scope[transform canvas={scale=.8 }]
\matrix (m) [matrix of math nodes, row sep=31 pt,
column sep=40 pt, text height=1.5 ex, text depth=0.25 ex]
{ \\ \\ \\ \underset{v_0 }{\bullet} & \underset{v_1 }{\bullet} & \underset{v_2 }{\bullet} & \underset{v_3 }{\bullet} & \cdots & \underset{v_{\lambda - 3 }}{\bullet} & \underset{v_{\lambda - 2 }}{\bullet} & \underset{v_{\lambda - 1 }}{\bullet} & \underset{v_\lambda}{\bullet} \\ \\ \\ \\ \\ \\ \\ \\ \underset{\hat v_0 }{\bullet} & \underset{\hat v_1 }{\bullet} & \underset{\hat v_2 }{\bullet} & \underset{\hat v_3 }{\bullet} & \cdots & \underset{\hat v_{\lambda - 3 }}{\bullet} & \underset{\hat v_{\lambda - 2 }}{\bullet} & \underset{\hat v_{\lambda - 1 }}{\bullet} & \underset{\hat v_\lambda}{\bullet} \\};
\path[->, font=\scriptsize]
(m-4 -1 ) edge [bend left=20 ] node[auto] {$1 $} (m-4 -2 )
(m-4 -2 ) edge [bend left=20 ] node[auto] {$\lambda$} (m-4 -1 )
edge [bend left=20 ] node[auto] {$2 $} (m-4 -3 )
(m-4 -3 ) edge [bend left=20 ] node[auto] {$\lambda - 1 $} (m-4 -2 )
edge [bend left=20 ] node[auto] {$3 $} (m-4 -4 )
(m-4 -4 ) edge [bend left=20 ] node[auto] {$\lambda - 2 $} (m-4 -3 )
edge [bend left=20 ] node[auto] {$4 $} (m-4 -5 )
(m-4 -5 ) edge [bend left=20 ] node[auto] {$\lambda - 3 $} (m-4 -4 )
edge [bend left=20 ] node[auto] {$\lambda - 3 $} (m-4 -6 )
(m-4 -6 ) edge [bend left=20 ] node[auto] {$4 $} (m-4 -5 )
edge [bend left=20 ] node[auto] {$\lambda - 2 $} (m-4 -7 )
(m-4 -7 ) edge [bend left=20 ] node[auto] {$3 $} (m-4 -6 )
edge [bend left=20 ] node[auto] {$\lambda - 1 $} (m-4 -8 )
(m-4 -8 ) edge [bend left=20 ] node[auto] {$2 $} (m-4 -7 )
edge [bend left=20 ] node[auto] {$\lambda$} (m-4 -9 )
(m-4 -9 ) edge [bend left=20 ] node[auto] {$1 $} (m-4 -8 );
\draw[<-] (m-4 -1 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda$} (m-4 -1 );
\draw[<-] (m-4 -2 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 2 $} (m-4 -2 );
\draw[<-] (m-4 -3 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 4 $} (m-4 -3 );
\draw[<-] (m-4 -4 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 6 $} (m-4 -4 );
\draw[<-] (m-4 -6 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $6 - \lambda$} (m-4 -6 );
\draw[<-] (m-4 -7 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $4 - \lambda$} (m-4 -7 );
\draw[<-] (m-4 -8 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $2 - \lambda$} (m-4 -8 );
\draw[<-] (m-4 -9 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-\lambda$} (m-4 -9 );
\path[draw] (-4.1, -2 ) rectangle (3.7, 0 );
\draw (-3.5, -1 ) node {$e$:};
\draw[<-] (-3.2, -1 ) .. controls +(-20 :18 pt) and +(200 :18 pt) .. (-1.7, -1 );
\draw (-.5, -1 ) node {$f$:};
\draw[->] (-.2, -1 ) .. controls +(20 :18 pt) and +(160 :18 pt) .. (1.3, -1 );
\draw (2.5, -1 ) node {$h$:};
\draw[<-] (3.1, -1.5 ) .. controls +(70 :40 pt) and +(110 :40 pt) .. (2.9, -1.5 );
\draw (-.5, 5 ) node {$V(\lambda)$};
\path[->, font=\scriptsize]
(m-12 -1 ) edge [bend left=20 ] node[auto] {$\lambda$} (m-12 -2 )
(m-12 -2 ) edge [bend left=20 ] node[auto] {$1 $} (m-12 -1 )
edge [bend left=20 ] node[auto] {$\lambda - 1 $} (m-12 -3 )
(m-12 -3 ) edge [bend left=20 ] node[auto] {$2 $} (m-12 -2 )
edge [bend left=20 ] node[auto] {$\lambda - 2 $} (m-12 -4 )
(m-12 -4 ) edge [bend left=20 ] node[auto] {$3 $} (m-12 -3 )
edge [bend left=20 ] node[auto] {$\lambda - 3 $} (m-12 -5 )
(m-12 -5 ) edge [bend left=20 ] node[auto] {$4 $} (m-12 -4 )
edge [bend left=20 ] node[auto] {$4 $} (m-12 -6 )
(m-12 -6 ) edge [bend left=20 ] node[auto] {$\lambda - 3 $} (m-12 -5 )
edge [bend left=20 ] node[auto] {$3 $} (m-12 -7 )
(m-12 -7 ) edge [bend left=20 ] node[auto] {$\lambda - 2 $} (m-12 -6 )
edge [bend left=20 ] node[auto] {$2 $} (m-12 -8 )
(m-12 -8 ) edge [bend left=20 ] node[auto] {$\lambda - 1 $} (m-12 -7 )
edge [bend left=20 ] node[auto] {$1 $} (m-12 -9 )
(m-12 -9 ) edge [bend left=20 ] node[auto] {$\lambda$} (m-12 -8 );
\draw[<-] (m-12 -1 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda$} (m-12 -1 );
\draw[<-] (m-12 -2 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 2 $} (m-12 -2 );
\draw[<-] (m-12 -3 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 4 $} (m-12 -3 );
\draw[<-] (m-12 -4 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 6 $} (m-12 -4 );
\draw[<-] (m-12 -6 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $6 - \lambda$} (m-12 -6 );
\draw[<-] (m-12 -7 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $4 - \lambda$} (m-12 -7 );
\draw[<-] (m-12 -8 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $2 - \lambda$} (m-12 -8 );
\draw[<-] (m-12 -9 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-\lambda$} (m-12 -9 );
\draw (-.5, -4 ) node {$V(\lambda)^\ast$};
\endscope
\end{tikzpicture}
\caption{Graphs of $V(\lambda)$ and $V(\lambda)^\ast$} \label{figV}
\end{sidewaysfigure}
\item The {\bf dual Weyl modules} $V(\lambda)^\ast$. \begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2 }{l}{$\set{\hat v_0, \hat v_1, \ldots, \hat v_\lambda}$} \hspace{150 pt} \\
Action: & $e\hat v_i$ & \hspace{-7 pt}$= i\hat v_{i - 1 }$ \\
& $f\hat v_i$ & \hspace{-7 pt}$= (\lambda - i)\hat v_{i + 1 }$ \\
& $h\hat v_i$ & \hspace{-7 pt}$= (\lambda - 2 i)\hat v_i$ \\
Graph: & \multicolumn{2 }{l}{\Cref{figV}} \\
Local Jordan type: & \multicolumn{2 }{l}{Constant Jordan type $[p]^r[a + 1 ]$}
\end{tabular}
\end{center}
\item The {\bf projectives} $Q(\lambda)$. Define $Q(p - 1 ) = V(p - 1 )$. For $0 \leq \lambda < p - 1 $ we define $Q(\lambda)$ via
\begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2 }{l}{$\set{v_0, v_1, \ldots, v_{2 p - \lambda - 2 }} \cup \set{w_{p - \lambda - 1 }, w_{p - \lambda}, \ldots, w_{p - 1 }}$} \hspace{0 pt} \\
Action: & $ev_i$ & \hspace{-7 pt}$= -(\lambda + i + 1 )v_{i - 1 }$ \\
& $fv_i$ & \hspace{-7 pt}$= (i + 1 )v_{i + 1 }$ \\
& $hv_i$ & \hspace{-7 pt}$= -(\lambda + 2 i + 2 )v_i$ \\
& $ew_i$ & \hspace{-7 pt}$= -(\lambda + i + 1 )w_{i - 1 } + \frac{1 }{i}v_{i - 1 }$ \\
& $fw_i$ & \hspace{-7 pt}$= (i + 1 )w_{i + 1 } - \frac{1 }{\lambda + 1 }\delta_{-1, i}v_p$ \\
& $hw_i$ & \hspace{-7 pt}$= -(\lambda + 2 i + 2 )w_i$ \\
Graph: & \multicolumn{2 }{l}{\Cref{figQ}} \\
Local Jordan type: & \multicolumn{2 }{l}{Constant Jordan type $[p]^2 $}
\end{tabular}
\end{center}
\begin{sidewaysfigure}[p]
\centering
\vspace*{350 pt}
\begin{tikzpicture} [description/. style={fill=white, inner sep=2 pt}]
\scope[transform canvas={scale=.8 }]
\matrix (m) [matrix of math nodes, row sep=31 pt,
column sep=40 pt, text height=1.5 ex, text depth=0.25 ex]
{ \underset{v_{p - \lambda - 1 }}{\bullet} && \underset{v_{p - \lambda}}{\bullet} && \underset{v_{p - \lambda + 1 }}{\bullet} && \cdots && \underset{v_{p - 3 }}{\bullet} && \underset{v_{p - 2 }}{\bullet} && \underset{v_{p - 1 }}{\bullet} \\};
\path[->, font=\scriptsize]
(m-2 -1 ) edge [bend left=20 ] node[auto] {$1 $} (m-2 -3 )
(m-2 -3 ) edge [bend left=20, shorten >=-7 pt] node[auto, xshift=18 pt] {$-\lambda - 2 $} (m-2 -5 )
(m-1 -6 ) edge [bend left=20 ] node[auto, xshift=-10 pt] {$-\lambda$} (m-1 -8 )
(m-1 -8 ) edge [bend left=20 ] node[auto, xshift=15 pt] {$1 - \lambda$} (m-1 -10 )
(m-1 -10 ) edge [bend left=20 ] node[auto, xshift=-15 pt] {$2 - \lambda$} (m-1 -12 )
(m-1 -12 ) edge [bend left=20 ] node[auto] {$-3 $} (m-1 -14 )
(m-1 -14 ) edge [bend left=20 ] node[auto] {$-2 $} (m-1 -16 )
(m-1 -16 ) edge [bend left=20 ] node[auto] {$-1 $} (m-1 -18 )
(m-3 -6 ) edge [bend left=20 ] node[auto, xshift=-10 pt] {$-\lambda$} (m-3 -8 )
(m-3 -8 ) edge [bend left=20 ] node[auto, xshift=15 pt] {$1 - \lambda$} (m-3 -10 )
(m-3 -10 ) edge [bend left=20 ] node[auto, xshift=-15 pt] {$2 - \lambda$} (m-3 -12 )
(m-3 -12 ) edge [bend left=20 ] node[auto] {$-3 $} (m-3 -14 )
(m-3 -14 ) edge [bend left=20 ] node[auto] {$-2 $} (m-3 -16 )
(m-3 -16 ) edge [bend left=20 ] node[auto] {$-1 $} (m-3 -18 )
(m-2 -3 ) edge [bend left=20 ] node[auto] {$-\lambda - 2 $} (m-2 -1 )
(m-2 -5 ) edge [bend left=20 ] node[auto, xshift=9 pt] {$1 $} (m-2 -3 )
(m-1 -8 ) edge [bend left=20 ] node[auto, xshift=-10 pt] {$-1 $} (m-1 -6 )
(m-1 -10 ) edge [bend left=20 ] node[auto, xshift=10 pt] {$-2 $} (m-1 -8 )
(m-1 -12 ) edge [bend left=20 ] node[auto, xshift=-12 pt] {$-3 $} (m-1 -10 )
(m-1 -14 ) edge [bend left=20 ] node[auto] {$2 - \lambda$} (m-1 -12 )
(m-1 -16 ) edge [bend left=20 ] node[auto] {$1 - \lambda$} (m-1 -14 )
(m-1 -18 ) edge [bend left=20 ] node[auto] {$-\lambda$} (m-1 -16 )
(m-3 -8 ) edge [bend left=20 ] node[auto, xshift=-10 pt] {$-1 $} (m-3 -6 )
(m-3 -10 ) edge [bend left=20 ] node[auto, xshift=10 pt] {$-2 $} (m-3 -8 )
(m-3 -12 ) edge [bend left=20 ] node[auto, xshift=-12 pt] {$-3 $} (m-3 -10 )
(m-3 -14 ) edge [bend left=20 ] node[auto] {$2 - \lambda$} (m-3 -12 )
(m-3 -16 ) edge [bend left=20 ] node[auto] {$1 - \lambda$} (m-3 -14 )
(m-3 -18 ) edge [bend left=20 ] node[auto] {$-\lambda$} (m-3 -16 )
(m-2 -19 ) edge [bend left=20 ] node[auto] {$1 $} (m-2 -21 )
(m-2 -21 ) edge [bend left=20, shorten >=-7 pt] node[auto, xshift=18 pt] {$-\lambda - 2 $} (m-2 -23 )
(m-2 -23 ) edge [bend left=20 ] node[auto, xshift=8 pt] {$1 $} (m-2 -21 )
(m-2 -21 ) edge [bend left=20 ] node[auto] {$-\lambda - 2 $} (m-2 -19 )
(m-1 -6 ) edge[shorten <=7 pt] node[below, xshift=5 pt] {$\frac{-1 }{\lambda + 1 }$} (m-2 -5 )
(m-2 -5 ) edge[shorten <=7 pt] node[pos=.45, below, xshift=-10 pt] {$-\lambda - 1 $} (m-3 -6 )
(m-1 -18 ) edge[shorten <=5 pt] node[below, xshift=-5 pt] {$\frac{-1 }{\lambda + 1 }$} (m-2 -19 )
(m-2 -19 ) edge node[auto, xshift=-5 pt] {$-\lambda - 1 $} (m-3 -18 )
(m-1 -8 ) edge[shorten <=5 pt] node[pos=.6, above, xshift=-5 pt] {$\frac{-1 }{\lambda}$} (m-3 -6 )
(m-1 -10 ) edge[shorten <=6 pt] node[pos=.6, above, xshift=-5 pt] {$\frac{-1 }{\lambda - 1 }$} (m-3 -8 )
(m-1 -12 ) edge node[pos=.6, above, xshift=-5 pt] {$\frac{-1 }{\lambda - 2 }$} (m-3 -10 )
(m-1 -14 ) edge[shorten <=5 pt] node[pos=.6, above, xshift=-5 pt] {$-\frac{1 }{3 }$} (m-3 -12 )
(m-1 -16 ) edge[shorten <=5 pt] node[pos=.6, above, xshift=-5 pt] {$-\frac{1 }{2 }$} (m-3 -14 )
(m-1 -18 ) edge[shorten <=5 pt] node[pos=.6, above, xshift=-5 pt] {$-1 $} (m-3 -16 );
\draw[<-] (m-2 -1 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-\lambda - 2 $} (m-2 -1 );
\draw[<-] (m-2 -5 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda + 2 $} (m-2 -5 );
\draw[<-] (m-2 -19 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-\lambda - 2 $} (m-2 -19 );
\draw[<-] (m-2 -23 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda + 2 $} (m-2 -23 );
\draw[<-] (m-1 -6 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda$} (m-1 -6 );
\draw[<-] (m-1 -8 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 2 $} (m-1 -8 );
\draw[<-] (m-1 -10 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $\lambda - 4 $} (m-1 -10 );
\draw[<-] (m-1 -14 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $4 - \lambda$} (m-1 -14 );
\draw[<-] (m-1 -16 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $2 - \lambda$} (m-1 -16 );
\draw[<-] (m-1 -18 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-\lambda$} (m-1 -18 );
\draw[shorten >=5 pt, shorten <=5 pt, <-] (m-3 -6 ) .. controls +(250 :50 pt) and +(290 :50 pt) .. node[pos=.5, below]{\scriptsize $\lambda$} (m-3 -6 );
\draw[shorten >=5 pt, shorten <=5 pt, <-] (m-3 -8 ) .. controls +(250 :50 pt) and +(290 :50 pt) .. node[pos=.5, below]{\scriptsize $\lambda - 2 $} (m-3 -8 );
\draw[shorten >=5 pt, shorten <=5 pt, <-] (m-3 -10 ) .. controls +(250 :50 pt) and +(290 :50 pt) .. node[pos=.5, below]{\scriptsize $\lambda - 4 $} (m-3 -10 );
\draw[shorten >=5 pt, shorten <=5 pt, <-] (m-3 -14 ) .. controls +(250 :50 pt) and +(290 :50 pt) .. node[pos=.5, below]{\scriptsize $4 - \lambda$} (m-3 -14 );
\draw[shorten >=5 pt, shorten <=5 pt, <-] (m-3 -16 ) .. controls +(250 :50 pt) and +(290 :50 pt) .. node[pos=.5, below]{\scriptsize $2 - \lambda$} (m-3 -16 );
\draw[shorten >=5 pt, shorten <=5 pt, <-] (m-3 -18 ) .. controls +(250 :50 pt) and +(290 :50 pt) .. node[pos=.5, below]{\scriptsize $-\lambda$} (m-3 -18 );
\path[draw] (-5.7, -6 ) rectangle (5.9, -4 );
\draw (-5.1, -5 ) node {$e$:};
\draw[<-] (-4.8, -5 ) .. controls +(-20 :18 pt) and +(200 :18 pt) .. (-3.3, -5 );
\draw (-2.9, -5 ) node {$+$};
\draw[<-] (-2.8, -5.7 ) -- (-1.8, -4.3 );
\draw (-.9, -5 ) node {$f$:};
\draw[->] (-.6, -5 ) .. controls +(20 :18 pt) and +(160 :18 pt) .. (1.3, -5 );
\draw (1.7, -5 ) node {$+$};
\draw[->] (1.8, -4.3 ) -- (2.8, -5.7 );
\draw (3.5, -5 ) node {$h$:};
\draw[<-] (4.2, -5.5 ) .. controls +(70 :40 pt) and +(110 :40 pt) .. (4, -5.5 );
\draw (4.7, -5 ) node {$+$};
\draw[<-] (5.2, -4.5 ) .. controls +(250 :40 pt) and +(290 :40 pt) .. (5.4, -4.5 );
\draw (.25, 4 ) node {$Q(\lambda)$};
\endscope
\end{tikzpicture}
\caption{Graph of $Q(\lambda)$} \label{figQ}
\end{sidewaysfigure}
\item The {\bf non-constant modules} $\Phi_\xi(\lambda)$. Assume $\lambda \geq p$ and let $\xi \in \mathbb P^1 $. If $\xi = [1 :\varepsilon]$ then $\Phi_\xi(\lambda)$ is defined by
\begin{center}
\begin{tabular}{rrl}
Basis: & \multicolumn{2 }{l}{$\set{w_{a + 1 }, w_{a + 2 }, \ldots, w_\lambda}$} \hspace{122 pt} \\
Action: & $ew_i$ & \hspace{-7 pt}$= (i + 1 )\bigl(w_{i + 1 } - \binom{\lambda}{i}\varepsilon^{i - a}\delta_i w_{a + 1 }\bigr)$, where $\delta_i = 1 $ if $i \equiv \lambda \pmod p$ and $\delta_i = 0 $ otherwise \\
& $fw_i$ & \hspace{-7 pt}$= (\lambda - i + 1 )w_{i - 1 }$ \\
& $hw_i$ & \hspace{-7 pt}$= (2 i - \lambda)w_i$ \\
Graph: & \multicolumn{2 }{l}{\Cref{figPhi}} \\
Local Jordan type: & \multicolumn{2 }{l}{$[p]^{r-1 }[p - a - 1 ][a + 1 ]$ at $\xi$ and $[p]^r$ elsewhere}
\end{tabular}
\end{center}
If $\xi = [0 :1 ]$ then $\Phi_\xi(\lambda)$ is defined to be the submodule of $V(\lambda)$ spanned by the basis elements $\set{v_{a + 1 }, v_{a + 2 }, \ldots, v_\lambda}$. It is also depicted in \Cref{figPhi} and has the same local Jordan type as above. \begin{sidewaysfigure}[p]
\centering
\vspace*{350 pt}
\begin{tikzpicture} [description/. style={fill=white, inner sep=2 pt}]
\scope[transform canvas={scale=.8 }]
\path[->, font=\scriptsize]
(m-2 -7 ) edge [bend left=20 ] node[auto ] {$a + 2 $} (m-2 -8 )
(m-2 -8 ) edge [bend left=20 ] node[auto, xshift=-10 pt] {$-1 $} (m-2 -7 )
(m-2 -6 ) edge [bend left=20 ] node[auto] {$1 $} (m-2 -5 )
(m-2 -5 ) edge [bend left=20 ] node[auto, xshift=-5 pt] {$a$} (m-2 -4 )
(m-2 -4 ) edge [bend left=20 ] node[auto] {$a + 1 $} (m-2 -3 )
(m-2 -3 ) edge [bend left=20 ] node[auto, xshift=13 pt] {$a + 2 $} (m-2 -2 )
(m-2 -2 ) edge [bend left=20 ] node[auto] {$a$} (m-2 -1 )
(m-2 -1 ) edge [bend left=20 ] node[auto] {$1 $} (m-2 -2 )
(m-2 -2 ) edge [bend left=20, shorten >=-10 pt] node[auto, xshift=12 pt] {$-1 $} (m-2 -3 )
(m-2 -4 ) edge [bend left=20, shorten <=-7 pt] node[auto, xshift=-7 pt] {$1 $} (m-2 -5 )
(m-2 -5 ) edge [bend left=20, shorten >=-3 pt] node[auto] {$a$} (m-2 -6 )
(m-2 -6 ) edge [bend left=20, shorten <=-3 pt, shorten >=-7 pt] node[auto, xshift=13 pt] {$a + 1 $} (m-2 -7 )
(m-2 -7 ) edge [bend left=20, shorten <=-7 pt] node[auto, xshift=-15 pt] {$a + 2 $} (m-2 -8 )
(m-2 -8 ) edge [bend left=20, shorten >=-10 pt] node[auto, xshift=12 pt] {$-1 $} (m-2 -9 )
(m-2 -10 ) edge [bend left=20, shorten <=-10 pt] node[auto, xshift=-10 pt] {$1 $} (m-2 -11 )
(m-2 -11 ) edge [bend left=20 ] node[auto] {$-1 $} (m-2 -12 );
\draw[<-] (m-2 -12 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $a + 2 $} (m-2 -12 );
\draw[<-] (m-2 -10 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $a$} (m-2 -10 );
\draw[<-] (m-2 -9 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $a + 2 $} (m-2 -9 );
\draw[<-] (m-2 -7 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a + 2 )$} (m-2 -7 );
\draw[<-] (m-2 -6 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-a$} (m-2 -6 );
\draw[<-] (m-2 -4 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $a$} (m-2 -4 );
\draw[<-] (m-2 -3 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $a + 2 $} (m-2 -3 );
\draw[<-] (m-2 -1 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $a$} (m-2 -1 );
\draw[shorten >=5 pt, ->] (m-2 -1 ) .. controls +(210 :170 pt) and +(250 :150 pt) .. node[pos=.21, below, xshift=-20 pt]{\scriptsize $-(a + 1 )\varepsilon^{rp}$} (m-2 -12 );
\draw[shorten >=5 pt, shorten <=5 pt, ->] (m-2 -4 ) .. controls +(220 :130 pt) and +(250 :120 pt) .. node[pos=.2, below, xshift=-30 pt]{\scriptsize $-(a + 1 )\binom{r}{q}\varepsilon^{qp}$} (m-2 -12 );
\draw[shorten >=5 pt, shorten <=8 pt, ->] (m-2 -10 ) .. controls +(220 :70 pt) and +(250 :70 pt) .. node[pos=.3, below, xshift=-43 pt]{\scriptsize $-(a + 1 )\binom{r}{q - 1 }\varepsilon^{(q - 1 )p}$} (m-2 -12 );
\path[draw] (-6.2, -2.5 ) rectangle (4.9, -.5 );
\draw (-5.6, -1.5 ) node {$e$:};
\draw[<-] (-5.3, -1.5 ) .. controls +(-20 :18 pt) and +(200 :18 pt) .. (-3.8, -1.5 );
\draw (-3.4, -1.5 ) node {$+$};
\draw[->] (-2.8, -1.2 ) .. controls +(230 :40 pt) and +(250 :30 pt) .. (-1.8, -1.2 );
\draw (-.3, -1.5 ) node {$f$:};
\draw[->] (0, -1.5 ) .. controls +(20 :18 pt) and +(160 :18 pt) .. (1.9, -1.5 );
\draw (3.5, -1.5 ) node {$h$:};
\draw[<-] (4.2, -2 ) .. controls +(70 :40 pt) and +(110 :40 pt) .. (4, -2 );
\draw (-.5, 7 ) node {$\Phi_{[1 :\varepsilon]}(\lambda)$};
\draw (.4, -4 ) node {$\Phi_{[0 :1 ]}(\lambda)$};
\path[->, font=\scriptsize]
(m-8 -3 ) edge [bend left=20 ] node[auto] {$a + 2 $} (m-8 -4 )
(m-8 -4 ) edge [bend left=20 ] node[auto] {$-1 $} (m-8 -3 )
edge [bend left=20 ] node[auto] {$a + 3 $} (m-8 -5 )
(m-8 -5 ) edge [bend left=20 ] node[auto] {$-2 $} (m-8 -4 )
edge [bend left=20 ] node[auto] {$a + 4 $} (m-8 -6 )
(m-8 -6 ) edge [bend left=20 ] node[auto] {$-3 $} (m-8 -5 )
edge [bend left=20 ] node[auto] {$a + 5 $} (m-8 -7 )
(m-8 -7 ) edge [bend left=20 ] node[auto] {$-4 $} (m-8 -6 )
edge [bend left=20 ] node[auto] {$\lambda - 2 $} (m-8 -8 )
(m-8 -8 ) edge [bend left=20 ] node[auto] {$3 $} (m-8 -7 )
edge [bend left=20 ] node[auto] {$\lambda - 1 $} (m-8 -9 )
(m-8 -9 ) edge [bend left=20 ] node[auto] {$2 $} (m-8 -8 )
edge [bend left=20 ] node[auto] {$\lambda$} (m-8 -10 )
(m-8 -10 ) edge [bend left=20 ] node[auto] {$1 $} (m-8 -9 );
\draw[<-] (m-8 -3 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a + 2 )$} (m-8 -3 );
\draw[<-] (m-8 -4 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a + 4 )$} (m-8 -4 );
\draw[<-] (m-8 -5 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a + 6 )$} (m-8 -5 );
\draw[<-] (m-8 -6 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a + 8 )$} (m-8 -6 );
\draw[<-] (m-8 -8 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a - 4 )$} (m-8 -8 );
\draw[<-] (m-8 -9 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-(a - 2 )$} (m-8 -9 );
\draw[<-] (m-8 -10 ) .. controls +(70 :50 pt) and +(110 :50 pt) .. node[pos=.5, above]{\scriptsize $-a$} (m-8 -10 );
\endscope
\end{tikzpicture}
\caption{Graphs of $\Phi_\xi(\lambda)$} \label{figPhi}
\end{sidewaysfigure}
\end{itemize}
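As an illustration, not part of the paper, the stated constant Jordan type of a Weyl module can be confirmed numerically. The following Python sketch builds the action of $e$ on $V(\lambda)$ over $\mathbb F_p$ for the sample values $p = 5 $ and $\lambda = 7 $ (so $r = 1 $, $a = 2 $) and reads off the Jordan type $[5 ][3 ] = [p]^r[a + 1 ]$ from the ranks of powers of the matrix.

```python
p, lam = 5, 7            # lam = r*p + a with r = 1, a = 2
n = lam + 1

# e v_i = (lam - i + 1) v_{i-1}, so E[i-1][i] = lam - i + 1 (mod p)
E = [[0] * n for _ in range(n)]
for i in range(1, n):
    E[i - 1][i] = (lam - i + 1) % p

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) % p
             for col in zip(*B)] for row in A]

def rank(M):
    # Rank over GF(p) by Gaussian elimination.
    M = [row[:] for row in M]
    rk = 0
    for c in range(n):
        piv = next((r for r in range(rk, n) if M[r][c]), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        inv = pow(M[rk][c], -1, p)
        M[rk] = [x * inv % p for x in M[rk]]
        for r in range(n):
            if r != rk and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rk])]
        rk += 1
    return rk

# ranks[k] = rank(E^k); for a nilpotent matrix the number of Jordan
# blocks of size exactly k is ranks[k-1] - 2*ranks[k] + ranks[k+1].
ranks, P = [n], E
for _ in range(p):
    ranks.append(rank(P))
    P = matmul(P, E)
ranks.append(0)
jtype = {k: ranks[k - 1] - 2 * ranks[k] + ranks[k + 1]
         for k in range(1, p + 1)}
assert {k: m for k, m in jtype.items() if m} == {5: 1, 3: 1}  # [p]^1[a+1]
```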
\begin{Thm}[\cite{premetSl2 }] \label{thmPremet}
Each of the following modules is indecomposable:
\begin{itemize}
\item $V(\lambda)$ and $Q(\lambda)$ for $0 \leq \lambda < p$.
\item $V(\lambda)$ and $V(\lambda)^\ast$ for $\lambda \geq p$ such that $p \nmid \lambda + 1 $.
\item $\Phi_\xi(\lambda)$ for $\xi \in \mathbb P^1 $ and $\lambda \geq p$ such that $p \nmid \lambda + 1 $.
\end{itemize}
Moreover, these modules are pairwise non-isomorphic, save $Q(p-1 ) = V(p-1 )$, and give a complete classification of the indecomposable restricted $\slt$-modules. \end{Thm}
As stated before, explicit bases for $V(\lambda)$, $V(\lambda)^\ast$, and $Q(\lambda)$ are known; see, for example, Benkart and Osborn \cite{benkartSl2 reps}. For the local Jordan type of $V(\lambda)$ and $V(\lambda)^\ast$, note that the matrix describing the action of $e$ with respect to the given basis is almost in Jordan normal form (one needs only to scale the basis elements appropriately), so we can immediately read off the local Jordan type at the point $ke \in \PG[\slt]$, and \cref{thmRatCJT} gives that these modules have constant Jordan type. As \cref{thmProjCJT} gives the local Jordan type of the $Q(\lambda)$, all that is left is to justify the explicit description of $\Phi_\xi(\lambda)$ and its local Jordan type. First we recall the definition of $\Phi_\xi(\lambda)$. Let $V = k^2 $ be the standard representation of $\SL_2 $; then the dual $V^\ast$ is a $2 $-dimensional representation with basis $\set{x, y}$ (dual to the standard basis for $V$). The induced representation on the symmetric algebra $S(V^\ast)$ is degree preserving and the dual $S^\lambda(V^\ast)^\ast$ of the degree $\lambda$ subspace is the Weyl module $V(\lambda)$. Specifically, we let $v_i \in V(\lambda)$ be dual to $x^{\lambda - i}y^i$. Let $B_2 \subseteq \SL_2 $ be the Borel subgroup of upper triangular matrices and recall that the homogeneous space $\SL_2 /B_2 $ is isomorphic to $\mathbb P^1 $ as a variety; the map $\phi\colon \mathbb P^1 \to \SL_2 $ given by
\[[1 :\varepsilon] \mapsto \begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix} \qquad \text{ and } \qquad [0 :1 ] \mapsto \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \]
factors to an explicit isomorphism $\mathbb P^1 \overset{\sim}{\to} \SL_2 /B_2 $. \begin{Def}[\cite{premetSl2 }]
Let $\Phi(\lambda)$ be the $\slt$-submodule of $V(\lambda)$ spanned by the vectors $\set{v_{a + 1 }, v_{a + 2 }, \ldots, v_\lambda}$. Given $\xi \in \mathbb P^1 $ we define $\Phi_\xi(\lambda)$ to be the $\slt$-module $\phi(\xi)\Phi(\lambda)$. \end{Def}
Observe first that $\Phi_{[0 :1 ]}(\lambda) = \Phi(\lambda)$, so in this case we have the desired description. Now assume $\xi = [1 :\varepsilon]$ where $\varepsilon \in k$. Since $\phi(\xi)$ is invertible, multiplication by $\phi(\xi)$ is an isomorphism, so $\Phi_\xi(\lambda)$ has basis $\set{\phi(\xi)v_i}$. Our basis for $\Phi_\xi(\lambda)$ will be obtained by what is essentially a row reduction of this basis, so to proceed we now compute the action of $\SL_2 $ on $V(\lambda)$.
For our description of $\Phi_\xi(\lambda)$ we will need only the following special case:
\[\phi(\xi)v_i = \begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix}v_i = \sum_{j = \lambda - i}^\lambda(-1 )^j\binom{j}{\lambda - i}\varepsilon^{i + j - \lambda}v_j. \]
\begin{Prop} \label{propBas}
Given $i = qp + b$, $0 \leq b < p$, define
\[w_i = \begin{cases} v_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}v_{\lambda - b} & \text{if} \ \ b \leq a \\ v_{\lambda - i} & \text{if} \ \ b > a. \end{cases}\]
Then the vectors $w_{a + 1 }, w_{a + 2 }, \ldots, w_\lambda$ form a basis of $\Phi_\xi(\lambda)$. \end{Prop}
\begin{proof}
We will prove by induction that for all $a + 1 \leq i \leq \lambda$ we have $\spn_k\set{\phi(\xi)v_{a + 1 }, \ldots, \phi(\xi)v_i} = \spn_k\set{w_{a + 1 }, \ldots, w_i}$. For the base case the formula above gives
\begin{align*}
\phi(\xi)v_{a + 1 } &= \sum_{j = rp - 1 }^\lambda(-1 )^j\binom{j}{rp - 1 }\varepsilon^{j - rp + 1 }v_j \\
&= (-1 )^{rp - 1 }\binom{rp - 1 }{rp - 1 }v_{rp - 1 } \\
&= (-1 )^{rp - 1 }w_{a + 1 }
\end{align*}
where the terms with $j > rp - 1 $ vanish because $\binom{j}{rp - 1 } = 0 $ in $k$ by Lucas' theorem, so the statement is true. For the inductive step we assume the statement holds for integers less than $i$. Then by hypothesis we have
\[\spn_k\set{\phi(\xi)v_{a + 1 }, \ldots, \phi(\xi)v_i} = \spn_k\set{w_{a + 1 }, \ldots, w_{i - 1 }, \phi(\xi)v_i}\]
and can replace $\phi(\xi)v_i$ with the vector
\[w' = (-1 )^{\lambda - i}\phi(\xi)v_i - \sum_{j = a + 1 }^{i - 1 }(-1 )^{i - j}\binom{\lambda - j}{\lambda - i}\varepsilon^{i - j}w_j\]
to get another spanning set. We then show that $w' = w_i$ by checking that the coordinates of each vector are equal. Note that for $j < \lambda - i$ the coefficient of $v_j$ in each of the terms of $w'$ is zero, as it is in $w_i$. The coefficient of $v_{\lambda - i}$ in $(-1 )^{\lambda - i}\phi(\xi)v_i$ is $1 $ and in each $w_j$, $a + 1 \leq j < i$, it is zero; hence the coefficient in $w'$ is $1 $, as it is in $w_i$. Next assume $\lambda - i < j < rp$. Then only $\phi(\xi)v_i$ and $w_{\lambda - j}$ contribute a $v_j$ term, so the coefficient of $v_j$ in $w'$ is
\[(-1 )^{\lambda - i + j}\binom{j}{\lambda - i}\varepsilon^{i + j - \lambda} - (-1 )^{i + j - \lambda}\binom{j}{\lambda - i}\varepsilon^{i + j - \lambda} = 0 \]
which again agrees with $w_i$. All that is left is to check the coefficients of $v_{rp}, v_{rp + 1 }, \ldots, v_\lambda$. Note that $w_t$ has a $v_{rp + j}$ term only if
\[t = p + a - j, 2 p + a - j, \ldots, \lambda - j\]
and the coefficient of $v_{rp + j}$ in $w_{tp + a - j}$, for $1 \leq t \leq r$, is
\[-\binom{r}{t}\varepsilon^{tp}. \]
Thus the coefficient of $v_{rp + j}$ in $w'$ is
\[(-1 )^{a - i - j}\binom{rp + j}{\lambda - i}\varepsilon^{i + j - a} + \sum(-1 )^{i + j - tp - a}\binom{r}{t}\binom{(r - t)p + j}{\lambda - i}\varepsilon^{i + j - a}\]
where the sum is over those $t$ such that $1 \leq t \leq r$ and $tp + a \leq i + j - 1 $. From here there are several cases. Assume first that $a < b$. Then, from $b < p$ we get $p + a - b > a \geq j$, hence any binomial coefficient whose top number equals $j$ in $k$ and whose bottom number equals $a - b$ in $k$ is zero, because computing it base $p$ involves a carry. Both $\binom{rp + j}{\lambda - i}$ and $\binom{(r - t)p + j}{\lambda - i}$ satisfy this condition, therefore the coefficient of $v_{rp + j}$ in $w'$ is zero. Thus if $a < b$ then we have $w' = w_i$. Next assume that $b \leq a$. Then the formula above for the coefficient of $v_{rp + j}$ in $w'$ becomes
\begin{align*}
&(-1 )^{a - i - j}\left[\binom{r}{q} + \sum(-1 )^{tp}\binom{r}{t}\binom{r - t}{r - q}\right]\binom{j}{a - b}\varepsilon^{i + j - a} \\
&\qquad = (-1 )^{a - i - j}\left[\binom{r}{q} + \sum(-1 )^t\binom{r}{q}\binom{q}{t}\right]\binom{j}{a - b}\varepsilon^{i + j - a} \\
&\qquad = (-1 )^{a - i - j}\left[1 + \sum(-1 )^t\binom{q}{t}\right]\binom{r}{q}\binom{j}{a - b}\varepsilon^{i + j - a}
\end{align*}
where the sum is over the same $t$ from above. If $j < a - b$ then clearly this is zero. If $j > a - b$ then the sum is over $t = 1, 2, \ldots, q$ and
\[1 + \sum_{t = 1 }^q(-1 )^t\binom{q}{t} = \sum_{t = 0 }^q(-1 )^t\binom{q}{t} = 0 \]
so the coefficient is zero in this case as well. The only remaining possibility is $j = a - b$, which gives the coefficient of $v_{\lambda - b}$ in $w'$. In that case the sum is over $t = 1, 2, \ldots, q - 1 $ and
\[1 + \sum_{t = 1 }^{q - 1 }(-1 )^t\binom{q}{t} = (-1 )^{q + 1 } + \sum_{t = 0 }^q(-1 )^t\binom{q}{t} = (-1 )^{q + 1 }\]
so the coefficient is
\[(-1 )^{a - i - (a - b) + q + 1 }\binom{r}{q}\varepsilon^{i + (a - b) - a} = -\binom{r}{q}\varepsilon^{qp}\]
as desired. Thus $w' = w_i$ and the proof is complete. \end{proof}
Now that we have a basis it is straightforward to determine the action. \begin{Prop}
Let $i = qp + b$, $a + 1 \leq i \leq \lambda$. Then
\begin{align*}
ew_i &= \begin{cases} (i + 1 )\left(w_{i + 1 } - \binom{\lambda}{i}\varepsilon^{qp}w_{b + 1 }\right) & \text{if} \ \ a = b \\ (i + 1 )w_{i + 1 } & \text{if} \ \ a \neq b \end{cases} \\
fw_i &= (\lambda - i + 1 )w_{i - 1 } \\
hw_i &= (2 i - \lambda)w_i
\end{align*}
where $w_a = w_{\lambda + 1 } = 0 $. \end{Prop}
\begin{proof}
The proof is a case-by-case analysis. We start with $e \in \slt$. If $b < a$ then
\[ew_i = ev_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}ev_{\lambda - b} = (i + 1 )v_{\lambda - i - 1 } - (b + 1 )\binom{r}{q}\varepsilon^{qp}v_{\lambda - b - 1 } = (i + 1 )w_{i + 1 }. \]
If $b = a$ then
\[ew_i = (i + 1 )v_{\lambda - i - 1 } - (b + 1 )\binom{r}{q}\varepsilon^{qp}v_{\lambda - b - 1 } = (i + 1 )\left(w_{i + 1 } - \binom{\lambda}{i}\varepsilon^{qp}w_{b + 1 }\right). \]
If $p - 1 > b > a$ then
\[ew_i = ev_{\lambda - i} = (i + 1 )v_{\lambda - i - 1 } = (i + 1 )w_{i + 1 }. \]
Finally if $b = p - 1 $ then
\[ew_i = (i + 1 )v_{\lambda - i - 1 } = 0. \]
All the above cases fit the given formula so we are done with $e$. Next consider $f \in \slt$. If $0 < b \leq a$ then
\begin{align*}
fw_i &= fv_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}fv_{\lambda - b} \\
&= (\lambda - i + 1 )v_{\lambda - i + 1 } - (\lambda - b + 1 )\binom{r}{q}\varepsilon^{qp}v_{\lambda - b + 1 } \\
&= (\lambda - i + 1 )w_{i - 1 }. \end{align*}
If $b = 0 $ then
\[fw_i = fv_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}fv_\lambda = (\lambda + 1 )v_{\lambda - i + 1 } = (\lambda - i + 1 )w_{i - 1 }. \]
If $a + 1 = b$ then
\[fw_i = fv_{\lambda - i} = (\lambda - i + 1 )v_{\lambda - i + 1 } = (r - q)p\,v_{\lambda - i + 1 } = 0. \]
Finally if $b > a + 1 $ then
\[fw_i = (\lambda - i + 1 )v_{\lambda - i + 1 } = (\lambda - i + 1 )w_{i - 1 }. \]
All the above cases fit the given formula so we are done with $f$. Last but not least consider $h \in \slt$. If $b \leq a$ then
\[hw_i = hv_{\lambda - i} - \binom{r}{q}\varepsilon^{qp}hv_{\lambda - b} = (2 i - \lambda)v_{\lambda - i} - (2 b - \lambda)\binom{r}{q}\varepsilon^{qp}v_{\lambda - b} = (2 i - \lambda)w_i. \]
If $b > a$ then
\[hw_i = hv_{\lambda - i} = (2 i - \lambda)v_{\lambda - i} = (2 i - \lambda)w_i. \]
\end{proof}
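As an illustrative sanity check of the proposition (not part of the argument), one can take $p = 3$, $\lambda = 4$ (so $r = 1$ and $a = 1$) and $\varepsilon = 1$, write down integer lifts of the matrices of $e$, $f$, $h$ on the basis $(w_2, w_3, w_4)$ from the formulas above, and verify the $\slt$ relations modulo $p$:

```python
import sympy as sp

# Sanity check (illustrative; assumptions: p = 3, lambda = 4, so r = 1,
# a = 1, and eps = 1).  In the basis (w_2, w_3, w_4) the stated action
# gives integer lifts
#   e: w_2 -> 3 w_3,  w_3 -> 4 w_4,  w_4 -> -5 eps^3 w_2,
#   f: w_2 -> 0,      w_3 -> 2 w_2,  w_4 -> w_3,
#   h: w_i -> (2i - lambda) w_i,
# and the sl_2 relations should hold modulo p.
p, eps = 3, 1
E = sp.Matrix([[0, 0, -5 * eps**3],
               [3, 0, 0],
               [0, 4, 0]])
F = sp.Matrix([[0, 2, 0],
               [0, 0, 1],
               [0, 0, 0]])
H = sp.diag(0, 2, 4)

def zero_mod_p(M):
    return all(x % p == 0 for x in M)

assert zero_mod_p(E * F - F * E - H)       # [e, f] = h
assert zero_mod_p(H * E - E * H - 2 * E)   # [h, e] = 2e
assert zero_mod_p(H * F - F * H + 2 * F)   # [h, f] = -2f
assert zero_mod_p(E**p)                    # e acts p-nilpotently
```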
Lastly we calculate that the Jordan type is as stated: $[p]^{r-1 }[p - a - 1 ][a + 1 ]$ at $\xi$ and $[p]^r$ elsewhere. First note that the result holds for $\Phi_{[0 :1 ]}(\lambda)$ by \cref{lemBJType}; furthermore, that the point $[0 :1 ] \in \mathbb P^1 $ at which the Jordan type is $[p]^{r - 1 }[p - a - 1 ][a + 1 ]$ corresponds to the line through $f \in \mathcal N_p(\slt)$ under the map
\begin{align*}
\iota\colon\mathbb P^1 &\to \PG[\slt] \\
[s : t] &\mapsto \begin{bmatrix} st & s^2 \\ -t^2 & -st \end{bmatrix}
\end{align*}
from \cref{exNslt}. Let
\[\ad\colon\SL_2 \to \End(\slt)\]
be the adjoint action of $\SL_2 $ on $\slt$. As $V(\lambda)$ is a rational $\SL_2 $-module this satisfies
\[\ad(g)(E)\cdot m = g\cdot(E\cdot(g^{-1 }\cdot m))\]
for all $g \in \SL_2 $, $E \in \slt$, and $m \in V(\lambda)$. Along with $\Phi_{\xi}(\lambda) = \phi(\xi)\Phi_{[0 :1 ]}(\lambda)$ this implies commutativity of the following diagram:
\[\xymatrix@C=50 pt{\Phi_{[0 :1 ]}(\lambda) \ar[r]^-{\phi(\xi)} \ar[d]_{\hspace*{40 pt}E} & \Phi_\xi(\lambda) \ar[d]^{\ad(\phi(\xi))(E)} \\ \Phi_{[0 :1 ]}(\lambda) \ar[r]_-{\phi(\xi)} & \Phi_\xi(\lambda)}\]
As multiplication by $\phi(\xi)$ is an isomorphism, letting $E$ range over $\mathcal N_p(\slt)$ we see that the module $\Phi_\xi(\lambda)$ has Jordan type $[p]^{r - 1 }[p - a - 1 ][a + 1 ]$ at $\ad(\phi(\xi))(f)$ and $[p]^r$ elsewhere. Then we simply calculate
\begin{align*}
\ad(\phi(\xi))(f) &= \begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -1 & -\varepsilon \end{bmatrix}^{-1 } \\
&= \begin{bmatrix} -\varepsilon & -1 \\ \varepsilon^2 & \varepsilon \end{bmatrix}
\end{align*}
and observe that, as an element of $\PG[\slt]$, this is $\iota([1 :\varepsilon])$. Thus we now have a complete description of the indecomposable $\slt$-modules. We finish this section with one more computation that will be needed in \Cref{secLieEx}: the computation of the Heller shifts $\Omega(V(\lambda))$ for indecomposable $V(\lambda)$. Note that $V(p-1 ) = Q(p-1 )$ is projective so $\Omega(V(p-1 )) = 0 $. For other $V(\lambda)$ we have the following. \begin{Prop} \label{propOmega}
Let $\lambda = rp + a$ be a non-negative integer and $0 \leq a < p$ its remainder modulo $p$. If $a \neq p - 1 $ then $\Omega(V(\lambda)) = V((r + 2 )p - a - 2 )$. \end{Prop}
\begin{proof}
This will be a direct computation. We will determine the projective cover $f\colon P \to V(\lambda)$ and then impose the condition $f(x) = 0 $ on an arbitrary element $x \in P$. This will give us the relations determining $\ker f = \Omega(V(\lambda))$, which we will convert into a basis and identify with $V((r + 2 )p - a - 2 )$.
Let $f_q$ denote the restriction of $f\colon P \to V(\lambda)$ to the $q^\text{th}$ summand $Q(a)$ of $P$. The module $V(\lambda)$ fits into a short exact sequence
\[0 \to V(p - a - 2 )^{\oplus r} \to V(\lambda) \overset{\pi}{\to} V(a)^{\oplus r + 1 } \to 0 \]
where $\pi$ has components $\pi_q$ for $q = 0, 1, \ldots, r$. Each $\pi_q\colon V(\lambda) \to V(a)$ is given by
\[v_i \mapsto \begin{cases} v_{i - qp} & \text{if} \ qp \leq i \leq qp + a \\ 0 & \text{otherwise. } \end{cases}\]
Hence the top of $V(\lambda)$ is $V(a)^{\oplus r + 1 }$ and $P = Q(a)^{\oplus r + 1 }$. Recall that $Q(a)$ has basis $\set{v_0, v_1, \ldots, v_{2 p - a - 2 }} \cup \set{w_{p - a - 1 }, w_{p - a}, \ldots, w_{p - 1 }}$. The map $f_q$ is uniquely determined up to a nonzero scalar and is given by
\begin{align*}
v_i & \mapsto -(a + 1 )^2 \binom{p - i - 1 }{a + 1 }v_{(q - 1 )p + a + i + 1 } && \text{if} \ 0 \leq i \leq p - a - 2, \\
w_i & \mapsto (-1 )^{i + a}\binom{a}{i + a + 1 - p}^{-1 }v_{(q - 1 )p + a + i + 1 } && \text{if} \ p - a - 1 \leq i \leq p - 1, \\
v_i & \mapsto 0 && \text{if} \ p - a - 1 \leq i \leq p - 1, \\
v_i & \mapsto (-1 )^{a + 1 }(a + 1 )^2 \binom{i + a + 1 - p}{a + 1 }v_{(q - 1 )p + a + i + 1 } && \text{if} \ p \leq i \leq 2 p - a - 2. \end{align*}
This gives $f = [f_0 \ f_1 \ \cdots \ f_r]$. To distinguish elements from different summands of $Q(a)^{\oplus r + 1 }$ let $\set{v_{q,0 }, v_{q,1 }, \ldots, v_{q,2 p - a - 2 }} \cup \set{w_{q, p - a - 1 }, w_{q, p - a}, \ldots, w_{q, p - 1 }}$ be the basis of the $q^\text{th}$ summand of $Q(a)^{\oplus r + 1 }$. Then any element of the cover can be written in the form
\[x = \sum_{q = 0 }^r\left[\sum_{i = 0 }^{2 p - a - 2 }c_{q, i}v_{q, i} + \sum_{i = p - a - 1 }^{p - 1 }d_{q, i}w_{q, i}\right]\]
for some $c_{q, i}, d_{q, i} \in k$. Applying $f$ gives
\begin{align*}
f(x) = &\sum_{q = 0 }^r\Bigg[-(a + 1 )^2 \sum_{i = 0 }^{p - a - 2 }\binom{p - i - 1 }{a + 1 }c_{q, i}v_{(q - 1 )p + a + i + 1 } \\
&\hspace{24 pt} + (-1 )^{a + 1 }(a + 1 )^2 \sum_{i = p}^{2 p - a - 2 }\binom{i + a + 1 - p}{a + 1 }c_{q, i}v_{(q - 1 )p + a + i + 1 } \\
&\hspace{24 pt} + \sum_{i = p - a - 1 }^{p - 1 }(-1 )^{a + i}\binom{a}{i + a + 1 - p}^{-1 }d_{q, i}v_{(q - 1 )p + a + i + 1 }\Bigg]
\end{align*}
Observe that $0 \leq i \leq p - a - 2 $ and $p \leq i \leq 2 p - a - 2 $ give $a + 1 \leq a + i + 1 \leq p - 1 $ and $p + a + 1 \leq a + i + 1 \leq 2 p - 1 $ respectively, whereas $p - a - 1 \leq i \leq p - 1 $ gives $p \leq a + i + 1 \leq p + a$. Looking modulo $p$ we see that the basis elements $v_{(q - 1 )p + a + i + 1 }$, for $0 \leq q \leq r$ and $p - a - 1 \leq i \leq p - 1 $, are linearly independent. Thus $f(x) = 0 $ immediately yields $d_{q, i} = 0 $ for all $q$ and $i$. Now rearranging we have
\begin{align*}
f(x) =& \sum_{q = 0 }^r\sum_{i = 0 }^{p - a - 2 }\Bigg[(-1 )^{a + 1 }(a + 1 )^2 \binom{i + a + 1 }{i}c_{q, i + p}v_{qp + a + 1 + i} \\
& -(a + 1 )^2 \binom{p - i - 1 }{a + 1 }c_{q, i}v_{(q - 1 )p + a + 1 + i}\Bigg] \\
=& -(a + 1 )^2 \sum_{i = 0 }^{p - a - 2 }\Bigg[\sum_{q = 0 }^{r - 1 }(-1 )^a\binom{i + a + 1 }{i}c_{q, i + p}v_{qp + a + 1 + i} \\
& + \sum_{q = 1 }^r\binom{p - i - 1 }{a + 1 }c_{q, i}v_{(q - 1 )p + a + 1 + i}\Bigg] \\
=& \sum_{q = 0 }^{r - 1 }\sum_{i = 0 }^{p - a - 2 }\Bigg[(-1 )^a\binom{i + a + 1 }{i}c_{q, i + p} + \binom{p - i - 1 }{a + 1 }c_{q + 1, i}\Bigg]v_{qp + a + 1 + i}
\end{align*}
so the kernel is defined by choosing $c_{q, i}$, for $0 \leq q \leq r - 1 $ and $0 \leq i \leq p - a - 2 $, such that
\[(-1 )^a\binom{i + a + 1 }{i}c_{q, i + p} + \binom{p - i - 1 }{a + 1 }c_{q + 1, i} = 0. \]
Note that
\[\frac{\binom{p - i - 1 }{a + 1 }}{\binom{i + a + 1 }{i}} = \frac{\binom{p - 1 }{i + a + 1 }}{\binom{p - 1 }{i}} = \frac{(-1 )^{i + a + 1 }}{(-1 )^i} = (-1 )^{a + 1 }\]
so the above equation simplifies to
\[c_{q, i + p} = c_{q + 1, i}. \]
Thus for $0 \leq i \leq (r + 2 )p - a - 2 $ the vectors
\[v_i^\prime = \begin{cases} v_{0, i} & \text{if} \ 0 \leq i < p, \\
v_{q, b} + v_{q - 1, p + b} & \text{if} \ 1 \leq q \leq r, \ 0 \leq b \leq p - a - 2, \\
v_{q, b} & \text{if} \ 1 \leq q \leq r, \ p - a - 1 \leq b < p, \\
v_{r, b} & \text{if} \ q = r + 1, \ 0 \leq b \leq p - a - 2. \end{cases}\]
form a basis for the kernel, where $i = qp + b$ with $0 \leq b < p$ the remainder of $i$ modulo $p$. It is now trivial to check that the $\slt$-action on this basis is identical to that of $V((r + 2 )p - a - 2 )$. \end{proof}
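A quick dimension count (illustrative only) confirms that the kernel has the right size to be $V((r + 2)p - a - 2)$:

```python
# Dimension bookkeeping for the Heller shift (assuming dim Q(a) = 2p and
# P = Q(a)^{r+1} as in the proof): dim Omega(V(lambda)) = dim P - dim V(lambda)
# agrees with dim V((r+2)p - a - 2) = (r+2)p - a - 1.
p = 7
for r in range(0, 5):
    for a in range(0, p - 1):          # the case a = p - 1 is excluded
        lam = r * p + a
        dim_omega = (r + 1) * 2 * p - (lam + 1)
        assert dim_omega == (r + 2) * p - a - 1
```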
\section{Matrix Theorems} \label{secMatThms}
In this section we determine the kernels of four particular maps between free $k[s, t]$-modules. While these maps are used to represent sheaf homomorphisms in \Cref{secLieEx} we do not approach this section geometrically. Instead we carry out these computations in the category of $k[s, t]$-modules. The first map is given by the matrix $M_\varepsilon(\lambda) \in \mathbb M_{rp}(k[s, t])$ shown in \Cref{figMats}. \begin{sidewaysfigure}[p]
\vspace{350 pt}
\[\hspace{0 pt}\begin{bmatrix} (a + 2 )st & t^2 &&& -(a + 1 )\binom{r}{1 }\varepsilon^ps^2 &&& -(a + 1 )\binom{r}{2 }\varepsilon^{2 p}s^2 &&& \cdots && -(a + 1 )\binom{r}{r}\varepsilon^{rp}s^2 \\
(a + 2 )s^2 & (a + 4 )st & 2 t^2 \\
&(a + 3 )s^2 & \ddots & \ddots \\
&& \ddots & \ddots & -t^2 \\
&&& as^2 & ast & 0 \\
&&&& (a + 1 )s^2 & \ddots & \ddots \\
&&&&& \ddots & \ddots & -t^2 \\
&&&&&& as^2 & ast & 0 \\
&&&&&&& (a + 1 )s^2 & \ddots & \ddots \\
&&&&&&&& \ddots & \ddots & -3 t^2 \\
&&&&&&&&& (a - 2 )s^2 & (a - 4 )st & -2 t^2 \\
&&&&&&&&&& (a - 1 )s^2 & (a - 2 )st & -t^2 \\
&&&&&&&&&&& as^2 & ast
\end{bmatrix}\]
\caption{$M_\varepsilon(\lambda)$} \label{figMats}
\end{sidewaysfigure}
For convenience we index the rows and columns of this matrix using the integers $a + 1, a + 2, \ldots, \lambda$. Then we can say more precisely that the $(i, j)^\text{th}$ entry of this matrix is
\[M_\varepsilon(\lambda)_{ij} = \begin{cases}
is^2 & \text{if} \ \ i = j + 1 \\
(2 i - a)st & \text{if} \ \ i = j \\
(i - a)t^2 & \text{if} \ \ i = j - 1 \\
-(a + 1 )\binom{r}{q}\varepsilon^{qp}s^2 & \text{if} \ \ (i, j) = (a + 1, qp + a) \\
0 & \text{otherwise. }
\end{cases}
\]
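Before turning to the kernel computation, here is a concrete machine check (illustrative only; assuming $p = 3$, $a = 0$, $r = 2$, $\varepsilon = 1$, so $\lambda = 6$): the two homogeneous degree $p - a - 2 = 1 $ vectors supported in label-coordinates $\{1, 2\}$ and $\{4, 5\}$ annihilate $M_\varepsilon(\lambda)$ modulo $p$:

```python
import sympy as sp

# Illustrative check of M_eps(lambda) in one concrete case (assumptions:
# p = 3, a = 0, r = 2, eps = 1, so lambda = 6; rows and columns carry the
# labels a+1..lambda = 1..6).
s, t = sp.symbols('s t')
p, a, r, eps = 3, 0, 2, 1
n = r * p
M = sp.zeros(n, n)
for ii in range(n):
    i = ii + a + 1          # the label of row/column ii
    if ii > 0:
        M[ii, ii - 1] = i * s**2
    M[ii, ii] = (2 * i - a) * s * t
    if ii < n - 1:
        M[ii, ii + 1] = (i - a) * t**2
for q in range(1, r + 1):   # extra top-row entries in columns qp + a
    M[0, q * p + a - (a + 1)] = -(a + 1) * sp.binomial(r, q) * eps**(q * p) * s**2

def zero_mod_p(vec):
    # True if every coefficient of every entry is divisible by p
    for entry in vec:
        e = sp.expand(entry)
        if e != 0 and any(c % p != 0 for c in sp.Poly(e, s, t).coeffs()):
            return False
    return True

# kernel vectors of degree p - a - 2 = 1, supported in labels {1,2} and {4,5}
H1 = sp.Matrix([-2 * t, s, 0, 0, 0, 0])
H2 = sp.Matrix([0, 0, 0, -2 * t, s, 0])
assert zero_mod_p(M * H1) and zero_mod_p(M * H2)
```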
\begin{Prop} \label{propMker}
The kernel of $M_\varepsilon(\lambda)$ is a free $k[s, t]$-module (ungraded) of rank $r$ whose basis elements are homogeneous of degree $p - a - 2 $. \end{Prop}
\begin{proof}
The strategy is as follows: First we will determine the kernel of $M_\varepsilon(\lambda)$ when considered as a map of $k[s, \frac{1 }{s}, t]$-modules. We do this by exhibiting a basis $H_1, \ldots, H_r$ via a direct calculation. Then by clearing the denominators from these basis elements we get a linearly independent set of vectors in the $k[s, t]$-kernel of $M_\varepsilon(\lambda)$. We conclude by arguing that these vectors in fact span, thus giving an explicit basis for the kernel of $M_\varepsilon(\lambda)$ considered as a map of $k[s, t]$-modules. To begin observe that $s$ is a unit in $k[s, \frac{1 }{s}, t]$,
\begin{sidewaysfigure}[p]
\vspace{350 pt}
\[\begin{bmatrix} (a + 2 )x & x^2 &&& -(a + 1 )\binom{r}{1 }\varepsilon^p &&& -(a + 1 )\binom{r}{2 }\varepsilon^{2 p} &&& \cdots && -(a + 1 )\binom{r}{r}\varepsilon^{rp} \\
a + 2 & (a + 4 )x & 2 x^2 \\
& a + 3 & \ddots & \ddots \\
&& \ddots & \ddots & -x^2 \\
&&& a & ax & 0 \\
&&&& a + 1 & \ddots & \ddots \\
&&&&& \ddots & \ddots & -x^2 \\
&&&&&& a & ax & 0 \\
&&&&&&& a + 1 & \ddots & \ddots \\
&&&&&&&& \ddots & \ddots & -3 x^2 \\
&&&&&&&&& a - 2 & (a - 4 )x & -2 x^2 \\
&&&&&&&&&& a - 1 & (a - 2 )x & -x^2 \\
&&&&&&&&&&& a & ax
\end{bmatrix}\]
\caption{$\frac{1 }{s^2 }M_\varepsilon(\lambda)$} \label{figxMat}
\end{sidewaysfigure}
thus over this ring the kernel of $M_\varepsilon(\lambda)$ is equal to the kernel of the matrix $\frac{1 }{s^2 }M_\varepsilon(\lambda)$ (shown in \Cref{figxMat}) with $(i, j)^\text{th}$ entries
\[\frac{1 }{s^2 }M_\varepsilon(\lambda)_{ij} = \begin{cases}
i & \text{if} \ \ i = j + 1 \\
(2 i - a)x & \text{if} \ \ i = j \\
(i - a)x^2 & \text{if} \ \ i = j - 1 \\
-(a + 1 )\binom{r}{q}\varepsilon^{qp} & \text{if} \ \ (i, j) = (a + 1, qp + a) \\
0 & \text{otherwise. }
\end{cases}\]
where $x = \frac{t}{s}$. Let
\[f = \begin{bmatrix} f_{a + 1 } \\ f_{a + 2 } \\ \vdots \\ f_\lambda \end{bmatrix}\]
be an arbitrary element of the kernel. Given $i = qp + b$ where $0 \leq b < p$ and $a + 1 \leq i \leq \lambda$ we claim that
\begin{equation} \label{eqPhiForm}
f_i = (-1 )^{\lambda - i}x^{\lambda - i}f_\lambda + (-1 )^b\binom{p + a - b}{p - b - 1 }x^{p - b - 1 }h_{q + 1 }
\end{equation}
for some choice of $h_1, \ldots, h_r \in k[s, \frac{1 }{s}, t]$ and $h_{r + 1 } = 0 $.
For the base case, taking $i = \lambda$ in \Cref{eqPhiForm} gives $f_\lambda = f_\lambda$. The condition imposed by the last row is $af_{\lambda - 1 } + axf_\lambda = 0 $; if $a \neq 0 $ this gives $f_{\lambda - 1 } = -xf_\lambda$ and if $a = 0 $ then this condition is automatically satisfied. The formula, when $a = 0 $, gives $f_{rp - 1 } = -xf_\lambda + h_r$ so we take this as the definition of $h_r$. Assume the formula holds for all $f_j$ with $j > i \geq a + 1 $ and that these $f_j$ satisfy the conditions imposed by rows $i + 2, i + 3, \ldots, \lambda$ of $\frac{1 }{s^2 }M_\varepsilon(\lambda)$. Row $i + 1 $ has nonzero entries $i + 1 $, $(2 i - a + 2 )x$, and $(i - a + 1 )x^2 $ in columns $i$, $i + 1 $, and $i + 2 $ respectively. First assume $i + 1 \neq 0 $ in $k$ or equivalently $b \neq p - 1 $ where $i = qp + b$ and $0 \leq b < p$. Then the condition
\[(i + 1 )f_i + (2 i - a + 2 )xf_{i + 1 } + (i - a + 1 )x^2 f_{i + 2 } = 0 \]
imposed by row $i + 1 $ can be taken as the definition of $f_i$. Observe that
\begin{align*}
&\frac{-1 }{i + 1 }\left((-1 )^{\lambda - i - 1 }(2 i - a + 2 ) + (-1 )^{\lambda - i - 2 }(i - a + 1 )\right) \\
&\qquad = \frac{(-1 )^{\lambda - i}}{i + 1 }\left((2 i - a + 2 ) - (i - a + 1 )\right) \\
&\qquad = \frac{(-1 )^{\lambda - i}}{i + 1 }\left(i + 1 \right) \\
&\qquad = (-1 )^{\lambda - i}
\end{align*}
so $f_i = (-1 )^{\lambda - i}x^{\lambda - i}f_\lambda + (\text{terms involving } h_j)$. For the $h_j$ terms there are two cases. First assume $b < p - 2 $. Then using $\frac{c}{e}\binom{c - 1 }{e - 1 } = \binom{c}{e}$ we see that
\begin{align*} \label{eqnBinom}
&\frac{-1 }{i + 1 }\left((-1 )^{b + 1 }(2 i - a + 2 )\binom{p + a - b - 1 }{p - b - 2 } + (-1 )^{b + 2 }(i - a + 1 )\binom{p + a - b - 2 }{p - b - 3 }\right) \\
&\qquad = \frac{(-1 )^b}{b + 1 }\left((2 b - a + 2 )\binom{p + a - b - 1 }{p - b - 2 } + (p + a - b - 1 )\binom{p + a - b - 2 }{p - b - 3 }\right) \\
&\qquad = \frac{(-1 )^b}{b + 1 }\left((2 b - a + 2 )\binom{p + a - b - 1 }{p - b - 2 } + (p - b - 2 )\binom{p + a - b - 1 }{p - b - 2 }\right) \\
&\qquad = \frac{(-1 )^b}{b + 1 }(b - a)\binom{p + a - b - 1 }{p - b - 2 } \\
&\qquad = \frac{(-1 )^b}{b + 1 }(b + 1 )\binom{p + a - b}{p - b - 1 } \\
&\qquad = (-1 )^b\binom{p + a - b}{p - b - 1 }. \end{align*}
Putting these together we get that
\begin{align*}
f_i &= \frac{-1 }{i + 1 }((2 i - a + 2 )xf_{i + 1 } + (i - a + 1 )x^2 f_{i + 2 }) \\
&= (-1 )^{\lambda - i}x^{\lambda - i}f_\lambda + (-1 )^b\binom{p + a - b}{p - b - 1 }x^{p - b - 1 }h_{q + 1 }
\end{align*}
as desired. Next assume $b = p - 2 $ so that $f_{i + 2 } = f_{(q + 1 )p}$. The coefficient of $h_{q + 2 }$ in $f_{(q + 1 )p}$ involves the binomial $\binom{p + a}{p - 1 }$. As $0 \leq a < p - 1 $ there is a carry when the addition $(p - 1 ) + (a + 1 ) = p + a$ is done in base $p$, thus this coefficient is zero and the $h_j$ terms of $f_i$ are
\begin{align*}
\frac{(-1 )^{p}}{i + 1 }(2 i - a + 2 )\binom{a + 1 }{0 }xh_{q + 1 } &= (-1 )^{p - 2 }(a + 2 )xh_{q + 1 } \\
&= (-1 )^b\binom{a + 2 }{1 }xh_{q + 1 }
\end{align*}
as desired. Thus the induction continues when $i + 1 \neq 0 $ in $k$. Now assume $i + 1 = 0 $ in $k$; equivalently, $b = p - 1 $. Then the condition imposed by row $i + 1 = (q + 1 )p$ is
\[- axf_{(q + 1 )p} - ax^2 f_{(q + 1 )p + 1 } = 0. \]
If $a = 0 $ then this is automatic. If $a > 0 $ then there is a base $p$ carry in the addition $(p - 2 ) + (a + 1 ) = p + a - 1 $, hence
\begin{align*}
&xf_{(q + 1 )p} + x^2 f_{(q + 1 )p + 1 } \\
&\qquad = (-1 )^{(r - q - 1 )p + a}x^{(r - q - 1 )p + a + 1 }f_\lambda + (-1 )^{(r - q - 1 )p + a - 1 }x^{(r - q - 1 )p + a + 1 }f_\lambda \\
&\qquad\quad - \binom{p + a - 1 }{p - 2 }x^ph_{q + 2 } \\
&\qquad = 0
\end{align*}
because the $f_\lambda$ terms cancel and the binomial is zero. So in either case the condition above is automatic. The formula for $f_i$ when $i = qp + (p - 1 )$ is
\[f_i = (-1 )^{(r - q - 1 )p + a + 1 }x^{(r - q - 1 )p + a + 1 }f_\lambda + h_{q + 1 }\]
so we take this as the definition of $h_{q + 1 }$ and the induction is complete. Now $f$ must have the given form for some choice of $h_1, \ldots, h_r$ and any such choice gives an element $f$ such that $\frac{1 }{s^2 }M_\varepsilon(\lambda)f$ is zero in all coordinates save the top ($a + 1 $). All that is left is to impose the condition that $\frac{1 }{s^2 }M_\varepsilon(\lambda)f$ is zero in the $(a + 1 )^\text{th}$ coordinate as well. This condition is
\[(a + 2 )xf_{a + 1 } + x^2 f_{a + 2 } - (a + 1 )\sum_{q = 1 }^r\binom{r}{q}\varepsilon^{qp}f_{qp + a} = 0. \]
In $(a + 2 )xf_{a + 1 } + x^2 f_{a + 2 }$ the $h_j$ terms are
\begin{align*}
&(-1 )^{a + 1 }\left((a + 2 )\binom{p - 1 }{p - a - 2 } - \binom{p - 2 }{p - a - 3 }\right)x^{p - a - 1 }h_1 \\
&\qquad = (-1 )^{a + 1 }\left((a + 2 )\binom{p - 1 }{p - a - 2 } + (p - a - 2 )\binom{p - 1 }{p - a - 2 }\right)x^{p - a - 1 }h_1 \\
&\qquad = 0
\end{align*}
and the coefficient of the $h_j$ term in $f_{qp + a}$ involves the binomial $\binom{p}{p - a - 1 }$ which is zero. Thus the top row imposes a condition only on $f_\lambda$, and this condition is
\begin{align*}
0 &= (-1 )^{rp - 1 }(a + 2 )x^{rp}f_\lambda + (-1 )^{rp - 2 }x^{rp}f_\lambda \\
&\quad - (a + 1 )\sum_{q = 1 }^r(-1 )^{(r - q)p}\binom{r}{q}\varepsilon^{qp}x^{(r - q)p}f_\lambda \\
&= (-1 )^{rp - 1 }(a + 1 )\left[\sum_{q = 0 }^r(-1 )^{qp}\binom{r}{q}\varepsilon^{qp}x^{(r - q)p}\right]f_\lambda. \end{align*}
Note that $x = \frac{t}{s}$ is algebraically independent over $k$ in $k[s, \frac{1 }{s}, t]$ and by hypothesis $a + 1 \neq 0 $ in $k$. The localization of an integral domain is again an integral domain, therefore if $f$ is in the kernel then we must have $f_\lambda = 0 $. As the $h_1, \ldots, h_r$ can be chosen arbitrarily this completes the determination of the kernel of $M_\varepsilon(\lambda)$, considered as a map of $k[s, \frac{1 }{s}, t]$-modules. It is free of rank $r$ and the basis elements are given by taking the coefficients of these $h_q$ in \Cref{eqPhiForm}. Let $H_q$ be the basis element that corresponds to $h_q$; it is shown in \Cref{figHq}. \begin{sidewaysfigure}[p]
\vspace{400 pt}
\[\begin{matrix} a + 1 \\ \vdots \\ qp - 1 \\ qp \\ qp + 1 \\ \vdots \\ qp + a \\ qp + a + 1 \\ \vdots \\ (q + 1 )p - 2 \\ (q + 1 )p - 1 \\ (q + 1 )p \\ \vdots \\ \lambda \end{matrix}\begin{bmatrix} 0 \\ \vdots \\ 0 \\ \binom{a + p}{p - 1 }x^{p - 1 } \\ -\binom{a + p - 1 }{p - 2 }x^{p - 2 } \\ \vdots \\ (-1 )^{p - a - 1 }\binom{p}{p - a - 1 }x^{p - a - 1 } \\ (-1 )^{p - a - 2 }\binom{p - 1 }{p - a - 2 }x^{p - a - 2 } \\ \vdots \\ -\binom{a + 2 }{1 }x \\ \binom{a + 1 }{0 } \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ (-1 )^{p - a - 2 }\binom{p - 1 }{p - a - 2 }x^{p - a - 2 } \\ \vdots \\ -(a + 2 )x \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \overset{s^{p - a - 2 }}{\longrightarrow} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ (-1 )^{p - a - 2 }\binom{p - 1 }{p - a - 2 }t^{p - a - 2 } \\ \vdots \\ -(a + 2 )s^{p - a - 3 }t \\ s^{p - a - 2 } \\ 0 \\ \vdots \\ 0 \end{bmatrix}\begin{matrix} a + 1 \\ \vdots \\ qp + a \\ qp + a + 1 \\ \vdots \\ (q + 1 )p - 2 \\ (q + 1 )p - 1 \\ (q + 1 )p \\ \vdots \\ \lambda \end{matrix}\]
\caption{$H_q \to s^{p - a - 2 }H_q$} \label{figHq}
\end{sidewaysfigure}
I claim that $s^{p - a - 2 }H_q$, for $1 \leq q \leq r$, is a basis for the kernel of $M_\varepsilon(\lambda)$, considered as a map of $k[s, t]$-modules. First note that $H_q$ is supported in coordinates $qp + a + 1 $ through $(q + 1 )p - 1 $. These ranges are disjoint for different $H_q$, therefore the $s^{p - a - 2 }H_q$ are clearly linearly independent. Let $f \in k[s, t]^{rp}$ be an element of the kernel of $M_\varepsilon(\lambda)$. Then as an element of $k[s, \frac{1 }{s}, t]$ we have that $f$ is in the kernel of $\frac{1 }{s^2 }M_\varepsilon(\lambda)$ and we can write
\[f = \sum_{q = 1 }^rc_qH_q. \]
where $c_q \in k[s, \frac{1 }{s}, t]$. The $((q + 1 )p - 1 )^\text{th}$ coordinate of $f$ is $c_q$ hence $c_q \in k[s, t]$. Also the $(qp + a + 1 )^\text{th}$ coordinate of $f$ is
\[(-1 )^{p - a - 2 }\binom{p - 1 }{p - a - 2 }c_qx^{p - a - 2 }\]
and the binomial coefficient in that expression is nonzero in $k$ so $c_qx^{p - a - 2 } \in k[s, t]$. In particular, $s^{p - a - 2 }$ must divide $c_q$ so write $c_q = s^{p - a - 2 }c^\prime_q$ for some $c^\prime_q \in k[s, t]$. We now have
\[f = \sum_{q = 1 }^rc^\prime_qs^{p - a - 2 }H_q\]
so the $s^{p - a - 2 }H_q$ span and are therefore a basis. Each $H_q$ is homogeneous of degree $0 $ so each $s^{p - a - 2 }H_q$ is homogeneous of degree $p - a - 2 $. \end{proof}
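The two base-$p$ carry facts used repeatedly in this proof can be checked directly (illustrative only):

```python
from math import comb

# The two base-p "carry" facts invoked in the proof, checked for p = 11:
# C(p + a, p - 1) = 0 mod p for 0 <= a < p - 1, and C(p + a - 1, p - 2) = 0
# mod p for 1 <= a < p - 1 (both follow from Lucas' theorem).
p = 11
for a in range(0, p - 1):
    assert comb(p + a, p - 1) % p == 0
for a in range(1, p - 1):
    assert comb(p + a - 1, p - 2) % p == 0
```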
The second map we wish to consider is given by the matrix $B(\lambda) \in \mathbb M_{\lambda + 1 }(k[s, t])$ shown in \Cref{figMatB}.
\begin{sidewaysfigure}[p]
\vspace{350 pt}
\[\begin{bmatrix} \lambda st & \lambda s^2 \\
-t^2 & (\lambda - 2 )st & (\lambda - 1 )s^2 \\
& -2 t^2 & (\lambda - 4 )st & (\lambda - 2 )s^2 \\
&& -3 t^2 & \ddots & \ddots \\
&&& \ddots & \ddots & (\lambda + 2 )s^2 \\
&&&& t^2 & (\lambda + 2 )st & (\lambda + 1 )s^2 \\
&&&&& 0 & \lambda st & \lambda s^2 \\
&&&&&& -t^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & 3 s^2 \\
&&&&&&&& -(\lambda - 2 )t^2 & -(\lambda - 4 )st & 2 s^2 \\
&&&&&&&&& -(\lambda - 1 )t^2 & -(\lambda - 2 )st & s^2 \\
&&&&&&&&&& -\lambda t^2 & -\lambda st
\end{bmatrix}\]
\caption{$B(\lambda)$} \label{figMatB}
\end{sidewaysfigure}
Index the rows and columns of this matrix using the integers $0, 1, \ldots, \lambda$. Then the entries of $B(\lambda)$ are
\[B(\lambda)_{ij} = \begin{cases}
-it^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2 i)st & \text{if} \ \ i = j \\
(\lambda - i)s^2 & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise. }
\end{cases}\]
\begin{Prop} \label{propBker}
The kernel of $B(\lambda)$ is a free $k[s, t]$-module of rank $r + 1 $. There is one basis element that is homogeneous of degree $\lambda$ and the remaining are homogeneous of degree $p - a - 2 $. \end{Prop}
\begin{proof}
The proof is very similar to the proof of \cref{propMker}. We start by finding the kernel of the matrix $\frac{1 }{s^2 }B(\lambda)$ shown in \Cref{figMatBx}
\begin{sidewaysfigure}[p]
\vspace{350 pt}
\[\begin{bmatrix} \lambda x & \lambda \\
-x^2 & (\lambda - 2 )x & \lambda - 1 \\
& -2 x^2 & (\lambda - 4 )x & \lambda - 2 \\
&& -3 x^2 & \ddots & \ddots \\
&&& \ddots & \ddots & \lambda + 2 \\
&&&& x^2 & (\lambda + 2 )x & \lambda + 1 \\
&&&&& 0 & \lambda x & \lambda \\
&&&&&& -x^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & 3 \\
&&&&&&&& -(\lambda - 2 )x^2 & -(\lambda - 4 )x & 2 \\
&&&&&&&&& -(\lambda - 1 )x^2 & -(\lambda - 2 )x & 1 \\
&&&&&&&&&& -\lambda x^2 & -\lambda x
\end{bmatrix}\]
\caption{$\frac{1 }{s^2 }B(\lambda)$} \label{figMatBx}
\end{sidewaysfigure}
whose entries are given by
\[\frac{1 }{s^2 }B(\lambda)_{ij} = \begin{cases}
-ix^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2 i)x & \text{if} \ \ i = j \\
\lambda - i & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise. }
\end{cases}\]
with $x = \frac{t}{s}$. Let
\[f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_\lambda \end{bmatrix}\]
be an arbitrary element of the kernel. We induct down the rows of the matrix to show that if $i = qp + b$, where $0 \leq b < p$ then
\[f_{\lambda - i} = (-1 )^{\lambda - i}x^{\lambda - i}g + (-1 )^b\binom{p + a - b}{p - b - 1 }x^{p - b - 1 }h_q\]
where $h_r = 0 $. For the base case, taking $i = \lambda$ in the formula gives $f_0 = g$ so we take this as the definition of $g$. The condition imposed by the first row is $axg + af_1 = 0 $ so if $a \neq 0 $ then $f_1 = -xg$. The formula gives $f_1 = -xg + (-1 )^{a - 1 }\binom{p + 1 }{p - a}x^{p - a}h_r = -xg$ so these agree. If $a = 0 $ then the condition is automatically satisfied and the formula gives $f_1 = -xg + h_{r - 1 }$ so we take this as the definition of $h_{r - 1 }$. For the inductive step assume the formula holds for $f_0, f_1, \ldots, f_{\lambda - i - 1 }$ and that these $f_j$ satisfy the conditions imposed by rows $0, \ldots, \lambda - i - 2 $. The three nonzero entries in row $\lambda - i - 1 $ are $(b - a + 1 )x^2 $, $(2 b - a + 2 )x$, and $b + 1 $ in columns $\lambda - i - 2 $, $\lambda - i - 1 $, and $\lambda - i$ respectively, thus the condition imposed is
\[(b - a + 1 )x^2 f_{\lambda - i - 2 } + (2 b - a + 2 )xf_{\lambda - i - 1 } + (b + 1 )f_{\lambda - i} = 0. \]
If $b < p - 2 $ then we can solve this for $f_{\lambda - i}$ and we find that it agrees with the formula above (for the $h_j$ terms the computation is identical to the one shown in \cref{propMker}). If $b = p - 2 $ we get
\begin{align*}
f_{\lambda - i} &= -(a + 1 )x^2 f_{\lambda - i - 2 } - (a + 2 )xf_{\lambda - i - 1 } \\
&= (-1 )^{\lambda - i - 1 }(a + 1 )x^{\lambda - i}g + (-1 )^{\lambda - i}(a + 2 )x^{\lambda - i}g - (a + 2 )h_q \\
&= (-1 )^{\lambda - i}x^{\lambda - i}g - \binom{a + 2 }{1 }xh_q
\end{align*}
as desired. Finally if $b = p - 1 $ then $b + 1 = 0 $ in $k$ so the condition is
\[-ax^2 f_{\lambda - i - 2 } - axf_{\lambda - i - 1 } = 0 \]
and this is automatically satisfied (the formulas are the same as in \cref{propMker} again). Thus no condition is imposed on $f_{\lambda - i}$ so we take the formula
\[f_{\lambda - i} = (-1 )^{\lambda - i}x^{\lambda - i}g + h_q\]
as the definition of $h_q$. This completes the induction. Note that for the final row, $\lambda$, to be row $\lambda - i - 1 $ we must take $i = -1 $, so we are in the case where $b + 1 = 0 $ and the condition is automatically satisfied. The rest of the proof goes as in \cref{propMker}, except that there is no final condition forcing $g = 0 $. If we let $G$ and $H_0, \ldots, H_{r - 1 }$ be the basis vectors corresponding to $g$ and $h_0, \ldots, h_{r - 1 }$ then the $H_q$ are linearly independent as before. The first ($0 ^\text{th}$) coordinate of $G$ is $1 $ while the first coordinate of each $H_q$ is $0 $, therefore $G$ can be added and this gives a basis for the kernel. The largest power of $x$ in $G$ is $\lambda$ in the last coordinate and the largest power of $x$ in $H_q$ is $p - a - 2 $ in the $(\lambda - qp - a - 1 )^\text{th}$ coordinate. These basis vectors lift to basis vectors of the kernel as a $k[s, t]$-module and are in degrees $\lambda$ and $p - a - 2 $ as desired. \end{proof}
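The degree-$\lambda$ basis vector $G$ admits a simple closed form that can be checked by machine; the cancellation in each row happens already over the integers (illustrative only):

```python
import sympy as sp

# The degree-lambda kernel element of B(lambda): homogenizing
# f_{lambda-i} = (-1)^(lambda-i) x^(lambda-i) g gives the vector with j-th
# coordinate (-1)^j s^(lambda-j) t^j, and B(lambda) kills it already over
# the integers (illustrative check for lambda = 4).
s, t = sp.symbols('s t')
lam = 4
B = sp.zeros(lam + 1, lam + 1)
for i in range(lam + 1):
    if i > 0:
        B[i, i - 1] = -i * t**2              # sub-diagonal
    B[i, i] = (lam - 2 * i) * s * t          # diagonal
    if i < lam:
        B[i, i + 1] = (lam - i) * s**2       # super-diagonal
G = sp.Matrix([(-1)**j * s**(lam - j) * t**j for j in range(lam + 1)])
assert (B * G).applyfunc(sp.expand) == sp.zeros(lam + 1, 1)
```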
Before we move on to the third map, let us first prove the following lemma which will be needed in \cref{thmFiSimp}. \begin{Lem} \label{lemBlambda}
Assume $0 \leq \lambda < p$. Then the $(i, j)^\text{th}$ entry of $B(\lambda)^\lambda$ is contained in the one dimensional space $ks^{\lambda + j - i}t^{\lambda - j + i}$. \end{Lem}
\begin{proof}
Let $b_{ij}$ be the $(i, j)^\text{th}$ entry of $B(\lambda)$. By definition the $(i, j)^\text{th}$ entry of $B(\lambda)^\lambda$ is given by
\[(B(\lambda)^\lambda)_{ij} = \sum_{n_1, n_2, \ldots, n_{\lambda - 1 }}b_{in_1 }b_{n_1 n_2 }\cdots b_{n_{\lambda - 1 }j}. \]
From the definition of $B(\lambda)$ we have
\begin{align*}
b_{ij} \in ks^2 & \ \ \ \text{if} \ j - i = 1, \\ b_{ij} \in kst & \ \ \ \text{if} \ j - i = 0, \\ b_{ij} \in kt^2 & \ \ \ \text{if} \ j - i = -1, \\ b_{ij} = 0 & \ \ \ \text{otherwise. }
\end{align*}
so any given term $b_{in_1 }b_{n_1 n_2 }\cdots b_{n_{\lambda - 1 }j}$ in the summation can be nonzero only if the $(\lambda + 1 )$-tuple $(n_0, n_1, \ldots, n_\lambda)$ is a \emph{walk} from $n_0 = i$ to $n_\lambda = j$, i. e. \ each successive term of the tuple must differ from the last by at most $1 $. For such a walk we now show by induction that $b_{n_0 n_1 }b_{n_1 n_2 }\cdots b_{n_{m - 1 }n_m} \in ks^{m + n_m - n_0 }t^{m - n_m + n_0 }$. For the base case $m = 1 $ we have the three cases above for $b_{n_0 n_1 }$ and one easily checks that the formula gives $kt^2 $, $kst$, or $ks^2 $ as needed. Now assume the statement holds for $m - 1 $ so that
\[b_{n_0 n_1 }\cdots b_{n_{m - 2 }n_{m - 1 }}b_{n_{m - 1 }n_m} \in ks^{m - 1 + n_{m - 1 } - n_0 }t^{m - 1 - n_{m - 1 } + n_0 }b_{n_{m - 1 }n_m}. \]
There are three cases. First if $n_m = n_{m - 1 } + 1 $ then $b_{n_{m - 1 }n_m} \in ks^2 $ and the set becomes
\[ks^{m - 1 + n_{m - 1 } - n_0 }t^{m - 1 - n_{m - 1 } + n_0 }\cdot s^2 = ks^{m + n_m - n_0 }t^{m - n_m + n_0 }\]
as desired. Next if $n_m = n_{m - 1 }$ then $b_{n_{m - 1 }n_m} \in kst$ and the set becomes
\[ks^{m - 1 + n_{m - 1 } - n_0 }t^{m - 1 - n_{m - 1 } + n_0 }\cdot st = ks^{m + n_m - n_0 }t^{m - n_m + n_0 }\]
as desired. Finally if $n_m = n_{m - 1 } - 1 $ then $b_{n_{m - 1 }n_m} \in kt^2 $ and the set becomes
\[ks^{m - 1 + n_{m - 1 } - n_0 }t^{m - 1 - n_{m - 1 } + n_0 }\cdot t^2 = ks^{m + n_m - n_0 }t^{m - n_m + n_0 }\]
as desired. Thus the induction is complete and for $m = \lambda$ this gives
\[b_{n_0 n_1 }b_{n_1 n_2 }\cdots b_{n_{\lambda - 1 }n_\lambda} \in ks^{\lambda + n_\lambda - n_0 }t^{\lambda - n_\lambda + n_0 } = ks^{\lambda + j - i}t^{\lambda - j + i}\]
and completes the proof. \end{proof}
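The lemma can be checked by machine for a small $\lambda$ (illustrative only; the monomial claim is characteristic-free, so the computation may be done over $\mathbb Z$):

```python
import sympy as sp

# Check of the lemma for lambda = 3, over the integers: every nonzero
# entry of B(lambda)^lambda is a multiple of s^(lambda+j-i) t^(lambda-j+i).
s, t = sp.symbols('s t')
lam = 3
B = sp.zeros(lam + 1, lam + 1)
for i in range(lam + 1):
    if i > 0:
        B[i, i - 1] = -i * t**2
    B[i, i] = (lam - 2 * i) * s * t
    if i < lam:
        B[i, i + 1] = (lam - i) * s**2
P = (B**lam).applyfunc(sp.expand)
for i in range(lam + 1):
    for j in range(lam + 1):
        if P[i, j] != 0:
            assert sp.Poly(P[i, j], s, t).monoms() == [(lam + j - i, lam - j + i)]
```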
Moving on, the third map we wish to consider is $B'(\lambda) \in \mathbb M_{rp}(k[s, t])$ defined to be the $rp^\text{th}$ trailing principal minor of $B(\lambda)$, i. e., the minor of $B(\lambda)$ consisting of rows and columns $a + 1, a + 2, \ldots, \lambda$. \begin{Prop} \label{propBpker}
The kernel of $B'(\lambda)$ is a free $k[s, t]$-module (ungraded) of rank $r$ whose basis elements are homogeneous of degree $p - a - 2 $. \end{Prop}
\begin{proof}
The induction from the proof of \cref{propBker} applies giving
\[f_{\lambda - i} = (-1 )^{\lambda - i}x^{\lambda - i}g + (-1 )^b\binom{p + a - b}{p - b - 1 }x^{p - b - 1 }h_q\]
for $0 \leq i < rp$. All that is left is the condition
\[-(a + 2 )xf_{a + 1 } - f_{a + 2 } = 0 \]
from the first row of $\frac{1 }{s^2 }B'(\lambda)$. Substituting in the formulas we get
\[(-1 )^{a + 1 }(a + 1 )x^{a + 2 }g = 0 \]
which forces $g = 0 $. Thus as a basis for the kernel we get $H_0, \ldots, H_{r - 1 }$. \end{proof}
Before we move on to the final map, let us first prove the following lemma which was needed in \Cref{secSl2 }.
\begin{Lem} \label{lemBJType}
Let $s, t \in k$ so that $B'(\lambda) \in \mathbb M_{rp}(k)$. Then \[\jtype(B'(\lambda)) = \begin{cases} [1 ]^{rp} & \text{if} \ s = t = 0, \\ [p]^{r - 1 }[p - a - 1 ][a + 1 ] & \text{if} \ s = 0, t \neq 0, \\ [p]^r & \text{if} \ s \neq 0. \end{cases}\]
\end{Lem}
\begin{proof}
If $(s, t) = (0, 0 )$ then $B'(\lambda)$ is the zero matrix, hence the Jordan type is $[1 ]^{rp}$. If $s = 0 $ and $t \neq 0 $ then $B'(\lambda)$ only has non-zero entries on the sub-diagonal. Normalizing these entries to $1 $ gives the Jordan form of the matrix from which we read the Jordan type. If we use the row numbering from $B(\lambda)$ (i. e. the first row is $a + 1 $, the second $a + 2 $, etc. ) then the zeros on the sub-diagonal occur at rows $p, 2 p, \ldots, rp$. Thus the first block is size $p - a - 1 $, followed by $r - 1 $ blocks of size $p$, and the last block is size $a + 1 $. Hence the Jordan type is $[p]^{r - 1 }[p - a - 1 ][a + 1 ]$. Now assume $s \neq 0 $. There are exactly $r(p - 1 )$ non-zero entries on the super-diagonal and no non-zero entries above the super-diagonal therefore $\rank B'(\lambda) \geq r(p - 1 )$.
As $B'(\lambda)$ is the matrix of a $p$-nilpotent operator we have $B'(\lambda)^p = 0 $, so every Jordan block has size at most $p$. The number of blocks is $rp - \rank B'(\lambda) \leq rp - r(p - 1 ) = r$, and $r$ blocks of size at most $p$ can only fill dimension $rp$ if each block has size exactly $p$. Hence the Jordan type is $[p]^r$. \end{proof}
The final map we wish to consider is given by the matrix $C(\lambda) \in \mathbb M_{\lambda + 1 }(k[s, t])$ shown in \Cref{figMatC}. \begin{sidewaysfigure}[p]
\vspace{350 pt}
\[\begin{bmatrix} \lambda st & s^2 \\
-\lambda t^2 & (\lambda - 2 )st & 2 s^2 \\
& -(\lambda - 1 )t^2 & (\lambda - 4 )st & 3 s^2 \\
&& -(\lambda - 2 )t^2 & \ddots & \ddots \\
&&& \ddots & \ddots & -s^2 \\
&&&& -(\lambda + 2 )t^2 & (\lambda + 2 )st & 0 \\
&&&&& -(\lambda + 1 )t^2 & \lambda st & s^2 \\
&&&&&& -\lambda t^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & (\lambda - 2 )s^2 \\
&&&&&&&& -3 t^2 & -(\lambda - 4 )st & (\lambda - 1 )s^2 \\
&&&&&&&&& -2 t^2 & -(\lambda - 2 )st & \lambda s^2 \\
&&&&&&&&&& -t^2 & -\lambda st
\end{bmatrix}\]
\caption{$C(\lambda)$} \label{figMatC}
\end{sidewaysfigure}
Index the rows and columns of this matrix using the integers $0, 1, \ldots, \lambda$. Then the entries of $C(\lambda)$ are
\[C(\lambda)_{ij} = \begin{cases}
(i - \lambda - 1 )t^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2 i)st & \text{if} \ \ i = j \\
(i + 1 )s^2 & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise. }
\end{cases}\]
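One family of kernel elements can be exhibited in closed form; the following illustrative check shows that the binomial vector is killed by $C(\lambda)$ already over the integers:

```python
import sympy as sp

# One vector killed by C(lambda) already over the integers: homogenizing
# f_i = (-1)^i C(lambda, i) x^i gives i-th coordinate
# (-1)^i C(lambda, i) s^(lambda-i) t^i (illustrative check for lambda = 5;
# over k the binomial coefficients then reduce modulo p).
s, t = sp.symbols('s t')
lam = 5
C = sp.zeros(lam + 1, lam + 1)
for i in range(lam + 1):
    if i > 0:
        C[i, i - 1] = (i - lam - 1) * t**2   # sub-diagonal
    C[i, i] = (lam - 2 * i) * s * t          # diagonal
    if i < lam:
        C[i, i + 1] = (i + 1) * s**2         # super-diagonal
F = sp.Matrix([(-1)**i * sp.binomial(lam, i) * s**(lam - i) * t**i
               for i in range(lam + 1)])
assert (C * F).applyfunc(sp.expand) == sp.zeros(lam + 1, 1)
```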
\begin{Prop} \label{propCker}
The kernel of $C(\lambda)$ is a free $k[s, t]$-module (ungraded) of rank $r + 1 $ whose basis elements are homogeneous of degree $a$. \end{Prop}
\begin{proof}
Let
\[f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_\lambda \end{bmatrix}\]
be an arbitrary element of the kernel of $\frac{1 }{s^2 }C(\lambda)$ shown in \Cref{figMatCx}
\begin{sidewaysfigure}[p]
\vspace{350 pt}
\[\begin{bmatrix} \lambda x & 1 \\
-\lambda x^2 & (\lambda - 2 )x & 2 \\
& -(\lambda - 1 )x^2 & (\lambda - 4 )x & 3 \\
&& -(\lambda - 2 )x^2 & \ddots & \ddots \\
&&& \ddots & \ddots & -1 \\
&&&& -(\lambda + 2 )x^2 & (\lambda + 2 )x & 0 \\
&&&&& -(\lambda + 1 )x^2 & \lambda x & 1 \\
&&&&&& -\lambda x^2 & \ddots & \ddots \\
&&&&&&& \ddots & \ddots & \lambda - 2 \\
&&&&&&&& -3 x^2 & -(\lambda - 4 )x & \lambda - 1 \\
&&&&&&&&& -2 x^2 & -(\lambda - 2 )x & \lambda \\
&&&&&&&&&& -x^2 & -\lambda x
\end{bmatrix}\]
\caption{$\frac{1 }{s^2 }C(\lambda)$} \label{figMatCx}
\end{sidewaysfigure}
whose entries are given by
\[\frac{1 }{s^2 }C(\lambda)_{ij} = \begin{cases}
(i - \lambda - 1 )x^2 & \text{if} \ \ i = j + 1 \\
(\lambda - 2 i)x & \text{if} \ \ i = j \\
i + 1 & \text{if} \ \ i = j - 1 \\
0 & \text{otherwise. }
\end{cases}\]
with $x = \frac{t}{s}$. We show by induction that if $i = qp + b$ and $0 \leq b < p$ then
\[f_i = (-1 )^b\binom{\lambda}{b}x^bh_q. \]
For the base case the formula gives $f_0 = h_0 $ so we take this as the definition of $h_0 $. The condition imposed by row $1 $ is $-\lambda xf_0 + f_1 = 0 $, which gives $f_1 = -xh_0 $ as desired. For the inductive step assume the formula holds for indices less than $i$ and that the condition imposed by all rows of index less than $i - 1 $ is satisfied. Row $i - 1 $ has nonzero entries $(i - \lambda - 2 )x^2 $, $(\lambda - 2 i + 2 )x$, and $i$ in columns $i - 2 $, $i - 1 $, and $i$ respectively, so the condition is
\[(i - \lambda - 2 )x^2 f_{i - 2 } + (\lambda - 2 i + 2 )xf_{i - 1 } + if_i = 0. \]
First assume $i \neq 0, 1 $ in $k$. Then we have
\begin{align*}
f_i &= \frac{-1 }{i}\left((-1 )^{b - 2 }(i - \lambda - 2 )\binom{\lambda}{b - 2 }x^bh_q + (-1 )^{b - 1 }(\lambda - 2 i + 2 )\binom{\lambda}{b - 1 }x^bh_q\right) \\
&= \frac{(-1 )^b}{b}\left((\lambda - b + 2 )\binom{\lambda}{b - 2 } + (\lambda - 2 b + 2 )\binom{\lambda}{b - 1 }\right)x^bh_q \\
&= \frac{(-1 )^b}{b}((b - 1 ) + (\lambda - 2 b + 2 ))\binom{\lambda}{b - 1 }x^bh_q \\
&= (-1 )^b\frac{\lambda - b + 1 }{b}\binom{\lambda}{b - 1 }x^bh_q \\
&= (-1 )^b\binom{\lambda}{b}x^bh_q
\end{align*}
as desired. Next assume $i = 0 $ in $k$. Then
\begin{align*}
& (i - \lambda - 2 )x^2 f_{i - 2 } + (\lambda - 2 i + 2 )xf_{i - 1 } \\
&\qquad = (\lambda + 2 )\binom{\lambda}{p - 2 }x^ph_{q - 1 } + (\lambda + 2 )\binom{\lambda}{p - 1 }x^ph_{q - 1 }. \end{align*}
But $a + 1 \neq 0 $ so $\binom{\lambda}{p - 1 } = 0 $, and if $a + 2 \neq 0 $ then $\binom{\lambda}{p - 2 } = 0 $, otherwise $\lambda + 2 = 0 $. In any case the above expression is $0 $ so the condition imposed by row $i - 1 $ is automatically satisfied. The formula gives $f_i = h_q$ so we take this as the definition of $h_q$. Finally assume $i = 1 $ in $k$. Then we have
\begin{align*}
f_i &= (\lambda + 1 )\binom{\lambda}{p - 1 }x^{p + 1 }h_{q - 1 } - \lambda xh_q \\
&= -\binom{\lambda}{1 }xh_q
\end{align*}
as desired. This completes the induction. We know that the given formulas for $f_i$ satisfy the conditions imposed by all rows save the last, whose condition is
\[-x^2 f_{\lambda - 1 } - \lambda xf_\lambda = 0. \]
We have
\[\lambda xf_\lambda = (-1 )^a\lambda\binom{\lambda}{a}x^{a + 1 }h_r = (-1 )^a\lambda x^{a + 1 }h_r. \]
If $a = 0 $ then
\[x^2 f_{\lambda - 1 } = (-1 )^{p - 1 }\binom{\lambda}{p - 1 }x^{p + 1 }h_{r - 1 } = 0 \]
and $\lambda = 0 $ so this condition is satisfied. If $a \neq 0 $ then
\[x^2 f_{\lambda - 1 } = (-1 )^{a - 1 }\binom{\lambda}{a - 1 }x^{a + 1 }h_r = (-1 )^{a - 1 }ax^{a + 1 }h_r\]
so
\begin{align*}
& x^2 f_{\lambda - 1 } + \lambda xf_\lambda \\
& \qquad = (-1 )^{a - 1 }ax^{a + 1 }h_r + (-1 )^a\lambda x^{a + 1 }h_r \\
& \qquad = (-1 )^a(\lambda - a)x^{a + 1 }h_r \\
& \qquad = 0
\end{align*}
and the condition is again satisfied so we have found a basis. If $H_q$ is the basis vector associated to $h_q$ then the smallest and largest powers of $x$ in $H_q$ are $0 $ in coefficient $qp$ and $a$ in coefficient $qp + a$. By the usual arguments the $H_q$ lift to a basis for the kernel of $C(\lambda)$ that is homogeneous of degree $a$. \end{proof}
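As a numerical sanity check on the proposition (our own illustration, not part of the paper; the function names are made up), one can specialize $(s, t)$ at a point with $s \neq 0$ over $\mathbb F_p$: the nullity of the specialized $C(\lambda)$ is then at least $r + 1$, with equality at generic points. A sketch using the tridiagonal entry formulas above:

```python
def c_matrix(p, lam, s, t):
    # the (lam+1) x (lam+1) tridiagonal matrix C(lam) over F_p,
    # specialized at (s, t), following the entry formulas in the text
    n = lam + 1
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        if i > 0:
            C[i][i - 1] = (i - lam - 1) * t * t % p   # sub-diagonal
        C[i][i] = (lam - 2 * i) * s * t % p           # diagonal
        if i < n - 1:
            C[i][i + 1] = (i + 1) * s * s % p         # super-diagonal
    return C

def nullity_mod_p(M, p):
    # kernel dimension over F_p via Gaussian elimination (p prime)
    M = [row[:] for row in M]
    nrows, ncols, rk = len(M), len(M[0]), 0
    for c in range(ncols):
        piv = next((i for i in range(rk, nrows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        inv = pow(M[rk][c], p - 2, p)
        for i in range(nrows):
            if i != rk and M[i][c] % p:
                f = M[i][c] * inv % p
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[rk])]
        rk += 1
    return ncols - rk
```

For instance, $p = 5$ and $\lambda = 7 = 1 \cdot 5 + 2$ at $(s, t) = (1, 1)$ gives nullity $2 = r + 1$.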
The final map we want to consider is parametrized by $0 \leq a < p - 1 $. Given such an $a$, let $D(a) \in \mathbb M_{2 p}(k[s, t])$ be the block matrix
\[D(a) = \begin{bmatrix} B(2 p - a - 2 ) & D'(a) \\ 0 & B(a)^\dagger \end{bmatrix}\]
where $D'(a)$ and $B(a)^\dagger$ are as follows. The matrix $D'(a)$ is a $(2 p - a - 1 ) \times (a + 1 )$ matrix whose $(i, j)^\text{th}$ entry is
\[D'(a)_{ij} = \begin{cases} \frac{1 }{i + 1 }s^2 & \text{if} \ i - j = p - a - 2 \\ \frac{1 }{a + 1 }t^2 & \text{if} \ (i, j) = (p, a) \\ 0 & \text{otherwise. } \end{cases}\]
The matrix $B(a)^\dagger$ is produced from $B(a)$ by taking the transpose and then swapping the variables $s$ and $t$. \[B(a)^\dagger = \begin{bmatrix} ast & -s^2 \\ at^2 & (a - 2 )st & -2 s^2 \\ & (a - 1 )t^2 & \ddots & \ddots \\ && \ddots & -(a - 2 )st & -as^2 \\ &&& t^2 & -ast \end{bmatrix}\]
\begin{Prop} \label{propQker}
The inclusion of $k[s, t]^{2 p - a - 1 }$ into $k[s, t]^{2 p}$ as the top $2 p - a - 1 $ coordinates of a column vector induces an isomorphism $\ker B(2 p - a - 2 ) \simeq \ker D(a)$. \end{Prop}
\begin{proof}
As $D(a)$ is block upper-triangular with $B(2 p - a - 2 )$ the top most block on the diagonal it suffices to show that every element of $\ker D(a)$ is of the form $\left[\begin{smallmatrix} v \\ 0 \end{smallmatrix}\right]$ with respect to this block decomposition. That is, we must show that if
\[f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_{2 p - 1 } \end{bmatrix}\]
is an element of $\ker D(a)$ then $f_i = 0 $ for all $2 p - a - 1 \leq i \leq 2 p - 1 $. Obviously it suffices to prove this for $\frac{1 }{t^2 }D(a)$ over $k[s, t, \frac{1 }{t}]$ so let $x = \frac{s}{t}$. We start by proving that $f_{2 p - 1 } = 0 $. There are two cases. First assume that $a + 2 = 0 $ in $k$. Then row $p$ of $\frac{1 }{t^2 }D(a)$ has only one nonzero entry, a $\frac{1 }{a + 1 }$ in column $2 p - 1 $. Thus $f \in \ker \frac{1 }{t^2 }D(a)$ gives $\frac{1 }{a + 1 }f_{2 p - 1 } = 0 $ in $k[s, t, \frac{1 }{t}]$, hence $f_{2 p - 1 } = 0 $. Next assume that $a + 2 < p$. Then the induction from \cref{propBker} applies to rows $p + 1, \ldots, 2 p - a - 2 $ and gives
\[f_i = (-1 )^{a + i}x^{2 p - a - 2 - i}f_{2 p - a - 2 }\]
for $p \leq i \leq 2 p - a - 2 $. The condition imposed by row $p$ is
\[-(a + 2 )xf_p - (a + 2 )x^2 f_{p + 1 } + \frac{1 }{a + 1 }f_{2 p - 1 } = 0. \]
But note that the induction gave us $f_p = -xf_{p + 1 }$ so this simplifies to $\frac{1 }{a + 1 }f_{2 p - 1 } = 0 $ and again we have $f_{2 p - 1 } = 0 $. Now the condition imposed by the last row of $D(a)$ gives $f_{2 p - 2 } = axf_{2 p - 1 } = 0 $. By induction the $i^\text{th}$ row gives $-if_{i - 1 } = (2 i + a + 2 )xf_i + (i + a + 2 )x^2 f_{i + 1 } = 0 $, hence $f_{i - 1 } = 0 $, for $p - a \leq i \leq 2 p - 2 $ and this completes the proof. \end{proof}
\section{Explicit computation of $\gker{M}$ and $\mathscr F_i(V(\lambda))$} \label{secLieEx}
In this final section we carry out the explicit computations of the sheaves $\gker{M}$, for every indecomposable $\slt$-module $M$, and $\mathscr F_i(V(\lambda))$ for $i \neq p$. Friedlander and Pevtsova \cite[Proposition 5.9 ]{friedpevConstructions} have calculated the sheaves $\gker{V(\lambda)}$ for Weyl modules $V(\lambda)$ such that $0 \leq \lambda \leq 2 p - 2 $. Using the explicit descriptions of these modules found in \Cref{secSl2 } we can do the calculation for the remaining indecomposable modules in the category. \begin{Prop} \label{thmKer}
Let $\lambda = rp + a$ with $0 \leq a < p$ the remainder of $\lambda$ modulo $p$. The kernel bundles associated to the indecomposable $\slt$-modules from \cref{thmPremet} are
\begin{align*}
\gker{\Phi_\xi(\lambda)} &\simeq \mathcal O_{\mathbb P^1 }(a + 2 - p)^{\oplus r} \\
\gker{V(\lambda)} &\simeq \mathcal O_{\mathbb P^1 }(-\lambda) \oplus \mathcal O_{\mathbb P^1 }(a + 2 - p)^{\oplus r} \\
\gker{V(\lambda)^\ast} &\simeq \mathcal O_{\mathbb P^1 }(-a)^{\oplus r + 1 } \\
\gker{Q(a)} &\simeq \mathcal O_{\mathbb P^1 }(-a) \oplus \mathcal O_{\mathbb P^1 }(a + 2 - 2 p)
\end{align*}
\end{Prop}
\begin{proof}
Assume first that $\xi = [1 : \varepsilon]$.
basis element, homogeneous of degree $m$, spans a summand of the kernel isomorphic to $k[s, t][-m]$. By definition the $\mathcal O_{\mathbb P^1 }$-module corresponding to $k[s, t][-m]$ is $\mathcal O_{\mathbb P^1 }(-m)$ so the description of the kernel translates directly to the description of the sheaf above.
The remaining cases are all identical. The modules $V(\lambda)$, $\Phi_{[0 : 1 ]}(\lambda)$, $V(\lambda)^\ast$, and $Q(a)$ give the matrices $B(\lambda)$, $B'(\lambda)$, $C(\lambda)$, and $D(a)$, whose kernels are calculated in Propositions \ref{propBker}, \ref{propBpker}, \ref{propCker}, and \ref{propQker} respectively.
\end{proof}
Next we compute $\mathscr F_i(V(\lambda))$ for any $i \neq p$ and any indecomposable $V(\lambda)$. The proof is by induction on $r$ in the expression $\lambda = rp + a$. For the base case we start with $V(\lambda)$ a simple module, i.e., $r = 0 $. Note that for the base case we do indeed determine $\mathscr F_p(V(\lambda))$; it is during the inductive step that we lose $i = p$.
\begin{Prop} \label{thmFiSimp}
If $0 \leq \lambda < p$ then
\[\mathscr F_i(V(\lambda)) = \begin{cases}\gker{V(\lambda)} & \text{if} \ i = \lambda + 1 \\ 0 & \text{otherwise. }\end{cases}\]
\end{Prop}
\begin{proof}
First note that $V(\lambda)$ has constant Jordan type $[\lambda + 1 ]$ so \cref{thmFi} tells us that when $i \neq \lambda + 1 $ the sheaf $\mathscr F_i(V(\lambda))$ is locally free of rank $0 $, hence is the zero sheaf.
For $i = \lambda + 1 $ recall from the previous proof that the map $\Theta_{V(\lambda)}$ of sheaves is given in the category of $k[s, t]$-modules by the matrix $B(\lambda)$ in \Cref{figMatB}. The $(\lambda + 1 )^\text{th}$ power of a matrix of Jordan type $[\lambda + 1 ]$ is zero so the entries of $B(\lambda)^{\lambda + 1 }$ are polynomials representing the zero function. We assume that $k$ is algebraically closed so this means $B(\lambda)^{\lambda + 1 } = 0 $ and therefore $\Theta_{V(\lambda)}^{\lambda + 1 } = 0 $. In particular
\[\gim[\lambda]{V(\lambda)} \subseteq \gker{V(\lambda)}\]
so the definition of $\mathscr F_{\lambda + 1 }(V(\lambda))$ gives
\begin{equation} \label{eqnFi}
\mathscr F_{\lambda + 1 }(V(\lambda)) = \frac{\gker{V(\lambda)} \cap \gim[\lambda]{V(\lambda)}}{\gker{V(\lambda)} \cap \gim[\lambda + 1 ]{V(\lambda)}} = \gim[\lambda]{V(\lambda)}.
\end{equation}
We have a short exact sequence of $k[s, t]$-modules
\[0 \to \im B(\lambda)^\lambda \to \ker B(\lambda) \to \frac{\ker B(\lambda)}{\im B(\lambda)^\lambda} \to 0. \]
If we show that the quotient $\ker B(\lambda)/\im B(\lambda)^\lambda$ is finite dimensional then by Serre's theorem and \autoref{eqnFi} this gives a short exact sequence of sheaves
\[0 \to \mathscr F_{\lambda + 1 }(V(\lambda)) \to \gker{V(\lambda)} \to 0 \to 0 \]
and completes the proof.
To show that $\ker B(\lambda)/\im B(\lambda)^\lambda$ is a finite dimensional module note that from $B(\lambda)^{\lambda + 1 } = 0 $ we get that the columns of $B(\lambda)^\lambda$ are contained in the kernel of $B(\lambda)$ which, in \cref{propBker} we determined is a free $k[s, t]$-module with basis element
\[G = \begin{bmatrix} s^\lambda \\ -s^{\lambda - 1 }t \\ \vdots \\ (-1 )^\lambda t^\lambda \end{bmatrix}. \]
We also know by \cref{lemBlambda} that the first entry in the $j^\text{th}$ column of $B(\lambda)^\lambda$ is $c_js^{\lambda + j}t^{\lambda - j}$ for some $c_j \in k$, so the $j^\text{th}$ column must therefore be $c_js^jt^{\lambda - j}G$. The columns of $B(\lambda)^\lambda$ range from $j = 0 $ to $j = \lambda$ so this shows that $G$ times any monomial of degree $\lambda$ is contained in the image of $B(\lambda)^\lambda$. Thus the quotient $\ker B(\lambda)/\im B(\lambda)^\lambda$ is spanned, as a vector space, by the set of vectors of the form $G$ times a monomial of degree strictly less than $\lambda$. There are only finitely many such monomials therefore $\ker B(\lambda)/\im B(\lambda)^\lambda$ is finite dimensional and the proof is complete.
\end{proof}
Now for the inductive step we will make use of \cref{thmOm}, but in a slightly different form. Note that the shift in \cref{thmOm} is given by tensoring with the sheaf $\mathcal O_{\PG[\slt]}(1 )$ associated to the shifted module $\frac{k[x, y, z]}{xy + z^2 }[1 ]$. Likewise we consider $\mathcal O_{\mathbb P^1 }(1 )$ to be the sheaf associated to $k[s, t][1 ]$. Pullback through the isomorphism $\iota\colon\mathbb P^1 \to \PG[\slt]$ of \cref{exNslt} yields $\iota^\ast\mathcal O_{\PG[\slt]}(1 ) = \mathcal O_{\mathbb P^1 }(2 )$. Consequently, \cref{thmOm} has the following corollary.
\begin{Cor} \label{corFiOmega}
Let $M$ be an $\slt$-module and $1 \leq i < p$. With twist coming from $\mathbb P^1 $ we have
\[\mathscr F_i(M) \simeq \mathscr F_{p - i}(\Omega M)(2 p - 2 i). \]
\end{Cor}
Observe that $i \neq p$ in the corollary; this is why our calculation of $\mathscr F_p(V(\lambda))$ for $\lambda < p$ does not induce a calculation of $\mathscr F_p(V(\lambda))$ when $\lambda \geq p$.
\begin{Prop}
If $V(\lambda)$ is indecomposable and $i \neq p$ then
\[\mathscr F_i(V(\lambda)) \simeq \begin{cases} \mathcal O_{\mathbb P^1 }(-\lambda) & \text{if} \ i \equiv \lambda + 1 \pmod p \\ 0 & \text{otherwise. } \end{cases}\]
\end{Prop}
\begin{proof}
Let $\lambda = rp + a$ where $0 \leq a < p$ is the remainder of $\lambda$ modulo $p$. We prove the result by induction on $r$. The base case $r = 0 $ follows from Propositions \ref{thmKer} and \ref{thmFiSimp}. For the inductive step assume $r \geq 1 $. By hypothesis the formula holds for $rp - a - 2 $ and by \cref{propOmega} we have $\Omega V(rp - a - 2 ) = V(\lambda)$. Applying \cref{corFiOmega} we get
\begin{align*}
\mathscr F_i(V(\lambda)) &= \mathscr F_{p - i}(V(rp - a - 2 ))(-2 i). \\
\intertext{If $i = a + 1 $ then}
\mathscr F_{a + 1 }(V(\lambda)) &= \mathscr F_{p - a - 1 }(V(rp - a - 2 ))(-2 a - 2 ) \\
&= \mathcal O_{\mathbb P^1 }(a + 2 - rp)(-2 a - 2 ) \\
&= \mathcal O_{\mathbb P^1 }(-a - rp) \\
&= \mathcal O_{\mathbb P^1 }(-\lambda)
\end{align*}
whereas if $i \neq a + 1 $ then $p - i \neq p - a - 1 $ so $\mathscr F_{p - i}(V(rp - a - 2 )) = 0 $. This completes the proof.
\end{proof}
\bibliographystyle{.. /Refs/alphanum}
\section{Introduction}
Four-dimensional $SU(N)$ gauge theory at zero temperature is known to be
in a confining phase for all values of the bare coupling. A very large amount of work has been performed over the last decade
in an effort to isolate the types of configurations in the
functional measure responsible
for maintaining one confining phase for arbitrarily weak coupling
\cite{Rev}, \cite{LAT}. Nevertheless, a direct derivation
of this unique
feature of $SU(N)$ theories (shared only by
non-abelian ferromagnetic spin systems in $2 $ dimensions)
has remained elusive. The origin of the difficulty is clear. It is the multi-scale nature of
the problem: passage from a short distance ordered regime, where weak
coupling perturbation theory is applicable,
to a long distance strongly coupled disordered regime, where
confinement and other collective phenomena emerge. Systems involving such dramatic change in physical
behavior over different scales are hard to treat. Hydrodynamic turbulence,
involving passage from laminar to turbulent flow, is another
well-known example,
which, in fact, shares some striking qualitative features with the
confining QCD vacuum. The natural framework for addressing the problem from first principles
is a Wilsonian renormalization group (RG) block-spinning procedure bridging
short to long scales. The use of lattice regularization, i. e. the framework
of lattice gauge theory (LGT) \cite{W}, is virtually mandatory in this
context. There is no other known usable
non-perturbative formulation
of gauge theory that gives the path integral in closed form
preserving non-perturbative gauge invariance and positivity of the
transfer matrix (reflection positivity). Attempts at exact blocking constructions towards the
`perfect action' along the Wilsonian renormalized trajectory \cite{H},
however, turn out, not surprisingly, to be exceedingly complicated. There are, nonetheless, approximate RG decimation procedures that
can provide bounds on judiciously chosen quantities. The basic idea in this paper is to
obtain both upper and lower bounds for the
partition function and the partition function in the presence of
external center flux. The bounds are obtained by employing
approximate decimations of the `potential moving' type \cite{M},
\cite{K}, which
can be explicitly computed to any accuracy by simple algebraic operations. This leads to a rather simple construction constraining the behavior of the
exact partition functions in the presence and in
the absence of center flux; and, through them, the exact vortex free energy
order parameter. The latter is the ratio of these two partition functions. It is thus shown to exhibit
confining behavior for all values
$0 < \beta < \infty$, of the inverse coupling $\beta=4 /g^2 $
defined at lattice spacing $a$ (UV cutoff). An earlier outline of
the argument was given in \cite{T1 }. As it will become clear in the following, there are two main ingredients
here that allow this type of result to
be obtained. The first is the use of approximate decimations that are
easily explicitly computable at every step, while correctly reflecting the
nature of RG flow in the exact theory. The second is to consider
only partition functions, or (differences of) free energies, rather than
the RG evolution of a
full effective action that would allow computation of any observable at
different scales. This more narrowly focused approach results into
tremendous simplification compared to a general RG blocking
construction. The presentation is for the most part quite explicit. Some simple
propositions, mostly containing basic bounds, serve as
building blocks of the argument. They are enumerated by roman numerals in
the text below. Most proofs have been relegated to a
series of appendices so as not to clutter what is essentially a
simple construction. Only the case of gauge group $SU(2 )$ is considered
explicitly here. The same development, however, can be applied to other
groups, and, most particularly, to $SU(3 )$ which exhibits identical
behavior under the approximate decimations. It will be helpful at this point to provide an outline of the
steps in the argument developed in the rest of the paper. In section \ref{DEC}, starting with the pure $SU(2 )$ LGT with
partition function defined on a lattice of spacing $a$, we
define a class of approximate decimation transformations to a
coarser lattice of spacing $ba$. In section \ref{Z} the resulting partition function on this decimated lattice
is shown to be an upper bound on the partition function on
the original lattice. A similar rule can be devised for obtaining a
partition function on the decimated lattice which gives a lower bound
on the original partition function. One then interpolates between these bounds. For some appropriate value of the interpolating parameter, one thus
obtains an exact
integral representation of the original partition function. This
representation is in terms of
an effective action defined on the decimated lattice of spacing $ab$
plus a bulk free energy contribution resulting from the blocking $a \to ab$. Now, any such interpolation is not unique,
and it is indeed expedient to consider different interpolation
parametrizations. The resulting partition function representation is then invariant under
such parametrization variations in its effective action. The other important ingredient is that the effective action in this
representation is constrained between the effective actions
corresponding to the upper and lower bound partition functions. Iterating
this procedure in successive decimations, a representation of the
partition function is obtained on progressively coarser lattices of
spacing $a \to ab \to ab^2 \to \cdots \to ab^n$. In section \ref{TZ} we consider the partition function in the
presence of external
center flux. This is the flux of a center vortex, introduced by a
$Z(2 )$ `twist' in the action, and rendered topologically stable by winding
around the lattice torus. The decimation-interpolation procedure
just outlined for the partition function can be applied also in the
presence of the external flux. A representation of the twisted
partition function on progressively coarser lattices can then be
obtained in a completely analogous manner. The ratio of the twisted to the untwisted partition function is the
vortex free energy order parameter. Its behavior as a function
of the size of the system characterizes the system's possible phases. By known correlation inequalities it can, furthermore, be related to the
Wilson and 't Hooft order parameters. Our representations
of the twisted and untwisted partition functions may now be used
to represent the ratio (section \ref{Z-/Z}). One may exploit the parametrization invariance of these representations
to ensure that the bulk free energy contributions resulting in each
decimation step $ab^{m-1 } \to ab^m$ explicitly cancel between numerator and
denominator in the ratio. One is then left with a representation
of the vortex free energy solely in terms of an effective action
defined on a lattice of spacing $ab^n$. Now this effective action is constrained by
the effective actions corresponding to the upper and lower bounds. The latter are easily explicitly computable by straightforward iteration
of the potential-moving decimation rules. Under successive
transformations they flow, for space-time dimension $d\leq 4 $ and
any original coupling $g$ defined at
spacing $a$, to the strong coupling regime. This is the regime where the
coefficients in the character expansion of the exponential of the action
become sufficiently small for the strong coupling cluster expansion to
converge. Confining behavior is the immediate result for the
vortex free energy, and, hence, `area law' behavior for the Wilson loop
(section \ref{CONf}). As is well known, the theory contains only one free parameter, a
physical scale which is conventionally taken to be (some multiple of)
the string tension. This fact comes out in a natural way in the
context of RG decimations, as we will see in the following. Fixing this scale then determines the dependence $g(a)$. The fact that $g(a)\to 0 $ as $a\to 0 $ is an essentially
qualitative consequence of the flow exhibited by the decimations. Some concluding remarks are given in section \ref{SUM}. \section{Decimations} \label{DEC}
\setcounter{equation}{0 }
\setcounter{Roman}{0 }
We work on a hypercubic lattice $\Lambda \subset {\rm\bf Z}^d$ of
length $L_\mu$ in the $x^\mu$-direction, $\mu=1, \ldots , d$,
in units of the lattice spacing $a$. Individual bonds, plaquettes,
$3$-cubes, etc.\ are generically denoted by $b$, $p$, $c$, etc. More
specific notations such as $b_\mu$ or $p_{\mu\nu}$ are
used to indicate elementary $m$-cells of particular orientation. We use the standard framework and common notations of LGT with
gauge group $G$. Group elements are generically denoted by $U$,
and the bond variables by $U_b \in G$. In this paper we take
$G=SU(2 )$. We start with some appropriate plaquette action $A_p$ defined
on $\Lambda$, which, for definiteness, is taken to be
the Wilson action
\begin{equation}
A_p(U_p, \beta) ={\beta\over 2 }\;{\rm Re}\, \chi_{1 /2 }(U_p) \;, \qquad
U_p=\prod_{b\in \partial p} U_b \;, \label{Wilson}
\end{equation}
with $\beta=4 /g^2 $ defining the lattice coupling $g$. The character expansion
of the exponential of the plaquette action function is given by
\begin{equation}
\exp \left(A_p(U, \beta)\right)
= \sum_j\;d_j\, F_j(\beta)\, \chi_j(U) \label{exp}
\end{equation}
with Fourier coefficients:
\begin{equation}
F_j(\beta) = \int\, dU\;
\exp \left(A_p(U, \beta)\right) \, {1 \over d_j}\, \chi_j(U)\;. \label{Fourier}
\end{equation}
Here $dU$ denotes Haar measure on $G$, and $\chi_j$ the
character of the $j$-th representation of dimension $d_j$. So, for SU(2 ), the only case considered explicitly here, all
characters are real, $j=0, {1 \over 2 }, 1,
{3 \over 2 }, \ldots$, and $d_j=(2 j+1 )$. Equation (\ref{Fourier}) implies that
$F_0 \geq F_j$ for all $j\not=0 $. Explicitly, one finds
\begin{equation}
F_j(\beta) = {2 \over \beta}\, I_{d_j}(\beta) \,
\end{equation}
in terms of the modified Bessel function $I_\nu$. It will be convenient to work in terms of normalized coefficients:
\begin{equation}
c_j(\beta) = {\displaystyle F_j(\beta) \over \displaystyle F_0 (\beta)} \;, \label{ncoeffs}
\end{equation}
so that
\begin{eqnarray}
\exp \left(A_p(U, \beta)\right) &=& F_0 \, \Big[\, 1 + \sum_{j\not= 0 }
d_j\, c_j(\beta)\, \chi_j(U)\,
\Big] \nonumber \\
& \equiv & F_0 \;f_p(U, \beta)\,. \label{nexp}
\end{eqnarray}
The (normalized) partition function on lattice $\Lambda$ is then
\begin{equation}
Z_\Lambda(\beta) =
\int dU_\Lambda\;\prod_{p\in \Lambda}\, f_p(U_p, \beta)\equiv
\int\, d\mu_\Lambda^0
\;, \label{PF1 }
\end{equation}
where $dU_\Lambda\equiv \prod_{b\in \Lambda}dU_b$, and expectations
are computed with the measure $d\mu_\Lambda =
d\mu_\Lambda^0 / Z_\Lambda(\beta)$. The action (\ref{Wilson}) is such that
\begin{equation}
F_j (\beta)\geq 0 \;, \qquad \mbox{hence}\quad 1 \geq c_j(\beta)
\geq 0 \qquad\quad \mbox{all}\quad j \;,
\end{equation}
which implies that the measure defined by (\ref{PF1 }) satisfies reflection
positivity (RP) both in planes without sites and in planes with sites. Note that $\lim_{\beta\to \infty}c_j(\beta)=1 $. Let $\Lambda^{(n)}$ be the hypercubic lattice of spacing $b^na$,
with integer $b\geq 2 $, and
$Z_{\Lambda^{(n)}}(\{c_j(n)\})$ denote a partition
function of the form (\ref{PF1 }) defined on
$\Lambda^{(n)}$ in terms of
some given set of coefficients $\{c_j(n)\}$:
\begin{eqnarray}
Z_{\Lambda^{(n)}}(\{c_j(n)\}) & = &
\int dU_{\Lambda^{(n)}} \prod_{p\in \Lambda^{(n)}}
\Big[\, 1 + \sum_{j\not= 0 } d_j \,
c_j(n)\chi_j(U_p)\, \Big] \nonumber \\
& \equiv & \int dU_{\Lambda^{(n)}} \prod_{p\in \Lambda^{(n)}}\, f_p(U_p, n)
\equiv \int\, d\mu_{\Lambda^{(n)}}^0
\,, \label{PF2 }
\end{eqnarray}
where $dU_{\Lambda^{(n)}}\equiv \prod_{b\in \Lambda^{(n)}}dU_b$. We also employ the notations
\begin{equation}
g_p(U, n) \equiv f_p(U, n) -1 = \sum_{j\not= 0 } d_j\, c_j(n)
\, \chi_j(U) \;, \label{g}
\end{equation}
and $\| \cdot\|$ for the $\|\cdot\|_\infty$-norm:
\begin{equation}
\|g(n)\| = \sum_{j\not=0 } d_j^2 \, c_j(n) \,. \label{gnorm}
\end{equation}
One has the simple but basic result:
\prop{For $Z_{\Lambda^{(n)}}(\{c_j(n)\})$ given by (\ref{PF2 })
with $c_j(n) \geq 0 $ for all $j$, and periodic boundary conditions,
(i) $\dZ{n}(\{c_j(n)\})$ is an increasing function of each
$c_j(n)$:
\begin{equation}
\partial \dZ{n}(\{c_i(n)\}) / \partial c_j(n) \geq 0 \; ;\label{PFder0 }
\end{equation}
(ii)
\begin{equation}
Z_{\Lambda^{(n)}}(\{c_j(n)\}) \geq \Big[\,1 + \sum_{j\not=0 } d_j^2 \,
c_j(n)^6 \, \Big]^{|\Lambda^{(n)}|} \; . \label{PFlowerb1 }
\end{equation} }
(\ref{PFder0 }) is an immediate consequence of RP in planes without sites. The proof of (\ref{PFlowerb1 }), also based on RP, is given in Appendix A.
compensating by `strengthening', i.e.\ increasing $c_j$'s of
boundary plaquettes of each cell. The simplest such scheme \cite{M},
which is adopted in the following, implements
complete removal of interior plaquettes. This may be pictured \cite{K}
as moving the potentials due to interior plaquette interactions
to the boundary. This `potential moving' may be performed as the composition of
elementary steps. The elementary potential moving step is defined in terms of
a $3 $-dimensional cell of side length $ba$ in a given
decimation direction, say the $x^\kappa$-direction,
and length $a$ in the other two directions $\mu$, $\nu$. Two such $3 $-cells adjacent along the
$\kappa$-direction are shown in Figure~\ref{Dec1 fig}. The $(b -1 )$
interior plaquettes in each cell perpendicular
to $x^\kappa$ (shaded) are removed, i.e. \begin{equation}
A_p(U_p) \to 0 \label{potmove1 }
\end{equation}
for the action at their original location, and displaced (arrows) in the
positive $x^\kappa$ direction to the position of the corresponding
plaquette (bold) on the cell boundary. There the displaced interior
plaquettes are combined with the boundary plaquette
into one plaquette $p$ with action
`renormalized' by some appropriate amount\footnote{One
may take this renormalization factor to depend on the move direction,
but we need not consider these more general transformations here. }
$\zeta_0 $:
\begin{equation}
A_p(U) \to \zeta_0 \, A_p(U) \;. \label{potmove2 }
\end{equation}
\begin{figure}[ht]
\resizebox{15cm}{!}{\input{Dec1.pstex_t}}
\caption{Basic plaquette moving operation, $b=2 $ \label{Dec1 fig}}
\end{figure}
A complete transformation consists of performing this elementary operation
successively in every lattice direction $\kappa=1, \ldots, d$
in such a way that eventually one is left only with plaquette interactions
on a lattice of spacing $ba$. In practice, there is no reason for a choice
other than $b=2 $, but, for clarity, we keep general (integer) $b$. The result of a complete transformation
is given by equations (\ref{RG1 })-(\ref{RG5 }) below, to which a reader may
turn directly. To describe this process in more detail, let the lattice be
partitioned into $d$-dimensional hypercubic decimation cells $\sigma^d$
of side length $b a$ in each lattice direction. Plaquettes interior
to a $\sigma^d$ are defined as those not wholly contained in its
$(d-1 )$-dimensional boundary $\partial \sigma^d$. Consider the effect
of successive application of the elementary moving operation to plaquettes
of fixed orientation, say $[\mu\nu]$. There are $(d-2 )$ normal directions $\kappa_i \not= \mu, \nu$,
$i=1, \cdots, d-2 $,
in which a plaquette $p_{\mu\nu}$ can be moved. Interior $p_{\mu\nu}$'s in each $\sigma^d$ are first moved
to the cell boundary $\partial \sigma$
in groups of $(b-1 )$ parallel plaquettes,
along, say, the positive $\kappa_1 $-direction (as in Figure 1). They end up
in the face $\sigma^{(d-1 )}_{\kappa_1 } \subset \partial \sigma^d$
perpendicular to the $\kappa_1 $-axis. There each group is identified with the plaquette present at that
location and merged in one plaquette $p_{\mu\nu} \in
\sigma^{(d-1 )}_{\kappa_1 }$
with a `renormalized' action (\ref{potmove2 }). Similarly, $p_{\mu\nu}$ plaquettes in each face
$\sigma^{(d-1 )}_{\kappa_i}\subset \partial \sigma^d$,
with $i\not= 1, \mu, \nu$ are moved along the $\kappa_1 $-axis in
groups of $(b-1 )$ to the face $\sigma^{(d-2 )}_{\kappa_1 \kappa_i}
\subset \partial \sigma^{(d-1 )}_{\kappa_i}$
normal to the $\kappa_1 $ and $\kappa_i$ directions, where they are
merged and renormalized. There are now $(d-3 )$ directions inside the $(d-1 )$-dimensional face
$\sigma^{(d-1 )}_{\kappa_1 }$ in
which a $[\mu\nu]$-plaquette can move. Thus in proceeding to apply the elementary moving operation successively
in all directions, the once-moved-renormalized $p_{\mu\nu}$'s
in $\sigma^{(d-1 )}_{\kappa_1 }$ are next
moved, in groups of $(b-1 )$ plaquettes in the positive $\kappa_2 $-direction,
to the face $\sigma^{(d-2 )}_{\kappa_1 \kappa_2 } \subset \partial
\sigma^{(d-1 )}_{\kappa_1 }$. Similarly,
the once-moved-renormalized $p_{\mu\nu}$'s inside a face
$\sigma^{(d-2 )}_{\kappa_1 \kappa_i}$ are moved provided $\kappa_2 $ is among the
$(d-4 )$ available directions normal to a $[\mu\nu]$-plaquette inside
$\sigma^{(d-2 )}_{\kappa_1 \kappa_i}$. Continuing this process in the remaining directions
$\kappa_i$, $i=3, \ldots, (d-2 )$, the set of
$[\mu\nu]$-plaquettes on the initial lattice ends up in the
$2 $-dimensional faces
$\sigma^2 _{\kappa_1 \kappa_2 \ldots\kappa_{(d-2 )}} \subset \partial
\sigma^3 _{\kappa_1 \kappa_2 \ldots\kappa_{(d-3 )}}
\linebreak
\subset \cdots \subset
\partial \sigma^{(d-1 )}_{\kappa_1 }$. The above process, described for plaquettes of one
fixed orientation $[\mu\nu]$, is carried out for each of the
$d(d-1 )/2 $ possible choices of plaquette orientation \cite{K}. The end result of the process is then a lattice having
elementary $2 $-faces of side length $b a$, each tiled by
$b^2 $ plaquettes of side length $a$. The action of each of these $b^2 $ plaquettes has been
renormalized according to (\ref{potmove2 }) by a total factor of
\begin{equation}
\zeta_0 ^{(d-2 )} \equiv \zeta
\;. \label{totalren}
\end{equation}
This is expressed by (\ref{RG5 }) below. The integrations over the bonds interior to each
$2 $-face of side length $ba$ are now carried out. This merges the
$b^2 $ tiling plaquettes into a single plaquette of side length $ba$. These integrations are exact
and do not change the value of the partition
function that resulted after the completion of the plaquette moving
operations. We, however, allow a further renormalization of the result of these
integrations by introducing, in addition to $\zeta_0 $, another parameter,
$r$ (cf. (\ref{RG2 }) below). This completes the decimation transformation to a hypercubic
lattice of spacing $ba$. The important feature of this decimation transformation is
that it preserves the original one-plaquette form of the action,
so the result can again be represented in the form (\ref{nexp}). The transformation rule for successive decimations
\begin{eqnarray}
& & a\, \to b\, a \, \to\, b^2 a \to\, \cdots \to\, b^{n-1 } a
\to \, b^n a \to \cdots\nonumber \\
& & \Lambda \to \Lambda^{(1 )} \to \Lambda^{(2 )} \to \cdots \to
\Lambda^{(n-1 )} \to \Lambda^{(n)} \to \cdots\;, \nonumber
\end{eqnarray}
is then:
\begin{equation}
f_p(U, n-1 )\to F_0 (n)\, f_p(U, n) = F_0 (n)\, \Big[ 1 + \sum_{j\not= 0 }
d_j\, c_j(n)\, \chi_j(U) \Big] \,. \label{RG1 }
\end{equation}
The $n$-th step coefficients $F_0 (n)$, $c_j(n)$ are obtained from
the coefficients $c_j(n-1 )$ of the previous step by
\begin{equation}
c_j(n) = \hat{c}_j(n)^{b^2 r}\; , \label{RG2 }
\end{equation}
\begin{equation}
F_0 (n) = \hat{F}_0 (n)^{b^2 } \label{RG3 }
\end{equation}
where
\begin{equation}
\hat{c}_j(n)\equiv \hat{F}_j(n)/\hat{F}_0 (n) \leq 1 \;, \qquad j\not= 0 \;,
\label{RG4 }
\end{equation}
and
\begin{equation}
\hat{F}_j(n)= \int\, dU\;\Big[\, f(U, n-1 )\, \Big]^\zeta\,
{1 \over d_j}\, \chi_j(U)
\; . \label{RG5 }
\end{equation}
The $n=0 $ coefficients are the coefficients
$c_j(\beta)$ on the original lattice $\Lambda$. (\ref{RG5 }) encodes the end result of the plaquette moving - renormalization
operations described above, with $\zeta$ of the form (\ref{totalren});
and (\ref{RG2 }), (\ref{RG3 }) that of the
subsequent 2 -dimensional integrations, and further renormalization by
the parameter $r$. It is easily seen that $f_p(U, n) \;>\; 0 $ given that this holds for $n=0 $
(cf. (\ref{exp}), (\ref{nexp})). The effective plaquette action
on lattice $\Lambda^{(n)}$ of spacing $b^n a$
is then
\begin{eqnarray}
f_p(U, n) & = & \Big[\, 1 + \sum_{j\not= 0 } d_j\, c_j(n)\, \chi_j(U)\,
\Big] \label{nexp1 } \\
& \equiv & \exp\Big(\, A_p(U, n)\, \Big) \;, \label{actdef}
\end{eqnarray}
with effective couplings defined by the character expansion
\begin{equation}
A_p(U, n) = \beta_0 (n) + \sum_{i\not= 0 } \beta_i(n)\, d_i\, \chi_i(U) \;. \label{effact}
\end{equation}
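Although the case of interest here is SU(2), the transformation (\ref{RG2 }) - (\ref{RG5 }) is straightforward to iterate numerically. The following sketch (a toy caricature, not the SU(2) computation) does so for the abelian group $U(1 )$, where the characters are $e^{ij\theta}$ with $d_j=1 $, using the illustrative parameters $b=2 $, $\zeta=b^{(d-2 )}=4 $ in $d=4 $; all function and variable names are ours.

```python
import numpy as np

def decimation_step(c, b=2, zeta=4, r=1.0, jmax=8, N=4096):
    """One decimation step, eqs. (RG2)-(RG5), for the U(1) toy case:
    characters chi_j = exp(i j theta), d_j = 1, real coefficients c_j."""
    theta = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
    # plaquette distribution f(U, n-1) built from the coefficients c_j
    f = 1.0 + sum(2.0*c[j]*np.cos(j*theta) for j in range(1, jmax + 1))
    fz = f**zeta                                   # [f]^zeta as in (RG5)
    Fhat0 = fz.mean()                              # \hat F_0(n): j = 0 term of (RG5)
    chat = {j: (fz*np.cos(j*theta)).mean()/Fhat0   # \hat c_j(n), eq. (RG4)
            for j in range(1, jmax + 1)}
    cnew = {j: max(v, 0.0)**(b*b*r) for j, v in chat.items()}   # eq. (RG2)
    return Fhat0**(b*b), cnew                      # F_0(n) from (RG3), and c_j(n)

# one step starting from rapidly decaying coefficients (strong-coupling side)
F0, cnew = decimation_step({j: 0.1**j for j in range(1, 9)})
```

For integer $\zeta$ the extracted coefficients indeed come out satisfying $F_0 (n)\geq 1 $ and $0 \leq c_j(n)\leq 1 $, in line with (\ref{+c}).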
A point on notation. In the above we used the notations $F_0 (n)$,
$c_j(n)$, $\beta_j(n)$, etc, which do not display the full set of
explicit or implicit dependences of these quantities. Thus, a more complete
notation is:
\begin{eqnarray}
c_j(n) &=& c_j(\, n, b, \zeta, r, \{c_j(n-1 )\}\, ) \nonumber \\
F_0 (n) &=& F_0 (\, n, b, \zeta, \{c_j(n-1 )\}\, ) \label{short-hand}
\end{eqnarray}
Dependence on the original coupling $\beta$ comes, of course,
iteratively through the coefficients $\{c_j(n-1 )\}$ of the preceding step. Because of the iterative nature of many of the arguments in this paper
several explicit and implicit dependences propagate to most of the
quantities used in the following. To prevent notation from getting out of
hand we generally employ short-hand notations such as those on the l.h.s. of (\ref{short-hand}), unless specific reference to particular
dependences is required. The resulting partition function after $n$ such
decimation steps is:
\begin{equation}
Z_\Lambda(\beta, n) =
\prod_{m=1 }^n F_0 (m)^{|\Lambda|/b^{md}}\; Z_{\Lambda^{(n)}}(\{c_j(n)\})
\; , \label{PF2 a}
\end{equation}
with $Z_{\Lambda^{(n)}}(\{c_j(n)\})$ of the form (\ref{PF2 }) and
coefficients (\ref{short-hand}) resulting after $n$ steps according to
(\ref{RG2 }) - (\ref{RG5 }). The bulk free energy density resulting
from decimating from scale $a$ to $b^n a$ is then
$\sum_{m=1 }^n \ln F_0 (m) /b^{md}\, $,
each term in this sum representing the contribution from
$b^{(m-1 )}a \to b^m a$ as specified by (\ref{RG1 }). The partition function (\ref{PF2 a}) is, of course, not equal to
the original partition function $Z_\Lambda(\beta)$ of (\ref{PF1 })
since the decimation transformation is not exact. How they are related
will be addressed below. \subsection{Some properties of the decimation transformations}\label{DEC2 }
The transformation rule specified by (\ref{RG1 })-(\ref {RG5 }) is
meaningful for real positive $\zeta$. Here, however, a basic
distinction can be made. As is clear from (\ref{RG5 }),
for {\it integer} $\zeta$ the important property of positivity of the Fourier
coefficients in (\ref{RG1 }) is maintained at each decimation step:
\begin{equation}
F_0 (n)\geq 1 \;, \qquad 1 \geq c_j(n)\geq 0 \qquad \qquad
(\mbox{integer} \ \zeta) \;. \label{+c}
\end{equation}
This means that reflection positivity
is maintained at each decimation step. This clearly is not guaranteed to be
the case for non-integer $\zeta$. Thus non-integer $\zeta$ results in transformations that, in general,
violate the reflection positivity of the theory (assuming a
reflection positive original action). It is important in this connection that,
after each decimation
step, the resulting action retains the original one-plaquette form,
but will generally contain {\it all} representations in (\ref{effact}). Furthermore, among the effective couplings
$\beta_j(m)$ negative ones will occur. These features are present in general, even after a single decimation
step $a\to ba$ starting, as we did, with the single (fundamental)
representation Wilson action (\ref{Wilson}). For integer $\zeta$, however,
the resulting effective action (\ref{effact}), even in the presence of some
negative couplings, still defines a reflection positive measure,
since, as just noted, the expansion of its exponential (\ref{nexp1 }) gives
positive coefficients (\ref{+c}).
and will be referred to as
MK decimation. It will be important in the following. \footnote{It is worth
noting in this context that in numerical investigations
of the standard MK recursions in gauge theories \cite{NT-BGZ}
fractional $b$ ($1 < b <2 $), which by (\ref{MKz}) corresponds to
non-integer $\zeta$, has often been used. }
There are various other interesting properties of the decimations
that can be derived from (\ref{RG2 }) - (\ref{RG5 }). The following one is particularly important. The norm (\ref{gnorm})
of the coefficients obtained by application of (\ref{RG2 }) - (\ref{RG5 })
with integer $\zeta\geq 1 $ and $r=1 $ satisfies (Appendix D):
\begin{equation}
||g(n+1 )|| \leq \Big[\, \zeta\, ||g(n)||\, \Big]^{b^2 }
\Big[\, 1 + ||g(n)||\, \Big]^{(\zeta-1 )b^2 } \,. \label{gnormrecur}
\end{equation}
Assume now that
\begin{equation}
||g(n)|| \leq \exp (- C_n)\,, \qquad \quad C_n > 0 \,, \label{gnormU1 }
\end{equation}
for some $n$. Then
\begin{equation}
||g(n+1 )|| \leq \Big[\, \zeta\, ||g(n)||\, \Big]^{b^2 } \exp \left(\,
(\zeta-1 )\, b^2 \, \right)
\leq \exp \Big[ -(\, C_n - k\, ) b^2 \Big] \,, \label{gnormrecurU1 }
\end{equation}
where $k= \ln \zeta + (\zeta-1 )$. The recursion
\begin{equation}
C_{n+1 } = C_n b^2 - k b^2 \label{gnormrecurU2 }
\end{equation}
gives
\begin{equation}
C_{n+m} = \Big[\, C_n - {b^2 k\over b^2 -1 }\, \Big] b^{2 m} + {b^2 k\over b^2 -1 }
\,. \label{gnormsoln}
\end{equation}
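The linear recursion (\ref{gnormrecurU2 }) is solved exactly by (\ref{gnormsoln}); a quick numerical sanity check, with the illustrative values $b=2 $, $\zeta=b^{(d-2 )}=4 $ for $d=4 $ (the numbers themselves play no role in the argument):

```python
import math

# illustrative values: b = 2, zeta = b^(d-2) = 4 in d = 4
b, zeta = 2, 4
k = math.log(zeta) + (zeta - 1)          # k = ln(zeta) + (zeta - 1)
Cfix = b*b*k/(b*b - 1)                   # the constant b^2 k / (b^2 - 1)

C0 = 10.0                                # some C_n exceeding b^2 k/(b^2 - 1)
C = C0
for m in range(1, 7):
    C = b*b*(C - k)                      # one step of the recursion
    # compare against the closed form (gnormsoln)
    assert abs(C - ((C0 - Cfix)*b**(2*m) + Cfix)) < 1e-6
```

With $C_n$ above the fixed value $b^2 k/(b^2 -1 )$ the solution grows like $b^{2 m}$, which is precisely the area-law fall-off of the norm.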
\prop{If for some $n$ the norm of coefficients (\ref{gnorm}) obeys
(\ref{gnormU1 }) with
\begin{equation}
C_n > {b^2 k\over b^2 -1 }\, , \label{gnormU2 }
\end{equation}
then, under iteration of the decimation transformation
(\ref{RG2 }) - (\ref{RG5 }), $||g(n+m)||\to 0 $ as
$m\to \infty$ according to (\ref{gnormsoln}). }\\
This fall-off behavior is immediately
recognizable as ``area-law''. If one assumes that $c_j(n)$ are small enough so that the theory
is within the strong coupling regime, this behavior can be immediately
deduced for the leading coefficient $c_{1 /2 }(n)$ directly from
(\ref{RG2 }) - (\ref{RG5 }):
\begin{equation}
c_{1 /2 }(n+1 ) = c_{1 /2 }(n)^{b^2 }
\exp \Big(\, [\, \ln \zeta + O(c_{1 /2 }(n))\, ]\, b^2 \, \Big)\,. \label{RGstrong}
\end{equation}
The result (\ref{gnormrecurU1 }) gives then
an estimate of the corrections due to all higher representations. What is noteworthy here, however, is that the condition (\ref{gnormU2 })
is rather weaker than the commonly stated conditions for being inside the
convergence radius of the strong coupling cluster expansion (cf. section \ref{CONf}). We note two further properties of the decimation
transformations (\ref{RG1 }) - (\ref{RG5 }). The first is that with $r=1 $
they become exact in space-time dimension $d=2 $ since then, from
(\ref{totalren}), $\zeta=1 $. The second is that, with $\zeta=b^{(d-2 )}$, vanishing coupling
$g=0 $ is a fixed point in any $d$, i.e.\ MK decimation is
exact at zero coupling. This follows simply from the fact that
\[ \lim_{\beta\to \infty} \Big[\int d\nu(x)\; e^{\beta f(x)} \Big]^{1 /\beta}
=\mbox{ess. sup}\ e^{f(x)} \equiv \| e^f\| \]
for any normalized measure $d\nu(x)$. Applying this to the result of
performing the plaquette moving operation starting from
(\ref{nexp}), and with $p^\prime\in \Lambda$ labeling the plaquettes
tiling the plaquettes $p\in \Lambda^{(1 )}$, one has
\begin{eqnarray}
& & \lim_{\beta\to \infty} \Bigg[\int dU_\Lambda \,
\prod_{p\in \Lambda^{(1 )}}
\prod_{p^\prime \in p}\exp\Big(\beta b^{(d-2 )}\, {1 \over 2 }
\chi_{1 /2 }(U_{p^\prime})\Big) \Bigg]^{1 /\beta} \nonumber \\
& = &
\prod_{p\in\Lambda^{(1 )}} \Big\|\exp \Big(b^{(d-2 )}{1 \over2 } \chi_{1 /2 }\Big)
\Big\|^{b^2 } = e^{|\Lambda|} =
\lim_{\beta\to \infty} \Bigg[\int dU_\Lambda\, \prod_{p\in \Lambda}
\exp\Big(\beta \, {1 \over 2 }
\chi_{1 /2 }(U_p)\Big) \Bigg]^{1 /\beta} \;. \qquad
\end{eqnarray}
This clearly holds also for $r\not= 1 $, as is evident from
the fact that $\lim_{\beta\to \infty} c_j(\beta)=1 $. This fixed point is
easily seen to be unstable. \section{Partition function} \label{Z}
\setcounter{equation}{0 }
\setcounter{Roman}{0 }
Since our decimations are not exact RG transformations, the partition
function does not in general remain invariant under them. The subsequent development hinges on the following two basic propositions
that relate partition functions under such a decimation. \subsection{Upper and lower bounds}\label{u-lPF}
Consider a partition function $Z_{\Lambda^{(n-1 )}}$ on lattice $\Lambda^{(n-1 )}$ of the
form (\ref{PF2 }) given in terms of some set of coefficients $\{c_j(n-1 )\}$. Apply a decimation transformation (\ref{RG1 }) - (\ref{RG5 }) performed with
$\zeta=b^{(d-2 )}$. Denote the resulting coefficients by $c_j^U$, $F_0 ^U$,
i.e. \begin{eqnarray}
c_j^U(n, r) & \equiv & c_j(\, n, b, \, \zeta=b^{(d-2 )}, r, \{c_j(n-1 )\}\, )
\label{upperc}\\
F_0 ^U(n) & \equiv & F_0 ( \, n, b, \, \zeta=b^{(d-2 )}, \{c_j(n-1 )\}\, )
\label{upperF} \;. \end{eqnarray}
Note that
\begin{equation}
c_j^U(n, r) = c_j^U(n,1 )^r \, . \label{upperc1 }
\end{equation}
\prop{
For $Z_{\Lambda^{(n-1 )}}$ of the form (\ref{PF2 }),
a decimation transformation (\ref{RG1 }) -
(\ref{RG5 }) with $\zeta=b^{d-2 }$ and $0 < r\leq 1 $ results in an upper
bound on $Z_{\Lambda^{(n-1 )}}$:
\begin{equation}
Z_{\Lambda^{(n-1 )}}(\{c_j(n-1 )\})\, \leq \, F_0 ^U(n)^{|\Lambda^{(n)}|}\,
Z_{\Lambda^{(n)}}(\{c_j^U(n, r)\})\;. \label{U}
\end{equation}
The r.h.s. in (\ref{U}) is a monotonically
decreasing function of $r$ on $0 < r\leq 1 $. }
Given partition function $Z_{\Lambda^{(n-1 )}}$ on lattice
$\Lambda^{(n-1 )}$ of the form (\ref{PF2 }) in terms of some set of
coefficients $\{c_j(n-1 )\}$, let
\begin{eqnarray}
c_j^L(n) & \equiv & c_j(n-1 )^6 \label{lowerc1 }\\
F_0 ^L(n) & \equiv & 1 \;. \label{lowerF1 }
\end{eqnarray}
\prop{
For $\dZ{(n-1 )}$, $\dZ{n}$ of the form (\ref{PF2 }):
\begin{equation}
Z_{\Lambda^{(n)}}(\{c_j^L(n)\}) \, \leq \, Z_{\Lambda^{(n-1 )}}(\{
c_j(n-1 )\}) \;. \label{L}
\end{equation}
}
The proof of III.1 is given in Appendix A,
where somewhat stronger results than (\ref{U})
are actually obtained. III.2 is a corollary of
(\ref{PFlowerb1 }) (Appendix A). For the argument in the rest of this paper,
the precise form of the lower bound is in fact not important. By II.1 (i) a further lower bound is obtained by replacing
$c_j^L(n)$ in III.2 by, for example,
\begin{equation}
c_j^L(n) \equiv c_j(n-1 )^6 \, c_j^U(n, r) \label{lowerc2 }
\end{equation}
since $0 \leq c_j^U(n, r)\leq 1 $. Another choice is to
simply set
\begin{equation}
c_j^L(n)=0 \,, \label{lowerc3 }
\end{equation}
which is a restatement of (\ref{PFlowerb2 }). A related lower bound, which, in analogy to the upper bound in III.1,
can be formulated directly in terms of the
transformations (\ref{RG1 }) - (\ref{RG5 }), is obtained by taking
$c_j^L(n)$ in III.2 to be given by:
\begin{eqnarray}
c_j^L(n) & \equiv & c_j(\, n, b, \, \zeta=1, r=1, \{c_j(n-1 )\}\, ) \nonumber\\
& = & c_j(n-1 )^{b^2 } \,,
\label{lowerc4 }\\
F_0 ^L(n) & \equiv & F_0 ( \, n, b, \, \zeta=1, \{c_j(n-1 )\}\, ) \nonumber \\
& = & 1 \; . \label{lowerF2 }
\end{eqnarray}
With this choice of $c_j^L(n)$, note that III.1 - III.2 imply the
fact that the decimations (\ref{RG1 }) - (\ref{RG4 })
become exact for $d=2 $ and $r=1 $. III.1 says that, after removal of interior plaquettes, modifying the
couplings of the remaining plaquettes
by taking $\zeta=b^{d-2 }$ (and $r\leq 1 $)
results in overcompensation. III.2 says that decimating plaquettes while leaving the couplings of the
remaining plaquettes unaffected ($\zeta=1 $, $r=1 $) results in
undercompensation. The proof of III.2 for $c_j^L(n)$ given by (\ref{lowerc4 })
is similar to that of II.1,
but need not be given here, since the weaker
bounds above will suffice. In the following it will in fact be more convenient to take (\ref{lowerc2 }) or
(\ref{lowerc3 }) for the definition of the lower bound coefficients
$c_j^L(m)$. Use of the stronger lower bounds
above may be preferable
for numerical investigations, but does not contribute anything further
to the argument in this paper. III.1 and III.2 give upper and lower bounds on the partition
function after a decimation step. It is then natural to interpolate between these bounds. \subsection{Interpolation between upper and lower bounds}\label{interbounds}
Introducing a parameter $\alpha \in [0,1 ]$, we define
coefficients $\tilde{c}_j(m, \alpha, r)$
interpolating between $c_j^L$ at $\alpha=0 $ and $c_j^U$
(\ref{upperc}) at $\alpha=1 $:
\begin{equation}
\tilde{c}_j(m, \alpha, r) = (1 -w(\alpha))\, c_j^L(m) +
w(\alpha)\, c_j^U(m, r) \;, \quad \qquad 0 < r \leq 1 \,, \label{interc1 }
\end{equation}
with
\begin{equation}
w(0 )=0 \;, \qquad \quad w(1 )=1 \;, \quad \qquad w^\prime(\alpha) > 0
\;. \label{interc2 }
\end{equation}
For example,
\begin{equation}
w(\alpha) = {e^\alpha-1 \over e-1 } \;. \label{w}
\end{equation}
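As a concrete illustration (with placeholder values $c^L=0.01 $, $c^U=0.4 $ for a single representation), the choice (\ref{w}) satisfies (\ref{interc2 }) and makes the interpolating coefficient strictly increasing in $\alpha$:

```python
import math

def w(alpha):
    """The interpolation profile (w)."""
    return (math.exp(alpha) - 1.0)/(math.e - 1.0)

def c_tilde(alpha, cL, cU):
    """Interpolating coefficient (interc1) for one representation."""
    return (1.0 - w(alpha))*cL + w(alpha)*cU

cL, cU = 0.01, 0.4                 # placeholder lower/upper coefficients
vals = [c_tilde(a/100.0, cL, cU) for a in range(101)]
```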
There is clearly a variety of choices other than (\ref{interc1 }) for these
interpolating coefficients. We always require that
\begin{equation}
\partial\, \tilde{c}_j(m, \alpha, r) /\partial \, \alpha > 0 \;,
\label{interc3 }
\end{equation}
which is satisfied by (\ref{interc1 }) - (\ref{interc2 }). Similarly, we define coefficients
interpolating between (\ref{lowerF1 }) and (\ref{upperF}). For our purposes
it will be convenient to take
\begin{equation}
\tilde{F}_0 (m, h, \alpha, t) = F_0 ^U(m)^{h_t(\alpha)} \label{interF1 } \;,
\end{equation}
where $h_t$ denotes a family of monotonically increasing smooth functions
of $\alpha$, labeled by a parameter
$t \in [t_a, t_b]$, and such that
\begin{equation}
h_t(0 )=0 \;, \qquad h_t(1 )=1 \,. \label{hlimits}
\end{equation}
We write $h_t(\alpha) \equiv h(\alpha, t)$. Examples
are\footnote{Supplementing
these definitions at $\alpha=0 $ as needed is understood.
$\sigma(t) = t$. The interpolating partition function on $\Lambda^{(m)}$ constructed
from $\tilde{c_j}$ and $\tilde{F}_0 $ is now defined by
\begin{equation}
\tilde{Z}_{\Lambda^{(m)}}(\beta, h, \alpha, t, r)
= \tilde{F}_0 (m, h, \alpha, t)^{|\Lambda^{(m)}|}\,
Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha, r)\}) \label{interPF1 }
\end{equation}
where
\begin{eqnarray}
Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha, r)\})
& = &
\int dU_{\Lambda^{(m)}}\;\prod_{p\in \Lambda^{(m)}}\, \Big[ 1
+ \sum_{j\not= 0 } d_j\, \tilde{c}_j(m, \alpha, r)
\, \chi_j(U_p) \Big] \nonumber \\
& \equiv & \int dU_{\Lambda^{(m)}}\;\prod_{p\in \Lambda^{(m)}}\,
f_p(U_p, m, \alpha, r) \,. \label{interPF2 }
\end{eqnarray}
Combining II.1, (\ref{interc3 }) and the fact that
$\tilde{F}_0 $ is, by definition, also an increasing function of
$\alpha$, one has \\
\prop{
The interpolating free energies
$\ln Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha,
r)\})$ and $\ln \tilde{Z}_{\Lambda^{(m)}}(\beta, h,
\alpha, t, r)$ are increasing functions of $\alpha$:
\begin{equation}
\partial \ln Z_{\Lambda^{(m)}}\Big(\{\tilde{c}_j(m, \alpha, r)
\}\Big) /\partial \alpha \, >\, 0
\,. \label{interPFder1 }
\end{equation}
\\
}
Equality in (\ref{interPFder1 }) applies only in the
trivial case where all the coefficients $\tilde{c}_j$ vanish. In terms of (\ref{interPF1 }), III.1 and III.2 give
\begin{equation}
\tilde{Z}_{\Lambda^{(m)}}(\beta, h,0, t, r)
\leq Z_{\Lambda^{(m-1 )}} \leq
\tilde{Z}_{\Lambda^{(m)}}(\beta, h,1, t, r) \,. \label{interI1 }
\end{equation}
Now $\tilde{Z}_{\Lambda^{(m)}}(\beta, h,
\alpha, t, r)$ is continuous in $\alpha$. It follows from
(\ref{interI1 }) that there exists a value of
$\alpha$ in $(0,1 )$:
\begin{equation}
\alpha(m, h, t, r, \{c_j(m-1 )\}, b, \Lambda)\equiv
\alpha_{\Lambda, h}^{(m)}(t, r) \label{interI2 }
\end{equation}
such that
\begin{equation}
\tilde{Z}_{\Lambda^{(m)}}(\beta, h, \alpha_{\Lambda, h}^{(m)}(t, r),
t, r) = Z_{\Lambda^{(m-1 )}} \,. \label{interI3 }
\end{equation}
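The value $\alpha$ satisfying (\ref{interI3 }) can be located numerically by bisection, since the interpolating partition function is continuous and, by III.3, strictly increasing in $\alpha$. A minimal sketch, with a stand-in increasing function in place of the actual $\tilde{Z}_{\Lambda^{(m)}}$:

```python
import math

def solve_alpha(Ztilde, Ztarget, tol=1e-12):
    """Bisection for the alpha with Ztilde(alpha) = Ztarget, assuming
    Ztilde is continuous, increasing, and the target is bracketed as in
    (interI1): Ztilde(0) <= Ztarget <= Ztilde(1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if Ztilde(mid) < Ztarget:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# stand-in monotone function, with the solution known to be alpha = 0.4
alpha = solve_alpha(lambda a: math.exp(3.0*a), math.exp(1.2))
```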
In other words, at each given value of $t$, $r$, there exists
a value of $\alpha$ at which the partition
function on $\Lambda^{(m)}$, resulting from a decimation transformation
$\Lambda^{(m-1 )} \to \Lambda^{(m)}$, equals the partition function on
$\Lambda^{(m-1 )}$. This value is unique by III.3. By construction, $\alpha_{\Lambda, h}^{(m)}(t, r)$ is such
that (\ref{interI3 }) remains invariant under variation of
$t$, $r$ in their domain of definition, i.e.\ $\alpha_{\Lambda, h}^{(m)}(t, r)$ represents the
level surface of the function $\tilde{Z}_{\Lambda^{(m)}}(\beta, h, \alpha,
t, r)$ fixed by the value $Z_{\Lambda^{(m-1 )}}$. The parametrization invariance under varying $t$ will be important
later. We now examine the dependence
on $t$, $r$ in (\ref{interI2 })
more closely. Given $Z_{\Lambda^{(m-1 )}}$ and
some interpolation $h$, assume that (\ref{interI3 }) is satisfied
at the point $(t_0, r_0, \alpha=\alpha_{\Lambda, h}^{(m)})$. Then, by the
implicit function theorem, applicable by III.3, there is a
function $\alpha_{\Lambda, h}^{(m)}(t, r)$ with
continuous derivatives such that
$\alpha_{\Lambda, h}^{(m)}(t_0, r_0 )=\alpha_{\Lambda, h}^{(m)}$,
and which uniquely satisfies (\ref{interI3 }) in a sufficiently small neighborhood
of $ (t_0, r_0, \alpha_{\Lambda, h}^{(m)})$. But since a
solution to (\ref{interI3 }) exists for each choice of $t, r$ in their
domain of definition, this neighborhood
can be extended by a standard
continuity argument to all points of this domain. $\alpha_{\Lambda, h}^{(m)}(t, r)$ then represents the regular
level surface of the function (\ref{interPF1 }) fixed by (\ref{interI3 }). Furthermore,
\begin{equation}
{\partial \alpha_{\Lambda, h}^{(m)}(t, r)\over \partial t} =
v(\alpha_{\Lambda, h}^{(m)}(t, r), t, r) \;,
\label{alphtder1 }
\end{equation}
where
\begin{equation}
v(\alpha, t, r) \equiv
- { \displaystyle {\partial h(\alpha, t) / \partial t}
\over{\displaystyle {\partial h(\alpha, t)\over \partial \alpha} +
A_{\Lambda^{(m)}}(\alpha, r)} }
\;, \label{alphtder2 }
\end{equation}
with
\begin{equation}
A_{\Lambda^{(m)}}(\alpha, r) \equiv {1 \over \ln F_0 ^{U}(m) }\,
{1 \over |\Lambda^{(m)}| }\,
{\partial \over \partial\alpha }\ln Z_{\Lambda^{(m)}}\, \Big(\{\tilde{c}_j( m,
\alpha, r)\}\Big) > 0 \;. \label{alphtder3 }
\end{equation}
We will always assume that $h$ is chosen such that
$\partial h/\partial t$ is negative. This is the case with the examples
(\ref{h}). Then, from (\ref{alphtder2 }), $v>0 $
on $0 <\alpha < 1 $, with $v=0 $ at $\alpha=0 $ and $\alpha=1 $. It is also useful to equivalently view $\alpha_{\Lambda, h}^{(m)}(t, r)$ as
the solution to the ODE
\begin{eqnarray}
d\alpha/dt & =& v(\alpha, t, r) \,,
\qquad \alpha\in (0,1 )\;, \label{ODE}\\
\alpha(t_0 ) & =& \alpha_{\Lambda, h}^{(m)} > 0 \;, \qquad t_0 \in [t_a, t_b]\;. \nonumber
\end{eqnarray}
Then standard results of ODE theory imply the existence of a
unique solution in a neighborhood of $\alpha_{\Lambda, h}^{(m)}>0 $,
which can in fact be extended indefinitely forward for all
$t\geq t_0 $. \footnote{Indeed, $v$ is differentiable on
$\alpha_{\Lambda, h}^{(m)}\leq \alpha\leq 1 $ and
vanishes at $\alpha=1 $. }
A short computation using (\ref{alphtder1 }) gives
\begin{equation}
{d h(\alpha_{\Lambda, h}^{(m)}(t, r), t)\over dt} =
- {\partial \alpha_{\Lambda, h}^{(m)}(t, r)\over \partial t} \,
A_{\Lambda^{(m)}}(\alpha_{\Lambda, h}^{(m)}(t, r), r)
\,, \label{htder}
\end{equation}
as it should for consistency with (\ref{interI3 }). (\ref{htder}) and (\ref{alphtder1 }) make apparent what the
effect of a parametrization change due
to a shift in $t$ is. Increasing (decreasing) $t$ increases (decreases) the contribution of
$\ln Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m,
\alpha_{\Lambda, h}^{(m)}(t, r), r)\})$ while
decreasing (increasing) by an equal amount the contribution
from $\ln F_0 ^{U}(m)\, h\big(\alpha_{\Lambda, h}^{(m)}(t, r), t
\big)\, |\Lambda^{(m)}|$, so that the sum stays
constant and equal to $\ln Z_{\Lambda^{(m-1 )}}$ in accordance with
(\ref{interI3 }). The derivative w.r.t.\ $r$ is similarly given by
(\ref{alphrder1 }) - (\ref{alphrder2 }) in Appendix B. Now, by (III.1 ), the upper bound in (\ref{interI1 })
is optimized for $r=1 $, which would appear to make consideration
of other $r$ values unnecessary. The reason one may want, however, to vary
$r$ away from unity is the following. The values $\alpha_{\Lambda, h}^{(m)}(t, r)$ lie in the
interval $(0,1 )$. Consider the possibility that one finds that
$\alpha_{\Lambda, h}^{(m)}(t_m,1 )$ differs from $1 $ only by terms that
vanish as the lattice size grows. This means that, since
$v \geq 0 $ in (\ref{alphtder1 }),
$\alpha_{\Lambda, h}^{(m)}(t,1 )$
is, to within such terms, a constant function of $t$ for all $t\geq t_m$. For the purposes of the
argument in the following sections we want to exclude
this possibility, and ensure that, at least in some neighborhood of
a chosen $t$ value, the derivative (\ref{alphtder1 }) is non-vanishing
by an amount independent of lattice size. We require that
\begin{equation}
\delta^\prime < \alpha_{\Lambda, h}^{(m)}(t, r)< 1 -\delta \;,
\label{collar}
\end{equation}
with $\delta > 0 $, $\delta^\prime > 0 $ independent of the lattice size
$|\Lambda^{(m)}|$. The lower bound requirement is easily shown (Appendix B)
to be automatically satisfied by combining II.1 and (\ref{interI3 }). As is also shown in Appendix B, one may always ensure that the upper
bound requirement in (\ref{collar}) holds
by choosing the decimation parameter $r$ to
vary, if necessary, away from unity in the domain
\begin{equation}
1 \geq r \geq 1 -\epsilon \;, \label{rdomain}
\end{equation}
where $0 < \epsilon \ll 1 $ with $\epsilon$ independent of $|\Lambda^{(m)}|$. With (\ref{collar}) in place, (\ref{alphtder1 }) and (\ref{htder})
imply (Appendix B) that
\begin{equation}
{\displaystyle \partial \alpha_{\Lambda, h}^{(m)}\over \partial t} (t, r)
\geq \eta_1 (\delta) > 0 \, , \qquad \qquad
- {\displaystyle d h\over \displaystyle dt}(\alpha_{\Lambda, h}^{(m)}(t, r), t)\geq
\eta_2 (\delta)>0 \, , \label{dercollar}
\end{equation}
where $\eta_1 $, $\eta_2 $ are lattice-size independent. Furthermore, if (\ref{collar}) already holds for $r=1 $,
it also holds for any $r$ in (\ref{rdomain}). We may as well then simplify matters in the following by setting the
parameter $r$ to the value $r=1 -\epsilon$
with some fixed small $\epsilon$. This $\epsilon$ may eventually be taken
as small as one pleases after
a sufficiently large number of decimations have been performed. This has an obvious meaning in the context of iterating the
decimation transformation as pointed out in subsection \ref{disc1 } below. We accordingly simplify notation by dropping explicit reference to
$r$, except on occasions when a statement is made for general
$r$ values. Thus we write $\alpha_{\Lambda, \, h}^{(m)} (t) \equiv
\alpha_{\Lambda, \, h}^{(m)}(t,1 -\epsilon)$, $c^U_j(m)
\equiv c^U_j(m, 1 -\epsilon)$, etc. \subsection{Representation of the partition function on decimated lattices}
\label{repZ}
So, starting on the original lattice spacing $a$, with partition function
given in terms of coefficients $\{c_j(\beta)\}$, one may iterate the
procedure represented by (\ref{interI1 }) - (\ref{interI3 }). Taking the same interpolation family $h$ in every cycle,
an iteration cycle consists of the following steps. \begin{enumerate}
\item[(i)] A decimation transformation $\Lambda^{(m-1 )}\to \Lambda^{(m)}$
given by the rules (\ref{RG1 }) -
(\ref{RG5 }) applied to the coefficients in $Z_{\Lambda^{(m-1 )}}$, and
resulting in the upper bound coefficients on $\Lambda^{(m)}$
according to (\ref{upperc}) - (\ref{upperF}) and (\ref{U}). Similarly,
a lower bound
on $\Lambda^{(m)}$ is obtained according to (\ref{L}) with lower bound
coefficients given by (\ref{lowerF1 }) and (\ref{lowerc2 }) or (\ref{lowerc3 }). \item[(ii)] Interpolation between the resulting
upper and lower bound partition functions on $\Lambda^{(m)}$
according to (\ref{interc1 }), (\ref{interF1 }), and
(\ref{interPF1 }), (\ref{interPF2 }). \item[(iii)] Fixing the value
$0 < \alpha_{\Lambda, h}^{(m)}(t) < 1 $, eq. (\ref{interI2 }),
so that the $(m-1 )$-th step partition function $Z_{\Lambda^{(m-1 )}}$ is
preserved, eq. (\ref{interI3 }). \item[(iv)] Picking a value of the parameter $t=t_m$, to fix
the coefficients
$\{\tilde{c}_j(m, \alpha_\Lambda^{(m)}(t_m))\}$ of the resulting
partition function $Z_{\Lambda^{(m)}}$, and return to step (i).
\end{picture} &
\begin{picture}(30,10 )
\put(1,10 ){\vector(3, -1 ){50 }}
\end{picture} & \\
& & & & \\
\hfill \{c_j^L(2 )\} & \leq & \{\tilde{c}_j(2, \alpha_{\Lambda, \, h}^{(2 )}
(t_2 ))\} &\leq & \{c_j^U(2 )\}\hfill\\
& \begin{picture}(30,10 )
\put(30,10 ){\vector(-3, -1 ){50 }}
\end{picture} & \begin{picture}(38,8 )
\put(20,6 ){\vector(0, -2 ){20 }}
\end{picture} &
\begin{picture}(30,10 )
\put(1,10 ){\vector(3, -1 ){50 }}
\end{picture} & \\
& & & & \\
\vdots & & \vdots & & \vdots
\end{array}\\
\end{array} \label{S1 }
\end{equation}
The result after $n$ iterations is then:
\begin{eqnarray}
Z_\Lambda(\beta) &= & \int dU_\Lambda\;\prod_{p\in \Lambda}\, f_p(U, \beta)
\label{O}\\
& =& \left[\, \prod_{m=1 }^n \tilde{F}_0 (m, h, \alpha_{\Lambda, \, h}^{(m)}(t_m),
t_m)^{|\Lambda|/ b^{md}}\, \right]\;
\; Z_{\Lambda^{(n)}}\, \Big(\{\tilde{c}_j( n, \alpha_{\Lambda, \, h}^{(n)}(t_n))
\}\Big) \,. \label{A}
\end{eqnarray}
(\ref{A}) is an {\it exact integral representation} on the
decimated lattice $\Lambda^{(n)}$ of the
partition function $Z_\Lambda$ originally defined on the undecimated
lattice $\Lambda$ by the integral representation
(\ref{PF1 }) or (\ref{O}). III.3 allows the iterative procedure leading to (\ref{A}) to be
implemented in a slightly different manner, one that turns out later to
be more convenient for our purposes. Since by III.3
\begin{equation}
Z_{\Lambda^{(m)}}\, \Big(\{\tilde{c}_j(m, \alpha_{\Lambda, \, h}^{(m)}(t_m))\}
\Big) \leq
Z_{\Lambda^{(m)}}\, \Big(\{\tilde{c}_j(m, 1 )\}\Big) =
Z_{\Lambda^{(m)}}\, \Big(\{c_j^U(m)\}\Big) \,, \label{UPF}
\end{equation}
an upper bound for each successive iteration step is also obtained by applying
III.1 to the r.h.s. rather than the l.h.s. of the inequality sign in
(\ref{UPF}). The only resulting modification in the above procedure
is in step (i): the upper bound coefficients
$c^U_j(m)$ and $F_0 ^U(m)$ on $\Lambda^{(m)}$
are computed according to (\ref{upperc}) and (\ref{upperF}) but now
using the set $\{c_j^U(m-1 )\}$ rather than the set
$\{\tilde{c}_j(m-1, \alpha_{\Lambda, \, h}^{(m-1 )}(t_{m-1 }))\}$ as the
coefficient set of the previous step. The same alternative can be applied to the lower bounds in (\ref{S1 }). Since, again by III.3, one has
\begin{equation}
Z_{\Lambda^{(m)}}\, \Big(\{c_j^L(m)\}\Big) =
Z_{\Lambda^{(m)}}\, \Big(\{\tilde{c}_j(m, 0 )\}\Big) \leq
Z_{\Lambda^{(m)}}\, \Big(\{\tilde{c}_j(m, \alpha_{\Lambda, \, h}^{(m)}(t_m))\}
\Big) \,, \label{LPF}
\end{equation}
a lower bound for each successive iteration step is also obtained by applying
III.2 to the l.h.s. rather than the r.h.s. of the inequality sign
in (\ref{LPF}). If one adopts (\ref{lowerc3 }), this makes no
difference since the lower bound coefficients equal zero at every step. If one uses (\ref{lowerc2 }), the resulting modification to (\ref{S1 }) is
that in step (i) the lower bound coefficients
$c^L_j(m)$ on $\Lambda^{(m)}$
are now computed
using the set $\{c^L_j(m-1 )\}$ rather than
$\{\tilde{c}_j(m-1, \alpha_{\Lambda, \, h}^{(m-1 )}(t_{m-1 }))\}$ as the
coefficient set of the previous step. One may adopt either or both modifications following from (\ref{UPF}) or
(\ref{LPF}). Adopting both, the iterative scheme for the coefficients in
$Z_{\Lambda^{(m)}}$ replacing (\ref{S1 }) is:
\begin{equation}
\begin{array}{c}
\begin{array}{ccc}
\hfill & c_j(\beta) & \hfill\\
& \begin{picture}(60,15 )
\put(60,15 ){\vector(-4, -1 ){80 }}
\end{picture}
\begin{picture}(20,10 )
\put(12,8 ){\vector(0, -2 ){20 }}
\end{picture}
\begin{picture}(60,15 )
\put(1,15 ){\vector(4, -1 ){80 }}
\end{picture} & \\
& & \\
\end{array} \\
\begin{array}{ccccc}
\hfill \{ c_j^L(1 ) \} \qquad & \leq &
\{\tilde{c}_j(1, \alpha_\Lambda^{(1 )}(t_1 ))\}
&\leq & \qquad \{c_j^U(1 )\}\hfill\\
& & & & \\
\begin{picture}(30,10 )
\put(6,10 ){\vector(0, -2 ){20 }}
\end{picture} & &
\begin{picture}(30,10 )
\put(18,10 ){\vector(0, -2 ){20 }}
\end{picture} & &
\qquad \begin{picture}(20,10 )
\put(1,10 ){\vector(0, -2 ){20 }}
\end{picture} \\
& & & & \\
\hfill \{ c_j^L(2 ) \} \qquad & \leq &
\{\tilde{c}_j(2, \alpha_\Lambda^{(2 )}(t_2 ))\}
&\leq & \qquad \{c_j^U(2 )\}\hfill \\
& & & & \\
\begin{picture}(30,10 )
\put(6,10 ){\vector(0, -2 ){20 }}
\end{picture} & &
\begin{picture}(30,10 )
\put(18,10 ){\vector(0, -2 ){20 }}
\end{picture} & &
\qquad \begin{picture}(20,10 )
\put(1,10 ){\vector(0, -2 ){20 }}
\end{picture} \\
& & & & \\
\vdots\quad\; & & \ \vdots & & \ \vdots
\end{array}\\
\end{array} \label{S2 }
\end{equation}
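A single-coefficient caricature of the scheme (\ref{S2 }): the upper- and lower-bound sequences are iterated independently, here with the leading strong-coupling form (\ref{RGstrong}) standing in for the full upper-bound map and the $c^{b^2 }$ lowering of (\ref{lowerc4 }) for the lower one; the fixed weight $0.5 $ merely stands in for the tuned $\alpha_{\Lambda, \, h}^{(m)}(t_m)$, and all numbers are illustrative.

```python
b, zeta = 2, 4                        # illustrative: b = 2, zeta = b^(d-2), d = 4
cU = cL = 0.05                        # common starting coefficient c_j(beta)
history = []
for m in range(1, 6):
    cU = min((zeta*cU)**(b*b), 1.0)   # upper-bound track, from {c_j^U(m-1)}
    cL = cL**(b*b)                    # lower-bound track, from {c_j^L(m-1)}
    ct = 0.5*cL + 0.5*cU              # interpolated coefficient (fixed weight)
    history.append((cL, ct, cU))
```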
This again leads, after $n$ iterations, to the representation (\ref{A}). Note, however, that the actual numerical value of
$\alpha_{\Lambda, \, h}^{(m)}(t_m)$ in (\ref{A}), fixed at each step
by requiring (\ref{interI3 }), will, in general, be different depending on
whether scheme (\ref{S1 }) or (\ref{S2 }) is used for the iteration. Also note that the upper bounds $c^U_j(m)$ in (\ref{S2 }) are not
optimal compared to those in (\ref{S1 }). The scheme (\ref{S2 }), however,
turns out to be more convenient for our purposes in the following. \subsection{Discussion of the representation (\ref{A})}\label{disc1 }
As indicated by the notation, on any finite lattice, the
$\alpha_{\Lambda, \, h}^{(m)}$ values possess
a lattice size dependence. This weak dependence enters as a
correction that vanishes inversely with lattice size. Indeed, by the standard results on the existence of
the thermodynamic limit of lattice systems, for a partition function
$Z_{\Lambda^{(m)}}(\{c_j\})$ of the form (\ref{PF2 })
on lattice $\Lambda^{(m)}$
with torus topology (periodic boundary conditions):
\begin{equation}
\ln Z_{\Lambda^{(m)}}(\{c_j\})= |\Lambda^{(m)}|\, \varphi(\{c_j\}) +
\delta\varphi_{\Lambda^{(m)}}(\{c_j\}) \, ,
\end{equation}
$\varphi(\{c_j\})$ being the free energy per unit volume in the
infinite volume limit, and $\delta\varphi_{\Lambda^{(m)}}(\{c_j\})\leq
O(\mbox{constant})$. \footnote{That is, there are no
`surface terms' for torus topology. In fact surface terms arising with
other, e.g. free, boundary conditions can be precisely defined as the
difference in the free energies computed with periodic versus such other
boundary conditions \cite{Fi}. }
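The split of $\ln Z$ into an extensive bulk term and a bounded remainder on the torus can be illustrated in a much simpler setting. The following sketch (a one-dimensional Ising chain with periodic boundary conditions, purely illustrative and unrelated to the gauge theory) checks by brute force that $\ln Z_N = N\varphi + \delta_N$, with $\varphi=\ln\lambda_1$ the per-site free energy and $\delta_N$ bounded and decaying, i.e. no surface term on the torus:

```python
# Illustration only (1d Ising chain, not the gauge theory): with periodic
# boundary conditions Z_N = lambda_1^N + lambda_2^N, so
# ln Z_N = N*phi + delta_N with phi = ln(lambda_1) and
# delta_N = ln(1 + (lambda_2/lambda_1)^N) bounded by ln 2 and decaying in N.
import math
from itertools import product

def lnZ_bruteforce(N, K):
    """ln of the periodic-chain partition function, summed over all 2^N spin configs."""
    Z = 0.0
    for s in product((-1, 1), repeat=N):
        E = sum(s[i] * s[(i + 1) % N] for i in range(N))
        Z += math.exp(K * E)
    return math.log(Z)

def bulk_and_correction(N, K):
    lam1 = 2.0 * math.cosh(K)              # largest transfer-matrix eigenvalue
    phi = math.log(lam1)                   # free energy per site (infinite volume)
    return phi, lnZ_bruteforce(N, K) - N * phi   # (phi, delta_N)

K = 0.7
for N in (4, 8, 12):
    phi, delta = bulk_and_correction(N, K)
    print(N, round(delta, 8))              # delta_N shrinks with N, stays below ln 2
```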
From this and (\ref{interI3 }) it is straightforward to show that
\begin{equation}
\alpha_{\Lambda, h}^{(m)}(t, r) =
\alpha_h^{(m)}(t, r) + \delta
\alpha_{\Lambda, h}^{(m)}(t, r) \label{alphsplit1 }
\end{equation}
with $\delta \alpha_{\Lambda, h}^{(m)}(t, r) \to 0 $ as some inverse power
of lattice size in the large volume limit. In fact, we have already established the presence of a lattice-size
independent contribution in $\alpha_{\Lambda, h}^{(m)}(t, r)$ in an
alternative manner through (\ref{collar}),
i.e. the fact that in (\ref{alphsplit1 }) one must have
\begin{equation}
\alpha_h^{(m)}(t, r) > \delta^\prime \;. \label{alphsplit2 }
\end{equation}
An explicit expression for $\delta^\prime$ is given by (\ref{alphlowerb1 }),
(\ref{alphlowerb2 }). At weak and strong coupling the $\alpha_h^{(m)}(t, r)$
values may be
estimated analytically by comparison with the weak and
strong coupling expansions, respectively. In general, starting from (\ref{interI1 }), the location of
$\alpha_{\Lambda, \, h}^{(m)}$ satisfying (\ref{interI3 }) may be formulated
as the fixed point of a contraction mapping. This allows in principle its numerical determination, for given values of
all other parameters, to any desired accuracy. For our purposes here, however, the actual numerical values
of the $\alpha_{\Lambda, \, h}^{(m)}$'s, beyond the fact that they are fixed
between $0 $ and $1 $, will not be directly relevant. The main application of the representation (\ref{A}) in this paper will
be to relate the behavior of the exact theory to
that of the easily computable approximate decimations bounding it without
explicit knowledge of the actual $\alpha_{\Lambda, \, h}^{(m)}$ values. It is important to be clear about the meaning of (\ref{A}). The partition function $Z_\Lambda(\beta)$ is originally
given by its integral representation (\ref{O}) on
lattice $\Lambda$ of spacing $a$. (\ref{A}) then gives another integral
representation of $Z_\Lambda(\beta)$ in terms of an integrand defined
on the coarser lattice $\Lambda^{(n)}$ of spacing $b^na$ plus
a total bulk free energy contribution resulting from
decimating between scales $a$ and $b^na$. The action
$A_p(U, n, \alpha_{\Lambda, h}^{(n)})$ in $\dZ{n}(\{\tilde{c}_j(n,
\alpha_{\Lambda, h}^{(n)})\})$ is constructed to reproduce
this one physical quantity, i.e. the free energy $\ln Z_\Lambda(\beta)$,
nothing more and nothing less. In particular, it is {\it not} implied
that this action on $\Lambda^{(n)}$ can also be used to exactly
compute any other observable. For that one would need to attempt
the previous development from scratch with the corresponding
operator inserted in the integrand. Recall that, by (\ref{interc3 }), the coefficients $\tilde{c}_j(n, \alpha, r)$
are increasing in $\alpha$, and $\tilde{c}_j(n,1, r) =c_j^U(n, r)$,
$\;\tilde{c}_j(n,0, r) =c_j^L(n)$:
\begin{equation}
c_j^L(n) < \tilde{c}_j(\, n, \alpha_{\Lambda, h}^{(n)}(t)\, ) <
c_j^U(n)\; , \qquad \quad
\quad 0 < \alpha_{\Lambda, h}^{(n)}(t) < 1 \;. \label{cineq5 }
\end{equation}
Thus,
the coefficients $\tilde{c}_j(\, n, \alpha_{\Lambda, h}^{(n)}(t)\, )$
in the representation (\ref{A}) are bounded from above by
$c_j^U(n)$ no matter what the actual values of
$\alpha_{\Lambda, h}^{(n)}(t)$ are. When considering the implications of this bound under
successive decimations the advantage of employing scheme (\ref{S2 }),
rather than (\ref{S1 }), becomes clear. The coefficients $c_j^U(n)$ on the r. h. s. column
in (\ref{S2 }) are obtained by straightforward iteration of the decimation
rules (\ref{RG2 })-(\ref{RG5 }) with $\zeta=b^{d-2 }$; i. e. only
knowledge of the $c_j^U(n-1 )$, not of the $\tilde{c}_j(n-1,
\alpha_{\Lambda, h}^{(n-1 )}(t_{n-1 }))$, is
required to obtain the $c_j^U(n)$ at the $n$-th step. The flow of these $c_j^U(n)$ coefficients then constrains the flow of the
exact representation coefficients $\tilde{c}_j(n,
\alpha_{\Lambda, h}^{(n)}(t_n))$
according to (\ref{cineq5 }) from above. In particular,
{\it if the $c_j^U(n)$'s on the r.h.s. column in (\ref{S2 })
approach the strong coupling fixed point, i.e.
n\to \infty$) starting from any
$\beta > \beta_0 $, where $\beta_0 =O(1 )$. Here, for reasons discussed at the end of section \ref{interbounds},
we take $r$ in the range (\ref{rdomain}). This may be viewed as
fixing the direction from
which the point $\zeta=b^{(d-2 )}$, $r=1 $ in the parameter space of the
iteration (\ref{RG2 }) - (\ref{RG5 }) is approached. This is actually irrelevant for the flow behavior of the $
c_j^U(n,1 -\epsilon)\equiv c_j^U(n)$ since, in the case of $SU(2 )$ considered
here, this point is a structurally stable point of the
iteration. \footnote{It is,
however, very much relevant in cases where this point
is not structurally stable, e.g. in $U(1)$. }
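As an aside, the approach to the zero-coupling fixed point can be checked numerically. Assuming the standard SU(2) Wilson-action character coefficients (a normalization not displayed in this excerpt), $c_j(\beta)=I_{2j+1}(\beta)/I_1(\beta)$ in terms of modified Bessel functions, so that $0<c_j(\beta)<1$ and $c_j(\beta)\to 1$ as $\beta\to\infty$:

```python
# Hedged sketch: c_j(beta) = I_{2j+1}(beta)/I_1(beta) is the standard SU(2)
# Wilson-action character coefficient (assumed normalization; the expansion
# (nexp) itself is not shown in this excerpt). I_n is evaluated from its
# integral representation by a simple trapezoid rule.
import math

def bessel_I(n, beta, steps=20000):
    """I_n(beta) = (1/pi) * int_0^pi exp(beta cos t) cos(n t) dt."""
    h = math.pi / steps
    s = 0.5 * (math.exp(beta) + math.exp(-beta) * math.cos(n * math.pi))
    for k in range(1, steps):
        t = k * h
        s += math.exp(beta * math.cos(t)) * math.cos(n * t)
    return s * h / math.pi

def c(j, beta):
    """Character coefficient for spin j = 1/2, 1, 3/2, ..."""
    return bessel_I(int(2 * j) + 1, beta) / bessel_I(1, beta)

for beta in (2.0, 8.0, 20.0):
    # c_j lies in (0,1), decreases with j, and creeps toward 1 as beta grows
    print(beta, round(c(0.5, beta), 4), round(c(1.0, beta), 4))
```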
Note that zero lattice coupling, $g=0 $, is a fixed
point as it is
for the MK decimations. This is also
evident from $\lim_{\beta\to\infty}c_j(\beta)=1 $ and III.2. What does (\ref{scfp}) combined with (\ref{cineq5 })
imply about the question of confinement in the exact theory? The
fact that the long distance part, $\dZ{n}(\{\tilde{c}_j(n,
\alpha_{\Lambda, h}^{(n)})\})$, in (\ref{A}) flows in
the strong coupling regime does not suffice to answer the question. It is the combined contributions
from all scales between $a$ and $b^na$ in (\ref{A}) that add up to give
the exact free energy $\ln Z_\Lambda(\beta)$. Indeed, recall that,
by a parametrization change by shifts in $t$ at each decimation step,
one can shift the relative amounts assigned to these
various contributions keeping the total sum fixed (cf. remarks immediately
following (\ref{htder})). This parametrization freedom will in fact
be important in the following. On the other hand, the fact that
by (\ref{cineq5 })
the flow of $\tilde{c}_j(n, \alpha_{\Lambda, h}^{(n)}(t_n))$ to the strong
coupling regime is independent of such parametrization changes
is strongly suggestive. At any rate,
to unambiguously determine the long distance behavior of the theory
one needs to consider appropriate long distance order parameters. \section{`Twisted' partition function} \label{TZ}
\setcounter{equation}{0 }
\setcounter{Roman}{0 }
The above derivation leading to the representation (\ref{A}) for
the partition function cannot be applied in the presence of
observables without modification. Thus, in the presence of
operators involving external sources, such as the Wilson or
't Hooft loop, translation invariance is lost. Reflection
positivity is also reduced to hold only in the plane bisecting
a rectangular loop. Fortunately, there are other order parameters that can
characterize the possible phases of the theory while
avoiding most of these complications. They are the well-known vortex
free energy, and its transform with respect to the center of
the gauge group (electric flux free energy). They are in fact the natural order parameters in the present context
since they are constructed out of partition functions, i.e. partition functions in the presence of external fluxes. Let $Z_\Lambda(\tau_{\mu\nu}, \beta)$ denote the partition function
with action modified by the `twist' $\tau_{\mu\nu}$, i.e. an element of the
group center, for every plaquette on a
coclosed set of plaquettes ${\cal V}_{\mu\nu}$ winding through the
periodic lattice in the $(d-2 )$ directions perpendicular to the
$\mu$- and $\nu$-directions, i.e. winding through every
$[\mu\nu]$-plane for fixed $\mu, \nu$:
\begin{equation}
A_p(U_p) \to A_p(\tau_{\mu\nu} U_p) \;, \qquad \mbox{if} \quad
p\in {\cal V}_{\mu\nu}\;. \label{twist1 }
\end{equation}
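For $SU(2)$ the only nontrivial center element is $-1$, and the effect of the twist on the character expansion rests on the identity $\chi_j(-U)=(-1)^{2j}\chi_j(U)$, used in (\ref{twist2 }) below. Writing $\chi_j$ through the eigenvalue angle $\theta$ of $U$, a quick numerical check reads:

```python
# An SU(2) element with eigenvalues e^{+i theta}, e^{-i theta} has character
# chi_j(theta) = sin((2j+1) theta) / sin(theta). Replacing U -> -U sends
# theta -> pi - theta, whence chi_j(-U) = (-1)^{2j} chi_j(U): only the
# half-integer representations change sign under a center twist.
import math

def chi(j, theta):
    return math.sin((2 * j + 1) * theta) / math.sin(theta)

for j in (0.5, 1.0, 1.5, 2.0):
    for theta in (0.3, 1.1, 2.0):
        lhs = chi(j, math.pi - theta)               # character of -U
        rhs = (-1) ** int(2 * j) * chi(j, theta)    # predicted sign flip
        assert abs(lhs - rhs) < 1e-12
print("chi_j(-U) = (-1)^{2j} chi_j(U) verified")
```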
A nontrivial twist ($\tau_{\mu\nu}\not=1 $) represents a discontinuous
gauge transformation on the set ${\cal V}_{\mu\nu}$ with
multivaluedness in the group center. Thus, for group $SU(N)$, it introduces
vortex flux characterized by elements of $\pi_1 (SU(N)/Z(N))=Z(N)$. The vortex is rendered topologically
stable by being wrapped around the lattice torus. In the case of $SU(2 )$ explicitly considered here, there is only one
nontrivial element, $\tau_{\mu\nu}=-1 $. As indicated by the notation
$Z_\Lambda(\tau_{\mu\nu}, \beta)$, the twisted partition function depends only
on the directions in which ${\cal V}_{\mu\nu}$
winds through the lattice, not
the exact shape or location of ${\cal V}_{\mu\nu}$. This expresses the mod 2 conservation of flux. Indeed, a twist $\tau_{\mu\nu}=-1 $ on the plaquettes forming
a coclosed set ${\cal V}_{\mu\nu}$ can be moved to the plaquettes forming any
other homologous
coclosed set ${\cal V}^{\ \prime}$ by the change of variables
$U_b \to -U_b$ for each bond $b$
in a set of bonds cobounded by ${\cal V} \cup {\cal V}^{\ \prime}$,
leaving $Z_\Lambda(\tau_{\mu\nu}, \beta)$ invariant. By the same token,
$Z_\Lambda(\tau_{\mu\nu}, \beta)$ is invariant under changes
mod 2 in the number of
homologous coclosed sets in $\Lambda$ carrying a twist. In the following, for definiteness, we fix, say, $\mu=1 $, $\nu=2 $, and drop
further explicit reference to the $\mu$, $\nu$ indices. Also,
we write $Z_\Lambda(-1, \beta) \equiv Z_\Lambda^{(-)}(\beta)$. (\ref{twist1 }) implies that $Z_\Lambda^{(-)}$ is obtained from
$Z_\Lambda$ by the replacement
\begin{equation}
f_p(U_p, a) \to f_p(-U_p, a)=
\Big[\, 1 + \sum_{j\not= 0 } (-1 )^{2 j}\, d_j\, c_j(\beta)\,
\chi_j(U_p)\, \Big] \label{twist2 }\,, \qquad \mbox{for each} \quad p\in {\cal V} \,,
\end{equation}
in (\ref{PF1 }), (\ref{nexp}), i.e. only half-integer representations on
plaquettes in ${\cal V}$ are
affected. In general then, the twisted version of the partition function
(\ref{PF2 }) on $\Lambda^{(n)}$ is
\begin{equation}
Z^{(-)}_{\Lambda^{(n)}}(\{c_j(n)\}) =
\int dU_{\Lambda^{(n)}}\;
\prod_{p\in \Lambda^{(n)}}\, f^{(-)}_p(U_p, n) \; ,
\label{PF1 atwist}
\end{equation}
with
\begin{equation}
f^{(-)}_p(U_p, n) =
\Big[\, 1 + \sum_{j\not= 0 } (-1 )^{2 j\, S_p[{\cal V}]}\, d_j\, c_j(n)\,
\chi_j(U_p)\, \Big]\;. \label{PF1 btwist}
\end{equation}
$S_p[{\cal V}]$ denotes the characteristic function of the plaquette set ${\cal V}$,
i.e. $S_p[{\cal V}]=1 $ if $p\in {\cal V}$, and $S_p[{\cal V}]=0 $ otherwise. A simple result (Appendix A) of obvious physical significance is:
\prop{ With $c_j(n) \geq 0 $, all $j$,
\begin{equation}
Z^{(-)}_{\Lambda^{(n)}}(\{c_j(n)\}) \leq Z_{\Lambda^{(n)}}(\{c_j(n)\}) \,. \label{Z>Z-}
\end{equation} }
Strict inequality holds in fact in
(\ref{Z>Z-}) for any nonvanishing $\beta$ on any finite lattice. Application of the decimation operation
defined in section \ref{DEC} on some given
$Z^{(-)}_{\Lambda^{(m-1 )}}$ of the form (\ref{PF1 atwist}) results in the rule
\begin{equation}
f^{(-)}_p(U, m-1 ) \to F_0 (m)\, f^{(-)}_p(U, m) = F_0 (m) \,
\Big[ 1 + \sum_{j\not= 0 } (-1 )^{2 j\, S_p[{\cal V}]}\, d_j\, c_j(m)\, \chi_j(U) \Big]
\, , \label{RG1 twist}
\end{equation}
with coefficients $F_0 (m)$, $c_j(m)$ computed according to the rules
(\ref{RG2 }) - (\ref{RG5 }). Starting on lattice $\Lambda$, the twisted
partition function resulting after $n$ such steps is
\begin{equation}
Z_\Lambda^{(-)}(\beta, n) = \prod_{m=1 }^n
F_0 (m)^{|\Lambda|/b^{md}}\; Z_{\Lambda^{(n)}}^{(-)}(\{c_j(n)\})
\; . \label{PF2 twist}
\end{equation}
Note that the flux is carried entirely in
$Z_{\Lambda^{(n)}}^{(-)}$. Indeed, bulk free energy contributions
from each $\Lambda^{(m-1 )} \to \Lambda^{(m)}$ decimation step arise from local
moving-integration operations within cells of side
length $b$ on $\Lambda^{(m-1 )}$,
i.e. topologically trivial subsets, and are thus insensitive to the flux
presence. The evolution with $n$ of the effective action in $Z_{\Lambda^{(n)}}^{(-)}$
then determines the manner in which flux spreads,
which is characteristic of the phase the system is in.

\subsection{Upper and lower bounds}
In the presence of the flux, the measure in (\ref{PF1 atwist}) possesses
the property of reflection positivity only in $(d-1 )$-dimensional
planes perpendicular to any one of
the directions $\rho \not= 1,2 $ in which ${\cal V}$ winds around the lattice. One way of dealing with this is to simply consider the quantity
\begin{equation}
Z^+_{\Lambda^{(n)}}(\{c_j(n)\})\equiv {1 \over 2 }
\Big(Z_{\Lambda^{(n)}}(\{c_j(n)\}) +
Z_{\Lambda^{(n)}}^{(-)}(\{c_j(n)\})\Big) \label{Zplus}
\end{equation}
instead of $Z_{\Lambda^{(n)}}^{(-)}$. It is indeed easily checked that reflection positivity holds for
the measure in $Z^+_{\Lambda^{(n)}}$ in all planes. A direct consequence of this (Appendix A) is then the analog of II.1:
\prop{For $\dZ{n}^+(\{c_j(n)\})$ given by (\ref{Zplus})
with $c_j(n) \geq 0 $ for all $j$, and periodic boundary conditions,
(i) $\dZ{n}^+(\{c_j(n)\})$ is an increasing function of each
$c_j(n)$:
\begin{equation}
\partial \dZ{n}^+(\{c_i(n)\}) / \partial c_j(n) \geq 0 \; ;\label{PFplusder0 }
\end{equation}
(ii)
\begin{equation}
\dZ{n}^+(\{c_j(n)\}) \geq \Big[\,1 + \sum_{j\not=0 } d_j^2 \,
c_j(n)^6 \, \Big]^{|\Lambda^{(n)}|} \; . \label{PFpluslowerb1 }
\end{equation}
}
Again, in these bounds equality holds only in the trivial case where
all $c_j(n)$'s vanish. In particular, one has
\begin{equation}
\dZ{n}^+(\{c_j(n)\}) \; > \; 1 \,. \label{PFpluslowerb2 }
\end{equation}
Note that these bounds are identical to
those in II.1. This signifies the obvious fact that they bound from below by
underestimating the bulk free energies proportional to the lattice volume,
whereas the lattice size dependence of the free energy discrepancy between
$\dZ{n}(\{c_j(n)\})$ and $\dZ{n}^{(-)}(\{c_j(n)\})$ is much weaker. Upper and lower bound statements analogous to III.1 and III.2 can be
obtained for $Z^+_{\Lambda^{(n)}}$. One has:
\prop{
For $Z_{\Lambda^{(n-1 )}}^+$ of the form
(\ref{Zplus}),
a decimation transformation (\ref{RG1 twist}), (\ref{RG2 }) -
(\ref{RG5 }) with $\zeta=b^{d-2 }$ and $0 < r\leq 1 $ results in an upper
bound on $Z_{\Lambda^{(n-1 )}}^+$:
\begin{equation}
Z^+_{\Lambda^{(n-1 )}}(\{c_j(n-1 )\})\, \leq \,
F_0 ^U(n)^{|\Lambda^{(n)}|}\,
Z^+_{\Lambda^{(n)}}(\{c_j^U(n, r)\})\;. \label{Uplus}
\end{equation}
The r.h.s. in (\ref{Uplus}) is a monotonically
decreasing function of $r$ on $0 < r\leq 1 $. }
\prop{
For $Z_{\Lambda^{(n-1 )}}^+$ of the form (\ref{Zplus}):
\begin{equation}
Z^+_{\Lambda^{(n)}}(\{c_j^L(n)\}) \, \leq \, Z^+_{\Lambda^{(n-1 )}}(\{
c_j(n-1 )\}) \;, \label{Lplus}
\end{equation}
where the coefficients $c_j^L(n)$ are given by (\ref{lowerc1 }). }
The proofs of IV.3 and of IV.4, the latter an easy corollary
of IV.2, are given in Appendix A. It then follows from (\ref{PFplusder0 })
that (\ref{Lplus}) holds also with coefficients $c_j^L(n)$ given by
(\ref{lowerc2 }) or (\ref{lowerc3 }). Again, in analogy to III.2, IV.4 also holds
with $c_j^L$ given by (\ref{lowerc4 }), but this form will not be used here.

\subsection{Representation of $Z_\Lambda + Z_\Lambda^{(-)}$
on decimated lattices}
The procedure of section \ref{Z}
leading to the representation (\ref{A}) for $Z_\Lambda$
can now be applied to $Z^+_\Lambda= (Z_\Lambda + Z_\Lambda^{(-)})/2 $.
)}_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha, r)\}) \Big)
\label{interPF2 plus}
\end{equation}
with $Z_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha, r)\})$ given by
(\ref{interPF2 }) and $Z^{(-)}_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha, r)\})$
given by (\ref{PF1 atwist}) - (\ref{PF1 btwist}) with
coefficients $\tilde{c}_j(m, \alpha, r)$. We then have the analog of III.3 :
\prop{
The interpolating free energies
$\ln Z^+_{\Lambda^{(m)}}(\{\tilde{c}_j(m, \alpha,
r)\})$ and
$\ln \tilde{Z}^+_{\Lambda^{(m)}}(\beta, h,
\alpha, t, r)$ are
increasing functions of $\alpha$:
\begin{equation}
\partial \ln Z^+_{\Lambda^{(m)}}\Big(\{\tilde{c}_j(m, \alpha, r)\} \Big)
/\partial \alpha \, > \, 0
\,. \label{interPFder1 plus}
\end{equation}
}
In terms of (\ref{interPF1 plus}), IV.3 and IV.4 give
\begin{equation}
\tilde{Z}^+_{\Lambda^{(m)}}(\beta, h,0, t, r)
\leq Z^+_{\Lambda^{(m-1 )}} \leq
\tilde{Z}^+_{\Lambda^{(m)}}(\beta, h,1, t, r) \, , \label{interI1 plus}
\end{equation}
which implies that there exists a value of
$\alpha$ in $(0,1 )$:
\begin{equation}
\alpha^+(m, h, t, r, \{c_j(m-1 )\}, b, \Lambda)\equiv
\alpha_{\Lambda, \, h}^{+(m)}(t, r) \label{interI2 plus}
\end{equation}
such that
\begin{equation}
\tilde{Z}^+_{\Lambda^{(m)}}(\beta, h, \alpha_{\Lambda, \, h}^{+(m)}(t, r),
t, r) = Z^+_{\Lambda^{(m-1 )}} \,. \label{interI3 plus}
\end{equation}
This value is unique, for given values of $t, r$, by IV.5. $\alpha_{\Lambda, \, h}^{+(m)}(t, r)$ gives the regular level surface
of the function $\tilde{Z}^+_{\Lambda^{(m)}}(\beta, h, \alpha, t, r)$
fixed by the value $Z^+_{\Lambda^{(m-1 )}}$. All the considerations concerning the dependence on the parameters
$t, r$ in the previous section carry over directly to
$\alpha_{\Lambda, \, h}^{+(m)}(t, r)$. In particular, one has
\begin{equation}
{\partial \alpha_{\Lambda, \, h}^{+(m)}(t, r)\over \partial t} =
v^+(\alpha_{\Lambda, \, h}^{+(m)}(t, r), t, r) \;,
\label{alphplustder1 }
\end{equation}
where
\begin{equation}
v^+(\alpha, t, r) \equiv
- { \displaystyle {\partial h(\alpha, t) / \partial t}
\over{\displaystyle {\partial h(\alpha, t)\over \partial \alpha} +
A^+_{\Lambda^{(m)}}(\alpha, r)} }
\;, \label{alphplustder2 }
\end{equation}
with
\begin{equation}
A^+_{\Lambda^{(m)}}(\alpha, r) \equiv {1 \over \ln F_0 ^{U}(m) }\,
{1 \over |\Lambda^{(m)}| }\,
{\partial \over \partial\alpha }
\ln Z^+_{\Lambda^{(m)}}\, \Big(\{\tilde{c}_j( n,
\alpha, r)\}\Big) > 0 \;. \label{alphplustder3 }
\end{equation}
Again, we always assume that $h$ is chosen such that
$\partial h/\partial t$ is negative. Then, from (\ref{alphplustder2 }), $v^+>0 $
on $0 <\alpha < 1 $, with $v^+=0 $ at $\alpha=0 $ and $\alpha=1 $. Also
\begin{equation}
{d h(\alpha_{\Lambda, h}^{+(m)}(t, r), t)\over dt}
= - {\partial \alpha_{\Lambda, h}^{+(m)}(t, r)\over \partial t} \,
A^+_{\Lambda^{(m)}}(\alpha_{\Lambda, h}^{+(m)}(t, r), r)
\,. \label{hplustder}
\end{equation}
The derivative w.r.t. $r$ is similarly given by
(\ref{alphplusrder1 }). The values (\ref{interI2 plus}) obey
\begin{equation}
\delta^{+\, \prime} < \alpha_{\Lambda, \, h}^{+(m)}(t, r)<
1 -\delta^+ \label{collarplus}
\end{equation}
with lattice-size independent, positive $\delta^+$ and $\delta^{+\, \prime}$. Again, the lower bound is automatically satisfied, whereas the upper
bound is ensured by letting the parameter $r$ vary, if necessary,
in (\ref{rdomain}) (cf. Appendix B). From this it follows that the analog of (\ref{dercollar}):
\begin{equation}
{\displaystyle \partial \alpha_{\Lambda, \, h}^{+(m)}\over \partial t} (t, r)
\geq \eta^+_1 (\delta^+) > 0 \, , \qquad \qquad
- {\displaystyle d h\over \displaystyle dt}(\alpha_{\Lambda, \, h}^{+(m)}(t, r), t) \geq
\eta^+_2 (\delta^+)>0 \, , \label{dercollarplus}
\end{equation}
holds for some lattice-size independent $\eta^+_1 $, $\eta^+_2 $. Since, furthermore, (\ref{collarplus}) holds for any $r$ if it already
holds for $r=1 $, we may again set
$r = 1 -\epsilon$, and, according to the convention introduced in the
previous section, write $\alpha_{\Lambda, \, h}^{+(m)}(t) \equiv
\alpha_{\Lambda, \, h}^{+(m)}(t, 1 -\epsilon)$,
etc. As in the last section, one may iterate this procedure of
performing a decimation transformation to produce upper and lower bounds
according to (\ref{interI1 plus}), and then fixing the value
(\ref{interI2 plus}) of the interpolating parameter $\alpha$ according to
(\ref{interI3 plus}). Assume that we choose the same interpolation family $h$ at every step. Then starting from the original lattice, after $n$ iterations one obtains
\begin{eqnarray}
Z^+_\Lambda(\beta) &= & {1 \over 2 } \Big(
Z_\Lambda(\beta) + Z^{(-)}_\Lambda(\beta)\Big)
\nonumber \\
& =& \left[\, \prod_{m=1 }^n \tilde{F}_0 (m, h, \alpha_{\Lambda, \,
h}^{+(m)}(t_m), t_m)^{|\Lambda|/ b^{md}}\, \right]\;
\; Z_{\Lambda^{(n)}}^+\, \Big(\{\tilde{c}_j( n, \alpha_{\Lambda, \, h}^{+(n)}
(t_n))\}\Big) \,. \label{B}
\end{eqnarray}
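At each step of (\ref{A}) and (\ref{B}) the interpolation parameter is pinned by a level-surface condition, and by III.3 and IV.5 the interpolating free energy is strictly increasing in $\alpha$; this monotonicity is what makes the value unique and, in principle, numerically locatable by bisection. A schematic sketch with a stand-in monotone function (not the actual free energy):

```python
# Schematic only: F below is a toy strictly increasing function standing in
# for the interpolating free energy as a function of alpha; the level-surface
# condition F(alpha) = target then has a unique solution in [0, 1], found by
# bisection to any desired accuracy.
def solve_level_surface(F, target, tol=1e-12):
    """Find alpha in [0, 1] with F(alpha) = target, assuming F increasing."""
    lo, hi = 0.0, 1.0
    assert F(lo) <= target <= F(hi), "target must lie between the two bounds"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

F = lambda a: 10.0 + 5.0 * a + a ** 3      # toy increasing 'free energy'
alpha = solve_level_surface(F, 12.0)        # level fixed by the finer-lattice value
print(round(alpha, 6))
```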
The discussion in subsection \ref{disc1 } concerning the representation
(\ref{A}) of $Z_\Lambda$ applies equally well to (\ref{B}). In particular,
note that again the existence of the large volume limit implies that
\begin{equation}
\alpha_{\Lambda, h}^{+(m)}(t, r) =
\alpha_h^{+(m)}(t, r) + \delta
\alpha_{\Lambda, h}^{+(m)}(t, r) \label{alphplussplit1 }
\end{equation}
with $\delta \alpha_{\Lambda, h}^{+(m)}(t, r) \to 0 $ as some inverse power of
lattice size in the $|\Lambda^{(m)}|\to \infty$ limit. Alternatively, (\ref{collarplus}) already implies that one must have
a lattice-size independent contribution $\alpha_h^{+(m)}(t, r) \, >\,
\delta^{+\, \prime}\, >\,0 $
in (\ref{alphplussplit1 }) (cf. Appendix B). Again, either scheme (\ref{S1 }) or (\ref{S2 }) may be used to
obtain (\ref{B}). For the reasons already noted, however,
the latter scheme is more convenient for our considerations. Note, furthermore, that the bounding coefficients
$c^U_j(m)$ and $c_j^L(m)$ in this scheme are
the same for $Z_\Lambda$ and $Z^+_\Lambda$ since they do not depend
on $\alpha_{\Lambda, \, h}^{(m-1 )}$ or $\alpha_{\Lambda, \, h}^{+(m-1 )}$. We, therefore, adopt it in what follows as the common iteration scheme
for $Z_\Lambda$ and $Z^+_\Lambda$:
\begin{equation}
\begin{array}{c}
\begin{array}{ccc}
& \quad c_j(\beta) \qquad & \\
& \begin{picture}(60,20 )
\put(45,20 ){\vector(-4, -1 ){100 }}
\end{picture}
\begin{picture}(30,20 )\put(8,20 ){\vector(0, -2 ){20 }}\end{picture}
\begin{picture}(60,20 )
\put(1,20 ){\vector(4, -1 ){100 }}
\end{picture} &
\end{array} \\
\begin{array}{ccccc}
\hfill \{c_j^L(1 )\} \quad & \leq &
\{\tilde{c}_j(1, \alpha_{\Lambda, \, h}^{(1 )}(t_1 ))\}, \;
\{\tilde{c}_j(1, \alpha_{\Lambda, \, h^+}^{+(1 )}(t^+_1 ))\}
&\leq & \quad \{c_j^U(1 )\}\hfill\\
\begin{picture}(30,20 )
\put(8,20 ){\vector(0, -2 ){20 }}
\end{picture} & &
\begin{picture}(30,20 )\put(10,20 ){\vector(0, -2 ){20 }}\end{picture}
& &
\qquad \begin{picture}(30,20 )
\put(1,20 ){\vector(0, -2 ){20 }}
\end{picture} \\
\hfill \{c_j^L(2 )\} \quad & \leq &
\{\tilde{c}_j(2, \alpha_{\Lambda, \, h}^{(2 )}(t_2 ))\}, \;
\{\tilde{c}_j(2, \alpha_{\Lambda, \, h^+}^{+(2 )}(t^+_2 ))\}
&\leq & \quad \{c_j^U(2 )\}\hfill \\
\begin{picture}(30,20 )
\put(8,20 ){\vector(0, -2 ){20 }}
\end{picture} & &
\begin{picture}(30,20 )\put(10,20 ){\vector(0, -2 ){20 }}\end{picture}
& &
\qquad \begin{picture}(30,20 )
\put(1,20 ){\vector(0, -2 ){20 }}
\end{picture} \\
\vdots \; \quad & & \vdots & & \vdots \;
\end{array}\\
\end{array} \label{S3 }
\end{equation}
In (\ref{S3 }) and in the following, the more detailed notation $h^+$ and
$t^+$ is used for the choice of interpolation and
$t$-parameter values occurring in (\ref{B}) whenever they need be
distinguished from those used in the representation (\ref{A}) for
$Z_\Lambda$, which can, of course, be chosen independently. As indicated by the notation, even for common choice of interpolation
$h=h^+$ and of all other parameters, the values of
$\alpha_{\Lambda, \, h}^{+(m)}(t , r)$
fixed by the requirement (\ref{interI3 plus}) are a priori distinct
from those of $\alpha_{\Lambda, \, h}^{(m)}(t, r)$ fixed by
(\ref{interI3 }). It is easily seen, however, that for sufficiently
large lattice volume they must nearly coincide. We examine this
difference more precisely below.

\section{The ratio $Z_\Lambda^{(-)}/Z_\Lambda$}\label{Z-/Z}
\setcounter{equation}{0 }
\setcounter{Roman}{0 }
We may now compare $Z_\Lambda$ and $Z_\Lambda + Z^{(-)}_\Lambda$ by
means of their representations (\ref{A}) and (\ref{B}) on
successively decimated lattices. Consider then the ratio of $Z_\Lambda + Z_\Lambda^{(-)}$ and $Z_\Lambda$ as
given by (\ref{B}) and (\ref{A}) with common choice
of interpolation $h=h^+$ after one decimation:
\begin{eqnarray}
\left(\,1 + {Z_\Lambda^{(-)} \over Z_\Lambda }\, \right) & = &
{ 2 \tilde{Z}^+_{\Lambda^{(1 )}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{+(1 )}(t^+), \, t^+) \over
\tilde{Z}_{\Lambda^{(1 )}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{(1 )}(t), \, t) } \label{ratio1 a} \\
& = & \left(\, { \tilde{Z}_{\Lambda^{(1 )}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{+(1 )}(t^+), \, t^+) \over
\tilde{Z}_{\Lambda^{(1 )}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{(1 )}(t), \, t) }\, \right) \left(\,1 +
{ Z_{\Lambda^{(1 )}}^{(-)}\, \Big(\{\tilde{c}_j(1, \alpha_{\Lambda, \, h}^{+(1 )}
(t^+))\}\Big)
\over Z_{\Lambda^{(1 )}}\, \Big(\{\tilde{c}_j(1, \alpha_{\Lambda, \, h}^{+(1 )}
(t^+))\}\Big) }
\, \right)
\label{ratio1 }
\end{eqnarray}
By construction, the r.h.s. is invariant under independent variations
of $t$ and $t^+$.
resulting
constraint (\ref{ratioconstr}) is quite informative. First, it says that if in the equality (\ref{interI3 }), i.e. \[ \tilde{Z}_{\Lambda^{(1 )}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{(1 )}(t), t) = Z_\Lambda \]
one substitutes for $\alpha_{\Lambda, \, h}^{(1 )}(t)$ the wrong level
surface $\alpha_{\Lambda, \, h}^{+(1 )}(t)$,
the resulting discrepancy in the free energy per unit volume
is at most $O(1 /|\Lambda^{(1 )}|)$. Furthermore, (\ref{ratioconstr})
constrains by how much $\alpha_{\Lambda, h}^{+(1 )}(t)$
can differ from $\alpha_{\Lambda, h}^{+(1 )}(t^+)$ at $t=t^+$. From the definition (\ref{interPF1 }) and III.3, the change
in $\tilde{Z}_{\Lambda^{(1 )}}\,
(\beta, h, \alpha, t, r)$
under a shift $\delta \alpha$ in $\alpha$ satisfies
\begin{equation}
|\, \delta \ln \tilde{Z}_{\Lambda^{(1 )}}\,
(\beta, h, \alpha, t, r)| >
|\, \delta \alpha |\, |\Lambda^{(1 )}| \ln F_0 ^U(1 )\,
{\partial h(\alpha, t)\over \partial \alpha} \;. \label{varcon}
\end{equation}
When combined
with (\ref{varcon}), the constraint (\ref{ratioconstr}), taken at general $r$,
implies that one
must have
\begin{equation}
| \alpha_{\Lambda, \, h}^{+(1 )}(t, r) -
\alpha_{\Lambda, \, h}^{(1 )}(t, r) | \leq
O({1 \over |\Lambda^{(1 )}|}) \;. \label{alphdiff}
\end{equation}
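The reason a discrepancy of size (\ref{alphdiff}) is harmless for bulk densities yet fatal for ratios is elementary arithmetic: an $O(1/|\Lambda^{(1 )}|)$ shift in the free energy per unit volume shifts $\ln Z$ itself by $O(1)$, i.e. multiplies a partition-function ratio by an $O(1)$ factor that does not fade with volume:

```python
# One-line estimate: a shift of order 1/|Lambda| in the free energy *density*
# is negligible per unit volume, yet changes ln Z by O(1), hence a
# partition-function ratio by a fixed multiplicative factor at every volume.
import math

for volume in (10 ** 3, 10 ** 6, 10 ** 9):
    density_shift = 3.0 / volume        # O(1/|Lambda|) per unit volume
    lnZ_shift = volume * density_shift  # O(1) in ln Z, volume-independent
    print(volume, density_shift, math.exp(-lnZ_shift))  # ratio factor e^{-3}
```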
This implies that in (\ref{alphsplit1 }), (\ref{alphplussplit1 }) one
has $\alpha_h^{(1 )}(t)=\alpha_h^{+(1 )}(t)$, i.e. any difference occurs only in
the parts $\delta \alpha_{\Lambda, \, h}^{(1 )}$, $\delta \alpha_{\Lambda, \, h}^{
+(1 )}$ that vary inversely with lattice size. Thus, in the large volume
limit, this difference becomes unimportant if one is
interested only in the computation of partition functions, or
bulk free energies. This, however, is
not the case for free energy differences such as the ratio (\ref{ratio1 a}). Indeed, any discrepancy of the size (\ref{alphdiff}) means that
the first factor in (\ref{ratio1 }) can contribute as much as the second
factor in round brackets. Thus the expression for the ratio
of the twisted to the untwisted partition function given by (\ref{ratio1 }),
though exact, is not immediately useful for extracting this
ratio on the coarser lattice. To address this issue one may make use of the $t$-parametrization invariance
of (\ref{ratio1 }). First, the cancellation of
the bulk energies generated in the integration from
scale $a$ to $ba$ is made explicit as follows. For any given $t^+_1 $, choose $t_1 $ in $\tilde{Z}_{\Lambda^{(1 )}}
\, (\beta, \alpha_{\Lambda, h}^{(1 )}(t_1 ), t_1 )$ so
that
\begin{equation}
h(\alpha_{\Lambda, \, h}^{(1 )}(t_1 ), t_1 ) =
h(\alpha_{\Lambda, \, h}^{+(1 )}(t^+_1 ), t^+_1 ) \;. \label{hequal1 }
\end{equation}
This is clearly always possible by
(\ref{dercollar}) and (\ref{dercollarplus}), and by (\ref{alphdiff});
in fact, $t_1 -t^+_1 = O(1 /|\Lambda^{(1 )}|)$. Then (\ref{ratio1 a}) assumes the form
\begin{equation}
\left(\,1 + {Z_\Lambda^{(-)} \over Z_\Lambda }\, \right) =
{ 2 Z^+_{\Lambda^{(1 )}}\, \Big(\{\tilde{c}_j(1, \alpha^+_{\Lambda, \, h}(t^+_1 ))
\}\Big)
\over Z_{\Lambda^{(1 )}}\, \Big(\{\tilde{c}_j(1, \alpha_{\Lambda, \, h}(t_1 ))
\}\Big) } \;. \label{ratio2 }
\end{equation}
We may now iterate this procedure performing $(n-1 )$ decimation steps
according to the scheme (\ref{S3 }), at each step
choosing $t_m$, $t^+_m$ such that
\begin{equation}
h(\, \alpha_{\Lambda, \, h}^{(m)}(t_m), t_m) =
h(\, \alpha_{\Lambda, \, h}^{+(m)}(t^+_m), t^+_m) \;, \qquad
m=1, \ldots (n-1 ) \;. \label{hequal2 }
\end{equation}
Carrying out a final $n$-th decimation step one obtains
\begin{eqnarray}
\left(\,1 + {Z_\Lambda^{(-)} \over Z_\Lambda }\, \right) & = &
{ 2 \, \tilde{Z}^+_{\Lambda^{(n)}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{+(n)}(t^+), \, t^+) \over
\tilde{Z}_{\Lambda^{(n)}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{(n)}(t), \, t) } \label{ratio3 } \\
& = & { \tilde{Z}_{\Lambda^{(n)}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{+(n)}(t^+), \, t^+) \over \tilde{Z}_{\Lambda^{(n)}}\,
(\beta, h, \alpha_{\Lambda, \, h}^{(n)}(t), \, t) } \, \left(\,1 +
{ Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, \tilde{c}_j(n, \alpha_{\Lambda, \, h}^{+(n)}
(t^+))\, \}\Big)
\over Z_{\Lambda^{(n)}}\, \Big(\{\, \tilde{c}_j(n, \alpha_{\Lambda, \, h}^{+(n)}
(t^+))\, \}\Big) } \right)
\label{ratio4 }
\end{eqnarray}
The argument for $n=1 $ (eq. (\ref{ratio1 })) above may now be applied to
(\ref{ratio4 }) to conclude
\begin{equation}
| \alpha_{\Lambda, h}^{+(n)}(t, r) -
\alpha_{\Lambda, h}^{(n)}(t, r) | \leq
O({1 \over |\Lambda^{(n)}|}) \;. \label{alphdiffN}
\end{equation}
Any such discrepancy between $\alpha_{\Lambda, h}^{+(n)}(t)$ and
$\alpha_{\Lambda, h}^{(n)}(t)$ in (\ref{ratio4 }) presents the same problem
for extracting the ratio at scale $b^n a$ as at scale $ba$. In this sense (\ref{ratio4 }) is not qualitatively different from
the $n=1 $ case (\ref{ratio1 }). Transferring the discrepancy to
large $n$, however, allows a technical simplification as we see below. Next, consider (\ref{ratio3 }) rewritten as
\begin{equation}
\left(\,1 + {Z_\Lambda^{(-)} \over Z_\Lambda }\, \right) =
\left( { Z^+_{\Lambda^{(n-1 )}} \over \tilde{Z}^+_{\Lambda^{(n)}}\, (\beta, h,
\alpha_{\Lambda, \, h}^{(n)}(t), \, t) } \right)\, \left(\,1 +
{ Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, \tilde{c}_j(n, \alpha_{\Lambda, \, h}^{(n)}
(t))\, \}\Big)
\over Z_{\Lambda^{(n)}}\, \Big(\{\, \tilde{c}_j(n, \alpha_{\Lambda, \, h}^{(n)}
(t))\, \}\Big) }
\, \right)
\label{ratio5 }
\end{equation}
by use of (\ref{interI3 plus}). By construction (cf. (\ref{interI3 })),
$\alpha_{\Lambda, \, h}^{(n)}(t)$ is such
that the r.h.s. in (\ref{ratio3 }), hence in (\ref{ratio5 }),
is invariant under changes in the
parameter $t$; but note
that the two $\alpha_{\Lambda, \, h}^{(n)}$-dependent factors in
round brackets on the r.h.s. in (\ref{ratio5 }) are {\it not}
separately invariant. If, for some given $t$, $\alpha_{\Lambda, \, h}^{(n)}(t)$ is larger
(smaller) than $\alpha_{\Lambda, \, h}^{+(n)}(t)$, then,
by IV.5, $\tilde{Z}^+_\Lambda\, (\beta, h,
\alpha_{\Lambda, \, h}^{(n)}(t), \, t)$ is larger (smaller) than
$\tilde{Z}^+_{\Lambda^{(n)}}\, (\beta, h, \alpha_{\Lambda, \, h}^{+(n)}(t),
\, t)= Z^+_{\Lambda^{(n-1 )}}$,
and the second factor in round brackets on the r.h.s. of (\ref{ratio5 })
overestimates (underestimates) the ratio $Z_\Lambda^{(-)}/ Z_\Lambda $. It is then natural to ask whether there exists a value $t=
t_{\Lambda, h}^{(n)}$ such that
\begin{equation}
\tilde{Z}^+_{\Lambda^{(n)}}\, \Big(\beta, h, \alpha_{\Lambda, h}^{(n)}(
t_{\Lambda, h}^{(n)}), \, t_{\Lambda, h}^{(n)} \Big)
= Z^+_{\Lambda^{(n-1 )}} \;. \label{interIfixplus}
\end{equation}
Note that
the graphs of $\alpha_{\Lambda, h}^{(n)}(t)$ and
$\alpha_{\Lambda, h}^{+(n)}(t)$ must intersect at $t_{\Lambda, \, h}^{(n)}$. A unique solution to (\ref{interIfixplus}) indeed exists as shown in
Appendix C provided
\begin{equation}
A_{\Lambda^{(n)}}(\alpha, r) \geq
A^+_{\Lambda^{(n)}}(\alpha, r) \label{A>A+}
\end{equation}
with $r$ in (\ref{rdomain}). An equivalent statement to
(\ref{A>A+}) is
\begin{equation}
A_{\Lambda^{(n)}}(\alpha, r) \geq
A^{(-)}_{\Lambda^{(n)}}(\alpha, r) \, , \label{A>A-}
\end{equation}
where $A^{(-)}_{\Lambda^{(m)}}(\alpha, r)$ is defined
by (\ref{alphplustder3 }) but with $Z^+_{\Lambda^{(n)}}$ replaced by
$Z_{\Lambda^{(n)}}^{(-)}$. Assume now that under successive decimations the coefficients
$c^U_j(m)$ in (\ref{S3 }) evolve within the convergence radius of the
strong coupling cluster expansion. Taking $n$ in
(\ref{ratio3 }) sufficiently
large, we then need to establish inequality (\ref{A>A+})\footnote{For Abelian
systems, comparison inequalities of
the type (\ref{A>A+}) either follow from Griffiths inequalities, or can be
approached by the same methods. All such known
methods fail in the non-Abelian case. } only at
strong coupling. Within this expansion it is a straightforward exercise
to establish the validity of (\ref{A>A+}), with strict inequality
on any finite lattice. We summarize the above development in the following:\\
\prop{
Consider $n$ successive decimation steps performed according to the
scheme (\ref{S3 }). Assume that there is an $n_0 $ such that the upper bound coefficients
$c^U_j(n)$ become sufficiently small for $n\geq n_0 $. Then the ratio of the twisted to the untwisted partition
function on lattice $\Lambda$, of spacing $a$, has a representation on lattice
$\Lambda^{(n)}$, of spacing $b^na$ and $n \geq n_0 $, given by:
\begin{equation}
{Z_\Lambda^{(-)}(\beta) \over Z_\Lambda(\beta) } =
{ Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})\, \}
\Big) \over Z_{\Lambda^{(n)}}\, \Big(\{\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})
\, \}\Big) } \;,
\label{ratio6 }
\end{equation}
where
\begin{equation}
\alpha_\Lambda^{*\, (n)}\equiv \alpha_{\Lambda, h}^{(n)}(t_{\Lambda, \, h}^{(n)})
\;. \label{alphstar}
\end{equation}
Here, the function $\alpha_{\Lambda, h}^{(n)}(t)$ is defined
by (\ref{interI3 }), i.e.\ it is the solution for $\alpha$ to
\begin{equation}
\tilde{Z}_{\Lambda^{(n)}}\, (\beta, h, \alpha, \, t) =
Z_{\Lambda^{(n-1 )}}\;, \label{interI3 A}
\end{equation}
and $t_{\Lambda, \, h}^{(n)}$ is defined by
(\ref{interIfixplus}), i.e.\ it is the solution for $t$ to the equation
\begin{equation}
\tilde{Z}^+_{\Lambda^{(n)}}\, (\beta, h, \alpha_{\Lambda, h}^{(n)}(t), \, t)
= Z^+_{\Lambda^{(n-1 )}} \;. \label{interIfixplusA}
\end{equation}
}
As indicated by the notation in (\ref{alphstar}),
any dependence on $h$ must cancel in $\alpha_\Lambda^{*\, (n)}$. Indeed, Cauchy's form of the intermediate value theorem gives
\begin{equation}
{ \ln Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, \tilde{c}_j(n, \alpha)\, \}\Big) -
\ln Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})
\, \}\Big)
\over \ln Z_{\Lambda^{(n)}}\, \Big(\{\, \tilde{c}_j(n, \alpha)\, \}\Big) -
\ln Z_{\Lambda^{(n)}}\, \Big(\{\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})\, \}
\Big) }
= { A_{\Lambda^{(n)}}^{(-)}(\xi) \over A_{\Lambda^{(n)}}(\xi) }
\leq 1 \;, \label{Cauchy}
\end{equation}
for some $\xi$ between $\alpha_\Lambda^{*\, (n)}$ and $\alpha$,
and use of (\ref{A>A-}) was made to obtain the last inequality.
_\Lambda }
\geq
{ Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, c^U_j(n)\, \}\Big)
\over Z_{\Lambda^{(n)}}\, \Big(\{\, c^U_j(n)\, \}\Big) } \; . \label{ratiolowerupper}
\end{equation}
}
Now, the ratio of the interpolating
partition functions (\ref{interPF2 plus})
and (\ref{interPF2 }) interpolates monotonically between
the upper and lower bounds in (\ref{ratiolowerupper}) since
\begin{equation}
{d\over d\alpha }\; { Z^{(-)}_{\Lambda^{(n)}}(\{\tilde{c}_j(n, \alpha)\})
\over
Z_{\Lambda^{(n)}}(\{\tilde{c}_j(n, \alpha)\}) } < 0 \label{ratioder}
\end{equation}
by (\ref{A>A-}). It follows that there exists a unique value
$\alpha^{*\, (n)}_\Lambda$ of $\alpha$ at which this ratio
of the interpolating partition functions equals
$Z_\Lambda^{(-)} / Z_\Lambda $. This is a restatement of (\ref{ratio6 }), but makes explicit the fact that
this value is independent of $h$. In fact, it shows that all dependence
on parametrization choices, i. e. the choice of parameters $t_m$ made
in successive decimations, eventually cancels in
$\alpha^{*\, (n)}_\Lambda$. Indeed, the latter can depend only on the number
of decimations $n$ and the initial coupling $\beta$, since this is all the
upper and lower bounds in (\ref{ratiolowerupper}) depend on. This, in retrospect, is as expected, since all bulk free-energy
contributions depending on such choices were canceled in
arriving at (\ref{ratio6 }); V.2 makes this manifest. Note that (\ref{ratiolowerupper}) was obtained above as a corollary of (\ref{ratio6 }). An alternative approach would be to proceed in
the reverse direction, i.e.\ establish (\ref{ratiolowerupper})
directly, from which (\ref{ratio6 }) would then follow by interpolation
between the upper and lower bounds as in the previous paragraph. In other words, one would follow, also in the case of the
ratio of the partition functions, the approach followed separately for
the untwisted and twisted partition functions in the previous sections. This is further discussed in Appendix C.

\section{Confinement}\label{CONf}
\setcounter{equation}{0 }
\setcounter{Roman}{0 }
\subsection{Order parameters}\label{CONFOP}
The vortex free energy $F_\Lambda^{(-)}$ is defined by the
ratio of partition functions considered in the previous section:
\begin{equation}
\exp(-F_\Lambda^{(-)}(\beta)) = {Z_\Lambda^{(-)}(\beta)
\over Z_\Lambda(\beta)} \;. \label{vfe}
\end{equation}
It represents the free energy cost for adding a vortex to the vacuum,
the $Z(2 )$ flux of the inserted vortex being rendered stable by
wrapping around the toroidal lattice. As has been discussed in the
literature, all possible phases of gauge theory (Higgs, Coulomb, or
confinement) can be characterized by the behavior of (\ref{vfe})
as one lets the lattice become large. In particular, having
taken the vortex to wind through the
lattice in the directions $\kappa=3, \ldots, d$,
a confining phase is signaled by the asymptotic behavior
\begin{equation}
F_\Lambda^{(-)}(\beta) \sim L\,
\exp(\, -\hat{\sigma}(\beta) |A|\, ) \;, \label{vfeconf}
\end{equation}
where $L\equiv \prod_{\kappa\not= 1,2 }\, L_\kappa$, and
$A\equiv L_1 L_2 $. (\ref{vfeconf}) represents exponential spreading
of the flux introduced by the twist on the set ${\cal V}$ in the
transverse directions (creation of mass gap), with $ \hat{\sigma}(\beta)$
giving the exact string tension. Note that, according to (\ref{vfeconf}), $F_\Lambda^{(-)}(\beta)\to 0 $
as $|\Lambda|\to \infty $ faster than any power of $|\Lambda|$, i.e.\ one
has `condensation' of the vortex flux. The behavior (\ref{vfeconf}) is dictated by physical reasoning \cite{tH},
\cite{MP},
and explicitly realized within the strong coupling expansion. As such free-energy differences are notoriously difficult to
measure accurately, demonstration of the behavior (\ref{vfeconf})
by numerical simulations at large $\beta$ has been achieved only
relatively recently \cite{KT}, \cite{Fetal}. The $Z(2 )$ Fourier transform of (\ref{vfe})
\begin{equation}
\exp(-F_\Lambda^{\rm el}(\beta)) = {1 \over 2 }\Big(
\,1 - {Z_\Lambda^{(-)}(\beta)
\over Z_\Lambda(\beta)} \, \Big)
\label{efe}
\end{equation}
gives the corresponding dual (w.r.t.\ the gauge group center) order
parameter, the color electric free energy. (\ref{vfe}) and (\ref{efe})
are ideal pure long-range order parameters. They do not suffer from
the physically irrelevant but technically quite bothersome
complications, such as loss of translational
invariance, or mass renormalization and other short
range contributions, that arise from the explicit introduction of
external sources. Such external current sources are introduced
in the definition of the Wilson and 't~Hooft loops. Furthermore, the
behavior of the latter can be bounded by that of
(\ref{vfe}) and (\ref{efe}) \cite{TY}. In particular, the following relation holds. Let $C$ be a rectangular loop
of minimal area $S$ lying in a $2 $-dimensional $[12 ]$-plane. Then
\cite{TY}:
\begin{equation}
\vev{W[C]}_\Lambda \leq \left[ \exp(-F_\Lambda^{\rm el})\right]^{S/A} \,,
\label{W-vfebound}
\end{equation}
where $W[C]=\chi_{_{1 /2 }}\, \Big(\prod_{b\in C} U_b\Big)$ is the usual
Wilson loop observable. It follows from (\ref{W-vfebound}) that
confining behavior (\ref{vfeconf}) of the vortex free energy
implies confining behavior (`area-law') for the Wilson loop.

\subsection{Strong coupling cluster expansion and confinement}
We now return to our considerations at the end of section \ref{Z}
regarding the flow of the coefficients $\tilde{c}_j(n,
\alpha_{\Lambda, h}^{(n)}(t))$ in our partition function
representations (\ref{A}) and (\ref{B}). This flow is bounded from above by that of
the MK coefficients $c_j^U(m)$ regardless of the specific
value assumed by the $\alpha_{\Lambda, h}^{(m)}(t_m)$'s at each decimation step
(cf.\ (\ref{cineq5 })). Furthermore, by explicit evaluation under
the iteration rules (\ref{RG2 }) - (\ref{RG5 }), one finds that
$c_j^U(n) \to 0 $
as $n\to \infty$ for any initial $\beta$, provided $d\leq 4 $. Thus, given any initial $\beta$, one may always take the number of iterations
$n$ large enough
so that the coefficients $c_j^U(n)$ become small
enough to be within the region of convergence of the
strong coupling expansion. Then, by V.1:
\begin{equation}
\exp(-F_\Lambda^{(-)}(\beta)) =
{ Z_{\Lambda^{(n)}}^{(-)}\, \Big(\{\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})\, \}
\Big) \over Z_{\Lambda^{(n)}}\, \Big(\{\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})
\, \}\Big) } \;. \label{vfeA}
\end{equation}
The vortex free energy may then be evaluated in terms of the coefficients
$\tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})$ directly on lattice $\Lambda^{(n)}$
of spacing $b^n a$ within a convergent strong coupling polymer
expansion. Recall that, in the pure lattice gauge theory context, a polymer is
a set $Y$ of connected plaquettes containing no `free' bond, i.e.\ no bond
belonging to only one plaquette in $Y$ (see e.g.\ \cite{Mu}).
\begin{equation}
z(Y)= \int\;\prod_{b\in Y} dU_b\;\prod_{p\in Y} g_p(U, n) \;, \label{zY}
\end{equation}
where
\begin{equation}
g_p(U, n) = \sum_{j\not= 0 } d_j\, \tilde{c}_j(n, \alpha_\Lambda^{*\, (n)})
\, \chi_j(U_p) \;. \label{g1 }
\end{equation}
The polymer expansion is then
\begin{equation}
\ln\dZ{n}
= \sum_{X \subset \Lambda^{(n)}} \, a(X)\;\prod_{Y_i \in X} \;
z(Y_i)^{n_i} \;, \label{clusterexp1 }
\end{equation}
where the sum is over all linked clusters of polymers in $\Lambda^{(n)}$,
each cluster $X$ consisting of a connected set of polymers
$Y_i$, $i=1, \ldots, k_X$ with multiplicities $n_i$. The
combinatorial factor $a(X)$ is given by
\begin{equation}
a(X)=\sum_{G(X)} (-1 )^{l(G)} \, , \label{combfactor}
\end{equation}
where the sum is over all connected graphs on $X$ (full set (including
multiplicities)
$\{Y_i\}$ as vertices with a line connecting overlapping polymers)
and $l(G)$ is the number of lines in the graph. In the case of $\dZ{n}^{(-)}$, the presence of the
flux enters the activities
(\ref{zY}) through the replacement (\ref{twist2 }). We denote the resulting
activities by $z^{(-)}(Y)$. This replacement
does not affect polymers that are wholly contained in a simply
connected part of $\Lambda^{(n)}$, since, in this case, the flux
can be removed by a change of variables in the integrals in (\ref{zY}). Only clusters that contain at least one non-simply connected
polymer forming a topologically non-trivially closed surface
can be affected. Thus, one has
\begin{equation}
\ln\dZ{n}^{(-)} - \ln \dZ{n}
= \sum_{X \subset \Lambda^{(n)}} \, a(X)\;\left(\, \prod_{Y_i \in X} \;
z^{(-)}(Y_i)^{n_i} - \, \prod_{Y_i \in X} \;
z(Y_i)^{n_i}\right)\;, \label{clusterexp2 }
\end{equation}
where the sum is only over all such topologically nontrivial
linked clusters, the contribution of all other clusters
canceling in the difference. The minimal cluster of this type consists
of a single polymer which is a
2 -dimensional plane $\Pi: x_\mu=$const., $\mu=3, \ldots, d$ on $\Lambda^{(n)}$,
thus of size $A^{(n)}=L_1 ^{(n)}L_2 ^{(n)}$, and activity
\begin{equation}
z(\Pi)= \sum_{{\rm half-int. }\atop j\geq 1 /2 }
\tilde{c}_j(n, \alpha_\Lambda^{*(n)})^{A^{(n)}}
= \tilde{c}_{1 /2 }(n, \alpha_\Lambda^{*(n)})^{\, A^{(n)}} \, [\, 1 +
\sum_{{\rm half-int. }\atop j\geq 3 /2 }
\left({\tilde{c}_j(n, \alpha^{*(n)}) \over \tilde{c}_{1 /2 }(n, \alpha^{*(n)})}
\right)^{A^{(n)}}\, ] \;. \label{lead}
\end{equation}
(Note that the terms from the higher representations in (\ref{lead}) become
utterly negligible in the large volume limit.)
There are $L^{(n)}=\prod_{\kappa\not=1,2 }L_\kappa^{(n)}$ such minimal
clusters giving the leading contribution
in (\ref{clusterexp2 }). This leading contribution is thus
seen to give the confining behavior (\ref{vfeconf}). Nonleading contributions come from nonminimal clusters
consisting of $\Pi$ with or without `decorations', and additional polymers
touching $\Pi$. Such corrections have been evaluated
in terms of the character expansion coefficients
(the $\tilde{c}_j(n, \alpha_\Lambda^{*(n)})$'s in our case) to quite high order
\cite{Mu}. They can be shown to exponentiate, so that
\begin{equation}
{1 \over L}\, F_\Lambda^{(-)}(\beta) = \exp(- \hat{\sigma}_\Lambda\, A)
\end{equation}
with
\begin{eqnarray}
\hat{\sigma}_\Lambda
& = & {1 \over b^{2 n}}\, \kappa_\Lambda(n, \alpha_\Lambda^{*(n)}) \nonumber\\
& = & {1 \over b^{2 n}}\, \Big[\, \kappa(n, \alpha_\Lambda^{*(n)})
+ O\left( (\tilde{c}_{j+1 /2 }/\tilde{c}_{1 /2 })^{L_\mu^{(n)}}\right)
+ O(n/A^{(n)}) \, \Big] \;, \label{sigma1 }
\end{eqnarray}
where \cite{Mu}
\begin{equation}
\kappa(n, \alpha_\Lambda^{*(n)}) =
\Big[-\ln \tilde{c}_{1 /2 }(n, \alpha_\Lambda^{*(n)})
- 4 \, \tilde{c}_{1 /2 }(n, \alpha_\Lambda^{*(n)})^4 +8 \, \tilde{c}_{1 /2 }(n,
\alpha_\Lambda^{*(n)})^6 + \ldots \Big] \,. \label{sigma2 }
\end{equation}
By the convergence of the expansion \cite{Ca}, the large volume limit
exists and is given by $\hat{\sigma}=\kappa(n, \alpha^{*(n)})/b^{2 n}$,
where $\alpha^{*(n)}$ is the lattice independent part of
$\alpha_\Lambda^{*(n)}$ (cf. (\ref{alphsplit1 })). The number of iterations $n$ in the above expressions is taken large enough
so that, given some initial $\beta$ on $\Lambda$, the resulting
$c_j^U(n)$ are within the
expansion convergence regime, and one can write the representation
(\ref{vfeA}) by V.1. This implies the existence of a scale, a point to
which we return below. Otherwise, $n$ is arbitrary. By construction, our procedure is such that the ratio (\ref{vfe})
is reproduced under successive decimations. Thus, given (\ref{vfeA}) at some $n$, suppose one performs one more
decimation to lattice $\Lambda^{(n+1 )}$.
\tilde{c}_j(n, \alpha^{*\, (n)})
\Big) \,, \]
with $\ln\dZ{n}$, $\ln\dZ{n+1 }$ given by (\ref{clusterexp1 }) - in fact, to
leading approximation, $\ln\dZ{n+1 }$ can be ignored.
Note that this amounts to replacing
the set of the two equations (\ref{interI3 A}) -
(\ref{interIfixplusA}) in V.1 by their ratio and
one of them. This is indeed the most convenient procedure once
(\ref{vfeA}) has been achieved.
\subsection{String tension and asymptotic freedom}
$\kappa(n, \alpha^{*(n)})$ is the string tension in lattice units of
lattice $\Lambda^{(n)}$. It is a complicated,
but well-defined function of the original coupling $\beta= 4 /g^2 $ defined on
lattice $\Lambda$, eq. (\ref{Wilson}) (cf. remarks following
(\ref{ratioder})). We write
\begin{equation}
\kappa(n, \alpha^{*(n)}) \equiv \hat{\sigma}(n, g) \,. \label{sigma3 }
\end{equation}
In dimensional units the asymptotic string tension in
$d=4 $ (\ref{sigma1 }) - (\ref{sigma2 }) is then
\begin{eqnarray}
\sigma & = & {1 \over a^2 }
{1 \over b^{2 n}}\, \hat{\sigma}(n, g)
\label{sigma4 a} \\
& = & {1 \over a^2 } \, \hat{\sigma}(g) \,. \label{sigma4 b}
\end{eqnarray}
Here, as remarked above, $n$ is assumed greater than some required smallest
$n(g)$.
This (dynamically generated) physical scale, or some chosen multiple of
it, is the only parameter in
the theory. Fixing it specifies how the coupling $g$ must vary
with changes of the (unphysical) lattice spacing $a$.
It is convenient, and customary, to introduce a fixed scale $\Lambda_0 $
serving as an arbitrary unit of physical scales. Setting
\begin{equation}
\Lambda_0 ^{\:-1 } = a b^n \, , \label{length}
\end{equation}
determines the lattice spacing $a$ such that it takes $n$ steps to
reach length scale $1 /\Lambda_0 $:
\begin{equation}
n= {1 \over \ln b} \ln {1 \over a\, \Lambda_0 } \,. \label{n-a}
\end{equation}
Fixing the string tension, given in units of $\Lambda_0 $:
\begin{equation}
\sigma = k \Lambda_0 ^2 \;, \label{sigma5 }
\end{equation}
implies
\begin{equation}
\hat{\sigma}(n, g) = k \label{sigmaI}
\end{equation}
for some constant $k$.
(\ref{sigmaI}) specifies the dependence of the bare coupling
$g$ on $n$, hence, through (\ref{n-a}), the dependence on the
lattice spacing $a$. It gives then the value $g(a)$ specified by the
value of the string tension. (This is, of course, equivalent to fixing
(\ref{sigma4 b}) directly.)
Since
\[ \hat{\sigma}(n+1, g+\Delta g)^{1 /2 } - \hat{\sigma}(n, g)^{1 /2 } =
b\, [\, \hat{\sigma}(n, g+\Delta g)^{1 /2 } - \hat{\sigma}(n, g)^{1 /2 }\, ] +
(b-1 ) \hat{\sigma}(n, g)^{1 /2 } \]
and $\Delta a = -(b-1 )a/b$ for $\Delta n=1 $, one has from
(\ref{sigmaI}):
\begin{equation}
{\Delta \sqrt{\hat{\sigma}} \over \Delta g } (a {\Delta g\over \Delta a})
= \sqrt{\hat{\sigma}} \;. \label{sigmaIdiff}
\end{equation}
If $(a{\Delta g /\Delta a})\equiv\beta(g)$, the `beta-function', is known,
(\ref{sigmaIdiff}) can be integrated directly for $\hat{\sigma}(g)$.
(This introduces a dimensional integration constant which can serve as
the scale $\Lambda_0 $).
This is in fact the familiar
textbook argument, where one {\it assumes} the existence of a
string tension so as to get (\ref{sigmaIdiff}), and in which the standard weak
coupling perturbative expression for the beta function is then used.
For us, however, the existence of a non-zero string tension is the
outcome of the process of successive decimations to coarser scales as
developed above. This process embodies all relevant information in the theory.
In particular, it also supplies the specification of the function $g(a)$.
One can indeed construct the function $g(a)$ directly as follows:\\
(i) Starting with some initial value of $\beta=4 /g^2 $ perform
successive decimations following the flow into the strong coupling
regime with resulting string tension $\kappa(n, \alpha^{*(n)})$,
eq. (\ref{sigma2 }), at some $n=n_0 $. Let $k$ denote the value of this
string tension. The corresponding
value of the lattice spacing $a_0 $ is given by (\ref{n-a}), and $g=g(a_0 )$. \\
(ii) Fix the string tension as in (\ref{sigmaI}). This is then satisfied at
$n_0 $, $g(a_0 )$. \\
(iii) Vary $g$ away from $g(a_0 )$ to determine $g$ such that, under
successive decimations following the flow into the strong coupling
regime, the resulting string tension satisfies (\ref{sigmaI}) for
$n=n_0 +1 $. \\
(iv) Repeat (iii) for $n=n_0 +2, \, n_0 +3, \ldots$, and similarly for $n=n_0 -1, \, n_0 -2, \ldots$. \\
This provides the functional relation $g(a)$. In particular, for
$b=2 $, it gives the sequence of values $g(a_0 /2 ^l)$, $l=1,2, \ldots$,
starting from some value $g(a_0 )$. \footnote{This is the analog in the
present context of the `staircase' procedure in \cite{Cr}. }
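The steps (i)-(iv) above amount to a one-dimensional root-finding loop. As a sketch of the mechanics only, the following uses a crude stand-in for the actual decimation flow: a single-coefficient toy recursion $c \mapsto \tanh\big(b^{d-2}\, {\rm artanh}(c^{b^2})\big)$, the leading estimate $\kappa \approx -\ln c$, and a hypothetical parametrization $c_0 = \tanh(\beta/4)$ of the initial coupling. None of these is the paper's actual flow (the toy even has a nontrivial fixed point, unlike the $SU(2)$ flow argued in the text); the bisection structure at fixed string tension is the point.

```python
import math

B, D = 2, 4                       # decimation scale b and dimension d

def flow(c0, n):
    # toy single-coefficient decimation: c -> tanh(b^(d-2) * artanh(c^(b^2)))
    c = c0
    for _ in range(n):
        c = math.tanh(B ** (D - 2) * math.atanh(c ** (B * B)))
    return c

def kappa_after(c0, n):
    # leading strong-coupling estimate; guard against underflow for tiny c
    return -math.log(max(flow(c0, n), 1e-300))

def g_of_n(n, k=1.0):
    # step (iii): bisect for the initial coupling whose n-step flow gives string tension k
    lo, hi = 0.05, 0.675          # kappa_after is decreasing in c0 on this bracket
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if kappa_after(mid, n) > k else (lo, mid)
    c0 = (lo + hi) / 2
    beta = 4 * math.atanh(c0)     # hypothetical map c0(beta), with beta = 4/g^2
    return 2 / math.sqrt(beta)

# steps (i)-(iv): the sequence g(a_0), g(a_0/2), g(a_0/4), ... at fixed string tension
gs = [g_of_n(n) for n in range(2, 6)]
```

Even in this toy, the qualitative feature survives: reaching the same string tension with more decimation steps requires starting at weaker coupling, so the sequence of couplings decreases as the spacing shrinks.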
Note that, according to (i) above,
the number of decimations $n_0 $ at which one chooses to apply
V.1 to obtain (\ref{vfeA}), (\ref{sigma2 }) amounts to fixing the string
tension. This is the only physical parameter in the theory.
A specification of $\Lambda_0 $ is a specification of the value
$g(a_0 )$ at spacing $a_0 $, which is a
convention of no physical import.
One then has in principle a constructive method for obtaining
$g(a)$ by a sequence of simple algebraic operations.
This is the coupling $g(a)$ as defined in the physical
non-perturbative renormalization scheme
specified by keeping the string tension fixed.
A straightforward illustration of the method is provided by setting all
$\alpha_\Lambda^{(n)}=1 $, i.e.\ applying it to the flow according to the
upper bound coefficients $c_j^U$ in (\ref{S1 }). This yields
$g(a)$ as given by MK decimations.
We cannot apply it explicitly to the
case of interest, i.e.\ the flow following the middle column
coefficients in (\ref{S2 }),
since we do not determine them explicitly in this paper.
The qualitative features at strong and weak coupling, however, are readily
discernible.
At strong coupling, i.e.\ small initial $\beta$, the number
of decimations needed to reach a given string tension is of
order unity, i.e.\ the lattice spacing $a$ is large: $a = O(\Lambda_0 ^{\, -1 })$,
and one is very far from any continuum limit.
Successive decimations, by construction, reproduce the behavior seen
within the strong coupling expansion, and
the familiar strong coupling variation given by $\beta(g) \sim g \ln g$ is
the result, as can be checked by a short computation.
The opposite limit of large initial $\beta$ corresponds to large
number of decimations, hence $a \ll \Lambda_0 ^{\, -1 }$. Indeed, recall that
$g=0 $ is a fixed point of the decimations. Hence, for $\beta\to \infty$,
one necessarily has $n\to \infty$ in order for, say, the leading upper bound
coefficient $c_{1 /2 }^U(n)$ to reach any prescribed value $ < 1 $.
Thus, $a\Lambda_0 \to 0 $. Note that this limit is well-defined by construction
since everything is bounded and continuous under successive
decimations.
Asymptotic freedom, i.e.\ the statement
that $g(a)\to 0 $ as $a\to 0 $, is then a direct qualitative
consequence of the flow produced by the decimations.
It is instructive to examine the actual manner in which $g(a)\to 0 $
under the upper bound decimations, i.e.\ the
$c_j^U(m)$'s in (\ref{S2 }). Comparing two $g$ values that differ by
one decimation step ($b=2 $), one finds
\begin{equation}
{1 \over g^2 (a)} ={1 \over g^2 (2 a)} + 2 b_0 \, \ln2 + O(g^2 )
\end{equation}
for sufficiently small $g(a)$. The constant $b_0 =(1 -1 /b^2 )/(24 \ln b)$
underestimates
the value $11 /(24 \pi^2 )$ obtained in a continuum perturbative calculation
by only about $3 \%$.
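The quoted $3\%$ figure follows directly from the two expressions for $b_0$ given in the text, as a two-line check confirms:

```python
import math

b = 2
b0_mk = (1 - 1 / b ** 2) / (24 * math.log(b))   # decimation value of b_0 at b = 2
b0_cont = 11 / (24 * math.pi ** 2)              # continuum one-loop value for SU(2)
deficit = 1 - b0_mk / b0_cont                   # relative underestimate, about 3%
```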
The actual flow (middle column in (\ref{S2 })) is faster,
corresponding to somewhat larger $b_0 $. According to RG lore,
a beta-function defined by other means, such
as fixing some renormalized coupling within weak coupling perturbation
theory, should coincide, in its universal
first two terms, with that defined by the above physical non-perturbative
scheme. This, however, is outside the scope of, and
not of direct relevance for the main argument in this paper.
To reiterate, the above procedure
completely specifies the dependence $g(a)$ in the physical
renormalization scheme defined by keeping the
string tension fixed, and this dependence is necessarily such
that $g(a)\to 0 $ as $a\to 0 $.
\section{Concluding remarks}\label{SUM}
In summary, we obtained a representation of the vortex free energy, originally
defined on a lattice of spacing $a$,
in terms of partition functions on a lattice of spacing $ab^n$.
The effective action in this representation
is bounded by the corresponding effective action resulting
from potential moving decimations (MK decimations) from spacing $a$ to
spacing $ab^n$. The latter are explicitly computable. Confining behavior
is the result, starting from any initial coupling $g$ on spacing $a$,
by taking the number of decimations $n$ large enough.
It is worth remarking again that in an
approach based on RG decimations the fact that the only
parameter in the theory is a physical scale emerges in a natural way.
Picking a number of decimations can be related to
fixing the string tension. That this can
be done only after flowing into the strong coupling regime
reflects the fact that this dynamically generated scale is an `IR effect'.
The coupling $g(a)$ is completely determined in its dependence on $a$
once the string tension is fixed.
In particular, $g(a) \to 0 $ as $a\to 0 $.
Note that this implies that there
is no physically meaningful or unambiguous way of non-perturbatively viewing
the short distance regime independently of the long distance regime.
Computation of all physical observable quantities
in the theory must then give a multiple of the string tension or
a pure number. In the absence of other interactions, this scale provides
the unit of length; there are in fact no free
parameters. \footnote{This is part of the meaning of the common
saying ``QCD is the perfect theory''. }
There is a variety of other results related to the approach in this
paper that could not be included here. We note, in particular,
that the same procedure can be immediately transcribed to the
Heisenberg $SU(2 )$ spin model.
Also, apart from analytical results, the considerations in this
paper may be combined with Monte Carlo RG techniques to
constrain the numerical construction of improved actions at different
scales, a subject of perennial interest to the practicing lattice
gauge theorist. We hope to report on these matters elsewhere.
This research was partially supported by
NSF grant NSF-PHY-0555693.
\setcounter{equation}{0 }
\section{INTRODUCTION \label{INTRODUCTION}}
When a nonlinear system shows unexpectedly simple behavior, it may be a clue that some hidden structure awaits discovery. For example, recall the classic detective story~\cite{jackson90 } that began in the 1950s with the work of Fermi, Pasta, and Ulam~\cite{FPU, Weissert, zabu05 }. In their numerical simulations of a chain of anharmonic oscillators, Fermi et al. were surprised to find the chain returning almost perfectly, again and again, to its initial state. The struggle to understand these recurrences led Zabusky and Kruskal~\cite{zabu65 } to the discovery of solitons in the Korteweg--de~Vries equation, which in turn sparked a series of results showing that this equation possessed many conserved quantities---in fact, infinitely many~\cite{miura76 }. Then several other equations turned out to have the same properties. At the time these results seemed almost miraculous. But by the mid-1970s the hidden structure responsible for all of them---the complete integrability of certain infinite-dimensional Hamiltonian systems~\cite{zakh71 }---had been made manifest by the inverse scattering transform~\cite{gard67, ablo81 } and Lax pairs~\cite{lax68 }.

Something similar, though far less profound, has been happening again in nonlinear science. The broad topic is still coupled oscillators, but unlike the conservative oscillators studied by Fermi et al., the oscillators in question now are dissipative and have stable limit cycles. This latest story began around 1990, when a few researchers noticed an enormous amount of neutral stability and seemingly low-dimensional behavior in their simulations of Josephson junction arrays---specifically, arrays of identical, overdamped junctions arranged in series and coupled through a common load~\cite{tsan91, tsan92, swift92, golo92, nich92 }. Then, just a year ago, Antonsen et al.
~\cite{anto08 } uncovered similarly low-dimensional dynamics in the periodically forced version of the Kuramoto model of biological oscillators~\cite{kura84, stro00, aceb05 }. This was particularly surprising because the oscillators in that model are non-identical. As in the soliton story, these numerical observations then inspired a series of theoretical advances. These included the discovery of constants of motion~\cite{wata93, wata94 }, and of a pair of transformations that established the low-dimensionality of the dynamics~\cite{wata93, wata94, goeb95, otta08, piko08, otta09 }. But what remained to be found was the final piece, the identification of the hidden structure. Without it, it was unclear why the transformations and constants of motion should exist in the first place.

In this paper we show that the group of M\"{o}bius transformations is the key to understanding this class of dynamical systems. Our analysis unifies the previous treatments of Josephson arrays and the Kuramoto model, and clarifies the geometric and algebraic structures responsible for their low-dimensional behavior. One spin-off of our approach is a new set of constants of motion; these generalize the constants found previously, and hold for a wider class of oscillator arrays.

The paper is organized as follows. To keep the treatment self-contained and to establish notation, Section~\ref{BACKGROUND} reviews the relevant background about coupled oscillators and the M\"{o}bius group. In Section~\ref{MOBIUS GROUP REDUCTION} we show how to use M\"{o}bius transformations to reduce the dynamics of oscillator arrays with global sinusoidal coupling, a class that includes the Josephson and Kuramoto models as special cases. The reduced flow lives on a set of invariant three-dimensional manifolds, arising naturally as the so-called group orbits of the M\"{o}bius group.
The results obtained in this way are then compared to previous findings (Section~\ref{CONNECTIONS TO PREVIOUS RESULTS}) and used to generate new constants of motion via the classical cross ratio construction (Section~\ref{CHARACTERISTICS OF THE MOTION}). We explore the dynamics on the invariant manifolds in Section~\ref{CHAOS IN JOSEPHSON ARRAYS}, and show that the phase portraits for resistively coupled Josephson arrays are filled with chaos and island chains, reminiscent of the pictures encountered in Hamiltonian chaos and KAM theory.

\section{BACKGROUND \label{BACKGROUND}}
\subsection{Reducible systems with sinusoidal coupling\label{Reducible systems}}
The theory developed here was originally motivated by simulations of the governing equations for a series array of $N$ identical, overdamped Josephson junctions driven by a constant current and coupled through a resistive load. As shown in Tsang et al.~\cite{tsan91 }, the dimensionless circuit equations for this system can be written as
\begin{equation} \label{jj_resistive}
\dot{\phi_j} = \Omega-(b+1 ) \cos \phi_j + \frac{1 }{N} \sum_{k=1 }^N \cos \phi_k
\end{equation}
for $j=1, \ldots, N$. The physical interpretation need not concern us here; the important point for our purposes is that this set of $N$ ordinary differential equations (ODEs) displayed low-dimensional dynamics. The same sort of low-dimensional behavior was later found in other kinds of oscillator arrays~\cite{golo92 } as well as in Josephson arrays with other kinds of loads~\cite{tsan92, swift92, nich92 }. Building on contributions from several teams of researchers~\cite{tsan91, tsan92, swift92, golo92, nich92 }, Watanabe and Strogatz~\cite{wata94 } showed that the system (\ref{jj_resistive}) could be reduced from $N$ ODEs to three ODEs, in the following sense. Consider a time-dependent transformation from a set of constant angles $\theta_j$ to a set of functions $\phi_j(t)$, defined via
\begin{equation} \label{WS_transformation}
\tan\left[\frac{\phi_j(t)-\Phi(t)}{2 }\right] = \sqrt{\frac{1 +\gamma(t)}{1 -\gamma(t)}} \tan\left[\frac{\theta_j-\Theta(t)}{2 }\right]
\end{equation}
for $j=1, \ldots, N$. By direct substitution, one can check that the resulting functions $\phi_j(t)$ simultaneously satisfy all $N$ equations in (\ref{jj_resistive}) as long as the three variables $\Phi(t), \gamma(t)$ and $\Theta(t)$ satisfy a certain closed set of ODEs~\cite{wata94 }. Watanabe and Strogatz also noted that the same transformation can be used to reduce any system of the form
\begin{equation} \label{reducible}
\dot{\phi_j} = f e^{i\phi_j} + g + \bar{f}e^{-i\phi_j}
\end{equation}
for $j = 1, \ldots, N$, where $f$ is any smooth, complex-valued, $2 \pi$-periodic function of the phases $\phi_1, \ldots, \phi_N$. (Here the overbar denotes complex conjugate. Also, note that $g$ has to be real-valued since $\dot{\phi_j}$ is real.) The functions $f$ and $g$ are allowed to depend on time and on any other auxiliary state variables in the system, for example, the charge on a load capacitor or the current through a load resistor for certain Josephson junction arrays. The key is that $f$ and $g$ must be the same for all oscillators, and thus do \emph{not} depend on the index $j$. We call such systems \emph{sinusoidally coupled} because the dependence on $j$ occurs solely through the first harmonics $e^{i\phi_j}$ and $e^{-i\phi_j}$.

Soon after the transformation (\ref{WS_transformation}) was reported, Goebel~\cite{goeb95 } observed that it could be related to fractional linear transformations, and he used this fact to simplify some of the calculations in Ref.~\cite{wata94 }. At that point, research on the reducibility of Josephson arrays paused for more than a decade. The question of \emph{why} this particular class of dynamical systems (\ref{reducible}) should be reducible by fractional linear transformations was not pursued at that time, but will be addressed in Section~\ref{MOBIUS GROUP REDUCTION}.

\subsection{Ott-Antonsen ansatz \label{Ott-Antonsen ansatz}}
Ott and Antonsen~\cite{otta08, otta09 } recently reopened the issue of low-dimensional dynamics, with their discovery of an ansatz that collapses the infinite-dimensional Kuramoto model to a two-dimensional system of ODEs. To illustrate their ansatz in its simplest form, let us apply it to the class of identical oscillators governed by Eq. (\ref{reducible}), in the limit $N \rightarrow \infty$. (Note that this step involves two simplifying assumptions, namely, that $N$ is infinitely large and that the oscillators are identical. The Ott-Antonsen ansatz applies more generally to systems of non-identical oscillators with frequencies chosen at random from a prescribed probability distribution--- indeed, this generalization was one of Ott and Antonsen's major advances--- but it is not needed for the issues that we wish to address. ) In the limit $N \rightarrow \infty$, the evolution of the system (\ref{reducible}) is given by the continuity equation
\begin{equation} \label{continuity}
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho v)}{\partial \phi} = 0
\end{equation}
where the phase density $\rho(\phi, t)$ is defined such that $\rho(\phi, t) \mathrm{d}\phi$ gives the fraction of phases that lie between $\phi$ and $\phi + \mathrm{d}\phi$ at time $t$, and where the velocity field is the Eulerian version of (\ref{reducible}):
\begin{equation} \label{velocity}
v(\phi, t) = f e^{i\phi} + g + \bar{f}e^{-i\phi}. \end{equation}
Our earlier assumptions about the coefficient functions $f$ and $g$ now take the form that $f$ and $g$ may depend on $t$ but not on $\phi$. The time-dependence of $f$ and $g$ can arise either explicitly (through external forcing, say) or implicitly (through the time-dependence of the harmonics of $\rho$ or any auxiliary state variables in the system). Following Ott and Antonsen~\cite{otta08 }, suppose $\rho$ is of the form
\begin{equation} \label{OA_ansatz}
\rho(\phi, t) = \frac{1 }{2 \pi} \biggl\lbrace 1 +\sum_{n=1 }^\infty \bigl(\bar{\alpha}(t)^ne^{in\phi}+\alpha(t)^ne^{-in\phi}\bigr) \biggr\rbrace
\end{equation}
for some unknown function $\alpha$ that is independent of $\phi$. (Our definition of $\alpha$ is, however, slightly different from that in Ott and Antonsen~\cite{otta08 }; our $\alpha$ is their $\bar \alpha$. ) Note that (\ref{OA_ansatz}) is just an algebraic rearrangement of the usual form for the Poisson kernel:
\begin{equation} \label{Poisson}
\rho(\phi) = \frac{1 }{2 \pi} \frac{1 -r^2 }{1 -2 r\cos(\phi-\Phi)+r^2 }
\end{equation}
where $r$ and $\Phi$ are defined via
\begin{equation} \label{alpha}
\alpha = re^{i\Phi}. \end{equation}
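As a quick numerical sanity check (ours, not part of the original analysis), one can verify that the truncated series (\ref{OA_ansatz}) reproduces the closed form (\ref{Poisson}); the parameter values and truncation depth below are arbitrary choices:

```python
import numpy as np

# Parameters of the Poisson kernel (arbitrary choices): alpha = r e^{i Phi}, |alpha| < 1
r, Phi = 0.6, 0.8
alpha = r * np.exp(1j * Phi)

phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)

# Closed form: rho = (1/2pi) (1 - r^2) / (1 - 2 r cos(phi - Phi) + r^2)
rho_closed = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(phi - Phi) + r**2) / (2.0 * np.pi)

# Series form, truncated at n_max terms (the series converges geometrically in r)
n_max = 200
series = np.ones_like(phi, dtype=complex)
for n in range(1, n_max + 1):
    series += np.conj(alpha)**n * np.exp(1j * n * phi) + alpha**n * np.exp(-1j * n * phi)
rho_series = series.real / (2.0 * np.pi)

err = np.max(np.abs(rho_series - rho_closed))
```

The two expressions agree to machine precision once the truncation depth exceeds a few dozen terms.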
In geometrical terms, the ansatz (\ref{OA_ansatz}) defines a submanifold in the infinite-dimensional space of density functions $\rho$. This \emph{Poisson submanifold} is two-dimensional and is parametrized by the complex number $\alpha$, or equivalently, by the polar coordinates $r$ and $\Phi$. The intriguing fact discovered by Ott and Antonsen is that the Poisson submanifold is invariant: if the density is initially a Poisson kernel, it remains a Poisson kernel for all time. To verify this, we substitute the velocity field (\ref{velocity}) and the ansatz (\ref{OA_ansatz}) into the continuity equation (\ref{continuity}), and find that the amplitude equations for each harmonic $e^{in\phi}$ are simultaneously satisfied if and only if $\alpha(t)$ evolves according to
\begin{equation} \label{alpha_eqn}
\dot{\alpha} = i\bigl(\bar{f}+g\alpha+f\alpha^2 \bigr). \end{equation}
This equation can be recast in a more physically meaningful form in terms of the complex order parameter, denoted by $\langle z \rangle$ and defined as the centroid of the phases $\phi$ regarded as points $e^{i \phi}$ on the unit circle:
\begin{equation} \label{z}
\langle z \rangle = \int_0 ^{2 \pi} e^{i\phi} \rho(\phi, t) \mathrm{d}\phi. \end{equation}
By substituting (\ref{OA_ansatz}) into (\ref{z}) we find that $\langle z \rangle = \alpha$ for all states on the Poisson submanifold. Hence, $\langle z \rangle$ satisfies the Riccati equation
\begin{equation} \label{riccati}
\dot{\langle z \rangle} = i(\bar{f} + g\langle z \rangle + f \langle z \rangle^2 ). \end{equation}
When $f$ and $g$ are functions of $\langle z \rangle$ alone, as in mean-field models, Eq. (\ref{riccati}) constitutes a closed two-dimensional system for the flow on the Poisson submanifold. More generally, the system will be closed whenever $f$ and $g$ depend on $\rho$ only through its Fourier coefficients. We will show this explicitly in Subsection~\ref{Fourier Coefficients of the Phase Distribution}, by finding formulas for all the higher Fourier coefficients in terms of $\alpha$, and hence in terms of $\langle z \rangle$. (However, as we will see, things become more complicated for states lying off the Poisson submanifold. Then $\langle z \rangle$ no longer coincides with $\alpha$ and the closed system becomes three dimensional, involving $\psi$ as well as $\alpha$. )
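As a numerical sanity check of the identity $\langle z \rangle = \alpha$ on the Poisson submanifold (a sketch of ours; the parameters are arbitrary):

```python
import numpy as np

r, Phi = 0.6, 1.2                      # arbitrary Poisson-kernel parameters
alpha = r * np.exp(1j * Phi)

phi = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
rho = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(phi - Phi) + r**2) / (2.0 * np.pi)

# centroid <z> = int_0^{2pi} e^{i phi} rho(phi) dphi, via the periodic trapezoid rule
z_mean = np.mean(np.exp(1j * phi) * rho) * 2.0 * np.pi
err = abs(z_mean - alpha)
```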
The work of Ott and Antonsen~\cite{otta08 } raises several questions. Why should the set of Poisson kernels be invariant? What is the relationship, if any, between the ansatz (\ref{OA_ansatz}) and the transformation (\ref{WS_transformation}) studied earlier? Why does (\ref{WS_transformation}) reduce equations of the form (\ref{reducible}) to a three-dimensional flow, whereas (\ref{OA_ansatz}) reduces them to a two-dimensional flow? As we shall see, the answers have to do with the properties of the group of conformal mappings of the unit disc to itself. Before showing how this group arises naturally in the dynamics of sinusoidally coupled oscillators, let us recall some of its relevant properties. \subsection{M\"{o}bius group \label{Mobius group}}
Consider the set of all fractional linear transformations $F: \mathbb{C} \rightarrow \mathbb{C}$ of the form
\begin{equation} \label{FLT}
F(z) = \frac{a z + b}{c z + d},
\end{equation}
where $a, b, c$ and $d$ are complex numbers, and the numerator is not a multiple of the denominator (that is, $ad-bc \neq 0 $). This family of functions carries the structure of a group. The group operation is composition of functions, the identity element is the identity map, and inverses are given by inverse functions. Of most importance to us is a subgroup $G$--- which we refer to as the \emph{M\"{o}bius group}--- consisting of those fractional linear transformations that map the open unit disc
$\mathbb{D} = \{z \in \mathbb{C}: |z| < 1 \}$ onto itself in a one-to-one way.
It is a standard result of complex analysis that every such transformation can be written in the form
\begin{equation} \label{usual_automorphism_parametrization}
F(z) = e^{i\varphi} \frac{\alpha -z}{1 - \bar{\alpha} z},
\end{equation}
for some $\varphi \in \mathbb{R}$ and $\alpha \in \mathbb{D}$. The M\"{o}bius group $G$ is in fact a three-dimensional Lie group, with real parameters $\varphi, $ Re$(\alpha)$, and Im$(\alpha)$. However, it turns out that a different parametrization of $G$ will be more notationally convenient in what follows, in the sense that it simplifies comparisons between our results and those in the prior literature. Specifically, we will view a typical element of $G$ as a mapping $M$ from the unit disc in the complex $w$-plane to the unit disc in the complex $z$-plane, with parametrization given by
\begin{equation} \label{our_automorphism_parametrization}
z=M(w) = \frac{e^{i\psi}w + \alpha}{1 + \bar{\alpha}e^{i\psi}w}
\end{equation}
where $\alpha \in \mathbb{D}$ and $\psi \in \mathbb{R}$. Note that the inverse mapping
\begin{equation} \label{inverse_parametrization}
w=M^{-1 }(z) = e^{-i\psi} \frac{z - \alpha}{1 - \bar{\alpha} z}
\end{equation}
has an appearance closer to that of the standard parametrization (\ref{usual_automorphism_parametrization}). A word about terminology: our definition of the M\"{o}bius group is not the conventional one. Usually this term denotes the larger group of all fractional linear transformations (or bilinear transformations, or linear fractional transformations), whereas we reserve the adjective M\"{o}bius for the subgroup $G$ and its elements. Thus, from now on, when we say \emph{M\"{o}bius transformation} we specifically mean an element of the subgroup $G$ consisting of analytic automorphisms of the unit disc. \section{M\"{O}BIUS GROUP REDUCTION \label{MOBIUS GROUP REDUCTION}}
In this section we show that if the equations for the oscillator array are of the form (\ref{reducible}), then the oscillators' phases $\phi_j(t)$ evolve according to the action of the M\"{o}bius group on the complex unit circle:
\begin{equation} \label{groupaction}
e^{i\phi_j(t)} = M_t(e^{i\theta_j}),
\end{equation}
for $j=1, \ldots, N$, where $M_t$ is a one-parameter family of M\"{o}bius transformations and
$\theta_j$ is a constant (time-independent) angle. In other words, the time-$t$ flow map for the system is always a M\"{o}bius map. Incidentally, this result is consistent with a basic topological fact: we know that different oscillators cannot pass through each other on $S^1 $ under the flow, so we expect the time-$t$ flow map to be an orientation-preserving homeomorphism of $S^1 $ onto itself--- and indeed any M\"{o}bius map is. We begin the analysis with an algebraic method similar to that in Goebel~\cite{goeb95 }. Then, in Sections~\ref{Geometric Method of Finding dotalpha} and~\ref{Geometric Method of Finding dotpsi}, we adopt a geometrical perspective and show that it answers several questions left open by the first method. \subsection{Algebraic Method \label{Algebraic Method}}
Parametrize the one-parameter family of M\"{o}bius transformations as
\begin{equation} \label{2 }
M_t(w) = \frac{e^{i\psi}w + \alpha}{1 + \bar{\alpha}e^{i\psi}w}
\end{equation}
where $|\alpha(t)| < 1 $ and $\psi(t) \in \mathbb{R}$, and let
\begin{equation} \label{w-defn}
w_j = e^{i \theta_j}. \end{equation}
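Before carrying out the verification, a small numerical sanity check (ours) that the map (\ref{2 }) sends the unit circle to itself and is inverted by (\ref{inverse_parametrization}):

```python
import numpy as np

def mobius(w, alpha, psi):
    # Eq. (2): M(w) = (e^{i psi} w + alpha) / (1 + conj(alpha) e^{i psi} w)
    return (np.exp(1j * psi) * w + alpha) / (1.0 + np.conj(alpha) * np.exp(1j * psi) * w)

def mobius_inv(z, alpha, psi):
    # Eq. (inverse_parametrization): M^{-1}(z) = e^{-i psi} (z - alpha) / (1 - conj(alpha) z)
    return np.exp(-1j * psi) * (z - alpha) / (1.0 - np.conj(alpha) * z)

alpha, psi = 0.4 - 0.3j, 1.2            # arbitrary parameters with |alpha| < 1
w = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False))

z = mobius(w, alpha, psi)
on_circle_err = np.max(np.abs(np.abs(z) - 1.0))                 # |M(w)| = 1 on |w| = 1
roundtrip_err = np.max(np.abs(mobius_inv(z, alpha, psi) - w))   # M^{-1}(M(w)) = w
```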
To verify that (\ref{2 }) gives an exact solution of (\ref{reducible})--- subject to the constraint that the M\"{o}bius parameters $\alpha(t)$ and $\psi(t)$ obey appropriate ODEs, to be determined--- we compute the time-derivative of $\phi_j(t) = -i \log M_t(w_j)$, keeping in mind that $w_j$ is constant:
\begin{equation} \label{3 }
\dot{\phi_j} = \frac{\dot{\psi}e^{i\psi}w_j - i \dot{\alpha}}{e^{i\psi}w_j + \alpha} + \frac{(i\dot{\bar{\alpha}}
- \bar{\alpha}\dot{\psi})e^{i\psi}w_j}{1 + \bar{\alpha}e^{i\psi}w_j}. \end{equation}
From (\ref{inverse_parametrization}), we get
\begin{equation} \label{4 }
e^{i\psi}w_j = \frac{e^{i\phi_j} - \alpha}{1 - \bar{\alpha}e^{i\phi_j}}
\end{equation}
which when substituted into (\ref{3 }) yields
\begin{equation} \label{5 }
\dot{\phi_j} = Re^{i\phi_j} + \frac{\dot{\psi} + i\bar{\alpha}\dot{\alpha} - \alpha(i\dot{\bar{\alpha}} - \bar{\alpha}\dot{\psi})}{1 - |\alpha|^2 } + \bar{R}e^{-i\phi_j}
\end{equation}
where $R = (i\dot{\bar{\alpha}} - \bar{\alpha}\dot{\psi})/(1 - |\alpha|^2 )$. Note that Eq. (\ref{5 }) falls precisely into the algebraic form required by (\ref{reducible}). Thus, to derive the desired ODEs for $\alpha(t)$ and $\psi(t)$, we now subtract (\ref{5 }) from (\ref{reducible}) to obtain $N$ equations of the form $0 = C_1 e^{i\phi_j} + C_0 + C_{-1 } e^{-i\phi_j}$, for $j=1, \ldots, N$. If the system contains at least three distinct oscillator phases, then $C_1 $, $C_0 $, and $C_{-1 }$ must generically be zero. Explicitly,
\begin{equation} \label{6 }
f = \frac{i\dot{\bar{\alpha}} - \bar{\alpha}\dot{\psi}}{1 - |\alpha|^2 }, \;\;\;
g = \frac{\dot{\psi} + i\bar{\alpha}\dot{\alpha} - \alpha(i\dot{\bar{\alpha}} - \bar{\alpha}\dot{\psi})}{1 - |\alpha|^2 }. \end{equation}
The system (\ref{6 }) can be algebraically rearranged to give
\begin{subequations} \label{7 }
\begin{align}
\dot{\alpha} &= i(f\alpha^2 + g\alpha + \bar{f}) \label{7 a} \\
\dot{\psi} &= f\alpha + g + \bar{f}\bar{\alpha}. \label{7 b}
\end{align}
\end{subequations}
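The central claim, that integrating (\ref{7 a}) and (\ref{7 b}) and then applying (\ref{2 }) reproduces the phases obtained by integrating (\ref{reducible}) directly, can be spot-checked numerically. The sketch below (ours, with arbitrary constant coefficients and a hand-rolled RK4 integrator) verifies $e^{i\phi_j(t)} = M_t(e^{i\theta_j})$:

```python
import numpy as np

f, g = 0.3 + 0.2j, 1.0                        # constant coefficients (g real, as required)
theta = np.array([0.3, 1.1, 2.5, 4.0, 5.6])   # initial phases theta_j

def phase_rhs(phi):
    # Eq. (reducible): phi_dot = f e^{i phi} + g + conj(f) e^{-i phi}
    return (f * np.exp(1j * phi) + g + np.conj(f) * np.exp(-1j * phi)).real

def param_rhs(y):
    alpha, psi = y
    da = 1j * (f * alpha**2 + g * alpha + np.conj(f))            # Eq. (7a)
    dpsi = (f * alpha + g + np.conj(f) * np.conj(alpha)).real    # Eq. (7b)
    return np.array([da, dpsi])

def rk4(rhs, y, dt, steps):
    for _ in range(steps):
        k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y

dt, steps = 1e-3, 2000                        # integrate up to t = 2
phi_T = rk4(phase_rhs, theta.copy(), dt, steps)
alpha_T, psi_T = rk4(param_rhs, np.array([0.0 + 0.0j, 0.0]), dt, steps)

# Compare the directly integrated phases with the Mobius image of the initial phases
e = np.exp(1j * psi_T.real)
w = np.exp(1j * theta)
z_pred = (e * w + alpha_T) / (1.0 + np.conj(alpha_T) * e * w)
err = np.max(np.abs(np.exp(1j * phi_T) - z_pred))
```

With these (elliptic-regime) parameters $\alpha(t)$ stays inside the unit disc, and the two computations agree to within integration error.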
Equations (\ref{7 a}) and (\ref{7 b}) have been derived previously; they appear as Eqs. (10 ) and (11 ), respectively, in Pikovsky and Rosenblum's work~\cite{piko08 }, where they were derived by applying the transformation (\ref{WS_transformation}). Both their approach and the one above are certainly quick and clean, but they require us to guess the transformation ahead of time, and reveal little about why this transformation works. Incidentally, observe that under the change of variables $z_j~=~e^{i\phi_j}$, (\ref{reducible}) becomes
\begin{equation} \label{1.1 }
\dot{z_j} = i(f z_j^2 + g z_j + \bar{f}). \end{equation}
Equation~(\ref{1.1 }) is a Riccati equation with the form of (\ref{7 a})--- another coincidence that seems a bit surprising when approached this way. In the following subsection, we will see how these Riccati equations emerge naturally from the infinitesimal generators of the M\"{o}bius group. \subsection{Geometric Method of Finding $\dot{\alpha}$ \label{Geometric Method of Finding dotalpha}}
Now we change our view of M\"{o}bius maps slightly. Instead of thinking of $M$ as a map from the $w$-plane to the $z$-plane, we view it as a map from the $z$-plane to itself. This requires a small and temporary change in notation, but it makes things clearer, especially when we start to discuss differential equations on the complex plane. We begin by recalling some basic facts and definitions. Suppose the coupled oscillator system contains just three distinct phases among its $N$ oscillators. Then by a property of M\"{o}bius transformations, there exists a unique M\"{o}bius transformation from any point $\bm{z}_1 = (e^{i\theta_1 }, e^{i\theta_2 }, e^{i\theta_3 })$ to any other point $\bm{z}_2 = (e^{i\phi_1 }, e^{i\phi_2 }, e^{i\phi_3 })$ in the state space $S^1 ~\times~S^1 ~\times~S^1 $. If the system instead contains only one or two distinct phases, many M\"{o}bius transformations will take $\bm{z}_1 $ to $\bm{z}_2 $, so we can still reach every point of the phase space from every other point. However, if the system contains more than three distinct phases, say $N$, then there is not in general a M\"{o}bius transformation that transforms $\bm{z}_1 = (e^{i\theta_1 }, e^{i\theta_2 }, e^{i\theta_3 }, \dotsc, e^{i\theta_N})$ to $\bm{z}_2 = (e^{i\phi_1 }, e^{i\phi_2 }, e^{i\phi_3 }, \dotsc, e^{i\phi_N})$; only some points are accessible from $\bm{z}_1 $, while others are not. In the language of group theory, we say that $\bm{z}_2 $ is in the \textit{group orbit} of $\bm{z}_1 $ if there exists a M\"{o}bius map $M$ such that $\bm{z}_2 = M(\bm{z}_1 )$. Then, as a direct consequence of the fact that M\"{o}bius maps form a three-parameter group $G$ under composition, the group orbits of $G$ partition the phase space into three-dimensional manifolds (when the phase space is at least three-dimensional). 
To compute infinitesimal generators for $G$, we compute the time derivatives of the three one-parameter families of curves corresponding to the three parameters of $G$: $\psi$, $\text{Re} (\alpha)$ and $\text{Im} (\alpha)$. Each of the three families is obtained from the M\"{o}bius transformation by setting two of the three parameters to zero, and leaving the remaining parameter free. For example, if we set $t = 0 $ at $\bm{z} = (z_1, \dotsc, z_N)$, these three families are
\begin{equation} \label{7.1 }
\begin{split}
M_1 (\bm{z}) &= e^{it}\bm{z} \\
M_2 (\bm{z}) &= \frac{\bm{z} - t}{1 - t\bm{z}} \\
M_3 (\bm{z}) &= \frac{\bm{z} + it}{1 - it\bm{z}}
\end{split}
\end{equation}
where $M_1 (\bm{z})$ is written in place of $(M_1 (z_1 ), \dotsc, M_1 (z_N))$ and likewise for $M_2 (\bm{z})$ and $M_3 (\bm{z})$. We continue using this shorthand in subsequent equations, writing $h(\bm{z})$ in place of $(h(z_1 ), \dotsc, h(z_N))$ for any one-parameter function $h$. The time derivatives of the curves in (\ref{7.1 }) evaluated at $t = 0 $ then give a set of infinitesimal generators for $G$:
\begin{equation} \label{7.2 }
\begin{split}
\bm{v}_1 &= i\bm{z} \\
\bm{v}_2 &= \bm{z}^2 - 1 \\
\bm{v}_3 &= i\bm{z}^2 + i. \end{split}
\end{equation}
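These derivatives can be spot-checked by finite differences (a quick sketch of ours; the step size and sample points are arbitrary):

```python
import numpy as np

z = np.exp(1j * np.array([0.4, 1.7, 3.9]))    # sample points on the unit circle
t = 1e-6                                      # finite-difference step in the group parameter

M1 = lambda z, t: np.exp(1j * t) * z                  # rotation family
M2 = lambda z, t: (z - t) / (1.0 - t * z)             # family for Re(alpha)
M3 = lambda z, t: (z + 1j * t) / (1.0 - 1j * t * z)   # family for Im(alpha)

generators = [1j * z, z**2 - 1.0, 1j * z**2 + 1j]     # v1, v2, v3 of Eq. (7.2)
errs = [np.max(np.abs((M(z, t) - M(z, -t)) / (2.0 * t) - v))
        for M, v in zip([M1, M2, M3], generators)]
```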
Note that these three generators point out into the full $N$-dimensional complex space $\mathbb{C}^N$, as expected. Meanwhile, if we substitute $f = -ih_1 + h_2 $ (where $h_1 $ and $h_2 $ are real functions) into the original Riccati dynamics (\ref{1.1 }), we can rewrite this equation of motion in terms of the three infinitesimal generators:
\begin{equation} \label{7.3 }
\dot{\bm{z}} = i\bm{z}g + (\bm{z}^2 - 1 )h_1 + (i\bm{z}^2 + i)h_2. \end{equation}
The implication of the rewritten form (\ref{7.3 }) is then given by a theorem from Lie theory: if $L$ is a Lie group acting on a submanifold with linearly independent infinitesimal generators $\bm{v}_1, \dotsc, \bm{v}_n$, and $\bm{v}$ is a vector field of the form $\bm{v} = c_1 \bm{v}_1 + \dotsb + c_n\bm{v}_n$ where the coefficients $c_k$ depend only on time $t$, then the trajectory of the dynamics $\dot{\bm{z}} = \bm{v}$ from any initial condition $\bm{z}_0 $ can be expressed in the form $\{A_t(\bm{z}_0 )\}$ for a unique family $\{A_t\} \subset L$ parameterized by $t$. Since the M\"{o}bius group is a complex Lie group, this result can be applied directly to conclude (\ref{7.3 }) has the solution $\bm{z}(t) = M_t(\bm{z}_0 )$ where $\{M_t\}$ is a unique one-parameter family of M\"{o}bius transformations. Although we have so far assumed that the components $z_k$ of $\bm{z}$ lie on the complex unit circle, both (\ref{2 }) and (\ref{7.3 }) extend naturally to all of $\mathbb{C}^N$. This implies that $z_0 = 0 $ must evolve as $z(t) = M_t(0 )$ for some family $\{M_t\}$. However, Eq. ~(\ref{2 }) shows that $M(0 ) = \alpha$ for all $M \in G$. So $z(t) = M_t(0 ) = \alpha$ for all $t$, meaning that $\alpha(t)$ satisfies (\ref{7.3 }). Since (\ref{7.3 }) is just a rewriting of (\ref{1.1 }), the dynamics (\ref{7 a}) for $\alpha$ that we derived earlier are now placed in a geometrical context. This approach reveals that $\alpha(t)$ is just the image of the origin under a one-parameter family of M\"{o}bius maps, applied to any one complex plane of $\mathbb{C}^N$. It is even more illuminating to compute the infinitesimal generators within the $N$-fold torus $\mathbb{T}^N$ of phase values, i. e., the quantities $\bm{u}_k = -i \frac{d}{dt} \log M_k(e^{i\bm{\phi}})|_{t = 0 }$.
Carrying out this computation and rewriting the result in terms of $h_1 $ and $h_2 $ gives
\begin{equation} \label{7.4.1 }
\dot{\bm{\phi}} = g + (2 \sin \bm{\phi})h_1 + (2 \cos \bm{\phi})h_2
\end{equation}
which is precisely what we earlier referred to as a sinusoidally coupled system (\ref{reducible}), and whose solution must therefore be of the form $\bm{\phi}_t = -i\log M_t (e^{i\bm{\theta}})$ for some $M_t \in G$. This calculation finally clarifies what is so special about sinusoidally coupled systems (\ref{reducible}): they are induced naturally by a flow on the M\"{o}bius group. This fact underlies their reducibility and all their other beautiful (but non-generic) properties. \subsection{Geometric Method of Finding $\dot{\psi}$ \label{Geometric Method of Finding dotpsi}}
We turn next to the dynamics of $\psi$. As we will show in the next section, the action of the M\"{o}bius transformation involves a clockwise rotation of the oscillator phase density $\rho(\phi, t)$ by $\arg(\alpha) - \psi$ and a counterclockwise rotation by $\arg(\alpha)$. Hence, $\psi(t)$ may be viewed as the overall counterclockwise rotation of the distribution at time $t$ relative to the initial distribution at $t = 0 $. To support this interpretation, we show here that $\dot{\psi}$ equals the average value of the vector field on the circle, given by
\begin{equation} \label{averagephidot}
\langle \dot{\phi} \rangle = \frac{1 }{2 \pi} \int_{S^1 } \dot{\phi} \, d\theta. \end{equation}
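Before deriving this analytically, here is a numerical sketch (ours) of the claim $\langle \dot{\phi} \rangle = \dot{\psi}$, averaging the right-hand side of (\ref{3 }) over the circle for arbitrary parameter values:

```python
import numpy as np

alpha, psi = 0.5 + 0.2j, 0.7          # Mobius parameters, |alpha| < 1
dalpha, dpsi = -0.1 + 0.3j, 0.45      # arbitrary instantaneous rates of change

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
e = np.exp(1j * psi) * np.exp(1j * theta)

# phidot from Eq. (3), evaluated over the whole circle of initial angles theta
phidot = (dpsi * e - 1j * dalpha) / (e + alpha) \
         + ((1j * np.conj(dalpha) - np.conj(alpha) * dpsi) * e) / (1.0 + np.conj(alpha) * e)

avg = phidot.mean()                   # equals (1/2pi) * int phidot dtheta
err = abs(avg - dpsi)
```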
Observe that the integrand, the right-hand side of (\ref{3 }), consists of two terms:
\begin{equation} \label{7.4.2 }
\begin{split}
R_1 (w) &= \frac{\dot{\psi}e^{i\psi}w - i \dot{\alpha}}{e^{i\psi}w + \alpha} \\
R_2 (w) &= \frac{(i\dot{\bar{\alpha}} - \bar{\alpha}\dot{\psi})e^{i\psi}w}{1 + \bar{\alpha}e^{i\psi}w}. \end{split}
\end{equation}
By Cauchy's formula,
\begin{equation} \label{7.5 }
\frac{1 }{2 \pi i} \int_{S^1 } R_2 (w) \frac{dw}{w} = R_2 (0 ) = 0. \end{equation}
So $\langle \dot{\phi} \rangle$ simplifies to
\begin{equation} \label{7.6 }
\langle \dot{\phi} \rangle = \frac{1 }{2 \pi i} \int_{S^1 } R_1 (w) \frac{dw}{w}. \end{equation}
Note that $R_1 (w)$ has a pole in the unit disc, so we make the change of variables $w \rightarrow w^{-1 }$ to move this pole outside the circle. Evaluating the resulting integral yields
\begin{equation} \label{7.7 }
\frac{1 }{2 \pi i} \int_{S^1 } R_1 (w) \frac{dw}{w}
= \frac{1 }{2 \pi i} \int_{S^1 } \frac{\dot{\psi} - i\dot{\alpha} e^{-i\psi}w}{1 + \alpha e^{-i\psi}w} \frac{dw}{w} = \dot{\psi}
\end{equation}
which completes the demonstration that $\langle \dot{\phi} \rangle = \dot{\psi}$. We can now go back and evaluate the average vector field in a different way to find the differential equation that governs $\psi(t)$. Differentiating $\phi = -i \log M_t(w)$ with respect to time and substituting the result into $\dot{\psi} = \frac{1 }{2 \pi} \int_{S^1 } \dot{\phi} \, d\theta$, we obtain
\begin{equation} \label{7.8 }
\dot{\psi} = \frac{1 }{2 \pi i} \int_{S^1 } \frac{\dot{M}_t(w)}{M_t(w)} \frac{dw}{iw}. \end{equation}
Since $M_t$ obeys the Riccati equation, we can eliminate $\dot{M}_t$ in the numerator above to get
\begin{equation} \label{7.9 }
\dot{\psi} = \frac{1 }{2 \pi i} \int_{S^1 } (f M_t(w) + g + \bar{f} M_t(w)^{-1 }) \frac{dw}{w}. \end{equation}
There are three integrals to evaluate here. The third one involves a term $M_t(w)^{-1 }$ which has a pole inside the unit circle, so we do the same change of variables as before, $w \rightarrow w^{-1 }$, to move the pole outside. The corresponding integral then simplifies to
\begin{equation} \label{7.10 }
\frac{1 }{2 \pi i} \int_{S^1 } M_t(w)^{-1 } \frac{dw}{w}
= \frac{1 }{2 \pi i} \int_{S^1 } \frac{e^{-i\psi}w + \bar{\alpha}}{1 + \alpha e^{-i\psi}w} \frac{dw}{w} = \bar{\alpha}
\end{equation}
where the final integration follows from Cauchy's formula. Similarly, we use Cauchy's formula to integrate the first and second terms of the integrand in (\ref{7.9 }), and thereby obtain the desired differential equation for $\psi$, thus rederiving (\ref{7 b}) found earlier. \section{CONNECTIONS TO PREVIOUS RESULTS \label{CONNECTIONS TO PREVIOUS RESULTS}}
\subsection{Relation to the Watanabe-Strogatz Transformation \label{Relation to the Watanabe-Strogatz Transformation}}
It is natural to ask how the trigonometric transformation (\ref{WS_transformation}) used in earlier studies~\cite{wata93, wata94, piko08 } relates to the M\"{o}bius transformation (\ref{2 }) used above. As we will see, (\ref{WS_transformation}) may be viewed as a restriction of (\ref{2 }) to the complex unit circle. First, by trigonometric identities, we have
\begin{equation} \label{9 }
\tan\left[\frac{\phi-\Phi}{2 }\right] = i\frac{1 - e^{i(\phi-\Phi)}}{1 + e^{i(\phi-\Phi)}}. \end{equation}
To connect this to M\"{o}bius transformations, consider what happens when we apply the map defined by (\ref{2 }) to a point $w = e^{i\theta}$ on the unit circle. Since the image is also a point on the unit circle, it can be written as $M(e^{i\theta}) = e^{i\phi}$ for some angle $\phi$. Next let $\alpha = re^{i\Phi}$ and divide both sides of (\ref{2 }) by $e^{i\Phi}$. Thus
\begin{equation} \label{trigtoMobius}
e^{i(\phi - \Phi)} = \frac{e^{i(\theta-\Theta)} + r}{1 + re^{i(\theta-\Theta)}}
\end{equation}
where $\Theta = \Phi - \psi$. Substitution of (\ref{trigtoMobius}) into the right side of (\ref{9 }) gives
\begin{equation} \label{10 }
\tan\left[\frac{\phi-\Phi}{2 }\right] = \frac{1 -r}{1 +r} \bigg(i\frac{1 - e^{i(\theta-\Theta)}}{1 + e^{i(\theta-\Theta)}}\bigg). \end{equation}
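A numerical spot-check of (\ref{10 }) (our own, with arbitrary parameter values), including the coefficient identity $\sqrt{(1 +\gamma)/(1 -\gamma)} = (1 -r)/(1 +r)$ for $\gamma = -2 r/(1 +r^2 )$:

```python
import numpy as np

r, Phi, psi = 0.55, 0.9, 0.3          # arbitrary parameters, 0 < r < 1
alpha = r * np.exp(1j * Phi)
Theta = Phi - psi

theta = np.array([0.1, 1.2, 2.8, 4.3, 5.7])
w = np.exp(1j * theta)

# phi from the Mobius map (2)
z = (np.exp(1j * psi) * w + alpha) / (1.0 + np.conj(alpha) * np.exp(1j * psi) * w)
phi = np.angle(z)

lhs = np.tan((phi - Phi) / 2.0)
rhs = ((1.0 - r) / (1.0 + r)) * np.tan((theta - Theta) / 2.0)
err = np.max(np.abs(lhs - rhs) / (1.0 + np.abs(rhs)))     # relative discrepancy

# the coefficient in Watanabe-Strogatz form, with gamma = -2r/(1+r^2)
gamma = -2.0 * r / (1.0 + r**2)
coeff_err = abs(np.sqrt((1.0 + gamma) / (1.0 - gamma)) - (1.0 - r) / (1.0 + r))
```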
By the identity (\ref{9 }), Eq. (\ref{10 }) is equivalent to (\ref{WS_transformation}) with $\gamma = -2 r/(1 +r^2 )$. We can now see how the M\"{o}bius parameters $\alpha$ and $\psi$ operate on the set of $e^{i\theta}$ in $\mathbb{C}$. From the relationships between $\Theta$, $\gamma$, $\Phi$ and the M\"{o}bius parameters, the initial phase density is first rotated clockwise around $S^1 $ by $\arg(\alpha) - \psi$, then squeezed toward one side of the circle as a function of $|\alpha|$, and afterwards rotated counterclockwise by $\arg(\alpha)$. The squeeze, which takes uniform distributions to Poisson kernels, can be thought of as a composition of inversions, dilations and translations in the complex plane. \subsection{Invariant Manifold of Poisson Kernels \label{Invariant Manifold of Poisson Kernels}}
In Section~\ref{Ott-Antonsen ansatz} and in a previous paper~\cite{marv09 }, we used the Ott-Antonsen ansatz (\ref{OA_ansatz}) to show that systems of identical oscillators with global sinusoidal coupling contain a degenerate two-dimensional manifold among the three-dimensional leaves of their phase space foliation. This two-dimensional manifold, which we called the Poisson submanifold, consists of phase densities $\rho(\phi, t)$ that have the form of a Poisson kernel. We now rederive these results within the framework of M\"{o}bius transformations. Let $T$ denote one instance of the transformation (\ref{WS_transformation}); in other words, fix the parameters $\Phi$, $\gamma$ and $\Theta$ and let $\phi = T(\theta)$. Let $\mu$ denote the normalized uniform measure on $S^1 $; thus
\begin{equation} \label{uniform_measure}
d\mu(\theta) = \frac{1 }{2 \pi} d\theta. \end{equation}
The transformation $T$ maps $\mu$ to the measure $T_*\mu$, and, by the usual formula for transformation of single-variable measures, we have $d(T_*\mu)(\phi) = \frac{1 }{2 \pi} T^{-1 }(\phi)'d\phi$, where the prime denotes differentiation by $\phi$. From this equation it follows that $d(T_*\mu)(\phi)$ has the form of the Poisson kernel, because the inverse of the M\"{o}bius transformation (\ref{2 }) is
\begin{equation} \label{11 }
M^{-1 }(z) = e^{-i\psi} \frac{z - \alpha}{1 - \bar{\alpha}z}
\end{equation}
which implies
\begin{equation} \label{12 }
T^{-1 }(\phi) = -\psi - i\log(e^{i\phi} - \alpha) + i\log(1 - \bar{\alpha}e^{i\phi}). \end{equation}
Then by differentiation and algebraic rearrangement, we obtain
\begin{equation} \label{13 }
T^{-1 }(\phi)' = \frac{1 - r^2 }{1 - 2 r\cos(\phi-\Phi) + r^2 }. \end{equation}
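As a numerical spot-check of (\ref{13 }) (ours, with arbitrary parameters), one can differentiate (\ref{12 }) by finite differences and compare with the Poisson kernel form:

```python
import numpy as np

r, Phi, psi = 0.6, 1.1, 0.4
alpha = r * np.exp(1j * Phi)

def T_inv(phi):
    # Eq. (12): T^{-1}(phi) = -psi - i log(e^{i phi} - alpha) + i log(1 - conj(alpha) e^{i phi})
    z = np.exp(1j * phi)
    return -psi - 1j * np.log(z - alpha) + 1j * np.log(1.0 - np.conj(alpha) * z)

phi = np.array([0.2, 1.0, 2.3, 3.7, 5.1])   # generic points, away from log branch cuts
h = 1e-6
deriv = (T_inv(phi + h) - T_inv(phi - h)) / (2.0 * h)   # central difference

poisson = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(phi - Phi) + r**2)
err = np.max(np.abs(deriv - poisson))
```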
The integral of $T^{-1 }(\phi)'$ over $[0,2 \pi)$ is $2 \pi$, so $d(T_*\mu)(\phi)$ is indeed a normalized Poisson kernel. Finally, if the phase distribution $d(T_*\mu)(\phi)/d\phi$ ever takes the form of a Poisson kernel with parameters $r = r_0 $ and $\Phi = \Phi_0 $, then we can set $r(0 ) = r_0 $, $\Phi(0 ) = \Phi_0 $ and $d\mu(\theta) = \frac{1 }{2 \pi}d\theta$, and the above calculation shows that $d(T_*\mu)(\phi)/d\phi$ remains a Poisson kernel for all future and past times. Hence, the set of normalized Poisson kernels constitutes an invariant submanifold of the infinite-dimensional phase space. The above demonstration also reveals that the Poisson submanifold has dimension $k + 2 $ where $k$ is the number of state variables besides $\alpha$, $\psi$ and the oscillator phases. More concretely, it implies that when the system lies on the Poisson submanifold, we can write $\dot{\alpha}$ as depending only on $\alpha$; it is not possible to require $\dot{\alpha}$ to depend on $\psi$ in any real coupling scheme. To see this, we first consider the case in which the system is closed and there are no additional state variables. Suppose $\dot{\alpha}$ does \textit{not} depend only on $\alpha$. Then some of the state space trajectories cross when projected onto the unit disc of $\alpha$ values. At the point of any crossing, the phase density $\rho(\phi, t)$ has multiple $\dot{\alpha}$ values. But by (\ref{13 }), the phase density depends only on $\alpha$, so there is nothing in the state space that can distinguish between the different $\dot{\alpha}$ values at that point. Hence, $\dot{\alpha}$ must be expressible in terms of $\alpha$ alone. By an analogous argument, $\dot{\alpha}$ is also independent of $\psi$ on the Poisson submanifold when there are $k$ other state variables besides the oscillator phases and M\"{o}bius parameters. 
On the other hand, if the time-dependence of $\dot{\alpha}$ arises only via a dependence on $\alpha$, then $r$ and $\Phi$ decouple from $\psi$ and the dynamics are two-dimensional regardless of whether the system is evolving on the Poisson submanifold or not. Observe that we can always force $\psi$-independence for $\dot{\alpha}$ by throwing away enough information about the locations of the other phases. For instance, in the extreme, we may simply make $f$ and $g$ constant. Finally, even when $\dot{\alpha}$ does not depend solely on $\alpha$, the dynamics still may be two dimensional. For example, in the case of completely integrable systems~\cite{wata93 }, the variables $r$ and $\Phi - \psi$ decouple from $\Phi$ to foliate the phase space with two-dimensional tori. \section{CHARACTERISTICS OF THE MOTION \label{CHARACTERISTICS OF THE MOTION}}
\subsection{Cross Ratios as Constants of Motion \label{Cross Ratios as Constants of Motion}}
The reduction of (\ref{reducible}) by the three-parameter M\"{o}bius group suggests that the corresponding system of coupled oscillators should have $N - 3 $ constants of motion. As we will see, these conserved quantities are given by the cross ratios of the points $z_j = e^{i\phi_j}$ on $S^1 $. Recall from complex analysis~\cite{conway} that the \emph{cross ratio} of four distinct points $z_1, z_2, z_3, z_4 \in \mathbb{C} \cup \{\infty\}$ is
\begin{equation} \label{14.1 }
(z_1, z_2, z_3, z_4 ) = \frac{z_1 - z_3 }{z_1 - z_4 } \cdot \frac{z_2 - z_4 }{z_2 - z_3 }
\end{equation}
This quantity is conserved under M\"{o}bius transformations: for all $\alpha$ and $\psi$, $(M(z_1 ), M(z_2 ), M(z_3 ), M(z_4 )) = (z_1, z_2, z_3, z_4 )$. Hence, the $N!/(N-4)!$ cross ratios of the $N$ oscillator phases remain constant along the trajectories in phase space. We denote the constant value of $(z_1, z_2, z_3, z_4 )$ as $\lambda_{1234 }$. Of course, we could have defined the cross ratio for four-tuples of \textit{non-distinct} points as well, but these quantities are trivially conserved regardless of the dynamics and hence do not reduce the dimension of the phase space.
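Invariance of the cross ratio under M\"{o}bius maps is easy to confirm numerically (a sketch of ours; the sample points and parameters are arbitrary):

```python
import numpy as np

def cross_ratio(z1, z2, z3, z4):
    # Eq. (14.1)
    return (z1 - z3) / (z1 - z4) * (z2 - z4) / (z2 - z3)

def mobius(w, alpha, psi):
    # Eq. (2)
    return (np.exp(1j * psi) * w + alpha) / (1.0 + np.conj(alpha) * np.exp(1j * psi) * w)

z = np.exp(1j * np.array([0.4, 1.3, 2.9, 5.0]))   # four distinct points on the circle
before = cross_ratio(*z)

alpha, psi = 0.7 * np.exp(0.5j), 2.2              # arbitrary Mobius parameters
after = cross_ratio(*mobius(z, alpha, psi))
err = abs(after - before)
```

Note that `before` is (numerically) real, consistent with the points being concyclic.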
With a little more work (see the Appendix), one can select $N - 3 $ functionally independent cross ratios, and confirm that the rest of the cross ratios are functionally dependent on these $N - 3 $ integrals. Since the state space of the phases is an $N$-fold torus of real variables, we expect that each of the constants of motion can be expressed in terms of real functions and variables. Indeed, if $z_1 $, $z_2 $, $z_3 $, $z_4 $ lie on the unit circle, then the cross ratio $(z_1, z_2, z_3, z_4 )$ lies on $\mathbb{R} \cup \{\infty\}$. We see this explicitly by pulling out $e^{\frac{i}{2 }(\phi_1 + \phi_3 )}$ from the factor $(e^{i\phi_1 } - e^{i\phi_3 })$ of $(e^{i\phi_1 }, e^{i\phi_2 }, e^{i\phi_3 }, e^{i\phi_4 })$, and likewise for the other three factors, and then canceling the factors $e^{\frac{i}{2 }(\phi_1 + \phi_2 + \phi_3 + \phi_4 )}$ in the numerator and denominator to find
\begin{equation} \label{17.1 }
(e^{i\phi_1 }, e^{i\phi_2 }, e^{i\phi_3 }, e^{i\phi_4 }) = \frac{S_{13 } S_{24 }}{S_{14 } S_{23 }}
\end{equation}
where
\begin{equation} \label{S_ij}
S_{ij} = \sin\left[\frac{\phi_i - \phi_j}{2 }\right]. \end{equation}
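A quick numerical check (ours) that the sine form (\ref{17.1 }) agrees with the definition (\ref{14.1 }):

```python
import numpy as np

phi = np.array([0.7, 1.9, 3.2, 5.5])    # arbitrary distinct phases
z = np.exp(1j * phi)

cr = (z[0] - z[2]) / (z[0] - z[3]) * (z[1] - z[3]) / (z[1] - z[2])   # Eq. (14.1)

S = lambda i, j: np.sin((phi[i] - phi[j]) / 2.0)
sine_form = S(0, 2) * S(1, 3) / (S(0, 3) * S(1, 2))                  # Eq. (17.1)

err = abs(cr - sine_form)
```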
This way of writing the cross ratio also suggests a relationship with the constants of motion reported by Watanabe and Strogatz~\cite{wata93, wata94 } for completely integrable systems (those with $f = \frac{1 }{2 }e^{i\delta}\overline{\langle z \rangle}$ and $g = 0 $, where $\langle z \rangle$ is the phase centroid (\ref{z})). These constants of motion, which we will call \textit{WS integrals}, take the form
\begin{equation} \label{17.2 }
I = S_{12 } S_{23 } \dotsm S_{(N-1 )N} S_{N1 }
\end{equation}
where any permutation of the indices generates another WS integral. As previously demonstrated~\cite{wata94 }, exactly $N - 2 $ of the $N!$ index permutations of (\ref{17.2 }) are functionally independent. As we might anticipate, the WS integrals imply that the cross ratios are constants of motion: consider two distinct WS integrals $I = S_{ik}S_{kl}S_{lj}\Pi$ and $I' = S_{il}S_{lk}S_{kj}\Pi$, where $\Pi$ denotes the remaining product of factors. Assume $\Pi$ is the same for both $I$ and $I'$. Then $I/I' = -\lambda_{ijkl}$. Since $i$, $j$, $k$, $l$ are arbitrary, we can generate all cross ratios via this procedure. Additionally, if a single WS integral holds for a system in which the cross ratios are invariant, then \textit{all} WS integrals hold, since we can arbitrarily permute the indices of the first WS integral by sequences of transpositions of the form $I = -\lambda_{ijkl} I'$ in which $l$ and $k$ are interchanged. \subsection{Fourier Coefficients of the Phase Distribution \label{Fourier Coefficients of the Phase Distribution}}
When we introduced $f$ and $g$ in Section~\ref{BACKGROUND}, we required that they depend on the phases \emph{only} through the Fourier coefficients of the phase density $\rho(\phi, t)$. Since the centroid (\ref{z}) is the Fourier coefficient corresponding to the first harmonic $e^{-i\phi}$, this condition is met by standard Kuramoto models, Josephson junction series arrays, laser arrays and many other well-studied systems of globally coupled oscillators. Our goal now is to show that this condition implies the closure of (\ref{7 }), in the sense that $\dot{\alpha}$ and $\dot{\psi}$ depend only on $\alpha$ and $\psi$. To do so, we will show that the Fourier coefficients of all higher harmonics $e^{-im\phi}$, for any integer $m$, may be expressed in terms of $\alpha$ and $\psi$. For a fixed measure $\mu(\theta)$ on $[0,2 \pi)$ and a transformation $T(\theta) = -i \log M(\theta)$ of this measure via the M\"{o}bius map $M$, the Fourier coefficient of $e^{-im\phi}$ is given by
\begin{equation} \label{18 }
\langle z^m \rangle = \int_{S^1 } e^{im\phi} d(T_*\mu)(\phi) = \int_{S^1 } M(e^{i\theta})^m d\mu(\theta). \end{equation}
We use the notation $\langle z^m \rangle$ as a reminder that $\langle z \rangle$ is the phase centroid. We assume that we can take a Fourier expansion of $\mu(\theta)$, so
\begin{equation} \label{Fourier_expansion_of_measure}
d\mu(\theta) = \frac{1 }{2 \pi} \sum_{n = -\infty}^\infty c_n e^{in\theta} d\theta
\end{equation}
where the constants $c_n$ are independent of $\theta$. Since the phase distribution must be real and normalized, we know that $c_{-n} = \bar{c}_n$ and $c_0 = 1 $, so we can write
\begin{equation} \label{19 }
d\mu = \frac{1 }{2 \pi i} \biggl(1 + P(w) + \overline{P(w)}\biggr) \frac{dw}{w}
\end{equation}
where $w = e^{i\theta}$ and $P(w) = \sum_{n=1 }^\infty c_n w^n$. The formula for $\langle z^m \rangle$ then becomes:
\begin{equation} \label{20 }
\langle z^m \rangle = \frac{1 }{2 \pi i} \int_{S^1 } M(w)^m \biggl(1 + P(w) + \overline{P(w)}\biggr) \frac{dw}{w}. \end{equation}
Now, $M(w)^m (1 + P(w))$ is analytic on the open disc $\mathbb{D}$ and $M(0 )^m (1 + P(0 )) = \alpha^m$. Meanwhile, the remaining term of the integrand of (\ref{20 }) has the complex conjugate
\begin{equation} \label{22 }
\frac{\overline{M(w)}^m P(w)}{w} = \biggl(\frac{1 + \bar{\alpha}e^{i\psi}w}{e^{i\psi}w + \alpha}\biggr)^m \frac{P(w)}{w}
\end{equation}
which features an order-1 pole at $w = 0 $ and an order-$m$ pole at $w = -e^{-i\psi}\alpha$. The first residue evaluates to zero, while the second is given by
\begin{equation} \label{24 }
\frac{e^{-im\psi}}{(m-1 )!}\frac{d^{m-1 }}{dw^{m-1 }}\biggl[(1 + \bar{\alpha}e^{i\psi}w)^m \frac{P(w)}{w}\biggr] \biggr|_{w = -e^{-i\psi}\alpha}
\end{equation}
Therefore, $\langle z^m \rangle$ is equal to $\alpha^m$ added to the complex conjugate of this second residue:
\begin{multline} \label{24.5 }
\langle z^m \rangle = \alpha^m + \sum_{k=0 }^{m-1 } \frac{(1 - |\alpha|^2 )^{k+1 }}{k!} \\
\times \sum_{n=0 }^{\infty} (-1 )^n \frac{(n+k)!}{n!} \bar{c}_{n+k+1 } e^{i(m+n)\psi} \bar{\alpha}^n. \end{multline}
For example, the centroid may be written in terms of $\alpha$ and $\psi$ as
\begin{equation} \label{25 }
\langle z \rangle = \alpha + (|\alpha|^2 - 1 ) \sum_{n=1 }^\infty (-1 )^n \bar{c}_n e^{in\psi} \bar{\alpha}^{n-1 }. \end{equation}
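Equation (\ref{25 }) can be checked numerically. The sketch below (illustrative parameter values; the M\"{o}bius map convention $M(w) = (e^{i\psi}w + \alpha)/(1 + \bar{\alpha}e^{i\psi}w)$, which satisfies $M(0 ) = \alpha$ and is consistent with (\ref{22 }), is assumed) compares direct quadrature of the centroid for the density $(1 + \cos n\theta)/2 \pi$, i.e. $c_n = 1 /2 $ and all other coefficients zero, against the closed form:

```python
import cmath
import math

alpha = 0.4 + 0.2j        # hypothetical Mobius parameters, |alpha| < 1
psi = 0.9
n = 3                     # wavenumber of the sinusoidal density

def M(w):
    # Mobius map with M(0) = alpha; convention consistent with Eq. (22)
    e = cmath.exp(1j * psi)
    return (e * w + alpha) / (1 + alpha.conjugate() * e * w)

# direct quadrature: <z> = (1/2pi) \int M(e^{i theta}) (1 + cos n theta) d theta
K = 4096
z_num = sum(
    M(cmath.exp(2j * math.pi * k / K)) * (1 + math.cos(2 * math.pi * n * k / K))
    for k in range(K)
) / K

# closed form (25) with c_n = 1/2 the only nonzero Fourier coefficient
z_form = alpha + (abs(alpha) ** 2 - 1) * (-1) ** n * 0.5 \
         * cmath.exp(1j * n * psi) * alpha.conjugate() ** (n - 1)

assert abs(z_num - z_form) < 1e-10
```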
This calculation reveals what is so special about the Poisson submanifold. Recall from Section~\ref{Invariant Manifold of Poisson Kernels} that Poisson kernels arise when we take $\mu$ to be the uniform measure. Then $c_n = 0 $ for all $n \neq 0 $ and $\langle z \rangle = \alpha$. In this exceptional case, the centroid simply evolves according to the Riccati equation (\ref{riccati}) and the dynamics of $\alpha$ and $\psi$ decouple in Eqs. ~(\ref{7 a}), (\ref{7 b}). (A similar observation about the crucial role of the uniform measure here was made by Pikovsky and Rosenblum~\cite{piko08 }. The centroid evolution equation (\ref{7 a}) on the Poisson submanifold was first written down by Ott and Antonsen; see Eq. (6 ) in Ref. ~\cite{otta08 }. )
But for the generic case of states lying off the Poisson submanifold, $\langle z \rangle$ is no longer equal to $\alpha$ and the reduced dynamics become fully three-dimensional, due to the coupling between $\alpha$ and $\psi$ induced by the relation (\ref{25 }) and the dependence of $f$ and $g$ on $\langle z \rangle$ and the higher Fourier coefficients. In the next section we will explore some of the possibilities for such three-dimensional flows. \begin{figure}
\includegraphics[scale = 1 ]{pointplot}
\caption{The qualitative trend of chaos observed in the first quadrant of the $b$-$\Omega$ parameter plane is indicated by the shaded gradient. As the shade darkens near the bifurcation curve $\Omega = b$, chaos fills increasingly larger regions of the submanifolds containing the sinusoidal initial distributions. Points (A) and (B) are chosen as (1 /20, 3 /4 ) and (17 /10, 1 ), respectively. Representative Poincar\'{e} sections for these points are shown in Fig. ~\ref{kamchaosA} and Fig. ~\ref{kamchaosB}. The region $b < 0 $ is grayed out to represent that negative values of $b$ are not physical. \label{pointplot}}
\end{figure}
\begin{figure*}
\includegraphics[scale = 1 ]{kamchaosA}
\caption{Poincar\'{e} sections of $\alpha$ at $\psi \, (\text{mod} \; 2 \pi) = 0 $ for a resistively-loaded series array of Josephson junctions with $b = 1 /20, \Omega = 3 /4 $ (pt. (A) in Fig. ~\ref{pointplot}). The initial distributions are sinusoidal with wavenumber $n$, where $n$ is (a) 1, (b) 2, (c) 3, (d) 4, (e) 5, (f) 6, (g) 7, (h) 8, (i) 16, (j) 32, and (k) $\infty$, i. e. on the Poisson submanifold. In (j) and (k), the complete trajectories are plotted instead of the intersections with the plane $\psi \, (\text{mod} \; 2 \pi) = 0 $. \label{kamchaosA}}
\end{figure*}
\begin{figure*}
\includegraphics[scale = 1 ]{kamchaosB}
\caption{Poincar\'{e} sections of $\alpha$ at $\psi \, (\text{mod} \; 2 \pi) = 0 $ for a resistively-loaded series array of Josephson junctions with $b = 17 /10, \Omega = 1 $ (pt. (B) in Fig. ~\ref{pointplot}). The initial distributions are sinusoidal with wavenumber $n$, where $n$ is (a) 1, (b) 2, (c) 4, (d) 8, (e) 16, (f) 32, (g) 64, and (h) $\infty$, i. e. on the Poisson submanifold. In (g) and (h), the full trajectories are plotted. \label{kamchaosB}}
\end{figure*}
\section{CHAOS IN JOSEPHSON ARRAYS \label{CHAOS IN JOSEPHSON ARRAYS}}
Although the leaves of the foliation imposed by the M\"{o}bius group action are only three-dimensional, they often contain chaos for commonly studied $f$ and $g$~\cite{golo92, wata94 }. In this section, we showcase this phenomenon by specializing to the case of a resistively-loaded series array of overdamped Josephson junctions. In several previous studies of sinusoidally coupled oscillators in the continuum limit, it was found that under certain conditions, the Fourier harmonics of the phase density $\rho(\phi, t)$ evolved as if they were decoupled, at least near certain points in state space~\cite{stro91, golo92, stro93 }. In the spirit of these observations, we can get a sense for how individual harmonics contribute to the chaos by starting the system (\ref{7 }) from sinusoidal phase densities with different wavenumbers $n$. To be more precise, we choose an initial density
\begin{equation} \label{initial_rho_density}
\rho(\phi,0 ) = \frac{1 }{2 \pi}(1 + \cos n\phi). \end{equation}
At $t=0 $, we choose $\alpha = \psi = 0 $ so that $M_t$ in Eq. ~(\ref{2 }) is simply the identity map, and the time-dependent change of variables $e^{i\phi} = M_t (e^{i\theta})$ reduces to $\phi = \theta$, initially. Thus, the corresponding density of $\theta$ is
\begin{equation} \label{26 }
\sigma_n(\theta) = \frac{1 }{2 \pi}(1 + \cos n\theta). \end{equation}
This density is independent of time, just as the angles $\theta_j$ were in the finite-$N$ case. Next we flow the density forward by $e^{i\phi} = M_t (e^{i\theta})$, where the M\"{o}bius parameters $\alpha(t), \psi(t)$ satisfy the reduced flow (\ref{7 }). Then, by our earlier results, the resulting density $\rho(\phi, t)$ automatically satisfies the governing equations (\ref{continuity}), (\ref{velocity}). The three-dimensional plot of $\text{Re} (\alpha(t))$, $\text{Im} (\alpha(t))$ and $\psi(t)$ indicates how such a single-harmonic density evolves in time, revealing for example whether it exhibits chaos, follows a periodic orbit, or approaches a fixed point. To ease the notation, from now on we write $\alpha$ in Cartesian coordinates as
\begin{equation} \label{Cartesian}
\alpha = x + iy.
\end{equation}
In these variables the reduced flow (\ref{7 }) becomes
\begin{equation} \label{28 }
\begin{split}
\dot{x} &= -uy + \text{Im} (f)(1 - x^2 - y^2 ) \\
\dot{y} &= ux + \text{Re} (f)(1 - x^2 - y^2 ) \\
\dot{\psi} &= u
\end{split}
\end{equation}
where
\begin{equation}
u = 2 x \, \text{Re} (f) + g - 2 y \, \text{Im} (f). \end{equation}
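A basic consistency check on this Cartesian form of the flow (taken here as an assumption; its special case with real $f$ reproduces (\ref{29 }) below) is that the boundary $|\alpha| = 1 $ of the disc is invariant, since $d|\alpha|^2 /dt = 2 (x\dot{x} + y\dot{y})$ vanishes there:

```python
import math
import random

def reduced_flow(x, y, f, g):
    # assumed Cartesian form of the reduced flow: alpha = x + iy, f complex, g real
    u = 2 * x * f.real + g - 2 * y * f.imag
    dx = -u * y + f.imag * (1 - x * x - y * y)
    dy = u * x + f.real * (1 - x * x - y * y)
    return dx, dy, u          # dpsi/dt = u

random.seed(1)
for _ in range(100):
    t = random.uniform(0, 2 * math.pi)
    x, y = math.cos(t), math.sin(t)        # a point with |alpha| = 1
    f = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    g = random.uniform(-1, 1)
    dx, dy, _ = reduced_flow(x, y, f, g)
    # d|alpha|^2/dt = 2(x dx + y dy) vanishes on the unit circle,
    # so the boundary of the alpha-disc is invariant under the flow
    assert abs(x * dx + y * dy) < 1e-12
```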
We immediately see that for every fixed point of this system, $|\alpha| = 1 $ and $\psi$ is arbitrary. If for some change of state variables $\zeta(x, y, \psi)$, $\eta(x, y, \psi)$, and $\xi(x, y, \psi)$, the ODEs $\dot{\zeta}$ and $\dot{\eta}$ constitute a closed two-dimensional system and $\dot{\xi}$ receives all of its $t$-dependence through $\zeta$ and $\eta$, then there could be other fixed points for the physical system, namely where $\dot{\zeta} = \dot{\eta} = 0 $ but $\dot{\xi} \neq 0 $. Examples of the second type of fixed point include the splay states found on the Poisson submanifold~\cite{tsan91, stro93 }. As discussed in Section~\ref{Reducible systems}, series arrays of Josephson junctions with a resistive load have dynamics given by Eqs. ~(\ref{jj_resistive}), (\ref{continuity}), and (\ref{velocity}), with $f = -(b+1 )/2 $ and $g = \Omega + \text{Re} \langle z \rangle$, where $b$ and $\Omega$ are dimensionless combinations of certain circuit parameters~\cite{tsan91, marv09 } and $ \langle z \rangle$ is the complex order parameter (\ref{z}). The dynamics of $x$, $y$ and $\psi$ are given by substitution into (\ref{28 }):
\begin{equation} \label{29 }
\begin{split}
\dot{x} &= -uy \\
\dot{y} &= ux - \frac{b+1 }{2 }(1 - x^2 - y^2 ) \\
\dot{\psi} &= u
\end{split}
\end{equation}
with $u = \Omega + \text{Re} \langle z \rangle - (b+1 )x$. From (\ref{25 }) and (\ref{26 }),
$\text{Re} \langle z \rangle = x + (-1 )^n \frac{1 }{2 }(x^2 + y^2 - 1 )(x^2 +y^2 )^{(n-1 )/2 } \cos[n\psi-(n-1 )\tan^{-1 }(y/x)]$. We can now plot the phase portrait for (\ref{29 }) on the cylinder $\{(x, y, \psi) | \, x, y, \psi \in \mathbb{R}, x^2 + y^2 \leq 1 \}$. In the simple case where $\alpha$ decouples from $\psi$, trajectories can be projected down onto the $\alpha$-disc without intersecting themselves or each other. However, in the more typical case that $\alpha$ and $\psi$ are interdependent, we use Poincar\'{e} sections at $\psi \, (\text{mod} \; 2 \pi) = 0 $ to sort out the structure. In these Poincar\'{e} sections, quasiperiodic trajectories (ideally) appear as closed curves or island chains, periodic trajectories appear as fixed points or period-$p$ points of integer period, and chaotic trajectories fill the remaining regions of the unit disc. First, however, we must choose an appropriate $b$ and $\Omega$. To do so, we consider their definitions in terms of the original circuit parameters: $b = R/(NR_J)$ and $\Omega = bI_b/I_c$, where $N$ is the number of junctions, $I_b$ the source current, $R$ the load resistance, $I_c$ the critical current of each Josephson junction, and $R_J$ the intrinsic Josephson junction resistance~\cite{tsan91, marv09 }. Because the resistances must be positive in the physical system, we examine only $b > 0 $ in our simulations. Additionally, $I_c$ represents a positive current magnitude, while $I_b$ reflects both a source current magnitude and direction. Since the circuit is symmetric with respect to reversal of the source circuit (see Fig. 1 of~\cite{marv09 }), the corresponding dynamical system is left unchanged by the reflection $\Omega \rightarrow -\Omega, x \rightarrow -x$. Hence, we also restrict our study to positive values of $\Omega$. If $b/\Omega > 1 $, (\ref{29 }) implies there are fixed points at $x^* = \Omega/b$, $y^* = \pm \sqrt{1 - \Omega^2 /b^2 }$ for arbitrary $\psi$. 
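The fixed-point condition can be verified directly. The sketch below (hypothetical parameter values with $b/\Omega > 1 $) evaluates the right-hand side of (\ref{29 }) at $x^* = \Omega/b$, $y^* = -\sqrt{1 - \Omega^2 /b^2 }$ for several values of $\psi$:

```python
import math

b, Omega, n = 1.7, 1.0, 3     # hypothetical parameters with b/Omega > 1

def re_z(x, y, psi):
    # Re<z> for the sinusoidal initial density with wavenumber n
    r2 = x * x + y * y
    return x + (-1) ** n * 0.5 * (r2 - 1) * r2 ** ((n - 1) / 2.0) \
           * math.cos(n * psi - (n - 1) * math.atan2(y, x))

def rhs(x, y, psi):
    # right-hand side of system (29)
    u = Omega + re_z(x, y, psi) - (b + 1) * x
    return (-u * y,
            u * x - 0.5 * (b + 1) * (1 - x * x - y * y),
            u)

xs = Omega / b
ys = -math.sqrt(1 - (Omega / b) ** 2)
for psi in (0.0, 1.3, 4.0):
    # all three components vanish for arbitrary psi
    assert all(abs(v) < 1e-12 for v in rhs(xs, ys, psi))
```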
In numerical experiments, the negative-$y^*$ line of fixed points appears to attract distributions, while the positive-$y^*$ line repels them. Along the bifurcation curve $\Omega = b$, the two rows of fixed points merge at $x = 1 $, and we find computational evidence that a splay state (for which $\dot{x} = \dot{y} = 0 $) emerges from their union and moves inside the unit disc along the $x$-axis toward the origin as $b$ is decreased or $\Omega$ is increased. We can see from (\ref{29 }) that any such state must lie on the $x$-axis for all parameter values, as it did in previous characterizations of the Poisson submanifold~\cite{marv09 }. For the submanifolds we examined, chaos only appeared in the portion of the first quadrant in the $b$-$\Omega$ plane that did not contain the fixed points, and the chaos became more widespread as $b/\Omega \rightarrow 1 $. This is illustrated schematically in Fig. ~\ref{pointplot}; the gradient of increasing darkness represents increasingly pervasive chaos. In submanifolds where the chaos was not widespread, the dynamics on the Poincar\'{e} sections were reminiscent of a Kolmogorov-Arnold-Moser Hamiltonian system with hierarchies of islands enclosing nested sets of closed orbits. Nevertheless, we do not have an explicit Hamiltonian for (\ref{jj_resistive}) as we do for its averaged counterpart~\cite{wata93 }. The increase in chaotic behavior is clearly visible in Figs. ~\ref{kamchaosA} and Fig. ~\ref{kamchaosB}, which show sequences of Poincar\'{e} sections corresponding to the points (A) and (B) in Fig. ~\ref{pointplot}. Point (A) lies at $(b, \Omega) = (1 /20,3 /4 )$, about 1 /2 unit from the bifurcation curve $\Omega = b$, while point (B) lies at $(b, \Omega) = (17 /10,1 )$, about 1 /3 unit from $\Omega = b$. As an example of the pattern of escalating chaos, observe that Figs. ~\ref{kamchaosB}(a), (b), (c) have larger, more dramatically overlapping chaotic regions than the corresponding plots (a), (b), (d) of Fig. 
~\ref{kamchaosA}. Although not shown, the chaotic trajectories that produced the scattered points in the Poincar\'{e} sections are phase coherent: they cycle smoothly and unidirectionally around the splay states throughout each period of $\psi$. When the splay states are moved toward the edge of the unit disc by increasing $b$ or decreasing $\Omega$, these trajectories appear increasingly less prone to return to the same neighborhoods in the Poincar\'{e} sections, resulting in the observed amplification of chaotic behavior. It is also possible to interpret the association between the parameter values and the intensity of the chaos in terms of the underlying physical parameters. In terms of these parameters, the limit $b/\Omega \rightarrow 1 ^-$ translates to $I_c/I_b \rightarrow 1 ^-$ or $I_b \rightarrow I_c^+$, which predicts that chaos should appear in real series arrays of Josephson junctions if the source current is reduced to near the critical current of the junctions. Even though the Poincar\'{e} sections in Fig. ~\ref{kamchaosA} and Fig. ~\ref{kamchaosB} show differing degrees of chaos, both series of plots depict a trend of decreasing chaotic behavior with increasing $n$. This stems from the dependence of $g$ on the phase centroid $\langle z \rangle$, which in turn arises because the oscillators are coupled only through their effect on the first harmonic of the phase density. For a coupling of this type, a sinusoidal phase density with a very short period and rapid oscillations (high $n$) ``looks'' nearly identical (in the Riemann-Lebesgue sense) to a uniform density. Hence, in the limit of large $n$, we see $\alpha$ decoupling from $\psi$, just as it does on the Poisson submanifold (recall that the Poisson submanifold corresponds to a uniform density in $\theta$, as shown in Section~\ref{Invariant Manifold of Poisson Kernels}). 
From this perspective, then, chaos becomes increasingly dominant as we move ``away'' from the Poisson submanifold, down toward small $n$. Finally, we point out a surprising feature in the Poincar\'{e} sections of (A) that was common in other simulations we performed. Starting at $n = 5 $, we see prominent sets of period-$(n - 1 )$ islands which appear for $n$ up to 8 in Fig. ~\ref{kamchaosA}. This ring of islands appears for higher $n$ as well and forms an increasingly larger and thinner band as $n$ is increased. Inside the dilating band, a set of nested orbits resembling the corresponding neutrally stable cycles of the Poisson submanifold grows, filling the unit disc and approaching coincidence with the trajectories on the Poisson submanifold. We are currently unclear on why exactly $(n-1 )$ islands emerge from the M\"{o}bius group action on (\ref{26 }), but pose this as an open question for future study. Although it is tempting to try to extrapolate our numerical results to the case of non-identical oscillators, Ott and Antonsen~\cite{otta09 } have recently demonstrated that such systems contain a two-dimensional submanifold (the generalization of the simpler Poisson submanifold studied here) that carries all the long-term dynamics of the phase centroid $\langle z \rangle$. Their results hold for the common case in which $g$ is a time-independent angular frequency with some distribution of values among the oscillators, and $f$ is a function of time, independent of oscillator variability. Our numerical experiments, together with this new result, indicate that the widespread neutral stability in systems of identical, sinusoidally-coupled phase oscillators is a consequence of their special symmetries and underlying group-theoretic structure. \section{APPENDIX \label{APPENDIX}}
We show that the $N!/(N-4 )!$ cross ratios of the oscillator phases are functionally dependent on the $N - 3 $ cross ratios $\{\lambda_{1234 }, \lambda_{2345 }, \dotsc, \lambda_{(N-3 )(N-2 )(N-1 )N}\}$. To do so, we use the fact that the $4!$ cross ratios corresponding to the $4!$ permutations of $z_i, z_j, z_k, z_l$ can be written as elementary functions of $\lambda_{ijkl}$:
\begin{equation} \label{14.2 }
\begin{split}
\lambda_{ijkl} &= \lambda_{jilk} = \lambda_{klij} = \lambda_{lkji} \\
\lambda_{ijlk} &= 1 /\lambda_{ijkl} \\
\lambda_{iklj} &= 1 /(1 - \lambda_{ijkl}) \\
\lambda_{ikjl} &= 1 - \lambda_{ijkl} \\
\lambda_{ilkj} &= \lambda_{ijkl}/(1 - \lambda_{ijkl}) \\
\lambda_{iljk} &= (\lambda_{ijkl} - 1 )/\lambda_{ijkl} \\
\end{split}
\end{equation}
Additionally, we can obtain new cross ratios from existing ones by multiplication:
\begin{equation} \label{15 }
\lambda_{ijkl}\lambda_{jmkl} = \lambda_{imkl}
\end{equation}
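Both (\ref{15 }) and the permutation identities (\ref{14.2 }) are easy to confirm numerically (arbitrary illustrative phases below):

```python
import cmath

def lam(zi, zj, zk, zl):
    # cross ratio (z_i, z_j, z_k, z_l)
    return (zi - zk) * (zj - zl) / ((zi - zl) * (zj - zk))

# five arbitrary illustrative phases on the unit circle
z = [cmath.exp(1j * t) for t in (0.2, 1.0, 2.1, 3.3, 4.9)]
i, j, k, l, m = range(5)

# eq. (15): lambda_ijkl * lambda_jmkl = lambda_imkl
assert abs(lam(z[i], z[j], z[k], z[l]) * lam(z[j], z[m], z[k], z[l])
           - lam(z[i], z[m], z[k], z[l])) < 1e-12

# one entry of (14.2): swapping the last two indices inverts the cross ratio
assert abs(lam(z[i], z[j], z[l], z[k]) - 1 / lam(z[i], z[j], z[k], z[l])) < 1e-12
```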
Using these facts, we need to show that we can write $\lambda_{PQRS}$ for any distinct $P, Q, R, S \in \{1, 2, \dotsc, N\}$ in terms of elements from $\{\lambda_{1234 }, \lambda_{2345 }, \dotsc, \lambda_{(N-3 )(N-2 )(N-1 )N}\}$. First, note that we can rewrite (\ref{15 }) as a function $F_j$ which takes two cross ratios $\lambda_{ijkl}$ and $\lambda_{jklm}$ (with indices in order), permutes the indices as necessary to eliminate $z_j$, executes the multiplication and returns the product with its indices in order:
\begin{equation} \label{16 }
F_j(\lambda_{ijkl}, \lambda_{jklm}) = \lambda_{iklm}
\end{equation}
Observe, however, that $F_j$ is just short-hand for a composition of elementary functions from (\ref{14.2 }):
\begin{equation} \label{17 }
F_j(\lambda_{ijkl}, \lambda_{jklm}) = \frac{1 }{1 - \lambda_{ijkl}(\lambda_{jklm} - 1 )/\lambda_{jklm}}
\end{equation}
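The composition (\ref{17 }) can likewise be checked with arbitrary (hypothetical) points:

```python
def lam(a, b, c, d):
    # cross ratio of four complex numbers
    return (a - c) * (b - d) / ((a - d) * (b - c))

# five arbitrary illustrative points
z = [0.2 + 0.1j, 1.3 - 0.4j, -0.7 + 0.9j, 2.0 + 0.3j, -1.1 - 1.2j]
i, j, k, l, m = range(5)

A = lam(z[i], z[j], z[k], z[l])   # lambda_ijkl
B = lam(z[j], z[k], z[l], z[m])   # lambda_jklm

# eq. (17): F_j eliminates z_j and returns lambda_iklm
assert abs(1 / (1 - A * (B - 1) / B) - lam(z[i], z[k], z[l], z[m])) < 1e-12
```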
We can also define the analogous functions $G_k$ and $H_l$:
\begin{equation} \label{30 }
\begin{split}
G_k(\lambda_{ijkl}, \lambda_{jklm}) &= \lambda_{ijlm} \\
H_l(\lambda_{ijkl}, \lambda_{jklm}) &= \lambda_{ijkm} \\
\end{split}
\end{equation}
These functions have their own compositions like that of $F_j$ in (\ref{17 }). Let $\lambda_{pqrs}$ correspond to the permutation of $\lambda_{PQRS}$ in which the indices are in order. We can write $\lambda_{PQRS}$ in terms of $\lambda_{pqrs}$ using one of the functions in (\ref{14.2 }). Thus, the problem reduces to showing that we can obtain $\lambda_{pqrs}$ from the elements of $\{\lambda_{1234 }, \lambda_{2345 }, \dotsc, \lambda_{(N-3 )(N-2 )(N-1 )N}\}$ by elimination of the indices between $p$, $q$, $r$, $s$ using the operations $F_j$, $G_k$, $H_l$. If there are one or more indices between $i$ and $j$, we say there is a \textit{gap} between $i$ and $j$. Now observe that we can obtain the first gap between $p$ and $q$ using only $\lambda_{ijkl}$ with no gaps; we grow this gap iteratively one index at a time by the operation: $F_k(\lambda_{pk(k+1 )(k+2 )}, \lambda_{k(k+1 )(k+2 )(k+3 )}) = \lambda_{p(k+1 )(k+2 )(k+3 )}$. We can then grow the second gap between $q$ and $r$ to its full size using only $\lambda_{ijkl}$ that have no gaps between $j$ and $k$ or $k$ and $l$ (each of which could be made from $\lambda_{ijkl}$ with no gaps) using the operation: $G_k(\lambda_{pqk(k+1 )}, \lambda_{qk(k+1 )(k+2 )}) = \lambda_{pq(k+1 )(k+2 )}$. Finally, we can create the third gap between $r$ and $s$ using only $\lambda_{ijkl}$ with no gaps between $k$ and $l$ (which could be made from $\lambda_{ijkl}$ with fewer gaps) using the operation: $H_k(\lambda_{pqrk}, \lambda_{qrk(k+1 )}) = \lambda_{pqr(k+1 )}$. Since each $\lambda_{ijkl}$ (with $i < j < k < l$) can be built up from $\lambda_{ijkl}$ with fewer gaps, the proof is complete: all $N!/(N-4 )!$ cross ratios are functionally dependent on the $N - 3 $ cross ratios listed above.
\section*{Program Summary}
\section{Introduction}
With the recent discovery of a bosonic resonance
\cite{Atlas:2012 gk, CMS:2012 gu} showing all the characteristics of the
SM Higgs boson, a long search might soon come to a successful end. In
contrast, there are no hints of a signal of supersymmetric (SUSY)
particles or of particles predicted by any other extension of the
standard model (SM)
\cite{:2012 rz, Chatrchyan:2012 jx, CMS-PAS-SUS-11 -022, CMS-PAS-SUS-12 -005, :2012 mfa}. Therefore, large areas of the parameter space of the simplest SUSY
models are excluded. The allowed mass spectra as well as the best-fit
mass values preferred by the data are pushed to higher and higher values
\cite{pMSSM}. This has led to an increasing interest in the study of
SUSY models which provide new features. For instance, models with
broken $R$-parity \cite{rpv, rpvsearches} or compressed spectra
\cite{compressed} might be able to hide much better at the LHC, while
for other models high mass spectra are a much more natural feature
than is the case in the minimal supersymmetric standard model
(MSSM) \cite{finetuning}. However, bounds on the masses and couplings of beyond-the-SM
(BSM) models follow not only from direct searches at colliders. New
particles also have an impact on SM processes
via virtual quantum corrections, leading in many
instances to sizable deviations from the SM expectations. This holds
in particular for the anomalous magnetic moment of the muon
\cite{Stockinger:2006 zn} and
processes which are highly suppressed in the SM. The latter are mainly lepton flavor violating (LFV) decays or decays involving
quark flavor changing neutral currents (qFCNC). While the prediction
of LFV decays in the SM is many orders of magnitude below the
experimental sensitivity \cite{Cheng:1985 bj}, qFCNC is
experimentally well established. For instance, the observed rate of $b
\to s\gamma$ is in good agreement with the SM expectation, and this
observable has for several years put strong
constraints on qFCNCs beyond the SM \cite{bsgamma}. The experiments at the LHC have now reached a sensitivity to test also
the SM prediction for BR($B^0 _s\to\mu\bar\mu$) as well as BR($B^0 _d\to
\mu \bar{\mu}$) \cite{Buras:2012 ru}
\begin{eqnarray}
\BRx{B^0 _s \to \mu \bar{\mu}}_{\text{SM}} &=& \scn{(3.23 \pm 0.27 )}{-9 } \label{eq:Bsintro1 }, \\
\BRx{B^0 _d \to \mu \bar{\mu}}_{\text{SM}} &=& \scn{(1.07 \pm 0.10 )}{-10 }. \label{eq:Bsintro2 }
\end{eqnarray}
Using the measured finite width difference of the $B$ mesons, the time-integrated branching ratio
which should be compared to experiment is \cite{Buras:2013 uqa}
\begin{eqnarray}
\BRx{B^0 _s \to \mu \bar{\mu}}_{\text{theo}} &=& \scn{(3.56 \pm 0.18 )}{-9 } \, . \end{eqnarray}
Recently, LHCb reported the first evidence for $B^0 _s \to \mu
\bar{\mu}$. The observed rate \cite{LHCb:2012 ct}
\begin{equation}
\BRx{B^0 _s \to \mu \bar{\mu}} = (3.2 ^{+1.5 }_{-1.2 })\times 10 ^{-9 } \\
\end{equation}
fits nicely to the SM prediction. For $\BRx{B^0 _d \to \mu \bar{\mu}}$
the current upper bound of $9.4 \cdot 10 ^{-10 }$
is already of the same order as the SM expectation. This leads to new constraints on BSM models, and each model has to be
confronted with these measurements. So far, there exist several public
tools which can calculate
$\BRx{B^0 _{s, d}\to \ell\bar{\ell}}$ as well as other observables in the context of the MSSM
or partially also for the next-to-minimal supersymmetric standard
model (NMSSM) \cite{NMSSM}: {\tt superiso} \cite{superiso}, {\tt
SUSY\_Flavor} \cite{susyflavor}, {\tt NMSSM-Tools} \cite{NMSSMTools},
{\tt MicrOmegas} \cite{MicrOmegas} or {\tt SPheno} \cite{spheno}. However, for more complicated SUSY models none of the available tools
provides the possibility to calculate
these decays easily. This gap is now closed by the interplay of the {\tt
Mathematica} package {\tt SARAH}\xspace \cite{sarah} and the spectrum generator
{\tt SPheno}\xspace. {\tt SARAH}\xspace
already has many SUSY models incorporated but allows also an easy and
efficient implementation of new models. For all of these models {\tt SARAH}\xspace
can generate new modules for {\tt SPheno}\xspace for a comprehensive numerical
evaluation. This functionality is extended, as described in this paper, by a
full 1 -loop calculation of $B^0 _{s, d}\to\ell\bar{\ell}$. The rest of the paper is organized as follows: in
sec. ~\ref{sec:analytical} we recall briefly the analytical calculation
for BR($B^0 _{s, d}\to\ell\bar{\ell}$). In
sec. ~\ref{sec:implementation} we discuss the implementation of this
calculation in {\tt SARAH}\xspace and {\tt SPheno}\xspace before we conclude in
sec. ~\ref{sec:conclusion}. The appendix contains more information
about the calculation and generic results for the amplitudes. \section{Calculation of BR($B^0 _{s, d}\to\ell\bar{\ell}$)}
\label{sec:analytical}
In the SM this decay was first calculated in ref. ~\cite{Inami:1980 fz},
in the analogous context of kaons. The higher order corrections were
first presented in \cite{Buchalla:1993 bv}; see also
\cite{Misiak:1999 yg}. In the context of supersymmetry this was
considered in \cite{bsmumu-susy}. See also the interesting correlation
between BR$(B_s^0 \to\mu\bar\mu)$ and $(g-2 )_\mu$ \cite{Dedes:2001 fv}. We present briefly the main steps of the calculation of BR($B^0 _{q}\to
\ell_k\bar{\ell}_l$) with $q=s, d$. We follow
closely the notation of ref. ~\cite{Dedes:2008 iw}. The effective
Hamiltonian can be parametrized by
\begin{eqnarray}
\label{eq:effectiveH} \mathcal{H}&=&\frac{1 }{16 \pi^2 }
\sum_{X, Y=L, R}\kl{C_{SXY}\mathcal{O}_{SXY}+C_{VXY}\mathcal{O}_{VXY}+C_{TX}\mathcal{O}_{TX}}
\, ,
\end{eqnarray}
with the Wilson coefficients $C_{SXY}, C_{VXY}, C_{TX}$ corresponding to
the scalar, vector and tensor operators
\begin{equation}
\mathcal{O}_{SXY} = (\bar q_j P_X q_i)(\bar \ell_l P_Y \ell_k) \,, \hspace{0.5 cm}
\mathcal{O}_{VXY} = (\bar q_j \gamma^\mu P_X q_i)(\bar \ell_l \gamma_\mu P_Y
\ell_k) \,, \hspace{0.5 cm}
\mathcal{O}_{TX} = (\bar q_j \sigma^{\mu\nu} P_X q_i)(\bar \ell_l \sigma_{\mu\nu} \ell_k) \, . \end{equation}
$P_L$ and $P_R$ are the projection operators onto left- and
right-handed states, respectively. The matrix element of the axial vector
current is defined as
\begin{eqnarray}
\label{eq:fBs}
\langle 0 \left| \bar b \gamma^\mu \gamma^5 q \right| B^0 _q(p)\rangle
&\equiv& ip^\mu f_{B^0 _q} \, . \end{eqnarray}
Here, we introduced the meson decay constants $f_{B^0 _q}$ which can be
obtained from lattice QCD simulations \cite{Laiho:2009 eu}. The current
values for $B^0 _s$ and $B^0 _d$ are given by \cite{Davies:2012 qf}
\begin{equation}
\label{eq:fBsValue}
f_{B^0 _s} = (227 \pm 8 )~{\text{Me}\hspace{-0.05 cm}\text{V}} \,, \hspace{1 cm} f_{B^0 _d} = (190 \pm 8 )~{\text{Me}\hspace{-0.05 cm}\text{V}} \, . \end{equation}
Since the momentum $p$ of the meson is the only four-vector available,
the matrix element in \qref{eq:fBs} can only depend on
$p^\mu$. The incoming momenta of the $b$ antiquark and the $s$ (or $d$) quark are $p_1, p_2 $ respectively, where $p=p_1 +p_2 $. Contracting \qref{eq:fBs} with $p_\mu$ and using the
equations of motion $\bar b \slashed p_1 =-\bar b m_b$ and $\slashed
p_2 q=m_q q$ leads to an
expression for the pseudoscalar current
\begin{eqnarray}
\label{eq:fBspseudo}
\langle 0 \left| \bar b \gamma^5 q\right| B^0 _q(p) \rangle &=& - i \frac{M_{B_q^0 }^2 f_{B^0 _q}}{m_b+m_q} \, . \end{eqnarray}
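Spelling out the contraction step: with $p = p_1 + p_2 $ and the equations of motion quoted above,

```latex
p_\mu \langle 0 | \bar b \gamma^\mu \gamma^5 q | B^0_q(p)\rangle
  = \langle 0 | \bar b (\slashed{p}_1 + \slashed{p}_2) \gamma^5 q | B^0_q(p)\rangle
  = -(m_b + m_q)\, \langle 0 | \bar b \gamma^5 q | B^0_q(p)\rangle ,
```

while contracting (\ref{eq:fBs}) with $p_\mu$ gives $ip^2 f_{B^0 _q} = iM_{B_q^0 }^2 f_{B^0 _q}$; dividing by $-(m_b+m_q)$ yields (\ref{eq:fBspseudo}).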
The vector and scalar currents vanish
\begin{equation}
\label{eq:vanish}
\langle 0 \left| \bar b \gamma^\mu q \right| B^0 _q(p)\rangle=
\langle 0 \left| \bar b q\right| B^0 _q(p)
\rangle =0 \, . \end{equation}
From eqs. ~(\ref{eq:fBs}) and (\ref{eq:fBspseudo}) we obtain
\begin{equation}
\langle 0 \left| \bar b \gamma^\mu P_{L/R} q \right| B^0 _q(p)\rangle =
\mp \frac i2 p^\mu f_{B^0 _q} \label{eq:fBsPLR1 } \, , \hspace{1 cm}
\langle 0 \left| \bar b P_{L/R} q
\right| B^0 _q(p)\rangle = \pm \frac i2 \frac {M_{B^0 _q}^2
f_{B^0 _q}}{m_b+m_q} \, . \end{equation}
In general, the matrix element $\mathcal M$ is a function of the form
factors $F_S, F_P, F_V, F_A$ of the scalar, pseudoscalar, vector and
axial-vector current and can be expressed by
\begin{equation}
\label{eq:matrixelementBs}
(4 \pi)^2 \mathcal M = F_S \bar \ell \ell + F_P \bar \ell \gamma^5 \ell
+ F_V p_\mu \bar \ell
\gamma^\mu \ell + F_A p_\mu \bar \ell \gamma^\mu \gamma^5 \ell \, . \end{equation}
Note that there is no way of building an antisymmetric 2 -tensor out of
just one vector $p^\mu$. The matrix element of the tensor operator
$\mathcal{O}_{TX}$ must therefore vanish. The form factors can be
expressed by linear combinations of the Wilson coefficients of
eq. ~(\ref{eq:effectiveH})
\begin{eqnarray}
F_S &=& \frac i4 \frac{M_{B^0 _q}^2 f_{B^0 _q}}{m_b+m_q} \kl{ C_{SLL} +
C_{SLR} - C_{SRR}-C_{SRL}}, \label{eq:formfactorsBs1 }\\
F_P &=& \frac i4 \frac{M_{B^0 _q}^2 f_{B^0 _q}}{m_b+m_q} \kl{ -C_{SLL} +
C_{SLR} - C_{SRR}+C_{SRL}}, \label{eq:formfactorsBs2 }\\
F_V &=& -\frac i4 f_{B^0 _q} \kl{ C_{VLL} + C_{VLR} - C_{VRR}-C_{VRL}}, \label{eq:formfactorsBs3 }\\
F_A &=& -\frac i4 f_{B^0 _q} \kl{ -C_{VLL} + C_{VLR} - C_{VRR}+C_{VRL}}. \label{eq:formfactorsBs4 }
\end{eqnarray}
The main task is to calculate the different Wilson coefficients for a
given model. These Wilson coefficients receive at the 1 -loop level
contributions from
various wave, penguin
and box diagrams, see Figures~\ref{fig:wave}-\ref{fig:box2 } in
\ref{app:amplitudes}. Furthermore, in some models these decays could
also happen already at tree-level \cite{Dreiner:2006 gu}. The
amplitudes for all possible, generic diagrams which can contribute to
the Wilson coefficients have been calculated with {\tt FeynArts}\xspace/{\tt FormCalc}\xspace
\cite{feynarts} and the results are listed in
\ref{app:amplitudes}. This calculation has been performed in the
$\overline{\text{DR}}$ scheme and 't Hooft gauge. How these results
are used together with {\tt SARAH}\xspace and {\tt SPheno}\xspace to get numerical results
will be discussed in the next section. After the calculation of the form factors, the squared amplitude is
\begin{align}
\label{eq:squaredMBsllp}
(4 \pi)^4 \abs{\ampM}^2 &=2 \abs{F_S}^2 \kl{M_{B^0 _q}^2 -(m_\ell+m_k)^2 }
+2 \abs{F_P}^2 \kl{M_{B^0 _q}^2 -(m_\ell-m_k)^2 }
\\
\nonumber &+ 2 \abs{F_V}^2 \kl{M_{B^0 _q}^2 (m_k-m_\ell)^2 -(m_k^2 -m_\ell^2 )^2 } \\
\nonumber &+ 2 \abs{F_A}^2 \kl{M_{B^0 _q}^2 (m_k+m_\ell)^2 -(m_k^2 -m_\ell^2 )^2 } \\
\nonumber &+ 4 \Re (F_S F_V^*) (m_k-m_\ell)\kl{M_{B^0 _q}^2 -(m_k+m_\ell)^2 } \\
\nonumber &+ 4 \Re (F_P F_A^*) (m_\ell+m_k)\kl{M_{B^0 _q}^2 -(m_k-m_\ell)^2 } \, . \end{align}
Here, $m_\ell$ and $m_k$ are the lepton masses.
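For orientation, eq.~(\ref{eq:squaredMBsllp}) is straightforward to evaluate numerically; the following sketch (the function name and inputs are illustrative, not part of the generated {\tt SPheno}\xspace code) returns $|\mathcal{M}|^2$ for given, possibly complex, form factors:

```python
import math

def msq(FS, FP, FV, FA, M, m_l, m_k):
    """|M|^2 of eq. (squaredMBsllp); M is the meson mass, m_l, m_k the lepton masses."""
    t  = 2*abs(FS)**2*(M**2 - (m_l + m_k)**2)
    t += 2*abs(FP)**2*(M**2 - (m_l - m_k)**2)
    t += 2*abs(FV)**2*(M**2*(m_k - m_l)**2 - (m_k**2 - m_l**2)**2)
    t += 2*abs(FA)**2*(M**2*(m_k + m_l)**2 - (m_k**2 - m_l**2)**2)
    # interference terms; the F_S F_V^* piece vanishes for equal lepton masses
    t += 4*(FS*FV.conjugate()).real*(m_l - m_k)*(M**2 + (m_k + m_l)**2)
    t += 4*(FP*FA.conjugate()).real*(m_l + m_k)*(M**2 - (m_k - m_l)**2)
    return t/(4*math.pi)**4
```

Note that for a lepton-flavor conserving decay ($m_\ell = m_k$) only the $F_P F_A^*$ interference survives, as expected from the prefactor $(m_\ell - m_k)$.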
\begin{equation}
\label{eq:Bsllpbranching}
\text{BR}(B^0_q \to \ell\bar{\ell}) = \frac{\tau_{B^0_q}}{16\pi}
\frac{\abs{\mathcal{M}}^2 }{M_{B_q^0 }}\sqrt{1 -\kl{\frac{m_k+m_l}{M_{B_q^0 }}}^2 }\sqrt{1 -\kl{\frac{m_k-m_l}{M_{B_q^0 }}}^2 }
\end{equation}
where $\tau_{B^0_q}$ is the lifetime of the meson.

\section{Automatized calculation of $\ensuremath{B^0 _{s, d}\to \ell \bar{\ell} \xspace}$}
\label{sec:implementation}
\subsection{Implementation in {\tt SARAH}\xspace and {\tt SPheno}\xspace}
{\tt SARAH}\xspace is the first `spectrum-generator-generator' on the market, which
means that it can generate Fortran source code for {\tt SPheno}\xspace to obtain a
full-fledged spectrum generator for models beyond the MSSM. The main
features of a {\tt SPheno}\xspace module written by {\tt SARAH}\xspace are a precise
calculation of the mass spectrum based on 2-loop renormalization group
equations (RGEs) and a full 1-loop calculation of the masses. Two-loop
results known for the MSSM can be included. Furthermore, the
decays of SUSY and Higgs particles are calculated, as well as
observables like $\ell_i \to \ell_j \gamma$, $\ell_i \to 3 \ell_j$, $b\to
s\gamma$, $\delta\rho$, $(g-2 )$, or electric dipole moments. For more
information about the interplay between {\tt SARAH}\xspace and {\tt SPheno}\xspace we refer
the interested reader to Ref. ~\cite{Staub:2011 dp}. Here we extend the list of observables by BR($B^0 _s\to\ell\bar
{\ell}$) and BR($B^0 _d\to \ell\bar{\ell}$). For this purpose, the
generic tree--level and 1 --loop amplitudes calculated with
{\tt FeynArts}\xspace/{\tt FormCalc}\xspace given in \ref{app:amplitudes} have been implemented
in {\tt SARAH}\xspace. When {\tt SARAH}\xspace generates the output for {\tt SPheno}\xspace it checks for
all possible field combinations which can populate the generic
diagrams in the given model. This information is then used to generate
Fortran code for a numerical evaluation of all of these diagrams. The
amplitudes are then combined to the Wilson coefficients which again
are used to calculate the form factors
eqs. ~(\ref{eq:formfactorsBs1 })-(\ref{eq:formfactorsBs4 }). The
branching ratio is finally calculated by using
eq. ~(\ref{eq:Bsllpbranching}). Note that the known 2-loop QCD corrections
of Refs. ~\cite{Buchalla:1993 bv, Misiak:1999 yg, Bobeth:2001 jm} are
not included in this calculation. The Wilson coefficients for $\ensuremath{B^0 _{s, d}\to \ell \bar{\ell} \xspace}$ are
calculated at a scale $Q=160 $~GeV by all modules generated by {\tt SARAH}\xspace,
as this is done by default by {\tt SPheno}\xspace in the MSSM. Hence, the
running SUSY masses and couplings at this scale, obtained by a 2-loop
RGE evaluation down from the SUSY scale, are used as input parameters
for the calculation. In the standard model gauge sector we use the running value of $\alpha_{em}$, the on-shell
Weinberg angle $\sin^2 \Theta_W = 1 -\frac{m_W^2 }{m_Z^2 }$ with $m_W$ calculated
from $\alpha_{em}(M_Z)$, $G_F$ and the $Z$ mass. In addition, the CKM matrix calculated
from the Wolfenstein parameters ($\lambda$, $A$, $\rho$, $\eta$) as well as
the running quark masses enter the calculation. To obtain the running SM parameters at $Q=160 $~GeV
we use 2-loop standard model RGEs of Ref. ~\cite{Arason:1991 ic}. The default SM values as well as the derived parameters are given in Tab. ~\ref{tab:sm}. Note that even if CP violation is switched off in the calculation of the SUSY spectrum, the phase
of the CKM matrix is always taken into account in these calculations. This is especially important for $B_d^0 $
decays. \begin{table}[bt]
\centering
\small
\begin{tabular}{|l|l|l|l|l|}
\hline \hline
\multicolumn{5 }{|c|}{default SM input parameters} \\
\hline
$\alpha^{-1 }_{em}(M_Z) = 127.93 $ & $\alpha_s(M_Z) = 0.1190 $ & $G_F = 1.16639 \cdot 10 ^{-5 }~\text{GeV}^{-2 }$
& $\rho = 0.135 $ & $\eta = 0.349 $ \\
$m_t^{pole} = 172.90 ~\text{GeV}$ & $M_Z^{pole} = 91.1876 ~\text{GeV} $ & $m_b(m_b) = 4.2 ~\text{GeV} $ & $\lambda = 0.2257 $ & $A = 0.814 $ \\
\hline \hline
\multicolumn{5 }{|c|}{derived parameters} \\
\hline
$m_t^{\overline{DR}} = 166.4 ~\text{GeV}$ &
$| V_{tb}^* V_{ts}| = 4.06 \cdot 10 ^{-2 } $ & $| V_{tb}^* V_{td}| = 8.12 \cdot 10 ^{-3 } $ & $m_W = 80.3893 ~\text{GeV}$ & $\sin^2 \Theta_W = 0.2228 $ \\
\hline
\end{tabular}
\caption{SM input values and derived parameters by default used for the numerical evaluation of $B^0 _{s, d} \to l\bar{l}$ in {\tt SPheno}\xspace. }
\label{tab:sm}
\end{table}
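The derived parameters can be cross-checked directly from the inputs; for instance, the on-shell weak mixing angle quoted in Tab.~\ref{tab:sm} follows in one line from the pole masses:

```python
# On-shell weak mixing angle from the pole masses of the SM input table
m_W, m_Z = 80.3893, 91.1876          # GeV
sin2_theta_W = 1 - (m_W/m_Z)**2      # rounds to the quoted 0.2228
```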
All standard model parameters can be adjusted by using the
corresponding standard blocks
of the SUSY LesHouches Accord 2 (SLHA2 ) \cite{SLHA}. Furthermore, the default input values for the
hadronic parameters given in Table~\ref{tab:input} are used. These
can be changed in the Les Houches input accordingly to the Flavor Les Houches
Accord (FLHA) \cite{Mahmoudi:2010 iz} using the following blocks:
\begin{verbatim}
Block FLIFE #
511 1.525E-12 # tau_Bd
531 1.472E-12 # tau_Bs
Block FMASS #
511 5.27950 0 0 # M_Bd
531 5.3663 0 0 # M_Bs
Block FCONST #
511 1 0.190 0 0 # f_Bd
531 1 0.227 0 0 # f_Bs
\end{verbatim}
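Such FLHA blocks are plain text and simple to read back; the following is a minimal sketch of a reader (a hypothetical helper for illustration, not the actual {\tt SPheno}\xspace input routine; it keys each block on the first column and stores the remaining columns as numbers):

```python
def read_flha_blocks(text):
    """Parse 'Block NAME' sections into {name: {first column: remaining columns}}."""
    blocks, current = {}, None
    for raw in text.splitlines():
        line = raw.split('#', 1)[0].strip()   # drop comments
        if not line:
            continue
        if line.lower().startswith('block'):
            current = line.split()[1].upper()
            blocks[current] = {}
        else:
            cols = line.split()
            blocks[current][int(cols[0])] = [float(c) for c in cols[1:]]
    return blocks
```

The meaning of the trailing columns differs per block (e.g. for {\tt FCONST} the decay constant is the second number), so a real reader would interpret them per accord.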
While {\tt SPheno}\xspace includes the
chiral resummation for the MSSM, this is not taken into account in the
routines generated by {\tt SARAH}\xspace because of its large model dependence. \begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3 }{|c|}{Default hadronic parameters} \\
\hline
$m_{B^0 _s} = 5.36677 $ GeV & $f_{B^0 _s} = 227 (8 )$ MeV & $\tau_{B^0 _s} = 1.466 (31 )$ ps \\
$m_{B^0 _d} = 5.27958 $ GeV & $f_{B^0 _d} = 190 (8 )$ MeV & $\tau_{B^0 _d} = 1.519 (7 )$ ps \\
\hline
\end{tabular}
\caption{Hadronic input values by default used for the numerical evaluation of $B^0 _{s, d} \to l\bar{l}$ in {\tt SPheno}\xspace. }
\label{tab:input}
\end{table}
\subsection{Generating and running the source code}
We describe briefly the main steps necessary to generate and run the
{\tt SPheno}\xspace code for a given model: after starting {\tt Mathematica} and
loading {\tt SARAH}\xspace it is just necessary to evaluate the demanded model and
call the function to generate the {\tt SPheno}\xspace source code. For instance,
to get a {\tt SPheno}\xspace module for the B-L-SSM \cite{Khalil:2007 dr, FileviezPerez:2010 ek, O'Leary:2011 yq}, use
\begin{verbatim}
<<[$SARAH-Directory]/SARAH.m;
Start["BLSSM"];
MakeSPheno[];
\end{verbatim}
{\tt MakeSPheno[]} calculates first all necessary information
(\textit{i.e.}\ vertices, mass matrices, tadpole equations, RGEs,
self-energies) and then exports this
information to Fortran code and writes all necessary auxiliary
functions needed to compile the code together with {\tt SPheno}\xspace. The entire
output is saved in the directory
\begin{verbatim}
[$SARAH-Directory]/Output/BLSSM/EWSB/SPheno/
\end{verbatim}
The content of this directory has to be copied into a new
subdirectory of {\tt SPheno}\xspace called {\tt BLSSM} and afterwards the code can
be compiled:
\begin{verbatim}
cp [$SARAH-Directory]/Output/BLSSM/EWSB/SPheno/* [$SPheno-Directory]/BLSSM/
cd [$SPheno-Directory]
make Model=BLSSM
\end{verbatim}
This creates a new binary {\tt SPhenoBLSSM} in the directory {\tt bin}
of {\tt SPheno}\xspace. To run the spectrum calculation a file called {\tt
LesHouches.in.BLSSM} containing all input parameters in the Les
Houches format has to be provided. {\tt SARAH}\xspace writes also a template for
such a file which has been copied with the other files to {\tt
/BLSSM}. This example can be evaluated via
\begin{verbatim}
./bin/SPhenoBLSSM BLSSM/LesHouches.in.BLSSM
\end{verbatim}
and the output is written to {\tt SPheno.spc.BLSSM}. This file
contains all information like the masses, mass matrices, decay widths
and branching ratios, and observables. For the $B^0 _{s, d}\to \ell \bar{\ell}$
decays
the results are given twice for easier comparison: once for the full
calculation and once including only the SM contributions. All results
are written to the block {\tt SPhenoLowEnergy} in the spectrum file
using the following numbers:
\begin{center}
\begin{tabular}{|cl|cl|}
\hline
{\tt 4110 } & $\text{BR}^{SM}(B^0 _d\to e^+e^-)$ & {\tt 4111 } & $\text{BR}^{full}(B^0 _d\to e^+e^-)$ \\
{\tt 4220 } & $\text{BR}^{SM}(B^0 _d\to \mu^+\mu^-)$ & {\tt 4221 } & $\text{BR}^{full}(B^0 _d\to \mu^+\mu^-)$ \\
{\tt 4330 } & $\text{BR}^{SM}(B^0 _d\to \tau^+\tau^-)$ & {\tt 4331 } & $\text{BR}^{full}(B^0 _d\to \tau^+\tau^-)$ \\
{\tt 5110 } & $\text{BR}^{SM}(B^0 _s\to e^+e^-)$ & {\tt 5111 } & $\text{BR}^{full}(B^0 _s\to e^+e^-)$ \\
{\tt 5210 } & $\text{BR}^{SM}(B^0 _s\to \mu^+e^-)$ & {\tt 5211 } & $\text{BR}^{full}(B^0 _s\to \mu^+e^-)$ \\
{\tt 5220 } & $\text{BR}^{SM}(B^0 _s\to \mu^+\mu^-)$ & {\tt 5221 } & $\text{BR}^{full}(B^0 _s\to \mu^+\mu^-)$ \\
{\tt 5330 } & $\text{BR}^{SM}(B^0 _s\to \tau^+\tau^-)$ & {\tt 5331 } & $\text{BR}^{full}(B^0 _s\to \tau^+\tau^-)$ \\
\hline
\end{tabular}
\end{center}
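As the table shows, the entry numbers follow a simple pattern: the first digit encodes the meson ($4$ for $B^0_d$, $5$ for $B^0_s$), the next two digits the lepton generations, and the last digit distinguishes the SM-only from the full result. A small sketch decoding them (the helper name is ours):

```python
def decode_entry(code):
    """Decode a SPhenoLowEnergy entry number, e.g. 5221 -> ('B0_s', 'mu', 'mu', 'full')."""
    s = str(code)
    meson = {'4': 'B0_d', '5': 'B0_s'}[s[0]]
    lepton = {'1': 'e', '2': 'mu', '3': 'tau'}
    kind = 'SM' if s[3] == '0' else 'full'
    return meson, lepton[s[1]], lepton[s[2]], kind
```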
Note that for completeness and as a cross-check we kept $\text{BR}^{SM}(B^0 _s\to \mu^+e^-)$, which has to vanish. The same steps can be repeated for any other model implemented in
{\tt SARAH}\xspace, or the {\tt SUSY-Toolbox} scripts \cite{Staub:2011 dp} can be
used for an automatic implementation of new models in {\tt SPheno}\xspace as well
as in other tools based on the {\tt SARAH}\xspace output. \subsection{Checks}
We have performed several cross checks of the code generated by
{\tt SARAH}\xspace: the first, trivial check has been that we reproduce the known
SM results and that those agree with the full calculation in the limit of
heavy SUSY spectra. For the input parameters
of Tab. ~\ref{tab:sm} we obtain $\text{BR}(B^0 _s\to \mu^+ \mu^-)^{SM} = 3.28 \cdot 10 ^{-9 }$
and $\text{BR}(B^0 _d\to \mu^+ \mu^-)^{SM} = 1.08 \cdot 10 ^{-10 }$ which are in good agreement
with eqs. ~(\ref{eq:Bsintro1 })-(\ref{eq:Bsintro2 }). Secondly, as mentioned in
the introduction there are several codes which calculate these decays
for the MSSM or NMSSM. A detailed comparison of all of these codes is
beyond the scope of the presentation here and will
be presented elsewhere \cite{comparison}. However, a few comments are in order:
the code generated by {\tt SARAH}\xspace as well as most other codes usually show
the same behavior. There are differences in the numerical
values calculated by the programs because of different values
for the SM inputs. For instance, there is an especially strong
dependence on the value of the
electroweak mixing angle and, of course, on the hadronic parameters used in the
calculation \cite{Buras:2012 ru}. In addition, these processes are
implemented with different accuracy in different tools: the treatment
of NLO QCD corrections \cite{Bobeth:2001 jm}, chiral resummation
\cite{Crivellin:2011 jt}, or SUSY box diagrams is not the
same. Therefore, we depict in Fig. ~\ref{fig:comparison} a comparison
between {\tt SPheno 3.2.1 }, {\tt Superiso 3.3 } and {\tt SPheno by
SARAH} using the results normalized to the SM limit of each program. \begin{figure}[hbtp]
\centering
\includegraphics[width=0.5 \linewidth]{FeynAmps/m0_R.pdf} \\
\includegraphics[width=0.5 \linewidth]{FeynAmps/m0M12_R.pdf} \\
\includegraphics[width=0.5 \linewidth]{FeynAmps/tb_R.pdf}
\caption{Comparison of the branching ratios obtained with {\tt SPheno 3.2.1}, {\tt Superiso 3.3} and {\tt SPheno by SARAH}, normalized to the SM limit of each program.}
\label{fig:comparison}
\end{figure}
It is also possible to perform a check of self-consistency: the
leading-order contribution has to be finite, which leads to non-trivial
relations among the amplitudes for the
wave and penguin diagrams given in \ref{sec:waveBapp} and \ref{sec:penguinB}. Therefore, we can check these relations numerically by
varying the renormalization scale used in all loop integrals. The
dependence on this scale should cancel and the branching ratios should
stay constant. This is shown in Figure~\ref{fig:scaledependence}:
while single contributions can change by several orders of magnitude,
the sum of all of them is numerically very stable. \begin{figure}[hbt]
\centering
\includegraphics[width=0.6 \linewidth]{FeynAmps/renorm-plot-FA}
\caption{The figure shows $|\sum F_A|_{\text{penguin}}$ and $|\sum F_A|_{\text{wave}}$ as well as the sum of both, $|\sum F_A|$. Penguin and wave contributions have opposite signs that interchange between $Q=10^2~\text{GeV}$ and $Q=10^3~\text{GeV}$. }
\label{fig:scaledependence}
\end{figure}
\subsection{Non-supersymmetric models}
We have focused our discussion so far on SUSY models. However, even if
{\tt SARAH}\xspace is optimized for the study of SUSY models it is also able to
handle non-SUSY models to some extent. The main drawback at the moment
for non-SUSY models is that the RGEs cannot be calculated
because the results of Refs. ~\cite{Martin:1993 zk, Fonseca:2011 vn} which
are used by {\tt SARAH}\xspace are not valid in this case. However, all other
calculations like the ones for the vertices, mass matrices and
self-energies do not rely on SUSY properties and therefore apply to
any model. Hence, it is also possible to generate {\tt SPheno}\xspace code for
these models which calculates $B^0 _{s, d}\to\ell\bar{\ell}$. The main
difference in the calculation is the lack of an RGE evaluation: the
user has to provide numerical values for all
parameters at the considered scale, which then enter the
calculation. We note that in order to fully support non-supersymmetric
models with {\tt SARAH}\xspace the calculation of the corresponding RGEs at 2 -loop
level will be included in {\tt SARAH}\xspace in the future \cite{nonsusyrges}. \section{Conclusion}
\label{sec:conclusion}
We have presented a model independent implementation of the flavor
violating decays $B^0 _{s, d} \to \ell\bar{\ell}$ in {\tt SARAH}\xspace and {\tt SPheno}\xspace. Our
approach provides the possibility to generate source code which
performs a full 1 -loop calculation of these observables for any
model which can be implemented in {\tt SARAH}\xspace. Therefore, it takes care of
the necessity to confront many BSM models in the future with the
increasing constraints coming from the measurements of $B^0 _{s, d} \to
\ell\bar{\ell}$ at the LHC. \section*{Acknowledgements}
W. P. \ thanks L. ~Hofer for discussions. This work has been supported
by the Helmholtz alliance `Physics at the Terascale' and W. P. \ in part
by the DFG, project No. PO-1337/2-1. HKD acknowledges support from
BMBF grant 00160200. \input{appendix. tex}
\input{lit. tex}
\end{document}
\section{Conventions}
\label{conventions}
\subsection{Passarino-Veltman integrals}
We use in the following the conventions of \cite{Pierce:1996 zz}
for the Passarino-Veltman integrals. All Wilson coefficients appearing
in the following can be expressed by the integrals
\begin{eqnarray}
\label{eq:B0 withPsq0 }
B_0 (0, x, y)&=& \Delta + 1 + \ln \kl{\frac{Q^2 }{y}} +
\frac{x}{x-y}\ln \kl{\frac{y}{x}} \\
\Delta &=& \frac 2 {4 -D} - \gamma_E + \log 4 \pi \\
B_1 (x, y, z) &=& \frac 12 (z-y)\frac{B_0 (x, y, z)-B_0 (0, y, z)}{x}-\frac 12 B_0 (x, y, z) \\
C_0 (x, y, z) &=& \frac{1 }{y-z} \left[ \frac{y}{x-y} \log \frac yx - \frac{z}{x-z} \log \frac zx \right] \label{eq:C0 } \\
\nonumber C_{00 }(x, y, z)
&=& \frac 14 \kl{1 -\frac 1 {y-z}\kl{ \frac {x^2 \log x - y^2 \log y}{x-y}-\frac{x^2 \log x-z^2 \log z}{x-z}}} \\
D_0 (x, y, z, t) &=& \frac{C_0 (x, y, z)-C_0 (x, y, t)}{z-t} \label{eq:D0 reduce} \\
\nonumber &=& -\left[ \frac{y\log \frac yx}{(y-x)(y-z)(y-t)} +
\frac{z\log \frac zx}{(z-x)(z-y)(z-t)}+\frac{t\log \frac
tx}{(t-x)(t-y)(t-z)} \right] \\
D_{00 }(x, y, z, t) & = & -\frac 14 \left[ \frac{y^2 \log \frac yx}{(y-x)(y-z)(y-t)} +
\frac{z^2 \log \frac zx}{(z-x)(z-y)(z-t)}+\frac{t^2 \log \frac
tx}{(t-x)(t-y)(t-z)} \right]. \end{eqnarray}
Note that the conventions of ref. ~\cite{Pierce:1996 zz} (Pierce, Bagger
[PB]) differ from those presented in ref. ~\cite{Dedes:2008 iw}
(Dedes, Rosiek, Tanedo [DRT]). The box integrals are related by
\begin{eqnsub}
\label{eq:Dloopsall}
D_{0 } &=& D_0 ^{(\text{PB})}=- D_0 ^{(DRT)}, \\
\label{eq:Dloopsall2 }
D_{00 }&=& D_{27 }^{(\text{PB})}=- \frac 14 D_{2 }^{(DRT)} \,. \end{eqnsub}
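The reduction of $D_0$ to a difference of $C_0$ functions must agree with its explicit partial-fraction form; a short numerical sketch of both expressions (dimensionless mass arguments, purely illustrative):

```python
import math

def C0(x, y, z):
    # finite three-point function at zero external momenta (eq. for C_0 above)
    return (y/(x - y)*math.log(y/x) - z/(x - z)*math.log(z/x))/(y - z)

def D0_reduced(x, y, z, t):
    # D0 written as a difference of C0 functions
    return (C0(x, y, z) - C0(x, y, t))/(z - t)

def D0_explicit(x, y, z, t):
    # explicit partial-fraction form of D0
    return -(y*math.log(y/x)/((y - x)*(y - z)*(y - t))
             + z*math.log(z/x)/((z - x)*(z - y)*(z - t))
             + t*math.log(t/x)/((t - x)*(t - y)*(t - z)))
```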
\subsection{Massless limit of loop integrals}
In some amplitudes (i. e. penguin diagrams $(a-b)$, box diagram $(v)$) the
following combinations of loop integrals appear:
\begin{align}
I_1 &= B_0 (s, M_{F1 }^2, M_{F2 }^2 )+M_S^2 C_0 (s,0,0, M_{F2 }^2, M_{F1 }^2, M_S^2 ), \\
I_2 &= C_0 (0,0,0, M_{F2 }^2, M_{F1 }^2, M_{V2 }^2 ) + M_{V1 }^2 D_0 (M_{F2 }^2, M_{F1 }^2, M_{V1 }^2, M_{V2 }^2 ). \end{align}
The loop functions $B_0, C_0, D_0 $ diverge for massless fermions (e. g. neutrinos in the MSSM) but
the expressions $I_1, I_2 $ are finite. However, this limit must be taken analytically in order to avoid
numerical instabilities. In a generalized form and in the limit of zero external momenta, $I_i$ can be expressed by
\begin{align}
I_1 (a, b, c) &= B_0 (0, a, b)+c C_0 (0,0,0, a, b, c) \equiv B_0 (0, a, b)+c C_0 (a, b, c), \\
I_2 (a, b, c, d) &= C_0 (0,0,0, a, b, d) + c D_0 (a, b, c, d) \equiv C_0 (a, b, d)+c D_0 (a, b, c, d). \end{align}
Using eqs. ~(\ref{eq:B0 withPsq0 }), (\ref{eq:C0 }) and (\ref{eq:D0 reduce}) we obtain in the limit $a\to 0 $
\begin{align}
I_1 (0, b, c) &= B_0 (0,0, b) + c C_0 (0, b, c) \\
&= \Delta + 1 - \log \frac b{Q^2 } + c \frac 1 {b-c}\log \frac cb \\
&= \Delta + 1 + \log Q^2 + \frac {c}{b-c} \log c + \kl{-1 -\frac c{b-c}}\log
b
\end{align}
The term proportional to $\log b$ vanishes in the limit $b\to 0 $
\begin{equation}
I_1 (0,0, c) = \Delta + 1 - \log \frac c{Q^2 }
\end{equation}
The same strategy works for $I_2 $:
\begin{align}
I_2 (0, b, c, d) &= C_0 (0, b, d)+c D_0 (0, b, c, d) \\
&= \frac 1 {b-d} \log \frac{d}{b} + c \frac{C_0 (0, b, c)-C_0 (0, b, d)}{c-d} \\
&= \frac 1 {b-d} \log \frac{d}{b} + \frac c{c-d} \frac 1 {b-c} \log \frac{c}{b} -
\frac{c}{c-d}\frac 1 {b-d}\log \frac {d}{b} \\
&= \frac{(c-d)(b-c)\log \frac db + c (b-d)\log \frac cb - c(b-c) \log \frac
db}{(b-d)(c-d)(b-c)} \label{eq:I2 last}
\end{align}
The denominator of \qref{eq:I2 last} is finite for $b\to 0 $ and in the
numerator, the $\log b$ terms cancel each other:
\begin{equation}
( (c-d)c + cd -c^2 ) \log b =0. \end{equation}
Hence, we end up with
\begin{align}
I_2 (0,0, c, d) &= \frac{-c(c-d)\log d - cd \log c + c^2 \log
d}{cd(c-d)} = \frac{\log \frac dc}{c-d}. \end{align}
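The cancellation of the $\log b$ terms can also be verified numerically: evaluating $I_2(0, b, c, d)$ at a tiny but nonzero $b$ must reproduce the analytic limit above. A minimal sketch (illustrative mass arguments):

```python
import math

def C0_x0(b, c):
    # C0(0, b, c): the x -> 0 limit of the three-point function
    return math.log(c/b)/(b - c)

def I2(b, c, d):
    # I2(0, b, c, d) = C0(0, b, d) + c * D0(0, b, c, d)
    D0 = (C0_x0(b, c) - C0_x0(b, d))/(c - d)
    return C0_x0(b, d) + c*D0

c, d = 2.0, 4.0
limit = math.log(d/c)/(c - d)   # analytic I2(0, 0, c, d)
```

The individual $C_0$ functions blow up logarithmically as $b\to 0$, while their combination stays finite, which is exactly why the limit must be taken analytically in the code.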
\subsection{Parametrization of vertices}
We are going to express the amplitude in the following in terms of
generic vertices. For this purpose, we parametrize a vertex between
two fermions and one vector or scalar respectively as
\begin{eqnarray}
\label{eq:chiralvertices1 }
& G_A \gamma_\mu P_L + G_B \gamma_\mu P_R\, , & \\
\label{eq:chiralvertices2 }
& G_A P_L + G_B P_R \, . &
\end{eqnarray}
$P_{L, R}=\frac 12 (1 \mp \gamma^5 )$ are the chirality projection operators. In
addition, for the vertex between three vector bosons and the one between one
vector boson and two scalars the conventions are as follows
\begin{eqnarray}
& G_{VVV} \cdot \kl{g_{\mu\nu}(k_2 -k_1 )_\rho+g_{\nu\rho}(k_3 -k_2 )_\mu
+g_{\rho\mu}(k_1 -k_3 )_\nu} \, , \\
& G_{SSV}\cdot (k_1 -k_2 )_\mu\, . &
\end{eqnarray}
Here, $k_i$ are the (ingoing) momenta of the external particles. \section{Generic amplitudes}
\label{app:amplitudes}
We present in the following the expressions for the generic amplitudes obtained with {\tt FeynArts}\xspace and {\tt FormCalc}\xspace. All coefficients that are not explicitly listed are zero. Furthermore, the Wilson coefficients are left--right symmetric, i. e. \begin{equation}
C_{XRR} = C_{XLL} (L \leftrightarrow R) \,, \hspace{1 cm} C_{XRL}
= C_{XLR} (L \leftrightarrow R),
\end{equation}
with $X=S, V$ and where $(L \leftrightarrow R)$ means that
the coefficients of the left and right polarization part of each vertex have to be interchanged. \allowdisplaybreaks
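In an implementation, this symmetry means the $RR$ and $RL$ coefficients need not be coded separately: it suffices to swap the left and right coupling of every vertex. A minimal sketch (the convention that odd indices are left and even indices are right couplings follows the vertex parametrization above; the helper name is ours):

```python
def swap_LR(G):
    """Interchange the left (odd index) and right (even index) coupling
    of every vertex, as needed for C_XLL -> C_XRR and C_XLR -> C_XRL."""
    return {i: (G[i + 1] if i % 2 == 1 else G[i - 1]) for i in G}
```

Applying the swap twice returns the original couplings, as it must.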
\subsection{Tree Level Contributions}
\label{sec:treelevel}
\begin{figure}[htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=4 cm]{FeynAmps/STree1 x} & \includegraphics[width=4 cm]{FeynAmps/STree2 x} \\
(a) & (b) \\
\includegraphics[width=4 cm]{FeynAmps/TTree1 x} & \includegraphics[width=4 cm]{FeynAmps/TTree2 x} \\
(c) & (d) \\
\includegraphics[width=4 cm]{FeynAmps/UTree1 x} & \includegraphics[width=4 cm]{FeynAmps/UTree2 x} \\
(e) & (f)
\end{tabular}
\caption{Tree level diagrams with vertex numbering}
\label{fig:treeleveldiagrams}
\end{figure}
In models beyond the MSSM, $\ensuremath{B^0 _s\to \ell\bar{\ell}}\xspace$ might already be possible at tree--level. This is for instance the case for trilinear $R$-parity violation \cite{Dreiner:2006 gu}. The generic diagrams are given in Figure~\ref{fig:treeleveldiagrams}. The chiral vertices are parametrized as in eqs. ~(\ref{eq:chiralvertices1 })-(\ref{eq:chiralvertices2 }) with $A=1, B=2 $ for vertex 1 and $A=3, B=4 $ for vertex 2.
\begin{eqnarray}
C^{(d)}_{SLR} &=& 16 \pi^2 \frac{-2 G_2 G_3 }{M_V^2 -t} \,, \hspace{1 cm} C^{(d)}_{VLR} = 16 \pi^2 \frac{-G_1 G_3 }{M_V^2 -t} \\
C^{(e)}_{SLL} &=& 16 \pi^2 \frac{-G_1 G_3 }{2 (M_S^2 -u)} \,, \hspace{1 cm} C^{(e)}_{VLL} = 16 \pi^2 \frac{G_2 G_3 }{2 (M_S^2 -u)} \\
C^{(f)}_{SLR} &=& 16 \pi^2 \frac{-2 G_2 G_4 }{M_V^2 -u} \,, \hspace{1 cm} C^{(f)}_{VLR} = 16 \pi^2 \frac{-G_1 G_4 }{M_V^2 -u}
\end{eqnarray}
Here, $s$, $t$ and $u$ are the usual Mandelstam variables. \subsection{Wave Contributions}
\label{sec:waveBapp}
\begin{figure}[htpb]
\centering
\begin{tabular}{cc}
\includegraphics[width=3 cm]{FeynAmps/BsLLwaveS1 SS}
& \includegraphics[width=3 cm]{FeynAmps/BsLLwaveS1 VS} \\
(a) & (b) \\
\includegraphics[width=3 cm]{FeynAmps/BsLLwaveS1 SV} & \includegraphics[width=3 cm]{FeynAmps/BsLLwaveS1 VV}\\
(c) & (d) \\
\includegraphics[width=3 cm]{FeynAmps/BsLLwaveT1 SS}
& \includegraphics[width=3 cm]{FeynAmps/BsLLwaveT1 VS}
\\
(e) & (f) \\
\includegraphics[width=3 cm]{FeynAmps/BsLLwaveT1 SV}
& \includegraphics[width=3 cm]{FeynAmps/BsLLwaveT1 VV}
\\
(g) & (h) \\
\includegraphics[width=3 cm]{FeynAmps/BsLLwaveU1 SS}
& \includegraphics[width=3 cm]{FeynAmps/BsLLwaveU1 VS}
\\
(i) & (j) \\
\includegraphics[width=3 cm]{FeynAmps/BsLLwaveU1 SV}
& \includegraphics[width=3 cm]{FeynAmps/BsLLwaveU1 VV}
\\
(k) & (l) \\
\end{tabular}
\caption[Generic wave diagrams]{Generic wave diagrams. For every diagram there is a crossed version, where the loop attaches to the other external quark. }
\label{fig:wave}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=4 cm]{FeynAmps/BsLLwaveS1 numbers}
\includegraphics[width=4 cm]{FeynAmps/BsLLwaveS2 numbers} \\
\includegraphics[width=4 cm]{FeynAmps/BsLLwaveT1 numbers}
\includegraphics[width=4 cm]{FeynAmps/BsLLwaveT2 numbers} \\
\includegraphics[width=4 cm]{FeynAmps/BsLLwaveU1 numbers}
\includegraphics[width=4 cm]{FeynAmps/BsLLwaveU2 numbers}
\caption{Generic wave diagram vertex numbering}
\label{fig:wavenumbering}
\end{figure}
The generic wave diagrams are given in Figure~\ref{fig:wave}. The internal
quark which attaches to the vector or scalar propagator has generation index
$n$. Couplings that depend on $n$ carry it as an additional index. The chiral
vertices are parametrized as in
eqs. ~(\ref{eq:chiralvertices1 })-(\ref{eq:chiralvertices2 }) with $A=1, B=2 $ for
vertex 1, $A=3, B=4 $ for vertex 2, $A=5, B=6 $ for vertex 3 and $A=7, B=8 $ for
vertex 4, see also Figure~\ref{fig:wavenumbering} for the numbering of the
vertices. If a vertex is labelled $3 ^\prime$ for instance, the corresponding couplings
are $G_5 ^\prime, G_6 ^\prime$. Furthermore, we define the following abbreviations:
\begin{align}
\label{eq:wavebubbleSapp}
f_{S1 } &= \frac{1 }{m_n^2 -m_{i}^2 } \kl{-M_F(G_1 G_{3 n}m_n+G_2 G_{4 n}m_{i})B_0 ^{(i)}+(G_2 G_{3 n}m_nm_{i}+G_1 G_{4 n}m_{i}^2 )B_1 ^{(i)}},
\\
f_{S2 } &= \frac{1 }{m_{j}^2 -m_n^2 }\kl{M_F(G_{2 n}G_4 m_{j}+G_{1 n}G_3 m_n)B_0 ^{(j)}-(G_{2 n}G_3 m_{j}^2 +G_{1 n}G_4 m_{j}
m_n)B_1 ^{(j)}}, \\
\tilde f_{S2 } &= \frac{1 }{m_{j}^2 -m_n^2 }\kl{M_F(G_{1 n}G_3 m_{j}+G_{2 n}G_4 m_n)B_0 ^{(j)}-(G_{1 n}G_4 m_{j}^2 +G_{2 n}G_3 m_{j}
m_n)B_1 ^{(j)}}, \\
\label{eq:wavebubbleVapp}
f_{V1 } &= \frac{1 }{m_n^2 -m_{i}^2 }\kl{2 M_F(G_1 G_{4 n} m_n+G_2 G_{3 n}m_{i})B_0 ^{(i)}+(G_2 G_{4 n}m_nm_{i}+G_1 G_{3 n}m_{i}^2 )B_1 ^{(i)}}, \\
f_{V2 } &= \frac{1 }{m_{j}^2 -m_n^2 }\kl{2 M_F(G_{2 n}G_3 m_j+G_{1 n}G_4 m_n)B_0 ^{(j)}+(G_{2 n}G_4 m_{j}^2 +G_{1 n}G_3 m_{j} m_n)B_1 ^{(j)}}, \\
\tilde f_{V2 } &= \frac{1 }{m_{j}^2 -m_n^2 }\kl{2 M_F(G_{1 n}G_4 m_j+G_{2 n}G_3 m_n)B_0 ^{(j)}+(G_{1 n}G_3 m_{j}^2 +G_{2 n}G_4 m_{j} m_n)B_1 ^{(j)}}. \end{align}
The $m_i, m_j$ are the quark masses and $B_{0,1 }^{(i)} = B_{0,1 }(m_i^2, M_F^2, M_S^2 )$ (or $M_V^2 $ instead of $M_S^2 $). $m_n$ is the mass of the internal quark with generation index $n$. Couplings that involve the internal quark are also labelled with $n$ (e. g. $G_{3 n}$). Using these conventions, the contributions to the Wilson coefficients are
\begin{eqnarray}
C^{(a)}_{SLL}&=& \frac{G_7 }{M_{S0 }^2 -s} \kl{ G_{5 n} f_{S1 }+ G_{5 n}^\prime f_{S2 } } \\
C^{(c)}_{VLL}&=& \frac{G_7 }{M_{V0 }^2 -s} \kl{ -G_{5 n} f_{S1 } - G_{5 n}^\prime \tilde f_{S2 } } \to \frac{G_7 }{M_{V0 }^2 -s} G_5 G_1 G_4 B_1 (0, M_F, M_S) \\
C^{(b)}_{SLL}&=&\frac{2 G_7 }{M_{S0 }^2 -s} \kl{ G_{5 n} f_{V1 } - G_{5 n}^\prime f_{V2 } } \\
C^{(d)}_{VLL}&=&\frac{2 G_7 }{M_{V0 }^2 -s} \kl{-G_{5 n} f_{V1 } + G_{5 n}^\prime \tilde f_{V2 }} \to \frac{2 G_7 }{M_{V0 }^2 -s} G_5 G_1 G_3 B_1 (0, M_F, M_V) \\
C^{(e)}_{SLL} &=& \frac{-1 }{2 (M_{S0 }^2 -t)} \kl{ G_{5 n} G_7 f_{S1 } + G_{5 n}^\prime G_7 ^\prime f_{S2 } } \\
C^{(e)}_{VLR} &=& \frac{ -1 }{2 (M_{S0 }^2 -t)} \kl{ G_{5 n} G_8 f_{S1 } + G_{6 n}^\prime G_7 ^\prime \tilde f_{S2 } } \\
C^{(g)}_{SLR}&=& \frac{+2 }{M_{V0 }^2 -t} \kl{ G_{5 n} G_8 f_{S1 } + G_{6 n}^\prime G_7 ^\prime f_{S2 } } \\
C^{(g)}_{VLL}&=&\frac{-1 }{M_{V0 }^2 -t} \kl{ G_{5 n}G_7 f_{S1 } + G_{5 n}^\prime G_7 ^\prime \tilde f_{S2 } } \\
C^{(f)}_{SLL}&=& \frac{-1 }{M_{S0 }^2 -t} \kl{ G_{5 n}G_7 f_{V1 } - G_{5 n}^\prime G_7 ^\prime f_{V2 } } \\
C^{(f)}_{VLR} &=& \frac{-1 }{M_{S0 }^2 -t} \kl{ G_{5 n}G_8 f_{V1 } - G_{6 n}^\prime G_7 ^\prime \tilde f_{V2 } } \\
C^{(h)}_{SLR} &=& \frac{+4 }{M_{V0 }^2 -t} \kl{ G_{5 n}G_8 f_{V1 } - G_{6 n}^\prime G_7 ^\prime f_{V2 } } \\
C^{(h)}_{VLL} &=& \frac{+2 }{M_{V0 }^2 -t} \kl{ - G_{5 n}G_7 f_{V1 } + G_{5 n}^\prime G_7 ^\prime \tilde f_{V2 } }\\
C^{(i)}_{SLL} &=& \frac{-1 }{2 (M_{S0 }^2 -u)} \kl{ G_{5 n}G_7 f_{S1 } + G_{5 n}^\prime G_7 ^\prime f_{S2 } } \\
C^{(i)}_{VLL} &=& \frac{+1 }{2 (M_{S0 }^2 -u)} \kl{ G_{5 n}G_8 f_{S1 } + G_{6 n}^\prime G_{7 }^\prime \tilde f_{S2 } }\\
C^{(k)}_{SLR}&=& \frac{-2 }{M_{V0 }^2 -u}\kl{G_{6 n}G_8 f_{S1 }+G_{6 n}^\prime G_8 ^\prime f_{S2 }} \\
C^{(k)}_{VLR} &=& \frac{-1 }{M_{V0 }^2 -u} \kl{G_{6 n}G_7 f_{S1 }+G_{5 n}^\prime G_8 ^\prime \tilde f_{S2 }} \\
C^{(j)}_{SLL}&=& \frac{-1 }{M_{S0 }^2 -u} \kl{ G_{5 n} G_7 \tilde f_{V1 } - G_{5 n}^\prime G_7 ^\prime f_{V2 } } \\
C^{(j)}_{VLL} &=& \frac{-1 }{M_{S0 }^2 -u} \kl{ - G_{5 n}G_8 \tilde f_{V1 } + G_{6 n}^\prime G_7 ^\prime \tilde f_{V2 } } \\
C^{(l)}_{SLR} &=& \frac{-4 }{M_{V0 }^2 -u} \kl{ G_{6 n}G_8 \tilde f_{V1 } - G_{6 n}^\prime G_8 ^\prime f_{V2 } }\\
C^{(l)}_{VLR} &=& \frac{-2 }{M_{V0 }^2 -u} \kl{ G_{6 n}G_7 \tilde f_{V1 } - G_{5 n}^\prime G_8 ^\prime \tilde f_{V2 } }
\end{eqnarray}
\subsection{Penguin Contributions}
\label{sec:penguinB}
\begin{figure}[htpb]
\centering
\includegraphics[width=4 cm]{FeynAmps/PenV}
\caption{Vertex number conventions for a representative penguin diagram}
\label{fig:penguinvertex}
\end{figure}
\begin{figure}[hbt]
\centering
\begin{tabular}{cccc}
\includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinScalarFFS} & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinVectorFFS} & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinScalarSSF} & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinVectorSSF} \\
(a) & (b) & (c) & (d) \\
\includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinScalarFFV} & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinVectorFFV} &
\includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinScalarFVV} & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinVectorFVV} \\
(e) & (f) & (g) & (h) \\
\includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinScalarFSV1 } & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinVectorFSV1 } &
\includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinScalarFSV2 } & \includegraphics[width=3 cm]{FeynAmps/BtoLLpPenguinVectorFSV2 } \\
(i) & (j) & (k) & (l)
\end{tabular}
\caption{Generic penguin diagrams}
\label{fig:penguin}
\end{figure}
Diagrams with scalar propagators have $C_{VXY}=0 $ and those with vector propagators have \mbox{$C_{SXY}=0 $}. The vertex number conventions are given in fig. ~\ref{fig:penguinvertex} and all possible diagrams are depicted in Figure~\ref{fig:penguin}. The chiral vertices are parametrized as in eqs. ~(\ref{eq:chiralvertices1 })-(\ref{eq:chiralvertices2 }) with $A=1, B=2 $ for vertex 1, $A=3, B=4 $ for vertex 2 and $A=7, B=8 $ for vertex 4. Vertex 3 can be a chiral vertex; in this case $A=5, B=6 $ is used. Otherwise, we denote it with an index $5 $ and give the kind of vertex as an additional subscript.
\begin{align}
C^{(c)}_{SLL}&= \frac 1 {M_{S0 }^2 -s}G_1 G_3 G_{5, SSS}G_7 M_F C^{(c, d)}_0 \\
C^{(c)}_{SLR}&= \frac 1 {M_{S0 }^2 -s}G_1 G_3 G_{5, SSS}G_8 M_F C^{(c, d)}_0 \\
C^{(d)}_{VLL}&= - \frac 2 {M_{V0 }^2 -s} G_1 G_4 G_{5, SSV} G_7 C^{(c, d)}_{00 }\\
C^{(d)}_{VLR}&= - \frac 2 {M_{V0 }^2 -s} G_1 G_4 G_{5, SSV} G_8 C^{(c, d)}_{00 } \\
C^{(e)}_{SLL}&= -\frac 4 {M_{S0 }^2 -s} G_1 G_4 G_7 \kl{G_5 B^{(e, f)}_0 +(G_6 M_{F1 }M_{F2 }+G_5 M_V^2 )C^{(e, f)}_0 } \\
C^{(e)}_{SLR}&= -\frac 4 {M_{S0 }^2 -s} G_1 G_4 G_8 \kl{G_5 B^{(e, f)}_0 +(G_6 M_{F1 }M_{F2 }+G_5 M_V^2 )C^{(e, f)}_0 } \\
C^{(f)}_{VLL}&= \frac{2 }{M_{V0 }^2 -s} G_1 G_3 G_7 \kl{G_5 B^{(e, f)}_0 +(-G_6 M_{F1 }M_{F2 }+G_5 M_V^2 )C^{(e, f)}_0 -2 G_5 C^{(e, f)}_{00 }} \\
C^{(f)}_{VLR}&= \frac{2 }{M_{V0 }^2 -s} G_1 G_3 G_8 \kl{G_5 B^{(e, f)}_0 +(-G_6 M_{F1 }M_{F2 }+G_5 M_V^2 )C^{(e, f)}_0 -2 G_5 C^{(e, f)}_{00 }} \\
C^{(g)}_{SLL}&= \frac 4 {M_{S0 }^2 -s} G_1 G_4 G_{5, SVV} G_7 M_{F} C^{(g, h)}_0 \\
C^{(g)}_{SLR}&= \frac 4 {M_{S0 }^2 -s} G_1 G_4 G_{5, SVV} G_8 M_{F} C^{(g, h)}_0 \\
C^{(h)}_{VLL}&= -\frac 2 {M_{V0 }^2 -s} G_1 G_3 G_{5, VVV}G_7 \kl{B^{(g, h)}_0 +M_{F}^2 C^{(g, h)}_0 +2 C^{(g, h)}_{00 }} \\
C^{(h)}_{VLR}&= -\frac 2 {M_{V0 }^2 -s} G_1 G_3 G_{5, VVV}G_8 \kl{B^{(g, h)}_0 +M_{F}^2 C^{(g, h)}_0 +2 C^{(g, h)}_{00 }} \\
C^{(i)}_{SLL}&= \frac 1 {M_{S0 }^2 -s} G_1 G_3 G_{5, SSV} G_7 \kl{B^{(i-l)}_0 +M_{F}^2 C^{(i-l)}_0 } \\
C^{(i)}_{SLR}&= \frac 1 {M_{S0 }^2 -s} G_1 G_3 G_{5, SSV} G_8 \kl{B^{(i-l)}_0 +M_{F}^2 C^{(i-l)}_0 } \\
C^{(k)}_{SLL}&= -\frac 1 {M_{S0 }^2 -s} G_1 G_4 G_{5, SSV} G_7 \kl{B^{(i-l)}_0 +M_{F}^2 C^{(i-l)}_0 } \\
C^{(k)}_{SLR}&= -\frac 1 {M_{S0 }^2 -s} G_1 G_4 G_{5, SSV} G_8 \kl{B^{(i-l)}_0 +M_{F}^2 C^{(i-l)}_0 } \\
C^{(j)}_{VLL}&= \frac{1 }{M_{V0 }^2 -s} G_1 G_4 G_{5, SVV}G_7 M_F C^{(i-l)}_0 \\
C^{(j)}_{VLR}&= \frac{1 }{M_{V0 }^2 -s} G_1 G_4 G_{5, SVV}G_8 M_F C^{(i-l)}_0 \\
C^{(l)}_{VLL}&= \frac{1 }{M_{V0 }^2 -s} G_1 G_3 G_{5, SVV}G_7 M_F C^{(i-l)}_0 \\
C^{(l)}_{VLR}&= \frac{1 }{M_{V0 }^2 -s} G_1 G_3 G_{5, SVV}G_8 M_F C^{(i-l)}_0
\end{align}
Here, the arguments of the Passarino-Veltman integrals are as follows, with $s=M_{B^0 _q}^2 $:
\begin{align}
C_X^{(a, b)} &=C_X (s,0,0, M_{F2 }^2, M_{F1 }^2, M_{S}^2 ) \, \hspace{1 cm}
B_X^{(a, b)} = B_X(s, M_{F1 }^2, M_{F2 }^2 ) \\
C_X^{(c, d)} & = C_X(0, s,0, M_{F}^2, M_{S1 }^2, M_{S2 }^2 ) \\
C_X^{(e, f)} &=C_X(s,0,0, M_{F2 }^2, M_{F1 }^2, M_{V}^2 ) \, \hspace{1 cm}
B_X^{(e, f)} = B_X(s, M_{F1 }^2, M_{F2 }^2 )\\
C_X^{(g, h)}&=C_X(0, s,0, M_{F}^2, M_{V1 }^2, M_{V2 }^2 ) \, \hspace{1 cm}
B_X^{(g, h)}=B_X(s, M_{V1 }^2, M_{V2 }^2 ) \\
C_X^{(i-l)} &= C_X(0, s,0, M_{F}^2, M_{S}^2, M_{V}^2 ) \, \hspace{1 cm}
B_X^{(i-l)} = B_X(s, M_{S}^2, M_{V}^2 )
\end{align}
\subsection{Box Contributions}
\begin{figure}[htpb]
\centering
\begin{tabular}{ccc}
\includegraphics[width=4 cm]{FeynAmps/BoxIns1 } & \includegraphics[width=4 cm]{FeynAmps/BoxIns2 } & \includegraphics[width=4 cm]{FeynAmps/BoxIns4 } \\
\end{tabular}
\caption{Vertex number conventions for a set of representative box diagrams}
\label{fig:boxvertex}
\end{figure}
\begin{figure}[hbt]
\centering
\begin{tabular}{ccc}
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS1ins2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS2ins4} \\
(a) & (b) & (c) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxS1ins4} \\
(d) & (e) & (f) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV1ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV2ins4} \\
(g) & (h) & (i) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS1ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS2ins4} \\
(j) & (k) & (l)
\end{tabular}
\caption{Generic box diagrams I}
\label{fig:box1 }
\end{figure}
\begin{figure}[hbt]
\centering
\begin{tabular}{ccc}
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxSV1ins4} \\
(m) & (n) & (o) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxVS1ins4} \\
(p) & (q) & (r) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV1} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV1ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV1ins4} \\
(s) & (t) & (u) \\
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV2} & \includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV2ins2} &
\includegraphics[width=3cm]{FeynAmps/BtoLLpBoxV2ins4} \\
(v) & (w) & (x)
\end{tabular}
\caption{Generic box diagrams II}
\label{fig:box2 }
\end{figure}
The vertex number conventions for boxes are shown in Fig.~\ref{fig:boxvertex}, while all possible generic diagrams are given in Figs.~\ref{fig:box1 } and \ref{fig:box2 }. All vertices are chiral and parametrized as in Eqs.~(\ref{eq:chiralvertices1 })-(\ref{eq:chiralvertices2 }), with $A=1$, $B=2$ for vertex 1, $A=3$, $B=4$ for vertex 2, $A=5$, $B=6$ for vertex 3 and $A=7$, $B=8$ for vertex 4. If a loop contains two particles of the same type (say, two fermions), the one between vertices 1 and 2 (or 2 and 3) is labelled $F1$ and the other $F2$.
\begin{align}
C^{(c)}_{SLL} &= \frac 12 G_1 G_3 G_5 G_7 M_{F1 }M_{F2 }D^{(a-c)}_0 \\
C^{(c)}_{SLR} &= -2 G_1 G_4 G_5 G_8 D^{(a-c)}_{00 } \\
C^{(c)}_{VLL} &= -\frac 12 G_2 G_4 G_5 G_7 M_{F1 }M_{F2 } D^{(a-c)}_0 \\
C^{(c)}_{VLR} &= -G_2 G_3 G_5 G_8 D^{(a-c)}_{00 } \\
C^{(d)}_{SLL} &= \frac 12 G_1 G_3 G_5 G_7 M_{F1 }M_{F2 } \cdot D^{(d-f)}_0 \\
C^{(d)}_{SLR} &= 2 G_1 G_3 G_6 G_8 \cdot D^{(d-f)}_{00 } \\
C^{(d)}_{VLL}&= -G_2 G_3 G_6 G_7 \cdot D^{(d-f)}_{00 } \\
C^{(d)}_{VLR}&=\frac 12 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 } \cdot D^{(d-f)}_{0 } \\
C^{(e)}_{SLL} &= \frac 12 G_1 G_3 G_5 G_7 M_{F1 }M_{F2 } \cdot D^{(d-f)}_0 \\
C^{(e)}_{SLR} &= 2 G_1 G_3 G_6 G_8 \cdot D^{(d-f)}_{00 } \\
C^{(e)}_{VLL}&= -\frac 12 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 }\cdot D^{(d-f)}_{0 } \\
C^{(e)}_{VLR}&= G_2 G_3 G_6 G_7 \cdot D^{(d-f)}_{00 } \\
C^{(f)}_{SLL} &= \frac 12 G_1 G_3 G_5 G_7 M_{F1 }M_{F2 }D^{(d-f)}_0 \\
C^{(f)}_{SLR} &= -2 G_1 G_4 G_5 G_8 D^{(d-f)}_{00 } \\
C^{(f)}_{VLL} &= G_2 G_4 G_5 G_7 D^{(d-f)}_{00 } \\
C^{(f)}_{VLR} &= \frac 12 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 }D^{(d-f)}_0 \\
C^{(g)}_{SLL}&= 2 G_1 G_3 G_6 G_7 \kl{C^{(g-i)}_0 +M_{F1 }^2 D^{(g-i)}_0 -2 D^{(g-i)}_{00 }}\\
C^{(g)}_{SLR}&= 2 G_1 G_3 G_5 G_8 \kl{C^{(g-i)}_0 +M_{F1 }^2 D^{(g-i)}_0 -2 D^{(g-i)}_{00 }}\\
C^{(g)}_{VLL}&= G_2 G_3 G_5 G_7 M_{F1 }M_{F2 } D^{(g-i)}_0 \\
C^{(g)}_{VLR}&= G_2 G_3 G_6 G_8 M_{F1 }M_{F2 } D^{(g-i)}_0 \\
C^{(h)}_{SLL}&= -4 G_1 G_3 G_5 G_7 D^{(g-i)}_{00 }\\
C^{(h)}_{SLR}&= -4 G_1 G_3 G_6 G_8 D^{(g-i)}_{00 }\\
C^{(h)}_{VLL}&= G_2 G_3 G_5 G_8 M_{F1 }M_{F2 } D^{(g-i)}_0 \\
C^{(h)}_{VLR}&= G_2 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(g-i)}_0 \\
C^{(i)}_{SLL}&= -G_1 G_3 G_5 G_7 \kl{C^{(g-i)}_0 +M_S^2 D^{(g-i)}_0 -8 D^{(g-i)}_{00 }} \\
C^{(i)}_{SLR}&= 2 G_1 G_3 G_5 G_8 M_{F1 }M_{F2 }D^{(g-i)}_0 \\
C^{(i)}_{VLL}&= G_2 G_3 G_5 G_7 \kl{C^{(g-i)}_0 +M_S^2 D^{(g-i)}_0 -2 D^{(g-i)}_{00 }} \\
C^{(i)}_{VLR}&= G_2 G_4 G_5 G_8 M_{F1 }M_{F2 } D^{(g-i)}_0 \\
C^{(j)}_{SLL}&= 2 G_2 G_3 G_5 G_7 \kl{C^{(j-l)}_0 +M_{F1 }^2 D^{(j-l)}_0 -2 D^{(j-l)}_{00 }}\\
C^{(j)}_{SLR}&= 2 G_2 G_3 G_6 G_8 \kl{C^{(j-l)}_0 +M_{F1 }^2 D^{(j-l)}_0 -2 D^{(j-l)}_{00 }}\\
C^{(j)}_{VLL}&= G_1 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(j-l)}_0 \\
C^{(j)}_{VLR}&= G_1 G_3 G_5 G_8 M_{F1 }M_{F2 } D^{(j-l)}_0 \\
C^{(k)}_{SLL}&= -4 G_2 G_3 G_5 G_8 D^{(j-l)}_{00 }\\
C^{(k)}_{SLR}&= -4 G_2 G_3 G_6 G_7 D^{(j-l)}_{00 }\\
C^{(k)}_{VLL}&= G_1 G_3 G_5 G_7 M_{F1 }M_{F2 } D^{(j-l)}_0 \\
C^{(k)}_{VLR}&= G_1 G_3 G_6 G_8 M_{F1 }M_{F2 } D^{(j-l)}_0 \\
C^{(l)}_{SLL}&= -G_1 G_3 G_5 G_8 (C^{(j-l)}_0 +M_V^2 D^{(j-l)}_0 -8 D^{(j-l)}_{00 }) \\
C^{(l)}_{SLR}&= 2 G_1 G_4 G_5 G_7 M_{F1 }M_{F2 }D^{(j-l)}_0 \\
C^{(l)}_{VLL}&= G_2 G_4 G_5 G_8 (C^{(j-l)}_0 +M_V^2 D^{(j-l)}_0 -2 D^{(j-l)}_{00 }) \\
C^{(l)}_{VLR}&= G_2 G_3 G_5 G_7 M_{F1 }M_{F2 } D^{(j-l)}_0 \\
C^{(m)}_{SLL}&= - G_1 G_3 G_6 G_7 \left( C^{(m-o)}_0 +M_S^2 D^{(m-o)}_0 - \frac 14 (13 G_1 G_3 G_6 G_7 +3 G_2 G_4 G_5 G_8 )D^{(m-o)}_{00 }\right) \\
C^{(m)}_{SLR}&=-2 G_1 G_3 G_5 G_8 M_{F1 }M_{F2 } D^{(m-o)}_0 \\
C^{(m)}_{VLL}&= G_2 G_3 G_5 G_7 M_{F1 }M_{F2 } D^{(m-o)}_0 \\
C^{(m)}_{VLR}&= - G_2 G_3 G_6 G_8 \kl{C^{(m-o)}_0 +M_S^2 D^{(m-o)}_0 -2 D^{(m-o)}_{00 }}\\
C^{(n)}_{SLL}&= 8 G_1 G_3 G_6 G_8 D^{(m-o)}_{00 }\\
C^{(n)}_{SLR}&= 2 G_1 G_3 G_5 G_7 M_{F1 }M_{F2 } D^{(m-o)}_0 \\
C^{(n)}_{VLL}&= -2 G_2 G_3 G_6 G_7 D^{(m-o)}_{00 }\\
C^{(n)}_{VLR}&= G_2 G_3 G_5 G_8 M_{F1 }M_{F2 }D^{(m-o)}_0 \\
C^{(o)}_{SLL}&= -\frac 14 (13 G_1 G_3 G_5 G_7 +3 G_2 G_4 G_6 G_8 )D^{(m-o)}_{00 } \\
C^{(o)}_{SLR}&= -2 G_1 G_4 G_5 G_8 M_{F1 }M_{F2 }D^{(m-o)}_0 \\
C^{(o)}_{VLL}&= G_2 G_4 G_5 G_7 M_{F1 }M_{F2 } D^{(m-o)}_0 \\
C^{(o)}_{VLR}&= 2 G_2 G_3 G_5 G_8 D^{(m-o)}_{00 } \\
C^{(p)}_{SLL} &=- G_2 G_3 G_5 G_7 \left( C^{(p-r)}_0 +M_V^2 D^{(p-r)}_0 - \frac 14 (13 G_2 G_3 G_5 G_7 +3 G_1 G_4 G_6 G_8 )D^{(p-r)}_{00 } \right)\\
C^{(p)}_{SLR}&= -2 G_2 G_3 G_6 G_8 M_{F1 }M_{F2 } D^{(p-r)}_0 \\
C^{(p)}_{VLL}&= G_1 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(p-r)}_0 \\
C^{(p)}_{VLR}&= - G_1 G_3 G_5 G_8 \kl{C^{(p-r)}_0 +M_V^2 D^{(p-r)}_0 -2 D^{(p-r)}_{00 }}\\
C^{(q)}_{SLL}&= 8 G_1 G_3 G_5 G_7 D^{(p-r)}_{00 }\\
C^{(q)}_{SLR}&= 2 G_1 G_3 G_6 G_8 M_{F1 }M_{F2 } D^{(p-r)}_0 \\
C^{(q)}_{VLL}&= -2 G_2 G_3 G_5 G_8 D^{(p-r)}_{00 }\\
C^{(q)}_{VLR}&= G_2 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(p-r)}_0 \\
C^{(r)}_{SLL}&= -\frac 14 (13 G_2 G_4 G_5 G_7 +3 G_1 G_3 G_6 G_8 )D^{(p-r)}_{00 } \\
C^{(r)}_{SLR}&= -2 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 }D^{(p-r)}_0 +\frac 34 (G_2 G_4 G_5 G_7 -G_1 G_3 G_6 G_8 )D^{(p-r)}_{00 } \\
C^{(r)}_{VLL}&= G_1 G_3 G_5 G_7 M_{F1 }M_{F2 }D^{(p-r)}_0 \\
C^{(r)}_{VLR}&= 2 G_1 G_4 G_5 G_8 D^{(p-r)}_{00 } \\
C^{(s)}_{SLL}&= -4 G_2 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(s-u)}_{0 }\\
C^{(s)}_{SLR}&= -4 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 } D^{(s-u)}_0 \\
C^{(s)}_{VLL}&= -4 G_1 G_3 G_5 G_7 \kl{C^{(s-u)}_0 +M_{F1 }^2 D^{(s-u)}_0 -3 D^{(s-u)}_{00 }}\\
C^{(s)}_{VLR}&= -4 G_1 G_3 G_6 G_8 \kl{C^{(s-u)}_0 +M_{F1 }^2 D^{(s-u)}_0 }\\
C^{(t)}_{SLL}&= -4 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 } D^{(s-u)}_{0 }\\
C^{(t)}_{SLR}&= -4 G_2 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(s-u)}_0 \\
C^{(t)}_{VLL}&= 16 G_1 G_3 G_5 G_7 D^{(s-u)}_{00 }\\
C^{(t)}_{VLR}&= 4 G_1 G_3 G_6 G_8 D^{(s-u)}_{00 }\\
C^{(u)}_{SLL}&= -4 G_2 G_4 G_5 G_7 M_{F1 }M_{F2 } D^{(s-u)}_{0 }\\
C^{(u)}_{SLR}&= -8 G_2 G_3 G_5 G_8 D^{(s-u)}_{00 } \\
C^{(u)}_{VLL}&= 16 G_1 G_3 G_5 G_7 D^{(s-u)}_{00 }\\
C^{(u)}_{VLR}&= 2 G_1 G_4 G_5 G_8 M_{F1 }M_{F2 } D^{(s-u)}_{0 }\\
C^{(v)}_{SLL}&= 8 G_2 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(v-x)}_{0 }\\
C^{(v)}_{SLR}&= 8 G_2 G_3 G_5 G_8 \kl{C^{(v-x)}_0 +M_{V1 }^2 D^{(v-x)}_0 } \\
C^{(v)}_{VLL}&= -4 G_1 G_3 G_5 G_7 \kl{C^{(v-x)}_0 +M_{V1 }^2 D^{(v-x)}_0 -3 D^{(v-x)}_{00 }}\\
C^{(v)}_{VLR}&= 2 G_1 G_3 G_6 G_8 M_{F1 }M_{F2 } D^{(v-x)}_{0 }\\
C^{(w)}_{SLL}&= 8 G_1 G_3 G_6 G_8 M_{F1 }M_{F2 } D^{(v-x)}_{0 }\\
C^{(w)}_{SLR}&= 32 G_1 G_3 G_5 G_7 D^{(v-x)}_{00 } \\
C^{(w)}_{VLL}&= -2 G_2 G_3 G_6 G_7 M_{F1 }M_{F2 } D^{(v-x)}_0 \\
C^{(w)}_{VLR}&= 4 G_2 G_3 G_5 G_8 D^{(v-x)}_{00 }\\
C^{(x)}_{SLL}&= -4 G_1 G_4 G_5 G_8 M_{F1 }M_{F2 } D^{(v-x)}_{0 }\\
C^{(x)}_{SLR}&= -8 G_1 G_3 G_5 G_7 (C^{(v-x)}_0 +M_{V1 }^2 D^{(v-x)}_0 -3 D^{(v-x)}_{00 }) \\
C^{(x)}_{VLL}&= -2 G_2 G_3 G_5 G_8 M_{F1 }M_{F2 } D^{(v-x)}_0 \\
C^{(x)}_{VLR}&= -4 G_2 G_4 G_5 G_7 (C^{(v-x)}_0 +M_{V1 }^2 D^{(v-x)}_0 )
\end{align}
The arguments of the loop functions for the different amplitudes are
\begin{align}
D_X^{(a-c)}&= D_X(M_{F1 }^2, M_{F2 }^2, M_{S1 }^2, M_{S2 }^2 ) \\
D_X^{(d-f)}&= D_X(M_{F1 }^2, M_{F2 }^2, M_{S1 }^2, M_{S2 }^2 ) \\
C_X^{(g-i)}&= C_X(\vec 0 _3, M_{F2 }^2, M_V^2, M_{S}^2 ) \, \hspace{1 cm}
D_X^{(g-i)}= D_X(M_{F1 }^2, M_{F2 }^2, M_{V}^2, M_{S}^2 )\\
C_X^{(j-l)}&= C_X(\vec 0 _3, M_{F2 }^2, M_S^2, M_{V}^2 ) \, \hspace{1 cm}
D_X^{(j-l)}= D_X(M_{F1 }^2, M_{F2 }^2, M_{S}^2, M_{V}^2 )\\
C_X^{(m-o)}&= C_X(\vec 0 _3, M_{F2 }^2, M_{F1 }^2, M_V^2 ) \, \hspace{1 cm}
D_X^{(m-o)}= D_X(M_{F2 }^2, M_{F1 }^2, M_S^2, M_V^2 )\\
C_X^{(p-r)}&= C_X(\vec 0 _3, M_{F2 }^2, M_{F1 }^2, M_S^2 ) \, \hspace{1 cm}
D_X^{(p-r)}= D_X(M_{F2 }^2, M_{F1 }^2, M_V^2, M_S^2 ) \\
C_X^{(s-u)}&= C_X(\vec 0 _3, M_{F2 }^2, M_{V1 }^2, M_{V2 }^2 )\, \hspace{1 cm}
D_X^{(s-u)}= D_X(M_{F1 }^2, M_{F2 }^2, M_{V1 }^2, M_{V2 }^2 ) \\
C_X^{(v-x)}&= C_X(\vec 0 _3, M_{F2 }^2, M_{F1 }^2, M_{V2 }^2 ) \, \hspace{1 cm}
D_X^{(v-x)}= D_X(M_{F2 }^2, M_{F1 }^2, M_{V1 }^2, M_{V2 }^2 )
\end{align}
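As a numerical sanity check of these conventions, the scalar three-point function at vanishing external momenta, $C_0(0,0,0;m_1^2,m_2^2,m_3^2)$, can be evaluated directly from its Feynman-parameter representation; in the equal-mass limit it reduces to $-1/(2m^2)$. The sketch below is an illustration of ours (not part of this appendix); the function name is invented, and a LoopTools-style sign convention is assumed.

```python
from scipy.integrate import dblquad

def c0_zero_momentum(a, b, c):
    """Scalar Passarino-Veltman C0 at vanishing external momenta.

    C0(0,0,0; a, b, c) = -int_0^1 dx int_0^{1-x} dy  1/(x*a + y*b + (1-x-y)*c),
    with a, b, c the squared internal masses (LoopTools-style sign convention assumed).
    """
    integrand = lambda y, x: 1.0 / (x * a + y * b + (1.0 - x - y) * c)
    # Outer variable x runs over [0, 1]; inner variable y over [0, 1-x].
    val, _ = dblquad(integrand, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
    return -val

# Equal-mass limit: C0(0,0,0; m^2, m^2, m^2) = -1/(2 m^2)
print(c0_zero_momentum(1.0, 1.0, 1.0))   # -> approximately -0.5
```

The same quadrature can be used to cross-check the totally symmetric behaviour of $C_0$ under permutations of its mass arguments at zero external momenta.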
\end{appendix}
\section{Introduction}
Flow-induced pressure fluctuations are ubiquitous in engineering applications, such as aviation technologies~\cite{colonius2004computational}, ventilation~\cite{waye1997effects} and biomechanics~\cite{seo2011high}. In many of these applications, the major task is to identify the noise source and then to reduce or eliminate it (see e.g. rotorcraft~\cite{colonius2004computational} and ventilation systems~\cite{waye1997effects}). Flow-induced sound also plays a crucial role in biomechanics, for example in phonation (flow-induced sound in the larynx~\cite{zhao2002computational}) and heart murmurs (blood-flow-induced noise~\cite{el2005computer}). Accurate prediction of the flow-induced noise in these systems can be used to optimize low-noise engineering designs, and acoustic prediction in biomechanics also has potential in medical diagnosis~\cite{seo2011high}. However, most of these applications involve fluid--structure--acoustics interactions with complex geometries, making accurate prediction of the sound generation extremely challenging. Numerical methods for computational aeroacoustics (CAA) can be categorized into three groups~\cite{inoue2002sound}. The first group calculates the near-field flow dynamics using computational fluid dynamics (CFD) techniques and predicts the far-field sound with acoustic analogies, such as the pioneering analogy proposed by Lighthill~\cite{lighthill1952sound} and its extension to include the influence of solid boundaries on the sound field~\cite{curle1955influence}. Since Lighthill's work, great efforts have been made to improve this approach~\cite{schram2009boundary, farassat1988extension, di1997new}. The second group, also called acoustic/viscous splitting methods, decomposes the compressible viscous equations into incompressible and perturbed compressible parts.
The perturbation in the far field represents the acoustic quantities~\cite{hardin1994acoustic}. The third group employs direct numerical simulation (DNS) to solve the compressible Navier--Stokes equations. With DNS, the generation and propagation of sound in the near and intermediate fields can be computed without suffering from limitations such as low Mach number or compactness of the source region~\cite{inoue2002sound, schlanderer2017boundary}. In order to resolve the sound pressure, which is much smaller than the ambient pressure~\cite{colonius2004computational}, high-order finite difference methods (e.g. compact schemes~\cite{lele1992compact} and Weighted/Targeted Essentially Non-Oscillatory schemes~\cite{liu1994weighted, fu2016family}) on body-conformal meshes are widely used. In these methods, however, it is difficult to generate high-quality structured meshes around complex and moving boundaries. The finite volume method can handle complex boundaries, but moving boundaries remain challenging; in addition, it suffers from low-order accuracy and introduces extra dissipation and dispersion~\cite{sun2012immersed}. The high-order overset grid method~\cite{sherer2005high} has been successfully applied to CAA in complex geometries, but its applications are limited by the complexity of implementation and low efficiency at low Mach numbers~\cite{seo2011high}. As an efficient method for fluid--structure interaction (FSI), the immersed boundary (IB) method, first developed by Peskin~\cite{peskin1977numerical, peskin2002immersed}, has been extended to acoustics~\cite{seo2011high, sun2012immersed}. In this method, the structured mesh is generated once and kept fixed during the computation, so mesh regeneration is avoided.
Because of its simple boundary treatment, the IB method has gained popularity for a wide range of applications~\cite{mittal2005immersed, huang2007simulation, tian2014fluid, sotiropoulos2014immersed}. In the hybrid method proposed by Seo and Mittal~\cite{seo2011high}, the IB method and the hydrodynamic/acoustic splitting technique are combined to handle acoustic problems involving complex geometries in low-Mach-number flow; it encounters challenges when the Mach number rises and the complex geometries move. The penalty IB (pIB) method is a typical IB method~\cite{kim2007penalty}, in which the IB is conceptually split into two Lagrangian components: one component is massless and interacts with the fluid exactly as in the traditional IB method, while the other component, carrying mass, is connected to the massless component by virtual stiff springs. However, most previous studies based on the pIB method focus on FSI problems without acoustics~\cite{mittal2005immersed, tian2011efficient, huang2010three, qiu2016boundary, ghias2007sharp, wang2017immersed}. Sun et al. presented an IB method that considers the linear Euler equations for acoustic scattering modeling~\cite{sun2012immersed}; in their work, moving boundaries and fluid viscosity were not explored. The ability of the pIB method to handle fluid--structure--acoustics interactions involving complex geometries has thus not been well explored, which is the motivation for this work. Flapping foils have drawn growing attention recently, due to their high performance in micro aerial vehicles (MAVs)~\cite{tian2013force, shahzad2016effects, shahzad2018effects, shahzad2018effectspof, tian2018aerodynamic} and power generators~\cite{liu2016discrete, liu2017flapping, tian2014improving, liu2019kinematic}. The aerodynamic characteristics of flapping foils, including thrust, lift and power efficiency, have been studied extensively~\cite{tian2013force, yin2010effect}.
However, the aeroacoustics induced by a flapping foil is not well understood~\cite{geng2017effect}. Although insects achieve high flight performance at low noise levels, the flapping sound generated by the foil may affect biological functions, such as communication using aposematic signals carried by the locomotion-induced sound~\cite{geng2017effect}. The study of flapping-foil sound may also have applications in medical inspection; for example, human phonation behaves like flapping-foil-induced sound~\cite{zhao2002computational}. Because of this broad relevance, sound generation by flapping foils is studied numerically with the present method, and the effects of both the elasticity and the geometrical shape of the foil on the force and sound generation are analyzed. The numerical examples presented here also enrich the limited database of fluid--structure--acoustics interactions. In this paper, the immersed boundary method introduced in our previous work~\cite{wang2017immersed} is extended to fluid--structure--acoustics interactions involving large deformations and complex geometries. The organization of the paper is as follows. The numerical approach is briefly introduced in Section 2. Several validations, including acoustic waves scattered by a stationary cylinder, sound generation by a stationary and a rotating cylinder, sound generation by an insect in hovering flight, deformation of a red blood cell induced by acoustic waves, and acoustic waves scattered by a stationary sphere, are presented in Section 3. Application of the method to flapping-foil-induced acoustics is presented in Section 4. Finally, conclusions are given in Section 5.
\section{Numerical method}
The current numerical method includes three important components: the structure, the compressible fluid, and the fluid--structure interaction. Without loss of generality, a flexible plate immersed in a two-dimensional fluid is used as an example to introduce the structure dynamics. The plate is assumed to be elastic and its dynamics is governed by the following nonlinear equation~\cite{huang2007simulation, tian2010interaction, wang2017immersed}
\begin{equation}
\rho_s \frac{\partial^2 \boldsymbol{X}}{\partial t^2 }+ \frac{\partial}{\partial s} \left[(K_S |{\frac{\partial \boldsymbol{X}}{\partial s}}|-1 ) {\frac{\partial \boldsymbol{X}}{\partial s}}\right] + K_B {\frac{\partial^4 \boldsymbol{X}}{\partial s^4 }}= \boldsymbol{F_f},
\label{eq:beamequation}
\end{equation}
where $\boldsymbol{X}$ is the Lagrangian coordinate of the flexible beam, $\rho_s$ is the linear density, $K_S$ and $K_B$ are respectively the stretching and bending rigidities, $s$ is the arc coordinate, and $\boldsymbol{F}_f$ is the external force acting on the beam. The absolute nodal coordinate formulation (ANCF) proposed by Shabana~\cite{shabana1997flexible, shabana1998application, shabana2013dynamics}
is adopted to solve Eq.~(\ref{eq:beamequation}); this method was combined with the IB method by Wang et al.~\cite{wang2017immersed}. The flow dynamics considered here is governed by the compressible viscous Navier--Stokes equations
\begin{eqnarray}
&&\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y}+ \frac{\partial H}{\partial z}-
\frac{1}{{\rm Re}}\left(\frac{\partial F_u}{\partial x}+\frac{\partial G_v}{\partial y}+\frac{\partial H_v}{\partial z}\right)= S, \\
&&Q=[\rho, \rho u, \rho v, \rho w, E]^T, \quad F=[\rho u, \rho u^2 +P, \rho u v, \rho u w, (E+P) u]^T, \\
&&G=[\rho v, \rho u v, \rho v^2 + P, \rho v w, (E+P)v]^T, \quad H=[\rho w, \rho u w, \rho v w, \rho w^2 + P, (E+P)w]^T, \\
&&F_u=[0, \tau_{xx}, \tau_{xy}, \tau_{xz}, b_x]^T, \quad G_v=[0, \tau_{xy}, \tau_{yy}, \tau_{yz}, b_y]^T, \quad H_v=[0, \tau_{xz}, \tau_{yz}, \tau_{zz}, b_z]^T, \\
&&b_x=u \tau_{xx}+ v \tau_{xy}+w \tau_{xz}, \quad b_y=u \tau_{xy}+ v \tau_{yy}+ w \tau_{yz}, \quad b_z=u \tau_{xz}+ v \tau_{yz}+ w \tau_{zz},
\end{eqnarray}
where $\rho$ is the fluid density, $u$, $v$ and $w$ are the three velocity components, $P$ is the pressure, $E$ is the total energy, $S$ is a general source term including the IB-imposed Eulerian force and other body forces, Re is the Reynolds number, and $\tau_{ij}$ is the shear stress. In the fluid solver, the fifth-order Weighted Essentially Non-Oscillatory (WENO) scheme proposed by Liu et al.~\cite{liu1994weighted} is used for the spatial discretization of the convective term. We also employ a recently developed scheme, the Targeted Essentially Non-Oscillatory (TENO) scheme~\cite{fu2016family}, for the convective term, in order to compare the performance of IB--WENO with that of IB--TENO. For the viscous terms, a fourth-order central difference scheme is used to discretize the spatial derivatives. For all unsteady equations in the flow solver, the third-order TVD Runge--Kutta method is used for temporal discretization~\cite{shu1988efficient}. The dynamics of the fluid and the flexible structures are solved independently. The interaction force is calculated explicitly using a feedback law~\cite{goldstein1993modeling} based on the pIB method~\cite{kim2007penalty}:
\begin{equation}
\boldsymbol{F}_f = \alpha \int_0 ^t (\boldsymbol{U}_{ib} - \boldsymbol{U}) dt + \beta (\boldsymbol{U}_{ib} - \boldsymbol{U}),
\label{eq:penatly}
\end{equation}
where $\boldsymbol{U}_{ib}$ is the boundary velocity obtained by interpolation at the IB, $\boldsymbol{U}$ is the structure velocity, and $\alpha$ and $\beta$ are large positive constants. Note that this method does not transform the Lagrangian density of the structure into an Eulerian density, as done in Zhu and Peskin~\cite{zhu2002simulation}; therefore, it is not necessary to apply the incompressibility limit or to modify the continuity equation~\cite{zhu2002simulation, wang2017immersed}. Eq.~(\ref{eq:penatly}) gives the Lagrangian force acting on the structure. The Lagrangian force exerted on the fluid by the immersed boundary is $-\boldsymbol{F}_f$, which is spread onto the fluid nodes to enforce the boundary condition. The interpolation of the velocity and the spreading of the Lagrangian force to the adjacent grid points are expressed as
\begin{equation}
\boldsymbol{U}_{ib} (s, t) = \int_{V} \boldsymbol{u} (\boldsymbol{x}, t) \delta_h (\boldsymbol{X}(s, t) - \boldsymbol{x}) d \boldsymbol{x},
\label{eq:ibvelocity}
\end{equation}
\begin{equation}
\boldsymbol{f} (\boldsymbol{x}, t) = -\int_{\Gamma} \boldsymbol{F}_f (s, t) \delta_h (\boldsymbol{X}(s, t) - \boldsymbol{x}) ds,
\label{eq:fluidforce}
\end{equation}
where $\boldsymbol{u}$ is the fluid velocity, $\boldsymbol{X}$ denotes the coordinates of the structural nodes, $\boldsymbol{x}$ denotes the fluid coordinates, $s$ is the arc coordinate in a two-dimensional domain, $V$ is the fluid domain, $\Gamma$ is the structure domain, and $\delta_h$ is the smoothed Dirac delta function~\cite{peskin2002immersed}, expressed as
\begin{equation}
\delta_h(x, y, z) = \frac{1 }{h^3 } \lambda(\frac{x}{h}) \lambda(\frac{y}{h}) \lambda(\frac{z}{h}). \label{eq:phifun}
\end{equation}
In this paper, the four-point delta function introduced by Peskin~\cite{peskin2002immersed} is used
\begin{equation}
\lambda (r) =
\begin{cases}
\frac{1 }{8 } (3 - 2 |r| + \sqrt{1 + 4 |r| - 4 |r|^2 }), & 0 \leq|r|<1 \\
\frac{1 }{8 } (5 - 2 |r| - \sqrt{-7 + 12 |r| - 4 |r|^2 }), & 1 \leq|r|<2 \\
0, & 2 \leq|r|. \end{cases}
\label{eq:deltfun}
\end{equation}
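A useful check on any implementation of this kernel is the discrete partition-of-unity property, $\sum_j \lambda(r-j)=1$ for every shift $r$, which (together with the first-moment condition of the four-point kernel) makes the interpolation of Eq.~(\ref{eq:ibvelocity}) exact for linear velocity fields. The sketch below is a minimal one-dimensional illustration of ours in NumPy, not the authors' solver; the function names are invented.

```python
import numpy as np

def peskin4(r):
    """Peskin's four-point regularized delta kernel lambda(r), vectorized."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1] ** 2)) / 8.0
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2] ** 2)) / 8.0
    return out

def interp_1d(u, h, X):
    """Interpolate a grid field u (sampled at x_i = i*h) to a Lagrangian point X,
    the 1-D analogue of the velocity interpolation U_ib = sum_i u_i lambda((X - x_i)/h)."""
    i0 = int(np.floor(X / h))
    idx = np.arange(i0 - 2, i0 + 3)      # covers the 4-point support (one spare node)
    w = peskin4(X / h - idx)
    return np.sum(u[idx] * w)

# Partition of unity: the kernel weights sum to 1 for any shift of the point
print(np.sum(peskin4(0.3 - np.arange(-3, 4))))   # -> 1.0
```

The spreading of the Lagrangian force in Eq.~(\ref{eq:fluidforce}) uses the same weights transposed, which is what makes the interpolation/spreading pair conserve momentum in the discrete sense.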
Instead of using a uniform mesh, which is time consuming, a non-uniform mesh is used here to improve the computational efficiency~\cite{tian2014fluid}. The mesh is uniform in both directions within a small inner box containing the solid, to ensure good accuracy of the interpolation in the IB method, and is stretched in the remainder of the computational domain. Non-reflecting boundary conditions are applied on the boundaries~\cite{thompson1987time, thompson1990time}. In addition, the solver is parallelized using hybrid OpenMP and MPI~\cite{rabenseifner2006hybrid}.
\section{Validations}
Validations of the present solver for fluid--structure interactions, including flow over a stationary cylinder, structure dynamics, deformation of a flexible panel induced by shock waves in a shock tube, an inclined flexible plate in a hypersonic flow, and shock-induced collapse of a cylindrical helium cavity in air, were conducted in our previous work~\cite{wang2017immersed}.
where $\epsilon=10^{-3}$. The initial density and pressure are respectively 1.0 and $1/\gamma$ ($\gamma=1.4$), and the initial velocity of the fluid is zero. We use a uniform Cartesian mesh over a computational domain of $20D\times 20D$ with three different mesh spacings: $D/100$, $D/50$ and $D/25$. The time histories of the fluctuating pressure, $\Delta p=P-p_\infty$, where $P$ and $p_\infty$ are respectively the current and ambient pressure, at points $(2, 0)$ and $(2, 2)$ are plotted in Fig.~\ref{Fig:acoustic-pt} along with the DNS results reported in Ref.~\cite{liu2007brinkman}. According to the comparison in Fig.~\ref{Fig:acoustic-pt}, all three mesh spacings are able to capture the fluctuating pressure. The small discrepancies visible in the zoomed-in insets of Fig.~\ref{Fig:acoustic-pt} diminish as the mesh spacing is refined to $D/100$.
\begin{figure}
\begin{center}
\hskip-3.0in (a) \hskip3.0in (b)
\includegraphics[width=3.2in]{./Figs/scattering-a1.eps}
\hskip0.1in
\includegraphics[width=3.2in]{./Figs/scattering-b1.eps}
\end{center}
\caption{Acoustic waves scattered by a stationary cylinder: comparison of the time histories of the pressure fluctuation with available data from Ref.~\cite{liu2007brinkman} at (a) $(2, 0)$ and (b) $(2, 2)$. }
\label{Fig:acoustic-pt}
\end{figure}
Fig.~\ref{Fig:acoustic-cy-iblbmcomp} presents a comparison of the present results with those calculated by the immersed boundary--lattice Boltzmann method (IB--LBM) in Ref.~\cite{chen2014comparative}. The mesh spacing for the fluid is $D/40$, the same as that used in Ref.~\cite{chen2014comparative}. The results show that the present IB--WENO scheme performs excellently in capturing the scattered acoustic pressure. In particular, the sound pressure predicted by the current explicit IB is significantly better than that predicted by the explicit IB--LBM reported in Ref.~\cite{chen2014comparative}.
\begin{figure}
\begin{center}
\includegraphics[width=3.2in]{./Figs/cy_scattering_com_IB_LBM.eps}
\end{center}
\caption{Acoustic waves scattered by a stationary cylinder: comparison of the time histories of the fluctuating pressure at $(0, 5)$ with available data from Ref.~\cite{chen2014comparative}. }
\label{Fig:acoustic-cy-iblbmcomp}
\end{figure}
We further present the fluctuating pressure contours for a mesh spacing of $D/50$ at three instants in Fig.~\ref{Fig:acoustic-p-contour}, together with figures from Ref.~\cite{bailoor2017fluid}, to illustrate the propagation of the acoustic wave. As shown in Fig.~\ref{Fig:acoustic-p-contour}(a), a principal pulse is generated by the initial pressure perturbation. When the acoustic wave impinges on the cylinder, a wave reflected off the cylinder surface forms a secondary acoustic wave, as demonstrated in Fig.~\ref{Fig:acoustic-p-contour}(b). As reported by Liu and Vasilyev~\cite{liu2007brinkman}, the two parts of the principal wave front split by the cylinder traverse its span, collide, and merge, thereby generating a third acoustic wave front (see Fig.~\ref{Fig:acoustic-p-contour}(c)). All three acoustic wave fronts are well captured, as confirmed by the time histories of the fluctuating pressure in Fig.~\ref{Fig:acoustic-pt}. This benchmark shows that the present method captures acoustic waves accurately.
\begin{figure}
\begin{center}
\hskip-5.0in (a)
\includegraphics[width=3.0in]{./Figs/acoustic-p-contour-t2-ref.JPG}
\hskip0.1in
\includegraphics[width=3.2in]{./Figs/acoustic-p-contour-t2.jpeg}\\
\hskip-5.0in (b)
\includegraphics[width=3.0in]{./Figs/acoustic-p-contour-t4-ref.JPG}
\hskip0.1in
\includegraphics[width=3.2in]{./Figs/acoustic-p-contour-t4.jpeg}\\
\hskip-5.0in (c)
\includegraphics[width=3.0in]{./Figs/acoustic-p-contour-t6-ref.JPG}
\hskip0.1in
\includegraphics[width=3.2in]{./Figs/acoustic-p-contour-t6.jpeg}\\
\end{center}
\caption{Acoustic waves scattered by a stationary cylinder: qualitative comparison of the pressure perturbation contours calculated by Bailoor et al.~\cite{bailoor2017fluid} (left column) and the present method (right column) at times (a) $t=2.0$, (b) $t=4.0$ and (c) $t=6.0$. The contour level ranges from $-2.0\times10^{-5}$ to $2.0\times10^{-5}$. }
\label{Fig:acoustic-p-contour}
\end{figure}
\subsection{Sound generation by a stationary cylinder in a uniform flow}
Flow around a stationary cylinder has been extensively studied theoretically, experimentally and numerically~\cite{son1969numerical, graf1998experiments, he1997lattice} since the work of Strouhal on aeolian tones. Most previous studies of the sound generated by flow past a circular cylinder used hybrid or acoustic/viscous splitting methods to reduce the computational expense~\cite{inoue2002sound}. However, direct numerical simulation (DNS) is an effective way to identify both the aerodynamics and the characteristic features of the sound accurately. Inoue and Hatakeyama studied the sound generation by a stationary~\cite{inoue2002sound} and a rotating~\cite{inoue2003control} cylinder in a uniform flow using DNS, and clarified the relation between the vortex/flow dynamics and the sound pressure. In this section, a stationary cylinder in a uniform flow is considered. The cylinder is located at the origin. In order to discuss the sound pressure, we define $r$ (nondimensionalized by the cylinder diameter) as the distance from the origin and $\theta$ as the circumferential angle. The non-dimensional parameters governing this problem are
\begin{equation}
{\rm Re}=\rho_f U_0 D/\mu, \quad M=U_0 /c,
\label{eq:cy_para}
\end{equation}
where $D$ is the diameter of the cylinder, $U_0$ is the inlet velocity, and $\rho_f$, $\mu$ and $c$ are respectively the density, viscosity and sound speed of the fluid in the far field. In the current stationary case, ${\rm Re}=150$ and $M=0.2$. Three mesh regions are used in the simulations to improve the computational efficiency: a cylinder-occupied region, a sound region and a sponge region (similar to that in Ref.~\cite{inoue2002sound}) around the cylinder. The cylinder-occupied region extends from $(-1.25D, -1.25D)$ to $(1.25D, 1.25D)$. The sound region is $212D$ in width and length. The sponge region is as large as $400D$ in both width and length to diminish reflections from the boundary. Non-reflecting boundary conditions are applied on the external boundaries of the sponge region. In the cylinder-occupied region, the mesh spacing is $D/40$; in the sound region it increases to $D/5$, with a sinusoidal transition between the two regions. The maximum mesh spacing in the sponge region is $12.5D$. In order to achieve a fast transition to the asymmetric K\'arm\'an vortex street, an initial velocity perturbation is applied in the near wake. The time histories of the drag ($C_D$) and lift ($C_L$) coefficients, scaled by $0.5\rho_f U_0^2 D$, are presented in Fig.~\ref{Fig:Re150 M0 _2 clcd}, with data reported in Ref.~\cite{inoue2002sound} for comparison. The amplitude of $C_L$ and the mean value of $C_D$ are $0.525$ and $1.40$, respectively, which agree well with the results ($C_L=0.520$ and $C_{D, m}=1.39$) from Ref.~\cite{inoue2002sound}. The Strouhal number calculated by the present method is about 0.178, close to 0.175 in Ref.~\cite{dumbser2016high} and 0.183 in Ref.~\cite{inoue2002sound}.
\begin{figure}
\begin{center}
\hskip-3.0in (a) \hskip3.0in (b)
\includegraphics[width=3.2in]{./Figs/Re150M0_2cd.eps}
\hskip0.1in
\includegraphics[width=3.2in]{./Figs/Re150M0_2cl.eps}
\end{center}
\caption{Sound generation by a stationary cylinder in a uniform flow: time histories of (a) $C_D$ and (b) $C_L$ at ${\rm Re}=150$ and $M=0.2$. }
\label{Fig:Re150 M0 _2 clcd}
\end{figure}
The fluctuating pressure $\Delta \tilde{p}$ is defined by $\Delta \tilde{p}(x, y, t)=\Delta p(x, y, t)-\Delta \bar{p}(x, y)$, where $\Delta p=p-p_{\infty}$ is the total fluctuating pressure, $p_{\infty}$ is the ambient pressure, and $\Delta\bar{p}$ is the time-averaged fluctuating pressure~\cite{inoue2002sound}. In the present paper, the fluid density $\rho_f$ and sound speed $c$ are used to nondimensionalize the pressure. Fig.~\ref{Fig:Re150 M0 _2 decay} shows the decay of the pressure peaks (both positive and negative) measured at $\theta=90^\circ$, which agrees well with the results of Ref.~\cite{inoue2002sound}. The peaks of the cylindrical pressure waves generated by the unsteady flow decay in proportion to $r^{-\frac{1}{2}}$, in agreement with the theoretical prediction of Landau and Lifshitz~\cite{Landau1987fluidmech}, as indicated by the dashed line in Fig.~\ref{Fig:Re150 M0 _2 decay}.
\begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/Re150 M0 _2 dpradial. eps}
\end{center}
\caption{Sound generation by a stationary cylinder in a uniform flow: comparison of the decay of pressure peaks measured at $\theta=90 ^o$ with available data from Ref. ~\cite{inoue2002 sound}. $+$ and $-$ denote the positive and negative peaks. The dashed line is the theoretical prediction by Ref. ~\cite{Landau1987 fluidmech} showing that the pressure peaks tend to decay in proportion to $r^{-\frac{1 }{2 }}$. }
\label{Fig:Re150 M0 _2 decay}
\end{figure}
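The $r^{-1/2}$ law can be checked numerically by fitting the slope of the peak pressures in log-log coordinates. A short sketch (with synthetic peak data that simply follow the theoretical law; the helper name `decay_exponent` is an assumption):

```python
import numpy as np

def decay_exponent(r, peaks):
    """Least-squares slope of log|peak| vs log r.

    For cylindrical sound waves the peaks should decay like r**(-1/2),
    i.e. the fitted exponent should be close to -0.5.
    """
    slope, _ = np.polyfit(np.log(r), np.log(np.abs(peaks)), 1)
    return slope

# Synthetic pressure peaks following the theoretical r^(-1/2) law
# (illustrative data, not the values from the simulation).
r = np.linspace(10.0, 100.0, 20)
peaks = 3.0e-3 * r ** -0.5
```

Applying the same fit to measured peaks gives a quantitative check of the far-field decay rate rather than a purely visual comparison.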
We further increase the Reynolds number to 1000 ($M=0.2 $), and assume the flow is still in the laminar regime. The same problem has also been examined by Brentner et al. ~\cite{brentner1997 computation}, who used the Lighthill acoustic analogy to separate the flow dynamics and acoustics calculations. Fig. ~\ref{Fig:cyRe1000 clcd} presents the comparison of the time histories of $C_D$ and $C_L$ with the data from Ref. ~\cite{brentner1997 computation}. The present drag coefficient and St number are respectively 1.60 and 0.215, which agree with the numerical results (1.56 and 0.238 ) in Ref. ~\cite{brentner1997 computation}. The discrepancy (2.6 \% in the drag coefficient and 9.7 \% in St) is probably due to the compressibility of the fluid neglected in the reference, as indicated in Ref. ~\cite{brentner1997 computation}, where the compressible solver predicts a lower frequency than the incompressible one. Fig. ~\ref{Fig:cyRe1000 _spl_st} shows the SPL (reference pressure $20 ~\mu$Pa) in the frequency space obtained by the fast Fourier transform (FFT). For comparison, data from Refs. ~\cite{brentner1997 computation, revell1978 experimental} are shown in the same figure. The present results agree well with those from the references; the maximum discrepancies of the peak values are about 5 \%, which are probably induced by the neglected quadrupole source in the FW-H approach. It should be noted that the results from Ref. ~\cite{brentner1997 computation} were calculated in two steps: the incompressible N-S equations were first solved, and the acoustics was then computed by solving the Ffowcs Williams--Hawkings (FW-H) equation using the resolved unsteady flow as the input with the quadrupole source neglected; in contrast, the present method obtains the sound pressure directly from DNS. The experimental results from Ref. ~\cite{revell1978 experimental} show much lower peaks at high Reynolds number, but the frequency content still coincides with that at low Reynolds number. According to Ref. ~\cite{brentner1997 computation}, it is still challenging to accurately predict the SPL in the frequency space by using turbulence models such as $k-\omega$ and SST. In the future, effort will be made to incorporate large eddy simulation and wall models into the current solver to address this challenge. Fig. ~\ref{Fig:cyRe1000 M0 _2 _dpcontour} is a snapshot of the fluctuating pressure contours for future validation of newly developed methods. \begin{figure}
\begin{center}
\hskip-3.0 in (a) \hskip3.0 in (b)
\includegraphics[width=3.2 in]{. /Figs/cyRe1000 _cd. eps}
\hskip0.1 in
\includegraphics[width=3.2 in]{. /Figs/cyRe1000 _cl. eps}
\end{center}
\caption{Sound generation by a stationary cylinder in a uniform flow: time histories of $C_D$ (a) and $C_L$ (b) at Re=1000 and $M=0.2 $. }
\label{Fig:cyRe1000 clcd}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4.5 in]{. /Figs/cyRe1000 _spl_freq. eps}
\end{center}
\caption{Sound generation by a stationary cylinder in a uniform flow: comparison of the sound pressure level measured at a distance of $128 D$ and $90 ^o$ from the inlet flow: Re=1000 and $M=0.2 $. }
\label{Fig:cyRe1000 _spl_st}
\end{figure}
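The SPL spectrum above is the standard dB measure of the Fourier amplitude of the pressure signal relative to the $20~\mu$Pa reference. A minimal sketch of this post-processing, assuming a uniformly sampled dimensional pressure signal (the tone and sampling values are illustrative):

```python
import numpy as np

def spl_spectrum(p, dt, p_ref=20e-6):
    """One-sided amplitude spectrum of a pressure signal in dB re p_ref.

    SPL(f) = 20*log10(|P(f)| / p_ref), with |P(f)| the single-sided
    Fourier amplitude of the fluctuating pressure (mean removed).
    """
    n = len(p)
    amp = np.abs(np.fft.rfft(p - np.mean(p))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, 20.0 * np.log10(np.maximum(amp, 1e-30) / p_ref)

# A pure 50 Hz tone of amplitude 2e-2 Pa over an integer number of
# periods; its SPL should be 20*log10(2e-2 / 2e-5) = 60 dB.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
p = 2e-2 * np.sin(2 * np.pi * 50 * t)
freqs, spl = spl_spectrum(p, dt)
```

Windowing (e.g. Hann) would be needed for records that do not span integer periods; it is omitted here for brevity.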
\begin{figure}
\begin{center}
\includegraphics[width=3.05 in]{. /Figs/cyRe1000 M0 _2 _dp_contour. eps}
\end{center}
\caption{Sound generation by a stationary cylinder in a uniform flow: snapshot of the fluctuating pressure contours at Re=1000 and $M=0.2 $. }
\label{Fig:cyRe1000 M0 _2 _dpcontour}
\end{figure}
}) and the velocity ratio, defined by $\alpha=M_\theta/M$, where $M_\theta$ is the anticlockwise angular velocity of the rotating cylinder. In the current rotating case, Re=160, $M=0.2 $ and $\alpha=1.5 $. The mesh strategy is the same as that in Section 3.2. \begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/rot_dpradial. eps}
\end{center}
\caption{Sound generation by a rotating cylinder in a uniform flow: decay of pressure peaks measured at $\theta=90 ^o$. The dashed line indicates that the pressure peaks tend to decay in proportion to $r^{-\frac{1 }{2 }}$. }
\label{Fig:Re160 M0 _2 Rotdecay}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/rot_r75 dp. eps}
\end{center}
\caption{Sound generation by a rotating cylinder in a uniform flow: comparison of the root-mean-square of $\Delta \tilde{p}$ (measured at $r=r'(1 -M {\rm cos}\theta)$ with $r'=75 $) from the present computation ($\Delta$) and that obtained from Ref. ~\cite{inoue2003 control} ($+$). }
\label{Fig:Re160 M0 _2 Rotdp}
\end{figure}
The decay of the fluctuating pressure peaks measured at $\theta=90 ^o$ is plotted in Fig. ~\ref{Fig:Re160 M0 _2 Rotdecay}. The pressure peaks also tend to decay in proportion to $r^{-\frac{1 }{2 }}$, similar to the stationary case. The polar plots in Fig. ~\ref{Fig:Re160 M0 _2 Rotdp} are the root-mean-square of the fluctuating pressure $\Delta \tilde{p}$ measured at $r=r'(1 -M {\rm cos}\theta)$ with $r'=75 $, taking the Doppler effect into account. The data from Ref. ~\cite{inoue2003 control} are also presented in Fig. ~\ref{Fig:Re160 M0 _2 Rotdp} for comparison. The results show that the pressure measured at $\theta=90 ^o$ is larger than that measured at $\theta=0 ^o$, indicating that the lift dipole dominates the sound generation. Unlike in the stationary case, the profile is not symmetric with respect to the line through $\theta=0 ^o$ and $180 ^o$, due to the effects of rotation. The good agreement between the present results and the computational results from Ref. ~\cite{inoue2003 control} shows that the present method is well suited to acoustic simulations involving moving boundaries. Fig. ~\ref{Fig:Re160 M0 _2 Rotpt} shows the time histories of the fluctuating pressure measured at $r=75 $, $\theta=90 ^o$ and $270 ^o$. The pressure peaks measured at $\theta=90 ^o$ are higher than those at $\theta=270 ^o$, because the anticlockwise rotation of the cylinder leads to asymmetrical lift and drag. This is also indicated by the root-mean-square of $\Delta \tilde{p}$ plotted in Fig. ~\ref{Fig:Re160 M0 _2 Rotdp}. \begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/rot_r75 th90 th270. eps}
\end{center}
\caption{Sound generation by a rotating cylinder in a uniform flow: time histories of the fluctuating pressure $\Delta \tilde{p}$ measured at $r=75 $, $\theta=90 ^o$ and $270 ^o$. }
\label{Fig:Re160 M0 _2 Rotpt}
\end{figure}
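The Doppler-corrected measurement radius $r=r'(1-M\cos\theta)$ used above contracts the probe circle upstream and stretches it downstream. A small sketch of how such probe locations could be generated (the helper name `doppler_probe_points` is an assumption):

```python
import numpy as np

def doppler_probe_points(r_prime, mach, angles_deg):
    """Cartesian probe locations at r = r'(1 - M cos(theta)).

    Following the measurement convention in the text, the radius is
    reduced on the downstream side (theta = 0) and increased on the
    upstream side (theta = 180 deg) to account for the Doppler effect
    of the mean flow.
    """
    theta = np.radians(np.asarray(angles_deg, dtype=float))
    r = r_prime * (1.0 - mach * np.cos(theta))
    return r * np.cos(theta), r * np.sin(theta)

# Probe circle for r' = 75 and M = 0.2, as in the rotating-cylinder case.
x, y = doppler_probe_points(75.0, 0.2, [0.0, 90.0, 180.0])
```

At $M=0.2$ the effective radius varies from $60$ at $\theta=0^o$ to $90$ at $\theta=180^o$, so the correction is far from negligible.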
\subsection{Sound generation by an insect in hovering flight}
In this section, an insect in hovering flight is considered to validate the current solver in handling moving bodies with relatively complex geometries. The schematic of the problem is shown in Fig. ~\ref{Fig:mav_sch}. A circular cylinder (body) with a diameter of $0.5 L$, and two elliptic cylinders (wings) with the dimensions of $L$ and $0.4 L$, are used to model the insect. The two wings flap symmetrically, and the flapping motion of the wings is prescribed by
\begin{equation}
\alpha(t)=\alpha_0 [1 +\sin(2 \pi f t)],
\label{eq:mav_premotion}
\end{equation}
where the amplitude of the flapping angle $\alpha_0 $ is $25.3 ^o$ and $f$ is the flapping frequency. The Reynolds number defined by $\rho_f U_{max} L/\mu$ is 200, where $U_{max}$ is the maximum wing tip velocity. The Strouhal number defined by $f L/U_{max}$ is 0.25, and the Mach number based on $U_{max}$ is 0.1. The computational domain extends from ($-100 L$, $-100 L$) to ($100 L$, $100 L$), with 50 points to resolve the wing length. \begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/mav_sch. eps}
\end{center}
\caption{Schematic of the prescribed flapping motion of an insect. }
\label{Fig:mav_sch}
\end{figure}
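The prescribed kinematics of Eq. ~(\ref{eq:mav_premotion}) can be evaluated directly; a minimal sketch (angle in degrees, with $\alpha_0 = 25.3^o$ from the text and the frequency normalized to one; the helper name is illustrative):

```python
import numpy as np

def flapping_angle(t, alpha0_deg=25.3, f=1.0):
    """Prescribed flapping angle alpha(t) = alpha0*(1 + sin(2*pi*f*t)).

    alpha0 = 25.3 deg follows the hovering-insect case in the text;
    the returned angle is in degrees.
    """
    return alpha0_deg * (1.0 + np.sin(2.0 * np.pi * f * t))

# The angle oscillates between 0 and 2*alpha0 about the mean alpha0.
t = np.linspace(0.0, 1.0, 401)
a = flapping_angle(t)
```

Note that the motion is biased: the wing oscillates about $\alpha_0$, not about zero, so the stroke sweeps from $0$ to $2\alpha_0$.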
A direct comparison of the lift generated by the body and wings is shown in Fig. ~\ref{Fig:mav_cl}. It is found that the present results agree well with the data from Ref. ~\cite{seo2011 computation}. Fig. ~\ref{Fig:mav_d60 _dp} shows that the fluctuating pressure measured at (0, $60 L$) and (0, $-60 L$) from the present computation is also in good agreement with the data from Ref. ~\cite{seo2011 computation}. The good agreement of the root-mean-square of the fluctuating pressure measured at a distance of $50 L$, shown in Fig. ~\ref{Fig:mav_d50 _dprms}, confirms the good performance of the current method in handling acoustics involving complex moving geometries. Additionally, a qualitative comparison of the fluctuating pressure is presented in Fig. ~\ref{Fig:mav_pres_contour}. \begin{figure}
\begin{center}
\includegraphics[width=3.2 in]{. /Figs/mav_cl. eps}
\end{center}
\caption{Sound generation by an insect in hovering flight: comparison of the lift generated by the wing and body. }
\label{Fig:mav_cl}
\end{figure}
\begin{figure}
\begin{center}
\hskip-3.0 in (a) \hskip3.0 in (b)\\
\includegraphics[width=3.0 in]{. /Figs/mav_dp_d60 _1. eps}
\hskip0.1 in
\includegraphics[width=3.0 in]{. /Figs/mav_dp_d60 _2. eps}
\end{center}
\caption{Sound generation by an insect in hovering flight: comparison of fluctuating pressure measured at (a) (0, $60 L$) and (b) (0, $-60 L$). }
\label{Fig:mav_d60 _dp}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/mav_d50 _dprms. eps}
\end{center}
\caption{Sound generation by an insect in hovering flight: comparison of the root-mean-square of fluctuating pressure measured at a distance of $50 L$. }
\label{Fig:mav_d50 _dprms}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.2 in]{. /Figs/mav_pres_coutour_ref. jpg}
\hskip0.1 in
\includegraphics[width=3.0 in]{. /Figs/mav_pres_contour. eps}
\end{center}
\caption{Sound generation by an insect in hovering flight: comparison of the instantaneous fluctuating pressure contours calculated by Seo and Mittal~\cite{seo2011 computation} (left) and the present method (right). The contour level ranges from $-0.005 \rho_f c^2 $ (blue) to $0.005 \rho_f c^2 $ (red). }
\label{Fig:mav_pres_contour}
\end{figure}
\subsection{Deformation of a red blood cell induced by acoustic waves}
In this section, we consider the deformation of a red blood cell (RBC) induced by acoustic waves to validate the current method in handling fluid--structure--acoustics interactions. This problem was experimentally studied by Mishra et al. ~\cite{Puja2014 Deformation}. In the simulation, a localized pressure perturbation with a Gaussian distribution is applied in the fluid domain, which is expressed as
\begin{equation}
P_a=A^*\exp[-\ln(2 )\frac{(y-1 )^2 }{0.04 }]+A^*\exp[-\ln(2 )\frac{(y+1 )^2 }{0.04 }],
\end{equation}
where $A^*=A/(\rho_f c^2 )$ is the non-dimensional perturbation amplitude, and $c$ is the sound speed of the unperturbed fluid. The structure-to-fluid mass ratio is $m^*=\rho_s/(\rho_f D)=0.2 $, where $D$ is the diameter of the cylindrical cell. The RBC is assumed to be elastic, and its non-dimensional stretching and bending rigidities are $K_S^*=K_S/(\rho_f c^2 D)=0.1 $ and $E_B^*=E_B/(\rho_f c^2 D^3 )=1.0 \times10 ^{-6 }$, respectively. Prestress is also applied to the RBC to hold it in a cylindrical shape initially. Pressure perturbation amplitudes ranging from 0.2 to 1.0 are examined. The instantaneous sound pressure contours at three instants for $A^*=0.4 $ are shown in Fig. ~\ref{Fig:rbc_contour}. Fig. ~\ref{Fig:rbc_def} shows a qualitative comparison of the instantaneous deformation of the RBC. The results show that the shape of the RBC changes from a cylinder to an ellipse under the load of the symmetrically distributed pressure perturbations. As shown in Fig. ~\ref{Fig:rbc_def}, the deformation increases with the pressure amplitude, which qualitatively agrees with the experimental results. In addition, wavy deformations of the RBCs are observed for large $A^*$ (e.g. $A^*=1.0 $). This phenomenon is the dynamic response of the RBC to the external load, and is consistent with the experimental observation. \begin{figure}
\begin{center}
\hskip-1.8 in (a) \hskip1.8 in (b) \hskip1.8 in (c)
\includegraphics[width=2.1 in]{. /Figs/rbcKs01 Pa04 t04. eps}
\includegraphics[width=2.1 in]{. /Figs/rbcKs01 Pa04 t06. eps}
\includegraphics[width=2.1 in]{. /Figs/rbcKs01 Pa04 t08. eps}
\end{center}
\caption{Deformation of a red blood cell induced by acoustic waves: instantaneous fluctuating pressure contour with $A^*=0.4 $ at $tc/D=$ 0.4 (a), 0.6 (b) and 0.8 (c). The contour level ranges from 0 (blue) to $0.2 \rho_f c^2 $ (red). }
\label{Fig:rbc_contour}
\end{figure}
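The two Gaussian pulses above are centred at $y=\pm 1$ and, with the $\ln(2)/0.04$ factor, have a half-value half-width of $0.2$. A minimal sketch of the perturbation profile (vectorized in $y$; the helper name is an assumption):

```python
import numpy as np

def gaussian_perturbation(y, a_star):
    """Two symmetric Gaussian pressure pulses centred at y = +/-1.

    P_a = A* exp(-ln2 (y-1)^2/0.04) + A* exp(-ln2 (y+1)^2/0.04);
    each pulse falls to A*/2 at a distance of 0.2 from its centre,
    since ln2 * 0.2**2 / 0.04 = ln2.
    """
    w = np.log(2.0) / 0.04
    return a_star * (np.exp(-w * (y - 1.0) ** 2) + np.exp(-w * (y + 1.0) ** 2))

# Profile for the A* = 0.4 case shown in the contour figure.
p = gaussian_perturbation(np.array([1.0, 1.2]), 0.4)
```

The symmetry of the two pulses about $y=0$ is what squeezes the cell from both sides into an elliptical shape.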
\begin{figure}
\begin{center}
\includegraphics[width=1.2 in]{. /Figs/rbcKs01 Pa02 def. eps}
\includegraphics[width=1.2 in]{. /Figs/rbcKs01 Pa04 def. eps}
\includegraphics[width=1.2 in]{. /Figs/rbcKs01 Pa06 def. eps}
\includegraphics[width=1.2 in]{. /Figs/rbcKs01 Pa08 def. eps}
\includegraphics[width=1.2 in]{. /Figs/rbcKs01 Pa1 def. eps}\\
\includegraphics[width=6.0 in]{. /Figs/rbcdef_ref. jpg}
\end{center}
\caption{Deformation of a red blood cell induced by acoustic waves: qualitative comparison of the deformations of the RBCs calculated by the present solver (top) and those obtained by Mishra et al. ~\cite{Puja2014 Deformation} from experiments (bottom). }
\label{Fig:rbc_def}
\end{figure}
\subsection{Acoustic waves scattered by a stationary sphere}
In this section, a simulation of acoustic waves scattered by a stationary sphere is conducted to validate the present solver in capturing acoustic waves in three-dimensional space. In this case, a rigid sphere with a radius of $R$ is fixed in the fluid with its center at the origin. A periodic Gaussian pressure source is applied in the fluid domain,
\begin{equation}
A_p=-A\exp\{-B \log(2 )[(x-2 )^2 +y^2 +z^2 ]\}\cos(\omega t),
\end{equation}
where $A=0.01 $, $B=16 $ and $\omega=2 \pi$. All quantities are non-dimensionalized by the radius of the sphere, the ambient fluid density and the sound speed. Three mesh regions are used to improve the computational efficiency and preserve the accuracy, as in Section 3.2. Three mesh spacings of $R/20 $, $R/40 $ and $R/80 $ within a domain of $2 R\times2 R\times2 R$ covering the sphere are used. The far-field mesh spacing is $R/10 $. The sphere is discretized by triangular elements with a mesh spacing approximately equal to that of the fluid domain. The fluctuating pressure along the $x$-direction on the line of $y=0, z=0 $ and along the $y$-direction on the line of $x=0, z=0 $ are presented in Fig. ~\ref{Fig:acoustic-sphere-dp} with data from Refs. ~\cite{morris1995 scattering, tam1997 second}. As shown in this figure, the fluctuating pressure from the present simulation approaches the results from the references as the mesh is refined. Good agreement is achieved when the finest mesh spacing of $R/80 $ is used, showing that the present numerical method is able to predict acoustic waves in three-dimensional space. Fig. ~\ref{Fig:acoustic-sphere-contour} shows three views of the fluctuating pressure field at the mesh spacing of $R/80 $ when the periodic solution is obtained. To compare the performance of the IB--WENO and the IB--TENO, Fig. ~\ref{Fig:acoustic-sphere-dp_TENO} shows the pressure fluctuation along the $x$-direction. It is found that the numerical dissipation of the IB--TENO is lower than that of the IB--WENO, as demonstrated by the peak values near $x/R=3.1 $ and 4.7. \begin{figure}
\begin{center}
\hskip-3.0 in (a) \hskip3.0 in (b)
\includegraphics[width=3.2 in]{. /Figs/sphere-dp-x. eps}
\hskip0.1 in
\includegraphics[width=3.2 in]{. /Figs/sphere-dp-y. eps}
\end{center}
\caption{Acoustic waves scattered by a stationary sphere: fluctuating pressure along the $x$-direction on the line of $y=0, z=0 $ (a) and along the $y$-direction on the line of $x=0, z=0 $ (b). }
\label{Fig:acoustic-sphere-dp}
\end{figure}
\begin{figure}
\begin{center}
% contour figure graphics lost in extraction
\end{center}
\caption{Acoustic waves scattered by a stationary sphere: three views of the fluctuating pressure field at the mesh spacing of $R/80 $. }
\label{Fig:acoustic-sphere-contour}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3.2 in]{. /Figs/sphere-dp-line-x-TENO_comp. eps}
\end{center}
\caption{Acoustic waves scattered by a stationary sphere: comparison of the pressure fluctuation along $x$-direction on the line of $y=0, z=0 $ calculated by WENO and TENO. }
\label{Fig:acoustic-sphere-dp_TENO}
\end{figure}
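The periodic Gaussian source used in this test case is cheap to evaluate pointwise. A minimal sketch with the stated parameters $A=0.01$, $B=16$ and $\omega=2\pi$ (the helper name `pressure_source` is an assumption):

```python
import numpy as np

def pressure_source(x, y, z, t, a=0.01, b=16.0, omega=2.0 * np.pi):
    """Periodic Gaussian pressure source centred at (2, 0, 0).

    A_p = -A exp(-B log(2) [(x-2)^2 + y^2 + z^2]) cos(omega t),
    nondimensionalized by the sphere radius and ambient fluid
    properties, as in the text.  The amplitude halves at a distance
    of 1/sqrt(B) = 0.25 from the source centre.
    """
    r2 = (x - 2.0) ** 2 + y ** 2 + z ** 2
    return -a * np.exp(-b * np.log(2.0) * r2) * np.cos(omega * t)

# Peak value at the source centre at t = 0.
p_center = pressure_source(2.0, 0.0, 0.0, 0.0)
```

Evaluating this source on the Cartesian grid each time step drives the scattered field; the sphere boundary condition is enforced separately by the immersed-boundary treatment.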
\section{Sound generation by flapping foils in a uniform flow}
Having conducted rigorous validations, we now numerically study the sound generated by flapping foils in forward flight and by a flapping-foil energy harvester in a two-dimensional domain, considering the geometrical shape and the flexibility of the foil. Both the force and sound generation are compared and analyzed. \subsection{Flapping foils in forward flight}
\subsubsection{Physical problem}
In this section, the acoustic perturbations induced by flapping foils in forward flight are considered, as shown in Fig. ~\ref{Fig:rflapsch}~\cite{tian2013 force}. Here, the foil is clamped at the leading edge, and the clamping device undergoes combined translational and rotational motions, given by~\cite{tian2013 force}
\begin{equation}
\boldsymbol{X}_0 (t)=\frac{A_0 }{2 } \cos(2 \pi f t) [\cos\beta, \sin\beta], \quad \alpha(t)=\frac{\alpha_m}{2 }\sin(2 \pi f t+ \phi), \label{eq:flapping_motion}
\end{equation}
where $A_0 $ is the translational amplitude, $f$ is the flapping frequency, $\alpha_m$ is the rotation amplitude, and $\beta$ is the angle between the stroke plane and the horizontal plane. The phase difference ($\phi$) is 0 unless otherwise mentioned. \begin{figure}
\begin{center}
\includegraphics[width=3.5 in]{. /Figs/rfflappingsch. eps}
\end{center}
\caption{Schematic of a rigid (left) and flexible (right) foil flapping in a uniform flow. }
\label{Fig:rflapsch}
\end{figure}
The non-dimensional parameters including the flapping amplitude, Reynolds number, inlet velocity, Mach number, mass ratio and frequency ratio that control this problem are given by
\begin{equation}
\frac{A_0 }{L}, \quad {\rm Re}=\frac{\rho_f UL}{\mu}, \quad U_r=\frac{U_0 }{U}, \quad M=\frac{\pi U}{2 c}, \quad m^*=\frac{\rho_s}{\rho_f L}, \quad \omega^*=\frac{2 \pi f}{\omega_n},
\label{eq:flapping_para}
\end{equation}
where $L$ is the chord length, $U=2 f A_0 $ is the average translational velocity of the leading edge, $f$ is the flapping frequency of the foil, $c$ is the sound speed of the fluid, $\rho_f$ is the fluid density, $\rho_s$ is the linear density of the foil, and $\omega_n=k_n^2 /L^2 \sqrt{E_B/\rho_s}$ with $k_n=1.8751 $ (the frequency of the first natural vibration mode of the wing with a fixed leading edge~\cite{landau1986 theory, tian2013 force}) and $E_B$ being the bending rigidity. Here, Re=100, $m^*=5.0 $ and $U_r=0.4 $. The thrust, lift and power coefficients are defined as
\begin{equation}
C_T=\frac{2 F_T}{\rho_f U^2 L}, \quad C_L=\frac{2 F_L}{\rho_f U^2 L}, \quad C_p=\frac{-2 \int_0 ^L \boldsymbol{f}\cdot \boldsymbol{v} dl}{\rho_f U^3 L},
\label{eq:ctcl}
\end{equation}
where $F_T$ and $F_L$ are respectively the thrust and lift forces acting on the foil by the ambient fluid, $\boldsymbol{f}$ is the hydrodynamic traction on the foil, and $\boldsymbol{v}$ is the velocity of the foil. The fluctuating pressure $\Delta \tilde{p}$ is defined in the same way as in Section 3.2. All pressures presented hereafter are scaled by the fluid density $\rho_f$ and sound speed $c$. \subsubsection{Mesh convergence study}
First, a mesh convergence study is conducted to guarantee the reliability of the solver in modeling the acoustic perturbations induced by flapping foils. A non-uniform mesh scheme is adopted to improve the computational efficiency, with a uniformly refined mesh region around the foil to enhance the accuracy. In the simulation, the computational domain extends from ($-36.25 L$, $-36.25 L$) to ($36.25 L$, $36.25 L$). Three cases are considered for the mesh convergence study: $\Delta x=L/80 $, $L/40 $ and $L/20 $, where $\Delta x$ is the finest mesh spacing around the foil. The other non-dimensional parameters for these three cases are: $M=0.1 $, $A_0 /L=1.25 $, $\beta=45 ^o$ and $\alpha_m=0 ^o$. In Fig. ~\ref{Fig:ctlp-meshcong}, the time histories of the thrust, lift and power coefficients are presented. The results show that reliable results can be achieved with $\Delta x=L/40 $. \begin{figure}
\begin{center}
\hskip-3.0 in (a)
\includegraphics[width=5.0 in]{. /Figs/convergency-ct. eps}
\hskip-3.0 in (b)
\includegraphics[width=5.0 in]{. /Figs/convergency-cl. eps}
\hskip-3.0 in (c)
\includegraphics[width=5.0 in]{. /Figs/convergency-cp. eps}
\end{center}
\caption{Flapping foils in forward flight: time histories of $C_T$ (a), $C_L$ (b) and $Cp$ (c) for Re=100, $M=0.1 $, $U_r=0.4 $, $A_0 /L=1.25 $ and $\alpha_m=0 $. }
\label{Fig:ctlp-meshcong}
\end{figure}
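The nondimensional groups of Eq. ~(\ref{eq:flapping_para}) are simple algebraic combinations of the physical inputs. A sketch of their computation (the input values below are chosen only so that the stated Re=100, $M=0.1$ and $m^*=5.0$ are reproduced; they are not the solver's actual dimensional inputs):

```python
import numpy as np

def flapping_parameters(rho_f, mu, L, f, A0, c, rho_s, E_B):
    """Nondimensional groups of the flapping-foil problem.

    U = 2*f*A0 is the mean translational velocity of the leading edge
    and omega_n = (k_n^2 / L^2) * sqrt(E_B / rho_s), with k_n = 1.8751,
    is the first natural frequency of a plate clamped at one end.
    """
    kn = 1.8751
    U = 2.0 * f * A0
    omega_n = kn ** 2 / L ** 2 * np.sqrt(E_B / rho_s)
    return {
        "Re": rho_f * U * L / mu,
        "M": np.pi * U / (2.0 * c),
        "m_star": rho_s / (rho_f * L),
        "omega_star": 2.0 * np.pi * f / omega_n,
    }

# Illustrative inputs reproducing Re=100, M=0.1, m*=5 (E_B arbitrary).
params = flapping_parameters(rho_f=1.0, mu=0.01, L=1.0, f=0.4, A0=1.25,
                             c=5.0 * np.pi, rho_s=5.0, E_B=7.0)
```

Fixing the nondimensional groups first and back-solving for the dimensional inputs, as done here, is the usual way such simulations are set up.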
Similar to Section 3.2, we define $r$ as the distance from the origin (nondimensionalized by $L$) and $\theta$ as the circumferential angle. The histories of the fluctuating pressure measured at $r=10, \theta=90 ^o$ are presented in Fig. ~\ref{Fig:dpmeshcong}, where it is found that the effects of the mesh difference are negligible for mesh sizes of $L/80 $ and $L/40 $. Therefore, $\Delta x=L/40 $ is adopted in the later simulations. This figure also shows that the positive pressure wave peaks are significantly larger than the negative ones, which will be discussed in detail later. \begin{figure}
\begin{center}
\includegraphics[width=5.0 in]{. /Figs/congdlr10 th90. eps}
\end{center}
\caption{Flapping foils in forward flight: time histories of the fluctuating pressure measured at $r=10 $, $\theta=90 ^o$ for Re=100, $M=0.1 $, $U_r=0.4 $, $A_0 /L=1.25 $ and $\alpha_m=0 $. }
\label{Fig:dpmeshcong}
\end{figure}
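A convergence statement like "the $L/80$ and $L/40$ signals are indistinguishable" can be quantified with a relative-difference norm between signals sampled at the same instants. A sketch with synthetic signals (the data merely illustrate the convergence pattern, not the simulation output):

```python
import numpy as np

def relative_difference(p_fine, p_coarse):
    """L2 relative difference between two pressure histories.

    A simple convergence indicator: the difference between the L/80
    and L/40 signals should be much smaller than that between L/40
    and L/20 (signals assumed sampled at the same instants).
    """
    p_fine = np.asarray(p_fine, dtype=float)
    p_coarse = np.asarray(p_coarse, dtype=float)
    return np.linalg.norm(p_fine - p_coarse) / np.linalg.norm(p_fine)

# Illustrative signals converging toward a reference sine wave.
t = np.linspace(0.0, 1.0, 200)
p80 = np.sin(2 * np.pi * t)
p40 = p80 + 0.002 * np.sin(6 * np.pi * t)
p20 = p80 + 0.05 * np.sin(6 * np.pi * t)
d_fine = relative_difference(p80, p40)
d_coarse = relative_difference(p80, p20)
```

For acoustic signals a small phase lag between meshes can inflate this norm, so in practice peak amplitudes and frequencies are often compared as well.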
\subsubsection{Propagation and decay of pressure waves}
In Fig. ~\ref{Fig:rflappre}, the instantaneous contours of the fluctuating pressure in a flapping period are presented. It is observed that during the downstroke, negative pressure pulses are generated from the upper side of the foil and positive pressure pulses are generated from the lower side, and vice versa. During a foil stroke, positive pressure is formed on the loading face due to the fluid being displaced, and negative pressure is formed on the opposite face mainly due to the leading-edge vortices. The positive and negative pressures exchange sides during the switch between the downstroke and the upstroke~\cite{geng2017 effect}. A clear illustration of the foil-loading mechanism can be found in the work by Inada et al. ~\cite{inada2009 numerical}. \begin{figure}
\begin{center}
\hskip-3.0 in (a) \hskip3.0 in (b)
\includegraphics[width=3.0 in]{. /Figs/flapping_0 _025 _dp_15 T. eps}
\hskip0.1 in
\includegraphics[width=3.0 in]{. /Figs/flapping_0 _025 _dp_15 _25 T. eps}\\
\hskip-3.0 in (c) \hskip3.0 in (d)
\includegraphics[width=3.0 in]{. /Figs/flapping_0 _025 _dp_15 _5 T. eps}
\hskip0.1 in
\includegraphics[width=3.0 in]{. /Figs/flapping_0 _025 _dp_15 _75 T. eps}\\
\end{center}
\caption{Flapping foils in forward flight: instantaneous contours of the fluctuating pressure $\Delta p$ at Re=100, $M=0.1 $, $U_r=0.4 $, $A_0 /L=1.25 $ and $\alpha_m=0 $ at (a) $t = 0 T/4 $, (b) $t = T/4 $, (c) $t = 2 T/4 $ and (d) $t = 3 T/4 $ with the finest mesh size of $L/40 $. The contour level ranges from $-1.0 \times10 ^{-3 }\rho_f c^2 $ (blue) to $1.0 \times10 ^{-3 }\rho_f c^2 $ (red) with the interval of $4.0 \times10 ^{-5 }\rho_f c^2 $. }
\label{Fig:rflappre}
\end{figure}
The circumferential distributions of the fluctuating pressure peaks are presented in Fig. ~\ref{Fig:dpcirmeshcong}. The pressure peaks decay with increasing distance, and the positive peaks are much larger than the negative ones because, as mentioned above, the positive and negative pressure fluctuations are generated by different mechanisms. Fig. ~\ref{Fig:dpcirmeshcong} also shows that the pressure peak distribution is symmetrical about the stroke plane, indicated by the dashed line in the figure. It is noted that some markers plotted in Fig. ~\ref{Fig:dpcirmeshcong} are not smoothly distributed along the circumference. A reasonable explanation is that this oscillation is introduced by the vortices: when a vortex approaches a probe, the recorded pressure is perturbed. As the distance increases, the fluctuating pressure along the circumference tends to become smoother, as shown in Fig. ~\ref{Fig:dpcirmeshcong}, because the vortex effects diminish with distance. The fluctuating pressure peaks generated by the flapping wing on the windward side ($180 ^o-270 ^o$) are slightly larger than those on the leeward side ($0 ^o-90 ^o$), as shown in Fig. ~\ref{Fig:dpcirmeshcong}, due to the presence of the free stream. In order to examine the influence of the phase difference ($\phi$) between the translational and rotational motions on the fluctuating pressure, combined translational and rotational motions of the flapping wing at $\phi = 0 ^o, 45 ^o$ and $90 ^o$ with $\alpha_m=\pi/4 $ are simulated. Polar diagrams of the fluctuating pressure peaks are presented in Fig. ~\ref{Fig:posneg_peak_phase}. It is found that the fluctuating pressure peaks on the windward side decrease slightly with the phase difference. 
Similar to the case without rotational motion, the positive fluctuating pressure peaks on the windward side are larger than the negative ones, due to the presence of the free stream. On the leeward side, the difference between the positive and negative fluctuating pressure peaks is not significant. However, the comparison between the flapping with and without rotational motion, as shown in Figs. ~\ref{Fig:dpcirmeshcong} and \ref{Fig:posneg_peak_phase}, shows that the combined translational and rotational motions can reduce the fluctuating pressure peaks, especially those on the leeward side. \begin{figure}
\begin{center}
\hskip-3.0 in (a) \hskip3.0 in (b)
\includegraphics[width=3.25 in]{. /Figs/dpcirpositive. eps}
\hskip0.1 in
\includegraphics[width=3.25 in]{. /Figs/dpcirnegative. eps}
\end{center}
\caption{Flapping foils in forward flight: polar diagram of the fluctuating pressure peaks for Re=100, $M=0.1 $, $U_r=0.4 $, $A_0 /L=1.25 $ and $\alpha_m=0 $. Positive (a) and negative (b) fluctuating pressure peaks at a distance of $10 L$ ($\Delta$), $14 L$ (o) and $18 L$ ($+$). The dashed line indicates the direction of the stroke plane. }
\label{Fig:dpcirmeshcong}
\end{figure}
\begin{figure}
\begin{center}
\hskip-1.8 in (a) \hskip1.8 in (b) \hskip1.8 in (c)
\includegraphics[width=2.0 in]{. /Figs/phase0 positive_negative_peaks. eps}
\hskip0.1 in
\includegraphics[width=2.0 in]{. /Figs/phase45 positive_negative_peaks. eps}
\hskip0.1 in
\includegraphics[width=2.0 in]{. /Figs/phase90 positive_negative_peaks. eps}\\
\end{center}
\caption{Flapping foils in forward flight: polar diagram of the positive ($\Delta$) and negative ($o$) fluctuating pressure peaks generated by a flapping wing under translational and rotational motions at a distance of $18 L$: (a) $\phi=0 ^o$, (b) $\phi=45 ^o$ and (c) $\phi=90 ^o$. }
\label{Fig:posneg_peak_phase}
\end{figure}
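The polar peak diagrams above are built by recording pressure histories at probes distributed around the circumference and extracting the extrema of each history. A minimal sketch of that reduction (the angles and signal shapes are illustrative, not simulation data):

```python
import numpy as np

def pressure_peaks(signals):
    """Positive and negative peaks of pressure histories at several probes.

    signals maps a circumferential angle (degrees) to a time history;
    the returned dictionaries give the data for a polar peak diagram.
    """
    pos = {th: float(np.max(p)) for th, p in signals.items()}
    neg = {th: float(np.min(p)) for th, p in signals.items()}
    return pos, neg

# Illustrative histories: stronger peaks on the windward side (225 deg).
t = np.linspace(0.0, 10.0, 500)
signals = {45.0: 1e-4 * np.sin(2 * np.pi * t),
           225.0: 1.5e-4 * np.sin(2 * np.pi * t)}
pos, neg = pressure_peaks(signals)
```

Plotting `pos` and `neg` against angle in polar coordinates reproduces the style of Figs. in this section.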
Shown in Fig. ~\ref{Fig:dpcirdirec} are the polar plots of the fluctuating pressure $\Delta \tilde{p}$ at different instants from $0 $ to $3 T/6 $ in the 31st cycle. As the upstroke process is an inverse of the downstroke process, only a half period is presented. It is found that the directivity of the pressure waves fluctuates around the stroke angle $45 ^o$ at the flapping frequency. Fig. ~\ref{Fig:dpcirdirec} also describes the whole sound generation process due to the flapping wing loading and the vortex shedding. \begin{figure}
\begin{center}
\hskip-3.0 in (a) \hskip3.0 in (b)
\includegraphics[width=3.2 in]{. /Figs/T30 dpcir. eps}
\hskip0.1 in
\includegraphics[width=3.2 in]{. /Figs/T30 _17 dpcir. eps}\\
\hskip-3.0 in (c) \hskip3.0 in (d)
% panels (c) and (d) lost in extraction
\end{center}
\caption{Flapping foils in forward flight: polar plots of the fluctuating pressure $\Delta \tilde{p}$ at different instants in a half flapping period. The dashed line indicates the direction of the stroke plane. }
\label{Fig:dpcirdirec}
\end{figure}
After the analysis of the sound generation mechanism and amplitude, we further analyze the frequency content of the flapping-foil-induced sound by applying the FFT to the fluctuating pressure. The fluctuating pressure over ten cycles from $10 T$ to $20 T$ measured at the probes along $r=18 $ is used in the frequency analysis. Fig. ~\ref{Fig:dpcirfreq} shows the polar distribution of the fluctuating pressure at the frequencies of $f$ (the flapping frequency), $2 f$ and $3 f$. It indicates that the sound pressure is dominated by the frequency $f$. The peaks of the fluctuating pressure at the frequencies of $2 f$ and $3 f$ are much lower than those at the frequency $f$, and the fluctuating pressure at frequencies above $3 f$ is negligible. \begin{figure}
\begin{center}
\includegraphics[width=4.0 in]{. /Figs/dpcir_freq. eps}
\end{center}
\caption{Flapping foils in forward flight: polar diagram of the fluctuating pressure peaks at $r=18 $ at different frequencies. Here, $\Delta$, o and $+$ denote the frequencies of $f$, $2 f$ and $3 f$, respectively. }
\label{Fig:dpcirfreq}
\end{figure}
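Extracting the harmonic content at $f$, $2f$ and $3f$ amounts to reading the FFT amplitude at the corresponding bins; using a record spanning an integer number of flapping periods (ten cycles here) makes each harmonic fall exactly on a bin. A sketch with a synthetic signal (amplitudes chosen only to illustrate the $f$-dominated spectrum):

```python
import numpy as np

def harmonic_amplitudes(p, dt, f, harmonics=(1, 2, 3)):
    """Fourier amplitudes of a pressure signal at multiples of f.

    The record length is assumed to span an integer number of
    flapping periods, so each harmonic falls on an FFT bin.
    """
    n = len(p)
    amp = np.abs(np.fft.rfft(p - np.mean(p))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, dt)
    return {k: float(amp[np.argmin(np.abs(freqs - k * f))])
            for k in harmonics}

# Illustrative signal dominated by f with weaker 2f and 3f content.
f, dt = 1.0, 0.01
t = np.arange(0.0, 10.0, dt)   # ten flapping cycles
p = (1.0 * np.sin(2 * np.pi * f * t)
     + 0.2 * np.sin(4 * np.pi * f * t)
     + 0.05 * np.sin(6 * np.pi * f * t))
amps = harmonic_amplitudes(p, dt, f)
```

Repeating this at each circumferential probe gives the per-harmonic polar distributions plotted in Fig. ~\ref{Fig:dpcirfreq}.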
\subsubsection{Comparison of the sound generated by flat plate and NACA0015 foil}
Here, two flat plates (one rigid and one flexible) and a rigid NACA0015 foil undergoing combined translational and rotational motions in a uniform flow, a motion widely observed in insect wings~\cite{tian2013 force}, are numerically simulated to study the influences of the foil flexibility and geometry. The motion of the foil is controlled by Eq. ~\ref{eq:flapping_motion}. All the non-dimensional parameters are defined in Eq. ~\ref{eq:flapping_para}. The amplitude of the rotation angle is $\alpha_m=\pi/4 $; the other parameters for the flexible foil are the frequency ratio $\omega^*=0.6 $ and mass ratio $m^*=5.0 $. The thrust, lift and power coefficients defined in Eq. ~\ref{eq:ctcl} are presented in Fig. ~\ref{Fig:ctlp-rf}, along with the numerical data from Ref. ~\cite{tian2013 force}. It is found that the results from the current compressible solver agree well with the data obtained by Tian et al. ~\cite{tian2013 force} using an incompressible solver. Because the Mach number used in the current solver is low ($M=0.1 $), the compressibility of the fluid is not significant. As seen from Fig. ~\ref{Fig:ctlp-rf}, the discrepancies between the current results and those from Ref. ~\cite{tian2013 force} for the flexible foil are more significant than those for the rigid foil. A plausible explanation is that the flexible foil induces a higher fluid velocity around it, which makes the compressibility more significant. The histories of the thrust, lift and power coefficients in a period for the three cases are plotted in Fig. ~\ref{Fig:flapping_aerody} for comparison. The results show that the flexibility of the plate has a remarkable influence on the force generation: the thrust and lift are enhanced significantly by the flexibility. However, the comparison between the rigid plate and the NACA0015 foil indicates that the geometrical shape does not have significant effects on the aerodynamic characteristics at the conditions considered. \begin{figure}
\begin{center}
\hskip-1.8 in (a) \hskip1.8 in (b) \hskip1.8 in (c)
\includegraphics[width=2.0 in]{. /Figs/ct. eps}
\includegraphics[width=2.0 in]{. /Figs/cl. eps}
\includegraphics[width=2.0 in]{. /Figs/cp. eps}
\end{center}
\caption{Flapping foils in forward flight: time histories of $C_T$ (a), $C_L$ (b) and $Cp$ (c) for Re=100, $M=0.1 $, $U_r=0.4 $, $A_0 /L=1.25 $ and $\alpha_m=\pi/4 $. Here, the solid line (present) and dashed line (Ref. ~\cite{tian2013 force}) denote the flexible foil, and the dotted line (present) and dash-dotted line (Ref. ~\cite{tian2013 force}) denote the rigid foil. The grey and white regions indicate the downstroke and upstroke, respectively. }
\label{Fig:ctlp-rf}
\end{figure}
\begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/flapping_ct.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/flapping_cl.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/flapping_cp.eps}\\
\end{center}
\caption{Flapping foils in forward flight: comparison of the drag coefficient (a), lift coefficient (b) and power coefficient (c) histories over one period. Here, $Re=100$, $M=0.1$, $U_r=0.4$, $A_0/L=1.25$ and $\alpha_m=\pi/4$.}
\label{Fig:flapping_aerody}
\end{figure}
In order to compare the sound generation of rigid and flexible foils, polar diagrams of the root-mean-square of $\Delta \tilde{p}$ measured at $r=30$ are presented in Fig.~\ref{Fig:rf_flapping_cir}. As shown in Fig.~\ref{Fig:rf_flapping_cir}(a) and (c), the fluctuating pressure generated by the rigid plate and the NACA0015 foil is distributed symmetrically about the stroke plane (indicated by the dashed lines). In contrast, Fig.~\ref{Fig:rf_flapping_cir}(b) shows that the distribution of the fluctuating pressure generated by the flexible foil is asymmetrical, with an evident shift of about $15^\circ$ anticlockwise. As analyzed in Ref.~\cite{tian2013force}, the deformation of the flexible plate during the upstroke is larger than that during the downstroke, due to the presence of the free stream. This sound shift is evidently introduced by the asymmetrical deformation of the foil (shown in Fig.~\ref{Fig:deform_flapping}). Fig.~\ref{Fig:rf_flapping_cir} also shows that the flexible plate generates the largest fluctuating pressure, significantly larger than that generated by the rigid plate and the NACA0015 foil. These results indicate that flexibility influences the sound generation significantly, whereas the effects of the geometrical shape on the sound amplitudes are much smaller. In addition, the fluctuating pressure on the lower section is larger than that on the upper section; a reasonable explanation is that the lower section lies on the windward side while the upper section lies on the leeward side. \begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/flapping_rig_rms.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/flapping_flex_rms.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/flapping_naca_rms.eps}\\
\end{center}
\caption{Flapping foils in forward flight: root-mean-square of $\Delta \tilde{p}$ measured at $r=30$: (a) rigid plate, (b) flexible plate and (c) NACA0015 foil. $Re=100$, $M=0.1$, $U_r=0.4$, $A_0/L=1.25$ and $\alpha_m=\pi/4$. The dashed lines indicate the directions of the stroke planes.}
\label{Fig:rf_flapping_cir}
\end{figure}
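The construction of these RMS directivity plots can be sketched as follows. The two-harmonic pressure field below is a synthetic stand-in with assumed amplitudes and dipole shapes, not the paper's simulation output; only the processing step (RMS over integer periods at each microphone angle) reflects the procedure described in the text.

```python
import numpy as np

# Sample the fluctuating pressure on a circle of microphones around the
# foil, then take the root-mean-square over an integer number of periods.
f = 1.0
t = np.linspace(0.0, 4.0 / f, 4000, endpoint=False)        # 4 periods
theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)  # microphone angles

# Assumed toy directivity: a lift-like dipole at f (vertical) plus a weaker
# drag-like dipole at 2f (horizontal).
p = (np.abs(np.sin(theta))[:, None] * np.sin(2 * np.pi * f * t)[None, :]
     + 0.3 * np.abs(np.cos(theta))[:, None] * np.sin(4 * np.pi * f * t)[None, :])

rms = np.sqrt(np.mean(p**2, axis=1))  # one RMS value per microphone angle
# For this toy field the vertical direction (theta = pi/2) dominates.
print(theta[np.argmax(rms)])
```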
\begin{figure}
\begin{center}
\hskip-3.0in (a) \hskip3.0in (b)
\includegraphics[width=3.2in]{./Figs/downstroke_deformation.eps}
\hskip0.1in
\includegraphics[width=3.2in]{./Figs/upstroke_deformation.eps}\\
\end{center}
\caption{Flapping foils in forward flight: deformation of the flexible foil during (a) downstroke and (b) upstroke. Here, $Re=100$, $M=0.1$, $U_r=0.4$, $A_0/L=1.25$, $\alpha_m=\pi/4$, $\omega^*=0.6$ and $m^*=5.0$. The dashed lines indicate the directions of the stroke planes.}
\label{Fig:deform_flapping}
\end{figure}
The FFT is further used to study the frequency content of the fluctuating pressure. According to the frequency analysis of the fluctuating pressure peaks, three main frequencies ($f$, $2f$ and $3f$) are captured for all three flapping foils; the fluctuating pressures at higher frequencies are negligible. Polar diagrams of the fluctuating pressure peaks at the three main frequencies are presented in Fig.~\ref{Fig:freq_flapping}. The frequency $f$ dominates for the rigid flapping foil, whereas Fig.~\ref{Fig:freq_flapping}(b) shows that both $f$ and $2f$ dominate for the flexible flapping foil. As shown in Fig.~\ref{Fig:freq_flapping}(a) and (c), the fluctuating pressure for the rigid foils is distributed symmetrically about the stroke plane, as mentioned before. Fig.~\ref{Fig:freq_flapping}(b) shows that the fluctuating pressure at frequency $f$ generated by the flexible foil shifts by about $15^\circ$ anticlockwise compared with that generated by the rigid foil, while the shift at frequency $2f$ approaches $30^\circ$. This indicates that the propagation directions of the fluctuating pressure from a flexible foil at frequencies $f$ and $2f$ exhibit remarkable differences, which can be used to identify the characteristics of the sound source. Additionally, the comparison between the rigid plate and the NACA0015 foil again shows that the geometrical effects on the sound generation are not significant, as mentioned in Section 4.1.4. \begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/flapping_rig_freq.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/flapping_flex_freq.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/flapping_naca_freq.eps}\\
\end{center}
\caption{Flapping foils in forward flight: polar diagram of the fluctuating pressure peaks at $r=30$ at different frequencies: (a) rigid plate, (b) flexible plate and (c) NACA0015 foil. Here, $Re=100$, $M=0.1$, $U_r=0.4$, $A_0/L=1.25$ and $\alpha_m=\pi/4$. The symbols $\Delta$, $\circ$ and $+$ denote the frequencies $f$, $2f$ and $3f$, respectively.}
\label{Fig:freq_flapping}
\end{figure}
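The harmonic decomposition described above amounts to reading the FFT amplitude spectrum of the pressure history at each microphone at the bins $f$, $2f$ and $3f$. A minimal sketch, with a synthetic signal of assumed amplitudes standing in for the measured $\Delta \tilde{p}$ history:

```python
import numpy as np

# Sample an integer number of flapping periods so the harmonics fall
# exactly on FFT bins.
f = 1.0
n_periods, n_per_period = 8, 128
t = np.arange(n_periods * n_per_period) / (n_per_period * f)

# Synthetic stand-in for a measured pressure history (assumed amplitudes,
# chosen only so the harmonics are easy to verify).
p = (1.0 * np.sin(2 * np.pi * f * t)
     + 0.5 * np.sin(4 * np.pi * f * t)
     + 0.1 * np.sin(6 * np.pi * f * t))

spec = np.abs(np.fft.rfft(p)) * 2.0 / len(p)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(p), d=t[1] - t[0])

# Amplitudes at the flapping frequency and its first two harmonics.
peaks = {k: spec[np.argmin(np.abs(freqs - k * f))] for k in (1, 2, 3)}
print(peaks)  # approximately {1: 1.0, 2: 0.5, 3: 0.1}
```

Repeating this at every microphone angle and plotting `peaks[k]` against angle yields the per-frequency polar diagrams.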
\subsection{Flapping foil energy harvester}
In this section, we further adopt the current numerical method to study the sound generated by a thin plate (neglecting its thickness) and a NACA0015 foil (streamlined shape) flapping as energy harvesters. A foil undergoing prescribed plunge and pitch governed by Eq.~\ref{eq:flapping_motion} in a uniform flow is considered. The characteristic parameters defined in Eq.~\ref{eq:flapping_para} are: $A_0/L=2.0$, $M=0.1$, $U_r=1.78$, $\alpha_m=152.6^\circ$ and $\beta=90^\circ$. The Reynolds number, defined by $Re=\rho_f U_0 L/\mu$, is 1100, and the distance from the leading edge to the pitching axis is $L/3$. The details of the setup can be found in Refs.~\cite{kinsey2008parametric, tian2015fsi}. The computational domain, based on a non-uniform Cartesian mesh, has a size of $96L\times96L$, with a uniform region extending from ($-2.5L$, $-2.5L$) to ($2.5L$, $2.5L$), where the mesh spacing is $L/100$. Extensive preliminary tests have been conducted to ensure that the results are independent of the mesh size and computational domain. The time histories of the non-dimensional drag ($C_D$, defined as $-C_T$), lift ($C_L$) and power ($C_p$) extracted from the fluid agree well with those available in Ref.~\cite{kinsey2008parametric}, as shown in Fig.~\ref{Fig:naca_clcd}. \begin{figure}
\begin{center}
\hskip-3.0in (a)
\includegraphics[width=3.0in]{./Figs/naca_cd.eps}
\hskip-3.0in (b)
\includegraphics[width=3.0in]{./Figs/naca_cl.eps}
\hskip-3.0in (c)
\includegraphics[width=3.0in]{./Figs/naca_cp.eps}
\end{center}
\caption{Flapping foil energy harvester: time histories of $C_D$ (a), $C_L$ (b) and $C_p$ (c) for a NACA0015 foil flapping in a uniform flow at $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$.}
\label{Fig:naca_clcd}
\end{figure}
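The non-dimensional groups quoted above can be evaluated directly from their definitions. The dimensional inputs below are assumed reference values (only their ratios matter); the definitions $Re=\rho_f U_0 L/\mu$ and $E^*=E_B/(\rho_f U_0^2 L^3)$ follow the text.

```python
# Hedged sketch: evaluating the nondimensional groups for the
# energy-harvester case from their definitions in the text.
rho_f, U0, L = 1.0, 1.0, 1.0       # assumed reference density, speed, chord
mu  = rho_f * U0 * L / 1100.0      # viscosity chosen to reproduce Re = 1100
E_B = 1.0 * rho_f * U0**2 * L**3   # bending rigidity giving E* = 1.0

Re     = rho_f * U0 * L / mu               # Reynolds number
E_star = E_B / (rho_f * U0**2 * L**3)      # nondimensional bending rigidity
print(Re, E_star)
```

Matching these groups, rather than the individual dimensional quantities, is what makes the plate and foil cases directly comparable.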
Rigid and flexible plates, neglecting thickness effects, are also numerically simulated for comparison. All the non-dimensional parameters for the rigid plate are the same as those for the NACA0015 foil. For the flexible plate, the mass ratio defined in Eq.~\ref{eq:flapping_para} is 5.0, and the non-dimensional bending rigidity, defined as $E^*=E_B/(\rho_f U_0^2 L^3)$, is $1.0$. Fig.~\ref{Fig:naca_flex_clcd} presents the comparison of the drag, lift and power coefficients over one period obtained from the NACA0015 foil and the two plates. The tendencies of $C_D$, $C_L$ and $C_p$ are consistent, though small discrepancies are observed.
85 and 0.82, respectively, compared with only 0.58 for the flexible plate. The results show that the NACA0015 foil is the most efficient energy harvester among the three foils; the flexibility of the foil deteriorates the energy-harvesting efficiency in this case. The instantaneous vorticity contours over one period are presented in Fig.~\ref{Fig:naca_vortex}. Similar vortex fields are observed for the three cases, without remarkable differences. \begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/naca0015_cd.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/naca0015_cl.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/naca0015_cp.eps}\\
\end{center}
\caption{Flapping foil energy harvester: comparison of the drag coefficient (a), lift coefficient (b) and power coefficient (c) histories over one period. Here, $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$.}
\label{Fig:naca_flex_clcd}
\end{figure}
\begin{figure}
\begin{center}
\hskip-5.0in (a) \\
\includegraphics[width=1.5in]{./Figs/naca0015_vor_5T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_vor_5_25T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_vor_5_5T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_vor_5_75T.eps}\\
\hskip-5.0in (b) \\
\includegraphics[width=1.5in]{./Figs/naca0015_rigid_vor_5T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_rigid_vor_5_25T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_rigid_vor_5_5T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_rigid_vor_5_75T.eps}\\
\hskip-5.0in (c) \\
\includegraphics[width=1.5in]{./Figs/naca0015_flex_vor_m5_E1_5T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_flex_vor_m5_E1_5_25T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_flex_vor_m5_E1_5_5T.eps}
\includegraphics[width=1.5in]{./Figs/naca0015_flex_vor_m5_E1_5_75T.eps}
\end{center}
\caption{Flapping foil energy harvester: instantaneous vorticity contours in a period with an interval of $T/4$: (a) NACA0015 foil, (b) rigid plate and (c) flexible plate. Here, $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$. The contour level of the vorticity ranges from $-5U_0/D$ (blue) to $5U_0/D$ (red).}
\label{Fig:naca_vortex}
\end{figure}
Fig.~\ref{Fig:naca_decay} presents the decay of the fluctuating pressure (defined in Section 4.1) measured at $\theta=90^\circ$. The fluctuating pressure decays in proportion to $r^{-1/2}$ in the intermediate and far fields, which agrees well with the theoretical result~\cite{Landau1987fluidmech}. In all cases, the positive fluctuating pressure is slightly larger than the negative one, which may indicate that the loading process contributes more strongly to the fluctuating pressure at the measurement point than the vortex shedding does. The comparison of Fig.~\ref{Fig:naca_decay}(b) and (c) shows that the flexibility of the plate increases the positive pressure at the measurement point, which can be reasonably explained by the larger tip displacement of the flexible plate. \begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/naca_dp_decay.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/naca_rigid_dp_decay.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/naca_flexible_m5E1_dp_decay.eps}\\
\end{center}
\caption{Flapping foil energy harvester: decay of pressure peaks measured at $\theta=90^\circ$: (a) NACA0015 foil, (b) rigid plate and (c) flexible plate. Here, $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$. The dashed line indicates that the pressure peaks tend to decay in proportion to $r^{-1/2}$.}
\label{Fig:naca_decay}
\end{figure}
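The $r^{-1/2}$ decay check above can be reproduced by fitting a power law to the pressure peaks along a ray, i.e. a straight-line fit in log--log coordinates. The data below are an assumed ideal two-dimensional acoustic decay, standing in for the simulation output:

```python
import numpy as np

# Fit p ~ r**alpha to pressure peaks sampled along a ray and compare the
# exponent with the cylindrical-wave prediction alpha = -1/2.
r = np.linspace(5.0, 40.0, 30)   # radial sample points (chord lengths)
p_peak = 0.02 * r**-0.5          # assumed ideal 2-D acoustic decay

# Slope of log(p) vs log(r) is the decay exponent.
alpha = np.polyfit(np.log(r), np.log(p_peak), 1)[0]
print(round(alpha, 3))  # -> -0.5
```

Applied to the measured peaks, an exponent near $-1/2$ in the intermediate and far fields confirms the theoretical decay rate.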
\begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/naca_dp_freq_r20.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/naca_rigid_dp_freq_r20.eps}
\hskip0.1in
\includegraphics[width=2.0in]{./Figs/naca_flexible_m5E1_dp_freq_r20.eps}\\
\end{center}
\caption{Flapping foil energy harvester: polar diagram of the fluctuating pressure peaks at $r=20$ at different frequencies: (a) NACA0015 foil, (b) rigid plate and (c) flexible plate. Here, $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$. The symbols $\Delta$, $\circ$ and $+$ denote the frequencies $f$, $2f$ and $3f$, respectively.}
\label{Fig:naca_freq}
\end{figure}
To illustrate the frequency characteristics of the fluctuating pressure generated by the foil, the FFT is used again. Polar diagrams of the fluctuating pressure peaks measured at $r=20$ at the three frequencies $f$, $2f$ and $3f$ are shown in Fig.~\ref{Fig:naca_freq}. The sound at frequency $f$ dominates in the vertical direction for all cases, while the fluctuating pressure at frequency $2f$ dominates in the horizontal direction; the fluctuating pressures at higher frequencies are negligible. The fluctuating pressure in the vertical direction is dominated by the frequency of the lift ($f$, see Fig.~\ref{Fig:naca_freq}(b)), and, similarly, the fluctuating pressure in the horizontal direction is dominated by the frequency of the drag ($2f$, see Fig.~\ref{Fig:naca_freq}(a)). Comparison of the fluctuating pressure generated by the NACA0015 foil and the rigid plate shows negligible differences, indicating that the geometrical shape of the foil does not significantly affect the sound generation. The fluctuating pressure at frequency $2f$ induced by the flexible plate is larger than that induced by the NACA0015 foil and the rigid plate, which agrees with the larger drag amplitude of the flexible plate, see Fig.~\ref{Fig:naca_flex_clcd}(a). The smaller fluctuating pressure at frequencies $f$ and $3f$ induced by the flexible plate likewise agrees with its smaller lift amplitude, see Fig.~\ref{Fig:naca_flex_clcd}(b). The instantaneous contours of the fluctuating pressure $\Delta p$ over one period are further presented in Fig.~\ref{Fig:naca_dp_contour}. The difference between the fluctuating pressure fields generated by the NACA0015 foil and the rigid plate is not significant, while that of the flexible plate tends to differ at $0T/4$ and $2T/4$, when the deformation of the plate is significant. \begin{figure}
\begin{center}
\hskip-1.8in (a) \hskip1.8in (b) \hskip1.8in (c)
\includegraphics[width=2.0in]{./Figs/naca0015_dp_5T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_rigid_dp_5T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_flex_dp_m5_E1_5T.eps}\\
\includegraphics[width=2.0in]{./Figs/naca0015_dp_5_25T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_rigid_dp_5_25T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_flex_dp_m5_E1_5_25T.eps}\\
\includegraphics[width=2.0in]{./Figs/naca0015_dp_5_5T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_rigid_dp_5_5T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_flex_dp_m5_E1_5_5T.eps}\\
\includegraphics[width=2.0in]{./Figs/naca0015_dp_5_75T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_rigid_dp_5_75T.eps}
\includegraphics[width=2.0in]{./Figs/naca0015_flex_dp_m5_E1_5_75T.eps}\\
\end{center}
\caption{Flapping foil energy harvester: instantaneous contours of the fluctuating pressure $\Delta p$ induced by the NACA0015 foil (a), rigid plate (b) and flexible plate (c) in a period with an interval of $T/4$. The contour level ranges from $-5.0\times10^{-4}\rho_f c^2$ to $5.0\times10^{-4}\rho_f c^2$ with an interval of $1.25\times10^{-5}\rho_f c^2$. Here, $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$.}
\label{Fig:naca_dp_contour}
\end{figure}
Here, three additional cases at $E_b^*=0.25$, $0.5$ and $5.0$ are considered to illustrate the effects of flexibility on the sound generation. Fig.~\ref{fig:power_flex_dp_rms} shows the polar diagram of the root-mean-square of the fluctuating pressure. The fluctuating pressure generated by the flexible plates increases slightly with the flexibility. However, the flexible plate at $E_b^*=0.5$ generates a significantly larger fluctuating pressure, as it flaps near its natural frequency ($\omega^*$, defined in Eq.~\ref{eq:flapping_para}, is 0.8). \begin{figure}
\begin{center}
\includegraphics[width=3.5in]{./Figs/power_generator_flex_dp_rms.eps}
\end{center}
\caption{Flapping foil energy harvester: polar diagram of the root-mean-square of the fluctuating pressure at a distance of $37.5L$, where $\Delta$, $\circ$, $+$ and $\cdot$ denote $E_b^*=5.0$, $1.0$, $0.5$ and $0.25$, respectively. Here, $Re=1100$, $M=0.1$, $\alpha_m=152.6^\circ$, $A_0/L=2.0$ and $\beta=90^\circ$.}
\label{fig:power_flex_dp_rms}
\end{figure}
\section{Conclusions}
In this paper, an immersed boundary method for fluid--structure--acoustics interactions involving large deformations and complex geometries is presented. Validations including acoustic waves scattered by a stationary cylinder, sound generation by stationary and rotating cylinders in a uniform flow, sound generation by an insect in hovering flight, deformation of a red blood cell induced by acoustic waves, and acoustic waves scattered by a stationary sphere have been conducted. The results show that the current solver performs well in modelling fluid--structure--acoustics interactions involving large deformations and complex geometries, and indicate that an immersed boundary method based on the delta function is accurate enough for predicting the dilatation and acoustics. We further adopt the present solver to model fluid--structure--acoustics interactions of flapping foils in forward flight and of a flapping foil energy harvester; the numerical examples presented here also enrich the limited database of fluid--structure--acoustics interactions. In the computation of flapping foils in forward flight, the present simulation captures the propagation and decay of the sound pressure accurately. Two main mechanisms generating positive and negative pressure are observed: the positive pressure formed on the loading face is much larger than the negative pressure generated by the vortex shedding. The directivity of the pressure wave also fluctuates around the stroke plane. The flexibility of the foil generates larger thrust with higher fluctuating pressure, but the geometrical shape does not have a significant influence on the force and sound generation. Based on the FFT analysis, the fluctuating pressures are dominated by the flapping frequency $f$ for the rigid plate and NACA0015 flapping foils, whereas both the $f$ and $2f$ components are significant for the flexible foil.
In the computation of the flapping foil energy harvester, the fluctuating pressures generated by the NACA0015 foil, the rigid plate and the flexible plate are similar in terms of frequency content: the lift dominates the pressure fluctuation in the vertical direction and the drag dominates it in the horizontal direction. Some differences in the fluctuating pressure are also observed and analyzed. The results show that the geometrical shape does not have significant effects on the force and sound generation, while the flexibility of the plate tends to deteriorate the power extraction. The current flexible plate also induces stronger sound at frequency $2f$ and weaker sound at frequencies $f$ and $3f$.
\section{Introduction}
Coronal lines are collisionally excited forbidden transitions within
low-lying levels of highly ionized species (IP $>$ 100 eV). As such,
these lines form in extreme energetic environments and thus
are unique tracers of AGN activity; they are not seen in starburst
galaxies. Coronal lines appear from X-rays to IR and are common in
Seyfert galaxies regardless of their type (Penston et al. 1984 ;
Marconi et al. 1994 ; Prieto \& Viegas 2000 ). The strongest ones are seen in the IR; in the
near-IR they can even dominate the line spectrum (Reunanen et al. 2003). Owing to their high ionization potential, these lines are expected to be
limited to a few tens to a hundred parsecs around the active nucleus. On the basis of spectroscopic observations,
Rodriguez-Ardila et al. (2004, 2005 )
unambiguously established the size of the coronal line region (CLR)
in NGC~1068 and the Circinus Galaxy, using the coronal
lines [SiVII] 2.48~$\rm \mu m$, [SiVI] 1.98~$\rm \mu m$, [Fe\,{\sc vii}] 6087\AA,
[Fe\,{\sc x}] 6374\AA\ and [Fe\,{\sc xi}] 7892\AA. They find these
lines extending up
to 20 to 80 pc from the nucleus, depending on ionization potential. Given those sizes, we started
an adaptive-optics-assisted imaging program with the
ESO/VLT aimed at
revealing the detailed morphology of the CLR in some of the nearest
Seyfert galaxies. We use as a tracer the isolated IR line [Si VII]
2.48 ~$\rm \mu m$\ (IP=205.08 eV). This letter presents the resulting narrow-band images of the [Si VII]
emission line, which
reveal for the first time the detailed morphology of
the CLR, with a resolution suitable for comparison with radio and
optical lower-ionization-gas images. The morphology of the CLR is sampled
with a spatial resolution almost a factor of 5 better than any
previously obtained, corresponding to scales $\sim <$10 pc. The
galaxies presented are all Seyfert type 2: Circinus, NGC~1068,
ESO~428-G014 and NGC~3081. Ideally, we would have liked to image type 1
objects, but, in the Southern Hemisphere, there are as yet no known,
suitable type~1 sources at sufficiently low redshift to guarantee the
inclusion of [Si VII] 2.48~$\rm \mu m$\ entirely in the filter pass-band. \section{Observations, image registration and astrometry}
Observations were done with the adaptive-optics assisted IR
camera NACO at the ESO/VLT. Two narrow band filters, one centered on
the coronal [SiVII] 2.48 ~$\mu m$ line and an adjacent band centered on
2.42~$\mu m$ line-free continuum, were used. The image scale was
0.027 arcsec pixel$^{-1}$ in all cases except NGC\,1068, where it was
0.013 arcsec pixel$^{-1}$. Integration times
were chosen to keep the counts within the linearity range: $\sim 20 $
minutes per filter and source. For each filter, the photometry was
calibrated against standard stars observed after each science target. These stars were further used as PSF when needed and for deriving a
correction factor that
normalizes both narrow-band filters to provide
equal number of counts
per given flux. In deriving this factor it is assumed that
the continuum level
in the stars is the same in both filters and that no emission lines
are present. The wavefront sensor of the adaptive optics system followed the optical
nucleus of the galaxies to determine seeing corrections. The achieved spatial resolution was estimated from stars available in
the field of the galaxies when possible; this was not possible in
NGC 3081 and NGC 1068 (cf. Table 1 ). The resolutions were comparable in both filters
within the reported errors in Table 1. Continuum-free [SiVII] 2.48~$\rm \mu m$\ line images are shown
in Figs. 1 and 2 for each
galaxy. These were produced after applying the normalization factor
derived from the standard stars. The total integrated coronal line emission derived from these
images is listed in Table 2. For comparison, [SiVII] 2.48 ~$\rm \mu m$\ fluxes
derived from long-slit spectroscopy are also provided. Also shown in these figures are images
taken with the 2.48~$\rm \mu m$\ filter of the standard stars, which also serve as PSF controls. These images provide a rough assessment of
the image quality and resolution achieved in the
science frames. For Circinus and ESO 428-G014, a more
accurate evaluation is possible from the images of
a field star; one of these field stars is shown in both filters
in Figs. 1e and 2b, respectively. To get an assessment of the
image quality at the lowest signal levels, the images of the field stars are
normalized to the galaxy peak in the corresponding filter. These stars
are much fainter than the galaxy nucleus; thus the
star peak is a mere $\sim$5\% of the galaxy peak. \begin{table*}
\centering
\caption{Galaxy scales and achieved NACO angular resolution. $*$: for NGC 1068, the size of the nucleus is given, as K-band interferometry sets an upper limit of 5 mas for the
core (Weigelt et al. 2004); for NGC 3081, the size of a PSF star
taken after the science frames is given}
\begin{tabular}{cccccccc}
\hline
AGN & Seyfert & 1 arcsec & Stars & FWHM & FWHM & Size of nucleus \\
&type &in pc & in field & arcsec & pc & FWHM arcsec \\
\hline
Circinus & 2 & 19 & 2 & 0.19 $\pm$0.02 & 3.6 & 0.27 \\
NGC 1068 & 2 & 70 & 0 & 0.097 $^*$ & 6.8 & $<$0.097 \\
ESO\,428-G014 & 2 & 158 & 3 & 0.084$\pm$0.006 & 13 & 0.15$\pm$0.01 \\
NGC\,3081 & 2 & 157 & 0 & 0.095 $^*$ & 14 & $<$0.32 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Size and photometry of the 2.48~$\mu m$ coronal line region. $*$: from a $1''\times1.4''$ aperture \label{flxTb}}
\begin{tabular}{ccccc}
\hline
{AGN} & {Radius from nucleus} & {Flux NACO} & {Flux long-slit} & {Reference long-slit} \\
&pc &\multicolumn{2 }{c}{in units of $10 ^{-14 }$~erg~s$^{-1 }$ cm$^{-2 }$} \\
\hline
Circinus & 30 & 20 & 16 & Oliva et al. 1994 \\
NGC 1068 & 70 & 72 & 47 $^*$ & Reunanen et al. 2003 \\
ESO\,428-G014 & 120--160 & 2.0 & 0.7$^*$ & Reunanen et al. 2003 \\
NGC\,3081 & 120 & 0.8 & 0.8 $^*$ & Reunanen et al. 2003 \\
\hline
\end{tabular}
\end{table*}
Radio and HST images were used, where available, to establish an
astrometric reference frame for the CLR in each of the galaxies. For NGC~1068 (Fig 1 a, b, \& c), the registration of radio,
optical HST and adaptive-optics IR images by Marco et
al. (1997; accuracy $\sim$0.05'') was adopted. The comparison of the [SiVII]
2.48~$\rm \mu m$\ line image with the HST [OIII] 5007\AA\ image followed from
assuming the peak emission in Marco et
al.'s K-band image to coincide with that in the NACO 2.42~$\rm \mu m$\ continuum
image. The comparison with the MERLIN 5~GHz image of Gallimore et
al. (2004) was done assuming that the nuclear radio source 'S1' and
the peak emission in the NACO 2.42~$\rm \mu m$\ image coincide. In Circinus (Fig. 1d, e \& f), the registration
of NACO and HST/$H\alpha$ images was done on the basis of 3--4
stars or unresolved star clusters available in all fields. That provides an accurate registration better
than 1 pixel (see Prieto et al. 2004 ). No radio image of comparable
resolution is available for this galaxy. For ESO\,428-G014 (Fig. 2a, b \& c), NACO images were registered on
the basis of 3 available stars in the field. Further registration with
a VLA 2 ~cm image (beam 0.2 ''; Falcke, Wilson \& Simpson 1998 ) was made
on the assumption that the continuum peak at 2.42 ~$\mu m$ coincides with
that of the VLA core. We adopted the astrometry provided by
Falcke et al. (uncertainty $\sim 0.3''$)
who performed the registration of the 2 ~cm and the
HST/H$\alpha$ images, and plotted the HST/H$\alpha$ atop the NACO
coronal line image following that astrometry. NGC~3081 (Fig. 2 d, e, \& f) has no stars in the field. In this case
NACO 2.42 ~$\rm \mu m$\ and 2.44 ~$\rm \mu m$\ images, and an additional NACO deep
Ks-band image, were registered using the fact that the NACO adaptive
optics system always centers the images at the same position of the
detector within 1 pixel (0.027''). The registration with an HST/WFPC2
image at 7910\AA\ (F791W) employed as a reference the outer isophotes
of the Ks-band image, which show a very similar morphology to that seen
in the HST 7910\AA\ image. Further comparison with an HST PC2
$H\alpha$ image relied on the astrometry by Ferruit et al. (2000). The registration with an HST/FOC UV image at 2100\AA\ (F210M) was
based on the assumption that the UV nucleus and the continuum peak emission at 2.42~$\rm \mu m$\ coincide. The radio images available for this galaxy have a beam
resolution $>0.5 ''$ (Nagar et al. 1999 ), which includes all the
detected coronal extended emission, and are therefore not used in this
work. \section{The size and morphology of the coronal line region}
In the four galaxies, the CLR resolves into a bright
nucleus and extended emission along a preferred position angle, which
usually coincides with that of the extended lower-ionization gas. The
size of the CLR is a factor 3 to 10 smaller than the extended
narrow line region (NLR). The maximum radius (Table 2) varies from 30
pc in Circinus to 70 pc in NGC 1068, to $\gtrsim 120$ pc in NGC 3081
and ESO~428-G014. The emission in all cases is diffuse or filamentary,
and it is difficult to determine whether it further breaks
down into compact knots or blobs such as those found in H$\alpha$, [OIII]
5007 \AA\ or radio images even though the resolutions are comparable. In Circinus, [SiVII]2.48 ~$\rm \mu m$\ emission extends across the nucleus and
aligns with the orientation of its one-sided ionization cone, seen in
H$\alpha$, or in [OIII] 5007 \AA. In these lines, the counter-cone is not seen (Wilson et
al. 2002 ), but in [SiVII], presumably owing to the reduced extinction,
extended diffuse emission is detected at the counter-cone position
(Fig. 1 f; Prieto et al. 2004 ). This has been further confirmed with
VLT/ISAAC spectroscopy
which shows both [SiVII]2.48 ~$\rm \mu m$\ and [SiVI] 1.96 ~$\rm \mu m$\ extending up to
30 pc radius from the nucleus (Rodriguez-Ardila et al. 2004). In the coronal line image, the north-west emission
defines a cone opening angle larger than that seen in $H\alpha$. The morphology of [SiVII] in this region suggests that
the coronal emission traces the
walls of the ionization cone (see Fig. 1f). In ESO~428-G014, the coronal emission is remarkably well aligned with
the radio-jet (Fig. 2 c). The 2 ~cm emission is stronger in the
northwest direction, and [SiVII] is stronger in that direction too. H$\alpha$ emission is also collimated along the radio structure, but
the emission spreads farther from the projected collimation axis and
extends out to a much larger radius from the nucleus than the coronal
or radio emission (Fig. 2 b). Both H$\alpha$ and the 2 cm emission
resolve into several blobs but the coronal emission is more diffuse. In NGC 3081, the coronal emission resolves into a compact nuclear
region and a detached faint blob at $\sim$120 pc north of it. The HST
[OIII] 5007 and H$\alpha$ images show rather collimated structure
extending across the nucleus along the north-south direction over
$\sim$ 300 pc radius (Ferruit et al. 2000 ). Besides the nucleus, the second brightest region in those lines
coincides with the detached
[Si VII] emission blob (Fig. 2 d). At this same position, we also find
UV emission in a HST/FOC image at 2100 \AA. NGC~1068 shows the strongest [Si VII] 2.48 ~$\rm \mu m$\ emission among the four
galaxies, a factor three larger than in Circinus, and the only case
where the nuclear emission shows detailed structure. At
$\sim 7$~pc radius from the radio core S1, the [Si VII] emission
divides into three bright blobs, with the position of S1 falling in between them. The southern blob looks
like a concentric shell. The northern blob coincides with the
central [OIII] peak emission at the vertex of the ionization cone; the
other two blobs are not associated with a particular
enhancement in [OIII] or radio emission (Fig. 1 b \& c).
\section{Discussion}
ESO~428-G014 and NGC~3081 show the largest and best collimated [SiVII]
emission, extending up to 150~pc radius from the nucleus. To reach those
distances by nuclear photoionization alone would require rather low
electron densities or a very strong (collimated) radiation field.
Density measurements in the CLR are scarce: Moorwood et al. (1996)
estimate a density $n_e \sim 5000~\rm cm^{-3}$ in Circinus on the basis of
[NeV]~14.3~$\rm \mu m$/24.3~$\rm \mu m$; Erkens et al. (1997) derive $n_e < 10^{7}~\rm cm^{-3}$ in several Seyfert 1 galaxies on the basis of several
optical [FeVII] ratios. The latter result may be uncertain because the
optical [Fe VII] lines are weak and heavily blended. Taking $n_e \sim 10^4~\rm cm^{-3}$
as a reference value results in an ionization parameter
$U \lesssim 10^{-3}$ at 150~pc from the nucleus, which is far too low to produce strong [SiVII]
emission (see e.g. Ferguson et al. 1997; Rodriguez-Ardila et
al. 2005).
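The order-of-magnitude estimate above follows from the definition $U = Q_{\rm ion}/(4\pi r^2 c\, n_e)$. A minimal sketch of the scaling, where the nuclear ionizing photon rate $Q_{\rm ion}$ is an assumed value (not quoted in the text) broadly typical of a Seyfert nucleus:

```python
import math

PC_IN_CM = 3.086e18   # parsec in cm
C_LIGHT = 3.0e10      # speed of light, cm/s

def ionization_parameter(q_ion, r_pc, n_e):
    """Dimensionless ionization parameter U = Q_ion / (4 pi r^2 c n_e)."""
    r_cm = r_pc * PC_IN_CM
    return q_ion / (4.0 * math.pi * r_cm**2 * C_LIGHT * n_e)

# Assumed nuclear ionizing photon rate (hypothetical illustrative value).
Q_ION = 1.0e54        # photons/s

# At 150 pc with the reference density n_e ~ 1e4 cm^-3:
u = ionization_parameter(Q_ION, r_pc=150.0, n_e=1.0e4)
print(f"U ~ {u:.1e}")  # of order 1e-3, too low for strong [SiVII]
```

Even with this rather generous photon rate, $U$ stays near $10^{-3}$ at 150~pc, consistent with the argument in the text.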
We argue that, in addition to photoionization, shocks must
contribute to the coronal emission. This proposal is primarily
motivated by a parallel spectroscopic study of the kinematics
of the CLR gas of several Seyfert galaxies
(Rodriguez-Ardila et al. 2005 ), which reveals
coronal line profiles with velocities 500 $\rm km\, s^{-1 }$ $< v <$ 2000 $\rm km\, s^{-1 }$ .
Here we assess the proposal in a qualitative
manner, by looking for evidence for shocks from the morphology of the
gas emission.
In ESO~428 -G014, the remarkable alignment between [Si VII] and the radio emission
is a strong indication of the interaction
of the radio jet with the ISM. There is spectroscopic
evidence of a highly turbulent ISM in this object: asymmetric
emission line profiles at each side of the nucleus indicate gas velocities
of up to 1400~$\rm km\,s^{-1}$ (Wilson \& Baldwin 1989). Shocks with those
velocities heat the gas to temperatures $\gtrsim 10^7$~K, which will
locally produce the bremsstrahlung continuum in the UV -- soft X-rays
(Contini et al. 2004) necessary to produce coronal lines. [Si VII]
2.48~$\rm \mu m$, with IP = 205.08~eV, will certainly be enhanced in this process.
The concentric shell-like structure seen in NGC 3081 in [OIII] 5007
\AA\ and H$\alpha$ (Ferruit et al. 2000 )
is even more suggestive of propagating shock
fronts. From the [OIII]/H$\alpha$ map by Ferruit et al., the
excitation level at the position of the [Si VII] northern blob is similar to
that of the nucleus, which points to a similar ionization parameter
despite the increased distance from the nucleus. The cloud density might then
decrease with distance to balance the
ionization parameter, but this would demand a strong radiation field
to keep line emission efficient. Alternatively, a local source of
excitation is needed. The presence of cospatial UV
continuum, possibly locally generated bremsstrahlung, and [Si VII]
line emission circumstantially supports the shock-excitation proposal.
In the case of Circinus and NGC~1068, the direct evidence for shocks
from the [Si VII] images is less obvious. In NGC~1068, the orientation
of the three blob nuclear structure does not show an obvious
correspondence with the radio-jet; it may still be possible that the
high-velocity coronal gas component measured in NGC 1068
was missed by our narrow-band filter.
In Circinus, there are no radio
maps of sufficient resolution for a meaningful comparison. However,
both galaxies present high-velocity nuclear outflows, which are
inferred from the asymmetric and blueshifted profiles measured in the
[OIII] 5007 gas in the case of Circinus (Veilleux \& Bland-Hawthorn
1997), and in the Fe and Si coronal lines in both. From the coronal profiles,
velocities of $\sim$500~$\rm km\,s^{-1}$ in Circinus and $\sim$2000~$\rm km\,s^{-1}$ in NGC
1068 are inferred (Rodriguez-Ardila et
al. 2004, 2005).
An immediate prediction for the presence of shocks is the production
of free-free emission from the shock-heated gas, with a maximum in the
UV -- soft X-rays. We make here a first-order assessment of this contribution using
results from the photoionization--shock composite models by Contini et
al. (2004), and compare it with the observed soft X-rays.
For each galaxy, we derive the 1~keV free-free emission
from models computed for a nuclear ionizing flux $F_h = 10^{13}~\rm
photons~cm^{-2}~s^{-1}~eV^{-1}$, a pre-shock density $n_o=300~\rm cm^{-3}$, and the shock velocity closest
to the gas velocities measured in these galaxies (we use
figure A3 in Contini et al.). The choice of this high
ionizing-flux value
has a relatively low impact on the 1~keV emission estimate,
as the bremsstrahlung emission from this flux drops sharply shortward of the Lyman
limit; the results depend more on the strength of the
post-shock bremsstrahlung component, which is mainly determined
by the shock velocity and peaks in the soft X-rays (see fig. A3 in Contini et al.
for illustrative examples). Regarding the choice of densities,
pre-shock densities of a few hundred $\rm cm^{-3}$
actually imply densities downstream (from where
the shock-excited lines are emitted) a factor of 10--100 higher,
the more so for higher velocities,
and thus within the range of those estimated from coronal line measurements (see above).
Having selected the model parameters, we further assume that the estimated 1~keV emission
comes from a region with the size of the observed [Si VII] emission.
Under those premises, the results are as follows. For NGC 1068,
assuming the free-free emission extends uniformly over a $\pi
\times (70~\rm pc)^2$ region (cf. Table 1), and models for shock
velocities of 900~$\rm km\,s^{-1}$, the inferred X-ray flux is larger by a factor
of 20 than the nuclear 1~keV Chandra flux derived by Young et
al. (2001). One could in principle account for this difference by
assuming a volume filling factor of 5--10\%, which in turn would
account for the fact that free-free emission should mostly be produced
locally at the shock fronts.
In the case of Circinus, following the same procedure, we assume a
free-free emission region of size $\pi \times (30~\rm pc)^2$ (cf. Table
1), and models with shock velocities of 500~$\rm km\,s^{-1}$ (see above). In this
case, the inferred X-ray flux is lower than the 1~keV BeppoSAX flux,
as estimated in Prieto et al. (2004), by an order of magnitude. For
the remaining two galaxies, we assume respective free-free emission
areas (cf. Table 1) of $300~\rm pc \times 50~pc$ for ESO 428-G014 -- the width of [Si VII] is $\sim$50~pc in the direction
perpendicular to the jet -- and $2 \times \pi \times (14~\rm pc)^2$
for NGC 3081 -- in this case, free-free emission is assumed to
come from
the nucleus and the detached [Si VII] region north of it only. Taking
the models for shock velocities of 900~$\rm km\,s^{-1}$, the inferred X-ray
fluxes, when compared with 1~keV fluxes estimated from BeppoSAX data
analysis by Maiolino et al. (1998), are of the same order for ESO
428-G014 and about an order of magnitude lower in NGC 3081.
The above results are clearly dominated by the assumed size of the
free-free emission region, which is unknown. The only purpose of this
exercise is to show that under reasonable assumptions of shock
velocities, as derived from the line profiles,
the free-free emission generated by these shocks in the
X-ray could be accommodated within the observed soft
X-ray fluxes.
We thank Heino Falcke who provided us with the 2 cm radio image
of ESO 428 -G014, and Marcella Contini for a thorough review of the manuscript.
\section{Introduction}
The $\delta$~Scuti class of variables comprises dwarfs or giants with spectral types
between A2 and F5. They lie on an extension of the Cepheid instability strip
with periods in the range 0.02 --0.3 ~d. Most of the pulsational driving in
these stars is by the $\kappa$~mechanism in the He{\sc II} partial ionization
zone. A large number of $\delta$~Sct stars have
been identified in photometric time series data from the {\it Kepler}
spacecraft. A study of these stars by \citet{Balona2011 c} has revealed that
even in the center of the instability strip, no more than half the stars
pulsate as $\delta$~Sct variables. This is a surprising finding; the reason
for damping of pulsations in constant stars in the instability strip is
presently not known. Prior to the {\it MOST, CoRoT} and {\it Kepler} space missions, it was
thought that the frequency spectra in $\delta$~Sct stars may become very
dense as the detection threshold is lowered. The reason is that modes of
high degree, $l$, which are not seen from the ground because of the low
amplitudes due to cancellation effects, should be easily seen at the
precision level attained for these space missions \citep{Balona1999 }. Indeed, this expectation appears to have been fulfilled in {\it CoRoT}
observations of HD~50844 \citep{Poretti2009 }. This star appears to have a
very high mode density over the whole frequency range, indicating that modes
of relatively high $l$ are seen. However, in a study of {\it Kepler}
$\delta$~Sct stars, \citet{Balona2011 c} find that, in general, the mode
density is quite moderate and that HD~50844 is probably an exception. In order to study mode density, one needs to be sure that frequencies are
correctly extracted from the data. This is not an issue for the high
amplitude peaks in the periodogram, but since the number of modes is
expected to increase with decreasing amplitude, one needs to estimate the
probability that a given peak is due to noise (the false-alarm probability). The significance level of a frequency can be estimated easily only for the
case of an equally-spaced time series, when the Lomb-Scargle criterion can
be used \citep{Scargle1982 }. When the sampled times are not uniformly spaced,
the problem is intractable and can only be solved by numerical means
\citep{Frescura2008 }. Ground-based data are seldom equally spaced and the
``four-sigma'' rule is often used to estimate the significance level
\citep{Breger1993 }. This rule states that a peak is deemed significant if
its amplitude exceeds the background noise level of the periodogram by a
factor of four. This is a purely empirical rule with no statistical
foundation, but does seem to be reasonable in many cases
\citep{Kuschnig1997, Koen2010 }. The {\it CoRoT} data are uniformly spaced
and it is therefore important to apply the correct statistics. For a time series of duration, $t$, the width of a peak in the
power spectrum is inversely proportional to $t$. Thus for a time series of a
given duration, there is a maximum limit on the number of peaks per frequency
interval that can be measured. Since the mode density is expected to
increase with decreasing amplitude, blending of peaks due to the finite
frequency resolution becomes very important. Thus the number of frequencies
that can be extracted, and hence the mode density, depends not only on a
correct estimation of the false alarm probability, but also on effects
related to the finite frequency resolution. These effects have not been
taken into account in estimates of mode density by \citet{Poretti2009 }
and \citet{Balona2011 c}. From the above considerations, it is clear that one needs to carefully reconsider
the question of significance in frequency extraction for {\it CoRoT} and {\it
Kepler} data and, in particular, the importance of frequency resolution and
close frequencies. In this paper we use simulations at various mode
densities to determine the effect of mode density on frequency extraction. We also compare results obtained with the Lomb-Scargle false alarm
probability (FAP) criterion with those calculated using the four-sigma
rule. It turns out that neither of these criteria is useful if the
number of modes increases with decreasing amplitude. Finally, we compare
the amplitude density in the periodogram of HD\,50844 with those of {\it
Kepler} $\delta$~Scuti stars. \section{The data and noise properties}
The {\it CoRoT} observations of HD~50844 consist of $n = 155827 $ white light
data points obtained between HJD~2452590.0684 and HJD~2452647.7817 (57.7 ~d)
with a cadence of 30 ~s. The time series is almost uninterrupted, except for
data taken at the southern magnetic anomaly and some other points which were
rejected because they were clearly anomalous. We first of all need to understand the noise properties of these data. One
way to do this is to select stars which appear to be constant, or vary the
least. The noise level in the periodogram of such a star may be used to
calibrate the noise level in any other star in a purely empirical way. The definition of what is meant by the noise level in the periodogram needs
to be clarified. In visual estimates of the noise level, this is generally
taken as the height of the majority of peaks. The peaks in the periodogram
may be likened to blades of grass on a lawn and the mean height of the peaks
may be called the ``grass'' level. This loose definition has been placed on
a firmer footing by \citet{Balona2011 a} who defined $\sigma_G = 2.5 \sigma$,
where $\sigma$ is the median height of the periodogram peaks in a region
free of signals. In the four-sigma rule, a peak with amplitude $A$ may be
considered significant if $A > 4 \sigma_G$. One may also define the mean
background noise level as just the average of the power or the amplitude. Since the background noise level typically increases towards low frequencies,
and since the background is often difficult to estimate in crowded regions of
the spectrum, the exact meaning of ``mean background level'' is often not
clear. It can be shown that the mean amplitude, $A$, of the periodogram of pure
white noise with variance $\sigma_0 $ is given by $A = 2 \sigma_0 /\sqrt{n}$,
where $n$ is the number of points in the time series (see \citet{Kendall1976 }). From the high-frequency tail of the periodogram of HD~50844 we find $A =
0.0035 $~mmag which gives $\sigma_0 = 0.691 $~mmag. Cast in terms of apparent
magnitude, $V$, and duration of the time series, $\Delta t$, the noise level
in the periodogram can be written as
$$\log \sigma_G = a + \frac{1}{5}V - \frac{1}{2}\log \Delta t,$$
where $a$ is a constant. By fitting 175 of the least variable stars in the
A--F range in the {\it Kepler} data, \citet{Balona2011 a} found
$$\log \sigma_G = -0.93 + 0.21 V - 0.47 \log \Delta t, $$
where $\sigma_G$ is measured in ppm and $\Delta t$ in days, confirming the
expected relationship. In the case of the {\it CoRoT} data, we do not have access to a large number
of constant or nearly constant A--F stars for a completely independent
estimate of the noise level. However, we do have data for HD~292790, also
measured by \citet{Poretti2009 }. This star is variable at low frequencies,
but the noise level is well defined at the higher frequencies relevant for
$\delta$~Sct stars. The magnitude of HD~292790 is $V = 9.48 $ and the
time-series duration $\Delta t = 54.66 $~d. For the $\delta$~Scuti star
HD~50844, $V = 9.09$ and $\Delta t = 57.70$~d. Fig.~\ref{period}
shows the periodograms for these two stars. Fig.~\ref{pnoise} shows an
expanded region to show the noise levels. The periodogram of HD~292790 can be understood in terms of rotation of a
spotted star with frequency $f = 0.4386 $~d$^{-1 }$ \citep{Poretti2009 }. The periodogram contains a signal at 13.9668 ~d$^{-1 }$ and its harmonics
which is the orbital frequency of the satellite. For this star we
estimate $\sigma_G = 5 $ ppm, which allows us to determine the constant
$a$ in the above equation. We obtain
$$\log \sigma_G = -0.33 + 0.2 V - 0.5 \log \Delta t, $$
which means that {\it Kepler} data are about four times more precise than
{\it CoRoT} data. This formula gives $\sigma_G = 4.0 $~ppm for HD~50844,
which is indeed the noise level of the high frequency tail in Fig. ~\ref{pnoise}. Thus, according to the four sigma rule, we may assume that for HD~50844 only
frequencies with amplitudes exceeding approximately 16 ~ppm are likely to be real. The spectral window of the {\it CoRoT} data for HD~50844 is shown in
Fig. ~\ref{specwin}. The FWHM of the central peak is about 0.02 ~d$^{-1 }$,
which means that frequencies separated by less than about 0.01 ~d$^{-1 }$ are
not likely to be resolved. \begin{figure}
\centering
\includegraphics{period. ps}
\caption{Periodograms of HD~50844 (top) and HD~292790 (bottom). The
estimated noise level for HD~50844 is $\sigma_G = 4.0 $\, ppm while for
HD~292790 it is $\sigma_G = 5.0 $\, ppm (see Fig. \, \ref{pnoise} for an expanded
view). }
\label{period}
\end{figure}
\begin{figure}
\centering
\includegraphics{pnoise. ps}
\caption{Periodograms of HD~50844 (top) and HD~292790 (bottom) with expanded
amplitude scale to show the noise level. }
\label{pnoise}
\end{figure}
\begin{figure}
\centering
\includegraphics{specwin. ps}
\caption{Spectral window of the HD~50844 {\it CoRoT} data. }
\label{specwin}
\end{figure}
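As a sanity check, the empirical calibration derived above reproduces the quoted noise levels. A minimal sketch using the constants from the text ($\log \sigma_G = -0.33 + 0.2V - 0.5\log \Delta t$, with $\sigma_G$ in ppm and $\Delta t$ in days):

```python
import math

def grass_noise_ppm(v_mag, dt_days):
    """Grass noise level from the empirical CoRoT calibration:
    log sigma_G = -0.33 + 0.2*V - 0.5*log(dt), sigma_G in ppm, dt in days."""
    return 10.0 ** (-0.33 + 0.2 * v_mag - 0.5 * math.log10(dt_days))

# HD 50844 (V = 9.09, dt = 57.70 d): ~4 ppm, matching the high-frequency tail
print(f"{grass_noise_ppm(9.09, 57.70):.1f} ppm")
# HD 292790 (V = 9.48, dt = 54.66 d): ~5 ppm, the value used to fix the constant
print(f"{grass_noise_ppm(9.48, 54.66):.1f} ppm")
```

The four-sigma threshold for HD~50844 then follows as $4\sigma_G \approx 16$~ppm, as stated in the text.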
\section{Tests of significance}
Although the four-sigma test of significance is widely used because it is
simple to apply, it is by no means a rigorous test. For example,
\citet{Koen2010 } finds that the actual significance levels corresponding to
the four sigma limit may vary by orders of magnitude, depending on the exact
data configuration. He finds that the number and time spacing of the
observations have little influence on the significance levels. On the other
hand, the total duration of the time series, the frequency range that is
searched and previous prewhitening of the data greatly affect the significance
level of the four sigma rule. In particular, prewhitening removes too much
power at the given frequency, hence the estimated mean noise level is lower
than it should be, decreasing the false alarm probability. This means that
repeated application of the rule on successively prewhitened data will tend
to assign significance to peaks which are probably noise. \citet{Koen2010}
also finds that the four-sigma rule, when applied to a single frequency peak,
is likely to underestimate its significance. In other words, there are
peaks of lower amplitude which are significant, but which are considered to
be noise by the four-sigma rule. More precise significance tests for periodograms have been discussed by,
among others, \citet{Scargle1982, Horne1986, Koen1990 } and
\citet{Schwarzenberg1998 }. Somewhat different methods are discussed by
\citet{Reegen2007, Reegen2008 } and \citet{Baluev2008 }. An excellent discussion
of the problem is presented by \citet{Frescura2008 }. The problem is clearly a
very difficult one for data which is not uniformly sampled. For uniformly-sampled
data Scargle's significance test \citep{Scargle1982 } is easy to calculate and
gives the false-alarm probability of a periodogram peak given the power
level and the noise variance of the data. Since the {\it CoRoT} and {\it Kepler}
data are equally sampled, except for occasional small gaps, it is clear that
this test is to be preferred to the four-sigma rule. For equally-spaced data, \citet{Scargle1982 } shows that the false-alarm
probability (FAP), $P(z)$, of a peak with power $z$ is given by
$$P(z) = 1 - \left\{1 - \exp(-z/\sigma_0^2)\right\}^{N_i},$$
where $\sigma^2 _0 $ is the noise variance of the data and $N_i$ is the number of
independent frequencies. We call this the Lomb-Scargle significance
criterion. It should be noted that in some references the power is normalized
by the noise variance, while in others it is not. We use the un-normalized definition. This relationship can be inverted to give the power for any FAP,
$$z = -\sigma_0^2 \ln\left\{1 - \left(1 - P(z)\right)^{1/N_i}\right\}.$$
The number of independent frequencies is well defined only for
equally-spaced data and is given by $N_i = n/2 $, where $n$ is the number of
data points \citep{Frescura2008 }. Astronomers usually prefer the periodogram
to be a function of amplitude rather than power. For our definition of power,
$z$, the amplitude is given by $A = 2 \sqrt{z/n}$. Given a certain FAP,
$P_A$, one may then calculate an amplitude, $A_A$, which corresponds to this
probability,
$$A_A = 2 \sqrt{ -\frac{\sigma_0^2}{n}\ln\left\{1 - \left(1 -
P_A\right)^{1/N_i}\right\}}.$$
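For the HD~50844 parameters this inversion gives a detection threshold close to the four-sigma one. A minimal sketch ($\sigma_0$ and $n$ are the values quoted in Sect.~2):

```python
import math

def fap_amplitude(p_fa, sigma0, n):
    """Amplitude A_A whose Lomb-Scargle false-alarm probability is p_fa,
    for an equally spaced series of n points with noise sigma0.
    Inverts P(z) = 1 - (1 - exp(-z/sigma0^2))^(n/2), with A = 2*sqrt(z/n)."""
    n_indep = n / 2.0                       # number of independent frequencies
    z = -sigma0**2 * math.log(1.0 - (1.0 - p_fa) ** (1.0 / n_indep))
    return 2.0 * math.sqrt(z / n)

# HD 50844: sigma0 = 0.691 mmag, n = 155827 points
a_thr = fap_amplitude(0.01, 0.691, 155827)
print(f"A(FAP=0.01) = {a_thr:.4f} mmag")    # ~0.014 mmag, i.e. ~14 ppm
```

The FAP = 0.01 threshold of $\sim$14~ppm is thus comparable to, and slightly below, the 16~ppm four-sigma limit quoted earlier.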
We used the actual
times of {\it CoRoT} observations of this star to generate a time series
comprising many sinusoidal variations with randomly generated
frequencies, amplitudes and phases. The simulations consist of uniformly
distributed frequencies in the range 0--30~d$^{-1}$, which is roughly the
frequency range of pulsations in HD~50844. In $\delta$~Sct stars, the number of
modes increases sharply with decreasing amplitude. To roughly mimic this,
the amplitudes in our simulations are exponentially distributed. A noise error
with a Gaussian distribution and standard deviation of 0.500 ~mmag was added to
each point of the synthetic time series. This leads to a mean periodogram
height of 0.0025 ~mmag or a grass noise level $\sigma_G = 0.006 $~mmag,
roughly similar to that found in HD~50844. The aim of these simulations is
to determine the effect of mode density on frequency extraction and to test
the reliability of these frequencies in data with a high signal-to-noise (S/N)
ratio. We also wish to investigate the efficiency of the four-sigma rule and
the Lomb-Scargle significance criterion. The time series was analyzed using the Lomb periodogram and the algorithm
described by \citet{Press1989 } for fast implementation. The peak of maximum
amplitude is located and its frequency measured by fitting a quadratic to
points around maximum amplitude. This frequency, together with up to 20
previous frequencies, is used in a simultaneous least-squares Fourier fit to
the data. Once the limit of 20 frequencies has been reached, the original time
series is replaced by the prewhitened time series and the process repeated
until the peak of highest amplitude is no longer significant. We start with the sparsest frequency set of 30 frequencies. In this case
we are able to extract 65 frequencies for which FAP $<$ 0.01 (or 47
frequencies using the four-sigma limit) even though the simulated data has
only 30 frequencies. Of these frequencies, the first 28 of highest
amplitude match the known frequencies. The two missing frequencies do
appear, but are far down on the list since they have very low amplitudes. The question that needs to be asked is where do the additional 35 frequencies
which have significant amplitudes come from. Fig. \, \ref{a30 } shows the periodogram in a specific frequency region and also
a schematic periodogram of the same region which shows that the large peak
actually consists of two unresolved closely-spaced frequencies. These
frequencies and amplitudes are $f_1 = 19.3458 $~d$^{-1 }$, $A_1 = 7.118 $~mmag,
$f_2 = 19.3300 $~d$^{-1 }$, $A_2 = 0.574 $~mmag. The figure also shows a
schematic periodogram of the extracted frequencies. Instead of extracting
just a single frequency, the code finds numerous, roughly equally-spaced
frequency components of relatively high amplitudes. A third frequency at
$f_3 = 19.4475 $~d$^{-1 }$, $A_3 = 1.855 $~mmag also shows fictitious
components. \begin{figure}
\centering
\includegraphics{a30. ps}
\caption{Schematic periodogram of known frequency components (with positive
amplitudes) and extracted components (negative amplitudes) in a simulation. }
\label{a30 }
\end{figure}
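The extraction procedure described above can be sketched as an iterative prewhitening loop. This is a simplified stand-in for the actual code: it uses a plain least-squares amplitude spectrum on a fixed frequency grid, omits the quadratic peak refinement and the simultaneous 20-frequency fit, and the signal parameters are made up for illustration:

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Least-squares sinusoid amplitude at each trial frequency."""
    amps = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f * t
        amps[i] = 2.0 * np.hypot(np.dot(y, np.cos(w)),
                                 np.dot(y, np.sin(w))) / len(t)
    return amps

def extract(t, y, freqs, threshold, max_iter=50):
    """Pick the highest peak, subtract its least-squares sinusoid fit
    (prewhitening), and repeat until the highest remaining peak drops
    below the significance threshold."""
    found, resid = [], y.copy()
    for _ in range(max_iter):
        amps = amplitude_spectrum(t, resid, freqs)
        k = int(np.argmax(amps))
        if amps[k] < threshold:
            break
        w = 2.0 * np.pi * freqs[k] * t
        design = np.column_stack([np.cos(w), np.sin(w)])
        coef, *_ = np.linalg.lstsq(design, resid, rcond=None)
        resid = resid - design @ coef
        found.append((freqs[k], float(np.hypot(coef[0], coef[1]))))
    return found

# Synthetic series: two well-separated sinusoids plus noise (illustrative).
rng = np.random.default_rng(1)
t = np.arange(0.0, 50.0, 0.02)
y = (5.0 * np.sin(2 * np.pi * 12.3 * t)
     + 2.0 * np.sin(2 * np.pi * 7.1 * t)
     + rng.normal(0.0, 0.5, t.size))
peaks = extract(t, y, np.arange(0.1, 30.0, 0.01), threshold=0.2)
print(peaks)  # recovers ~(12.3, 5.0) then ~(7.1, 2.0), then stops
```

With well-resolved frequencies the loop stops cleanly; the cascade of fictitious peaks discussed in the text appears only when the prewhitened frequency is an unresolved blend.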
The origin of these fictitious components is easy to understand. They come
about because the extracted frequency of the unresolved peak is
19.3465 ~d$^{-1 }$, which differs from the true frequency. Even though the
difference is only 0.0007 ~d$^{-1 }$, the prewhitened data still contains
significant signal because the amplitude of the unresolved peak is so high. Prewhitening, in fact, is equivalent to adding a fictional signal to the
data. When the prewhitening frequency and amplitude are sufficiently close
to the true values, both are removed through destructive interference. If the
prewhitening frequencies and amplitudes are not quite correct, what remains is a
sequence of equally-spaced frequencies of diminishing amplitude (an
interference pattern). If the frequency of interest has a low amplitude,
the residual interference pattern may be below the noise level, which is
nearly always the case in ground-based observations. Space data have such a
high signal-to-noise ratio that the amplitudes of the residual interference
pattern are well above the noise level. Thus far more frequencies are
extracted than actually exist. Provided that the S/N level is sufficiently
high, successive frequency extraction will lead to an ever increasing number
of frequencies at low amplitudes which do not actually exist. \begin{table}
\centering
\caption{Results of frequency extraction using the Lomb periodogram on
simulated data. Random frequencies uniformly distributed in the range
0 --30 ~d$^{-1 }$ and random amplitudes exponentially distributed in the
range 0 --10 ~mmag were used. $N_0 $ is the number of generated frequencies,
$N$ is the number of these frequencies with amplitudes above the FAP = 0.01
threshold. $N_{\rm ex}$ is the number of extracted frequencies with
FAP $<$ 0.01 using the Lomb-Scargle periodogram. The number of frequencies extracted
using the four-sigma criterion is given by $N_4$. The last two columns give
the numbers of significant frequencies using non-linear optimization. $L_{\rm ex}$ refers to the numbers using the Lomb-Scargle FAP and
$L_4 $ to the four-sigma criterion. }
\begin{tabular}{rrrrrr}
\hline
\hline
$N_0 $ & $N$ & $N_{\rm ex}$& $N_4 $ & $L_{\rm ex}$ & $L_4 $ \\
\hline
30 & 30 & 65 & 47 & 30 & 29 \\
50 & 50 & 137 & 87 & 57 & 54 \\
100 & 98 & 220 & 150 & 132 & 112 \\
200 & 198 & 537 & 350 & 402 & 320 \\
300 & 298 & 879 & 585 & 729 & 596 \\
500 & 493 & 1611 & 1141 & 1439 & 1197 \\
700 & 693 & 2101 & 1545 & 1976 & 1687 \\
1000 & 998 & 3711 & 2215 & 2566 & 2283 \\
2000 & 1982 & 3405 & 2805 & 3108 & 2797 \\
3000 & 2969 & 3599 & 2968 & 3488 & 3187 \\
\\
\hline
\end{tabular}
\label{syn}
\end{table}
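The interference pattern left by prewhitening at a slightly wrong frequency can be reproduced directly with the numbers quoted above for the unresolved blend: removing the 7.118-mmag signal with a frequency off by only 0.0007~d$^{-1}$ leaves a residual far above the 0.006-mmag grass level.

```python
import numpy as np

f_true, a_true = 19.3458, 7.118    # blend values quoted in the text (d^-1, mmag)
f_fit = 19.3465                    # extracted frequency, off by 0.0007 d^-1

# 57.7-d run, sampled more coarsely than CoRoT's 30-s cadence for speed
t = np.linspace(0.0, 57.7, 11540)
residual = (a_true * np.sin(2 * np.pi * f_true * t)
            - a_true * np.sin(2 * np.pi * f_fit * t))

# The residual is a beat, 2*A*|sin(pi*df*t)|, that grows over the run;
# on further prewhitening it produces the comb of fictitious frequencies.
print(f"max residual = {np.abs(residual).max():.2f} mmag")
sigma_g = 0.006                    # grass noise level of the simulation, mmag
print(f"= {np.abs(residual).max() / sigma_g:.0f} x the grass noise level")
```

The residual approaches the beat envelope $2A\sin(\pi\,\Delta f\,T) \approx 1.8$~mmag by the end of the run, some two orders of magnitude above the noise, which is why the fictitious components are extracted as highly significant.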
Results of simulations using an increasing density of frequencies are shown
in Table~\ref{syn}. We note that the four-sigma criterion is more stringent
than the Lomb-Scargle FAP criterion, though it is clear that these criteria
are irrelevant in identifying the true frequencies. Note also that the
number of extracted frequencies increases only slowly for $N > 1000$. The
reason is the finite resolution imposed by the limited
duration of the time series. There are two problems that come to light. The first problem consists of
unavoidable errors in blended frequencies with high amplitudes, resulting in
a cascade of low-amplitude (but highly significant) fictitious frequencies. The second problem is one of frequency resolution which increases the number
of blended frequencies and compounds the problem. It might be possible to
recognize close frequencies using nonlinear optimization. To test this
possibility we used the Levenberg-Marquardt algorithm in combination with the
Lomb periodogram to optimize the frequencies, grouping closely-spaced
frequencies in a simultaneous optimization scheme. Results are shown in
Table\, \ref{syn}. While nonlinear optimization gives much better results
for low mode densities, it still fails badly as the density increases. Another symptom which arises from continuous prewhitening of data with very
high S/N is the development of a plateau in the periodogram consisting of a
very large number of blended peaks of almost equal amplitude. An example
can be seen in the simulation with $N_0 = 1000$ frequencies
(Fig.\,\ref{plateau}). Such a plateau occurs in other simulations, but is most
prominent in those with a high mode density. The development of a plateau
can be seen in Fig. \,3 of \citet{Poretti2009 }. \begin{figure}
\centering
\includegraphics{plateau. ps}
\caption{Periodogram of simulated data with 1000 actual frequencies after
prewhitening by 2000 frequencies showing the characteristic plateau. }
\label{plateau}
\end{figure}
From these simulations we conclude that one should be very careful in
extracting frequencies with small amplitudes in stars with very high S/N data
containing several high-amplitude peaks. In particular, no trust should be
placed in the very large number of frequencies which results from
successive prewhitening in such stars. This is undoubtedly the case in
HD\,50844. \section{Amplitude distribution}
The simulations described above show that the effect of finite frequency
resolution is very important. In fact, it becomes important
well before amplitudes drop to the threshold appropriate to a given false-alarm
probability. A question that is directly relevant to estimation of
the mode density is the distribution of amplitudes. In order to estimate
the mode density for the lowest detectable amplitudes, one needs to
calculate the amplitude distribution, i.e., the number of modes within a
given amplitude range. From the results of the previous section, it is
clear that repeated prewhitening will lead to an ever increasing number of
false modes for low amplitudes. In this section we investigate the
amplitude distribution using simulations. For this purpose, we generated a number of time series with random
frequencies uniformly distributed in the frequency range 0 --30 ~d$^{-1 }$
with uniformly distributed amplitudes in the amplitude range 0 --10 ~mmag. This differs from the analysis in the previous section where we used an
approximately exponential amplitude distribution. A uniform amplitude
distribution is not expected in $\delta$~Sct stars, but it allows the
known and extracted amplitude distributions to be more easily compared. Table\, \ref{uni} shows the number of simulated frequencies and the number
of extracted frequencies using the Lomb periodogram. We did not attempt to
apply non-linear optimization of the frequencies, since this does not
resolve the frequency resolution problem discussed in the previous section. Note that the number of extracted frequencies is much larger than the number
of real frequencies. This is not surprising because, in general, the
amplitudes in this simulation are much larger than in the exponential
distribution which increases the cascading effect that arises when
frequencies are unresolved. \begin{table}
\centering
\caption{Results of frequency extraction using the Lomb periodogram on
simulated data with uniform amplitude distribution. $N_0 $ is the total
number of simulated frequencies and $N$ is the number of
simulated frequencies with amplitudes above the FAP = 0.01 threshold. $N_{\rm ex}$ is the number of extracted frequencies with FAP $<$ 0.01
using the Lomb-Scargle periodogram. }
\begin{tabular}{rrr}
\hline
\hline
$N_0 $ & $N$ & $N_{\rm ex}$ \\
\hline
30 & 30 & 105 \\
50 & 50 & 238 \\
100 & 99 & 490 \\
200 & 199 & 1226 \\
300 & 299 & 1833 \\
500 & 499 & 2684 \\
700 & 699 & 3217 \\
1000 & 998 & 3711 \\
2000 & 1998 & 3711 \\
3000 & 2996 & 5212 \\
\\
\hline
\end{tabular}
\label{uni}
\end{table}
\begin{figure}
\centering
\includegraphics{amphist. ps}
\caption{Distribution of amplitudes extracted from the Lomb periodogram for
a selection of simulated data. The solid histogram shows the relative number
of extracted frequencies as a function of amplitude (mmags). The dashed
histogram is the true distribution. The panels are labeled according to the
total number of known frequencies. }
\label{amphist}
\end{figure}
\begin{table*}
\centering
\caption{Significant frequencies and amplitudes in HD~50844. The frequency $f$ is
in d$^{-1}$, the amplitude $A$ in mmag and the phase $\phi$ in radians. This is a fit to $A\cos(2\pi f(t - t_0) + \phi)$ with $t_0 = 2590.0000$. The second and third columns list mode identifications from spectroscopy,
$(l, m)$, and radial velocity amplitudes, $A_{\rm RV}$ (km\, s$^{-1 }$) from
\citet{Poretti2009 }.
672 \pm 0.00008 $ & $ 6.76 \pm 0.01 $ & $ 1.559 \pm 0.001 $ \\
$f_3 $ & (5,1 ) & & $11.25803 \pm 0.00010 $ & $ 4.37 \pm 0.01 $ & $ 0.773 \pm 0.002 $ \\
$f_4 $ & (3,3 ) & 0.52 & $12.84848 \pm 0.00010 $ & $ 3.55 \pm 0.01 $ & $-2.281 \pm 0.002 $ \\
$f_5 $ & & 0.45 & $12.23840 \pm 0.00009 $ & $ 3.53 \pm 0.01 $ & $-0.010 \pm 0.002 $ \\
$f_6 $ & (4,3 ) & & $14.44681 \pm 0.00008 $ & $ 3.08 \pm 0.01 $ & $ 1.986 \pm 0.003 $ \\
$f_7 $ & (3,2 ) & & $13.27339 \pm 0.00008 $ & $ 2.87 \pm 0.01 $ & $-2.871 \pm 0.003 $ \\
$f_8 $ & (5,3 ) & 0.11 & $13.35871 \pm 0.00008 $ & $ 2.51 \pm 0.01 $ & $ 1.803 \pm 0.003 $ \\
$f_9 $ & (11,7 )& 0.05 & $11.75098 \pm 0.00010 $ & $ 1.58 \pm 0.01 $ & $ 1.585 \pm 0.005 $ \\
$f_{10 }$ & (2, -2 )& 0.08 & $ 5.26680 \pm 0.00011 $ & $ 1.24 \pm 0.01 $ & $ 1.362 \pm 0.007 $ \\
$f_{11 }$ & & 0.20 & $14.46124 \pm 0.00011 $ & $ 1.24 \pm 0.01 $ & $ 0.071 \pm 0.007 $ \\
$f_{12 }$ & (4,2 ) & 0.10 & $14.43221 \pm 0.00011 $ & $ 1.08 \pm 0.01 $ & $-2.167 \pm 0.008 $ \\
$f_{13 }$ & & 0.10 & $ 9.95266 \pm 0.00011 $ & $ 1.02 \pm 0.01 $ & $ 2.798 \pm 0.008 $ \\
& & & $ 0.00497 \pm 0.00011 $ & $ 1.26 \pm 0.04 $ & $-2.483 \pm 0.029 $ \\
$f_{14 }$ & & & $11.98370 \pm 0.00011 $ & $ 0.94 \pm 0.01 $ & $ 1.148 \pm 0.009 $ \\
$f_{15 }$ & & & $ 6.55652 \pm 0.00011 $ & $ 0.82 \pm 0.01 $ & $ 2.895 \pm 0.010 $ \\
$f_{17 }$ & & & $13.56876 \pm 0.00011 $ & $ 0.85 \pm 0.01 $ & $-3.090 \pm 0.010 $ \\
$f_{16 }$ & & & $ 7.40484 \pm 0.00011 $ & $ 0.85 \pm 0.01 $ & $ 1.412 \pm 0.010 $ \\
$f_{18 }$ & (5,0 ) & & $10.26038 \pm 0.00011 $ & $ 0.79 \pm 0.01 $ & $-0.864 \pm 0.010 $ \\
$f_{19 }$ & & 0.08 & $ 5.78209 \pm 0.00011 $ & $ 0.74 \pm 0.01 $ & $-1.964 \pm 0.011 $ \\
& & & $11.70934 \pm 0.00012 $ & $ 0.69 \pm 0.01 $ & $-1.332 \pm 0.012 $ \\
$f_{20 }$ & & & $ 6.62851 \pm 0.00012 $ & $ 0.71 \pm 0.01 $ & $ 1.919 \pm 0.012 $ \\
& & & $12.12271 \pm 0.00012 $ & $ 0.70 \pm 0.01 $ & $ 1.160 \pm 0.012 $ \\
& & & $ 7.63511 \pm 0.00012 $ & $ 0.66 \pm 0.01 $ & $ 2.999 \pm 0.013 $ \\
& & & $27.29546 \pm 0.00012 $ & $ 0.66 \pm 0.01 $ & $ 2.046 \pm 0.013 $ \\
$f_{22 }$ & (3,1 ) & & $14.47801 \pm 0.00011 $ & $ 0.68 \pm 0.01 $ & $ 1.201 \pm 0.013 $ \\
& & & $14.23452 \pm 0.00011 $ & $ 0.56 \pm 0.01 $ & $ 2.125 \pm 0.015 $ \\
$f_{27 }$ & (4, -2 )& & $ 5.67451 \pm 0.00012 $ & $ 0.61 \pm 0.01 $ & $-1.642 \pm 0.014 $ \\
$2 f_1 $ & & & $13.84992 \pm 0.00012 $ & $ 0.59 \pm 0.01 $ & $-2.488 \pm 0.014 $ \\
& & & $ 4.66075 \pm 0.00012 $ & $ 0.53 \pm 0.01 $ & $ 0.390 \pm 0.016 $ \\
& & & $ 1.28065 \pm 0.00012 $ & $ 0.51 \pm 0.01 $ & $ 3.041 \pm 0.016 $ \\
$f_{30 }$ & - & 0.08 & $ 5.41839 \pm 0.00012 $ & $ 0.53 \pm 0.01 $ & $-2.480 \pm 0.016 $ \\
$f_{32 }$ & (5, -2 )& & $ 5.49082 \pm 0.00012 $ & $ 0.51 \pm 0.01 $ & $-0.946 \pm 0.016 $ \\
& & & $ 3.48107 \pm 0.00012 $ & $ 0.47 \pm 0.01 $ & $ 0.403 \pm 0.018 $ \\
& & & $11.33797 \pm 0.00012 $ & $ 0.48 \pm 0.01 $ & $-0.782 \pm 0.017 $ \\
& & & $ 1.17976 \pm 0.00012 $ & $ 0.46 \pm 0.01 $ & $-2.041 \pm 0.018 $ \\
$f_{44 }$ & (6,4 ) & & $14.60065 \pm 0.00012 $ & $ 0.46 \pm 0.01 $ & $ 0.842 \pm 0.018 $ \\
& & & $14.26363 \pm 0.00012 $ & $ 0.46 \pm 0.01 $ & $ 0.472 \pm 0.018 $ \\
& & & $18.72302 \pm 0.00012 $ & $ 0.45 \pm 0.01 $ & $ 1.073 \pm 0.019 $ \\
& & & $ 7.25400 \pm 0.00012 $ & $ 0.44 \pm 0.01 $ & $ 2.297 \pm 0.019 $ \\
& & & $10.62982 \pm 0.00012 $ & $ 0.43 \pm 0.01 $ & $-1.538 \pm 0.019 $ \\
& & & $14.06179 \pm 0.00012 $ & $ 0.42 \pm 0.01 $ & $ 2.927 \pm 0.020 $ \\
& & & $15.75242 \pm 0.00012 $ & $ 0.42 \pm 0.01 $ & $-0.870 \pm 0.020 $ \\
$f_{43 }$ & (4, -2 )& 0.12 & $ 5.04317 \pm 0.00012 $ & $ 0.42 \pm 0.01 $ & $-1.560 \pm 0.020 $ \\
& & & $ 0.32135 \pm 0.00012 $ & $ 0.42 \pm 0.01 $ & $-0.776 \pm 0.020 $ \\
& & & $13.06176 \pm 0.00013 $ & $ 0.40 \pm 0.01 $ & $ 3.040 \pm 0.021 $ \\
$f_{50 }$ &(12,10 )& & $15.22420 \pm 0.00012 $ & $ 0.39 \pm 0.01 $ & $-2.349 \pm 0.021 $ \\
$f_{46 }$ &(14,12 )& & $19.75031 \pm 0.00012 $ & $ 0.39 \pm 0.01 $ & $-0.179 \pm 0.021 $ \\
& & & $ 1.23137 \pm 0.00013 $ & $ 0.38 \pm 0.01 $ & $-1.064 \pm 0.022 $ \\
& & & $ 3.38306 \pm 0.00013 $ & $ 0.37 \pm 0.01 $ & $ 2.196 \pm 0.023 $ \\
& & & $ 0.93230 \pm 0.00013 $ & $ 0.37 \pm 0.01 $ & $ 0.115 \pm 0.023 $ \\
$f_{51 }$ & (8,5 ) & 0.08 & $11.64339 \pm 0.00013 $ & $ 0.36 \pm 0.01 $ & $ 1.352 \pm 0.023 $ \\
& & & $ 2.22293 \pm 0.00013 $ & $ 0.35 \pm 0.01 $ & $-2.960 \pm 0.024 $ \\
& & & $ 3.79566 \pm 0.00013 $ & $ 0.34 \pm 0.01 $ & $-0.829 \pm 0.024 $ \\
& & & $ 3.57994 \pm 0.00013 $ & $ 0.33 \pm 0.01 $ & $-0.908 \pm 0.025 $ \\
& & & $ 6.57856 \pm 0.00013 $ & $ 0.35 \pm 0.01 $ & $ 1.095 \pm 0.024 $ \\
& & & $15.56830 \pm 0.00013 $ & $ 0.33 \pm 0.01 $ & $ 2.248 \pm 0.025 $ \\
& & & $ 2.85138 \pm 0.00013 $ & $ 0.32 \pm 0.01 $ & $-1.230 \pm 0.026 $ \\
& & & $ 8.56364 \pm 0.00013 $ & $ 0.31 \pm 0.01 $ & $ 1.316 \pm 0.027 $ \\
\\
\hline
\end{tabular}
\label{hd}
\end{table*}
In Fig.~\ref{amphist} we show the distribution of amplitudes for four
selected simulations. In all cases, the true distribution is flat (uniform
amplitude distribution) and extends from 0--30~d$^{-1}$. The amplitude
distributions derived from the extracted frequencies are very different and
always show a large number of low-amplitude frequencies (off scale in the
figure). In addition, there is considerable spillage to high amplitudes
caused by two or more unresolved frequencies. The larger the number
of frequencies, the greater the number of frequencies at very low and very
high amplitudes.
\section{Frequencies in HD~50844 }
If we apply the Lomb-Scargle criterion to the 30-s cadence data of HD~50844
we find that there are about 1800 frequencies with amplitudes in excess of
0.014~mmag ($P_A = 0.01$) and 1700 frequencies in excess of 0.015~mmag
($P_A = 0.001$), mostly within the range 0--30~d$^{-1}$. This is
equivalent to 60 frequencies per d$^{-1 }$ which is at the limit of
frequency resolution. As we have seen, these numbers are likely to be
considerably larger than actually present in the star.
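The prewhitening cascade described here can be reproduced with a short numerical sketch (an illustration, not the analysis pipeline of this paper; a least-squares amplitude spectrum stands in for the Lomb-Scargle periodogram). Two genuine frequencies closer than the $1/T$ resolution limit are fitted and prewhitened as a single peak, and a spurious residual peak survives:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 25.0, 3000))  # 25-d time base, irregular sampling
# Two genuine signals separated by less than the 1/T = 0.04 d^-1 resolution limit
y = np.sin(2 * np.pi * 10.00 * t) + np.sin(2 * np.pi * 10.02 * t)

def amp_spectrum(t, y, freqs):
    """Least-squares sinusoid amplitude at each trial frequency."""
    amps = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        X = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        amps[i] = np.hypot(coef[0], coef[1])
    return amps

freqs = np.arange(9.5, 10.5, 0.002)
f_peak = freqs[np.argmax(amp_spectrum(t, y, freqs))]

# Prewhiten with the single (unresolved) extracted peak, then inspect the residual
X = np.column_stack([np.cos(2 * np.pi * f_peak * t), np.sin(2 * np.pi * f_peak * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_amp = amp_spectrum(t, y - X @ coef, freqs)
# The residual spectrum still contains significant (spurious) structure
```

Repeating the extraction on the residual then picks up low-amplitude frequencies that were never present in the input, which is the cascade seen in the simulations.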
In Table\, \ref{hd} we list the frequencies of highest amplitude. Since
all these frequencies have quite large amplitudes, there is no danger of
them being fictitious. Our frequencies and amplitudes agree very well with
those of \citet{Poretti2009 } and we have adopted the same numbering scheme for
the frequencies. There are very few combination frequencies; we find
$f_{10 } = 4 f_1 - 2 f_2 $ and $f_{12 } = 2 f_4 -f_3 $, but these are most probably
coincidences. The harmonic $2 f_1 $ is clearly visible.
\section{Comparison with {\it Kepler} $\delta$~Scuti stars}
One question that is of interest is whether HD~50844 has a higher mode
density than other $\delta$~Scuti stars. We cannot really answer this
question because it is not possible to extract frequencies below a certain
threshold level. Also, we have seen that the finite time resolution and the
prewhitening technique provide severe limitations on the frequencies that
can be reliably extracted. What can be answered is whether the amplitude
density in the periodogram is larger than in other $\delta$~Scuti stars.
To do this we need to compare the amplitude density in HD~50844 with those
of other stars.
The {\it Kepler} data provide by far the most homogeneous sample of light
curves of $\delta$~Sct stars. Nearly all $\delta$~Sct stars observed by
{\it Kepler} were discovered in the data from Quarters 0,1 and 2 which are
in the public domain. However, for most of these stars the length of the
time series is about 30 ~d. Since frequency resolution is important for
determining mode density, we need to ensure that the length of the time
series for each star is the same. We decided to truncate the time series of
all the {\it Kepler} stars to 25 ~d in order to obtain the largest sample of
$\delta$~Scuti stars. Also, we limit the data to stars observed with short
cadence (exposure times of about 60 ~s). While considerably more {\it
Kepler} $\delta$~Sct stars have been observed with long cadence (exposure
time about 30 ~min), these cannot be used because the highest frequency
that can be detected is only about 24 ~d$^{-1 }$, whereas most $\delta$~Sct
stars still have modes with frequencies as high as 50 ~d$^{-1 }$.
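The quoted long-cadence limit is the Nyquist frequency of the sampling; a quick check, assuming a 30-min sampling interval:

```python
# Nyquist frequency for ~30-min long-cadence sampling, in cycles per day
dt_days = 30.0 / (60.0 * 24.0)      # sampling interval: 30 min expressed in days
f_nyquist = 1.0 / (2.0 * dt_days)   # about 24 cycles per day
```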
There are 357 $\delta$~Scuti stars observed in short-cadence mode in the
{\it Kepler} database. Since all these stars were observed for the same
length of time, we can use the area of the periodogram above a certain
amplitude level as proportional to the number of modes with amplitudes higher
than this level. Fig. \, \ref{moden} shows this plot for the {\it Kepler}
$\delta$~Sct stars. The top panel shows the plot for HD\,50844 ; the star
is clearly not exceptional in this regard.
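One plausible implementation of the "area of the periodogram above a given amplitude level" (a sketch; the exact normalization used for the figure is not specified here) is:

```python
import numpy as np

def area_above(amps, levels):
    """Periodogram area above each amplitude level: the summed excess of the
    amplitude spectrum over the level. For time series of equal length this
    is taken as proportional to the number of modes with amplitudes higher
    than the level (one plausible definition)."""
    amps = np.asarray(amps, dtype=float)
    return np.array([np.clip(amps - L, 0.0, None).sum() for L in levels])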
\begin{figure}
\centering
\includegraphics{moden.ps}
\caption{Relative amplitude density as a function of amplitude for {\it Kepler}
$\delta$~Scuti stars. The top panel shows the same plot expanded to show
HD~50844 (thick line). }
\label{moden}
\end{figure}
\section{Conclusion}
Using simulations, we show that it is not possible to extract frequencies
reliably down to the expected noise level using successive prewhitening.
The reason is that unresolved peaks in the periodogram become more
numerous as the amplitude decreases. Such unresolved peaks lead to a
multitude of erroneous frequencies. Prewhitening by a frequency which differs
from the true frequency by a significant amount leaves many frequency residuals
of lower amplitude. In the high S/N data from space missions, these spurious
residual frequencies often have significant amplitudes, leading to a cascading
effect of spurious frequencies. Simulations show that frequency extraction
using the Lomb-Scargle or four-sigma criterion can lead to a gross overestimate of
the true number of frequencies. Using nonlinear optimization does not solve
this problem. It appears that the extraordinarily high mode density of the
{\it CoRoT} star HD\,50844 found by \citet{Poretti2009} is not real, but a
result of this phenomenon.
Because of the effect described above, it is not possible to distinguish
between frequencies present in the star and spurious low-amplitude frequencies
caused by prewhitening of unresolved frequency groups. Thus it is
not possible to count the number of individual modes with any degree of
certainty below a certain amplitude level. One can, however, compare the
power in the periodogram above a given amplitude for different stars. We
made such a comparison for HD\,50844 with 357 $\delta$~Scuti stars in the
{\it Kepler} public archive. It turns out that HD\,50844 is not exceptional
in this regard.
\bibliographystyle{mn2e}
\section{Introduction}
Magnetic reconnection is an important fundamental process in plasmas. It has drawn growing attention in extreme astrophysical sites such as
pulsars and magnetars,
where plasmas primarily consist of electron--positron pairs. One of the most notable applications is the magnetic dissipation problem in a pulsar wind,
an ultrarelativistic plasma flow from the pulsar magnetosphere. It is expected that an equatorial current sheet (like the heliospheric current sheet)
flaps extremely due to the oblique rotation of the central neutron star, and
that reconnection inside the current sheet dissipates the magnetic energy \cite{coro90 }. Importantly, in those environments the magnetic energy exceeds
the rest mass energy of lightweight electrons and positrons. When reconnection transfers magnetic energy to that of plasmas,
we need to take into account special relativity effects
both in the bulk motion and in the plasma heat. Our understanding of relativistic reconnection is much more limited
than our understanding of the nonrelativistic counterpart. In the last decade, there has been some fundamental work
in the following two areas:
magnetohydrodynamic (MHD) theories of a steady-state structure \cite{bf94 b, lyut03, lyu05 } and
time-dependent particle-in-cell simulations on kinetic scales
\cite{zeni01, claus04, zeni07, zeni08, lyu08}. Since these two approaches treat well-separated temporal and spatial scales,
it has been difficult to associate the results from the two research areas. After an initial attempt \cite{naoyuki06},
MHD simulations
of the basic reconnection process
stalled for a while. In order to bridge the two areas, we have developed
two simulation models to study relativistic magnetic reconnection:
the electron-positron two-fluid model \cite{zeni09 a, zeni09 b} and
the resistive relativistic MHD (RRMHD) model \cite{zeni10b}.
\section{Two-fluid Simulations}
The basic equations consist of relativistic fluid equations for electrons and positrons,
and Maxwell equations \cite{zeni09 a, zeni09 b}. For simplicity, $c$ is set to $1 $. \begin{eqnarray}
\label{eq:cont}
\partial_t ( \gamma_p n_p ) &=& -\div (n_p \vec{u}_p), \\
\label{eq:mom}
\partial_t \Big( { \gamma_p w_p \vec{u}_p } \Big)
&=& -\div \Big( { w_p \vec{u}_p\vec{u}_p } + \delta_{ij} p_p \Big)
+ \gamma_p n_p q_p (\vec{E}+\vec{v_p}\times\vec{B}) \nonumber \\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- \tau_{fr} n_p n_e (\vec{u}_p-\vec{u}_e), \\
\label{eq:ene}
\partial_{t} \Big(\gamma_p^2 w_p - p_p \Big)
&=& -\div ( \gamma_p w_p \vec{u}_p ) + \gamma_p n_p q_p (\vec{v_p}\cdot\vec{E})
- \tau_{fr} n_p n_e ({\gamma}_p-{\gamma}_e), \\
\label{eq:cont2 }
\partial_t ( \gamma_e n_e ) &=& -\div (n_e \vec{u}_e), \\
\label{eq:mom2 }
\partial_t \Big( { \gamma_e w_e \vec{u}_e } \Big)
&=& -\div \Big( { w_e \vec{u}_e\vec{u}_e } + \delta_{ij} p_e \Big)
+ \gamma_e n_e q_e (\vec{E}+\vec{v_e}\times\vec{B}) \nonumber \\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- \tau_{fr} n_p n_e (\vec{u}_e-\vec{u}_p), \\
\label{eq:ene2 }
\partial_{t} \Big(\gamma_e^2 w_e - p_e \Big)
&=& -\div ( \gamma_e w_e \vec{u}_e ) + \gamma_e n_e q_e (\vec{v_e}\cdot\vec{E})
- \tau_{fr} n_p n_e ({\gamma}_e-{\gamma}_p), \\
\partial_t {\vec{B}} &=& - \nabla \times \vec{E},
~~~~~~~~
\partial_t {\vec{E}} = \nabla \times \vec{B} - 4 \pi ( q_p n_p \vec{u}_p+q_e n_e \vec{u}_e ). \end{eqnarray}
In these equations,
the subscript $p$ means positron properties (and $e$ for electrons),
$\gamma$ is the Lorentz factor,
$n$ is the proper density,
$\vec{u}=\gamma\vec{v}$ is the 4-velocity,
$w$ is the enthalpy,
$p$ is the proper pressure, and $q_p=-q_e$ is the charge. We assume a $\Gamma$-law equation of state
with the adiabatic index $\Gamma=4 /3 $. Therefore the enthalpy is given by $w=nmc^2 +[\Gamma/(\Gamma-1 )]p$. In order to mimic the effective resistivity,
we introduce an inter-species friction force
in the last terms of the momentum and energy equations with a coefficient $\tau_{fr}$. Internally, we consider the energy density
relative to the rest mass energy density
(i.e., we subtract Eq.~\eqref{eq:cont} from Eq.~\eqref{eq:ene})
as in earlier work \cite{naoyuki06, marti03}. The time evolution is solved by a standard numerical scheme,
i.e., a modified Lax--Wendroff scheme. A common difficulty of relativistic fluid simulations is that
the fluid macro properties (the ``conservative variables'') on the left hand sides
are nonlinear combinations of the basic elements (the ``primitive variables'')
such as $\gamma$ and $p$ \cite{marti03 }. In our simulation,
we calculate the primitive variables from the conservative variables
by analytically solving a quartic equation of $|{u}|$ \cite{zeni09 a}
on all grid cells at each half timestep. We investigate a two-dimensional system evolution in the $x$--$z$ plane. The reconnection point is set to the origin $(x, z)=(0,0 )$. We employ a Harris-like initial configuration:
$\vec{B} = B_0 \tanh(z)~\vec{\hat{x}}, $
$\vec{j} = {B_0 } \cosh^{-2 }(z)~\vec{\hat{y}}$,
$n = n_0 \cosh^{-2 }(z) + 0.1 n_{0 }$, $p=nmc^2 $,
$\vec{u}=0 $, and $\vec{E} = \eta \vec{j}$. In the upstream region ($|z|{\gg}0 $),
the magnetization parameter is $\sigma = B_0 ^2 /[4 \pi (2 w)] =4 $. The relevant Alfv\'{e}n speed is $c_{A, up}=[\sigma/(1 +\sigma)]^{1 /2 } \sim 0.89 c$
or $\gamma c_{A, up} = \sqrt{\sigma} = 2 $. We assume that the effective resistivity $\eta \propto \tau_{fr}$ is
localized around the reconnection point. Neumann-like boundaries are located at $x = \pm 120 $ and at $z = \pm 60 $. \begin{figure}
\includegraphics[width=\columnwidth]{f1.pdf}
\caption{(Color online)
The plasma 4 -velocity $u_x$ and the magnetic field lines
at (a) $t=100 $ and (b) $t=400 $. In Panel (a) the arrows indicate the vertical structures,
which will be discussed later. In Panel (b) the dashed lines represent Petschek-type slow-shock regions. (From run L3 in Ref.~\cite{zeni09a})
}
\end{figure}
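The primitive-variable recovery step can be sketched as follows for a single fluid (a simplified illustration: a bracketing root-find replaces the analytic quartic mentioned above, and the function names are ours):

```python
import numpy as np

# Sketch of primitive-variable recovery for one relativistic fluid with
# Gamma = 4/3, in units m = c = 1. The residual is positive at small |u|
# and negative at large |u| in this regime, so bisection brackets the root.
GAMMA = 4.0 / 3.0

def conservatives(n, u, p):
    """Primitives (proper density n, 4-speed u > 0, pressure p) -> (D, m, E)."""
    g = np.sqrt(1.0 + u * u)                 # Lorentz factor
    w = n + GAMMA / (GAMMA - 1.0) * p        # enthalpy, w = n m c^2 + 4p
    return g * n, g * w * u, g * g * w - p

def primitives(D, mom, E, tol=1e-14):
    """Recover (n, u, p) from (D, m, E) by bisection in |u|."""
    def resid(u):
        g = np.sqrt(1.0 + u * u)
        w = mom / (g * u)                    # from the momentum density
        p = (w - D / g) * (GAMMA - 1.0) / GAMMA
        return g * g * w - p - E             # energy-equation residual
    a, b = 1e-12, 1e6
    fa = resid(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if fa * resid(m) <= 0.0:
            b = m
        else:
            a, fa = m, resid(m)
        if b - a < tol:
            break
    u = 0.5 * (a + b)
    g = np.sqrt(1.0 + u * u)
    w = mom / (g * u)
    return D / g, u, (w - D / g) * (GAMMA - 1.0) / GAMMA
```

In the actual codes the analytic quartic solution makes this recovery cheap enough to run on every grid cell at each half timestep.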
Reconnection takes place around the center. Figure 1a shows the $x$-component of the 4-velocity, $u_x=\gamma v_x$, at $t=100$
in units of the light transit time $c^{-1}$. Magnetic field lines are transported from the background regions,
cut and reconnected at the reconnection point, and then
ejected with the reconnection jets. One can see the bi-directional jets from the reconnection point and
magnetic islands (plasmoids) in front of the jets. The 4 -velocity of the reconnection jet is typically $u_x \sim 2 $,
which is Alfv\'{e}nic with respect to the upstream condition,
and it is $u_x \sim 3.5 $ at the local maximum right behind the plasmoid. Reconnection sets up an inflow speed of $v_z \sim 0.14 $-$17 $. Since the reconnection rate or a normalized form of the flux transfer speed
is an order of $\mathcal{R} \sim (v_{in}/v_{out}) \sim O(0.1 )$,
this is a fast reconnection. Later, the plasmoids reach and go through
the open boundaries at $x = \pm 120 $. Although there are minor reflections from the boundaries,
the system exhibits a quasisteady structure in the long term. Figure 1 b presents the late-time snapshot at $t=400 $. The reconnection jets are confined by slow-shock-like regions,
as indicated by the dashed lines in Fig. 1 b. One can see a quasisteady Petschek-type structure \cite{petschek}. Interestingly, our analysis reveals that
the Petschek outflow becomes narrower than the nonrelativistic counterpart,
as theoretically predicted by Lyubarsky \cite{lyu05 }. Meanwhile, reconnection proceeds faster,
especially in the ultrarelativistic, high-$\sigma$, regimes. It was also found that an out-of-plane magnetic field (guide field)
drastically changes the composition of the energy outflow. Without a guide field the main energy carrier is the plasma enthalpy flux, while
the Poynting flux carries the energy in the presence of a moderate guide field \citep{zeni09b}.
\section{Resistive MHD Simulations}
We use the following RRMHD equations \cite{naoyuki06, kom07 }
in Lorentz--Heaviside units with $c=1 $. \begin{eqnarray}
\partial_t (\gamma \rho) + \div (\rho \vec{u}) &=& 0, \\
\partial_t ( \gamma w\vec{u} + \vec{E}\times\vec{B} )
+ \div \Big( ( p + \frac{B^2 +E^2 }{2 } ) \vec{I}
+ w \vec{u}\vec{u}
- \vec{B}\vec{B} - \vec{E}\vec{E} \Big)
&=& 0, \\
\partial_t ( \gamma^2 w-p + \frac{ {B}^2 +{E}^2 }{2 } )
+ \div ( \gamma w\vec{u} + \vec{E}\times\vec{B} ) &=& 0, \\
\partial_t\vec{B} + \nabla \times \vec{E} = 0, ~~~~~~
\partial_t\vec{E} - \nabla \times \vec{B} = -\vec{j}, ~~~~~~
\partial_t{\rho_c} + \div \vec{j} &=& 0,
\\
\label{eq:ohm}
\gamma \Big( \vec{E} + \vec{v}\times\vec{B} - (\vec{E}\cdot\vec{v}) \vec{v} \Big)
= \eta ( \vec{j} - \rho_c \vec{v} )&&
\end{eqnarray}
Here, $\rho_c$ is the charge density. We developed three RRMHD codes \cite{naoyuki06, kom07, pal09}. Among them, the present simulations are carried out with the HLL-type code \cite{kom07}. The RRMHD schemes differ from nonrelativistic schemes in two ways. The first is the recovery of the relativistic primitive variables; we use the same quartic-equation solver as in the two-fluid model. The second involves the usage of Amp\`{e}re's law. In nonrelativistic MHD, we assume $\vec{j}\equiv\nabla \times \vec{B}$ and then
derive $\vec{E}$ from Ohm's law. In relativistic MHD, we use Amp\`{e}re's law to advance $\vec{E}$. Configurations are similar to those in the two-fluid case. We employ left--right symmetry to reduce the computational cost. We use a localized resistivity $\eta=\eta(x, z)$
in the relativistic Ohm's law (Eq.~\ref{eq:ohm}). The system evolves similarly in the two-fluid and MHD runs. In the present case, we can resolve sharper shock structures
because we employ a shock-capturing scheme. In the two-fluid model,
the system contains the Larmor radius or the inertial length
and so
it makes no sense to discuss shorter structures. We have no such concern in the scale-free RRMHD model. Figure 2 a
shows the $u_x$-profile in the developed stage. This is a Petschek-type reconnection geometry. The typical outflow is Alfv\'{e}nic, $u_x \sim 2 $. The jet hits the plasmoid at a fast shock (``FS'' in Fig. 2 a)
at the maximum 4 -velocity of $u_x \sim 2.75 $. \begin{figure}
\includegraphics[width=\columnwidth]{f2.pdf}
\caption{(Color online)
(a) The $x$-component of 4 -velocity ($u_x$) at the developed stage. The contour lines indicate the magnetic field lines. (b) The $u_x$-profile in the front side of the plasmoid. The domain is identical to the yellow box in Panel (a). (From run 1 in Ref. ~\cite{zeni10 b})
}
\end{figure}
The simulation reveals new shock structures:
postplasmoid vertical slow shocks,
forward vertical slow shocks,
and the shock-reflection structure. One can see the postplasmoid shocks at $x\sim 60 $,
as indicated by ``SS-1 '' in Fig. 2 a. The shock propagates in the $+x$-direction
at the front of the reverse plasma flow
($u_x<0 $; blue regions in Fig. 2 a). As the plasmoid moves,
it compresses the surrounding plasmas on both upper and lower sides,
and the cavity region also appears behind the plasmoid. The pressure gradient between those two regions drives
the reverse flow along the field lines
\cite{zeni10 b, zeni11 }. The right side is the shock upstream. One can also recognize similar shock-like structure
in the two-fluid run, as indicated by arrows in Fig. 1 a. We find another small shock outside the plasmoid
(``SS-2 '' in Fig. 2 a; see also Fig. 2 c in Ref. ~\cite{zeni10 b}). This is a relativistic version of
the forward vertical slow-shock \cite{zeni11 }. We further analyze the properties of these two shocks. In the shock normal direction \cite{son67 },
we evaluate the shock speed $v_{shock}$ and the MHD wave speeds
in the lab-frame \cite{keppens08 }. Key results are presented in Table \ref{table}. In both cases, on the right (upstream) sides,
the shocks travel faster than the outgoing slow-mode, but slower than the Alfv\'{e}n mode,
$v_{s+, R} < v_{shock} < c_{A+, R}$. This is the character of slow shocks. Finally,
we find the shock-reflection structure in the front side,
as indicated by the small box in Fig. 2 a. Figure 2 b shows the $u_x$-profile in the same domain. This one takes advantage of the current sheet configuration. Since there is initially a dense plasma near the center $z \sim 0 $,
plasma flows are bifurcated.
\begin{table}
\centering
\begin{tabular}{lcccccc}
\hline
Shock &
$(x, z)$ &
$(n_x, n_z)$ &
$v_{shock}$ &
$v_{s+, L}$ &
$v_{s+, R}$ &
$c_{A+, R}$
\\
\hline
SS-1 &
(61.5, 5.0 ) &
(1.00, -0.08 ) &
0.41 &
0.50 &
0.29 &
0.80
\\
SS-2 &
(105.9, 8.2 ) &
(0.90,0.43 ) &
0.62 &
0.66 &
0.56 &
0.80
\\
\hline
\end{tabular}
\caption{Selected shock properties. }
\footnotetext{
The shock normal vector $\vec{\hat{n}}$,
the shock velocity $v_{shock}$,
the slow-mode, and the Alfv\'{e}n wave speeds
on the left ($L$) and right ($R$) sides of the shock.
All velocities are evaluated
in the lab-frame
in the $\vec{\hat{n}}$-direction.
}
\label{table}
\end{table}
\section{Discussion}
These simulations have revealed new insights into
the fluid-scale properties of relativistic reconnection.
The relativistic Petschek reconnection is fast,
it features an Alfv\'{e}nic outflow jet, and
the outflow exhaust becomes narrower and narrower
as the magnetization parameter $\sigma$ increases.
The overall results confirm Lyubarsky's theory \cite{lyu05 }.
In our simulations, $\sigma$ ranges up to ${\sim}10 $.
Extended parameter surveys and many basic issues need to be investigated.
The simulations exhibit new shock structures,
especially in the RRMHD model.
We think that they appear when the Alfv\'{e}nic reconnection jet exceeds
the sound speed \cite{zeni10b, zeni11}.
Comparing the outflow speed $\approx c_{A, up}=[\sigma/(1+\sigma)]^{1/2}$ with
the relativistic upper limit of the sound speed, $1/\sqrt{3}$,
the jet is supersonic when $\sigma/(1+\sigma) > 1/3$, so that
a sufficient condition for a supersonic reconnection jet is
\begin{equation}
\sigma > 1/2.
\end{equation}
Therefore we expect various MHD shocks in the reconnection system
in high-$\sigma$ regimes.
At present, the most important issue is the effective resistivity,
which crucially controls the system evolution.
In this work, we have employed a spatially-localized resistivity for our main results.
It is known that such resistivity leads to a Petschek-type fast reconnection
in the nonrelativistic MHD studies \cite{ugai77 }.
Meanwhile, other resistivity models give significantly different pictures:
a turbulent outflow with secondary islands \cite{zeni09 a},
a slow Sweet--Parker reconnection and so on \cite{zeni10 b}.
We need to find a practical resistivity model
to reproduce realistic reconnection,
by referring to the kinetic results \cite{zeni07 }.
Of course a true resistivity is desirable, but
this is a long-standing problem in reconnection physics.
The RRMHD model has another problem to overcome.
The standard equations are very stiff
when the magnetic Reynolds number $S \sim 1 /\eta$ is high.
Implicit-type schemes \cite{pal09, dumbser09 }
have been recently developed to explore high-$S$ regimes
and
applications to Sweet--Parker reconnection are in progress \cite{takahashi10 }.
The development of the two-fluid and RRMHD models will benefit
the large-scale modeling of relativistic MHD instabilities \cite{ober09, ober10 }.
Even in ideal MHD studies,
the late-time evolution often exhibits
potential reconnection sites with antiparallel magnetic fields
or (numerical) reconnection.
Reconnection with a physical resistivity will be a basic element of
these important problems.
Our results also have implications for space physics.
Inspired by these results,
we carried out nonrelativistic MHD simulations and
found the same shock structures in low-beta plasmas \cite{zeni11}.
In other words, these structures are ubiquitous
in both relativistic and nonrelativistic reconnection.
In the solar corona,
the fast-shock-reflection structure at the plasmoid front
may be related to energetic particle acceleration.
\begin{theacknowledgments}
The authors acknowledge valuable discussions with
T. Miyoshi, Y. Mizuno, H. ~R. Takahashi, and A. ~F. Vinas.
This research was supported by the NASA Center for Computational Sciences.
One of the authors (S. Z. ) gratefully acknowledges support from
NASA's postdoctoral program and
JSPS Fellowship for Research Abroad.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}\label{sec:intro}
\citet{baker2016} created a novel method to measure Economic Policy
Uncertainty, the EPU index,
which has attracted significant attention and spawned a strand of literature
since its proposal.
However, it entails a carefully designed framework and significant manual effort
to complete its calculation.
Recently, there has been significant progress in the methodology
of the generation process of EPU\@, e.g.\
differentiating contexts for uncertainty~\citep{saltzman2018},
generating index based on Google Trend~\citep{castelnuovo2017},
and correcting EPU for Spain~\citep{ghirelli2019}.
I wish to extend the scope of index-generation
by proposing this generalized method,
namely the Wasserstein Index Generation model (WIG).
This model (WIG)
incorporates several methods that are widely used in machine learning:
word embedding~\citep{mikolov2013},
Wasserstein Dictionary Learning~\citep[WDL]{schmitz2018},
the Adam algorithm~\citep{kingma2015},
and Singular Value Decomposition (SVD).
The common idea behind these methods is dimension reduction.
Indeed, WDL reduces the dimension of the dataset into its bases
and associated weights, and SVD could shrink the dimension of
bases once again to produce unidimensional indices for further
analysis.
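The SVD step can be sketched as follows (synthetic shapes; the precise reduction WIG applies may differ from this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 5, 60                       # K topics, M time periods (e.g. months)
Lam = rng.random((K, M))
Lam /= Lam.sum(axis=0)             # columns are simplex weights, one per period

# Collapse the K-dimensional weights to a single time series via the
# leading singular component (an illustrative choice of reduction).
U, s, Vt = np.linalg.svd(Lam, full_matrices=False)
index = s[0] * Vt[0]               # unidimensional index of length M
```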
I test WIG’s effectiveness in generating the
Economic Policy Uncertainty index~\citep[EPU]{baker2016},
and compare the result against existing ones~\citep{azqueta-gavaldon2017},
generated by the auto-labeling Latent Dirichlet Allocation
\citep[LDA]{blei2003} method.
Results reveal that
this model requires a much smaller dataset to achieve better results,
without human intervention.
Thus, it can also be applied to generate other time-series
indices from news headlines in a faster and more efficient manner.
Recently, \cite{shiller2017} has called for more attention to collecting
and analyzing text data of economic interest.
The WIG model responds to
this call by generating time-series sentiment
indices from text with machine-learning algorithms.
\section{Methods and Material}
\subsection{Wasserstein Index Generation Model}\label{subsec:wig}
\citet{schmitz2018} proposes an unsupervised machine learning technique to
cluster documents into topics, called the Wasserstein Dictionary Learning (WDL),
wherein both documents and topics are considered as discrete
distributions of vocabulary.
These discrete distributions can be reduced to bases and corresponding
weights that capture most of the information in the dataset and thus
shrink its dimension.
Consider a corpus with \(M\) documents and a vocabulary of \(N\) words.
These documents form a matrix of
\(Y=\left[y_{m}\right] \in \mathbb{R}^{N \times M}\),
where \(m \in\left\{1, \dots, M\right\}\),
and each \(y_{m} \in \Sigma^{N}\).
Each document is thus a discrete distribution,
which lies in an \(N\)-dimensional simplex.
Our aim is to represent and reconstruct these documents from
topics \(T \in \mathbb{R}^{N \times K}\), with associated weights
\(\Lambda \in \mathbb{R}^{K \times M}\), where
\(K\) is the total number of topics to be clustered.
Note that each topic is a distribution of vocabulary,
and each weight represents its associated document as a weighted barycenter
of underlying topics.
We can also obtain a distance matrix
\(C \in \mathbb{R}^{N \times N}\) for the vocabulary by first generating word embeddings
and then measuring pairwise word distances with a metric function,
that is, \(C_{ij} = d^2(x_i, x_j)\), where
\(x \in \mathbb{R}^{N \times D}\),
\(d(\boldsymbol{\cdot})\) is the Euclidean distance,
and \(D\) is the embedding depth.
\footnote{\cite{saltzman2018}
proposes differentiating the use of ``uncertainty'' in both
positive and negative contexts.
In fact, word embedding methods, for example Word2Vec \citep{mikolov2013},
can do more. They consider not only the positive
and negative context for a given word, but
all possible contexts for all words.
}
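The construction of \(C\) from the embeddings can be sketched as follows (assuming NumPy; `cost_matrix` is our illustrative name):

```python
import numpy as np

def cost_matrix(X):
    """C_ij = d^2(x_i, x_j): pairwise squared Euclidean distances between
    the rows of the N x D embedding matrix X."""
    sq = (X * X).sum(axis=1)
    C = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.maximum(C, 0.0)  # clip tiny negatives caused by round-off
```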
Further, we could calculate the distances between documents and
topics, namely the Sinkhorn distance.
It is essentially a \(2\)-Wasserstein distance,
with the addition of an entropic regularization
term to ensure faster computation.
\footnote{One could refer to \cite{cuturi2013} for the Sinkhorn algorithm
and \cite{villani2003} for the theoretic results in optimal transport.}
\begin{definition}[Sinkhorn Distance]
Given \(\mu, \nu \in \mathscr{P}(\Omega)\), where
\(\mathscr{P}(\Omega)\) is the set of Borel probability measures on \(\Omega\),
\(\Omega \subset \mathbb{R}^{N}\), and \(C\) is the cost matrix,
\begin{equation}\label{def:sinkhorn}
\begin{aligned}
S_{\varepsilon} (\mu, \nu; C) & := \min_{\pi \in
\Pi(\mu, \nu)} \langle\pi , C\rangle + \varepsilon \mathcal{H}(\pi) \\
s.t.\ \Pi(\mu, \nu) & :=\left\{\pi \in \mathbb{R}_{+}^{N
\times N}, \pi \mathds{1}_{N}=\mu, \pi^{\top}
\mathds{1}_{N}=\nu\right\},
\end{aligned}
\end{equation}
where \(\mathcal{H}(\pi) := \langle\pi,\log(\pi)\rangle\)
and \(\varepsilon\) is the Sinkhorn weight.
\end{definition}
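The Sinkhorn distance can be approximated by the standard alternating scaling iterations on the Gibbs kernel (a minimal sketch with a fixed iteration count; the returned value is the transport cost without the entropic term):

```python
import numpy as np

def sinkhorn_cost(mu, nu, C, eps=0.05, n_iter=500):
    """Approximate the transport cost of the entropically regularized
    problem via Sinkhorn scaling on the Gibbs kernel K = exp(-C/eps)."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)             # match the second marginal
        u = mu / (K @ v)               # match the first marginal
    pi = u[:, None] * K * v[None, :]   # transport plan
    return float((pi * C).sum())
```

With a small \(\varepsilon\) the result is close to the unregularized optimal transport cost; larger \(\varepsilon\) trades accuracy for faster, more stable iterations.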
Given the distance function for a single document,
we could set up the loss function for the training process:
\begin{equation}\label{eq:lossfcn}
\begin{aligned}
& \min_{R, A} \sum_{m=1}^{M} \mathcal{L}\left(y_m, y_{S_{\varepsilon}}
\left(T(R), \lambda_m(A) ; C, \varepsilon\right)\right), \\
& \text{given}~~t_{nk}(R) := \frac{e^{r_{nk}}}{\sum_{n'} e^{r_{n'k}}},~~
\lambda_{km}(A) := \frac{e^{a_{km}}}{\sum_{k'} e^{a_{k'm}}}.
\end{aligned}
\end{equation}
In Equation~\ref{eq:lossfcn},
\(y_{S_{\varepsilon}}\left(\boldsymbol{\cdot}\right)\)
is the reconstructed document given topics \(T\) and weights \(\lambda\)
under the Sinkhorn distance (Equation~\ref{def:sinkhorn}).
Moreover, the constraint that \(T\) and \(\Lambda\) be distributions,
as required in Equation~\ref{def:sinkhorn}, is automatically
fulfilled by the column-wise \textit{Softmax} operation in the loss function.
The process is formulated in Algorithm~\ref{alg:wdl},
where we first initialize matrices \(R\) and \(A\) with random samples
from a standard normal distribution and apply \textit{Softmax}
to them to obtain \(T\) and \(\Lambda\).
\(\nabla_{T}\mathcal{L(\boldsymbol{\cdot}~;\varepsilon)}\) and
\(\nabla_{\Lambda}\mathcal{L(\boldsymbol{\cdot}~;\varepsilon)}\)
are the gradients taken from the loss function with respect to
topics \(T\) and weights \(\Lambda\).
The parameters \(R\) and \(A\) are then optimized
by the Adam optimizer with the gradient at hand and learning rate \(\rho\).
The \textit{Softmax} operation is applied again to keep the parameters
on the unit simplex (as shown in Equation~\ref{eq:lossfcn}).
\begin{algorithm}
\caption{Wasserstein Index Generation\protect}
\begin{algorithmic}[1]
\REQUIRE Word distribution matrix \(Y\). Batch size \(s\).\\
Sinkhorn weight \(\varepsilon\). Adam Learning rate \(\rho\).
\ENSURE Topics \(T\), weights \(\Lambda\).
\STATE Initialize \(R, A \sim \mathcal{N}(0,1)\).
\STATE \(T \leftarrow Softmax(R)\), \(\Lambda \leftarrow Softmax(A)\).
\FOR{Each batch of documents}
\STATE
\(R \leftarrow R - Adam\left(\nabla_{T}
\mathcal{L(\boldsymbol{\cdot}~;\varepsilon)};~\rho\right)\),\\
\(A \leftarrow A - Adam\left(\nabla_{\Lambda}
\mathcal{L(\boldsymbol{\cdot}~;\varepsilon)};~\rho\right)\).
\STATE \(T \leftarrow Softmax\left(R\right)\),
\(\Lambda \leftarrow Softmax\left(A\right)\).
\ENDFOR
\end{algorithmic}
\label{alg:wdl}
\end{algorithm}
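The column-wise \textit{Softmax} re-parameterization used in Algorithm~\ref{alg:wdl} can be sketched as follows (the dimensions below are illustrative, not those of the actual corpus):

```python
import numpy as np

def colwise_softmax(R):
    """Column-wise Softmax: each column becomes a point on the unit simplex."""
    E = np.exp(R - R.max(axis=0, keepdims=True))  # subtract max for stability
    return E / E.sum(axis=0, keepdims=True)

# Illustrative sizes: N = 6 vocabulary words, K = 3 topics, M = 4 documents.
rng = np.random.default_rng(0)
R = rng.standard_normal((6, 3))   # parameters for topics T (N x K)
A = rng.standard_normal((3, 4))   # parameters for weights Lambda (K x M)
T, Lam = colwise_softmax(R), colwise_softmax(A)
```

Because the optimizer updates the unconstrained `R` and `A`, the simplex constraints on `T` and `Lam` never have to be enforced explicitly.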
Next, we generate the time-series index.
By applying Singular Value Decomposition (SVD) with one component,
we can shrink the vocabulary dimension
from \(T^{N \times K}\) to \(\widehat T^{1 \times K}\).
Next, we multiply \(\widehat T\) by \(\Lambda^{K \times M}\)
to get \(Ind^{1 \times M}\), which is the document-wise score given
by SVD\@.
Adding up these scores by month and scaling the index to get a mean of
100 and unit standard deviation, we obtain the final index.
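This index-generation step can be sketched as follows, assuming the leading right-singular vector of \(T\) serves as \(\widehat T\) (synthetic inputs; the monthly aggregation step is omitted):

```python
import numpy as np

def wig_index(T, Lam):
    """Collapse topics to scores with a rank-one SVD, then score documents.

    T is N x K (topics over the vocabulary), Lam is K x M (document
    weights). The leading right-singular vector of T plays the role of
    T_hat (1 x K); Ind = T_hat @ Lam gives the document-wise scores.
    """
    _, _, Vt = np.linalg.svd(T, full_matrices=False)
    T_hat = Vt[0]                      # 1 x K summary of the topics
    ind = T_hat @ Lam                  # 1 x M document-wise scores
    # scale as in the text: mean 100 and unit standard deviation
    return 100.0 + (ind - ind.mean()) / ind.std()
```

The sign of a singular vector is arbitrary, which is harmless here since the final scaling recenters the scores.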
\subsection{Data and Computation}
I collected data from \textit{The New York Times}
comprising news headlines from Jan.~1, 1980 to Dec.~31, 2018.
The corpus contained 11,934 documents and 8,802 unique tokens.
\footnote{
Plots given in Figure~\ref{results}, however, are from
Jan.~1, 1985 to Aug.~31, 2016 for maintaining the same range to be compared
with that from~\cite{azqueta-gavaldon2017}.}
Next, I preprocess the corpus for the training process, for example,
by removing special symbols, combining entities,
and lemmatizing each token.
\footnote{Lemmatization refers to the process of converting each word to
its dictionary form according to its context.}
Given this lemmatized corpus, I use Word2Vec to generate embedding
vectors for the entire dictionary and can thus calculate the distance
matrix \(C\) for any pair of words.
To calculate the gradient (as shown in Algorithm~\ref{alg:wdl}),
I choose the automatic differentiation library,
PyTorch \citep{paszke2017}, to perform differentiation of the loss function
and then update the parameters using the Adam algorithm \citep{kingma2015}.
To determine several important hyper-parameters, I use cross validation,
as is common practice in machine learning.
One-third of the documents are set for testing data and the rest are used for
the training process:
Embedding depth \(D = 10\),
Sinkhorn weight \(\varepsilon = 0.1\),
batch size \(s = 64\),
topics \(K = 4\),
and Adam learning rate \(\rho = 0.005\).
Once the parameters are set at their optimal values,
the entire dataset is used for training, and thus, the topics
\(T\) and their associated weights \(\Lambda\) are obtained.
\section{Results}\label{results}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{newsvdplot_anno.png}
\caption{
Original EPU~\protect\citep{baker2016},
EPU with LDA~\protect\citep{azqueta-gavaldon2017},
and EPU with WIG in Sec.~\protect\ref{subsec:wig}.
}
\label{fig:epu}
\end{figure}
As shown in Figure~\ref{fig:epu}, the EPU index generated by the WIG model
clearly resembles the original EPU\@.
Moreover, the WIG detects the emotional spikes better than LDA,
especially during major geopolitical events, such as
``Gulf War I,'' ``Bush Election,'' ``9/11,''
``Gulf War II,'' and so on.
For comparison, I calculated the cumulative difference between the original EPU
and the indices generated by WIG and LDA, respectively
(Figure~\ref{fig:cumsumdiff}, \ref{appen}).
Results indicate that the WIG model slightly outperforms LDA.
To further examine this point, I apply the Hodrick–Prescott filter\footnote{
The HP filter was applied with a smoothing parameter of 129,600, the standard choice for monthly data.
}
to three EPU indices,
and calculate the
Pearson's and Spearman's correlation factors between
the raw series, cycle components, and trend components,
as shown in Table~\ref{tab:correlation}, \ref{appen}.
These tests also suggest that the series generated by WIG
captures the EPU's behavior better than LDA over this three-decade period.
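The trend/cycle decomposition used in this comparison can be sketched directly from the HP filter's penalized least-squares definition (a minimal dense implementation for illustration; production code would use a sparse solver):

```python
import numpy as np

def hp_filter(y, lamb=129600.0):
    """Hodrick-Prescott filter: split series y into trend and cycle.

    Solves (I + lamb * D'D) tau = y, where D is the second-difference
    operator; lamb = 129600 is the common choice for monthly data.
    """
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    return trend, y - trend            # (trend, cycle)
```

A purely linear series has zero second differences, so the filter returns it unchanged as trend, which is a useful sanity check.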
Moreover, this method only requires a small dataset compared with LDA\@.
The dataset used in this article contains only news headlines, and
the dimensionality of the dictionary is only a small fraction of
that of the LDA method. The WIG model takes only half an hour of computation
and still produces similar results.\footnote{
A comparison of the datasets is in Table~\ref{tab:comparison}, \ref{appen}.
}
Further, it extends the scope of automation in the generation process.
Previously, LDA was considered an automatic-labeling method,
but it continues to require human interpretation
of topic terms to produce time-series
indices. By introducing SVD, we could eliminate this requirement
and generate the index automatically as a black-box method.
However, it by no means loses its interpretability. The key terms are still
retrievable, given the result of WDL, if one wishes to view them.
Last, given its advantages,
the WIG model is not restricted to generating EPU, but could potentially
be used on any dataset regarding a certain topic
whose time-series sentiment index
is of economic interest. The only requirement is that the input corpus
be related to that topic, but this is easily satisfied.
\section{Conclusions}
I proposed a novel method to generate time-series indices of economic interest
using unsupervised machine learning techniques.
This could be applied as a black-box method, requiring only a small dataset,
and is applicable to the generation of any time-series index.
This method incorporates deeper methods from machine learning research,
including word embedding, Wasserstein Dictionary Learning,
and the widely used Adam algorithm.
\section*{Acknowledgements}
I am grateful to Alfred Galichon for launching this project and
to Andr\'es Azqueta-Gavald\'on for kindly providing his EPU data.
I would also like to express my gratitude to referees at the
3rd Workshop on Mechanism Design for Social Good (MD4SG~'19)
at the ACM Conference on Economics and Computation (EC~'19)
and the participants at the Federated Computing Research Conference (FCRC 2019)
for their helpful remarks and discussions.
I also appreciate the helpful suggestions from the anonymous referee.
This research did not receive any specific grant from funding agencies
in the public, commercial, or not-for-profit sectors.
\section{Introduction}
\label{irl::intro}
Approximately 350,000 Americans suffer from serious spinal cord injuries (SCI), resulting in loss of
some voluntary motion control. Recently, epidural and transcutaneous spinal stimulation have proven
to be promising methods for regaining motor function. To find the optimal stimulation signal, it is
necessary to quantitatively measure the effects of different stimulations on a patient. Since motor
function is our concern, we mainly study the effects of stimulations on patient motion, represented
by a sequence of poses captured by motion sensors. One typical experiment setting is shown in Figure
\ref{irl:game}, where a patient moves to follow a physician's instructions, and a sensor records the
patient's center-of-pressure (COP) continuously. This study will assist our design of stimulating
signals, as well as advance our understanding of patient motion with spinal cord injuries. \begin{figure}
\centering
\subfloat[A patient sitting on a sensing device\label{irl::game1}]{
\includegraphics[width=0.2\textwidth]{game.eps}
}
\qquad
\subfloat[Instructions on movement directions\label{irl::game2}]{
\includegraphics[width=0.2\textwidth]{instructions.eps}
}
~
\subfloat[The patient's COP trajectories during the movement\label{irl::game3}]{
\includegraphics[width=0.2\textwidth]{gametrajectory.eps}
}
\caption{Rehabilitative game and observed trajectories: in Figure \ref{irl::game1}, the patient
sits on a sensing device, and then moves to follow the instructed directions in Figure
\ref{irl::game2}. Figure \ref{irl::game3} shows the patient's center-of-pressure (COP) during
the movements. }
\label{irl:game}
\end{figure}
We assume the stimulating signals will alter the patient's initial preferences over poses,
determined by body weight distribution, spinal cord injuries, gravity, etc., and an accurate
estimation of the preference changes will reveal the effect of spinal stimulations on spinal cord
injuries, as other factors are assumed to be invariant to the stimulations. To estimate the
patient's preferences over different poses, the most straightforward approach is counting the pose
visiting frequencies from the motion, assuming that the preferred poses are more likely to be
visited. However, the patient may visit an undesired pose to follow the instructions or to change
into a subsequently preferred pose, making preference estimation inaccurate without regard to the
context. In this work, we formulate the patient's motion as a Markov Decision Process, where each state
represents a pose, and its reward value encodes all the immediate factors motivating the patient to
visit this state, including the pose preferences and the physician's instructions. With this
formulation, we adopt inverse reinforcement learning (IRL) algorithms to estimate the reward value
of each state from the observed motion of the patient. Existing solutions of the IRL problem mainly work on small-scale problems, by collecting a set of
observations for reward estimation and using the estimated reward afterwards. For example, the
methods in \cite{irl::irl1, irl::irl2, irl::subgradient} estimate the agent's policy from a set of
observations, and estimate a reward function that leads to the policy. The method in
\cite{irl::maxentropy} collects a set of trajectories of the agent, and estimates a reward function
that maximizes the likelihood of the trajectories. However, the state space of human motion is huge
for non-trivial analysis, and these methods cannot handle it well due to the reinforcement learning
problem in each iteration of reward estimation. Several methods \cite{irl::guidedirl, irl::relative}
solve the problem by approximating the reinforcement learning step, at the expense of a
theoretically sub-optimal solution. The problem can be simplified under the condition that the transition model and the action set
remain unchanged for the subject, so that each reward function leads to a unique optimal value
function. Based on this assumption, we propose a function approximation method that learns the
reward function and the optimal value function, but without the computationally expensive
reinforcement learning steps, thus it can be scaled to a large state space. We find that this
framework can also extend many existing methods to high-dimensional state spaces. The paper is organized as follows. We review existing work on inverse reinforcement learning in
Section \ref{irl::related}, and formulate the function approximation inverse reinforcement learning
method for large state spaces in Section \ref{irl::largeirl}. A simulated experiment and a clinical
experiment are shown in Section \ref{irl::experiments}, with conclusions in Section
\ref{irl::conclusions}. \section{Related Works}
\label{irl::related}
The idea of inverse optimal control was proposed by Kalman \cite{irl::kalman}, while the inverse
reinforcement learning problem was first formulated in \cite{irl::irl1}, where the agent observes
the states resulting from an assumingly optimal policy, and tries to learn a reward function that
makes the policy better than all alternatives. Since the goal can be achieved by multiple reward
functions, this paper tries to find one that maximizes the difference between the observed policy
and the second best policy. This idea is extended by \cite{irl::maxmargin}, in the name of
max-margin learning for inverse optimal control. Another extension is proposed in \cite{irl::irl2},
where the purpose is not to recover the real reward function, but to find a reward function that
leads to a policy equivalent to the observed one, measured by the amount of rewards collected by
following that policy. Since a motion policy may be difficult to estimate from observations, a behavior-based method is
proposed in \cite{irl::maxentropy}, which models the distribution of behaviors as a maximum-entropy
model on the amount of reward collected from each behavior. This model has many applications and
extensions. For example, \cite{irl::sequence} considers a sequence of changing reward functions
instead of a single reward function. \cite{irl::gaussianirl} and \cite{irl::guidedirl} consider
complex reward functions, instead of linear ones, and use Gaussian processes and neural networks,
respectively, to model the reward function. \cite{irl::pomdp} considers complex environments,
instead of a well-observed Markov Decision Process, and combines partially observed Markov Decision
Process with reward learning. \cite{irl::localirl} models the behaviors based on the local
optimality of a behavior, instead of the summation of rewards. \cite{irl::deepirl} uses a
multi-layer neural network to represent nonlinear reward functions. Another method is proposed in \cite{irl::bayirl}, which models the probability of a behavior as the
product of each state-action's probability, and learns the reward function via maximum a posteriori
estimation. However, due to the complex relation between the reward function and the behavior
distribution, the author uses computationally expensive Monte-Carlo methods to sample the
distribution. This work is extended by \cite{irl::subgradient}, which uses sub-gradient methods to
simplify the problem. Another extension is shown in \cite{irl::bayioc}, which tries to find a
reward function that matches the observed behavior. For motions involving multiple tasks and varying
reward functions, methods are developed in \cite{irl::multirl1} and \cite{irl::multirl2}, which try
to learn multiple reward functions. Most of these methods need to solve a reinforcement learning problem in each step of reward
learning, thus practical large-scale application is computationally infeasible. Several methods are
applicable to large-scale applications. The method in \cite{irl::irl1} uses a linear approximation
of the value function, but it requires a set of manually defined basis functions. The methods in
\cite{irl::guidedirl, irl::relative} update the reward function parameter by minimizing the relative
entropy between the observed trajectories and a set of sampled trajectories based on the reward
function, but they require a set of manually segmented trajectories of human motion, where the
choice of trajectory length will affect the result. Besides, these methods solve large-scale
problems by approximating the Bellman Optimality Equation, thus the learned reward function and Q
function are only approximately optimal. We propose an approximation method that guarantees the
optimality of the learned functions as well as the scalability to large state space problems. \section{Function Approximation Inverse Reinforcement Learning}
\label{irl::largeirl}
\subsection{Markov Decision Process}
A Markov Decision Process is described with the following variables:
\begin{itemize}
\item $S=\{s\}$, a set of states
\item $A=\{a\}$, a set of actions
\item $P_{ss'}^a$, a state transition function that defines the probability that state $s$ becomes
$s'$ after action $a$.
\item $R=\{r(s)\}$, a reward function that defines the immediate reward of state $s$.
\item $\gamma$, a discount factor that ensures the convergence of the MDP over an infinite
horizon.
\end{itemize}
A motion can be represented as a sequence of state-action pairs:
\[\zeta=\{(s_i, a_i)|i=0, \cdots, N_\zeta\}, \]
where $N_\zeta$ denotes the length of the motion, varying in different observations. Given the
observed sequence, inverse reinforcement learning algorithms try to recover a reward function that
explains the motion. One key problem is how to model the action in each state, or the policy, $\pi(s)\in A$, a mapping
from states to actions. This problem can be handled by reinforcement learning algorithms, by
introducing the value function $V(s)$ and the Q-function $Q(s, a)$, described by the Bellman Equation
\cite{irl::rl}:
\begin{align}
&V^\pi(s)=\sum_{s'|s, \pi(s)}P_{ss'}^{\pi(s)}[r(s')+\gamma*V^\pi(s')], \\
&Q^\pi(s, a)=\sum_{s'|s, a}P_{ss'}^a[r(s')+\gamma*V^\pi(s')],
\end{align}
where $V^\pi$ and $Q^\pi$ define the value function and the Q-function under a policy $\pi$. For an optimal policy $\pi^*$, the value function and the Q-function should be maximized on every
state. This is described by the Bellman Optimality Equation \cite{irl::rl}:
\begin{align}
&V^*(s)=\max_{a\in A}\sum_{s'|s, a}P_{ss'}^a[r(s')+\gamma*V^*(s')], \\
&Q^*(s, a)=\sum_{s'|s, a}P_{ss'}^a[r(s')+\gamma*\max_{a'\in A}Q^*(s', a')]. \end{align}
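The Bellman Optimality Equation above can be solved by standard value iteration; a minimal sketch, assuming the transition model is stored as a tensor with convention `P[a, s, s']` (the two-state example in the test is made up):

```python
import numpy as np

def value_iteration(P, r, gamma=0.9, tol=1e-8):
    """Solve V*(s) = max_a sum_s' P[a,s,s'] (r(s') + gamma V*(s')).

    P has shape (|A|, |S|, |S|); r has shape (|S|,).
    Returns the optimal value function V* and Q-function Q*.
    """
    V = np.zeros(len(r))
    while True:
        # Q[s, a] = sum_s' P[a, s, s'] * (r(s') + gamma * V(s'))
        Q = np.einsum('ast,t->sa', P, r + gamma * V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q
        V = V_new
```

Each reward update in a typical IRL algorithm would re-run this fixed-point loop, which is exactly the cost the paper's approximation avoids.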
In typical inverse reinforcement learning algorithms, the Bellman Optimality Equation needs to be
solved once for each parameter updating of the reward function, thus it is computationally
infeasible when the state space is large. While several existing approaches solve the problem at the
expense of the optimality, we propose an approximation method to avoid the problem. \subsection{Function Approximation Framework}
Given the set of actions and the transition probability, a reward function leads to a unique optimal
value function. To learn the reward function from the observed motion, instead of directly
learning the reward function, we use a parameterized function, named as \textit{VR function}, to
represent the summation of the reward function and the discounted optimal value function:
\begin{equation}
f(s, \theta)=r(s)+\gamma*V^*(s),
\label{equation:approxrewardvalue}
\end{equation}
where $\theta$ denotes the parameter of \textit{VR function}. The function value of a state is named
as \textit{VR value}. Substituting Equation \eqref{equation:approxrewardvalue} into Bellman Optimality Equation, the
optimal Q function is given as:
\begin{equation}
Q^*(s, a)=\sum_{s'|s, a}P_{ss'}^af(s', \theta),
\label{equation:approxQ}
\end{equation}
the optimal value function is given as:
\begin{align}
V^*(s)&=\max_{a\in A}Q^*(s, a)\nonumber\\
&=\max_{a\in A}\sum_{s'|s, a}P_{ss'}^af(s', \theta),
\label{equation:approxV}
\end{align}
and the reward function can be computed as:
\begin{align}
r(s)&=f(s, \theta)-\gamma*V^*(s)\nonumber\\
&=f(s, \theta)-\gamma*\max_{a\in A}\sum_{s'|s, a}P_{ss'}^af(s', \theta). \label{equation:approxR}
\end{align}
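Given any \textit{VR function} values $f(s,\theta)$, Equations \eqref{equation:approxQ}--\eqref{equation:approxR} construct $Q^*$, $V^*$, and $r$ with no reinforcement-learning loop; a minimal sketch, reusing the `P[a, s, s']` tensor convention (numbers in the test are made up):

```python
import numpy as np

def from_vr(P, f, gamma=0.9):
    """Construct Q*, V*, and r from VR values f(s) = r(s) + gamma V*(s).

    P has shape (|A|, |S|, |S|); f has shape (|S|,).
    """
    Q = np.einsum('ast,t->sa', P, f)   # Q*(s,a) = sum_s' P f(s')
    V = Q.max(axis=1)                  # V*(s)   = max_a Q*(s,a)
    r = f - gamma * V                  # r(s)    = f(s) - gamma V*(s)
    return Q, V, r
```

By construction $f = r + \gamma V^*$ holds for any $f$ and $\theta$, so the triple always satisfies the Bellman Optimality Equation, as stated in the text.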
This approximation method is related to value function approximation methods in reinforcement
learning, but the proposed method can compute the reward function without solving a set of linear
equations in stochastic environments. Note that this formulation can be generalized to other extensions of Bellman Optimality Equation by
replacing the $\max$ operator with other types of Bellman backup operators. For example,
$V^*(s)=\log\sum_{a\in A}\exp Q^*(s, a)$ is used in the maximum-entropy method \cite{irl::maxentropy};
$V^*(s)=\frac{1}{k}\log\sum_{a\in A}\exp(k*Q^*(s, a))$ is used in Bellman Gradient Iteration
\cite{irl::BGI}. For any \textit{VR function} $f$ and any parameter $\theta$, the optimal Q function $Q^*(s, a)$,
optimal value function $V^*(s)$, and reward function $r(s)$ constructed with Equation
\eqref{equation:approxQ}, \eqref{equation:approxV}, and \eqref{equation:approxR} always meet the
Bellman Optimality Equation. Under this condition, we try to recover a parameterized function
$f(s, \theta)$ that best explains the observed motion $\zeta$ based on a predefined motion model. Combined with different Bellman backup operators, this formulation can extend many existing methods
to high-dimensional spaces, like the motion model based on the value function in
\cite{irl::motionvalue}, $p(a|s)=-v(s)-\log\sum_k p_{s, k}\exp(-v(k))$, the reward function in
\cite{irl::maxentropy}, $p(a|s)=\exp{Q(s, a)-V(s)}$, and the Q function in \cite{irl::bayirl}. The
main limitation is the assumption of a known transition model $P_{ss'}^a$, but it only requires a
partial model on the experienced states rather than a full environment model, and it can be learned
independently in an unsupervised way. To demonstrate the usage of the framework, this work chooses $max$ as the Bellman backup operator
and a motion model $p(a|s)$ based on the optimal Q function $Q^*(s, a)$ \cite{irl::bayirl}:
\begin{equation}
P(a|s)=\frac{\exp{b*Q^*(s, a)}}{\sum_{\tilde{a}\in
A}\exp{b*Q^*(s, \tilde{a})}},
\label{equation:motionmodel}
\end{equation}
where $b$ is a parameter controlling the degree of confidence in the agent's ability to choose
actions based on Q values. In the remaining sections, we use $Q(s, a)$ to denote the optimal Q values
for simplified notations.
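The motion model of Equation \eqref{equation:motionmodel} is a Boltzmann distribution over Q values; a minimal sketch (the Q table in the test is made up):

```python
import numpy as np

def motion_model(Q, b=1.0):
    """P(a|s) = exp(b Q(s,a)) / sum_a' exp(b Q(s,a')), Q of shape (|S|, |A|)."""
    E = np.exp(b * (Q - Q.max(axis=1, keepdims=True)))  # stabilized softmax
    return E / E.sum(axis=1, keepdims=True)
```

A larger confidence parameter `b` makes the agent's action choice more deterministic, while `b = 0` yields uniformly random actions.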
The log-likelihood of the observed motion $\zeta$ under this motion model is:
\begin{align}
L_{nn}(\theta)&=\log{P(\zeta|\theta)}\nonumber\\
&=\log{\prod_{(s, a)\in \zeta} P(a|\theta;s)}\nonumber\\
&=\log{\prod_{(s, a)\in \zeta} \frac{\exp{b*Q^*(s, a)}}{\sum_{\hat{a}\in A}\exp{b*Q^*(s, \hat{a})}}
}\nonumber\\
&=\sum_{(s, a)\in\zeta}(b*Q(s, a)-\log{\sum_{\hat{a}\in
A}\exp{b*Q(s, \hat{a}))}},
\label{equation:loglikelihood}
\end{align}
and the gradient of the log-likelihood is given by:
\begin{align}
\nabla_\theta L_{nn}(\theta)&=\sum_{(s, a)\in\zeta}(b*\nabla_\theta Q(s, a)\nonumber\\
&-b*\sum_{\hat{a}\in
A}P((s, \hat{a})|r(\theta))\nabla_\theta Q(s, \hat{a})). \label{equation:loglikelihoodgradient}
\end{align}
With a differentiable approximation function,
\[\nabla_\theta Q(s, a)=\sum_{s'|s, a}P_{ss'}^a\nabla_\theta f(s', \theta), \]
and
\begin{align}
\nabla_\theta L_{nn}(\theta)&=\sum_{(s, a)\in\zeta}(b*\sum_{s'|s, a}P_{ss'}^a\nabla_\theta f(s', \theta)\nonumber\\
&-b*\sum_{\hat{a}\in
A}P((s, \hat{a})|r(\theta))\sum_{s'|s, \hat{a}}P_{ss'}^{\hat{a}}\nabla_\theta f(s', \theta)),
\nonumber
\end{align}
where $\nabla_\theta f(s', \theta)$ denotes the gradient of the neural network output with respect to neural
network parameter $\theta=\{w, b\}$. If the \textit{VR function} $f(s, \theta)$ is linear, the objective function in Equation
\eqref{equation:loglikelihood} is concave, and a global optimum exists. However, a multi-layer
neural network is better suited to handle the non-linearity of the approximation and the
high-dimensional state-space data. A gradient ascent method can be used to learn the parameter $\theta$:
\begin{equation}
\label{equation:gradientascent}
\theta=\theta+\alpha*\nabla_\theta L_{nn}(\theta),
\end{equation}
where $\alpha$ is the learning rate. When the method converges, we can compute the optimal Q function, the optimal value function, and the
reward function based on Equation \eqref{equation:approxrewardvalue}, \eqref{equation:approxQ},
\eqref{equation:approxV}, and \eqref{equation:approxR}. The algorithm under a neural network-based
approximation function is shown in Algorithm \ref{alg:nnapprox}. This method does not involve solving the MDP problem for each updated parameter $\theta$, and
large-scale state spaces can be easily handled by an approximation function based on a deep neural
network. \begin{algorithm}[tb]
\caption{Function Approximation IRL with Neural Network}
\label{alg:nnapprox}
\begin{algorithmic}[1]
\STATE Data: {$\zeta, S, A, P, \gamma, b, \alpha$}
\STATE Result: {optimal value $V[S]$, optimal action value $Q[S, A]$, reward value $R[S]$}
\STATE create variable $\theta=\{W, b\}$ for a neural network
\STATE build $f[S, \theta]$ as the output of the neural network
\STATE build $Q[S, A]$, $V[S]$, and $R[S]$ based on Equation \eqref{equation:approxrewardvalue},
\eqref{equation:approxQ}, \eqref{equation:approxV}, and \eqref{equation:approxR}.
\STATE build log-likelihood $L_{nn}[\theta]$ based on $\zeta$ and $Q[S, A]$
\STATE compute gradient $\nabla_\theta L_{nn}[\theta]$
\STATE initialize $\theta$
\WHILE{not converging}
\STATE $\theta=\theta+\alpha*\nabla_\theta L_{nn}[\theta]$
\ENDWHILE
\STATE evaluate {optimal value $V[S]$, optimal action value $Q[S, A]$, reward value $R[S]$}
\STATE return $R[S]$
\end{algorithmic}
\end{algorithm}
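The likelihood objective of Equation \eqref{equation:loglikelihood} used in Algorithm \ref{alg:nnapprox} can be sketched as follows (a numerically stabilized version; the neural-network parameterization of $Q$ is omitted, and the Q table in the test is made up):

```python
import numpy as np

def log_likelihood(Q, zeta, b=1.0):
    """L = sum over (s,a) in zeta of (b*Q[s,a] - log sum_a' exp(b*Q[s,a'])).

    zeta is a list of observed (state, action) pairs;
    Q has shape (|S|, |A|).
    """
    L = 0.0
    for s, a in zeta:
        m = np.max(b * Q[s])                         # log-sum-exp stabilization
        lse = m + np.log(np.sum(np.exp(b * Q[s] - m)))
        L += b * Q[s, a] - lse
    return L
```

Raising the Q value of an observed action raises the likelihood, which is what the gradient ascent of Equation \eqref{equation:gradientascent} exploits.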
\subsection{Function Approximation with Gaussian Process}
Assuming the \textit{VR function} $f$ is a Gaussian process (GP) parameterized by $\theta$, the
posterior distribution is similar to the distribution in \cite{irl::gaussianirl}:
\begin{align}
P(\theta, f_u|S_u, \zeta)
&\propto P(\zeta, f_u, \theta|S_u)\\\nonumber
&= \int_{f_S}\underbrace{P(\zeta|f_S)}_\text{IRL}\underbrace{P(f_S|f_u, \theta, S_u)}_\text{GP
posterior}df_S\underbrace{P(f_u, \theta|S_u)}_\text{GP prior},
\end{align}
where $S_u$ denotes a set of supporting states for sparse Gaussian approximation
\cite{irl::sparsegaussian}, $f_u$ denotes the \textit{VR values} of $S_u$, $f_S$ denotes the
\textit{VR values} of the whole set of states, and $\theta$ denotes the parameter of the Gaussian
process. Without a closed-form integration, we use the mean function of the Gaussian posterior as the
\textit{VR value}:
\begin{align}
P(\zeta, f_u, \theta|S_u)=P(\zeta|\bar{f_S})P(f_u, \theta|S_u),
\label{equation:gpfairl}
\end{align}
where $\bar{f_S}$ denotes the mean function. Given a kernel function $k(x_i, x_j, \theta)$, the log-likelihood function is given as:
\begin{align}
&L_{gp}(\theta, f_u)\label{eq:gausslikelihood}\\&=\log P(\zeta|\bar{f_S})+\log P(f_u, \theta|S_u)\\
&=b*\sum_{(s, a)\in\zeta}(\sum_{s'|s, a}P_{ss'}^a\bar{f(s')}
-\log{\sum_{\hat{a}\in A}\exp{\sum_{s'|s, \hat{a}}P_{ss'}^{\hat{a}}\bar{f(s'))}}}\label{eq:irlterm}\\
&-\frac{f_u^TK_{S_u, S_u}^{-1 }f_u}{2 }-\frac{\log|K_{S_u, S_u}|}{2 }-\frac{n\log
2 \pi}{2 }\label{eq:gaussterm}
\\ &+\log P(\theta), \label{eq:priorterm}
\end{align}
where $K$ denotes the covariance matrix computed with the kernel function,
$\bar{f(s)}=K_{s, S_u}^TK_{S_u, S_u}^{-1 }f_u$ denotes the \textit{VR value} with the mean function
$\bar{f_S}$, expression \eqref{eq:irlterm} is the IRL likelihood, expression \eqref{eq:gaussterm} is
the Gaussian prior likelihood, and expression \eqref{eq:priorterm} is the kernel parameter prior. The parameters $\theta, f_u$ can similarly be learned with gradient methods. This approach has similar
properties to the neural network-based one, and the full algorithm is shown in Algorithm
\ref{alg:gpapprox}. \begin{algorithm}[tb]
\caption{Function Approximation IRL with Gaussian Process}
\label{alg:gpapprox}
\begin{algorithmic}[1]
\STATE Data: {$\zeta, S, A, P, \gamma, b, \alpha$}
\STATE Result: {optimal value $V[S]$, optimal action value $Q[S, A]$, reward value $R[S]$}
\STATE create variable $\theta$ for a kernel function and $f_u$ for supporting points
\STATE compute $\bar{f(s, \theta, f_u)}=K_{s, S_u}^TK_{S_u, S_u}^{-1 }f_u$
\STATE build $Q[S, A]$, $V[S]$, and $R[S]$ based on Equation \eqref{equation:approxrewardvalue},
\eqref{equation:approxQ}, \eqref{equation:approxV}, and \eqref{equation:approxR}.
\STATE build log-likelihood $L_{gp}[\theta, f_u]$ based on Equation \eqref{eq:gausslikelihood}.
\STATE compute gradient $\nabla_{\theta, f_u} L_{gp}[\theta, f_u]$
\STATE initialize $\theta, f_u$
\WHILE{not converging}
\STATE $[\theta, f_u]=[\theta, f_u]+\alpha*\nabla_{\theta, f_u} L_{gp}[\theta, f_u]$
\ENDWHILE
\STATE evaluate {optimal value $V[S]$, optimal action value $Q[S, A]$, reward value $R[S]$}
\STATE return $R[S]$
\end{algorithmic}
\end{algorithm}
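The sparse-GP mean $\bar{f(s)}=K_{s, S_u}^TK_{S_u, S_u}^{-1}f_u$ used in Algorithm \ref{alg:gpapprox} can be sketched as follows (the squared-exponential ARD kernel form and the jitter term are illustrative assumptions; the supporting points in the test are made up):

```python
import numpy as np

def ard_kernel(X1, X2, lengthscales, sigma=1.0):
    """ARD (automatic relevance determination) squared-exponential kernel."""
    d = (X1[:, None, :] - X2[None, :, :]) / lengthscales
    return sigma ** 2 * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def vr_mean(X, X_u, f_u, lengthscales, jitter=1e-8):
    """Sparse-GP mean of the VR values: f_bar(s) = K_{s,Su} K_{Su,Su}^{-1} f_u."""
    K_su = ard_kernel(X, X_u, lengthscales)
    K_uu = ard_kernel(X_u, X_u, lengthscales)
    K_uu = K_uu + jitter * np.eye(len(X_u))      # numerical stability
    return K_su @ np.linalg.solve(K_uu, f_u)
```

At the supporting states the mean interpolates $f_u$, so the supporting values fully determine the VR function elsewhere.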
\section{Experiments}
\label{irl::experiments}
We use a simulated environment to compare the proposed methods with existing methods and demonstrate
the accuracy and scalability of the proposed solution, then we show how the function approximation
framework can extend existing methods to large state spaces. In the end, we apply the proposed
method to a clinical application. \subsection{Simulated Environment}
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{objectworld.eps}
\caption{An example of a reward table for one objectworld MDP on a $10\times 10$ grid: it depends
on the randomly placed objects. }
\label{fig:objectworld}
\end{figure}
The simulated environment is an objectworld MDP \cite{irl::gaussianirl}. It is an $N\times N$ grid
with a set of objects randomly placed on it. Each object has an inner color and an outer
color, selected from a set of possible colors, $C$. The reward of a state is positive if it is
within 3 cells of outer color $C1$ and 2 cells of outer color $C2$, negative if it is only within 3 cells
of outer color $C1$, and zero otherwise. Other colors are irrelevant to the ground truth reward. One
example of the reward values is shown in Figure \ref{fig:objectworld}. In this work, we place two
random objects on a $5\times 5$ grid, and the feature of a state describes its discrete distance to each
inner color and outer color in $C$. We evaluate the proposed method in three aspects. First, we compare its accuracy in reward learning
with other methods. We generate different sets of trajectory samples, and implement the
maximum-entropy method in \cite{irl::maxentropy}, deep inverse reinforcement learning method in
\cite{irl::deepirl}, and Bellman Gradient Iteration approaches \cite{irl::BGI}. The \textit{VR
function} based on a neural network has five layers, where the number of nodes in each of the first four
layers equals the feature dimension, and the last layer outputs a single value as the summation
of the reward and the optimal value. The \textit{VR function} based on a Gaussian process uses an
automatic relevance detection (ARD) kernel \cite{irl::gaussianml} and an uninformed prior, and the
supporting points are randomly picked. The accuracy is defined as the correlation coefficient
between the ground truth reward value and the learned reward value. The result is shown in Figure \ref{fig:objectaccuracy}. The accuracy is not monotonically increasing
as the number of samples grows. The reason is that a function approximator based on a large neural
network will overfit the observed trajectory, which may not reflect the true reward function
perfectly. During reward learning, we observe that as the loglikelihood increases, the accuracy of
the updated reward function reaches the maximum after a certain number of iterations, and then
decreases to a stable value. A possible solution to this problem is early-stopping during reward
learning. For a function approximator with a Gaussian process, the choice of the supporting set is important,
although a universal solution is unavailable. \begin{figure}
\centering
\includegraphics[width=0.4 \textwidth]{accuracy. eps}
\caption{Accuracy comparison with different numbers of observations: ``maxent'' denotes the maximum
entropy method; ``deep maxent'' denotes the deep inverse reinforcement learning approach; ``pnorm
irl'' and ``gsoft irl'' denote the Bellman Gradient Iteration method; ``fairl with nn'' denotes
function approximation inverse reinforcement learning with a neural network; ``fairl with gp''
denotes function approximation inverse reinforcement learning with a Gaussian process. }
\label{fig:objectaccuracy}
\end{figure}
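The accuracy metric used above, i.e., the correlation coefficient between the ground-truth and learned reward values, can be computed directly; a minimal NumPy sketch (the function and array names are our own, not from the implementation):

```python
import numpy as np

def reward_accuracy(true_reward, learned_reward):
    """Pearson correlation coefficient between ground-truth and learned rewards."""
    return float(np.corrcoef(true_reward, learned_reward)[0, 1])
```

A reward recovered exactly up to a positive affine transformation yields an accuracy of 1.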
Second, we evaluate the scalability of the proposed method. Since all these methods involve gradient-based
updates, we choose different numbers of states, ranging from 25 to 9025, and compute the time for
one iteration of gradient ascent under each state size with each method. "Maxent" and "BGI" are
implemented with a mix of Python and C; "DeepMaxent" is implemented with
Theano, and "FAIRL" is implemented with TensorFlow. They all have C in the
backend and Python in the frontend. \begin{table}
\caption{The computation time (seconds) of one iteration of the gradient method under different numbers
of states with different methods: "Maxent" denotes the maximum entropy method, "DeepMaxent" denotes
the deep inverse reinforcement learning approach, "BGI" denotes the Bellman Gradient Iteration method,
and "FAIRLNN" and "FAIRLGP" denote the function approximation inverse reinforcement learning with a
neural network and with a Gaussian process, respectively. }
\label{tab:time}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
States & Maxent & DeepMaxent & BGI & FAIRLNN & FAIRLGP \\
\hline
625 & 24.151 &0.95 & 20.963 & 0.724 &1.317 \\
\hline
1225 & 133.839 &3.158 & 102.460 & 0.921 &2.163 \\
\hline
2025 & 474.907 &8.119 & 352.007 & 0.776 &2.332 \\
\hline
3025 & 1319.365 &20.253 & 1061.147 & 0.762 &3.723 \\
\hline
4225 & 3030.723 &59.279 & 2630.309 & 2.468 &4.459 \\
\hline
5625 & 6197.718 &101.434 & 5228.343 & 2.831 &6.495 \\
\hline
7225 & 12234.417 &229.752 & 10147.628 & 2.217 &9.316 \\
\hline
9025 & 20941.9 &10466.784 & 16345.874 & 3.347 &12.372 \\
\hline
\end{tabular}
\end{table}
The result is shown in Table \ref{tab:time}. Even though the computation time may be affected by
the different implementations, it still shows that the proposed method scales significantly better than
the alternatives, and in practice, it can be further improved by parallelizing the
computation of the reward function, the value function, and the Q function from the function
approximator. Besides, the Gaussian process-based method requires more time than the neural network
because of its matrix inversion operations.
Third, we demonstrate how the proposed framework extends existing methods to large-scale state
spaces. We increase the objectworld to an $80 \times 80$ grid, with 10 objects in 5 colors, and generate
a large sample set with size ranging from 16000 to 128000 at an interval of 16000. Then we show the
accuracy and computation time of inverse reinforcement learning with different combinations of
Bellman backup operators and motion models. The combinations include LogSumExp as Bellman backup
operator with a motion model based on the reward value \cite{irl::maxentropy} and three Bellman
backup operators ($max$, $pnorm$, $gsoft$) with a motion model based on the Q values. We do not use
even larger state spaces because the generation of trajectories from the ground truth reward function
requires a computation-intensive and memory-intensive reinforcement learning step in larger state
spaces. A three-layer neural network is adopted for function approximation, implemented with
TensorFlow on an NVIDIA GTX 1080. The training uses a batch size of 400, a learning rate of 0.001, and
20 training epochs. The accuracy is shown in Figure \ref{fig:extendaccuracy}. The
computation time for one training epoch is shown in Figure \ref{fig:extendtime}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{largeaccuracy.eps}
\caption{Reward learning accuracy of existing methods in large state spaces: "LogSumExp", "Max",
"PNorm", and "GSoft" are the Bellman backup operators; "Reward" and "QValues" are the types of
motion models; different combinations of extended methods are plotted. The accuracy is measured as
the correlation between the ground truth and the recovered reward. }
\label{fig:extendaccuracy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{largetime.eps}
\caption{Computation time for one training epoch of existing methods in large state spaces:
"LogSumExp", "Max", "PNorm", and "GSoft" are the Bellman backup operators; "Reward" and "QValues"
are the types of motion models; different combinations of extended methods are plotted. }
\label{fig:extendtime}
\end{figure}
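The Bellman backup operators compared above can be sketched in NumPy. The exact $pnorm$ and $gsoft$ forms follow \cite{irl::BGI}; the versions below are simplified illustrations of the idea (smooth surrogates of the hard max over actions), with hyperparameters $\beta$ and $p$ chosen arbitrarily:

```python
import numpy as np

def bellman_backup(q, operator="max", beta=10.0, p=8.0):
    """Backup operators over the action axis; "logsumexp", "pnorm", and
    "gsoft" are smooth surrogates of the hard "max" backup."""
    q = np.asarray(q, dtype=float)
    if operator == "max":
        return q.max(axis=-1)
    if operator == "logsumexp":
        # subtract the max for numerical stability
        m = q.max(axis=-1)
        return m + np.log(np.exp(beta * (q - q.max(axis=-1, keepdims=True))).sum(axis=-1)) / beta
    if operator == "pnorm":
        return (np.abs(q) ** p).sum(axis=-1) ** (1.0 / p)
    if operator == "gsoft":
        # weighted average of Q-values under a Boltzmann policy
        w = np.exp(beta * (q - q.max(axis=-1, keepdims=True)))
        return (w * q).sum(axis=-1) / w.sum(axis=-1)
    raise ValueError(f"unknown operator: {operator}")
```

All smooth operators approach the hard max as $\beta$ (or $p$) grows, which is what makes the value function differentiable in the reward parameters.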
The results show that the proposed method achieves accuracy and efficiency simultaneously. In
practice, a multi-start strategy may be adopted to avoid local optima.
\subsection{Clinical Experiment}
In the clinic, a patient with spinal cord injuries sits on a box equipped with a force sensor that captures the
center of pressure (COP) of the patient during movement. Each experiment is composed of two
sessions, one without transcutaneous stimulation and one with stimulation. The electrode
configuration and stimulation signal pattern are manually selected by the clinician.
In each session, the physician gives eight (or four) directional instructions for the patient to follow, namely
left, forward left, forward, forward right, right, backward right, backward, and backward left, and the
patient moves continuously to follow the instruction. The physician observes the patient's behaviors
and decides the moment to change the instruction.
Six experiments are done, each with two sessions. The COP trajectories in Figure \ref{fig:patient1 }
correspond to the session with four directional instructions; Figures \ref{fig:patient2 }, \ref{fig:patient3 },
\ref{fig:patient4 }, \ref{fig:patient5 }, and \ref{fig:patient6 } correspond to the sessions with eight
directional instructions.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion5.eps}
\caption{Patient 1 under four directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations. }
\label{fig:patient1 }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion0.eps}
\caption{Patient 2 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations. }
\label{fig:patient2 }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion1.eps}
\caption{Patient 3 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations. }
\label{fig:patient3 }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion2.eps}
\caption{Patient 4 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations. }
\label{fig:patient4 }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion3.eps}
\caption{Patient 5 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations. }
\label{fig:patient5 }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion4.eps}
\caption{Patient 6 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations. }
\label{fig:patient6 }
\end{figure}
The COP sensory data from each session are discretized on a $100 \times 100$ grid, which is fine
enough to capture the patient's small movements. The problem is formulated as an MDP, where each
state captures the patient's discretized location and velocity, and each action changes the
velocity to one of eight possible directions. The velocity is represented by a two-dimensional vector
pointing in one of eight possible directions. Thus the problem has 80000 states and 8 actions, and the
transition model assumes that each action leads to one state with probability one.
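The state space and deterministic transition model above can be sketched as follows; the indexing scheme is our own illustration, not necessarily the one used in the experiments:

```python
GRID = 100  # 100 x 100 discretized COP positions
# eight possible velocity directions as unit steps on the grid
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def state_index(x, y, d):
    """Flatten (position, velocity direction) into a single state id."""
    return (x * GRID + y) * len(DIRS) + d

def step(x, y, d, action):
    """Deterministic transition: the chosen action becomes the new velocity,
    and the position moves one cell in that direction (clipped to the grid)."""
    dx, dy = DIRS[action]
    nx = min(max(x + dx, 0), GRID - 1)
    ny = min(max(y + dy, 0), GRID - 1)
    return nx, ny, action
```

This gives $100 \times 100 \times 8 = 80000$ states and 8 actions, matching the problem size stated above.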
\begin{table*}
\centering
\caption{Evaluation of the learned rewards: "forward" etc. denote the instructed direction; "1 u"
denotes patient id "1 ", with "u" denoting the unstimulated session and "s" the stimulated
session. The table shows the correlation coefficient between the ideal reward and the recovered
reward. }
\label{tab:feature1 }
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
&forward&backward&left&right&top left&top right&bottom left&bottom right&origin\\\hline
1 u&-0.352172 &-0.981877 &-0.511908 &-0.399777 &&&&&-0.0365778 \\\hline
1 s&-0.36437 &-0.999993 &-0.14757 &-0.321745 &&&&&0.154132 \\\hline
2 u&-0.459214 &-0.154868 &0.134229 &0.181629 &0.123853 &0.677538 &-0.398259 &0.264739 &-0.206476 \\\hline
2 s&-0.115516 &-0.127179 &0.569024 &0.164638 &0.360013 &0.341521 &0.0817681 &0.134049 &-0.00986036 \\\hline
3 u&0.533031 &0.0364088 &0.128325 &-0.729293 &0.397182 &0.155565 &-0.48818 &-0.293617 &-0.176923 \\\hline
3 s&-0.340902 &-0.091139 &0.344993 &0.0557266 &0.162783 &0.740827 &-0.0897398 &-0.00674047 &-0.414462 \\\hline
4 u&0.099563 &-0.0965766 &0.145509 &-0.912844 &0.250434 &-0.299531 &0.577489 &0.134106 &-0.151334 \\\hline
4 s&-0.258762 &-0.019275 &-0.263354 &0.549305 &0.0910128 &0.755755 &-0.225137 &0.289126 &-0.216737 \\\hline
5 u&0.287442 &0.0859648 &-0.368503 &0.504589 &-0.297166 &0.401829 &0.0583192 &-0.23662 &-0.0762139 \\\hline
5 s&-0.350374 &-0.0969275 &0.538291 &-0.617767 &-0.00442265 &0.0923481 &0.115864 &-0.576655 &-0.0108339 \\\hline
6 u&0.205348 &0.302459 &0.550447 &0.0549231 &-0.348898 &0.420478 &0.378317 &0.56191 &0.145699 \\\hline
6 s&0.105335 &-0.155296 &0.0193898 &-0.283895 &-0.0577008 &0.220243 &-0.31611 &-0.296682 &-0.0753326 \\\hline
\end{tabular}
\end{table*}
To learn the reward function from the observed trajectories based on the formulated MDP, we use the
coordinate and velocity direction of each grid as the feature, and learn the reward function
parameter from each set of data. The function approximator is a neural network with three hidden
layers and $[100,50,25 ]$ nodes.
We only test the proposed method with a neural-network function approximator, because it would take a
prohibitive amount of time to learn the reward function with the other methods, and the GP approach
relies on the set of supporting points. Assuming it takes only 100 iterations to converge, the
proposed method takes about one minute while the others would run for two to four weeks, and in practice, it
may take more iterations to converge.
To compare the reward functions with and without stimulation, we adopt the same initial parameters
during reward function learning, and run both learning processes for 10000 iterations with a learning
rate of 0.00001.
Given the learned reward function, we score the patient's recovery with the correlation coefficient
between the recovered rewards and the ideal rewards, under the clinician's instructions, over the states
visited by the patient. The ideal reward for each state is the cosine similarity between the state's
velocity vector and the instructed direction.
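The ideal reward defined above is simply a cosine similarity; a minimal NumPy sketch (names are illustrative):

```python
import numpy as np

def ideal_reward(velocity, instructed_direction):
    """Cosine similarity between a state's velocity vector and the
    clinician's instructed direction."""
    v = np.asarray(velocity, dtype=float)
    u = np.asarray(instructed_direction, dtype=float)
    return float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))
```

The score is $1$ when the patient moves exactly along the instructed direction and $-1$ when moving opposite to it.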
The result is shown in Table \ref{tab:feature1 }. It shows that the patient's ability to follow the
instructions is affected by the stimulations, but whether it is improved or not varies among
different directions. The clinical interpretations will be done by physicians.
\section{Conclusions}
\label{irl::conclusions}
This work deals with the problem of inverse reinforcement learning in large state spaces, and solves
the problem with a function approximation method that avoids solving reinforcement learning problems
during reward learning. The simulated experiments show that the proposed method is more accurate and
scalable than existing methods, and can extend existing methods to high-dimensional spaces. A
clinical application of the proposed method is presented.
In future work, we will remove the requirement of an a priori known transition function by incorporating an
environment-model learning process into the function approximation framework.
\section{Introduction} \label{sec:introduction}
We consider nonconvex quadratically constrained quadratic programs (QCQPs) of the form
\begin{align}
\label{eq:hqcqp} \tag{$\PC$}
\begin{array}{rl}
\min & \trans{\x}Q^0 \x \\
\subto & \trans{\x}Q^p \x \leq b_p, \quad p \in [m],
\end{array}
\end{align}
where $Q^0, \ldots, Q^m \in \SymMat^n$, $\b \in \Real^m$, $\x \in \Real^n$, and $[m]$ denotes
the set $\left\{i \in \Natural\, \middle|\, 1 \leq i \leq m \right\}$. We use $\SymMat^n$ to denote
the space of $n \times n$ symmetric matrices. A general form of QCQPs with linear terms
\begin{align*}
\begin{array}{rl}
\min & \trans{\x}Q^0 \x + \trans{(\q^0 )}\x \\
\subto & \trans{\x}Q^p \x + \trans{(\q^p)}\x \leq b_p \quad p \in [m],
\end{array}
\end{align*}
can be represented in the form of \eqref{eq:hqcqp}
using a new variable $x_0 $ such that $x_0 ^2 = 1 $, where $\q^0, \ldots, \q^m \in \Real^n$. For simplicity, we describe QCQPs as \eqref{eq:hqcqp}
and we assume that \eqref{eq:hqcqp} is feasible in this paper. Nonconvex QCQPs \eqref{eq:hqcqp} are known to be NP-hard in general; however, finding the exact solution of
some classes of QCQPs has been a popular subject \cite{Azuma2021, Burer2019, Jeyakumar2014, kim2003exact, Sojoudi2014exactness, Wang2021geometric, Wang2021tightness}
as they can provide solutions for important applications
formulated as QCQPs \eqref{eq:hqcqp}. They include optimal power flow problems~\cite{Lavaei2012, Zhou2019}, pooling problems~\cite{kimizuka2019solving},
sensor network localization problems~\cite{BISWAS2004, KIM2009, SO2007},
quadratic assignment problems~\cite{PRendl09, ZHAO1998},
and the max-cut problem~\cite{Geomans1995}. Moreover, it is well-known that polynomial optimization problems can be recast as QCQPs. By replacing $\x\trans{\x}$ with a rank-1 matrix $X \in \SymMat^n$ in \eqref{eq:hqcqp}
and removing the rank constraint of $X$,
the standard (Shor) SDP relaxation
and its dual problem can be expressed as
\begin{align}
\label{eq:hsdr} \tag{$\PC_R$} &
\begin{array}{rl}
\min & \ip{Q^0 }{X} \\
\subto & \ip{Q^p}{X} \le b_p, \quad p \in [m], \\
& X \succeq O,
\end{array} \\
\label{eq:hsdrd} \tag{$\DC_R$} &
\begin{array}{rl}
\max & \trans{-\b}\y \\
\subto & S(\y) := Q^0 + \sum\limits_{p=1 }^m y_p Q^p \succeq O,
\quad \y \ge \0,
\end{array}
\end{align}
where $\ip{Q^p}{X}$ denotes the Frobenius inner product of $Q^p$ and $X$, i.e., $\ip{Q^p}{X} \coloneqq \sum_{i, j} Q^p_{ij} X_{ij}$,
and $X \succeq O$ means that $X$ is positive semidefinite. The SDP relaxation provides a lower bound of the optimal value of \eqref{eq:hqcqp} in general. When the SDP relaxation ({$\PC_R$}) provides a rank-1 solution $X$, we say that
the SDP relaxation is exact. In this case, the exact optimal solution
and exact optimal value can be computed in polynomial time. A second-order cone programming (SOCP) relaxation can be obtained by
further relaxing the positive semidefinite constraint $X \succeq O$, for instance,
requiring all $2 \times 2 $ principal submatrices of $X$ to be positive
semidefinite~\cite{kim2003exact, sheen2020exploiting}. For QCQPs with a certain sparsity structure, e.g., forest structures,
the SDP relaxation coincides with the SOCP relaxation. In this paper, we present a wider class of QCQPs that can be solved exactly with the SDP relaxation by extending
the results in \cite{Azuma2021} and \cite{Sojoudi2014exactness}. The extension is based on the facts that trees and forests are bipartite graphs
and that QCQPs with no particular structure but with a consistent sign of $Q_{ij}^p$ for $p=0,1, \ldots, m$ can be transformed into
ones with bipartite structures. Sufficient conditions for the exact
SDP relaxation of QCQP \eqref{eq:hqcqp} are described. These conditions are called exactness conditions in the subsequent discussion. We mention that our results on the exact SDP relaxation are obtained by investigating the rank of $S(\y)$ in the dual of the SDP relaxation ({$\DC_R$}). When discussing the exact optimal solution of nonconvex QCQPs, convex relaxations of QCQPs such as the SDP or
SOCP have played a pivotal role. In particular,
the signs of the elements in the data matrices $Q^0, \ldots, Q^m$
as in \cite{kim2003exact, Sojoudi2014exactness} and
graph structures such as forests \cite{Azuma2021} and bipartite structures \cite{Sojoudi2014exactness} have been used to identify the classes
of nonconvex QCQPs whose exact optimal solution can be attained via the SDP relaxation. QCQPs with nonpositive off-diagonal data matrices were shown to have exact SDP and SOCP relaxations~\cite{kim2003exact}. This result was generalized by Sojoudi and Lavaei~\cite{Sojoudi2014exactness}
with a sufficient condition that can be tested by
the sign-definiteness based on the cycles in the aggregated sparsity pattern graph induced from the nonzero elements of the data matrices in \eqref{eq:hqcqp}. A finite set $\{Q^0_{ij}, Q^1_{ij}, \ldots, Q^m_{ij}\} \subseteq \Real$ is called sign-definite
if the elements of the set are either all nonnegative or all nonpositive. We note that these results are obtained by analyzing the primal problem ($\PC_R$). For general QCQPs with no particular structure,
Burer and Ye in \cite{Burer2019 } presented sufficient conditions for the exact semidefinite formulation
with a polynomial-time checkable polyhedral system. From the dual SDP relaxation \eqref{eq:hsdrd} using strong duality,
they proposed
an LP-based technique to detect the exactness of the SDP relaxation of QCQPs
consisting of diagonal matrices $Q^0, \ldots, Q^m$ and linear terms. Azuma et al.~\cite{Azuma2021} presented related results on QCQPs with forest structures. With respect to the exactness conditions,
Yakubovich's S-lemma~\cite{Polik2007, Yakubovich1971 }
(also known as S-procedure) can be regarded as one of the most important results. It showed that the trust-region subproblem,
a subclass of QCQPs with only one constraint ($m = 1 $) and $Q^1 \succeq O$, always admits an exact SDP relaxation. Under some mild assumptions,
Wang and Xia~\cite{Wang2015} generalized this result to QCQPs with two constraints ($m = 2 $)
and any matrices satisfying $Q^1 = -Q^2 $ but not necessarily positive semidefinite. For the extended trust-region subproblem
whose constraints consist of one ellipsoid and linear inequalities,
the exact SDP relaxation has been studied by
Jeyakumar and Li~\cite{Jeyakumar2014 }. They proved that
the SDP relaxation of the extended trust-region subproblem is exact if
the algebraic multiplicity of the minimum eigenvalue of $Q^0 $ is strictly greater than
the dimension of the space spanned by the coefficient vectors of the linear inequalities. This condition was slightly improved by Hsia and Sheu~\cite{Hsia2013 }. In addition, Locatelli~\cite{Locatelli2016 } introduced
a new exactness condition for the extended trust-region subproblem
based on the KKT conditions
and proved that it is more general than the previous results. A different approach to the exactness of the SDP relaxation for QCQPs is to study the convex hull exactness, i.e., the coincidence of
the convex hull of the epigraph of a QCQP and the projected epigraph of its SDP relaxation. Wang and K{\i}l{\i}n\c{c}-Karzan in~\cite{Wang2021tightness}
presented sufficient conditions for the convex hull exactness under the condition that
the feasible set $\Gamma \coloneqq \{\y \geq \0 \, |\, S(\y) \succeq O\}$ of \eqref{eq:hsdrd} is polyhedral. Their results were improved in~\cite{Wang2021geometric} by eliminating this condition. The rank-one generated (ROG) property, a geometric property,
was employed by Argue et al.~\cite{Argue2020necessary}
to evaluate the feasible set of the SDP relaxation. In their paper, they proposed sufficient conditions that
the feasible set of the SDP relaxation is ROG, and
connected the ROG property with both the objective value and the exactness of the convex hull. We describe our contributions:
\begin{itemize} \vspace{-2 mm}
\item
We first show that
if the aggregated sparsity pattern graph is connected and bipartite
and a feasibility checking system constructed from QCQP \eqref{eq:hqcqp} is infeasible,
then the SDP relaxation is exact in section~\ref{sec:main}. It is a polynomial-time method as the systems can be represented as SDPs. This result can be regarded as an extension of Azuma et al.~\cite{Azuma2021}
in the sense that
the aggregated sparsity pattern
was generalized from forests to bipartite graphs. We should mention that the signs of the data are irrelevant. We give in section~\ref{sec:example} two numerical examples of QCQPs which can be shown to
have exact SDP relaxations by our method, but fail to meet the conditions for real-valued
QCQPs of \cite{Sojoudi2014exactness}. \item
We propose a conversion method to derive a bipartite graph structure in \eqref{eq:hqcqp} from QCQPs with no apparent structure,
so that the SDP relaxation of the resulting QCQP provides the exact optimal solution. More precisely, for every off-diagonal index $(i, j)$, if the set $\{Q^0_{ij}, \ldots, Q^m_{ij}\}$ is sign-definite,
i.e., either all nonnegative or all nonpositive,
then any QCQP \eqref{eq:hqcqp} can be transformed into a nonnegative off-diagonal QCQP
with bipartite aggregated sparsity
by introducing a new variable $\z \coloneqq -\x$ and a new constraint $\|\x + \z\|_2^2 \leq 0 $, which
covers a result for the real-valued QCQP proposed in \cite{Sojoudi2014exactness}. \item We also show that the known results on the exactness of QCQPs
where (a) all the off-diagonal elements are sign-definite and the aggregated sparsity pattern graph is a forest,
or (b) all the off-diagonal elements are nonpositive, can be proved using our method. \item For disconnected pattern graphs,
a perturbation of the objective function is introduced, as in~\cite{Azuma2021}, in section~\ref{sec:perturbation}
to demonstrate that a QCQP is exact
if there exists a sequence of perturbed problems converging to the QCQP
while maintaining the exactness of their SDP relaxation
under assumptions weaker than those in \cite{Azuma2021}. \end{itemize}
Throughout this paper, the following example is used to illustrate the difference between our result and previous works. \begin{example}
\label{examp:cycle-graph-4 -vertices}
\begin{align*}
\begin{array}{rl}
\min & \trans{\x} Q^0 \x \\
\subto & \trans{\x} Q^1 \x \leq 10, \quad \trans{\x} Q^2 \x \leq 10,
\end{array}
\end{align*}
where \begin{align*}
& Q^0 = \begin{bmatrix}
0 & -2 & 0 & 2 \\ -2 & 0 & -1 & 0 \\
0 & -1 & 5 & 1 \\ 2 & 0 & 1 & -4 \end{bmatrix}, \
Q^1 = \begin{bmatrix}
5 & 2 & 0 & 1 \\ 2 & -1 & 3 & 0 \\
0 & 3 & 3 & -1 \\ 1 & 0 & -1 & 4 \end{bmatrix}, \
Q^2 = \begin{bmatrix}
-1 & 1 & 0 & 0 \\ 1 & 4 & -1 & 0 \\
0 & -1 & 6 & 1 \\ 0 & 0 & 1 & -2 \end{bmatrix}. \end{align*}
\end{example}
\noindent
Although Example \ref{examp:cycle-graph-4 -vertices} does not satisfy
the sign-definiteness,
the proposed method can successfully show that the SDP relaxation is exact. The rest of this paper is organized as follows. In section~\ref{sec:preliminaries},
the aggregated sparsity pattern of QCQPs and the sign-definiteness are defined
and related works on the exactness of the SDP relaxation for QCQPs
with some aggregated sparsity pattern are described. Sections~\ref{sec:main} and \ref{sec:perturbation} include the main results of this paper. In section~\ref{sec:main}, the assumptions necessary for the exact SDP relaxation are described,
and sufficient conditions for the exact SDP relaxation are presented
under the connectivity of the aggregated sparsity pattern. In section~\ref{sec:perturbation},
we show that the sufficient conditions can be extended to
QCQPs which do not satisfy the connectivity condition. The perturbation results on the exactness are utilized to remove the connectivity condition. In section~\ref{sec:example}, we also provide specific numerical instances to compare our result with the existing work and
illustrate our method. Finally, we conclude in section~\ref{sec:conclution}. \section{Preliminaries} \label{sec:preliminaries}
We denote
the $n$-dimensional Euclidean space by $\Real^n$
and the nonnegative orthant of $\Real^n$ by $\Real_+^n$. We write the zero vector and the vector of all ones as $\0 \in \Real^n$ and $\1 \in \Real^n$, respectively.
the vertex and edge sets are clear. \subsection{Aggregated sparsity pattern}
The aggregated sparsity pattern of the SDP relaxation, defined from the data matrices $Q^p \, (p \in [0, m])$, is used
to describe the sparsity structure of QCQPs. Let $\VC = [n]$ denote the set of indices of
rows and columns of $n \times n$ symmetric matrices. Then, the set of indices
\begin{equation*}
\EC = \left\{
(i, j) \in \VC \times \VC \, \middle|\,
\text{$i \neq j$ and $Q^p_{ij} \ne 0 $ for some $p \in [0, m]$}
\right\}
\end{equation*}
is called the aggregated sparsity pattern
for both a given QCQP \eqref{eq:hqcqp} and its SDP relaxation \eqref{eq:hsdr}. If $\EC$ denotes the set of edges of a graph with vertices $\VC$,
the graph $G(\VC, \EC)$ is called the aggregated sparsity pattern graph. If $\EC$ corresponds to an adjacency matrix $\QC$ of $n$ vertices,
$\QC$ is called the aggregated sparsity pattern matrix. Consider the QCQP in Example \ref{examp:cycle-graph-4 -vertices} as an illustrative example. \noindent
As the $(1, 3 )$th and $(2, 4 )$th elements are zero in $Q^0, Q^1, Q^2 $,
the aggregated sparsity pattern graph is a single cycle with 4 vertices, as
shown in Figure~\ref{fig:example-aggregated-sparsity}. This graph is the simplest connected bipartite graph with cycles. \begin{figure}[t]
\centering
\begin{minipage}{0.30 \textwidth}
\tikzset{every picture/. style={line width=0.75 pt}}
\begin{tikzpicture}[x=0.75 pt, y=0.75 pt, yscale=-0.6, xscale=0.6 ]
\draw (140,40 ) -- (40,140 ) -- (140,140 ) -- (40,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (10,40 ) .. controls (10,23.43 ) and (23.43,10 ) .. (40,10 ) .. controls (56.57,10 ) and (70,23.43 ) .. (70,40 ) .. controls (70,56.57 ) and (56.57,70 ) .. (40,70 ) .. controls (23.43,70 ) and (10,56.57 ) .. (10,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (10,140 ) .. controls (10,123.43 ) and (23.43,110 ) .. (40,110 ) .. controls (56.57,110 ) and (70,123.43 ) .. (70,140 ) .. controls (70,156.57 ) and (56.57,170 ) .. (40,170 ) .. controls (23.43,170 ) and (10,156.57 ) .. (10,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (110,140 ) .. controls (110,123.43 ) and (123.43,110 ) .. (140,110 ) .. controls (156.57,110 ) and (170,123.43 ) .. (170,140 ) .. controls (170,156.57 ) and (156.57,170 ) .. (140,170 ) .. controls (123.43,170 ) and (110,156.57 ) .. (110,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (110,40 ) .. controls (110,23.43 ) and (123.43,10 ) .. (140,10 ) .. controls (156.57,10 ) and (170,23.43 ) .. (170,40 ) .. controls (170,56.57 ) and (156.57,70 ) .. (140,70 ) .. controls (123.43,70 ) and (110,56.57 ) .. (110,40 ) -- cycle ;
\draw (40,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1 }
\end{center}\end{minipage}};
\draw (140,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2 }
\end{center}\end{minipage}};
\draw (140,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4 }
\end{center}\end{minipage}};
\draw (40,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3 }
\end{center}\end{minipage}};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.30 \textwidth}
\begin{equation*}
\QC = \begin{bmatrix}
\star & \star & 0 & \star \\ \star & \star & \star & 0 \\
0 & \star & \star & \star \\ \star & 0 & \star & \star
\end{bmatrix}. \end{equation*}
\end{minipage}
\caption{The aggregated sparsity pattern graph and matrix of Example~\ref{examp:cycle-graph-4 -vertices}. $\star$ denotes an arbitrary value. }
\label{fig:example-aggregated-sparsity}
\end{figure}
For the discussion on QCQPs with sign-definiteness, we adopt the following notation
from \cite{Sojoudi2014 exactness}. We define the sign $\sigma_{ij}$ of each edge in $\VC \times \VC$ as
\begin{equation} \label{eq:definition-sigma-ij}
\sigma_{ij} = \begin{cases}
\quad +1 \quad & \text{if $Q^0 _{ij}, \ldots, Q^m_{ij} \geq 0 $, } \\
\quad -1 \quad & \text{if $Q^0 _{ij}, \ldots, Q^m_{ij} \leq 0 $, } \\
\quad 0 \quad & \text{otherwise. }
\end{cases}
\end{equation}
Obviously,
$\sigma_{ij} \in \{-1, +1 \}$ if and only if $\{Q^0_{ij}, \ldots, Q^m_{ij}\}$ is sign-definite. Sojoudi and Lavaei~\cite{Sojoudi2014exactness}
proposed the following condition for exactness. \vspace{-2 mm}
\begin{theorem}[{\cite[Theorem 2]{Sojoudi2014exactness}}] \label{thm:sojoudi-theorem}
The SOCP relaxation and the SDP relaxation of \eqref{eq:hqcqp} are exact
if both of the following hold:
\begin{align}
&\sigma_{ij} \neq 0, && \forall (i, j) \in \EC, \label{eq:sign-constraint-sign-definite} \\
\prod_{(i, j) \in \mathcal{C}_r} & \sigma_{ij} = (-1 )^{\left|\mathcal{C}_r\right|}, && \forall r \in \{1, \ldots, \kappa\}, \label{eq:sign-constraint-simple-cycle}
\end{align}
where the set of cycles $\mathcal{C}_1, \ldots, \mathcal{C}_\kappa \subseteq \EC$ denotes a cycle basis for $G$. \end{theorem}
\vspace{-2 mm}
With the aggregated sparsity pattern graph $G$ of a given QCQP,
they presented the following corollary:
\begin{coro}[{\cite[Corollary 1]{Sojoudi2014exactness}}]
\label{coro:sojoudi-corollary1 }
The SDP relaxation and the SOCP relaxation of \eqref{eq:hqcqp} are exact
if one of the following holds:
\begin{enumerate}[label=(\alph*)]
\item $G$ is a forest with $\sigma_{ij} \in \{-1, 1 \}$ for all $(i, j) \in \EC$, \label{cond:sojoudi-forest}
\item $G$ is bipartite with $\sigma_{ij} = 1 $ for all $(i, j) \in \EC$, \label{cond:sojoudi-bipartite}
\item $G$ is arbitrary with $\sigma_{ij} = -1 $ for all $(i, j) \in \EC$. \label{cond:sojoudi-arbitrary}
\end{enumerate}
\end{coro}
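The sign $\sigma_{ij}$ and the aggregated sparsity pattern appearing in these conditions are straightforward to compute from the data matrices; a NumPy sketch with 0-based indices (function names are our own):

```python
import numpy as np

def sigma(Qs, i, j):
    """Sign of edge (i, j): +1/-1 if {Q^0_ij, ..., Q^m_ij} is sign-definite, else 0."""
    vals = [Q[i, j] for Q in Qs]
    if all(v >= 0 for v in vals):
        return 1
    if all(v <= 0 for v in vals):
        return -1
    return 0

def aggregated_edges(Qs):
    """Edge set of the aggregated sparsity pattern graph."""
    n = Qs[0].shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if any(Q[i, j] != 0 for Q in Qs)}
```

For the matrices of Example \ref{examp:cycle-graph-4 -vertices}, the edge set is the 4-cycle, yet $\sigma_{12} = 0$, so Corollary \ref{coro:sojoudi-corollary1 } is not applicable to that example.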
\subsection{Conditions for exact SDP relaxations with forest structures}
Recently,
Azuma et al.~\cite{Azuma2021} proposed
a method to decide the exactness of the SDP relaxation of QCQPs with forest structures. Forest-structured QCQPs and their SDP relaxations have no cycles in their aggregated sparsity pattern graph. In their work,
the rank of the dual SDP relaxation was determined using feasibility systems under the following assumption:
\begin{assum}
\label{assum:previous-assumption}
The following conditions hold for \eqref{eq:hqcqp}:
\begin{enumerate}[label=(\roman*)]
\item there exists $\bar{\y} \geq 0 $ such that $\sum \bar{y}_p Q^p \succ O$, and \label{assum:previous-assumption-1 }
\item \eqref{eq:hsdr} has an interior feasible point. \label{assum:previous-assumption-2 }
\end{enumerate}
\end{assum}
\noindent
We note that Assumption \ref{assum:previous-assumption} is used to derive
strong duality of the SDP relaxation
and the boundedness of the feasible set. More precisely, for $\bar{\y}$ in Assumption~\ref{assum:previous-assumption},
multiplying $\ip{Q^p}{X} \leq b_p$ by $\bar{y}_p$ and adding together leads to
\begin{equation*}
\ip{\left(\sum_{p = 1 }^m \bar{y}_pQ^p\right)}{X} \leq \trans{\b}\bar{\y},
\end{equation*}
which implies, together with $X \succeq O$ and $\sum_p \bar{y}_p Q^p \succ O$, that the feasible set of $X$ is bounded. We describe the result in \cite{Azuma2021} for our subsequent discussion. \begin{prop}[\cite{Azuma2021}] \label{prop:forest-results}
Assume that a given QCQP satisfies Assumption~\ref{assum:previous-assumption},
and that its aggregated sparsity pattern graph $G(\VC, \EC)$ is a forest. The problem \eqref{eq:hsdr} is exact
if, for all $(k, \ell) \in \EC$, the following system has no solutions:
\begin{equation} \label{eq:system-zero}
\y \geq \0, \; S(\y) \succeq O, \; S(\y)_{k\ell} = 0. \end{equation}
\end{prop}
\noindent
Each of the above feasibility systems can be formulated as an SDP, so the condition can be checked in polynomial time
since the number of edges of a forest with $n$ vertices is at most $n - 1 $.
\section{Conditions for exact SDP relaxations with connected bipartite structures} \label{sec:main}
Throughout this section,
we assume that the aggregated sparsity pattern graph $G(\VC, \EC)$ of a QCQP is connected and bipartite. Under this assumption,
we present sufficient conditions for the SDP relaxation to be exact. The main result of this section, Theorem~\ref{thm:system-based-condition-connected}, is extended to
disconnected aggregated sparsity patterns in section~\ref{sec:perturbation}. Assumption~\ref{assum:previous-assumption}
was introduced
only to derive the strong duality used in the proof of Proposition~\ref{prop:forest-results}. Instead of Assumption~\ref{assum:previous-assumption}, we introduce
Assumption~\ref{assum:new-assumption}. In Remark~\ref{rema:comparison-assumption} below, we consider the relation between
Assumptions~\ref{assum:previous-assumption} and
\ref{assum:new-assumption}.
\begin{assum} \label{assum:new-assumption}
The following two conditions hold:
\begin{enumerate}[label=(\roman*)]
\item \label{assum:new-assumption-1 }
the sets of optimal solutions for \eqref{eq:hsdr} and \eqref{eq:hsdrd} are nonempty; and
\item \label{assum:new-assumption-2 }
at least one of the following two conditions holds:
\begin{enumerate}[label=(\alph*)]
\item \label{assum:new-assumption-2 -1 }
the feasible set of \eqref{eq:hsdr} is bounded; or
\item \label{assum:new-assumption-2 -2 }
the set of optimal solutions for \eqref{eq:hsdrd} is bounded. \end{enumerate}
\end{enumerate}
\end{assum}
\noindent
The following lemma states that strong duality holds under Assumption~\ref{assum:new-assumption}. \begin{lemma
\label{lem:feasible-set-strong-duality}
If Assumption~\ref{assum:new-assumption} is satisfied,
strong duality holds between \eqref{eq:hsdr} and \eqref{eq:hsdrd}, that is,
\eqref{eq:hsdr} and \eqref{eq:hsdrd} have optimal solutions
and their optimal values are finite and equal.
\end{lemma}
\begin{rema}
\label{rema:comparison-assumption}
Suppose that Assumption~\ref{assum:previous-assumption} holds. By Assumption~\ref{assum:previous-assumption}~{\it \ref{assum:previous-assumption-1 }}, there exists $\bar{\y} \geq \0 $ such that $\sum_p \bar{y}_pQ^p \succ O$. Then, there obviously exists sufficiently large $\lambda > 0 $
such that
\begin{equation*}
\lambda \bar{\y} \geq \0 \quad \text{and} \quad Q^0 + \sum_p \lambda\bar{y}_pQ^p \succ O,
\end{equation*}
which implies that \eqref{eq:hsdrd} has an interior feasible point. It follows that the set of optimal solutions of \eqref{eq:hsdr} is bounded. Similarly, since \eqref{eq:hsdr} has an interior point by Assumption~\ref{assum:previous-assumption},
the set of optimal solutions of \eqref{eq:hsdrd} is also bounded. This shows that Assumption~\ref{assum:new-assumption}~{\it \ref{assum:new-assumption-1 }} and
{\it \ref{assum:new-assumption-2 }\ref{assum:new-assumption-2 -2 }} hold. In addition,
as mentioned right after Assumption~\ref{assum:previous-assumption},
the feasible set of \eqref{eq:hsdr} is bounded. %
Thus,
Assumption~\ref{assum:new-assumption}
{\it \ref{assum:new-assumption-2 }\ref{assum:new-assumption-2 -1 }} is also satisfied
under Assumption~\ref{assum:previous-assumption}.
\end{rema}
\subsection{Bipartite sparsity pattern matrix} \label{ssec:bipartite-matrix}
For a given matrix $M \in \SymMat^n$,
a sparsity pattern graph $G(\VC, \EC_M)$ can be defined by the vertex set and edge set:
\begin{equation*}
\VC = [n], \quad
\EC_M = \left\{(i, j) \in \VC \times \VC \, \middle|\, M_{ij} \neq 0 \right\}. \end{equation*}
Conversely, if $(i, j) \not\in \EC_M$, then the $(i, j)$th element of $M$ must be zero. The graph $G(\VC, \EC)$ is called bipartite if
its vertices can be divided into two disjoint sets $\LC$ and $\RC$ such that
no two vertices in the same set are adjacent. Equivalently, $G$ is bipartite if and only if it contains no odd cycles. If $G(\VC, \EC)$ is bipartite, it can be represented as $G(\LC, \RC, \EC)$,
where $\LC$ and $\RC$ are disjoint sets of vertices, sometimes called the parts of the bipartite graph $G$. The following lemma is an immediate consequence of Proposition 1 of \cite{grone1992nonchordal}. It shows that the rank of a nonnegative positive semidefinite matrix whose row sums are all positive is bounded below by $n - 1 $
under the stated sparsity conditions. We utilize Lemma~\ref{lemma:bipartite-rank} to estimate the rank of solutions of the dual SDP relaxation,
and establish conditions for exact SDP relaxations in this section.
\begin{lemma}[{\cite[Proposition 1]{grone1992nonchordal}}] \label{lemma:bipartite-rank}
Let $M \in \Real^{n \times n}$ be a nonnegative and positive semidefinite matrix with $M\1 > \0 $. If the sparsity pattern graph of $M$ is bipartite and connected,
then $\rank(M) \geq n - 1 $. \end{lemma}
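Lemma~\ref{lemma:bipartite-rank} requires the sparsity pattern graph of $M$ to be bipartite and connected. As an illustration (ours, not part of the paper), both properties can be verified simultaneously by a breadth-first 2-coloring; the following minimal Python sketch assumes the graph is given by its vertex count and an undirected edge list.

```python
from collections import deque

def is_connected_bipartite(n, edges):
    """BFS 2-coloring: True iff the graph on vertices 0..n-1
    with the given undirected edges is connected and bipartite."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    color = [None] * n
    color[0] = 0
    queue = deque([0])
    seen = 1
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if color[v] is None:
                color[v] = 1 - color[u]   # put v in the opposite part
                seen += 1
                queue.append(v)
            elif color[v] == color[u]:
                return False              # odd cycle found
    return seen == n                      # connected iff BFS reached all vertices
```

For instance, a 4-cycle passes the test, while a triangle (an odd cycle) or a disconnected graph fails it.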
Since the aggregated sparsity pattern graph $G$ is composed from
$Q^0, Q^1, \dots, Q^m$,
the sparsity pattern graph of the matrix $S(\y)$ in the dual of the SDP relaxation is clearly a subgraph of $G$. As a result, if $G$ is bipartite,
then the rank of $S(\y)$ can be estimated by Lemma~\ref{lemma:bipartite-rank}
since the sparsity pattern graph of $S(\y)$ is also bipartite. This will be used in the proof of Theorem~\ref{thm:system-based-condition-connected}.
\subsection{Main results} \label{ssec:main-connected}
We present our main results, that is,
sufficient conditions for the SDP relaxation of a QCQP with bipartite structures to be exact.
\begin{theorem}
\label{thm:system-based-condition-connected}
Suppose that Assumption~\ref{assum:new-assumption} holds
and the aggregated sparsity pattern $G(\VC, \EC)$ is a bipartite graph. Then, \eqref{eq:hsdr} is exact if
\begin{itemize}
\item $G(\VC, \EC)$ is connected,
\item
for all $(k, \ell) \in \EC$, the following system has no solutions:
\begin{equation} \label{eq:system-nonpositive}
\y \geq \0, \; S(\y) \succeq O, \; S(\y)_{k\ell} \leq 0. \end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
Let $X^*$ be any optimal solution for \eqref{eq:hsdr} which exists by Assumption~\ref{assum:new-assumption}. By Lemma~\ref{lem:feasible-set-strong-duality},
the optimal values of \eqref{eq:hsdr} and \eqref{eq:hsdrd} are finite and equal. Thus, there exists an optimal solution $\y^*$ for \eqref{eq:hsdrd}
such that the complementary slackness holds, i. e.,
\begin{equation*}
X^* S(\y^*) = O. \end{equation*}
%
Since $\y^* \geq \0 $ and $S(\y^*) \succeq O$,
by the infeasibility of \eqref{eq:system-nonpositive},
we obtain $S(\y^*)_{k\ell} > 0 $ for every $(k, \ell) \in \EC$. Furthermore, for each $i \in \VC$, the $i$th element of $S(\y^*)\1 $ is
\begin{equation*}
[S(\y^*)\1 ]_i
= \sum_{j = 1 }^n S(\y^*)_{ij}
= S(\y^*)_{ii} + \sum_{(i, j) \in \EC} S(\y^*)_{ij}
> 0. \end{equation*}
By Lemma~\ref{lemma:bipartite-rank},
$\rank\left\{S(\y^*)\right\} \geq n - 1 $. %
By Sylvester's rank inequality~\cite{Anton2014} and $X^* S(\y^*) = O$,
\begin{equation*}
\rank(X^*)
\leq n - \rank\left\{S(\y^*)\right\} + \rank\left\{X^*S(\y^*)\right\}
\leq n - (n - 1 )
= 1. \end{equation*}
%
Therefore, the SDP relaxation is exact. \end{proof}
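The sufficient condition of Theorem~\ref{thm:system-based-condition-connected} reduces to $|\EC|$ independent infeasibility checks. The sketch below (our illustration, not from the paper) abstracts the per-edge SDP feasibility solve behind a hypothetical `system_has_solution` oracle deciding whether \eqref{eq:system-nonpositive} is feasible for a given edge; any SDP solver could play that role.

```python
def certify_exactness(edges, system_has_solution):
    """Return True if the theorem's sufficient condition holds:
    for every edge (k, l), the system
        y >= 0,  S(y) PSD,  S(y)_{kl} <= 0
    has no solution.  `system_has_solution((k, l))` is a user-supplied
    oracle, e.g. one SDP feasibility solve per edge."""
    return all(not system_has_solution(edge) for edge in edges)

# Toy oracle: pretend the system is feasible only on edge (1, 3).
feasible_edges = {(1, 3)}
oracle = lambda e: e in feasible_edges
print(certify_exactness([(1, 2), (2, 3)], oracle))  # all infeasible -> True
print(certify_exactness([(1, 2), (1, 3)], oracle))  # (1, 3) feasible -> False
```

The certificate is one-sided, matching the theorem: `True` proves exactness under the stated assumptions, while `False` is inconclusive.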
The exactness of the SDP relaxation of a given QCQP can be determined
by checking the infeasibility of $|\EC|$ systems. Since \eqref{eq:system-nonpositive} can be formulated as
an SDP with the zero objective function,
checking its infeasibility is not difficult. Compared with Proposition~\ref{prop:forest-results} in \cite{Azuma2021},
Theorem~\ref{thm:system-based-condition-connected} can determine the exactness of a wider class of QCQPs
in terms of both the required assumption and the sparsity. As mentioned in Remark~\ref{rema:comparison-assumption},
the assumptions in Theorem~\ref{thm:system-based-condition-connected} are weaker
than those in Proposition~\ref{prop:forest-results},
and the admissible aggregated sparsity pattern $G$ is extended from forest graphs to bipartite graphs.
\subsection{Nonnegative off-diagonal QCQPs} \label{ssec:nonnegative-offdiagonal-connected}
We can also prove a known result by Theorem~\ref{thm:system-based-condition-connected},
i.e.,
the exactness of the SDP relaxation for QCQPs with nonnegative off-diagonal data matrices $Q^0, \ldots, Q^m$,
which was
referred to as Corollary~\ref{coro:sojoudi-corollary1 }\ref{cond:sojoudi-bipartite} above and was proved in~\cite{Sojoudi2014exactness}. In this subsection, the aggregated sparsity pattern graph $G(\VC, \EC)$ is assumed to be connected,
and $Q^0_{ij} > 0 $ is assumed to hold for all $(i, j) \in \EC$. These assumptions will be relaxed in
section~\ref{ssec:nonnegative-offdiagonal}.
\begin{coro} \label{coro:nonnegative-offdiagonal-connected}
Suppose that Assumption~\ref{assum:new-assumption} holds,
and the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp}
is bipartite and connected. If $Q^0 _{ij} > 0 $ for all $(i, j) \in \EC$ and
$Q^p_{ij} \geq 0 $ for all $(i, j) \in \EC$ and all $p \in [m]$,
then the SDP relaxation is exact. \end{coro}
\begin{proof}
Let $\hat{\y} \ge \0 $ be any nonnegative vector satisfying $S(\hat{\y}) \succeq O$. By the assumption, for any $(i, j) \in \EC$,
\begin{equation*}
S(\hat{\y})_{ij}
= Q^0 _{ij} + \sum_{p \in [m]} \hat{y}_p Q^p_{ij}
\geq Q^0 _{ij}
> 0. \end{equation*}
Hence, the system \eqref{eq:system-nonpositive} for every $(i, j) \in \EC$ has no solutions. Therefore, by Theorem~\ref{thm:system-based-condition-connected},
the SDP relaxation is exact. \end{proof}
\subsection{Conversion to QCQPs with bipartite structures} \label{ssec:comparison}
We show that a QCQP can be transformed into an equivalent QCQP with bipartite structures. We then compare Theorem~\ref{thm:system-based-condition-connected}
with Theorem~\ref{thm:sojoudi-theorem}. Since our result is obtained by
evaluating the rank of the dual SDP \eqref{eq:hsdrd} via strong duality while
the result in \cite{Sojoudi2014exactness} is derived by evaluating \eqref{eq:hsdr}, the classes of QCQPs whose SDP relaxations are shown to be exact differ. In this section, we show that
the class of QCQPs covered by Theorem~\ref{thm:system-based-condition-connected}
under Assumption~\ref{assum:new-assumption} is wider than that covered by Theorem~\ref{thm:sojoudi-theorem}. To transform a QCQP into an equivalent QCQP with bipartite structures and to apply Theorem~\ref{thm:system-based-condition-connected},
we define, for every $p$, a diagonal matrix $D^p \in \SymMat^n$ by adding a positive number
to the diagonal of $Q^p$. In addition, the off-diagonal elements of $Q^p$ are divided into
two nonnegative symmetric matrices $2 N^p_+, \, 2 N^p_- \in \SymMat^n$ according to their signs
so that $Q^p = D^p + 2 N^p_+ - 2 N^p_-$. More precisely, for an arbitrary positive number $\delta > 0 $,
\begin{align*}
D^p_{ii} &= Q^p_{ii} + 2 \delta, \\
2 [N^p_+]_{ij} &= \begin{cases}
+ Q^p_{ij} & \text{if $i \neq j$ and $Q^p_{ij} > 0 $}, \\
0 & \text{otherwise, }
\end{cases} \\
2 [N^p_-]_{ij} &= \begin{cases}
- Q^p_{ij} & \text{if $i \neq j$ and $Q^p_{ij} < 0 $}, \\
2 \delta & \text{if $i = j$}, \\
0 & \text{otherwise. }
\end{cases}
\end{align*}
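The decomposition $Q^p = D^p + 2 N^p_+ - 2 N^p_-$ can be implemented directly from the definitions above. The following Python sketch (our illustration) builds the three matrices for a symmetric $Q$ given as a nested list and verifies the identity elementwise.

```python
def decompose(Q, delta=1.0):
    """Split a symmetric matrix Q into D + 2*Np - 2*Nm following the text:
    D_ii = Q_ii + 2*delta (diagonal), 2*[Np]_ij = max(Q_ij, 0) off-diagonal,
    2*[Nm]_ij = max(-Q_ij, 0) off-diagonal, and [Nm]_ii = delta."""
    n = len(Q)
    D = [[Q[i][i] + 2 * delta if i == j else 0.0 for j in range(n)]
         for i in range(n)]
    Np = [[max(Q[i][j], 0.0) / 2 if i != j else 0.0 for j in range(n)]
          for i in range(n)]
    Nm = [[max(-Q[i][j], 0.0) / 2 if i != j else delta for j in range(n)]
          for i in range(n)]
    return D, Np, Nm

# Verify Q == D + 2*Np - 2*Nm on a small symmetric example.
Q = [[2.0, 1.0, -3.0],
     [1.0, 0.0,  0.0],
     [-3.0, 0.0, 5.0]]
D, Np, Nm = decompose(Q, delta=0.5)
recon = [[D[i][j] + 2 * Np[i][j] - 2 * Nm[i][j] for j in range(3)]
         for i in range(3)]
assert recon == Q
```

Note that the $2\delta$ shifts on the diagonals of $D^p$ and $2N^p_-$ cancel in the identity, so the reconstruction holds for any $\delta > 0$.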
We introduce a new variable $\z$ such that $\z \coloneqq -\x$. Then,
\begin{equation*}
\trans{\x}Q^p\x =
\trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} D^p + 2 N^p_+ & N^p_- \\ N^p_- & O \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix}.
\end{equation*}
The constraint $\z = -\x$ can be expressed as
$\|\x + \z\|^2 \leq 0 $,
which can be written as
\begin{equation*}
\trans{(\x + \z)}(\x + \z)
= \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} I & I \\ I & I \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix}
\leq 0. \end{equation*}
Thus, we have an equivalent QCQP:
\begin{equation} \label{eq:decomposed-hqcqp}
\begin{array}{rl}
\min & \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} D^0 + 2 N^0 _+ & N^0 _- \\ N^0 _- & O \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix} \\
\subto & \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} D^p + 2 N^p_+ & N^p_- \\ N^p_- & O \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix} \leq b_p, \quad p \in [m], \\
& \trans{\begin{bmatrix} \x \\ \z \end{bmatrix}}
\begin{bmatrix} I & I \\ I & I \end{bmatrix}
\begin{bmatrix} \x \\ \z \end{bmatrix} \leq 0. \end{array}
\end{equation}
Note that
\eqref{eq:decomposed-hqcqp} includes $m + 1 $ constraints and that
all off-diagonal elements of the data matrices are nonnegative since $N^p_+$ and $N^p_-$ are nonnegative. Let $\bar{G}(\bar{\VC}, \bar{\EC})$ denote
the aggregated sparsity pattern graph of \eqref{eq:decomposed-hqcqp}. The number of vertices in $\bar{G}$ is twice as many as that
in $G$ due to the additional variable $\z$. If $\bar{G}$ is bipartite and $Q^0 _{ij} \neq 0 $ for all $(i, j) \in \EC$,
the SDP relaxation of \eqref{eq:decomposed-hqcqp} is exact
since the assumptions of Corollary~\ref{coro:nonnegative-offdiagonal-connected} are satisfied. \begin{figure}[t]
\centering
\tikzset{every picture/. style={line width=0.75 pt}}
\begin{tikzpicture}[x=0.75 pt, y=0.75 pt, yscale=-0.6, xscale=0.6 ]
\draw [dash pattern={on 4.5 pt off 4.5 pt}] (40,40 ) -- (140,140 ) ;
\draw (40,40 ) -- (140,40 ) -- (140,140 ) -- (40,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (10,40 ) .. controls (10,23.43 ) and (23.43,10 ) .. (40,10 ) .. controls (56.57,10 ) and (70,23.43 ) .. (70,40 ) .. controls (70,56.57 ) and (56.57,70 ) .. (40,70 ) .. controls (23.43,70 ) and (10,56.57 ) .. (10,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (10,140 ) .. controls (10,123.43 ) and (23.43,110 ) .. (40,110 ) .. controls (56.57,110 ) and (70,123.43 ) .. (70,140 ) .. controls (70,156.57 ) and (56.57,170 ) .. (40,170 ) .. controls (23.43,170 ) and (10,156.57 ) .. (10,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (110,140 ) .. controls (110,123.43 ) and (123.43,110 ) .. (140,110 ) .. controls (156.57,110 ) and (170,123.43 ) .. (170,140 ) .. controls (170,156.57 ) and (156.57,170 ) .. (140,170 ) .. controls (123.43,170 ) and (110,156.57 ) .. (110,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (110,40 ) .. controls (110,23.43 ) and (123.43,10 ) .. (140,10 ) .. controls (156.57,10 ) and (170,23.43 ) .. (170,40 ) .. controls (170,56.57 ) and (156.57,70 ) .. (140,70 ) .. controls (123.43,70 ) and (110,56.57 ) .. (110,40 ) -- cycle ;
\draw (40,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1 }
\end{center}\end{minipage}};
\draw (140,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2 }
\end{center}\end{minipage}};
\draw (140,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3 }
\end{center}\end{minipage}};
\draw (40,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4 }
\end{center}\end{minipage}};
\draw (91.33,64.33 ) node [anchor=north west][inner sep=0.75 pt] [font=\large] [align=left] {$\displaystyle -$};
\draw (79.67,7 ) node [anchor=north west][inner sep=0.75 pt] [font=\large] [align=left] {$\displaystyle +$};
\draw (80.33,106.67 ) node [anchor=north west][inner sep=0.75 pt] [font=\large] [align=left] {$\displaystyle +$};
\draw (42,71 ) node [anchor=north west][inner sep=0.75 pt] [font=\large] [align=left] {$\displaystyle +$};
\draw (142,71 ) node [anchor=north west][inner sep=0.75 pt] [font=\large] [align=left] {$\displaystyle +$};
\end{tikzpicture}
\caption{
An aggregated sparsity pattern graph with edge signs. The solid and dashed lines show that
the corresponding $\sigma_{ij}$ are $+1 $ and $-1 $, respectively. Both types of lines indicate the existence of nonzero elements in some $Q^p$. }
\label{fig:example-edge-signs}
\end{figure}
\begin{example}
Now, consider an instance of QCQP~\eqref{eq:hqcqp}
with $n=4 $, $Q^p_{24 } = 0 \, (p \in [0, m])$, and the following edge signs:
\begin{equation*}
\begin{aligned}
\sigma_{12 } &= +1, & \sigma_{13 } &= -1, & \sigma_{14 } &= +1, & \sigma_{23 } &= +1, & \sigma_{34 } &= +1. \end{aligned}
\end{equation*}
\autoref{fig:example-edge-signs} illustrates the above signs. We also suppose that $Q^0_{ij} \neq 0 $ for all $(i, j) \in \EC$. Then, for any distinct $i, j \in [n]$,
the set $\left\{Q^0_{ij}, \ldots, Q^m_{ij}\right\}$ is sign-definite by definition. Since the graph contains an odd cycle, e.g., $\{(1, 2 ), (2, 3 ), (3, 1 )\}$,
the aggregated sparsity pattern graph of a QCQP with the above edge signs is not bipartite. Next, we transform the QCQP instance into an equivalent QCQP with bipartite structures. Since $n=4 $, we see $\bar{\VC} = [8 ]$. \autoref{fig:example-transformed-sparsity-before} displays $\bar{G}$, which is constructed from
\begin{equation*}
\begin{bmatrix} D^p + 2 N^p_+ & N^p_- \\ N^p_- & O \end{bmatrix} =
\left[
\begin{array}{c|c}
\begin{matrix}
Q^p_{11 } & Q^p_{12 } & 0 & Q^p_{14 } \\
Q^p_{21 } & Q^p_{22 } & Q^p_{23 } & 0 \\
0 & Q^p_{32 } & Q^p_{33 } & Q^p_{34 } \\
Q^p_{41 } & 0 & Q^p_{43 } & Q^p_{44 }
\end{matrix} &
\begin{matrix}
0 & 0 & -\frac{1 }{2 }Q^p_{13 } & 0 \\
0 & 0 & 0 & 0 \\
-\frac{1 }{2 }Q^p_{31 } & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix} \\ \hline
\begin{matrix}
0 & 0 & -\frac{1 }{2 }Q^p_{13 } & 0 \\
0 & 0 & 0 & 0 \\
-\frac{1 }{2 }Q^p_{31 } & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix} & O \\
\end{array}
\right] +
\delta \left[
\begin{array}{c|c}
2 I & I \\ \hline
I & O
\end{array}
\right]
\end{equation*}
and $[I\; I; I\; I]$. There exist three types of edges:
\begin{equation*}
\def1.5 {1.5 }
\left\{\begin{array}{rl}
\text{ (i)} & (1, 2 ), (2, 3 ), (3, 4 ), (1, 4 ); \\
\text{ (ii)} & (1, 7 ), (3, 5 ); \\
\text{(iii)} & (1, 5 ), (2, 6 ), (3, 7 ), (4, 8 ). \end{array}\right. \end{equation*}
The edges in (i) are derived from
the four nonzero off-diagonal entries of $N^p_+$ in the upper-left block of the data matrices,
and the edges in (ii) from the two nonzero entries of $N^p_-$ in the upper-right and lower-left blocks. The edges in (iii) represent the off-diagonal elements of $[I\; I; I\; I]$ in the new constraint. In \autoref{fig:example-transformed-sparsity-before},
the cycle drawn in solid lines on the vertices $\{1,2,3,4 \}$ is bipartite,
and hence its vertices can be divided into two disjoint sets $L_1 = \{1, 3 \}$ and $R_1 = \{2, 4 \}$. If we let $L_2 \coloneqq \{6, 8 \}$ and $R_2 \coloneqq \{5, 7 \}$,
there are no edges between any distinct $i, j$ in $L_1 \cup L_2 $,
and the same is true for $R_1 \cup R_2 $. The graph $\bar{G}$ is thus bipartite (\autoref{fig:example-transformed-sparsity-after}). We can conclude that the SDP relaxation of \eqref{eq:decomposed-hqcqp} is exact
by Corollary~\ref{coro:nonnegative-offdiagonal-connected}. \end{example}
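The example's conclusion can also be verified mechanically. The sketch below (our illustration, not from the paper) builds the edge set of $\bar{G}$ from the signs $\sigma_{ij}$ by the three rules of the transformation: a positive edge $(i, j)$ stays, a negative edge becomes $(i, j+n)$ and $(j, i+n)$, and each vertex $i$ gains the edge $(i, i+n)$; bipartiteness is then tested by 2-coloring.

```python
from collections import deque

def bipartite(n, edges):
    """2-coloring test on vertices 1..n (connectivity not required)."""
    adj = [[] for _ in range(n + 1)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    color = [None] * (n + 1)
    for s in range(1, n + 1):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False          # odd cycle found
    return True

def doubled_edges(n, sigma):
    """Edge set of the transformed graph on 2n vertices from signs sigma."""
    bar = []
    for (i, j), s in sigma.items():
        if s == +1:
            bar.append((i, j))                       # rule (i)
        else:
            bar.extend([(i, j + n), (j, i + n)])     # rule (ii)
    bar.extend((i, i + n) for i in range(1, n + 1))  # rule (iii)
    return bar

# Signs of the example: only (1, 3) is negative.
sigma = {(1, 2): +1, (1, 3): -1, (1, 4): +1, (2, 3): +1, (3, 4): +1}
print(bipartite(4, list(sigma)))              # odd cycle 1-2-3 -> False
print(bipartite(8, doubled_edges(4, sigma)))  # transformed graph -> True
```

The resulting 2-coloring of the transformed graph matches the parts $L_1 \cup L_2 = \{1,3,6,8\}$ and $R_1 \cup R_2 = \{2,4,5,7\}$ found in the example.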
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.45 \textwidth}
\centering
\tikzset{every picture/. style={line width=0.75 pt}}
\begin{tikzpicture}[x=0.75 pt, y=0.75 pt, yscale=-0.6, xscale=0.6 ]
\draw [dash pattern={on 4.5 pt off 4.5 pt}] (70,60 ) -- (270,160 ) ;
\draw [dash pattern={on 4.5 pt off 4.5 pt}] (70,160 ) -- (270,60 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (270,60 ) -- (270,160 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (70,60 ) -- (70,160 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (170,60 ) -- (170,160 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (370,60 ) -- (370,160 ) ;
\draw (370,10 ) -- (70,10 ) -- (70,60 ) -- (370,60 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (40,60 ) .. controls (40,43.43 ) and (53.43,30 ) .. (70,30 ) .. controls (86.57,30 ) and (100,43.43 ) .. (100,60 ) .. controls (100,76.57 ) and (86.57,90 ) .. (70,90 ) .. controls (53.43,90 ) and (40,76.57 ) .. (40,60 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (240,60 ) .. controls (240,43.43 ) and (253.43,30 ) .. (270,30 ) .. controls (286.57,30 ) and (300,43.43 ) .. (300,60 ) .. controls (300,76.57 ) and (286.57,90 ) .. (270,90 ) .. controls (253.43,90 ) and (240,76.57 ) .. (240,60 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (140,60 ) .. controls (140,43.43 ) and (153.43,30 ) .. (170,30 ) .. controls (186.57,30 ) and (200,43.43 ) .. (200,60 ) .. controls (200,76.57 ) and (186.57,90 ) .. (170,90 ) .. controls (153.43,90 ) and (140,76.57 ) .. (140,60 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (340,60 ) .. controls (340,43.43 ) and (353.43,30 ) .. (370,30 ) .. controls (386.57,30 ) and (400,43.43 ) .. (400,60 ) .. controls (400,76.57 ) and (386.57,90 ) .. (370,90 ) .. controls (353.43,90 ) and (340,76.57 ) .. (340,60 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (40,160 ) .. controls (40,143.43 ) and (53.43,130 ) .. (70,130 ) .. controls (86.57,130 ) and (100,143.43 ) .. (100,160 ) .. controls (100,176.57 ) and (86.57,190 ) .. (70,190 ) .. controls (53.43,190 ) and (40,176.57 ) .. (40,160 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (240,160 ) .. controls (240,143.43 ) and (253.43,130 ) .. (270,130 ) .. controls (286.57,130 ) and (300,143.43 ) .. (300,160 ) .. controls (300,176.57 ) and (286.57,190 ) .. (270,190 ) .. controls (253.43,190 ) and (240,176.57 ) .. (240,160 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (140,160 ) .. controls (140,143.43 ) and (153.43,130 ) .. (170,130 ) .. controls (186.57,130 ) and (200,143.43 ) .. (200,160 ) .. controls (200,176.57 ) and (186.57,190 ) .. (170,190 ) .. controls (153.43,190 ) and (140,176.57 ) .. (140,160 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (340,160 ) .. controls (340,143.43 ) and (353.43,130 ) .. (370,130 ) .. controls (386.57,130 ) and (400,143.43 ) .. (400,160 ) .. controls (400,176.57 ) and (386.57,190 ) .. (370,190 ) .. controls (353.43,190 ) and (340,176.57 ) .. (340,160 ) -- cycle ;
\draw (170,60 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2 }
\end{center}\end{minipage}};
\draw (370,60 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4 }
\end{center}\end{minipage}};
\draw (70,60 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1 }
\end{center}\end{minipage}};
\draw (270,60 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3 }
\end{center}\end{minipage}};
\draw (170,160 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 6 }
\end{center}\end{minipage}};
\draw (370,160 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 8 }
\end{center}\end{minipage}};
\draw (70,160 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 5 }
\end{center}\end{minipage}};
\draw (270,160 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 7 }
\end{center}\end{minipage}};
\draw (15,60 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{20.4 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle \x$
\end{center}\end{minipage}};
\draw (15,160 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{20.4 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle \z$
\end{center}\end{minipage}};
\end{tikzpicture}
\caption{
Vertices are divided into two groups:
the upper vertices correspond to $\x$
while the lower ones correspond to $\z$. }
\label{fig:example-transformed-sparsity-before}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45 \textwidth}
\centering
\tikzset{every picture/. style={line width=0.75 pt}}
\begin{tikzpicture}[x=0.75 pt, y=0.75 pt, yscale=-0.6, xscale=0.6 ]
\draw (215,110 ) -- (315,70 ) -- (388,125.35 ) -- (141,53.35 ) -- cycle ;
\draw [dash pattern={on 4.5 pt off 4.5 pt}] (115,40 ) -- (315,140 ) ;
\draw [dash pattern={on 4.5 pt off 4.5 pt}] (115,140 ) -- (315,40 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (315,40 ) -- (315,140 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (115,40 ) -- (115,140 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (215,40 ) -- (215,140 ) ;
\draw [dash pattern={on 0.84 pt off 2.51 pt}] (415,40 ) -- (415,140 ) ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (85,40 ) .. controls (85,23.43 ) and (98.43,10 ) .. (115,10 ) .. controls (131.57,10 ) and (145,23.43 ) .. (145,40 ) .. controls (145,56.57 ) and (131.57,70 ) .. (115,70 ) .. controls (98.43,70 ) and (85,56.57 ) .. (85,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (285,40 ) .. controls (285,23.43 ) and (298.43,10 ) .. (315,10 ) .. controls (331.57,10 ) and (345,23.43 ) .. (345,40 ) .. controls (345,56.57 ) and (331.57,70 ) .. (315,70 ) .. controls (298.43,70 ) and (285,56.57 ) .. (285,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (185,40 ) .. controls (185,23.43 ) and (198.43,10 ) .. (215,10 ) .. controls (231.57,10 ) and (245,23.43 ) .. (245,40 ) .. controls (245,56.57 ) and (231.57,70 ) .. (215,70 ) .. controls (198.43,70 ) and (185,56.57 ) .. (185,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (385,40 ) .. controls (385,23.43 ) and (398.43,10 ) .. (415,10 ) .. controls (431.57,10 ) and (445,23.43 ) .. (445,40 ) .. controls (445,56.57 ) and (431.57,70 ) .. (415,70 ) .. controls (398.43,70 ) and (385,56.57 ) .. (385,40 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (85,140 ) .. controls (85,123.43 ) and (98.43,110 ) .. (115,110 ) .. controls (131.57,110 ) and (145,123.43 ) .. (145,140 ) .. controls (145,156.57 ) and (131.57,170 ) .. (115,170 ) .. controls (98.43,170 ) and (85,156.57 ) .. (85,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (285,140 ) .. controls (285,123.43 ) and (298.43,110 ) .. (315,110 ) .. controls (331.57,110 ) and (345,123.43 ) .. (345,140 ) .. controls (345,156.57 ) and (331.57,170 ) .. (315,170 ) .. controls (298.43,170 ) and (285,156.57 ) .. (285,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (185,140 ) .. controls (185,123.43 ) and (198.43,110 ) .. (215,110 ) .. controls (231.57,110 ) and (245,123.43 ) .. (245,140 ) .. controls (245,156.57 ) and (231.57,170 ) .. (215,170 ) .. controls (198.43,170 ) and (185,156.57 ) .. (185,140 ) -- cycle ;
\draw [fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (385,140 ) .. controls (385,123.43 ) and (398.43,110 ) .. (415,110 ) .. controls (431.57,110 ) and (445,123.43 ) .. (445,140 ) .. controls (445,156.57 ) and (431.57,170 ) .. (415,170 ) .. controls (398.43,170 ) and (385,156.57 ) .. (385,140 ) -- cycle ;
\draw (215,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 6 }
\end{center}\end{minipage}};
\draw (415,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 8 }
\end{center}\end{minipage}};
\draw (115,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 1 }
\end{center}\end{minipage}};
\draw (315,40 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 3 }
\end{center}\end{minipage}};
\draw (215,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 2 }
\end{center}\end{minipage}};
\draw (415,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 4 }
\end{center}\end{minipage}};
\draw (115,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 5 }
\end{center}\end{minipage}};
\draw (315,140 ) node [font=\large] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
{\fontfamily{pcr}\selectfont 7 }
\end{center}\end{minipage}};
\draw (30,40 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{54.4 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle L_{1 } \cup L_{2 }$
\end{center}\end{minipage}};
\draw (30,140 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{54.4 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle R_{1 } \cup R_{2 }$
\end{center}\end{minipage}};
\end{tikzpicture}
\caption{
Vertices are reorganized to show the bipartite structure of the graph. }
\label{fig:example-transformed-sparsity-after}
\end{subfigure}
\caption{
Aggregated sparsity pattern graph of the transformed example. The solid lines and the dashed lines come from $N^p_+$ and $N^p_-$, respectively. The dotted lines are for the new constraint $\|\x + \z\|^2 \leq 0 $. }
\label{fig:example-transformed-sparsity}
\end{figure}
Similarly, the SDP relaxation of any QCQP that satisfies Theorem~\ref{thm:sojoudi-theorem}
can be shown to be exact by this transformation. Therefore,
Theorem~\ref{thm:system-based-condition-connected} covers a wider class of QCQPs than Theorem~\ref{thm:sojoudi-theorem}. We prove this assertion in the following.
\begin{prop}
\label{prop:weaker-than-sojoudi-connected}
Suppose that Assumption~\ref{assum:new-assumption} holds,
the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp}
is connected,
and for all $(i, j) \in \EC$, $Q^0 _{ij} \neq 0 $. If \eqref{eq:hqcqp} satisfies the assumption of Theorem~\ref{thm:sojoudi-theorem},
then \eqref{eq:hqcqp} also satisfies that of Corollary~\ref{coro:nonnegative-offdiagonal-connected}. In addition, the exactness of its SDP relaxation
can be proved by Theorem~\ref{thm:system-based-condition-connected}. \end{prop}
\begin{proof}
Let $\bar{G}(\bar{\VC}, \bar{\EC})$ be
the aggregated sparsity pattern graph of \eqref{eq:decomposed-hqcqp}. Since the number of variables is $2 n$, $\bar{\VC} = [2 n]$ holds. The edges in $\bar{G}$ are:
\begin{equation*}
\def1.5 {1.5 }
\left\{\begin{array}{rll}
\text{ (i)} & (i, j) & \text{for $i, j \in \VC$ such that $\sigma_{ij} = +1 $}, \\
\text{ (ii)} & (i, j + n), (j, i + n) &
\text{for $i, j \in \VC$ such that $\sigma_{ij} = -1 $}, \\
\text{(iii)} & (i, i + n) & \text{for $i \in \VC$}. \end{array}\right. \end{equation*}
%
Note that
no edges exist among the vertices in $\{n+1, \ldots, 2 n\}$. By the definition of \eqref{eq:decomposed-hqcqp},
an edge $(i, j)$ with $\sigma_{ij} = -1 $ in $G$
is decomposed into two paths with positive signs in $\bar{G}$:
(a) the edges $(j, i + n)$ and $(i + n, i)$;
(b) the edges $(i, j + n)$ and $(j + n, j)$, as shown in \autoref{fig:transform-minus-edge-sign}. Since $G$ is connected, so is the graph $\bar{G}$. Recall that all off-diagonal elements of
the data matrices in \eqref{eq:decomposed-hqcqp} are nonnegative, since both $N^p_+$ and $N^p_-$ are nonnegative matrices. In particular, for each $(i, j) \in \bar{\EC}$,
the $(i, j)$th element of the matrix in the objective function is not only nonnegative but also positive by assumption. Thus, to apply Corollary~\ref{coro:nonnegative-offdiagonal-connected}, it remains to show that $\bar{G}$ is bipartite. Assume to the contrary that there exists an odd cycle $\bar{\CC}$ in $\bar{G}$. Let $\bar{\UC} \subseteq \{n+1, \ldots, 2 n\}$ denote the set of vertices of $\bar{\CC}$ contained in $\{n+1, \ldots, 2 n\}$. As illustrated in \autoref{fig:transform-minus-edge-sign},
any vertex $v \coloneqq i + n \in \bar{\UC}$ connects with $i$ and $j \in \VC$ in $\bar{\CC}$. Hence for every vertex $v \in \bar{\UC}$,
by removing the edges $(i, v)$ and $(v, j)$ from $\bar{\CC}$ and
adding the edge $(i, j)$ with the negative sign to $\bar{\CC}$,
we obtain a new cycle $\CC$ in $G$. Since $2 |\bar{\UC}|$ edges are removed and $|\bar{\UC}|$ edges are added in this procedure,
it follows $|\CC| = |\bar{\CC}| - 2 |\bar{\UC}| + |\bar{\UC}| = |\bar{\CC}| - |\bar{\UC}|$. \autoref{fig:removing-adding-cycle-edges} displays a case for $|\bar{\UC}| = 2 $. Thus, if $|\bar{\UC}|$ is even (odd), $|\CC|$ is odd (resp., even),
hence, by \eqref{eq:sign-constraint-simple-cycle} in Theorem~\ref{thm:sojoudi-theorem},
the number of negative edges in $\CC$ must be odd (resp., even). However, the number of negative edges in $\CC$ is equal to $|\bar{\UC}|$
since $\bar{\CC}$ has no negative edges and all the additional edges
in the conversion from $\bar{\CC}$ to $\CC$
are negative. This is a contradiction. Therefore, there are no odd cycles in $\bar{G}$,
which implies $\bar{G}$ is bipartite. Since \eqref{eq:decomposed-hqcqp} satisfies the assumptions of Corollary~\ref{coro:nonnegative-offdiagonal-connected},
it also satisfies the assumptions of Theorem~\ref{thm:system-based-condition-connected}. \end{proof}
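The decomposition and the parity argument above can be checked computationally. The following is a minimal illustrative sketch (the helper names are ours, not from the paper): it builds $\bar{G}$ on $2n$ vertices from a signed graph $G$ via rules (i)--(iii) and tests bipartiteness by BFS 2-coloring. A triangle (odd cycle) with one negative edge satisfies the sign condition and yields a bipartite $\bar{G}$, while a triangle with two negative edges does not.

```python
# Illustrative sketch (not the authors' code): decompose a signed graph G
# on n vertices into G-bar on 2n vertices following rules (i)-(iii), then
# test bipartiteness via BFS 2-coloring.
from collections import deque

def decompose(n, signed_edges):
    """signed_edges: dict {(i, j): sigma_ij} with i < j and sigma in {+1, -1}."""
    edges = set()
    for (i, j), sigma in signed_edges.items():
        if sigma == +1:
            edges.add((i, j))                       # rule (i): keep positive edges
        else:
            edges.update({(i, j + n), (j, i + n)})  # rule (ii): split negative edges
    for i in range(n):
        edges.add((i, i + n))                       # rule (iii): link each copy
    return edges

def is_bipartite(num_vertices, edges):
    adj = {v: [] for v in range(num_vertices)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for s in range(num_vertices):
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False   # odd cycle found
    return True

# Triangle with one negative edge: the odd cycle carries an odd number of
# negative edges, so G-bar is bipartite.
ok = decompose(3, {(0, 1): +1, (1, 2): +1, (0, 2): -1})
# Triangle with two negative edges violates the sign condition.
bad = decompose(3, {(0, 1): +1, (1, 2): -1, (0, 2): -1})
```

Consistently with the proof, `is_bipartite(6, ok)` holds while `is_bipartite(6, bad)` fails: in the second case the decomposition produces an odd cycle such as $0$-$1$-$5$-$2$-$3$-$0$.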
\begin{figure}[t]
\centering
\tikzset{every picture/.style={line width=0.75 pt}}
\begin{minipage}[t]{0.54 \textwidth}
\centering
\begin{tikzpicture}[x=0.75 pt, y=0.75 pt, yscale=-0.6, xscale=0.6 ]
\draw [draw opacity=0 ] (324.92,109.3 ) .. controls (318.07,127.62 ) and (305,140 ) .. (290,140 ) .. controls (267.91,140 ) and (250,113.14 ) .. (250,80 ) .. controls (250,46.86 ) and (267.91,20 ) .. (290,20 ) .. controls (305,20 ) and (318.07,32.38 ) .. (324.92,50.7 ) -- (290,80 ) -- cycle ; \draw (324.92,109.3 ) .. controls (318.07,127.62 ) and (305,140 ) .. (290,140 ) .. controls (267.91,140 ) and (250,113.14 ) .. (250,80 ) .. controls (250,46.86 ) and (267.91,20 ) .. (290,20 ) .. controls (305,20 ) and (318.07,32.38 ) .. (324.92,50.7 ) ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][line width=1.5 ] [dash pattern={on 1.69 pt off 2.76 pt}] (324.92,50.7 ) -- (404.9,50.7 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (318.92,50.7 ) .. controls (318.92,47.39 ) and (321.6,44.7 ) .. (324.92,44.7 ) .. controls (328.23,44.7 ) and (330.92,47.39 ) .. (330.92,50.7 ) .. controls (330.92,54.02 ) and (328.23,56.7 ) .. (324.92,56.7 ) .. controls (321.6,56.7 ) and (318.92,54.02 ) .. (318.92,50.7 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (318.92,109.3 ) .. controls (318.92,105.98 ) and (321.6,103.3 ) .. (324.92,103.3 ) .. controls (328.23,103.3 ) and (330.92,105.98 ) .. (330.92,109.3 ) .. controls (330.92,112.61 ) and (328.23,115.3 ) .. (324.92,115.3 ) .. controls (321.6,115.3 ) and (318.92,112.61 ) .. (318.92,109.3 ) -- cycle ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][line width=1.5 ] [dash pattern={on 1.69 pt off 2.76 pt}] (324.92,109.3 ) -- (404.9,109.3 ) ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][line width=1.5 ] [dash pattern={on 5.63 pt off 4.5 pt}] (324.92,50.7 ) -- (404.9,109.3 ) ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][line width=1.5 ] [dash pattern={on 5.63 pt off 4.5 pt}] (324.92,109.3 ) -- (404.9,50.7 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (398.9,50.7 ) .. controls (398.9,47.39 ) and (401.59,44.7 ) .. (404.9,44.7 ) .. controls (408.21,44.7 ) and (410.9,47.39 ) .. (410.9,50.7 ) .. controls (410.9,54.01 ) and (408.21,56.7 ) .. (404.9,56.7 ) .. controls (401.59,56.7 ) and (398.9,54.01 ) .. (398.9,50.7 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (398.9,109.3 ) .. controls (398.9,105.98 ) and (401.59,103.3 ) .. (404.9,103.3 ) .. controls (408.21,103.3 ) and (410.9,105.98 ) .. (410.9,109.3 ) .. controls (410.9,112.61 ) and (408.21,115.3 ) .. (404.9,115.3 ) .. controls (401.59,115.3 ) and (398.9,112.61 ) .. (398.9,109.3 ) -- cycle ;
\draw [draw opacity=0 ] (114.92,109.3 ) .. controls (108.07,127.62 ) and (95,140 ) .. (80,140 ) .. controls (57.91,140 ) and (40,113.14 ) .. (40,80 ) .. controls (40,46.86 ) and (57.91,20 ) .. (80,20 ) .. controls (95,20 ) and (108.07,32.38 ) .. (114.92,50.7 ) -- (80,80 ) -- cycle ; \draw (114.92,109.3 ) .. controls (108.07,127.62 ) and (95,140 ) .. (80,140 ) .. controls (57.91,140 ) and (40,113.14 ) .. (40,80 ) .. controls (40,46.86 ) and (57.91,20 ) .. (80,20 ) .. controls (95,20 ) and (108.07,32.38 ) .. (114.92,50.7 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (108.92,50.7 ) .. controls (108.92,47.39 ) and (111.6,44.7 ) .. (114.92,44.7 ) .. controls (118.23,44.7 ) and (120.92,47.39 ) .. (120.92,50.7 ) .. controls (120.92,54.02 ) and (118.23,56.7 ) .. (114.92,56.7 ) .. controls (111.6,56.7 ) and (108.92,54.02 ) .. (108.92,50.7 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (108.92,109.3 ) .. controls (108.92,105.98 ) and (111.6,103.3 ) .. (114.92,103.3 ) .. controls (118.23,103.3 ) and (120.92,105.98 ) .. (120.92,109.3 ) .. controls (120.92,112.61 ) and (118.23,115.3 ) .. (114.92,115.3 ) .. controls (111.6,115.3 ) and (108.92,112.61 ) .. (108.92,109.3 ) -- cycle ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][line width=1.5 ] (114.92,50.7 ) -- (114.92,109.3 ) ;
\draw (150,80 ) -- (167.5,60 ) -- (167.5,70 ) -- (202.5,70 ) -- (202.5,60 ) -- (220,80 ) -- (202.5,100 ) -- (202.5,90 ) -- (167.5,90 ) -- (167.5,100 ) -- cycle ;
\draw (290,157.5 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{68 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle \mathcal{C} \setminus \{( i, j)\}$
\end{center}\end{minipage}};
\draw (240,30 ) node [font=\large] [align=left] {\begin{minipage}[lt]{27.2 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle \overline{G}$
\end{center}\end{minipage}};
\draw (404.9,34.7 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle i+n$
\end{center}\end{minipage}};
\draw (404.9,129.3 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{40.8 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle j+n$
\end{center}\end{minipage}};
\draw (334.92,34.7 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle i$
\end{center}\end{minipage}};
\draw (334.92,129.3 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle j$
\end{center}\end{minipage}};
\draw (80,157.5 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{68 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle \mathcal{C}$
\end{center}\end{minipage}};
\draw (30,30 ) node [font=\large] [align=left] {\begin{minipage}[lt]{27.2 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle G$
\end{center}\end{minipage}};
\draw (124.92,34.7 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle i$
\end{center}\end{minipage}};
\draw (124.92,129.3 ) node [font=\normalsize] [align=left] {\begin{minipage}[lt]{13.6 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle j$
\end{center}\end{minipage}};
\draw (100,80 ) node [font=\large] [align=left] {\begin{minipage}[lt]{20.4 pt}\setlength\topsep{0 pt}
\begin{center}
$\displaystyle -$
\end{center}\end{minipage}};
\end{tikzpicture}
\caption{
An edge with the negative sign. If the cycle $\CC$ has the edge $(i, j)$ with $\sigma_{ij} = -1 $,
then $(i, j)$ is decomposed into two paths:
(a) $(j, i+n)$ and $(i+n, i)$ via the vertex $i+n$;
(b) $(i, j+n)$ and $(j+n, j)$ via the vertex $j+n$. }
\label{fig:transform-minus-edge-sign}
\end{minipage}
\hfill
\begin{minipage}[t]{0.42 \textwidth}
\centering
\begin{tikzpicture}[x=0.75 pt, y=0.75 pt, yscale=-0.725, xscale=0.725 ]
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (45,10 ) .. controls (45,7.24 ) and (47.24,5 ) .. (50,5 ) .. controls (52.76,5 ) and (55,7.24 ) .. (55,10 ) .. controls (55,12.76 ) and (52.76,15 ) .. (50,15 ) .. controls (47.24,15 ) and (45,12.76 ) .. (45,10 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (45,110 ) .. controls (45,107.24 ) and (47.24,105 ) .. (50,105 ) .. controls (52.76,105 ) and (55,107.24 ) .. (55,110 ) .. controls (55,112.76 ) and (52.76,115 ) .. (50,115 ) .. controls (47.24,115 ) and (45,112.76 ) .. (45,110 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (45,40 ) .. controls (45,37.24 ) and (47.24,35 ) .. (50,35 ) .. controls (52.76,35 ) and (55,37.24 ) .. (55,40 ) .. controls (55,42.76 ) and (52.76,45 ) .. (50,45 ) .. controls (47.24,45 ) and (45,42.76 ) .. (45,40 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (45,80 ) .. controls (45,77.24 ) and (47.24,75 ) .. (50,75 ) .. controls (52.76,75 ) and (55,77.24 ) .. (55,80 ) .. controls (55,82.76 ) and (52.76,85 ) .. (50,85 ) .. controls (47.24,85 ) and (45,82.76 ) .. (45,80 ) -- cycle ;
\draw (10,10 ) -- (50,10 ) -- (90,25 ) -- (50,40 ) -- (50,80 ) -- (90,95 ) -- (50,110 ) -- (10,110 ) -- (10,10 );
\draw [draw opacity=0 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (45,64 ) -- (55,59 ) -- (55,55 ) -- (45,60 ) -- cycle ;
\draw (45,60 ) -- (55,55 ) ;
\draw (45,64 ) -- (55,59 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (5,64 ) -- (15,59 ) -- (15,55 ) -- (5,60 ) -- cycle ;
\draw (5,60 ) -- (15,55 ) ;
\draw (5,64 ) -- (15,59 ) ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (85,25 ) .. controls (85,22.24 ) and (87.24,20 ) .. (90,20 ) .. controls (92.76,20 ) and (95,22.24 ) .. (95,25 ) .. controls (95,27.76 ) and (92.76,30 ) .. (90,30 ) .. controls (87.24,30 ) and (85,27.76 ) .. (85,25 ) -- cycle ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (85,95 ) .. controls (85,92.24 ) and (87.24,90 ) .. (90,90 ) .. controls (92.76,90 ) and (95,92.24 ) .. (95,95 ) .. controls (95,97.76 ) and (92.76,100 ) .. (90,100 ) .. controls (87.24,100 ) and (85,97.76 ) .. (85,95 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (125,10 ) .. controls (125,7.24 ) and (127.24,5 ) .. (130,5 ) .. controls (132.76,5 ) and (135,7.24 ) .. (135,10 ) .. controls (135,12.76 ) and (132.76,15 ) .. (130,15 ) .. controls (127.24,15 ) and (125,12.76 ) .. (125,10 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (125,110 ) .. controls (125,107.24 ) and (127.24,105 ) .. (130,105 ) .. controls (132.76,105 ) and (135,107.24 ) .. (135,110 ) .. controls (135,112.76 ) and (132.76,115 ) .. (130,115 ) .. controls (127.24,115 ) and (125,112.76 ) .. (125,110 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (125,40 ) .. controls (125,37.24 ) and (127.24,35 ) .. (130,35 ) .. controls (132.76,35 ) and (135,37.24 ) .. (135,40 ) .. controls (135,42.76 ) and (132.76,45 ) .. (130,45 ) .. controls (127.24,45 ) and (125,42.76 ) .. (125,40 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (125,80 ) .. controls (125,77.24 ) and (127.24,75 ) .. (130,75 ) .. controls (132.76,75 ) and (135,77.24 ) .. (135,80 ) .. controls (135,82.76 ) and (132.76,85 ) .. (130,85 ) .. controls (127.24,85 ) and (125,82.76 ) .. (125,80 ) -- cycle ;
\draw (130,10 ) -- (170,25 ) -- (130,40 ) ;
\draw (130,80 ) -- (170,95 ) -- (130,110 ) ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (165,25 ) .. controls (165,22.24 ) and (167.24,20 ) .. (170,20 ) .. controls (172.76,20 ) and (175,22.24 ) .. (175,25 ) .. controls (175,27.76 ) and (172.76,30 ) .. (170,30 ) .. controls (167.24,30 ) and (165,27.76 ) .. (165,25 ) -- cycle ;
\draw [color={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , draw opacity=1 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (165,95 ) .. controls (165,92.24 ) and (167.24,90 ) .. (170,90 ) .. controls (172.76,90 ) and (175,92.24 ) .. (175,95 ) .. controls (175,97.76 ) and (172.76,100 ) .. (170,100 ) .. controls (167.24,100 ) and (165,97.76 ) .. (165,95 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (215,10 ) .. controls (215,7.24 ) and (217.24,5 ) .. (220,5 ) .. controls (222.76,5 ) and (225,7.24 ) .. (225,10 ) .. controls (225,12.76 ) and (222.76,15 ) .. (220,15 ) .. controls (217.24,15 ) and (215,12.76 ) .. (215,10 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (215,110 ) .. controls (215,107.24 ) and (217.24,105 ) .. (220,105 ) .. controls (222.76,105 ) and (225,107.24 ) .. (225,110 ) .. controls (225,112.76 ) and (222.76,115 ) .. (220,115 ) .. controls (217.24,115 ) and (215,112.76 ) .. (215,110 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (215,40 ) .. controls (215,37.24 ) and (217.24,35 ) .. (220,35 ) .. controls (222.76,35 ) and (225,37.24 ) .. (225,40 ) .. controls (225,42.76 ) and (222.76,45 ) .. (220,45 ) .. controls (217.24,45 ) and (215,42.76 ) .. (215,40 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (215,80 ) .. controls (215,77.24 ) and (217.24,75 ) .. (220,75 ) .. controls (222.76,75 ) and (225,77.24 ) .. (225,80 ) .. controls (225,82.76 ) and (222.76,85 ) .. (220,85 ) .. controls (217.24,85 ) and (215,82.76 ) .. (215,80 ) -- cycle ;
\draw (220,10 ) -- (220,40 ) ;
\draw (220,80 ) -- (220,110 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (320,10 ) .. controls (320,7.24 ) and (322.24,5 ) .. (325,5 ) .. controls (327.76,5 ) and (330,7.24 ) .. (330,10 ) .. controls (330,12.76 ) and (327.76,15 ) .. (325,15 ) .. controls (322.24,15 ) and (320,12.76 ) .. (320,10 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (320,110 ) .. controls (320,107.24 ) and (322.24,105 ) .. (325,105 ) .. controls (327.76,105 ) and (330,107.24 ) .. (330,110 ) .. controls (330,112.76 ) and (327.76,115 ) .. (325,115 ) .. controls (322.24,115 ) and (320,112.76 ) .. (320,110 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (320,40 ) .. controls (320,37.24 ) and (322.24,35 ) .. (325,35 ) .. controls (327.76,35 ) and (330,37.24 ) .. (330,40 ) .. controls (330,42.76 ) and (327.76,45 ) .. (325,45 ) .. controls (322.24,45 ) and (320,42.76 ) .. (320,40 ) -- cycle ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 0 ; green, 0 ; blue, 0 } , fill opacity=1 ] (320,80 ) .. controls (320,77.24 ) and (322.24,75 ) .. (325,75 ) .. controls (327.76,75 ) and (330,77.24 ) .. (330,80 ) .. controls (330,82.76 ) and (327.76,85 ) .. (325,85 ) .. controls (322.24,85 ) and (320,82.76 ) .. (320,80 ) -- cycle ;
\draw (285,10 ) -- (285,110 ) -- (325,110 ) -- (325,10 ) -- (285,10 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (320,64 ) -- (330,59 ) -- (330,55 ) -- (320,60 ) -- cycle ;
\draw (320,60 ) -- (330,55 ) ;
\draw (320,64 ) -- (330,59 ) ;
\draw [draw opacity=0 ][fill={rgb, 255 :red, 255 ; green, 255 ; blue, 255 } , fill opacity=1 ] (280,64 ) -- (290,59 ) -- (290,55 ) -- (280,60 ) -- cycle ;
\draw (280,60 ) -- (290,55 ) ;
\draw (280,64 ) -- (290,59 ) ;
\draw (110,60 ) node [align=left] {$\displaystyle -$};
\draw (195,60 ) node [align=left] {$\displaystyle +$};
\draw (250,60 ) node [align=left] {$\displaystyle =$};
\draw (50,135 ) node [align=left] {$\displaystyle | \overline{\mathcal{C}}| $};
\draw (150,135 ) node [align=left] {$\displaystyle 2 | \overline{U}| $};
\draw (220,135 ) node [align=left] {$\displaystyle | \overline{U}| $};
\draw (305,135 ) node [align=left] {$\displaystyle | \mathcal{C}| $};
\end{tikzpicture}
\caption{
Removing and adding edges, and counting the number of edges when $|\bar{\UC}| = 2 $. The black circles are the vertices in $[n]$, while the white circles represent those in $[n+1, 2 n]$. }
\label{fig:removing-adding-cycle-edges}
\end{minipage}
\end{figure}
\noindent
Proposition~\ref{prop:weaker-than-sojoudi-connected} is proved
under the assumptions that (i) $G$ is connected and (ii) $Q^0 _{ij} \neq 0 $ for all $(i, j) \in \EC$. These assumptions may seem strong;
however, we will show that they can be removed
using
Corollary~\ref{coro:nonnegative-offdiagonal} in \autoref{sec:perturbation}. At the end of this section, we apply Proposition~\ref{prop:weaker-than-sojoudi-connected} to
a class of QCQPs where all the off-diagonal elements of every matrix $Q^0, \ldots, Q^m$ are nonpositive. We call QCQPs in this class nonpositive off-diagonal QCQPs. It is well known that their SDP relaxations are exact~\cite{kim2003exact}. By applying the same transformation as above,
we obtain \eqref{eq:decomposed-hqcqp} with $N^p_+ = O$ for every $p$
since no positive off-diagonal elements exist. The diagonal elements of $D^p$ do not generate edges in the aggregated sparsity pattern graph, thus,
the
data matrices in \eqref{eq:decomposed-hqcqp} induce a bipartite sparsity pattern graph. Therefore, the SDP relaxation is exact. This can be regarded as an alternative proof for \cite{kim2003 exact} and Corollary~\ref{coro:sojoudi-corollary1 }\ref{cond:sojoudi-arbitrary}. \begin{coro} \label{coro:nonpositive-offdiagonal-connected}
Under Assumption~\ref{assum:new-assumption},
the SDP relaxation of a nonpositive off-diagonal QCQP is exact
if the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp} is connected
and $Q^0 _{ij} < 0 $ for all $(i, j) \in \EC$. \end{coro}
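The bipartiteness argument behind this corollary can be made concrete with a small sketch (our own example data, not from the paper): when every off-diagonal element is nonpositive, $N^p_+ = O$, so every edge of $G$ is decomposed via rules (ii)--(iii) only, and each resulting edge joins an original vertex in $\{0, \ldots, n-1\}$ to a copy in $\{n, \ldots, 2n-1\}$; the decomposed sparsity pattern graph is therefore bipartite by construction.

```python
# Illustrative sketch: with all off-diagonal signs sigma_ij = -1, the
# decomposed graph only has edges between the original vertices [0, n)
# and the copies [n, 2n), hence it is bipartite by construction.
n = 3
neg_edges = [(0, 1), (1, 2), (0, 2)]    # every off-diagonal is nonpositive
edges = set()
for i, j in neg_edges:
    edges.update({(i, j + n), (j, i + n)})  # rule (ii)
for i in range(n):
    edges.add((i, i + n))                   # rule (iii)
# Every edge crosses the two halves, so the two halves form a bipartition.
crossing = all((u < n) != (v < n) for u, v in edges)
```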
\section{Perturbation for disconnected aggregated sparsity pattern graph} \label{sec:perturbation}
The connectivity of $G$ has played an important role in
our main theorem in \autoref{sec:main}. For QCQPs with sparse data matrices,
the connectivity assumption might be difficult to satisfy. In this section,
we replace the connectivity assumption
with a slightly different one (Assumption~\ref{assum:new-assumption-strong}),
and present a new condition for the exact SDP relaxation. The following assumption is slightly stronger than Assumption~\ref{assum:new-assumption}
in the sense that it requires the existence of a feasible interior point of \eqref{eq:hsdrd}. However, it can be satisfied in practice without much difficulty. \begin{assum} \label{assum:new-assumption-strong}
The following two conditions hold:
\begin{enumerate}[label=(\roman*)]
\item \label{assum:new-assumption-strong-1 }
the sets of optimal solutions for \eqref{eq:hsdr} and \eqref{eq:hsdrd} are nonempty; and
\item \label{assum:new-assumption-strong-2 }
at least one of the following two conditions holds:
\begin{enumerate}[label=(\alph*)]
\item \label{assum:new-assumption-strong-2 -1 }
the feasible set of \eqref{eq:hsdr} is bounded; or
\item \label{assum:new-assumption-strong-2 -2 }
for \eqref{eq:hsdrd},
the set of optimal solutions is bounded,
and the interior of the feasible set is nonempty. \end{enumerate}
\end{enumerate}
\end{assum}
We now perturb the objective function of a given QCQP
to remove the connectivity assumption on $G$ from Theorem~\ref{thm:system-based-condition-connected}. Let $P \in \SymMat^n$ be a nonzero matrix,
and let $\varepsilon > 0 $ denote the magnitude of the perturbation.
By adding the perturbation $\varepsilon P$ to the objective matrix $Q^0 $, we obtain the $\varepsilon$-perturbed QCQP $(\PC^{\varepsilon})$ \eqref{eq:hqcqp-perturbed}. The authors of \cite{Azuma2021 } proved that the SDP relaxation is exact
if a sequence of perturbed QCQPs satisfying the exactness condition converges to the original one. This result was used to eliminate, from their main theorem, the requirement that
the aggregated sparsity pattern graph be connected. The following lemmas are extensions of the results in \cite{Azuma2021 } under a weaker assumption. \begin{lemma} \label{lemma:perturbation-technique-primal}
Suppose that Assumption~\ref{assum:new-assumption-strong}{\it \ref{assum:new-assumption-strong-1 }} and
\ref{assum:new-assumption-strong-2 }\ref{assum:new-assumption-strong-2 -1 } hold. Let $P \neq O$ be an $n \times n$ matrix, and
$\{\varepsilon_t\}_{t = 1 }^\infty$ be a monotonically decreasing sequence
such that $\lim_{t \to \infty} \varepsilon_t = 0 $. If the SDP relaxation of the $\varepsilon_t$-perturbed problem
$(\PC^{\varepsilon_t})$
is exact for all $t = 1, 2, \ldots$,
then the SDP relaxation of the original problem \eqref{eq:hqcqp} is also exact. \end{lemma}
\begin{proof}
Let $A$ and $B$ be
the feasible sets of \eqref{eq:hqcqp} and \eqref{eq:hsdr}, respectively:
%
\begin{align*}
A \coloneqq & \left\{
\x \in \Real^n \, \middle|\,
\ip{Q^p}{(\x\trans{\x})} \leq b_p, \quad p = 1, \ldots, m\right\}, \\
B \coloneqq & \left\{
X \in \SymMat_+^n \, \middle|\,
\ip{Q^p}{X} \leq b_p, \quad p = 1, \ldots, m\right\}. \end{align*}
%
Note that $B$ is a compact set by the assumption. The intersection of $B$ and the set of rank-1 matrices
\begin{align*}
B_1
&\coloneqq B \cap \left\{X \in \SymMat^n \, \middle|\, \rank(X) \leq 1 \right\} \\
&= \left\{
X \succeq O \, \middle|\,
\rank(X) \leq 1, \;
\ip{Q^p}{X} \leq b_p, \; p = 1, \ldots, m\right\}
\end{align*}
is also a compact set since $\left\{X \in \SymMat^n \, \middle|\, \rank(X) \leq 1 \right\}$ is closed. There exists a bijection $f: A \to B_1 $ given by $f(\x) = \x\trans{\x}$,
thus $A$ is also a compact set. By an argument similar to the proof of \cite[Lemma 3.3 ]{Azuma2021 },
we obtain the desired result. \end{proof}
\begin{lemma} \label{lemma:perturbation-technique-dual}
Suppose that Assumption~\ref{assum:new-assumption-strong} {\it \ref{assum:new-assumption-strong-1 }} and
\ref{assum:new-assumption-strong-2 }\ref{assum:new-assumption-strong-2 -2 } hold. Let $P \neq O$ be an $n \times n$ negative semidefinite nonzero matrix, and
$\{\varepsilon_t\}_{t = 1 }^\infty$ be a monotonically decreasing sequence
such that $\lim_{t \to \infty} \varepsilon_t = 0 $. If the SDP relaxation of the $\varepsilon_t$-perturbed problem
$(\PC^{\varepsilon_t})$
is exact for all $t = 1, 2, \ldots$,
then the SDP relaxation of the original problem \eqref{eq:hqcqp} is also exact. \end{lemma}
\begin{proof}
Let $\Gamma \coloneqq \left\{\y \geq \0 \, \middle|\, S(\y) \succeq O\right\}$
be the feasible set of \eqref{eq:hsdrd}. Let $(\DC_R^{\varepsilon})$ denote
the dual of the SDP relaxation for $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed},
and define $\Gamma(\varepsilon) \coloneqq \left\{\y \geq \0 \, \middle|\, S(\y;\, \varepsilon) \succeq O\right\}$
as the feasible set of $(\DC_R^{\varepsilon})$. %
Since $P$ is negative semidefinite, we have $S(\y;\, \varepsilon_1 ) \preceq S(\y;\, \varepsilon_2 )$
for any $\y \geq \0 $ and $\varepsilon_1 > \varepsilon_2 > 0 $, which indicates
a monotonic structure of the sequence $\left\{\Gamma(\varepsilon_t)\right\}_{t=1 }^\infty$:
\begin{equation*}
\Gamma = \Gamma(0 ) \supseteq \cdots \supseteq \Gamma(\varepsilon_{t+1 })
\supseteq \Gamma(\varepsilon_t) \supseteq \cdots. \end{equation*}
From Assumption~\ref{assum:new-assumption-strong}{\it\ref{assum:new-assumption-strong-2 }\ref{assum:new-assumption-strong-2 -2 }},
there exists a point $\bar{\y} \in \Gamma$ such that $S(\bar{\y}) \succ O$. Since $S(\bar{\y};\, \varepsilon_t) = S(\bar{\y}) + \varepsilon_t P$ converges to $S(\bar{\y}) \succ O$ as $t \to \infty$,
there exists an integer $T$ such that
$S(\bar{\y}; \varepsilon_T) \succ O$. In addition, it holds that
$S(\bar{\y}; \varepsilon_t) \succeq S(\bar{\y}; \varepsilon_T)$ for $t \ge T$. Let $v_t^*$ and $B^*(\varepsilon_t)$ be the optimal value and
the set of the corresponding optimal solutions of $(\PC^{\varepsilon_t})$, respectively. From the assumptions that $(\PC)$ has a feasible point
and $P$ is negative semidefinite,
there is an upper bound $\bar{v}$ such that $v_t^* \le \bar{v}$ for any $t$. Therefore, it holds that, for any $t \ge T$,
\begin{align*}
B^*(\varepsilon_t)
&= \left\{
X \in \SymMat^n \, \middle|\,
X \succeq O, \;
\ip{(Q^0 + \varepsilon_t P)}{X} = v_t^*, \;
\ip{Q^p}{X} \leq b_p \; \text{for all $p \in [m]$}
\right\} \\
& \subseteq \left\{
X \in \SymMat^n \, \middle|\,
X \succeq O, \;
\ip{\left(Q^0 + \varepsilon_t P + \sum_{p=1 }^m \bar{y}_pQ^p\right)}{X} \leq v_t^* + \trans{\bar{\y}}\b
\right\} \\
&= \left\{
X \in \SymMat^n \, \middle|\,
X \succeq O, \;
\ip{S(\bar{\y};\, \varepsilon_t)}{X} \leq v_t^* + \trans{\bar{\y}}\b
\right\} \\
&\subseteq \left\{
X \in \SymMat^n \, \middle|\,
X \succeq O, \;
\ip{S(\bar{\y};\, \varepsilon_T)}{X} \leq \bar{v} + \trans{\bar{\y}}\b
\right\},
\end{align*}
which implies $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is bounded
since $S(\bar{\y};\, \varepsilon_T) \succ O$. %
Since the SDP relaxations of the perturbed problems are exact and strong duality holds,
we can take $X^t \in B^*(\varepsilon_t)$, a rank-1 solution of the primal SDP relaxation,
and $\y^t \in \Gamma(\varepsilon_t)$, an optimal solution of $(\DC_R^{\varepsilon_t})$
satisfying $X^t S(\y^t;\, \varepsilon_t) = O$. We define a closed set as
\begin{equation*}
U \coloneqq \closure\left(\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)\right)
\end{equation*}
so that the sequence $\{X^t\}_{t=T}^\infty \subseteq U$. Since $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is bounded, the set $U$ is a compact set. %
As the sequence has an accumulation point, we let
$X^\mathrm{lim} \coloneqq \lim_{t \to \infty} X^t \in U$
by taking an appropriate subsequence from $\{X^t \, |\, t \ge T\}$. Moreover, since $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is included in the feasible set of \eqref{eq:hsdr},
its closure $U$ is also in the same set,
which implies that $X^\mathrm{lim}$ is an at most rank-1 feasible point of \eqref{eq:hsdr}. Finally, we show the optimality of $X^\mathrm{lim}$ for \eqref{eq:hsdr}. We assume that $\bar{X}$ is a feasible point of \eqref{eq:hsdr}
such that $\ip{Q^0 }{\bar{X}} < \ip{Q^0 }{X^\mathrm{lim}}$
and derive a contradiction. Since $\bigcup_{t=T}^\infty \; B^*(\varepsilon_t)$ is bounded,
there is a sufficiently large $M$ such that
$\| \bar{X} \| \le M$ and $\| X^{t} \| \le M$ for all $t \geq T$. %
Let $\delta = \ip{Q^0 }{X^\mathrm{lim}} - \ip{Q^0 }{\bar{X}} > 0 $. Since
$X^\mathrm{lim} = \lim_{t \to \infty} X^t $ and
$\lim_{t \to \infty} \varepsilon_t = 0 $,
we can find $\hat{T} \ge T$ such that
$|Q^0 \bullet (X^\mathrm{lim} - X^{\hat{T}})| \le \frac{\delta}{4 }$
and $\varepsilon_{\hat{T}} \le \frac{\delta}{8 \|P\| M }$. Since $\bar{X}$ and $X^{\hat{T}}$ are feasible for $(\PC^{\varepsilon_{\hat{T}}})$,
$\frac{\bar{X} +X^{\hat{T}}}{2 }$ is also feasible
for $(\PC^{\varepsilon_{\hat{T}}})$. Thus, we have
\begin{align}
& \ \left(Q^0 + \varepsilon_{\hat{T}} P\right) \bullet \left(\frac{\bar{X} +X^{\hat{T}}}{2 }\right)
- \left(Q^0 + \varepsilon_{\hat{T}} P\right) \bullet X^{\hat{T}} \\
= & \ \frac{1 }{2 }\left(Q^0 + \varepsilon_{\hat{T}} P\right) \bullet \left(\bar{X} - X^{\hat{T}}\right) \\
= & \ \frac{1 }{2 } Q^0 \bullet \left(\bar{X} - X^{\mathrm{lim}}\right)
+ \frac{1 }{2 } Q^0 \bullet \left(X^{\mathrm{lim}} - X^{\hat{T}}\right)
+ \frac{1 }{2 } \varepsilon_{\hat{T}} P \bullet \left(\bar{X} - X^{\hat{T}}\right) \\
\le & \ \frac{1 }{2 } Q^0 \bullet \left(\bar{X} - X^{\mathrm{lim}}\right)
+ \frac{1 }{2 } \left|Q^0 \bullet \left(X^{\mathrm{lim}} - X^{\hat{T}}\right)\right|
+ \frac{1 }{2 } \varepsilon_{\hat{T}} \|P\| (2 M) \\
\le & -\frac{\delta}{2 } + \frac{\delta}{8 } + \frac{\delta}{8 }
= - \frac{\delta}{4 } < 0. \end{align}
This contradicts the optimality of
$X^{\hat{T}}$ in $(\PC^{\varepsilon_{\hat{T}}})$. This completes the proof. \end{proof}
We note that the negative semidefiniteness of $P$ assumed in Lemma~\ref{lemma:perturbation-technique-dual}
is not required in Lemma~\ref{lemma:perturbation-technique-primal}. In the subsequent discussion, we remove the assumption on the connectivity of $G$ from Theorem~\ref{thm:system-based-condition-connected}
using Lemmas~\ref{lemma:perturbation-technique-primal} and \ref{lemma:perturbation-technique-dual}. \subsection{QCQPs with disconnected bipartite structures} \label{ssec:main-disconnected}
We present an improved version of Theorem~\ref{thm:system-based-condition-connected}
for QCQPs with disconnected aggregated sparsity pattern graphs $G$. \begin{theorem}
\label{prop:system-based-condition}
Suppose that Assumption~\ref{assum:new-assumption-strong} holds
and that the aggregated sparsity pattern graph $G(\VC, \EC)$ is bipartite. Then, \eqref{eq:hsdr} is exact if, for all $(k, \ell) \in \EC$,
the system \eqref{eq:system-nonpositive} has no solutions. \end{theorem}
\begin{proof}
Let $L$ denote the number of connected components of $G$,
and choose an arbitrary vertex $u_i$ from the connected component indexed by $i \in [L]$. Then, we define the edge set
\begin{equation*}
\FC = \bigcup_{i \in [L-1 ]} \left\{\left(u_i, u_{i+1 }\right), \left(u_{i+1 }, u_i\right)\right\}. \end{equation*}
%
Since $\FC$ connects the $i$th and $(i+1 )$th components,
the graph $\tilde{G}(\VC, \tilde{\EC} \coloneqq \EC \cup \FC)$
is connected; it is also bipartite, since the edges in $\FC$ create no new cycles. %
Let $P \in \SymMat^n$ be the negative of the Laplacian matrix of the subgraph $\hat{G}(\VC, \FC)$ of $\tilde{G}$ induced by $\FC$, i.e.,
\begin{equation*}
P_{ij} = \begin{cases}
\; -\deg(i) & \quad \text{if $i = j$}, \\
\; 1 & \quad \text{if $(i, j) \in \FC$}, \\
\; 0 & \quad\text{otherwise},
\end{cases}
\end{equation*}
where $\deg(i)$ denotes the degree of the vertex $i$ in the subgraph $\hat{G}(\VC, \FC)$. Since the Laplacian matrix is positive semidefinite,
$P$ is negative semidefinite.
By adding the perturbation $\varepsilon P$ with any $\varepsilon > 0 $,
we obtain an $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed}
whose aggregated sparsity pattern graph is $\tilde{G}(\VC, \tilde{\EC})$. To check the exactness of the SDP relaxation for \eqref{eq:hqcqp-perturbed}
by Theorem~\ref{thm:system-based-condition-connected},
it suffices to show that the following system
\begin{equation*}
\y \geq \0, \; S(\y;\, \varepsilon) \succeq O, \;
S(\y;\, \varepsilon)_{k\ell} \leq 0 \end{equation*}
has no solutions for all $(k, \ell) \in \tilde{\EC}$,
where $S(\y;\, \varepsilon) \coloneqq (Q^0 + \varepsilon P) + \sum_{p \in [m]} y_p Q^p$. Let $\hat{\y}$ be an arbitrary vector satisfying the first two constraints, i.e.,
$\hat{\y} \geq \0 $ and $S(\hat{\y};\, \varepsilon) \succeq O$. \begin{enumerate}[label=(\roman*)]
\item
If $(k, \ell) \in \FC$, then $P_{k\ell} = 1 $ and
$Q^p_{k\ell} = 0 $ for any $p \in [0, m]$ by definition. Thus, we have
\begin{equation*}
S(\hat{\y};\, \varepsilon)_{k\ell}
= \varepsilon P_{k\ell}
> 0. \end{equation*}
\item
If $(k, \ell) \in \tilde{\EC} \setminus \FC = \EC$,
the system \eqref{eq:system-nonpositive} with $(k, \ell)$ has no solutions,
which implies $S(\hat{\y})_{k\ell} > 0 $. Since $(k, \ell) \not\in \FC$, we have $P_{k\ell} = 0 $. Hence, it follows
\begin{equation*}
S(\hat{\y};\, \varepsilon)_{k\ell}
= S(\hat{\y})_{k\ell}
> 0. \end{equation*}
\end{enumerate}
%
Therefore, none of these systems has a solution,
and the SDP relaxation of \eqref{eq:hqcqp-perturbed} is exact. Let $\{\varepsilon_t\}_{t=1 }^\infty \subseteq \Real_+$ be a monotonically decreasing sequence converging to zero;
then the SDP relaxation of each $\varepsilon_t$-perturbed QCQP is exact as discussed above. By Lemma~\ref{lemma:perturbation-technique-primal} or \ref{lemma:perturbation-technique-dual},
the desired result follows. \end{proof}
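The construction in this proof is straightforward to sketch computationally. The following minimal illustration (with our own helper names, not the paper's) connects the components of a disconnected bipartite $G$ by the edge set $\FC$, forms $P$ as the negative Laplacian of $(\VC, \FC)$, and checks $\trans{\x} P \x = -\sum_{(i,j) \in \FC} (x_i - x_j)^2 \leq 0$ on random vectors, consistent with the negative semidefiniteness of $P$.

```python
# Illustrative sketch: build F connecting the components of G, form the
# negative Laplacian P of (V, F), and test x^T P x <= 0 numerically.
import random

def components(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def connecting_edges(comps):
    # F joins an arbitrary vertex u_i of component i to u_{i+1} of component i+1.
    reps = [c[0] for c in comps]
    return [(reps[i], reps[i + 1]) for i in range(len(reps) - 1)]

def neg_laplacian(n, F):
    P = [[0.0] * n for _ in range(n)]
    for i, j in F:
        P[i][i] -= 1.0   # diagonal entries are -deg(i)
        P[j][j] -= 1.0
        P[i][j] = P[j][i] = 1.0
    return P

# Disconnected bipartite G: two disjoint edges.
n, E = 4, [(0, 1), (2, 3)]
F = connecting_edges(components(n, E))
P = neg_laplacian(n, F)
# x^T P x = -sum_{(i,j) in F} (x_i - x_j)^2 <= 0 for all x.
rng = random.Random(0)
quad_forms = [
    sum(x[i] * P[i][j] * x[j] for i in range(n) for j in range(n))
    for x in ([rng.uniform(-1, 1) for _ in range(n)] for _ in range(100))
]
```

Adding $\FC$ here merges the two components into a single connected (and still bipartite) graph, matching the role of $\tilde{G}$ in the proof.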
\subsection{Disconnected sign-definite QCQPs} \label{ssec:nonnegative-offdiagonal}
For QCQPs with the bipartite sparsity pattern and nonnegative off-diagonal elements of $Q^0, \ldots, Q^m$,
their SDP relaxation is known to be exact (see Theorem~\ref{thm:sojoudi-theorem}
\cite{Sojoudi2014exactness}). In contrast, when we dealt with such QCQPs in section~\ref{ssec:nonnegative-offdiagonal-connected},
the connectivity of $G$ and the condition $Q^0 _{ij} > 0 $ were assumed to derive the exactness of the SDP relaxation. In this subsection, we eliminate these assumptions using the perturbation techniques of section~\ref{ssec:perturbation-techniques}. \begin{coro} \label{coro:nonnegative-offdiagonal}
Suppose that Assumption~\ref{assum:new-assumption-strong} holds, and
suppose the aggregated sparsity pattern graph $G(\VC, \EC)$ of \eqref{eq:hqcqp}
is bipartite. If $Q^p_{ij} \geq 0 $ for all $(i, j) \in \EC$ and for all $p \in [0, m]$,
then the SDP relaxation is exact. \end{coro}
\begin{proof}
Let $P \in \SymMat^n$ be the negative of the Laplacian matrix of $G(\VC, \EC)$, i.e.,
\begin{equation*}
P_{ij} = \begin{cases}
\; -\deg(i) & \quad \text{if $i = j$}, \\
\; 1 & \quad \text{if $(i, j) \in \EC$}, \\
\; 0 & \quad\text{otherwise}. \end{cases}
\end{equation*}
Since the Laplacian matrix is positive semidefinite,
$P$ is negative semidefinite. By adding a perturbation $\varepsilon P$ with any $\varepsilon > 0 $,
we obtain an $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed}
whose aggregated sparsity pattern graph remains the same as the graph $G(\VC, \EC)$. To determine whether the SDP relaxation is exact for this $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed},
it suffices to check the infeasibility of the system, according to Theorem~\ref{prop:system-based-condition}:
\begin{equation*}
\y \geq \0, \; S(\y;\, \varepsilon) \succeq O, \;
S(\y;\, \varepsilon)_{k\ell} \leq 0. \end{equation*}
Let $\hat{\y} \geq \0 $ be an arbitrary vector
satisfying the first two constraints, i.e.,
$\hat{\y} \geq \0 $ and $S(\hat{\y};\, \varepsilon) \succeq O$. For every $(k, \ell) \in \EC$, since $S(\hat{\y})_{k\ell} \geq 0 $ and $P_{k\ell} > 0 $,
we have
\begin{equation*}
S(\hat{\y};\, \varepsilon)_{k\ell}
\geq \varepsilon P_{k\ell} > 0,
\end{equation*}
which implies that the system above has no solutions. Hence, by Theorem~\ref{prop:system-based-condition},
the SDP relaxation of the $\varepsilon$-perturbed QCQP \eqref{eq:hqcqp-perturbed} is exact. Now let $\{\varepsilon_t\}_{t=1 }^\infty \subseteq \Real_+$ be a monotonically decreasing sequence converging to zero;
then the SDP relaxation of each $\varepsilon_t$-perturbed QCQP is exact, as discussed above. By Lemma~\ref{lemma:perturbation-technique-primal} or Lemma~\ref{lemma:perturbation-technique-dual},
the SDP relaxation of a QCQP with nonnegative off-diagonal elements and a bipartite structure
is also exact. \end{proof}
We can extend
Proposition~\ref{prop:weaker-than-sojoudi-connected}
and Corollary~\ref{coro:nonpositive-offdiagonal-connected}
using Corollary~\ref{coro:nonnegative-offdiagonal} to the following results. \begin{prop}
\label{prop:weaker-than-sojoudi}
Suppose that Assumption~\ref{assum:new-assumption-strong} holds and that no condition on sparsity is imposed. If \eqref{eq:hqcqp} satisfies the assumption of Theorem~\ref{thm:sojoudi-theorem},
then \eqref{eq:hqcqp} also satisfies that of Corollary~\ref{coro:nonnegative-offdiagonal}. In addition, the exactness of its SDP relaxation
can be proved by Theorem~\ref{prop:system-based-condition}. \end{prop}
\begin{coro} \label{coro:nonpositive-offdiagonal}
Under Assumption~\ref{assum:new-assumption-strong},
the SDP relaxation of a nonpositive off-diagonal QCQP is exact. \end{coro}
\begin{proof}
{(Both Proposition~\ref{prop:weaker-than-sojoudi} and Corollary~\ref{coro:nonpositive-offdiagonal})}
It is easy to check that
the aggregated sparsity pattern graph of
\eqref{eq:decomposed-hqcqp} generated by the given problem is bipartite
by arguments similar to those in the proof of Proposition~\ref{prop:weaker-than-sojoudi-connected}. Therefore, \eqref{eq:decomposed-hqcqp} satisfies the assumption of Corollary~\ref{coro:nonnegative-offdiagonal}. \end{proof}
\section{Numerical experiments} \label{sec:example}
We investigate analytical and computational aspects of the conditions in
Theorem~\ref{thm:system-based-condition-connected}
with two QCQP instances below. The first QCQP consists of $2 \times 2 $ data matrices. We show the exactness of its SDP relaxation
by checking the feasibility systems in Theorem~\ref{thm:system-based-condition-connected} without SDP solvers. Next, Example~\ref{examp:cycle-graph-4 -vertices} is considered for the second QCQP. As the size $n$ of the second QCQP is 4, it is difficult to handle
the positive semidefinite constraint $S(\y) \succeq O$ without numerical computation. We present a numerical method for testing the exactness of the SDP relaxation with a computational solver. We also detail the difference between our results and the existing results
using these two QCQP instances. As discussed in section~\ref{ssec:comparison},
if the aggregated sparsity pattern graph is bipartite,
then Theorem~\ref{thm:system-based-condition-connected} covers a wider class of QCQPs than
those by Theorem~\ref{thm:sojoudi-theorem} in~\cite{Sojoudi2014 exactness}
under the connectivity and the elementwise condition on $Q^0 $. Theorem~\ref{thm:system-based-condition-connected} has been generalized in section~\ref{sec:perturbation} to Theorem~\ref{prop:system-based-condition},
and this theorem covers a wider class of QCQPs without the connectivity condition. For numerical experiments,
JuMP~\cite{Dunning2017 } was used with the solver MOSEK~\cite{mosek}
and SDPs were solved with tolerance $1.0 \times 10 ^{-8 }$. All numerical results are shown with four significant digits. \subsection{A QCQP instance with $n=2 $} \label{ssec:analytical-example}
\begin{example}
\label{examp:small-example}
Consider the QCQP \eqref{eq:hqcqp} with
\begin{align*}
& n = 2, \quad m = 1, \quad \b = \begin{bmatrix} 1 \end{bmatrix}, \\
& Q^0 = \begin{bmatrix} -3 & -1 \\ -1 & -2 \end{bmatrix}, \quad
Q^1 = \begin{bmatrix} 3 & 4 \\ 4 & 6 \end{bmatrix}. \end{align*}
\end{example}
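Before the formal argument, the infeasibility of the system checked below can be confirmed by a direct scan, since a $2 \times 2 $ matrix is PSD exactly when its diagonal entries and determinant are nonnegative. The following sketch is illustrative and not part of the paper:

```python
# S(y1) = Q0 + y1 * Q1 for the data of this example.
def S(y1):
    return [[-3 + 3 * y1, -1 + 4 * y1],
            [-1 + 4 * y1, -2 + 6 * y1]]

def is_psd_2x2(M):
    # A symmetric 2x2 matrix is PSD iff both diagonal entries
    # and the determinant are nonnegative.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] >= 0 and M[1][1] >= 0 and det >= 0

# The last constraint of the system, -1 + 4*y1 <= 0, restricts y1 to [0, 0.25];
# scan a grid of that interval and look for a PSD point.
feasible = [k / 1000 for k in range(251) if is_psd_2x2(S(k / 1000))]
assert feasible == []  # the (1,1) entry -3 + 3*y1 stays at or below -2.25 there
```

The scan is only illustrative: on $[0, 0.25 ]$ the $(1,1 )$ entry of $S(y_1 )$ is at most $-2.25 $, so infeasibility is already clear without any computation.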
We first verify whether the problem satisfies the assumption of Theorem~\ref{thm:system-based-condition-connected}. The aggregated sparsity pattern graph $G$ is bipartite and connected
as it has only two vertices and $Q^0 _{12 } \neq 0 $. Since $Q^1 $ is positive definite,
the problem satisfies Assumption~\ref{assum:previous-assumption}{\it \ref{assum:previous-assumption-1 }}. By the discussion in Remark~\ref{rema:comparison-assumption},
it also satisfies Assumption~\ref{assum:new-assumption}. It only remains to show that the system
\begin{equation*}
y_1 \geq 0, \quad
\hat{S}(y_1 ) \coloneqq \begin{bmatrix} -3 & -1 \\ -1 & -2 \end{bmatrix} +
y_1 \begin{bmatrix} 3 & 4 \\ 4 & 6 \end{bmatrix} \succeq O, \quad
-1 + 4 y_1 \leq 0
\end{equation*}
has no solutions. By definition,
$\hat{S}(y_1 ) \succeq O$ holds if and only if all the principal minors of $\hat{S}(y_1 )$ are nonnegative,
or equivalently, $-3 + 3 y_1 \geq 0 $, $-2 + 6 y_1 \ge 0 $, and $2 y_1 ^2 - 16 y_1 + 5 \geq 0 $. These three inequalities hold simultaneously if and only if $y_1 \geq 4 + 3 \sqrt{6 }/2 \simeq 7.674 $, so any solution of the system must satisfy this bound. For such $y_1 $, however, $-1 + 4 y_1 \geq -1 + 4 (4 + 3 \sqrt{6 }/2 ) = 15 + 6 \sqrt{6 } > 0 $,
so the last inequality of the system cannot hold. The problem therefore admits the exact SDP relaxation. Actually, we numerically obtained an optimal solution of the above QCQP in
Example~\ref{examp:small-example} and its SDP relaxation
as $\x^* \simeq [1.731 ; -1.167 ]$ and
$X^* \simeq [2.997, -2.021 ; -2.021, 1.362 ]$, respectively. From $\trans{(\x^*)}Q^0 \x^* - Q^0 \bullet X^* \simeq 5.379 \times 10 ^{-10 }$,
we see numerically that the SDP relaxation provided the exact optimal value. Since $G$ is clearly a forest (no cycles),
we can also apply Proposition~\ref{prop:forest-results} in~\cite{Azuma2021 }. From the discussion above,
the system \eqref{eq:system-zero} has no solutions for $(k, \ell) = (1, 2 )$
and Assumption~\ref{assum:previous-assumption}{\it \ref{assum:previous-assumption-1 }} is satisfied. By taking $\hat{X} = [0.1 \ \ 0 ; 0 \ \ 0.1 ] \succ O$,
we know $\ip{Q^1 }{\hat{X}} = 0.9 \leq 1 = b_1 $. Hence, the exactness of the SDP relaxation can be proved by Proposition~\ref{prop:forest-results}. We mention that this result cannot be obtained by
Theorem~\ref{thm:sojoudi-theorem} in~\cite{Sojoudi2014 exactness}. Since $Q^0 _{12 } = -1 $ and $Q^1 _{12 } = 4 $,
the edge sign $\sigma_{12 }$ of the edge $(1, 2 )$ must be zero by definition,
contradicting \eqref{eq:sign-constraint-sign-definite}. \subsection{Example~\ref{examp:cycle-graph-4 -vertices}} \label{ssec:computational-example}
We computed an optimal solution of Example~\ref{examp:cycle-graph-4 -vertices} and that of its SDP relaxation as
\begin{equation*}
x^* \simeq \begin{bmatrix}
7.818 \\ -8.331 \\ 1.721 \\ -7.019
\end{bmatrix}\ \text{and} \
X^* \simeq \begin{bmatrix}
61.12 & -65.13 & 13.45 & -54.87 \\
-65.13 & 69.41 & -14.34 & 58.48 \\
13.45 & -14.34 & 2.961 & -12.08 \\
-54.87 & 58.48 & -12.08 & 49.27
\end{bmatrix} \in \SymMat^4,
\end{equation*}
respectively.
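These printed values already exhibit the exactness numerically: $X^*$ should agree with the rank-one matrix $\x^*\trans{(\x^*)}$ up to the rounding of the four displayed digits. A short check of this (not part of the paper):

```python
# Reported four-significant-digit solutions from the text.
x = [7.818, -8.331, 1.721, -7.019]
X = [[ 61.12, -65.13,  13.45, -54.87],
     [-65.13,  69.41, -14.34,  58.48],
     [ 13.45, -14.34,  2.961, -12.08],
     [-54.87,  58.48, -12.08,  49.27]]

# Exactness of the SDP relaxation corresponds to X* = x* (x*)^T (rank one);
# compare entrywise, allowing for the rounding of the printed digits.
max_err = max(abs(X[i][j] - x[i] * x[j]) for i in range(4) for j in range(4))
assert max_err < 0.01
```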
We first check whether the problem satisfies the assumption of Theorem~\ref{thm:system-based-condition-connected}.
We compute $3 Q_1 + 4 Q_2 $ as
\begin{equation*}
3 \begin{bmatrix}
5 & 2 & 0 & 1 \\ 2 & -1 & 3 & 0 \\
0 & 3 & 3 & -1 \\ 1 & 0 & -1 & 4 \end{bmatrix} +
4 \begin{bmatrix}
-1 & 1 & 0 & 0 \\ 1 & 4 & -1 & 0 \\
0 & -1 & 6 & 1 \\ 0 & 0 & 1 & -2 \end{bmatrix} =
\begin{bmatrix}
11 & 10 & 0 & 3 \\ 10 & 13 & 5 & 0 \\
0 & 5 & 33 & 1 \\ 3 & 0 & 1 & 4 \end{bmatrix},
\end{equation*}
and its minimum eigenvalue is approximately $0.1577 $.
Thus, there exists $\bar{\y} \geq 0 $ such that $\bar{y}_1 Q_1 + \bar{y}_2 Q_2 \succ O$,
e.g., $\bar{\y} = [3 ; 4 ]$.
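The arithmetic above, and the positive definiteness of the combination $3 Q_1 + 4 Q_2 $, can be double-checked without an SDP solver via Sylvester's criterion on the leading principal minors. A sketch (not from the paper):

```python
Q1 = [[5, 2, 0, 1], [2, -1, 3, 0], [0, 3, 3, -1], [1, 0, -1, 4]]
Q2 = [[-1, 1, 0, 0], [1, 4, -1, 0], [0, -1, 6, 1], [0, 0, 1, -2]]

# The combination with bar{y} = [3; 4] from the text.
M = [[3 * Q1[i][j] + 4 * Q2[i][j] for j in range(4)] for i in range(4)]
assert M == [[11, 10, 0, 3], [10, 13, 5, 0], [0, 5, 33, 1], [3, 0, 1, 4]]

def det(A):
    # Cofactor expansion along the first row (fine for tiny matrices).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

# Sylvester's criterion: M is positive definite iff all leading
# principal minors are positive.
minors = [det([r[:k] for r in M[:k]]) for k in range(1, 5)]
assert all(m > 0 for m in minors)  # minors: 11, 43, 1144, 597
```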
As mentioned in Remark~\ref{rema:comparison-assumption},
it follows that the second problem satisfies Assumption~\ref{assum:new-assumption}.
To show the exactness of the SDP relaxation for the problem,
it only remains to show that
the systems \eqref{eq:system-nonpositive} for all $(k, \ell) \in \EC$ have no solutions.
Using an SDP solver, we verified that none of these systems has a solution.
Indeed, for every $(k, \ell) \in \EC$, the SDP
\begin{equation} \label{eq:optimal-value-systems-example}
\begin{array}{rl}
\mu^* =
\min & S(\y)_{k\ell} \\
\subto & \y \geq \0, \; S(\y) \succeq O,
\end{array}
\end{equation}
returns the optimal values shown in \autoref{tab:optimal-value-systems-example},
which implies that no solution exists for \eqref{eq:system-nonpositive}
since $S(\y)_{k\ell}$ cannot attain a nonpositive value.
Therefore,
the SDP relaxation of Example~\ref{examp:cycle-graph-4 -vertices} is exact by
Theorem~\ref{thm:system-based-condition-connected}.
\begin{table}[b]
\caption{Optimal values of \eqref{eq:optimal-value-systems-example} for each $(k, \ell)$}
\label{tab:optimal-value-systems-example}
\centering
\begin{tabular}{c|cccc}
$(k, \ell)$ & $(1, 2 )$ & $(2, 3 )$ & $(1, 4 )$ & $(3, 4 )$ \\ \hline
$\mu^*$ & 18.58 & 12.84 & 8.897 & 0.3215
\end{tabular}
\end{table}
With Theorem~\ref{thm:sojoudi-theorem} in~\cite{Sojoudi2014 exactness},
it is not possible to show the exactness of the SDP relaxation.
The edge sign $\sigma_{12 }$ of the edge $(1, 2 )$ is $0 $ by definition.
Since the cycle basis of $\GC$ is only $\CC_1 = \GC$,
the left-hand side of \eqref{eq:sign-constraint-simple-cycle} is
$\sigma_{12 }\sigma_{23 }\sigma_{34 }\sigma_{41 } = 0 $.
However, its right-hand side only takes $-1 $ or $+1 $.
This implies that Theorem~\ref{thm:sojoudi-theorem} cannot be applied to Example~\ref{examp:cycle-graph-4 -vertices}.
\section{Concluding remarks} \label{sec:conclution}
We have proposed sufficient conditions for
the exact SDP relaxation of QCQPs whose
aggregated sparsity pattern graph can be represented by bipartite graphs.
Since these conditions consist of at most $n^2 /4 $ SDP systems,
the exactness can be investigated in polynomial time.
The derivation of the conditions is based on
the rank of optimal solutions $\y$ of the dual SDP relaxation under strong duality.
More precisely, a QCQP admits the exact SDP relaxation
if the rank of $S(\y)$ is bounded below by $n-1 $. For this lower bound,
we have used the fact that
any nonnegative matrix $M \succeq O$ with a bipartite sparsity pattern has rank at least $n - 1 $
if it satisfies $M\1 > \0 $.
Using results from the recent paper~\cite{kim2021 strong},
the sufficient conditions have been considered under weaker assumptions than
those in \cite{Azuma2021 }.
Since the sparsity of bipartite graphs includes that of trees and forests,
the proposed conditions
apply to a wider class of QCQPs
than those in \cite{Azuma2021 }.
We have also shown in Proposition~\ref{prop:weaker-than-sojoudi} that
one can determine the exactness for all the problems
which satisfy the condition considered in Theorem~\ref{thm:sojoudi-theorem} (\cite{Sojoudi2014 exactness}).
For our future work,
sufficient conditions for the exactness of
a wider class of QCQPs than those with bipartite structures will be investigated.
Furthermore, examining our conditions
to analyze the exact SDP relaxation of QCQPs
transformed from polynomial optimization would be an interesting subject.
\vspace{0.5 cm}
\noindent
{\bf Acknowledgements. }
The authors would like to thank Prof. Ram Vasudevan and Mr. Jinsun Liu for pointing out that there exists
no edge $(i, i+n)$ in the objective function in the proof of Proposition 3.8
of the original version.
\section{#1 }
\protect\setcounter{secnum}{\value{section}}
\protect\setcounter{equation}{0 }
\protect\renewcommand{\theequation}{\mbox{\arabic{secnum}. \arabic{equation}}}}
\setcounter{tocdepth}{1 }
\begin{document}
\title{Infinitesimal Deformations and Obstructions for Toric Singularities}
\author{Klaus Altmann\footnotemark[1 ]\\
\small Dept.~of Mathematics, M.I.T., Cambridge, MA 02139, U.S.A. \vspace{-0.7 ex}\\ \small E-mail: altmann@math.mit.edu}
\footnotetext[1 ]{Die Arbeit wurde mit einem Stipendium des DAAD unterst\"utzt. }
\date{}
\maketitle
\begin{abstract}
The obstruction space $T^2 $ and the cup product
$T^1 \times T^1 \to T^2 $ are computed for toric singularities. \end{abstract}
\tableofcontents
\sect{Introduction}\label{s1 }
\neu{11 }
For an affine scheme $\, Y= \mbox{Spec}\; A$, there are two important $A$-modules,
$T^1 _Y$ and $T^2 _Y$, carrying information about its deformation theory:
$T^1 _Y$ describes the infinitesimal deformations, and $T^2 _Y$ contains the
obstructions for extending deformations of $Y$ to larger base spaces. \\
\par
In case $Y$ admits a versal deformation, $T^1 _Y$ is the tangent space of the
versal base space $S$. Moreover, if $J$ denotes the ideal defining $S$ as a
closed
subscheme of the affine space $T^1 _Y$, the module
$\left( ^{\displaystyle J}\. / \. _{\displaystyle m_{T^1 } \, J} \right) ^\ast$
can be canonically embedded into $T^2 _Y$, i. e. $(T_Y^2 )^\ast$-elements induce
the
equations defining $S$ in $T^1 _Y$. \\
\par
The vector spaces $T^i_Y$ come with a cup product
$T_Y^1 \times T^1 _Y \rightarrow T^2 _Y$. The associated quadratic form $T^1 _Y \rightarrow T^2 _Y$ describes the
quadratic part of the elements of $J$, i.e.\ it can be used to get a better
approximation of the versal base space $S$ than its tangent space
alone. \\
\par
\neu{12 }
In \cite{T1 } we have determined the vector space $T^1 _Y$ for affine toric
varieties. The present paper can be regarded as its continuation -- we will compute $T^2 _Y$
and
the cup product. \\
These modules $T^i_Y$ are canonically graded (induced from the character group
of
the torus). We will describe their homogeneous pieces as cohomology groups of
certain complexes, that are directly induced from the combinatorial structure
of the
rational, polyhedral cone defining our variety $Y$. The results
can be found in \S \ref{s3 }. \\
\par
Switching to another, quasi-isomorphic complex provides a second formula for the
vector spaces $T^i_Y$ (cf. \S \ref{s6 }). We will use this particular version
for
describing these spaces and the cup product in the special case of
three-dimensional toric Gorenstein singularities (cf. \S \ref{s7 }). \\
\par
\sect{$T^1 $, $T^2 $, and the cup product (in general)}\label{s2 }
In this section we will give a brief reminder of the well-known
definitions of these objects. Moreover, we will use this opportunity to fix
some
notations. \\
\par
\neu{21 }
Let $Y \subseteq \, I\. \. \. \. C^{w+1 }$ be given by equations $f_1, \dots, f_m$, i. e. its ring of regular functions equals
\[
A=\;^{\displaystyle P}\. \. / \. _{\displaystyle I} \quad
\mbox{ with }
\begin{array}[t]{l}
P = \, I\. \. \. \. C[z_0, \dots, z_w]\\
I = (f_1, \dots, f_m)\, . \end{array}
\]
Then, using $d:^{\displaystyle I}\. / \. _{\displaystyle I^2 }
\rightarrow A^{w+1 }\;$ ($d(f_i):= (\frac{\partial f_i}{\partial z_0 }, \dots
\frac{\partial f_i}{\partial z_w})$),
the vector space $T^1 _Y$ equals
\[
T^1 _Y = \;^{\displaystyle \mbox{Hom}_A(^{\displaystyle I}\. \. / \. _{\displaystyle I^2 }, A)} \. \left/ \. _{\displaystyle \mbox{Hom}_A(A^{w+1 }, A)} \right. \; . \vspace{1 ex}
\]
\par
\neu{22 }
Let ${\cal R}\subseteq P^m$ denote the $P$-module of relations between the equations
$f_1, \dots, f_m$. It contains the so-called Koszul relations
${\cal R}_0 := \langle f_i\, e^j - f_j \, e^i \rangle$ as a submodule. \\
Now, $T^2 _Y$ can be obtained as
\[
T^2 _Y = \;^{\displaystyle \mbox{Hom}_P(^{\displaystyle {\cal R}}\. / \. _{\displaystyle {\cal R}_0 }, A)} \. \left/ \. _{\displaystyle \mbox{Hom}_P(P^m, A)} \right. \; . \vspace{1 ex}
\]
\par
\neu{23 }
Finally, the cup product $T^1 \times T^1 \rightarrow T^2 $ can be defined in the
following way:
\begin{itemize}
\item[(i)]
Starting with an $\varphi\in \mbox{Hom}_A(^{\displaystyle I}\. / \. _{\displaystyle I^2 }, A)$, we lift the images of the $f_i$ obtaining
elements $\tilde{\varphi}(f_i)\in P$. \item[(ii)]
Given a relation $r\in {\cal R}$, the linear combination
$\sum_ir_i\, \tilde{\varphi}(f_i)$ vanishes in $A$, i. e. it is contained in the
ideal $I\subseteq P$. Denote by $\lambda(\varphi)\in P^m$ any set of
coefficients such that
\[
\sum_i r_i \, \tilde{\varphi}(f_i) + \sum_i \lambda_i(\varphi)\, f_i =0 \quad
\mbox{ in } P. \]
(Of course, $\lambda$ depends on $r$ also. )
\item[(iii)]
If $\varphi, \psi \in \mbox{Hom}_A(^{\displaystyle I}\. / \. _{\displaystyle I^2 }, A)$ represent two elements of $T^1 _Y$, then we define for
each relation $r\in {\cal R}$
\[
(\varphi \cup \psi)(r) := \sum_i \lambda_i (\varphi)\, \psi(f_i) +
\sum_i \varphi(f_i)\, \lambda_i(\psi)\; \in A\, . \vspace{1 ex}
\]
\end{itemize}
{\bf Remark:}
The definition of the cup product does not depend on the choices we made:
\begin{itemize}
\item[(a)]
Choosing a $\lambda'(\varphi)$ instead of $\lambda(\varphi)$ yields
$\lambda'(\varphi) - \lambda(\varphi) \in {\cal R}$, i. e. in $A$ we obtain the same
result. \item[(b)]
Let $\tilde{\varphi}'(f_i)$ be different liftings to $P$. Then, the difference
$\tilde{\varphi}'(f_i) - \tilde{\varphi}(f_i)$ is contained in $I$, i. e. it can
be written as some linear combination
\[
\tilde{\varphi}'(f_i) - \tilde{\varphi}(f_i) = \sum_j t_{ij}\, f_j\, . \]
Hence,
\[
\sum_i r_i \, \tilde{\varphi}'(f_i) = \sum_i r_i \, \tilde{\varphi}(f_i) +
\sum_{i, j} t_{ij}\, r_i\, f_j\,,
\]
and we can define $\lambda'_j(\varphi):= \lambda_j(\varphi) -
\sum_it_{ij}\, r_i$
(corresponding to $\tilde{\varphi}'$ instead of $\tilde{\varphi}$). Then, we
obtain
for the cup product
\[
(\varphi\cup\psi)'(r) - (\varphi\cup\psi)(r) = -\sum_ir_i\cdot
\left( \sum_j t_{ij}\, \psi(f_j)\right)\, ,
\]
but this expression comes from some map $P^m\rightarrow A$. \vspace{3 ex}
\end{itemize}
\sect{$T^1 $, $T^2 $, and the cup product (for toric varieties)}\label{s3 }
\neu{31 }
We start with fixing the usual notations when dealing with affine toric
varieties (cf. \cite{Ke} or
\cite{Oda}):
\begin{itemize}
\item
Let $M, N$ be mutually dual free Abelian groups, we denote by $M_{I\. \. R}, N_{I\. \. R}$
the associated real
vector spaces obtained by base change with $I\. \. R$. \item
Let $\sigma=\langle a^1, \dots, a^N\rangle \subseteq N_{I\. \. R}$ be a rational,
polyhedral
cone with apex, given by its fundamental generators. \\
$\sigma^{\scriptscriptstyle\vee}:= \{ r\in M_{I\. \. R}\, |\; \langle \sigma, \, r\rangle \geq 0 \}
\subseteq M_{I\. \. R}$
is called the dual cone. It induces a partial order on the lattice $M$ via
$[\, a\geq b$ iff
$a-b \in \sigma^{\scriptscriptstyle\vee}\, ]$. \item
$A:= \, I\. \. \. \. C[\sigma^{\scriptscriptstyle\vee}\cap M]$ denotes the semigroup algebra. It is the ring of
regular
functions of the toric variety $Y= \mbox{Spec}\; A$ associated to $\sigma$. \item
Denote by $E\subset \sigma^{\scriptscriptstyle\vee}\cap M$ the minimal set of generators of this
semigroup
(``the Hilbert basis''). $E$ equals the set of all primitive (i.e.\ non-splittable) elements
of $\sigma^{\scriptscriptstyle\vee}\cap M$. In particular, there is a surjection of semigroups $\pi:I\. \. N^E \longrightarrow\hspace{-1.5 em}\longrightarrow
\sigma^{\scriptscriptstyle\vee}\cap M$, and
this fact translates into a closed embedding $Y\hookrightarrow \, I\. \. \. \. C^E$. \\
To make the notations
coherent with \S \ref{s2 }, assume that $E=\{r^0, \dots, r^w\}$ consists of $w+1 $
elements. \vspace{2 ex}
\end{itemize}
\neu{32 }
To a fixed degree $R\in M$ we associate ``thick facets'' $K_i^R$ of the dual
cone
\[
K_i^R := \{r\in \sigma^{\scriptscriptstyle\vee}\cap M \, | \; \langle a^i, r \rangle <
\langle a^i, R \rangle \}\quad (i=1, \dots, N) . \vspace{2 ex}
\]
\par
{\bf Lemma:}{\em
\begin{itemize}
\item[(1 )]
$\cup_i K_i^R = (\sigma^{\scriptscriptstyle\vee}\cap M) \setminus (R+ \sigma^{\scriptscriptstyle\vee})$. \item[(2 )]
For each $r, s\in K_i^R$ there exists an $\ell\in K_i^R$ such that $\ell\geq
r, s$. Moreover, if $Y$ is smooth in codimension 2, the intersections $K^R_i\cap
K^R_j$
(for 2 -faces $\langle a^i, a^j\rangle <\sigma$) have the same property. \vspace{1 ex}
\end{itemize}
}
\par
{\bf Proof:}
Part (1 ) is trivial; for (2 ) cf. (3.7 ) of \cite{T1 }. \hfill$\Box$\\
\par
Intersecting these sets with $E\subseteq \sigma^{\scriptscriptstyle\vee}\cap M$, we obtain the
basic objects for describing the modules $T^i_Y$:
\begin{eqnarray*}
E_i^R &:=& K_i^R \cap E = \{r\in E\, | \; \langle a^i, r \rangle <
\langle a^i, R \rangle \}\, , \\
E_0 ^R &:=& \bigcup_{i=1 }^N E_i^R\; , \mbox{ and}\\
E^R_{\tau} &:=& \bigcap_{a^i\in \tau} E^R_i \; \mbox{ for faces }
\tau < \sigma\,. \end{eqnarray*}
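{\bf Example:}
The following standard illustration is not contained in the original text; it may help to see the notations at work. Consider the two-dimensional cone of the $A_1 $-singularity: $N=Z\. \. \. Z^2 $, $\sigma = \langle (1,0 ), \, (1,2 )\rangle$, hence
\[
\sigma^{\scriptscriptstyle\vee} = \langle (0,1 ), \, (2,-1 )\rangle, \qquad
E = \{\, r^0 =(0,1 ), \; r^1 =(1,0 ), \; r^2 =(2,-1 )\,\}, \qquad
L(E) = Z\. \. \. Z\cdot (1,-2,1 )
\]
(the only relation being $r^0 + r^2 = 2 \, r^1 $), and $A = \, I\. \. \. \. C[x, z, y]\, /\, (xy-z^2 )$. For $R=(2,0 )$ we obtain $\langle a^1, R\rangle = \langle a^2, R\rangle = 2 $, hence
\[
E_1 ^R = \{r^0, r^1 \}, \quad E_2 ^R = \{r^1, r^2 \}, \quad E_0 ^R = E, \quad
L(E_1 ^R) = L(E_2 ^R) = 0 \,,
\]
and the $T^1 $-formula of \zitat{3 }{3 }(1 ) below yields $T^1 _Y(-R) \cong L(E)^\ast \otimes_{Z\. \. \. Z} \, I\. \. \. \. C \cong \, I\. \. \. \. C$ -- matching the one-parameter deformation $xy = z^2 + t$. \\
\par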
We obtain a complex
$L(E^R)_{\bullet}$ of free Abelian groups via
\[
L(E^R)_{-k} := \bigoplus_{\begin{array}{c}
\tau<\sigma\\ \mbox{dim}\, \tau=k \end{array}} \. \. L(E^R_{\tau})
\]
with the usual differentials. ($L(\dots)$ denotes the free Abelian group of integral, linear dependencies. )
\\
The most interesting part ($k\leq 2 $) can be written explicitly as
\[
L(E^R)_{\bullet}:\quad \cdots
\rightarrow
\oplus_{\langle a^i, a^j\rangle<\sigma} L(E^R_i\cap E^R_j)
\longrightarrow
\oplus_i L(E_i^R) \longrightarrow L(E_0 ^R) \rightarrow 0 \,. \vspace{1 ex}
\]
\par
\neu{33 }
{\bf Theorem:}
{\em
\begin{itemize}
\item[(1 )]
$T^1 _Y(-R) = H^0 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\. \. \. Z}\, I\. \. \. \. C\right)$
\item[(2 )]
$T^2 _Y(-R) \supseteq H^1 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\. \. \. Z}\, I\. \. \. \. C\right)$
\item[(3 )]
Moreover, if $Y$ is smooth in codimension 2
(i. e. \ if the 2 -faces $\langle a^i, a^j \rangle < \sigma$ are spanned
by a part of a $Z\. \. \. Z$-basis of the lattice $N$), then
\[
T^2 _Y(-R) = H^1 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\. \. \. Z}\, I\. \. \. \. C\right)\, .
\[
f_{ab}:= \underline{z}^a-\underline{z}^b\quad (a, b\in I\. \. N^{w+1 } \mbox{ with } \pi(a)=\pi(b)
\mbox{ in } \sigma^{\scriptscriptstyle\vee}
\cap M),
\]
and it is easier to deal with this infinite set of equations
(which generates the ideal $I$ as a $\, I\. \. \. \. C$-vector
space) instead of selecting a finite number of them in some non-canonical way. In particular, for
$m$ of \zitat{2 }{1 } and \zitat{2 }{2 } we take
\[
m:= \{ (a, b)\in I\. \. N^{w+1 }\timesI\. \. N^{w+1 }\, |\;\pi(a)=\pi(b)\}\,. \]
The general $T^i$-formulas mentioned in \zitat{2 }{1 } and \zitat{2 }{2 } remain
true. \\
\par
{\bf Theorem:}
{\em
For a fixed element $R\in M$ let $\varphi: L(E)_{\, I\. \. \. \. C}\rightarrow \, I\. \. \. \. C$ induce some
element of
\[
\left(\left. ^{\displaystyle L(E_0 ^R)}\. \. \right/
\. _{\displaystyle \sum_i L(E_i^R)} \right)^\ast \otimes_{Z\. \. \. Z} \, I\. \. \. \. C
\cong T^1 _Y(-R)\quad \mbox{(cf. Theorem \zitat{3 }{3 }(1 )). }
\]
Then, the $A$-linear map
\begin{eqnarray*}
^{\displaystyle I}\. \. /\. _{\displaystyle I^2 } &\longrightarrow& A\\
\underline{z}^a-\underline{z}^b & \mapsto & \varphi(a-b)\cdot x^{\pi(a)-R}
\end{eqnarray*}
provides the same element via the formula \zitat{2 }{1 }. }\\
\par
Again, this theorem follows from the paper \cite{T1 }, combined with the
commutative diagram
of \zitat{4 }{3 } in the present paper (cf.\ Remark \zitat{4 }{4 }). \\
\par
{\bf Remark:}
A simple, but nevertheless important check shows that the map
$(\underline{z}^a-\underline{z}^b) \mapsto
\varphi(a-b)\cdot x^{\pi(a)-R}$ does indeed map into $A$:\\
Assume $\pi(a)-R \notin \sigma^{\scriptscriptstyle\vee}$. Then, there exists an index $i$ such
that
$\langle a^i, \pi(a)-R \rangle <0 $. Denoting by ``supp $q$'' (for a $q\in I\. \. R^E$) the set of those $r\in E$ providing
a non-vanishing entry
$q_r$, we obtain
\[
\mbox{supp}\, (a-b) \subseteq \mbox{supp}\, a \cup \mbox{supp}\, b \subseteq
E^R_i\, ,
\]
i. e. $\varphi(a-b)=0 $. \\
\par
\neu{35 }
The $P$-module ${\cal R}\subseteq P^m$ is generated by relations of two different
types:
\begin{eqnarray*}
r(a, b;c) &:=& e^{a+c, \, b+c}- \underline{z}^c\, e^{a, b}\quad
(a, b, c\in I\. \. N^{w+1 };\, \pi(a)=\pi(b))\quad \mbox{ and}\\
s(a, b, c) &:=& e^{b, c} - e^{a, c} + e^{a, b}\quad
(a, b, c\in I\. \. N^{w+1 };\, \pi(a)=\pi(b)=\pi(c))\,. \\
&&\qquad\qquad(e^{\bullet, \bullet} \mbox{ denote the standard basis vectors of
} P^m. )
\vspace{1 ex}
\end{eqnarray*}
\par
{\bf Theorem:}
{\em
For a fixed element $R\in M$ let $\psi_i: L(E_i^R)_{\, I\. \. \. \. C}\rightarrow \, I\. \. \. \. C$ induce
some
element of
\[
\left( \frac{\displaystyle
\mbox{Ker}\, \left( \oplus_i L(E_i^R) \longrightarrow
L(E'^R)\right)}{\displaystyle
\mbox{Im}\, \left( \oplus_{\langle a^i, a^j\rangle<\sigma} L(E_i^R\cap E_j^R)
\rightarrow \oplus_i L(E_i^R)\right)} \right)^\ast
\otimes_{Z\. \. \. Z}\, I\. \. \. \. C \subseteq T^2 _Y(-R)
\quad \mbox{(cf. \zitat{3 }{3 }(2 )). }
\]
Then, the $P$-linear map
\begin{eqnarray*}
^{\displaystyle {\cal R}}\. \. /\. _{\displaystyle {\cal R}_0 } &\longrightarrow & A\\
r(a, b;c) & \mapsto &
\left\{ \begin{array}{ll}
\psi_i(a-b)\, x^{\pi(a+c)-R} & \mbox{for } \pi(a)\in K_i^R;\; \pi(a+c)\geq R\\
0 & \mbox{for }\pi(a)\geq R \mbox{ or } \pi(a+c)\in\bigcup_i K_i^R
\end{array}\right. \\
s(a, b, c) &\mapsto & 0
\end{eqnarray*}
is well defined, and via the formula \zitat{2 }{2 } it induces the same
element of
$T^2 _Y$. }
\vspace{2 ex}
\\
\par
For the proof we refer to \S \ref{s4 }. Nevertheless, we check the {\em
correctness of
the definition} of the $P$-linear map
$^{\displaystyle {\cal R}}\. /\. _{\displaystyle {\cal R}_0 } \rightarrow A$ instantly:
\begin{itemize}
\item[(i)]
If $\pi(a)$ is contained in two different sets $K_i^R$ and $K_j^R$, then the
two
fundamental generators $a^i$ and $a^j$ can be connected by a sequence
$a^{i_0 }, \dots, a^{i_p}$, such that
\begin{itemize}
\item[$\bullet$]
$a^{i_0 }=a^i, \, a^{i_p}=a^j, $
\item[$\bullet$]
$a^{i_{v-1 }}$ and $a^{i_v}$ are the edges of some 2 -face of $\sigma$
($v=1, \dots, p$),
and
\item[$\bullet$]
$\pi(a)\in K^R_{i_v}$ for $v=0, \dots, p$. \end{itemize}
Hence, $\mbox{supp}\, (a-b)\subseteq E^R_{i_{v-1 }}\cap E^R_{i_v}$
($v=1, \dots, p$),
and we obtain
\[
\psi_i(a-b)=\psi_{i_1 }(a-b)=\dots=\psi_{i_{p-1 }}(a-b)=\psi_j(a-b)\,. \]
\item[(ii)]
There are three types of $P$-linear relations between the generators $r(\dots)$
and
$s(\dots)$ of ${\cal R}$:
\begin{eqnarray*}
0 &=& \underline{z}^d\, r(a, b;c) -r(a, b;c+d) + r(a+c, b+c;d)\,, \\
0 &=& r(b, c;d) - r(a, c;d) + r(a, b;d) - s(a+d, b+d, c+d) + \underline{z}^d\,
s(a, b, c)\,, \\
0 &=& s(b, c, d) - s(a, c, d) + s(a, b, d) - s(a, b, c)\,. \end{eqnarray*}
Our map respects them all. \item[(iii)]
Finally, the typical element
$(\underline{z}^a-\underline{z}^b)e^{cd} - (\underline{z}^c-\underline{z}^d)e^{ab} \in {\cal R}_0 $
equals
\[
-r(c, d;a)+r(c, d;b)+r(a, b;c)-r(a, b;d) - s(a+c, b+c, a+d) - s(a+d, b+c, b+d)\,. \]
It will be sent to 0, too. \vspace{2 ex}
\end{itemize}
\par
\neu{36 }
The cup product $T^1 _Y\times T^1 _Y\rightarrow T^2 _Y$ respects the grading, i. e. it splits
into pieces
\[
T^1 _Y(-R)\times T^1 _Y(-S) \longrightarrow T^2 _Y(-R-S)\quad (R, S\in M)\,. \]
To describe these maps in our combinatorial language, we choose some
set-theoretical
section $\Phi:M\rightarrowZ\. \. \. Z^{w+1 }$ of the $Z\. \. \. Z$-linear map
\begin{eqnarray*}
\pi: Z\. \. \. Z^{w+1 } &\longrightarrow& M\\
a&\mapsto&\sum_v a_v\, r^v
\end{eqnarray*}
with the additional property $\Phi(\sigma^{\scriptscriptstyle\vee}\cap M)\subseteq I\. \. N^{w+1 }$. \\
\par
Let $q\in L(E)\subseteqZ\. \. \. Z^{w+1 }$ be an integral relation between the generators
of
the semigroup $\sigma^{\scriptscriptstyle\vee}\cap M$. We introduce the following notations:
\begin{itemize}
\item
$q^+, q^-\inI\. \. N^{w+1 }$ denote the positive and the negative part of $q$,
respectively. (In other words: $q=q^+-q^-$ and $\sum_v q^-_v\, q^+_v=0 $. )
\item
$\bar{q}:=\pi(q^+)=\sum_v q_v^+\, r^v = \pi(q^-)=\sum_v q_v^-\, r^v \in M$. \item
If $\varphi, \psi: L(E)\rightarrowZ\. \. \. Z$ are linear maps and $R, S\in M$, then we
define
\[
t_{\varphi, \psi, R, S}(q):=
\varphi(q)\cdot \psi \left( \Phi(\bar{q}-R)+\Phi(R)-q^-\right) +
\psi(q)\cdot \varphi\left( \Phi(\bar{q}-S)+\Phi(S)-q^+\right)\,. \vspace{2 ex}
\]
\end{itemize}
\par
{\bf Theorem:}
{\em
Assume that $Y$ is smooth in codimension 2. \\
Let $R, S\in M$, and let $\varphi, \psi: L(E)_{\, I\. \. \. \. C}\rightarrow\, I\. \. \. \. C$ be linear maps
vanishing on $\sum_i L(E_i^R)_{\, I\. \. \. \. C}$ and $\sum_i L(E_i^S)_{\, I\. \. \. \. C}$, respectively. In particular,
they define elements $\varphi\in T^1 _Y(-R), \, \psi\in T^1 _Y(-S)$ (which involves
a slight
abuse of notations). \\
Then, the cup product $\varphi\cup\psi\in T^2 _Y(-R-S)$ is given (via
\zitat{3 }{3 }(3 ))
by the linear maps $(\varphi\cup\psi)_i: L(E_i^{R+S})_{\, I\. \. \. \. C}\rightarrow\, I\. \. \. \. C$
defined as follows:
\begin{itemize}
\item[(i)]
If $q\in L(E_i^{R+S})$ (i. e. $\langle a^i, \mbox{supp}\, q\rangle < \langle
a^i, R+S\rangle$) is an integral relation,
then there exists a decomposition $q=\sum_k q^k$ such that
\begin{itemize}
\item
$q^k\in L(E_i^{R+S})$, and moreover
\item
$\langle a^i, \bar{q}^k\rangle < \langle a^i, R+S\rangle$. \end{itemize}
\item[(ii)]
$(\varphi\cup\psi)_i\left( q\in L(E_i^{R+S})\right):= \sum_k
t_{\varphi, \psi, R, S}(q^k)$. \vspace{2 ex}
\end{itemize}
}
\par
It is not even obvious that the map $q\mapsto \sum_k t(q^k)$
\begin{itemize}
\item
does not depend on the representation of $q$ as a particular sum of $q_k$'s
(which
would instantly imply linearity on $L(E_i^{R+S})$), and
\item
yields the same result on $L(E_i^{R+S}\cap E_j^{R+S})$ for $i, j$ corresponding
to edges
$a^i, a^j$ of some 2 -face of $\sigma$. \end{itemize}
The proof of these facts (cf.\ \zitat{5 }{4 }) and of the entire theorem is
contained
in \S \ref{s5 }. \\
\par
{\bf Remark 1 :}
Replacing all the terms $\Phi(\bullet)$ in the $t$'s of the previous formula
for
$(\varphi\cup\psi)_i(q)$ by arbitrary liftings from $M$ to $Z\. \. \. Z^{w+1 }$,
the result in $T^2 _Y(-R-S)$ will be unchanged as long as we obey the following
two
rules:
\begin{itemize}
\item[(i)]
Use always (for all $q$, $q^k$, and $i$)
the {\em same liftings} of $R$ and $S$ to $Z\. \. \. Z^{w+1 }$ (at the places of
$\Phi(R)$ and $\Phi(S)$, respectively). \item[(ii)]
Elements of $\sigma^{\scriptscriptstyle\vee}\cap M$ always have to be lifted to $I\. \. N^{w+1 }$. \vspace{2 ex}
\end{itemize}
{\bf Proof:}
Replacing $\Phi(R)$ by $\Phi(R)+d$ ($d\in L(E)$) at each occurrence changes all
maps $(\varphi\cup\psi)_i$ by the summand $\psi(d)\cdot\varphi(\bullet)$. However,
this additional linear map comes from $L(E)^\ast$, hence it is trivial on
$\mbox{Ker}\left(\oplus_iL(E_i^{R+S})\rightarrow L(E_0 ^{R+S})\subseteq
L(E)\right)$. \\
\par
Let us look at the terms $\Phi(\bar{q}-R)$ in $t(q)$ now:
Unless $\bar{q}\geq R$, the factor $\varphi(q)$ vanishes (cf. Remark
\zitat{3 }{4 }). On
the other hand, the expression $t(q)$ is never used for those relations $q$
satisfying
$\bar{q}\geq R+S$ (cf. conditions for the $q^k$'s).
Now, the condition $\bar{q}\in\bigcup_iE_i^R$
implies
that $\varphi\left( \Phi(\bar{q}-S)+\Phi(S)-q^+\right)=0 $. \\
\par
(ii)
We can choose $\Phi(\bar{q}-R):=q^+-\Phi(R)$ and
$\Phi(\bar{q}-S):=q^+-\Phi(S)$. Then,
the claim follows straightforwardly. \hfill$\Box$\\
\par
\sect{Proof of the $T^2 $-formula}\label{s4 }
\neu{41 }
We will use the sheaf $\Omega^1 _Y=\Omega^1 _{A|\, I\. \. \. \. C}$ of K\"ahler
differentials for computing the modules $T^i_Y$. The maps
\[
\alpha_i: \mbox{Ext}^i_A\left(
\;^{\displaystyle\Omega_Y^1 }\. \. \left/\. _{\displaystyle
\mbox{tors}\, (\Omega_Y^1 )}\right. ,
\, A \right)
\hookrightarrow
\mbox{Ext}^i_A\left( \Omega^1 _Y, \, A\right) \cong T^i_Y\quad
(i=1,2 )
\]
are injective. Moreover, they are isomorphisms for
\begin{itemize}
\item
$i=1 $, since $Y$ is normal, and for
\item
$i=2 $, if $Y$ is smooth in codimension 2. \vspace{2 ex}
\end{itemize}
\par
\neu{42 }
As in \cite{T1 }, we build a special $A$-free resolution (one step further now)
\[
{\cal E}\stackrel{d_E}{\longrightarrow}{\cal D}\stackrel{d_D}{\longrightarrow}
{\cal C}\stackrel{d_C}{\longrightarrow}{\cal B} \stackrel{d_B}{\longrightarrow}
\;^{\displaystyle\Omega_Y^1 }\. \. \left/\. _{\displaystyle
\mbox{tors}\, (\Omega_Y^1 )}\right. \rightarrow 0 \,. \vspace{2 ex}
\]
With $L^2 (E):=L(L(E))$, $L^3 (E):=L(L^2 (E))$, and
\[
\mbox{supp}^2 \xi:= \bigcup_{q\in supp\, \xi} \mbox{supp}\, q\quad (\xi\in
L^2 (E)), \quad
\mbox{supp}^3 \omega:= \bigcup_{\xi\in supp\, \omega}\mbox{supp}^2 \xi\quad
(\omega\in L^3 (E)),
\]
the $A$-modules involved in this resolution are defined as follows:
\[
\begin{array}{rcl}
{\cal B}&:=&\oplus_{r\in E} \, A\cdot B(r), \qquad
{\cal C}\, :=\, \oplus_{\. \. \. \. \. \begin{array}[b]{c}\scriptstyle q\in L(E) \vspace{-1 ex}\\
\scriptstyle\ell\geq supp\, q\end{array}}
\. \. \. A\cdot C(q;\ell), \\
{\cal D}&:=&\left(
\oplus_{\. \. \. \. \. \. \. \begin{array}{c}\scriptstyle q\in L(E)\vspace{-1 ex}\\
\scriptstyle\eta\geq\ell\geq supp\, q\end{array}}
\. \. \. \. A\cdot D(q;\ell, \eta) \right)
\oplus \left(
\oplus_{\. \. \. \. \begin{array}{c}\scriptstyle\xi\in L^2 (E)\vspace{-1 ex}\\
\scriptstyle\eta\geq supp^2 \xi\end{array}}
\. \. \. A\cdot D(\xi;\eta) \right), \;\mbox{ and}\\
{\cal E}&:=&
\begin{array}[t]{r} \left(
\oplus_{\. \. \. \. \. \. \. \. \begin{array}{c}\scriptstyle q\in L(E)\vspace{-1 ex}\\
\scriptstyle\mu\geq\eta\geq\ell\geq supp\, q\end{array}}
\. \. \. \. \. A\cdot E(q;\ell, \eta, \mu) \right)
\oplus \left(
\oplus_{\. \. \. \. \. \. \. \begin{array}{c}\scriptstyle\xi\in L^2 (E)\vspace{-1 ex}\\
\scriptstyle\mu\geq\eta\geq supp^2 \xi\end{array}}
\. \. \. A\cdot E(\xi;\eta, \mu) \right) \oplus \qquad \\
\oplus \left(
\oplus_{\. \. \. \. \begin{array}{c}\scriptstyle\omega\in L^3 (E)\vspace{-1 ex}\\
\scriptstyle\mu\geq supp^3 \omega\end{array}}
\. \. \. \. A\cdot E(\omega;\mu)\right)
\end{array}
\end{array}
\]
($B, C, D, $ and $E$ are just symbols). The differentials equal
\[
\begin{array}{cccl}
d_B: &B(r)&\mapsto &d\, x^r\vspace{1 ex}\\
d_C: &C(q;\ell)&\mapsto &\sum_{r\in E} q_r\, x^{\ell-r}\cdot B(r)\vspace{1 ex}\\
d_D: &D(q;\ell, \eta)&\mapsto &C(q;\eta) - x^{\eta-\ell}\cdot C(q;\ell)\\
d_D: &D(\xi;\eta)&\mapsto& \sum_{q\in L(E)}\xi_q\cdot C(q;\eta)\vspace{1 ex}\\
d_E: &E(q;\ell, \eta, \mu)&\mapsto& D(q;\eta, \mu)-D(q;\ell, \mu)+
x^{\mu-\eta}\cdot D(q;\ell, \eta) \\
d_E: &E(\xi;\eta, \mu)&\mapsto &D(\xi;\mu) - x^{\mu-\eta}\cdot D(\xi;\eta) -
\sum_{q\in L(E)} \xi_q\cdot D(q;\eta, \mu)\\
d_E: &E(\omega;\mu)&\mapsto &\sum_{\xi\in L^2 (E)} \omega_{\xi}\cdot
D(\xi;\mu)\, . \vspace{2 ex}
\end{array}
\]
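As a consistency check, consecutive differentials indeed compose to zero; for
instance, on a generator $D(q;\ell, \eta)$ one computes
\[
d_C\big(d_D\, D(q;\ell, \eta)\big)=
\sum_{r\in E} q_r\, x^{\eta-r}\cdot B(r)
- x^{\eta-\ell}\sum_{r\in E} q_r\, x^{\ell-r}\cdot B(r)=0 \,.
\]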
\par
Looking at these maps, we see that the complex is $M$-graded: The degree of
each of
the elements $B$, $C$, $D$, or $E$ can be obtained by taking the last of its
parameters
($r$, $\ell$, $\eta$, or $\mu$, respectively). \\
\par
{\bf Remark:} If one preferred a resolution with free $A$-modules of finite rank
(as it was
used in
\cite{T1 }), the following replacements would be necessary:
\begin{itemize}
\item[(i)]
Define successively $F\subseteq L(E)$, $G\subseteq L(F) \subseteq L^2 (E)$, and
$H\subseteq L(G)\subseteq L^2 (F) \subseteq L^3 (E)$ as the finite
sets of normalized, minimal relations between elements of $E$, $F$, or
$G$, respectively. Then, use them instead of $L^i(E)$ ($i=1,2,3 $). \item[(ii)]
Let $\ell$, $\eta$, and $\mu$ run through finite generating
(under $(\sigma^{\scriptscriptstyle\vee}\cap M)$-action)
systems of all possible elements meeting the desired inequalities. \end{itemize}
The disadvantages of this treatment are, on the one hand, a more complicated
description of the resolution and, on the other hand, difficulties in obtaining
the commutative diagram \zitat{4 }{3 }. \\
\par
\neu{43 }
Combining the two exact sequences
\[
^{\displaystyle {\cal R}}\. /\. _{\displaystyle I\, {\cal R}} \longrightarrow
A^m \longrightarrow
^{\displaystyle I}\. \. /\. _{\displaystyle I^2 }\rightarrow 0 \quad
\mbox{and}\quad
^{\displaystyle I}\. \. /\. _{\displaystyle I^2 }\longrightarrow
\Omega^1 _{\, I\. \. \. \. C^{w+1 }}\otimes A \longrightarrow
\Omega_Y^1 \rightarrow 0 \,,
\]
we get a complex (not exact at the place of $A^m$) involving $\Omega_Y^1 $. In the
following commutative diagram we compare this complex with the previous
resolution of
$^{\displaystyle\Omega_Y^1 }\. \. \left/\. _{\displaystyle
\mbox{tors}\, (\Omega_Y^1 )}\right. $:
\vspace{-5 ex}\\
\[
\dgARROWLENGTH=0.8 em
\begin{diagram}
\node[5 ]{^{\displaystyle I}\. \. /\. _{\displaystyle I^2 }}
\arrow{se, t}{d}\\
\node[2 ]{^{\displaystyle {\cal R}}\. /\. _{\displaystyle I\, {\cal R}}}
\arrow[2 ]{e}
\arrow{se, t}{p_D}
\node[2 ]{A^m}
\arrow{ne}
\arrow[2 ]{e}
\arrow[2 ]{s, l}{p_C}
\node[2 ]{\Omega_{\, I\. \. \. \. C^{w+1 }}\. \otimes \. A}
\arrow{e}
\arrow[2 ]{s, lr}{p_B}{\sim}
\node{\Omega_Y^1 }
\arrow[2 ]{s}
\arrow{e}
\node{0 }\\
\node[3 ]{\mbox{Im}\, d_D}
\arrow{se}\\
\node{{\cal E}}
\arrow{e, t}{d_E}
\node{{\cal D}}
\arrow[2 ]{e, t}{d_D}
\arrow{ne}
\node[2 ]{{\cal C}}
\arrow[2 ]{e, t}{d_C}
\node[2 ]{{\cal B}}
\arrow{e}
\node{^{\displaystyle\Omega_Y^1 }\. \. \. \left/\. \. _{\displaystyle
\mbox{tors}\, (\Omega_Y^1 )}\right. }
\arrow{e}
\node{0 }
\end{diagram}
\]
\par
Let us explain the three labeled vertical maps:
\begin{itemize}
\item[(B)]
$p_B: dz_r \mapsto B(r)$ is an isomorphism between two free $A$-modules of rank
$w+1 $. \item[(C)]
$p_C: e^{ab} \mapsto C(a-b;\pi(a))$. In particular, the image of this map is
spanned by
those $C(q;\ell)$ meeting $\ell\geq \bar{q}$ (which is stronger than just
$\ell\geq\mbox{supp}\, q$). \item[(D)]
Finally, $p_D$ arises as pull back of $p_C$ to $^{\displaystyle {\cal R}}\. /\. _{\displaystyle I\, {\cal R}}$. It can
be described by
$r(a, b;c)\mapsto D(a-b; \pi(a), \pi(a+c))$ and $s(a, b, c)\mapsto D(\xi;\pi(a))$
($\xi$ denotes
the relation $\xi=[(b-c)-(a-c)+(a-b)=0 ]$). \vspace{2 ex}
\end{itemize}
\par
{\bf Remark:}
Starting with the typical ${\cal R}_0 $-element mentioned in \zitat{3 }{5 }(iii), the
previous
description of the map $p_D$ yields 0 (even in ${\cal D}$). \\
\par
\neu{44 }
By \zitat{4 }{1 } we get the $A$-modules $T^i_Y$ by computing the cohomology
of the complex dual to those of \zitat{4 }{2 }. \\
\par
As in \cite{T1 }, denote by $G$ one of the capital letters $B$, $C$, $D$, or
$E$. Then, an element $\psi$ of the dual free module $(\bigoplus\limits_G
\, I\. \. \. \. C[\check{\sigma}\cap M]\cdot G)^\ast$ can be described by giving elements
$g(x)\in\, I\. \. \. \. C[\check{\sigma}\cap M]$ to be the images of the generators $G$
($g$ stands for $b$, $c$, $d$, or $e$, respectively). \\
\par
For $\psi$ to be homogeneous of degree $-R\in M$, $g(x)$ has to be
a monomial of degree
\[
\deg g(x)=-R+\deg G. \]
In particular, the corresponding complex coefficient $g\in \, I\. \. \. \. C$ (i. e. $g(x)=g\cdot x^{-R+\deg G}$) admits the property that
\[
g\neq 0 \quad\mbox{implies}\quad -R+\deg G\ge 0 \quad (\mbox{i. e. }\;
-R+\deg G\in\check{\sigma}). \vspace{2 ex}
\]
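Concretely, since $\deg C(q;\ell)=\ell$ and $\deg D(q;\ell, \eta)=\eta$, a
homogeneous $\psi$ of degree $-R$ is determined by complex scalars via
\[
C(q;\ell)\mapsto c(q;\ell)\cdot x^{\ell-R}, \qquad
D(q;\ell, \eta)\mapsto d(q;\ell, \eta)\cdot x^{\eta-R},
\]
with $c(q;\ell)=0 $ for $\ell-R\notin\check{\sigma}$ and
$d(q;\ell, \eta)=0 $ for $\eta-R\notin\check{\sigma}$; these are exactly the
vanishing conditions appearing in the definitions of $V$ and $W$ below.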
\par
{\bf Remark:}
Using these notations,
Theorem \zitat{3 }{3 }(1 ) was proved in \cite{T1 } by showing that
\begin{eqnarray*}
\left(\left. ^{\displaystyle L(E_0 ^R)}\. \. \right/
\. _{\displaystyle \sum_i L(E_i^R)} \right)^\ast \otimes_{Z\. \. \. Z} \, I\. \. \. \. C
&\longrightarrow&
^{\displaystyle \mbox{Ker}({\cal C}^\ast_{-R}\rightarrow {\cal D}^\ast_{-R})}\. \. \left/
\. _{\displaystyle \mbox{Im}({\cal B}^\ast_{-R}\rightarrow{\cal C}^\ast_{-R})}\right. \\
\varphi &\mapsto&
[\dots, \, c(q;\ell):=\varphi(q), \dots]
\end{eqnarray*}
is an isomorphism of vector spaces. \\
Moreover, looking at the diagram of
\zitat{4 }{3 }, $e^{ab}\in A^m$ maps to both $\underline{z}^a-\underline{z}^b\in ^{\displaystyle
I}\. \. /\. _{\displaystyle I^2 }$ and $C(a-b;\pi(a))\in {\cal C}$. In particular, we can verify Theorem
\zitat{3 }{4 }:
Each $\varphi:L(E)_{\, I\. \. \. \. C} \rightarrow\, I\. \. \. \. C$, on the one hand,
and its associated $A$-linear map
\begin{eqnarray*}
^{\displaystyle I}\. \. /\. _{\displaystyle I^2 } &\longrightarrow& A\\
\underline{z}^a-\underline{z}^b & \mapsto & \varphi(a-b)\cdot x^{\pi(a)-R},
\end{eqnarray*}
on the other hand, induce the same element of $T^1 _Y(-R)$. \\
\par
\neu{45 }
For computing $T_Y^2 (-R)$,
the interesting part of the dualized complex
$\zitat{4 }{2 }^\ast$ in degree $-R$ equals the complex
of $\, I\. \. \. \.
cdot
d(q;\eta, \mu), \\
e(\omega ;\mu) &=&
\sum\limits_{\xi\in G} \omega_{\xi}\cdot d(\xi;\mu). \end{array}
\vspace{1 ex}
\]
\par
Denote $V:= \mbox{Ker}\, d^{\ast}_E \subseteq {\cal D}_{-R}^{\ast}\, $ and
$\, W:= \mbox{Im}\, d_D^{\ast}\subseteq V$, i. e. \begin{eqnarray*}
V&=& \{ [\underline{d(q;\ell, \eta)};\, \underline{d(\xi;\eta)}]\, |\;
\begin{array}[t]{l}
q\in L(E), \;\eta\geq\ell\geq\mbox{supp}\, q\mbox{ in }M;\\
\xi\in L^2 (E), \;\eta\geq\mbox{supp}^2 \xi;
\vspace{0.5 ex}\\
d(q;\ell, \eta) = d(\xi;\eta) = 0 \mbox{ for } \eta -R \notin \check{\sigma}, \\
d(q;\ell, \mu) = d(q;\ell, \eta) + d(q;\eta, \mu) \; (\mu\geq\eta\geq\ell\geq
\mbox{supp}\, q), \\
d(\xi;\mu)= d(\xi;\eta) + \sum_q \xi_q \cdot d(q;\eta, \mu)\;
(\mu\geq\eta\geq \mbox{supp}^2 \xi), \\
\sum_{\xi\in G}\omega_{\xi} \, d(\xi;\mu) =0 \mbox{ for }\omega \in L^3 (E)
\mbox{ with } \mu \geq \mbox{supp}^3 \omega\, \},
\end{array}\\
W&=& \{ [\underline{d(q;\ell, \eta)};\, \underline{d(\xi;\eta)}]\, |\;
\mbox{there are $c(q;\ell)$'s with}
\begin{array}[t]{l}
c(q;\ell)=0 \mbox{ for } \ell-R\notin\check{\sigma}, \\
d(q;\ell, \eta) = c(q;\eta)-c(q;\ell), \\
d(\xi;\eta)= \sum_q\xi_q\cdot c(q;\eta)\, \}. \end{array}
\end{eqnarray*}
By construction, we obtain
\[
V\. \left/\. _{\displaystyle W}\right. =
\mbox{Ext}^i_A\left(
\;^{\displaystyle\Omega_Y^1 }\. \. \left/\. _{\displaystyle
\mbox{tors}\, (\Omega_Y^1 )}\right. , \, A \right)(-R)
\subseteq T^2 _Y(-R)
\]
(which is an isomorphism, if $Y$ is smooth in codimension 2 ). \\
\par
\neu{46 }
Let us define the much easier vector spaces
\begin{eqnarray*}
V_1 &:=& \{[\underline{x_i(q)}_{(q\in L(E_i^R))}]\, |\;
\begin{array}[t]{l}
x_i(q)=x_j(q) \mbox{ for }
\begin{array}[t]{l}
\bullet\,
\langle a^i, a^j \rangle < \sigma \mbox{ is a 2-face and}\\
\bullet\,
q\in L(E_i^R\cap E_j^R)\,,
\end{array}\\
\xi\in L^2 (E_i^R) \mbox{ implies } \sum_q\xi_q\cdot
x_i(q)=0
\, \}\;\mbox{ and}
\end{array}
\vspace{1 ex}
\\
W_1 &:=& \{[\underline{x(q)}_{(q\in \cup_i L(E_i^R))}]
\, |\;
\begin{array}[t]{l}
\xi\in L(\bigcup_i L(E_i^R)) \mbox{ implies } \sum_q\xi_q\cdot
x(q)=0 \, \}. \vspace{2 ex}
\end{array}
\end{eqnarray*}
\par
{\bf Lemma:}
{\em
The linear map $V_1 \rightarrow V$ defined by
\begin{eqnarray*}
d(q;\ell, \eta) &:=& \left\{
\begin{array}{lll}
x_i(q) &\mbox{ for }& \ell\in K_i^R, \;\; \eta \geq R\\
0 &\mbox{ for }& \ell \geq R \;\mbox{ or } \;\eta \in
\bigcup_i K_i^R\, ;
\end{array} \right. \\
d(\xi;\eta) &:=&
0 \,
\end{eqnarray*}
induces an injective map
\[
V_1 \. \left/\. _{\displaystyle W_1 }\right. \hookrightarrow
V\. \left/\. _{\displaystyle W}\right. \,. \]
If $Y$ is smooth in codimension 2, it will be an isomorphism. }
\\
\par
{\bf Proof:}
1 ) The map $V_1 \rightarrow V$ is {\em well defined}:
On the one hand, an argument as used in \zitat{3 }{5 }(i) shows that $\ell\in
K_i^R\cap K_j^R$ would imply $x_i(q)=x_j(q)$. On the other hand,
the image of
$[x_i(q)]_{q\in L(E_i^R)}$ meets all conditions in the definition of $V$. \vspace{1 ex}
\\
2 ) $W_1 $ maps to $W$ (take $c(q;\ell):=x(q)$ for $\ell\geq R$ and
$c(q;\ell):=0 $ otherwise). \vspace{1 ex}
\\
3 ) The map between the two factor spaces is {\em injective}: Assume for
$[x_i(q)]_{q\in L(E_i^R)}$ that there exist elements $c(q;\ell)$, such that
\begin{eqnarray*}
c(q;\ell) &=& 0 \; \mbox{ for } \ell \in \bigcup_i K^R_i\, , \\
x_i(q) &=& c(q;\eta) - c(q;\ell) \; \mbox{ for }
\eta \geq \ell, \, \ell\in K_i^R, \, \eta\geq R\,, \\
0 &=&
c(q;\eta) - c(q;\ell)\; \mbox{ for } \eta\geq \ell \mbox{ and }
[\ell\geq R\mbox{ or } \eta\in \bigcup_iK_i^R]\, , \mbox{ and}\\
0 &=&
\sum_q \xi_q \cdot c(q;\eta) \; \mbox{ for } \eta \geq
\mbox{supp}^2 \xi\, . \end{eqnarray*}
In particular, the $x_i(q)$ do not depend on $i$, and these elements
meet the property
\[
\sum_q \xi_q \cdot x_{\bullet}(q) = 0 \; \mbox{ for } \xi\in L(\bigcup_i
L(E_i^R)). \]
4 ) If $Y$ is smooth in codimension 2, the map is {\em surjective}:\\
Given an element $[d(q;\ell, \eta), \, d(\xi;\eta)]\in V$, there exist
complex numbers $c(q;\eta)$ such that:
\begin{itemize}
\item[(i)]
$d(\xi;\eta) = \sum_q\xi_q\cdot c(q;\eta)\, $ ,
\item[(ii)]
$c(q;\eta)=0 \mbox{ for } \eta\notin R+\sigma^{\scriptscriptstyle\vee}\,
(\mbox{i. e. }\eta\in \bigcup_iK_i^R)\, $. \end{itemize}
(Do this separately for each $\eta$ and distinguish between the cases
$\eta\in R +\sigma^{\scriptscriptstyle\vee}$ and $\eta\notin R+\sigma^{\scriptscriptstyle\vee}$. )\\
In particular, $[c(q;\eta) - c(q;\ell), \, d(\xi;\eta)]\in W$. Hence, we
have seen that we may assume $d(\xi;\eta)=0 $. \\
\par
Let us choose some sufficiently high degree $\ell^\ast\geq E$. Then,
\[
x_i(q):= d(q;\ell, \eta) - d(q;\ell^\ast\!, \eta)
\]
(with $\ell\in K_i^R$, $\ell\geq \mbox{supp}\, q$
(cf. \ Lemma \zitat{3 }{2 }(2 )), and $\eta\geq\ell, \ell^\ast\!, R$)
defines some preimage:
\begin{itemize}
\item[(i)]
It is independent of the choice of $\eta$: Using a different $\eta'$ generates
the difference $d(q;\eta, \eta')-d(q;\eta, \eta')=0 $. \item[(ii)]
It is independent of $\ell\in K_i^R$: Choosing another $\ell'\in K_i^R$
with $\ell'\geq\ell$ would add the summand $d(q;\ell, \ell')$, which is 0 ;
for the general case use Lemma \zitat{3 }{2 }(2 ). \item[(iii)]
If $\langle a^i, a^j\rangle < \sigma$ is a 2-face with $\mbox{supp}\, q
\subseteq L(E^R_i)\cap L(E_j^R)$, then by Lemma \zitat{3 }{2 }(2 ) we can choose
an
$\ell\in K_i^R\cap K_j^R$ achieving $x_i(q)=x_j(q)$. \item[(iv)]
For $\xi\in L^2 (E_i^R)$ we have
\[
\sum_q \xi_q\cdot d(q;\ell, \eta) = \sum_q \xi_q\cdot d(q;\ell^\ast\!, \eta) =
0 \,,
\]
and this gives the corresponding relation for the $x_i(q)$'s. \item[(v)]
Finally, if we apply to
$[\underline{x_i(q)}]\in V_1 $
the linear map $V_1 \rightarrow V$, the result differs from
$[d(q;\ell, \eta),0 ]\in V$ by
the $W$-element built from
\[
c(q;\ell) := \left\{ \begin{array}{ll}
d(q;\ell, \eta) - d(q;\ell^\ast\!, \eta) & \mbox{ if } \ell\geq R \\
0 & \mbox{ otherwise }. \end{array} \right. \vspace{-2 ex}
\]
\end{itemize}
\hfill$\Box$\\
\par
\neu{47 }
Now, it is easy to complete the proofs of Theorem \zitat{3 }{3 } (parts 2 and 3 )
and
Theorem \zitat{3 }{5 }:\\
\par
First, for a tuple $[\underline{x_i(q)}]_{q\in L(E_i^R)}$, the condition
\[
\xi\in L^2 (E_i^R) \mbox{ implies } \sum_q\xi_q\cdot x_i(q)=0
\]
is equivalent to the fact that the components $x_i(q)$ are induced by elements
$x_i\in L(E_i^R)_{\, I\. \. \. \. C}^\ast$. \\
The other condition for elements of $V_1 $ just says that for 2-faces
$\langle a^i, a^j\rangle<\sigma$ we have $x_i=x_j$ on
$L(E_i^R\cap E_j^R)_{\, I\. \. \. \. C}=L(E_i^R)_{\, I\. \. \. \. C}\cap L(E_j^R)_{\, I\. \. \. \. C}$. In particular, we
obtain
\[
V_1 = \mbox{Ker}\left( \oplus_i L(E_i^R)_{\, I\. \. \. \. C}^\ast \rightarrow
\oplus_{\langle a^i, a^j\rangle <\sigma} L(E_i^R\cap E_j^R)_{\, I\. \. \. \. C}^\ast \right)\,. \]
In the same way we get
\[
W_1 = \left( \sum_i L(E^R_i)_{\, I\. \. \. \. C}\right)^\ast\,,
\]
and our $T^2 $-formula is proven. \\
\par
Finally, if $\psi_i:L(E_i^R)_{\, I\. \. \. \. C}\rightarrow \, I\. \. \. \. C$ are linear maps defining an
element of
$V_1 $, they induce the following $A$-linear map on ${\cal D}$ (even on
$\mbox{Im}\, d_D$):
\begin{eqnarray*}
D(q;\ell, \eta) &\mapsto& \left\{
\begin{array}{lll}
\psi_i(q)\cdot x^{\eta-R} &\mbox{ for }& \ell\in K_i^R, \;\; \eta \geq R\\
0 &\mbox{ for }& \ell \geq R \;\mbox{ or } \;\eta \in
\bigcup_i K_i^R
\end{array} \right. \\
D(\xi;\eta) &\mapsto& 0 \,. \end{eqnarray*}
Now, looking at the diagram of \zitat{4 }{3 }, this translates exactly into the
claim of
Theorem \zitat{3 }{5 }. \\
\par
\sect{Proof of the cup product formula}\label{s5 }
\neu{51 }
Fix an $R\in M$, and let $\varphi\in L(E)^\ast_{\, I\. \. \. \. C}$ induce some element
(also denoted by $\varphi$)
of $T^1 _Y(-R)$. Using the notations of \zitat{2 }{3 }, \zitat{3 }{4 },
and \zitat{3 }{6 } we can take
\[
\tilde{\varphi}(f_{\alpha\beta}):=
\varphi(\alpha-\beta)\cdot \underline{z}^{\Phi(\pi(\alpha)-R)}
\]
for the auxiliary $P$-elements needed to compute the $\lambda(\varphi)$'s
(cf. Theorem \zitat{3 }{4 }). \\
\par
Now, we have to distinguish between the two types of relations
generating the $P$-module ${\cal R}\subseteq P^m$:
\begin{itemize}
\item[(r)]
Regarding the relation $r(a, b;c)$ we obtain
\begin{eqnarray*}
\sum_{(\alpha, \beta)\in m} r(a, b;c)_{\alpha\beta}\cdot
\tilde{\varphi}(f_{\alpha\beta}) &=&
\tilde{\varphi}(f_{a+c, b+c}) - \underline{z}^c\, \tilde{\varphi}(f_{ab})
\\
&=&
\varphi(a-b)\cdot \left(
\underline{z}^{\Phi(\pi(a+c)-R)} - \underline{z}^{c+\Phi(\pi(a)-R)} \right)
\\
&=&
\varphi(a-b)\cdot
f_{\Phi(\pi(a+c)-R), \, c+\Phi(\pi(a)-R)}\,.